AES E-Library


Audio Clip Classification Using Social Tags and the Effect of Tag Expansion

Methods for automatic sound and music classification are of great value for organising the large amounts of unstructured, user-contributed audio content uploaded to online sharing platforms. Currently, most of these methods are based on the audio signal, leaving the exploitation of users’ annotations and other contextual data largely unexplored. In this paper, we describe a method for the automatic classification of audio clips that is based solely on user-supplied tags. As a novelty, the method includes a tag expansion step that increases classification accuracy when audio clips are scarcely tagged. Our results suggest that very high accuracies can be achieved in tag-based audio classification (even for sparsely or poorly annotated clips), and that the proposed tag expansion step can, in some cases, significantly increase classification performance. We are interested in using the described classification method as a first step towards tailoring assistive tagging systems to the particularities of different audio categories, and as a way to improve the overall quality of online user annotations.
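The abstract does not detail the paper's actual algorithm, but the two ideas it names (tag-based classification and tag expansion for scarcely tagged clips) can be sketched in miniature. The following is a hypothetical illustration only: the toy tags, category names, co-occurrence expansion, and overlap-based classifier are assumptions made for the example, not the method from the paper.

```python
from collections import Counter, defaultdict

# Hypothetical toy data; the paper's dataset and categories are not given here.
# Each training item is (set of user tags, category label).
train = [
    ({"dog", "bark", "animal"}, "animals"),
    ({"cat", "meow", "animal"}, "animals"),
    ({"guitar", "chord", "music"}, "music"),
    ({"piano", "melody", "music"}, "music"),
]

# Tag co-occurrence counts, used to expand sparse tag sets.
cooc = defaultdict(Counter)
for tags, _ in train:
    for t in tags:
        for u in tags:
            if t != u:
                cooc[t][u] += 1

def expand(tags, k=2):
    """Add the k tags that most often co-occur with the given tags."""
    scores = Counter()
    for t in tags:
        scores.update(cooc[t])
    for t in tags:
        scores.pop(t, None)  # never re-add tags the clip already has
    return set(tags) | {t for t, _ in scores.most_common(k)}

# Per-category tag profiles (bag-of-tags counts).
profiles = defaultdict(Counter)
for tags, label in train:
    profiles[label].update(tags)

def classify(tags):
    """Score each category by tag overlap with its profile; pick the best."""
    return max(profiles, key=lambda c: sum(profiles[c][t] for t in tags))

# A scarcely tagged clip: "bark" alone expands toward related tags,
# which strengthens the (correct) category assignment.
sparse = {"bark"}
print(classify(expand(sparse)))  # → animals
```

In this toy version, expansion pulls in `dog` and `animal` for the single tag `bark`, which is the general intuition behind expanding sparse annotations before classification; the paper's actual expansion and classifier may differ substantially.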

 

Permalink: https://aes2.org/publications/elibrary-page/?id=17091





E-Library location: 16938