AES E-Library

Acoustic Scene Classification Using Pixel-Based Attention

In this paper, we propose a pixel-based attention (PBA) module for acoustic scene classification (ASC). By compressing the input spectrogram along the spatial dimension, PBA obtains global information about the spectrogram. It then combines this global information with two convolutional layers to assign an attention weight to each pixel of each channel. The attention-weighted spectrogram is multiplied by a gamma coefficient and superimposed on the original spectrogram, yielding more effective spectrogram features for training the network. Furthermore, this paper implements a convolutional neural network (CNN) based on PBA (PB-CNN) and compares its classification performance on Task 1 of the Detection and Classification of Acoustic Scenes and Events (DCASE) 2016 Challenge with a CNN based on time attention (TB-CNN), a CNN based on frequency attention (FB-CNN), and a pure CNN. The experimental results show that the proposed PB-CNN achieves the highest accuracy of the four CNNs at 89.2%, which is 1.9% higher than TB-CNN (87.3%), 2.6% higher than FB-CNN (86.6%), and 3.0% higher than the pure CNN (86.2%). Compared with the DCASE 2016 baseline system, PB-CNN improves accuracy by 12%, and its 89.2% accuracy was the highest among all submitted single models.
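
The full paper is not reproduced on this page, so the following is only a minimal sketch of how a pixel-based attention block matching the abstract's description could be implemented, written here in PyTorch. The pooling operation, kernel sizes, reduction ratio, and the way the pooled global vector is fused with the local features are assumptions for illustration, not the paper's exact design.

import torch
import torch.nn as nn

class PixelBasedAttention(nn.Module):
    """Sketch of a pixel-based attention (PBA) block for spectrogram feature maps.

    Global information is obtained by compressing the feature map along the
    spatial dimension, broadcast back over the map, and passed through two
    convolutional layers to produce a per-pixel, per-channel attention weight.
    The attention-weighted map is scaled by a learnable gamma coefficient and
    superimposed on the original map.
    """

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Spatial compression: one value per channel (assumed: global average pooling).
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        # Two convolutional layers producing the attention weights
        # (1x1 kernels and the reduction ratio are assumptions).
        self.conv1 = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(channels // reduction, channels, kernel_size=1)
        self.sigmoid = nn.Sigmoid()
        # Learnable gamma coefficient for the attended features, initialised to zero.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frequency, time) feature map of a spectrogram.
        g = self.global_pool(x)                # (B, C, 1, 1) global information
        combined = x + g                       # broadcast global info to every pixel (assumed fusion)
        attn = self.sigmoid(self.conv2(self.relu(self.conv1(combined))))  # per-pixel weights
        # Gamma-scaled attended map superimposed on the original map.
        return self.gamma * (x * attn) + x

# Example: apply the block to a 64-channel feature map.
pba = PixelBasedAttention(channels=64)
features = torch.randn(8, 64, 40, 500)         # batch of log-mel-like feature maps
out = pba(features)                            # same shape as the input
print(out.shape)                               # torch.Size([8, 64, 40, 500])

Initialising the gamma coefficient at zero makes the block behave as an identity mapping at the start of training, so the network can gradually learn how much of the attended map to mix back into the original spectrogram features.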

 

Author(s):
Affiliation: (See document for exact affiliation information.)
Publication Date:
Permalink: https://aes2.org/publications/elibrary-page/?id=20998



Type:
E-Library location: 16938