AES E-Library

Improving Domain Generalization Via Event-Based Acoustic Scene Classification

Acoustic Scene Classification (ASC) has typically been addressed by feeding raw audio features to deep neural networks. However, this audio-based approach has consistently been shown to generalize poorly across different recording devices. Device-specific transfer functions and nonlinear dynamic range compression strongly affect spectro-temporal features, causing a deviation from the learned data distribution known as domain shift. In this paper, we present an alternative ASC paradigm that replaces classic end-to-end audio-based training with an intermediate event-based representation of the acoustic scenes, gathered using large-scale pretrained models. Performance evaluation on the TAU Urban Acoustic Scenes 2020 Mobile Development dataset shows that the proposed event-based approach is up to 160% more robust than corresponding audio-based methods in the face of mismatched recording devices.
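The core idea can be illustrated with a minimal sketch. The following is not the authors' implementation; it only mimics the paradigm described in the abstract: each clip is represented by the event probabilities a large-scale pretrained audio tagger would emit (here simulated with random vectors), and a simple classifier matches that event profile to per-scene signatures. All names, the event vocabulary size, and the scene labels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EVENTS = 8  # stand-in for a pretrained tagger's event vocabulary size
SCENES = ["park", "metro", "street"]  # hypothetical scene labels

# Toy per-scene "event signatures" (each row sums to 1), standing in for the
# average tagger output over training clips from each scene.
centroids = rng.dirichlet(np.ones(N_EVENTS), size=len(SCENES))

def classify(event_probs: np.ndarray) -> str:
    """Assign the scene whose event signature is closest in L2 distance.

    `event_probs` plays the role of the intermediate event-based
    representation: device transfer functions perturb the raw waveform,
    but the semantic event profile is comparatively stable across devices.
    """
    dists = np.linalg.norm(centroids - event_probs, axis=1)
    return SCENES[int(np.argmin(dists))]

# A clip whose event profile matches the "park" signature, lightly perturbed
# to mimic the effect of a mismatched recording device.
clip = centroids[0] + rng.normal(scale=0.01, size=N_EVENTS)
print(classify(clip))
```

The point of the sketch is the decoupling: the scene classifier never sees device-dependent spectro-temporal features, only event-level semantics, which is what the abstract credits for the improved robustness to domain shift.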

 

Permalink: https://aes2.org/publications/elibrary-page/?id=21933

