AES E-Library

Automatic Audio Equalization with Semantic Embeddings

This paper presents a data-driven approach to automatic blind equalization of audio: a deep neural network predicts log-mel spectral features of the target response, from which an inverse filter is derived. A pre-trained model supplies semantic embeddings as a backbone, and only a lightweight head is trained, improving training efficiency and generalization. Trained on both music and speech, the model is robust to noise and reverberation. An objective evaluation confirms its effectiveness, and a subjective test shows performance comparable to an oracle that uses the true log-mel spectral features, demonstrating the method's potential for real-world applications.
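As a rough illustration of the inverse-filter step the abstract describes (not the authors' exact method), one can compute per-band correction gains from the difference between the predicted target log-mel features and the observed ones. The sketch below assumes log-mel values are natural-log band power energies; the function and variable names are hypothetical.

```python
import math

def eq_gains_db(observed_logmel, target_logmel):
    """Per-band EQ correction gains in dB: boost bands where the observed
    spectrum falls below the predicted target, cut where it exceeds it.
    Assumes log-mel features are natural-log band power energies."""
    ln_to_db = 10.0 / math.log(10.0)  # convert a difference of ln(power) to dB
    return [ln_to_db * (t - o) for o, t in zip(observed_logmel, target_logmel)]

# Hypothetical 4-band example: the third band's observed power is a factor
# of 2 below the predicted target (ln-power deficit of ln 2).
observed = [1.0, 1.0, 1.0 - math.log(2.0), 1.0]
target = [1.0, 1.0, 1.0, 1.0]
gains = eq_gains_db(observed, target)
# gains[2] is 10*log10(2) ~ 3.01 dB of boost; the other bands need 0 dB.
```

In a full system these band gains would then be interpolated across frequency and realized as a filter; this fragment only shows the gain derivation.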

 

Permalink: https://aes2.org/publications/elibrary-page/?id=22996

