AES E-Library

Compressing Neural Network Models of Audio Distortion Effects Using Knowledge Distillation Techniques

Neural networks have proven to be effective for modeling analog audio effects using a black-box approach. However, few can guarantee lightweight solutions suitable for real-time environments where the models must run concurrently on consumer-grade equipment. This paper explores knowledge distillation techniques for compressing recurrent neural network models of audio distortion effects, aiming to produce computationally efficient, highly accurate solutions with a compact model size. In particular, we consider an audio-to-audio LSTM architecture for regression tasks in which small networks are trained to mimic the internal representations of larger networks, a technique known as feature-based knowledge distillation. The evaluation was conducted on three different audio distortion effect datasets, with experiments on both parametric and non-parametric data. The results show that distilled models are more accurate than non-distilled models of the same parameter count, especially for models that exhibit higher error rates. Furthermore, we observe that smaller complexity gaps between student and teacher models yield greater improvements in non-parametric cases.
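The abstract describes feature-based knowledge distillation: in addition to matching the target audio, the student is penalized for deviating from the teacher's internal (hidden-state) representations. The paper's exact loss formulation and weighting are not given in the abstract, so the following is only an illustrative sketch; the function name, the linear projection used to bridge the student/teacher hidden sizes, and the `alpha`/`beta` weights are assumptions, not the authors' method.

```python
import numpy as np

def feature_distillation_loss(student_out, target,
                              student_hidden, teacher_hidden,
                              proj, alpha=0.5, beta=0.5):
    """Illustrative feature-based KD loss for audio-to-audio regression.

    student_out    : student's output audio, shape (T,)
    target         : ground-truth audio, shape (T,)
    student_hidden : student LSTM hidden states, shape (T, Hs)
    teacher_hidden : teacher LSTM hidden states, shape (T, Ht)
    proj           : learned linear map (Hs, Ht) aligning hidden sizes
    """
    mse = lambda a, b: np.mean((a - b) ** 2)
    # Standard regression term: match the target waveform.
    output_loss = mse(student_out, target)
    # Feature-matching term: project the student's smaller hidden
    # states into the teacher's space and mimic its representations.
    feature_loss = mse(student_hidden @ proj, teacher_hidden)
    return alpha * output_loss + beta * feature_loss
```

A perfect student (output equal to the target, projected hidden states equal to the teacher's) drives both terms to zero; in training, `proj` would be optimized jointly with the student.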

Permalink: https://aes2.org/publications/elibrary-page/?id=23001



E-Library location: 16938