Neural networks have proven effective for modeling analog audio effects using a black-box approach. However, few guarantee lightweight solutions suitable for real-time environments, where models must run concurrently on consumer-grade equipment. This paper explores knowledge distillation techniques for compressing recurrent neural network models of audio distortion effects, aiming to produce computationally efficient solutions that achieve high accuracy while maintaining a compact model size. In particular, we consider an audio-to-audio LSTM architecture for regression tasks in which small networks are trained to mimic the internal representations of larger networks, known as feature-based knowledge distillation. The evaluation was conducted on three audio distortion effect datasets, with experiments on both parametric and non-parametric data. The results show that distilled models are more accurate than non-distilled models of equal parameter count, especially for models that exhibit higher error rates. Furthermore, we observe that smaller complexity gaps between student and teacher models yield greater improvements in non-parametric cases.
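The paper's exact training objective is not reproduced on this page. As a rough illustration of the feature-based knowledge distillation the abstract describes, a student's loss typically combines the audio regression error with a term matching the student's hidden states to the teacher's. The function name, the `alpha` weighting, and the assumption that the hidden states are already projected to a common size are all illustrative, not the authors' code:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two arrays."""
    return float(np.mean((a - b) ** 2))

def feature_distillation_loss(student_out, target, student_hidden, teacher_hidden, alpha=0.5):
    """Combined loss for feature-based distillation: the audio-to-audio
    regression error plus a feature-matching term that pushes the
    student's hidden states toward the teacher's."""
    task_loss = mse(student_out, target)                 # match the target audio
    feature_loss = mse(student_hidden, teacher_hidden)   # mimic internal representations
    return alpha * task_loss + (1.0 - alpha) * feature_loss

# Toy shapes: a batch of 2 clips, 8 samples each; hidden size 4
# (assumed already projected to a shared dimension for both networks).
rng = np.random.default_rng(0)
y_s, y = rng.normal(size=(2, 8)), rng.normal(size=(2, 8))
h_s, h_t = rng.normal(size=(2, 4)), rng.normal(size=(2, 4))
loss = feature_distillation_loss(y_s, y, h_s, h_t, alpha=0.7)
```

In a real training loop the hidden states would come from the LSTM layers of the student and (frozen) teacher, and the loss would be backpropagated through the student only.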
Author(s): Simionato, Riccardo; Tidemann, Aleksander
Affiliation:
University of Oslo
Publication Date:
2025-09-02
Session subject:
Artificial Intelligence and Machine Learning for Audio
Permalink: https://aes2.org/publications/elibrary-page/?id=23001
Simionato, Riccardo; Tidemann, Aleksander; 2025; Compressing Neural Network Models of Audio Distortion Effects Using Knowledge Distillation Techniques [PDF]; University of Oslo; Paper 12; Available from: https://aes2.org/publications/elibrary-page/?id=23001
@article{simionato2025compressing,
  author  = {Simionato, Riccardo and Tidemann, Aleksander},
  journal = {Journal of the Audio Engineering Society},
  title   = {Compressing Neural Network Models of Audio Distortion Effects Using Knowledge Distillation Techniques},
  year    = {2025},
  number  = {12},
  month   = {September},
}