Knowledge Distillation for Speech Denoising by Latent Representation Alignment with Cosine Distance
Speech denoising is a prominent and widely used task, appearing in many common use cases. Although very powerful machine learning methods have been published, most are too complex to deploy in everyday and/or low-resource computational environments such as hand-held devices, smart glasses, hearing aids, and automotive platforms. Knowledge distillation (KD) is a prominent way to alleviate this complexity mismatch by transferring the learned knowledge from a pre-trained complex model, the teacher, to a less complex one, the student. KD is implemented using minimization criteria (e.g., loss functions) between the learned information of the teacher and the corresponding information of the student. Existing KD methods for speech denoising hamper the distillation by bounding the student's learning to the distribution learned by the teacher. Our work focuses on a method that tries to alleviate this issue by exploiting properties of the cosine similarity used as the KD loss function. We use a publicly available dataset and a typical speech denoising architecture (e.g., a UNet) tuned for low-resource environments, and we conduct repeated experiments with different architectural variations between the teacher and the student, reporting the mean and standard deviation of metrics for our method and a state-of-the-art method used as a baseline. Our results show that our method makes smaller speech denoising models, deployable on small devices/embedded systems, perform better than when trained conventionally and when using other KD methods.
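The abstract names cosine similarity as the KD loss between teacher and student representations but gives no formula. The following is a minimal PyTorch sketch of such a loss, shown only as an illustration: the function name, the tensor shapes, and the detached teacher are assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def cosine_distance_kd_loss(student_latent: torch.Tensor,
                            teacher_latent: torch.Tensor) -> torch.Tensor:
    # Flatten everything but the batch axis so each example yields one
    # latent vector (this shape handling is an assumption; the paper may
    # align per-frame or per-channel representations instead).
    s = student_latent.flatten(start_dim=1)
    t = teacher_latent.flatten(start_dim=1)
    # Cosine similarity is scale-invariant: only the direction of the
    # latent vectors is matched, not their magnitude, which avoids
    # binding the student to the exact values the teacher produces.
    cos_sim = F.cosine_similarity(s, t.detach(), dim=1)
    # Map similarity in [-1, 1] to a distance in [0, 2] and average.
    return (1.0 - cos_sim).mean()

In a typical distillation setup this term would be added to the ordinary denoising loss, e.g. total_loss = denoising_loss + lam * cosine_distance_kd_loss(s_latent, t_latent), where lam is a hypothetical weighting hyperparameter.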
Author(s): Luong, Diep; Heikkinen, Mikko; Drossos, Konstantinos; Virtanen, Tuomas
Affiliation: Tampere University; Tampere University; Nokia Technologies; Nokia Technologies
AES Convention: 158
Paper Number: 318
Publication Date: 2025-05-12
Permalink: https://aes2.org/publications/elibrary-page/?id=22869
Luong, Diep; Heikkinen, Mikko; Drossos, Konstantinos; Virtanen, Tuomas; 2025; Knowledge Distillation for Speech Denoising by Latent Representation Alignment with Cosine Distance [PDF]; Tampere University; Tampere University; Nokia Technologies; Nokia Technologies; Paper 318; Available from: https://aes2.org/publications/elibrary-page/?id=22869
Luong, Diep; Heikkinen, Mikko; Drossos, Konstantinos; Virtanen, Tuomas; Knowledge Distillation for Speech Denoising by Latent Representation Alignment with Cosine Distance [PDF]; Tampere University; Tampere University; Nokia Technologies; Nokia Technologies; Paper 318; 2025; Available: https://aes2.org/publications/elibrary-page/?id=22869
@article{luong2025knowledge,
author={Luong, Diep and Heikkinen, Mikko and Drossos, Konstantinos and Virtanen, Tuomas},
journal={Journal of the Audio Engineering Society},
title={Knowledge Distillation for Speech Denoising by Latent Representation Alignment with Cosine Distance},
year={2025},
number={318},
month={may},}