Efficient Few-Shot Keyword Spotting for Edge Devices Using Self-Supervised Learning
Keyword spotting (KWS) is a foundational technology for enabling voice-driven user interfaces across a wide range of applications, including smart assistants, IoT devices, and wearables. Traditional KWS systems require large amounts of labeled data and are often limited to a fixed set of keywords, making them less adaptable to dynamic user needs and new commands. Few-shot keyword spotting (FS-KWS) systems address these limitations by enabling the detection and registration of new keywords from only a handful of examples, thus supporting real-time integration of new commands without extensive retraining. Despite these advances, deploying accurate FS-KWS models on resource-constrained devices remains a significant challenge due to strict limitations on computational power, memory, and energy consumption. In this work, we adapt a suite of lightweight neural network architectures, originally designed for KWS, to FS-KWS in resource-limited environments. Our models are trained using a knowledge distillation framework that leverages self-supervised learning (SSL) models to generate compact and highly discriminative speech embeddings. This approach enables the transformation of speech segments into lower-dimensional representations, facilitating efficient and robust keyword detection even with limited data. Experiments on benchmark KWS datasets show that, with our training approach, the lightweight model architectures perform on par with larger, computationally more demanding architectures for FS-KWS applications. Our findings underscore the potential of combining knowledge distillation and SSL-based embeddings to advance FS-KWS, paving the way for practical, scalable, and adaptive voice interfaces in next-generation smart devices.
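The abstract's pipeline — distill SSL-style embeddings into a lightweight student, then register new keywords from a few examples and detect them by embedding similarity — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the random-projection "teacher" merely stands in for a frozen SSL model (e.g. wav2vec 2.0), and all dimensions, function names, and the cosine threshold are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen SSL teacher: maps pooled 40-dim
# log-mel features to 256-dim unit-norm embeddings. A real system would
# use a pretrained SSL model here.
W_teacher = rng.normal(size=(40, 256))

def teacher_embed(feats):
    e = feats.mean(axis=0) @ W_teacher        # average-pool over time, project
    return e / np.linalg.norm(e)

# Fixed projection of the teacher space down to the student's 64 dims,
# so the distillation target matches the student's output size.
W_proj = rng.normal(size=(256, 64)) / np.sqrt(256)

# Lightweight student: a single linear layer (a stand-in for a small CNN).
W_student = rng.normal(size=(40, 64)) * 0.01

def student_embed(feats, W):
    e = feats.mean(axis=0) @ W
    return e / (np.linalg.norm(e) + 1e-9)

def distill_step(W, batch, lr=0.1):
    """One pass of embedding-level distillation: move the student's output
    toward the (projected, normalized) teacher embedding via normalized LMS."""
    for feats in batch:
        x = feats.mean(axis=0)
        target = teacher_embed(feats) @ W_proj
        target /= np.linalg.norm(target)
        pred = x @ W
        grad = np.outer(x, pred - target)      # d/dW of 0.5 * ||xW - target||^2
        W = W - lr * grad / (np.linalg.norm(x) ** 2 + 1e-9)
    return W

# Distill on unlabeled speech segments (synthetic features here).
data = [rng.normal(size=(50, 40)) for _ in range(200)]
for _ in range(5):
    W_student = distill_step(W_student, data)

def enroll(examples, W):
    """Few-shot registration: average the student embeddings of k examples
    into a unit-norm keyword prototype."""
    proto = np.mean([student_embed(f, W) for f in examples], axis=0)
    return proto / np.linalg.norm(proto)

def detect(query, prototypes, W, threshold=0.5):
    """Return the enrolled keyword whose prototype is most cosine-similar
    to the query embedding, or None if no similarity clears the threshold."""
    q = student_embed(query, W)
    sims = {k: float(q @ p) for k, p in prototypes.items()}
    best = max(sims, key=sims.get)
    return best if sims[best] >= threshold else None
```

Because detection reduces to one small matrix product plus cosine similarities against a handful of prototypes, registering a new keyword requires no retraining — only computing and storing a new prototype, which is what makes this scheme attractive on edge devices.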
Author(s): Okman, Erman; Gok, Alican; Buyuksolak, Oguzhan
Affiliation: Analog Devices Inc. (all authors; see document for exact affiliation information)
AES Convention: 159
Paper Number: 387
Publication Date: 2025-10-14
Permalink: https://aes2.org/publications/elibrary-page/?id=23061

Okman, Erman; Gok, Alican; Buyuksolak, Oguzhan (2025). "Efficient Few-Shot Keyword Spotting for Edge Devices Using Self-Supervised Learning." AES Convention 159, Paper 387. Analog Devices Inc. Available from: https://aes2.org/publications/elibrary-page/?id=23061