AES E-Library

Real-time binaural speech reproduction with deep neural networks and dynamic head tracking

This study examines the efficacy of a deep-learning-based system for binaural speech reproduction over headphones that integrates real-time head tracking to simulate a stable speech source in a virtual environment. The system combines a custom head-tracking device built on the Arduino platform with a pre-trained neural network model for binaural speech synthesis, dynamically adapting the audio output to the listener's head orientation with an overall latency of 70 ms on standard computing hardware. We conducted a subjective speech localization task with 15 young, normal-hearing participants. Localization accuracy for a static speech source was initially 40% and improved significantly when dynamic head movements were allowed. Participants reported minimal dizziness and distortion, indicating good tolerability and sound quality. Despite some limitations, such as non-personalized settings and perceivable latency, the findings demonstrate the potential of deep learning approaches to enhance realism in virtual auditory environments. Further research is needed to refine these technologies for broader applications in immersive audio settings.
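The core idea described in the abstract — keeping a virtual speech source stable in the world while the listener's head turns — can be illustrated with a minimal sketch. The paper's neural synthesis model and Arduino tracker are not available here, so this hypothetical Python example stands in a simple constant-power interaural-level-difference pan for the DNN renderer, and a plain yaw value for the head-tracker reading; the function and variable names are illustrative, not from the paper.

```python
import math

def relative_azimuth(source_az_deg: float, head_yaw_deg: float) -> float:
    """Relative source angle for a world-fixed source: as the head rotates
    toward the source, the source's angle relative to the head shrinks.
    Result is wrapped into (-180, 180] degrees."""
    return (source_az_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

def simple_binaural_gains(az_deg: float) -> tuple[float, float]:
    """Crude stand-in for the binaural renderer (NOT the paper's DNN):
    a constant-power pan where azimuth sets the left/right level balance."""
    pan = math.sin(math.radians(az_deg))   # -1 = full left, +1 = full right
    left = math.sqrt(0.5 * (1.0 - pan))
    right = math.sqrt(0.5 * (1.0 + pan))
    return left, right

# Simulated update loop: a source fixed at +30 deg in the world, while the
# listener turns their head toward it. In the real system, head_yaw would
# come from the Arduino tracker each frame.
source_az = 30.0
for head_yaw in (0.0, 15.0, 30.0):
    rel = relative_azimuth(source_az, head_yaw)
    left_gain, right_gain = simple_binaural_gains(rel)
    # When head_yaw reaches 30 deg, rel is 0 and the gains are equal:
    # the source is perceived straight ahead, i.e. stable in the world.
```

In the actual system, the renderer would be the pre-trained neural network and the loop would run per audio block, which is where the reported 70 ms end-to-end latency accumulates.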

Permalink: https://aes2.org/publications/elibrary-page/?id=22553





E-Library location: 16938









