In future Augmented Reality applications, the level of virtual sound sources should likely be aligned with the level of an equivalent real source. As a step towards understanding the importance of such alignment, this paper studies the accuracy of listeners' expectations regarding the level of speech in a room. In the presented experiment, listeners adjusted the level of reproduced speech to match their expectation of how loud the corresponding real speaker would be. The experiment used a dataset of speech recordings in which different speakers articulated a sentence at five distinct voice levels. The test was conducted in two different room acoustic conditions. In addition to adjusting the playback level, participants were asked to rate how well they knew the speaker. Results show that participants tended to underestimate the level, especially when the voice level was high. Moreover, the errors and their variability were smaller when participants knew the speaker well. There was no difference between the two room acoustic conditions, nor were we able to show strong adaptation to the room acoustics over time. This suggests that listeners are able to form level expectations instantaneously, incorporating both the memory of source characteristics and the acoustic conditions of the room.
Author(s): Meyer-Kahlen, Nils; de Las Heras, Sergio; Lokki, Tapio
Affiliation:
Acoustics Lab, Dept. of Information and Communications Engineering, Aalto University, Finland; Acoustics Lab, Dept. of Information and Communications Engineering, Aalto University, Finland; Acoustics Lab, Dept. of Information and Communications Engineering, Aalto University, Finland, and Institute of Technical Acoustics, Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen University, Germany
(See document for exact affiliation information.)
Publication Date:
2024-08-05
Session subject:
Audio for Virtual and Augmented Reality
Permalink: https://aes2.org/publications/elibrary-page/?id=22661
Citation: Meyer-Kahlen, Nils; de Las Heras, Sergio; Lokki, Tapio (2024). "Expected Levels of Reproduced Speech" [PDF]. Paper 12. Available from: https://aes2.org/publications/elibrary-page/?id=22661