AES E-Library

Synthesis of Spatially Extended Virtual Source with Time-Frequency Decomposition of Mono Signals

Auditory displays, driven by non-auditory data, are often used to present a sound scene to a listener. Typically, the sound field places sound objects at different locations, but the scene becomes aurally richer if the perceived sonic objects also have a spatial extent (size), the synthesis of which is called volumetric virtual coding. Previous research in virtual-world Directional Audio Coding has shown that spatial extent can be synthesized from monophonic sources by applying a time-frequency-space decomposition, i.e., by distributing the time-frequency bins of the source signal to random spatial positions. However, this technique does not guarantee a stable perceived size, and the timbre can degrade. This study explores how to optimize volumetric coding in terms of timbral and spatial perception. For most types of audio, the suggested approach uses an STFT window size of 1024 samples and distributes the frequency bands, ordered from lowest to highest, over the extent using a Halton sequence. Results from two formal listening experiments are presented.
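The abstract outlines the core mechanism: a short-time Fourier transform with a 1024-sample window, whose frequency bins are spread across the desired spatial extent in an order given by a Halton sequence. The sketch below is a minimal illustration of that idea, not the authors' implementation; the stereo constant-power panning, the function name spatially_extend, and the extent_deg parameter are assumptions made for the example.

```python
# Minimal sketch (assumed details, not the paper's implementation):
# spread the STFT bins of a mono signal across a stereo extent,
# with bin directions drawn from a Halton sequence.
import numpy as np
from scipy.signal import stft, istft
from scipy.stats import qmc


def spatially_extend(mono, fs, extent_deg=60.0, n_fft=1024):
    """Pan each STFT frequency bin of a mono signal to its own direction.

    Bins are ordered from lowest to highest frequency; their azimuths are
    taken from a Halton sequence spanning [-extent_deg/2, +extent_deg/2].
    """
    # Time-frequency decomposition with a 1024-sample window.
    f, t, Z = stft(mono, fs=fs, nperseg=n_fft)
    n_bins = Z.shape[0]

    # Halton sequence: deterministic, evenly spread values in [0, 1).
    halton = qmc.Halton(d=1, scramble=False)
    positions = halton.random(n_bins)[:, 0]
    azimuth = (positions - 0.5) * extent_deg          # degrees, centred on 0

    # Constant-power stereo panning per bin (assumed panning law; the paper
    # may target a different loudspeaker layout).
    theta = (azimuth / extent_deg + 0.5) * (np.pi / 2)  # map to [0, pi/2]
    gain_l = np.cos(theta)[:, None]
    gain_r = np.sin(theta)[:, None]

    # Resynthesize one signal per output channel.
    _, left = istft(Z * gain_l, fs=fs, nperseg=n_fft)
    _, right = istft(Z * gain_r, fs=fs, nperseg=n_fft)
    return np.stack([left, right], axis=0)
```

A low-discrepancy sequence such as Halton spreads the bin directions more evenly than uniform random placement, which is the property the abstract appeals to when contrasting the suggested approach with the earlier random distribution.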

 

Author(s):
Affiliation: (See document for exact affiliation information.)
Publication Date:
Permalink: https://aes2.org/publications/elibrary-page/?id=17339



Type:
E-Library location: 16938