AES E-Library


Synthetic Transaural Audio Rendering (STAR): Extension to Full 3D Spatialization

The Synthetic Transaural Audio Rendering (STAR) method was recently published in the Journal of the Audio Engineering Society. That article proposed a perceptual approach to sound spatialization, reproducing acoustic cues by means of models, together with listening tests for its validation. It focused on azimuth and gave only hints at extensions to distance and elevation. Since then, these extensions have been implemented and tested, and this article completes the STAR method. For distance, the authors simulate physical phenomena, whereas for elevation they propose to reproduce monaural cues by shaping the Head-Related Transfer Functions with peaks and notches controlled by models, in order to give the listener a sensation of elevation. Both extensions have been validated by subjective listening tests, and the independence of these two parameters is demonstrated. For azimuth, a robust localization method gives objective results consistent with human hearing; with this method, the independence of azimuth from distance and elevation is also demonstrated. The result is a full 3D sound spatialization system that manages each parameter of each source position (azimuth, elevation, and distance) independently.
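The abstract mentions shaping HRTFs with model-controlled peaks and notches to convey elevation. As a minimal, hedged sketch of that idea (not the authors' actual filter design), the classic peaking-EQ biquad from the RBJ Audio EQ Cookbook can carve a notch at a chosen frequency; the center frequency, depth, and Q shown here are purely illustrative placeholders for values a real elevation model would supply:

```python
import math

def peaking_biquad(f0, fs, gain_db, q):
    """Peaking-EQ biquad coefficients (RBJ Audio EQ Cookbook).
    Negative gain_db carves a notch at f0; positive gain adds a peak."""
    a_lin = 10 ** (gain_db / 40)          # sqrt of the linear center gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin
    a0, a1, a2 = 1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin
    # Normalize so the feedback coefficient a0 becomes 1
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def magnitude_at(b, a, f, fs):
    """|H(e^{jw})| of the biquad, evaluated at frequency f."""
    w = 2 * math.pi * f / fs
    z1 = complex(math.cos(-w), math.sin(-w))       # e^{-jw}
    z2 = complex(math.cos(-2 * w), math.sin(-2 * w))  # e^{-2jw}
    num = b[0] + b[1] * z1 + b[2] * z2
    den = a[0] + a[1] * z1 + a[2] * z2
    return abs(num / den)

# Hypothetical elevation cue: a 12 dB notch near 8 kHz (illustrative values,
# not taken from the STAR article).
fs = 44100.0
b, a = peaking_biquad(8000.0, fs, -12.0, 4.0)
depth_db = 20 * math.log10(magnitude_at(b, a, 8000.0, fs))
print(round(depth_db, 1))  # → -12.0
```

In a full renderer, a cascade of such sections (one per modeled peak or notch) would be applied on top of the transaural rendering chain, with the filter parameters driven by the elevation model.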




