Journal of the Audio Engineering Society

2015 July/August - Volume 63 Number 7/8


For decades, it has been widely accepted that a steady-state amplitude response measured with an omnidirectional microphone at the listening location in a room is an important indicator of how an audio system will sound. This paper examines venues both small and large, from home theaters to cinemas, seeking a calibration methodology that could be applied throughout the audio industry. Room equalization schemes adjust the room curve to match a target, on the assumption that this ensures good and consistent sound. The implication is that by making in-situ measurements and manipulating the input signal so that the room curve matches a predetermined target shape, imperfections in (unspecified) loudspeakers and (unspecified) rooms are measured and repaired. It is an enticing marketing story.
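The target-matching mechanism the abstract describes can be sketched in a few lines. This is a deliberately naive illustration, not the paper's method: `room_eq_gains`, the band grid, and the boost limit are all assumptions for the sketch.

```python
import numpy as np

def room_eq_gains(freqs, measured_db, target_db, max_boost_db=6.0):
    """Naive 'room correction' of the kind the abstract questions:
    a per-band gain that pushes the measured steady-state room curve
    toward a target shape, with the boost limited to avoid driving
    the loudspeaker hard into deep acoustic nulls."""
    gains = target_db - measured_db          # dB needed to reach the target
    return np.minimum(gains, max_boost_db)   # cap boosts; cuts are unlimited
```

Applying such a gain vector makes the measured room curve match the target by construction, which is precisely why, as the paper argues, a matching room curve alone says little about the loudspeaker or the room.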

When constructing virtual realities, the task of the sound designer is usually to create a suitable immersive sound scene to fit the situation in the virtual world. This paper presents a real-time method (spatially modified synthesis, i.e., SpaMoS) for modifying the reproduction of a spatial sound recording based on listener movement in the virtual reality, providing the perception of believable movement inside the recorded sound scene. The method decomposes the omnidirectional signal of a B-format recording into granular virtual omnidirectional sources and then places them on an arbitrary surface defining a convex hull. The placement is based on an energetic analysis performed on the B-format signal. This synthesis is complemented with a separate diffuse synthesis that ideally reproduces the ambience and reverberation from the original signal. Listening experiments show that the method is plausible and that reproduction quality is very good when the listener is inside the defined cylinder-shaped surface.
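The "energetic analysis" of a B-format signal is commonly done by estimating the active intensity vector from the pressure (W) and velocity (X, Y) components. A minimal horizontal-plane sketch, assuming the paper uses this standard technique (the function name and frame size are choices made here, not taken from the paper):

```python
import numpy as np

def bformat_doa(w, x, y, frame=1024):
    """Estimate a per-frame direction of arrival from horizontal
    B-format signals via the active intensity vector, i.e., the
    time-averaged product of the pressure signal W with the
    velocity signals X and Y."""
    n_frames = len(w) // frame
    azimuths = []
    for i in range(n_frames):
        s = slice(i * frame, (i + 1) * frame)
        ix = np.mean(w[s] * x[s])   # x component of active intensity
        iy = np.mean(w[s] * y[s])   # y component of active intensity
        azimuths.append(np.arctan2(iy, ix))
    return np.array(azimuths)
```

Each per-frame azimuth can then drive the placement of a granular virtual source on the enclosing surface.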

The Effect of Vision on Psychoacoustic Testing with Headphone-Based Virtual Sound

Authors: Udesen, Jesper; Piechowiak, Tobias; Gran, Fredrik

This research investigates the degree to which the visual impression of a space affects virtual sound-based psychoacoustic testing. Three tests were considered: (1) the hearing-in-noise test (HINT), (2) a Direction-of-Arrival (DOA) test, and (3) an externalization test where the distance to the sound source is evaluated. The test stimuli remained unchanged when presented in three different environments: (1) a model of a living room where the sound is presented through loudspeakers; (2) the same living room with headphone-based virtual sound; and (3) a large hall. Neither the HINT nor the DOA test was significantly affected by changes in room environment. However, a multiple regression analysis of the externalization data shows that the results of the externalization tests are significantly affected by the room type. Auditory externalization depends not only on the acoustic stimuli to the eardrum but also on brain responses to a visual stimulus. When virtual sound is used to investigate psychoacoustic phenomena, such as sound externalization, the visual stimuli cannot be ignored.

Vector-base amplitude panning (VBAP) aims at creating virtual sound sources at arbitrary directions within 3D multichannel sound reproduction systems. However, VBAP does not consistently produce listener-specific monaural spectral cues that are essential for localization of sound sources in sagittal planes, including the front-back and up-down dimensions. In order to better understand the limitations of VBAP, a functional model approximating human processing of spectro-spatial information was applied to assess accuracy in sagittal-plane localization of virtual sources created by means of VBAP. The model predicted a strong dependence on listeners’ individual head-related transfer functions, on virtual source directions, and on loudspeaker arrangements. In general, simulations showed a systematic degradation with increasing polar-angle span between neighboring loudspeakers.
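For context on what is being evaluated, standard two-dimensional VBAP computes gains for the loudspeaker pair flanking the source by solving a small linear system and normalizing for constant power. This is a textbook sketch of pairwise VBAP itself, not of the sagittal-plane localization model applied in the paper:

```python
import numpy as np

def vbap_2d(source_az, spk_az1, spk_az2):
    """Pairwise 2-D VBAP: express the unit vector toward the source
    as a weighted sum of the two loudspeaker unit vectors, then
    normalize the weights to constant total power."""
    p = np.array([np.cos(source_az), np.sin(source_az)])
    L = np.array([[np.cos(spk_az1), np.sin(spk_az1)],
                  [np.cos(spk_az2), np.sin(spk_az2)]])
    g = np.linalg.solve(L.T, p)      # p = g1*l1 + g2*l2
    return g / np.linalg.norm(g)     # constant-power normalization
```

Because the gains depend only on source and loudspeaker directions, the spectrum at the ears is a gain-weighted mix of the loudspeakers' HRTFs rather than the source direction's own HRTF, which is why the monaural spectral cues degrade as the polar-angle span between loudspeakers grows.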

Engineering Reports

The Effects of Recording and Playback Methods in Virtual Listening Tests

Authors: Sun, Shuyuan; Shen, Yong; Liu, Ziyun; Feng, Xuelei

To better control confounding variables found in many listening tests, researchers often use virtual listening tests, an accepted method in psychoacoustic research. A virtual listening test can ensure that listening conditions are the same for every test subject, but the choices of recording and playback methods then play an important role. In this report, the results of live and virtual listening tests with four different loudspeakers were compared. Analyses of listeners' reliability, dispersion of data, preference ratings, and Least Significant Difference (LSD) comparisons are presented. The experimental evidence showed significant effects of recording and playback methods in the virtual listening tests. A reliable and authentic evaluation can be obtained from virtual listening tests more easily and efficiently through the comprehensive consideration and selection of recording and playback methods.

Strong interest in loudness normalization across the audio industry provokes questions about audio level measurement using available meters. This report describes a method for characterizing the dynamic response of any audio level meter using a pulsed sine test signal. For example, it shows that the dynamic response of an SVI (VU) meter is closer to that of a true RMS meter than to an average-responding meter, indicating that in general classical VU meters provide a better representation of volume than today's more common LED-ladder semi-peak-responding meters. The method reveals that the responses of the different types of meters differ in slope as a function of the crest factor of the applied signal. It allows different types of meters to be categorized by how they respond to the proposed test signal, making evident that a steady sine-tone calibration at 0 dB gives no certainty about what is being measured dynamically by any particular instrument.

Downmixing Method for 22.2 Multichannel Sound Signal in 8K Super Hi-Vision Broadcasting

Authors: Sugimoto, Takehiro; Oode, Satoshi; Nakayama, Yasushige

8K Super Hi-Vision (SHV) is an ultra-high-definition television system, a next-generation format intended to convey a far stronger sense of reality than existing audiovisual systems. SHV audio is presented with a 22.2 multichannel sound system composed of 24 channels arranged in three layers. Although it is desirable for the 22.2 channel signal to be reproduced as discrete signals over a loudspeaker configuration similar to that of the production environment, such an arrangement is not always practical. A downmixing method for a 22.2 multichannel sound signal that can provide a 2 channel signal via a 5.1 channel signal was therefore investigated for SHV broadcasting. The proposed method is composed of downmix equations and initial coefficients optimized for transmission by MPEG-4 AAC. A subjective evaluation showed that the proposed method is suitable for 22.2 multichannel sound broadcasting.
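The final 5.1-to-2.0 stage of such a cascade is conventionally a weighted sum of channels. A minimal sketch using the familiar ITU-R BS.775-style coefficients for illustration; the paper's 22.2-to-5.1 equations use their own optimized coefficients, which are not reproduced here:

```python
import numpy as np

def downmix_51_to_20(L, R, C, LFE, Ls, Rs, c=0.7071, s=0.7071):
    """5.1 -> 2.0 downmix with illustrative ITU-style weights:
    center split equally (-3 dB) into both outputs, each surround
    folded (-3 dB) into its own side; LFE is conventionally
    discarded in the stereo downmix."""
    Lo = L + c * C + s * Ls
    Ro = R + c * C + s * Rs
    return Lo, Ro
```

In the broadcast chain the coefficients travel as metadata alongside the MPEG-4 AAC bitstream, so a receiver without a 22.2 layout can derive 5.1 or stereo from the transmitted signal.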

Standards and Information Documents

AES Standards Committee News

Download: PDF (49.53 KB)


Cinema Sound Reproduction

Authors: Rumsey, Francis

[Feature] Work on cinema sound reproduction published at the 57th International Conference points to ways of predicting the response at the listening position from loudspeaker data, to the practicality of equalizing perforated screen losses, and to the management of low frequencies in cinemas.

Sound Board: Object-Based Audio

Authors: Parmentier, Matthieu

[Feature] We are now entering a very exciting period, in which our tools and methods are being redefined and it is time to rethink every basic part of our job. In the Internet age, one of the key words is "disruption," meaning that new technologies sometimes do not improve an industry but simply create a shortcut and kill former economic models. With objects, the sound engineer of the near future is invited to serve a demanding audience directly. In this case of disruption, object-based broadcasting preserves a strategic seat for the sound engineer, one that deserves our keen interest.

138th Convention Report, Warsaw

Download: PDF (1.47 MB)

138th Convention Exhibitors and Sponsors

Download: PDF (79.36 KB)

138th Convention Abstracts of Convention Papers

Download: PDF (228.21 KB)

139th Convention Preview, New York

Download: PDF (763.18 KB)

139th Convention Exhibitor and Sponsor Previews

Download: PDF (418.96 KB)

61st Conference, Call for Papers

Download: PDF (64.2 KB)

2016 Headphone Technology Conference, Call for Papers

Download: PDF (68.8 KB)


Section News

Download: PDF (224.9 KB)

AES Conventions and Conferences

Download: PDF (72.13 KB)


Table of Contents

Download: PDF (62.94 KB)

Cover & Sustaining Members List

Download: PDF (41.3 KB)

AES Officers, Committees, Offices & Journal Staff

Download: PDF (58.08 KB)
