Object-based audio (OBA) is an approach to sound storage, transmission, and reproduction in which individual audio objects carry associated metadata that is rendered at the client side of the broadcast chain. For example, metadata may indicate an object's position or the level or language of a dialogue track. An experiment was conducted to investigate how content creators perceive changes in perceptual attributes when the same content is rendered to different systems, and how they would change the mix if they had control of it. The main aims of this experiment were to identify a small number of the most common mix processes used by sound designers when mixing object-based content to loudspeaker systems with different numbers of channels, and to understand how the perceptual attributes of OBA content change when it is rendered to different systems. The goal is to minimize perceived changes in the context of standard Vector Base Amplitude Panning and matrix-based downmixes. Text mining and clustering of the content creators' responses revealed six general mix processes: the spatial spread of individual objects, EQ and processing, reverberation, position, bass, and level. Logistic regression models show the relationships between the mix processes, perceived changes in perceptual attributes, and the rendering method/speaker layout. The relative frequency of different mix processes was found to differ among categories of audio object, suggesting that any downmix rules should be object-category specific. These results give insight into how OBA can be used to improve listener experience.
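As background to the rendering methods the abstract names, the following is a minimal sketch of two-dimensional Vector Base Amplitude Panning, the standard technique the paper uses as a baseline. The helper `vbap_2d` is a hypothetical illustration, not code from the paper: it solves g = L⁻¹p, where the columns of L are the unit direction vectors of a loudspeaker pair and p is the source's unit direction vector, then normalizes the gains to constant power.

```python
import numpy as np

def vbap_2d(source_deg, spk1_deg, spk2_deg):
    """Compute constant-power 2-D VBAP gains for a source panned
    between two loudspeakers at the given azimuths (degrees).

    Illustrative sketch only; real renderers select the active
    loudspeaker pair from a full layout before solving.
    """
    def unit(deg):
        rad = np.radians(deg)
        return np.array([np.cos(rad), np.sin(rad)])

    # Columns of L are the loudspeaker unit vectors.
    L = np.column_stack([unit(spk1_deg), unit(spk2_deg)])
    # Solve L g = p for the raw panning gains.
    g = np.linalg.solve(L, unit(source_deg))
    # Normalize to constant power: ||g|| = 1.
    return g / np.linalg.norm(g)

# A source midway between speakers at +30° and -30° is panned equally:
gains = vbap_2d(0.0, 30.0, -30.0)
```

For a centered source the two gains come out equal (each 1/√2 under the constant-power normalization), which is the familiar equal-power pan law as a special case.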
Authors: Woodcock, James; Davies, William J.; Melchior, Frank; Cox, Trevor J.
Affiliation:
University of Salford, Salford, United Kingdom; BBC R&D, Dock House, MediaCityUK, Salford, United Kingdom
Publication Date:
2018-01-06
Permalink: https://aes2.org/publications/elibrary-page/?id=19375
Woodcock, James; Davies, William J.; Melchior, Frank; Cox, Trevor J.; 2018; Elicitation of Expert Knowledge to Inform Object-Based Audio Rendering to Different Systems; University of Salford, Salford, United Kingdom; BBC R&D, Dock House, MediaCityUK, Salford, United Kingdom; Paper; Available from: https://aes2.org/publications/elibrary-page/?id=19375
@article{woodcock2018elicitation,
  author  = {Woodcock, James and Davies, William J. and Melchior, Frank and Cox, Trevor J.},
  journal = {Journal of the Audio Engineering Society},
  title   = {Elicitation of Expert Knowledge to Inform Object-Based Audio Rendering to Different Systems},
  year    = {2018},
  volume  = {66},
  number  = {1/2},
  pages   = {44--59},
  month   = {January},
}
TY  - JOUR
TI  - Elicitation of Expert Knowledge to Inform Object-Based Audio Rendering to Different Systems
AU  - Woodcock, James
AU  - Davies, William J.
AU  - Melchior, Frank
AU  - Cox, Trevor J.
JO  - Journal of the Audio Engineering Society
PY  - 2018
VL  - 66
IS  - 1/2
SP  - 44
EP  - 59
Y1  - 2018/01
ER  -