Sunday, March 23, 18:10 h
COMMUNICATION ACOUSTICS: AUDIO GOES COGNITIVE!
by Jens Blauert, Institute of Communication Acoustics, Ruhr-Universität Bochum, Germany
Abstract
I will discuss how the specific branch of acoustics relating to information technologies has experienced a dramatic evolution during the last 30 years. With input from signal processing, psychoacoustics, and computer science, among other fields, communication acoustics has evolved from audio engineering, which grew from a symbiosis of electrical engineering and acoustics.
Taking computational auditory scene analysis (CASA) as an example, I will show that a major goal in this field is the development of algorithms that extract parametric representations of real auditory scenes. Achieving such results requires combining audio signal processing, symbolic processing, and content processing; the outcome is CASA systems that are increasingly knowledge-based. We have already developed CASA systems that mimic, and in some respects surpass, human capabilities for analyzing and recognizing auditory scenes. A minimal sketch of the kind of parametric extraction a CASA front-end performs is given below.
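The following sketch is not taken from the talk; it merely illustrates one classic parameter that binaural CASA front-ends extract from an auditory scene: the interaural time difference (ITD), estimated here by cross-correlating the two ear signals. The function name, the 44.1 kHz sampling rate, and the toy noise-burst scene are illustrative assumptions.

```python
import numpy as np

def estimate_itd(left, right, fs, max_itd=0.001):
    """Estimate the interaural time difference (ITD) between two ear
    signals from the peak of their cross-correlation.

    Returns the ITD in seconds (positive: the sound reaches the left
    ear first). max_itd bounds the search to physically plausible
    lags (~1 ms for a human head).
    """
    max_lag = int(max_itd * fs)
    # Full cross-correlation; with numpy's convention the lag axis
    # runs from -(len(left) - 1) to +(len(right) - 1).
    xcorr = np.correlate(right, left, mode="full")
    lags = np.arange(-(len(left) - 1), len(right))
    valid = np.abs(lags) <= max_lag
    best = lags[valid][np.argmax(xcorr[valid])]
    return best / fs

# Toy auditory scene: a noise burst reaching the right ear ~0.3 ms
# late, i.e. a source located to the listener's left.
fs = 44100
rng = np.random.default_rng(0)
source = rng.standard_normal(fs // 10)
delay = int(0.0003 * fs)  # ~13 samples at 44.1 kHz
left = np.concatenate([source, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), source])
print(f"estimated ITD: {estimate_itd(left, right, fs) * 1e3:.2f} ms")
```

A knowledge-based CASA system would feed such low-level parameters (ITDs, interaural level differences, onset cues) into symbolic and content-processing stages that group them into auditory objects.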
Using synthesis tools (such as virtual-reality generators) to create auditory scenes, we can achieve an astonishing degree of perceptual plausibility, presence, and immersion. These synthesis tools are multi-modal: in addition to sound, they can present touch (including vibration) and vision, and can even evoke the illusion of self-motion. The synthesis systems are parameter controlled and often interactive; a sketch of such parameter-controlled rendering follows.
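As one concrete instance of parameter-controlled auditory synthesis (again an illustration, not material from the talk), the sketch below renders a virtual sound source over headphones by convolving a dry mono signal with a head-related impulse-response (HRIR) pair for the desired direction. The HRIRs here are crude delay-and-attenuate placeholders standing in for measured ones.

```python
import numpy as np

def render_binaural(source, hrir_left, hrir_right):
    """Render a dry (anechoic) mono signal at a virtual position by
    convolving it with the HRIR pair for that direction. Returns an
    (N, 2) stereo array for headphone playback."""
    left = np.convolve(source, hrir_left)
    right = np.convolve(source, hrir_right)
    return np.stack([left, right], axis=1)

# Placeholder HRIRs: a pure delay-and-attenuate pair. In a real
# system the pair would be looked up from a measured HRIR database
# by the scene parameters (azimuth, elevation, distance) and updated
# interactively as the listener's head moves.
fs = 44100
itd_samples = int(0.0003 * fs)           # ~0.3 ms interaural delay
hrir_left = np.zeros(64); hrir_left[0] = 1.0
hrir_right = np.zeros(64); hrir_right[itd_samples] = 0.6

rng = np.random.default_rng(1)
dry = rng.standard_normal(fs)            # 1 s of noise as test source
stereo = render_binaural(dry, hrir_left, hrir_right)
print(stereo.shape)                      # (44163, 2)
```

Interactivity enters through the parameters: as the listener turns or moves, the renderer switches (or interpolates between) HRIR pairs in real time, which is what sustains plausibility and presence.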
Cognitive and multi-modal phenomena have to be considered in both audio analysis and synthesis. Consequently, future audio systems will often contain knowledge-based and multi-modal components, and we will increasingly see audio systems embedded in larger, more complex systems. This technological trend will shape the future of communication acoustics, and of audio engineering as a subset of it. We must address the urgent problem that many audio engineers are not yet prepared to meet the challenge of designing knowledge-based and multi-modal systems.