AES E-Library

A Robust and Computationally Efficient Speech/Music Discriminator

A new method for discriminating between speech and music signals is introduced. The strategy is based on the extraction of four features whose values are combined linearly into a single parameter, which is then used to distinguish between the two kinds of signal. The method has achieved an accuracy above 99%, even for severely degraded and noisy signals. Moreover, the low dimensionality of the feature space, together with a very simple information-merging technique, results in remarkable robustness to new situations. The low computational complexity of the method makes it suitable for applications that demand real-time operation. Finally, excellent resolution in the segmentation of audio streams is achieved by appropriate handling of the analyzed data.
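The abstract describes combining four extracted features linearly into a single discrimination parameter that is thresholded to separate speech from music. The sketch below illustrates only that combination-and-threshold step; the feature values, weights, and threshold shown are illustrative assumptions, not the features or values used in the paper.

```python
# Minimal sketch of a linear feature-combination discriminator.
# The specific features, weights, and threshold are assumed for
# illustration; they are not taken from the paper.
import numpy as np

def discrimination_parameter(features: np.ndarray, weights: np.ndarray) -> float:
    """Combine per-frame feature values linearly into one parameter."""
    return float(np.dot(weights, features))

def classify(features: np.ndarray, weights: np.ndarray, threshold: float = 0.5) -> str:
    """Label a frame as 'speech' or 'music' by thresholding the parameter."""
    param = discrimination_parameter(features, weights)
    return "speech" if param > threshold else "music"

# Example usage with four hypothetical feature values and assumed weights.
frame_features = np.array([0.8, 0.3, 0.6, 0.7])  # four features from one analysis frame
weights = np.array([0.25, 0.25, 0.25, 0.25])     # assumed linear-combination weights
print(classify(frame_features, weights))
```

Keeping the decision to a single linearly combined parameter, as the abstract notes, is what keeps the per-frame cost low enough for real-time operation.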

 

Permalink: https://aes2.org/publications/elibrary-page/?id=13897


