Listeners of audio are increasingly shifting to a participatory culture in which technology allows them to modify and control the listening experience. This report describes the development of a mood-driven music player, Moodplay, which incorporates semantic computing technologies for musical mood based on social tags, along with informative and aesthetic browsing visualizations. The prototype runs with a dataset of over 10,000 songs covering a range of genres, arousal, and valence levels. Changes to the design of the system were made in response to user evaluations from over 120 participants in 15 different sectors of work or education. The proposed client/server architecture integrates modular components powered by semantic web technologies and audio content feature extraction. This enables recorded music content to be controlled in flexible and nonlinear ways. Dynamic music objects can be used to create on-the-fly mashups of two or more simultaneous songs, allowing the selection of multiple moods. The authors also consider nonlinear audio techniques that could transform the player into a creative tool, for instance by temporally reorganizing, compressing, or expanding prerecorded content.
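The abstract describes selecting music by mood position (arousal and valence levels). A minimal sketch of that idea, assuming a simple Euclidean nearest-neighbor lookup in the valence-arousal plane — the song data and function names here are invented for illustration and are not the authors' implementation:

```python
import math

# Toy library: each song is annotated with mood coordinates in [-1, 1].
# Values are invented for the example, not taken from the Moodplay dataset.
songs = [
    {"title": "A", "valence": 0.8, "arousal": 0.7},
    {"title": "B", "valence": -0.5, "arousal": 0.2},
    {"title": "C", "valence": 0.6, "arousal": -0.3},
]

def nearest_songs(target_valence, target_arousal, library, k=2):
    """Return the k songs closest to a target mood in valence-arousal space."""
    def dist(song):
        return math.hypot(song["valence"] - target_valence,
                          song["arousal"] - target_arousal)
    return sorted(library, key=dist)[:k]

# Pick songs near a happy/energetic mood (high valence, high arousal).
picks = nearest_songs(0.7, 0.6, songs)
print([s["title"] for s in picks])
```

Returning k > 1 candidates mirrors the mashup idea in the abstract: the two or three songs nearest the chosen mood region could be blended simultaneously.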
Author(s): Barthet, Mathieu; Fazekas, György; Allik, Alo; Thalmann, Florian; Sandler, Mark B.
Affiliation:
Centre for Digital Music, School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
Publication Date:
2016-09-06
Permalink: https://aes2.org/publications/elibrary-page/?id=18376
Barthet, Mathieu; Fazekas, György; Allik, Alo; Thalmann, Florian; Sandler, Mark B.; 2016; From Interactive to Adaptive Mood-Based Music Listening Experiences in Social or Personal Contexts [PDF]; Centre for Digital Music, School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK; Paper; Available from: https://aes2.org/publications/elibrary-page/?id=18376
@article{barthet2016from,
author={Barthet, Mathieu and Fazekas, György and Allik, Alo and Thalmann, Florian and Sandler, Mark B.},
journal={Journal of the Audio Engineering Society},
title={From Interactive to Adaptive Mood-Based Music Listening Experiences in Social or Personal Contexts},
year={2016},
volume={64},
number={9},
pages={673--682},
month={September},}
TY  - JOUR
TI  - From Interactive to Adaptive Mood-Based Music Listening Experiences in Social or Personal Contexts
AU  - Barthet, Mathieu
AU  - Fazekas, György
AU  - Allik, Alo
AU  - Thalmann, Florian
AU  - Sandler, Mark B.
JO  - Journal of the Audio Engineering Society
PY  - 2016
VL  - 64
IS  - 9
SP  - 673
EP  - 682
Y1  - 2016/09
ER  -