A Variational Y-Autoencoder for Disentangling Gesture and Material of Interaction Sounds
Appropriate sound effects are an important aspect of immersive virtual experiences. Particularly in mixed reality scenarios, it may be desirable to change the acoustic properties of a naturally occurring interaction sound (e.g., the sound of a metal spoon scraping a wooden bowl) to a sound matching the characteristics of the corresponding interaction in the virtual environment (e.g., using wooden tools in a porcelain bowl). In this paper, we adapt the concept of a Y-Autoencoder (YAE) to the domain of sound effect analysis and synthesis. The YAE model makes it possible to disentangle the gesture and material properties of sound effects with a weakly supervised training strategy where only an identifier label for the material in each training example is given. We show that such a model makes it possible to resynthesize sound effects after exchanging the material label of an encoded example and obtain perceptually meaningful synthesis results with relatively low computational effort. By introducing a variational regularization for the encoded gesture, as well as an adversarial loss, we can further use the model to generate new and varying sound effects with the material characteristics of the training data, while the analyzed audio signal can originate from interactions with unknown materials.
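The analysis/synthesis workflow the abstract describes can be sketched as follows. This is a hypothetical, minimal illustration of the Y-Autoencoder idea only: an encoder maps an interaction sound to a material-free "gesture" code, and a decoder reconstructs audio features from that code plus a material label, so swapping the label at synthesis time changes the material characteristics while preserving the gesture. All layer sizes, function names, and the use of plain linear maps are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATS = 64      # size of one audio feature frame (assumed)
N_GESTURE = 8     # dimension of the gesture code (assumed)
N_MATERIALS = 4   # number of material labels in the training data (assumed)

# Randomly initialized weights stand in for a trained model here.
W_enc = rng.standard_normal((N_GESTURE, N_FEATS)) * 0.1
W_dec = rng.standard_normal((N_FEATS, N_GESTURE + N_MATERIALS)) * 0.1

def encode(x):
    """Map an audio feature frame to a gesture code (no material info)."""
    return np.tanh(W_enc @ x)

def decode(gesture, material_id):
    """Resynthesize audio features from a gesture code and a material label."""
    onehot = np.zeros(N_MATERIALS)
    onehot[material_id] = 1.0
    return W_dec @ np.concatenate([gesture, onehot])

# Analyze a sound recorded with material 0, then resynthesize it
# once with the original label and once with material 2 swapped in.
x = rng.standard_normal(N_FEATS)
g = encode(x)
y_same = decode(g, material_id=0)
y_swap = decode(g, material_id=2)

# Same gesture code, different material label -> different synthesis.
print(np.allclose(y_same, y_swap))
```

In the paper's full setup, the gesture code would additionally be regularized variationally and trained with an adversarial loss, which this toy forward pass does not model.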
Authors: Schwär, Simon; Müller, Meinard; Schlecht, Sebastian J.
Affiliation:
International Audio Laboratories, Erlangen, Germany; Aalto University, Espoo, Finland
Publication Date:
2022-08-06
Session subject:
Paper
Permalink: https://aes2.org/publications/elibrary-page/?id=21853
Schwär, Simon; Müller, Meinard; Schlecht, Sebastian J.; 2022; A Variational Y-Autoencoder for Disentangling Gesture and Material of Interaction Sounds [PDF]; International Audio Laboratories, Erlangen, Germany; Aalto University, Espoo, Finland; Paper 23; Available from: https://aes2.org/publications/elibrary-page/?id=21853