A Knowledge-based, Data-driven Method for Action-sound Mapping

Federico Visi, Baptiste Caramiaux, Michael McLoughlin, and Eduardo Miranda

Proceedings of the International Conference on New Interfaces for Musical Expression

Abstract:

This paper presents a knowledge-based, data-driven method for using data describing action-sound couplings collected from a group of people to generate multiple complex mappings between the performance movements of a musician and sound synthesis. This is done by using a database of multimodal motion data collected from multiple subjects coupled with sound synthesis parameters. A series of sound stimuli is synthesised using the sound engine that will be used in performance. Multimodal motion data is collected by asking each participant to listen to each sound stimulus and move as if they were producing the sound using a musical instrument they are given. Multimodal data is recorded during each performance, and paired with the synthesis parameters used for generating the sound stimulus. The dataset created using this method is then used to build a topological representation of the performance movements of the subjects. This representation is then used to interactively generate training data for machine learning algorithms, and define mappings for real-time performance. To better illustrate each step of the procedure, we describe an implementation involving clarinet, motion capture, wearable sensor armbands, and waveguide synthesis.
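The core idea of the abstract — using a database of motion features paired with synthesis parameters to drive a mapping at performance time — can be sketched in miniature. The snippet below is an illustrative simplification, not the paper's actual method: it stands in for the topological representation and trained models with a simple k-nearest-neighbour lookup, and all feature and parameter names (`breath`, `reed_stiffness`) are invented for the example.

```python
# Hypothetical sketch: mapping live motion features to synthesis parameters
# via a k-nearest-neighbour lookup over a multi-subject dataset.
# Feature layout and parameter names are illustrative, not from the paper.
import math

# Each record pairs a motion feature vector (e.g. limb speed, posture
# descriptors) with the synthesis parameters used for the sound stimulus
# the participant was responding to.
dataset = [
    # (motion features,  synthesis parameters)
    ((0.1, 0.9), {"breath": 0.2, "reed_stiffness": 0.8}),
    ((0.8, 0.2), {"breath": 0.9, "reed_stiffness": 0.3}),
    ((0.5, 0.5), {"breath": 0.5, "reed_stiffness": 0.5}),
]

def map_motion_to_sound(features, k=2):
    """Interpolate synthesis parameters from the k nearest recorded poses."""
    nearest = sorted(
        (math.dist(features, feat), params) for feat, params in dataset
    )[:k]
    # Inverse-distance weighting (epsilon avoids division by zero when the
    # incoming pose coincides exactly with a recorded one).
    weights = [1.0 / (d + 1e-9) for d, _ in nearest]
    total = sum(weights)
    keys = nearest[0][1].keys()
    return {
        key: sum(w * p[key] for w, (_, p) in zip(weights, nearest)) / total
        for key in keys
    }

# A pose close to the first record yields parameters near that record's.
params = map_motion_to_sound((0.15, 0.85))
```

In the paper itself, this lookup role is played by a learned model trained on data generated interactively from the topological representation; the sketch only conveys the data-driven pairing of movement and synthesis parameters.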

Citation:

Federico Visi, Baptiste Caramiaux, Michael McLoughlin, and Eduardo Miranda. 2017. A Knowledge-based, Data-driven Method for Action-sound Mapping. Proceedings of the International Conference on New Interfaces for Musical Expression, Copenhagen, Denmark, pp. 231–236. DOI: 10.5281/zenodo.1176230

BibTeX Entry:

@inproceedings{fvisi2017,
 abstract = {This paper presents a knowledge-based, data-driven method for using data describing action-sound couplings collected from a group of people to generate multiple complex mappings between the performance movements of a musician and sound synthesis. This is done by using a database of multimodal motion data collected from multiple subjects coupled with sound synthesis parameters. A series of sound stimuli is synthesised using the sound engine that will be used in performance. Multimodal motion data is collected by asking each participant to listen to each sound stimulus and move as if they were producing the sound using a musical instrument they are given. Multimodal data is recorded during each performance, and paired with the synthesis parameters used for generating the sound stimulus. The dataset created using this method is then used to build a topological representation of the performance movements of the subjects. This representation is then used to interactively generate training data for machine learning algorithms, and define mappings for real-time performance. To better illustrate each step of the procedure, we describe an implementation involving clarinet, motion capture, wearable sensor armbands, and waveguide synthesis.},
 address = {Copenhagen, Denmark},
 author = {Federico Visi and Baptiste Caramiaux and Michael McLoughlin and Eduardo Miranda},
 booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
 doi = {10.5281/zenodo.1176230},
 issn = {2220-4806},
 pages = {231--236},
 publisher = {Aalborg University Copenhagen},
 title = {A Knowledge-based, Data-driven Method for Action-sound Mapping},
 url = {http://www.nime.org/proceedings/2017/nime2017_paper0043.pdf},
 year = {2017}
}