Towards an Interactive Model-Based Sonification of Hand Gesture for Dance Performance

James Leonard and Andrea Giomi

Proceedings of the International Conference on New Interfaces for Musical Expression

Abstract:

This paper presents ongoing research on the interactive sonification of hand gestures in dance performance. To this end, we propose a conceptual framework and a multilayered mapping model derived from an experimental case study. The goal of this research is twofold. On the one hand, we aim to determine action-based perceptual invariants that allow us to establish pertinent relations between gesture qualities and sound features. On the other hand, we are interested in analysing how an interactive model-based sonification can provide useful and effective feedback for dance practitioners. From this point of view, our research explicitly addresses the convergence between the scientific understanding provided by the field of movement sonification and the traditional know-how developed over the years within the digital instrument and interaction design communities. A key component of our study is the combination of physically-based sound synthesis and motion feature analysis. This approach has proven effective in providing interesting insights for devising novel sonification models for artistic and scientific purposes, and for developing a collaborative platform involving the designer, the musician and the performer.

Citation:

James Leonard and Andrea Giomi. 2020. Towards an Interactive Model-Based Sonification of Hand Gesture for Dance Performance. Proceedings of the International Conference on New Interfaces for Musical Expression. DOI: 10.5281/zenodo.4813422

BibTeX Entry:

@inproceedings{NIME20_72,
 abstract = {This paper presents ongoing research on the interactive sonification of hand gestures in dance performance. To this end, we propose a conceptual framework and a multilayered mapping model derived from an experimental case study. The goal of this research is twofold. On the one hand, we aim to determine action-based perceptual invariants that allow us to establish pertinent relations between gesture qualities and sound features. On the other hand, we are interested in analysing how an interactive model-based sonification can provide useful and effective feedback for dance practitioners. From this point of view, our research explicitly addresses the convergence between the scientific understanding provided by the field of movement sonification and the traditional know-how developed over the years within the digital instrument and interaction design communities. A key component of our study is the combination of physically-based sound synthesis and motion feature analysis. This approach has proven effective in providing interesting insights for devising novel sonification models for artistic and scientific purposes, and for developing a collaborative platform involving the designer, the musician and the performer.},
 address = {Birmingham, UK},
 author = {Leonard, James and Giomi, Andrea},
 booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
 doi = {10.5281/zenodo.4813422},
 editor = {Romain Michon and Franziska Schroeder},
 issn = {2220-4806},
 month = {July},
 pages = {369--374},
 presentation-video = {https://youtu.be/HQqIjL-Z8dA},
 publisher = {Birmingham City University},
 title = {Towards an Interactive Model-Based Sonification of Hand Gesture for Dance Performance},
 url = {https://www.nime.org/proceedings/2020/nime2020_paper72.pdf},
 year = {2020}
}