GRASSP: Gesturally-Realized Audio, Speech and Song Performance
Bob Pritchard and Sidney S. Fels
Proceedings of the International Conference on New Interfaces for Musical Expression
- Year: 2006
- Location: Paris, France
- Pages: 272–276
- Keywords: Speech synthesis, parallel formant speech synthesizer, gesture control, Max/MSP, Jitter, Cyberglove, Polhemus, sound diffusion, UBC Toolbox, Glove-Talk
- DOI: 10.5281/zenodo.1176987
- PDF: http://www.nime.org/proceedings/2006/nime2006_272.pdf
Abstract:
We describe the implementation of an environment for Gesturally-Realized Audio, Speech and Song Performance (GRASSP), which includes a glove-based interface, a mapping/training interface, and a collection of Max/MSP/Jitter bpatchers that allow the user to improvise speech, song, sound synthesis, sound processing, sound localization, and video processing. The mapping/training interface provides a framework for performers to specify by example the mapping between gesture and sound or video controls. We demonstrate the effectiveness of the GRASSP environment for gestural control of musical expression by creating a gesture-to-voice system that is currently being used by performers.
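The abstract's central idea is mapping by example: the performer demonstrates a few gesture/sound pairings, and the system interpolates control parameters for new gestures. The sketch below is only a rough illustration of that general idea in Python, not the paper's Max/MSP/Jitter implementation or its trained models; the sensor values, formant targets, and the inverse-distance nearest-neighbour scheme are all assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical training examples: each glove sensor reading (e.g. finger-bend
# values) is paired with the synthesis parameters it should produce
# (e.g. two formant frequencies). Dimensions and values are illustrative only.
gesture_examples = np.array([
    [0.1, 0.2, 0.9],   # open hand
    [0.8, 0.7, 0.1],   # closed fist
    [0.5, 0.5, 0.5],   # neutral pose
])
parameter_examples = np.array([
    [730.0, 1090.0],   # formants near /a/
    [270.0, 2290.0],   # formants near /i/
    [500.0, 1500.0],   # neutral vowel
])

def map_gesture(gesture, k=2):
    """Interpolate synthesis parameters from the k nearest example gestures."""
    distances = np.linalg.norm(gesture_examples - gesture, axis=1)
    nearest = np.argsort(distances)[:k]
    # Inverse-distance weights; the small epsilon avoids division by zero
    # when the input matches an example exactly.
    weights = 1.0 / (distances[nearest] + 1e-6)
    weights /= weights.sum()
    return weights @ parameter_examples[nearest]

# A gesture close to the "open hand" example yields formants near /a/.
print(map_gesture(np.array([0.2, 0.3, 0.8])))
```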
Citation:
Bob Pritchard and Sidney S. Fels. 2006. GRASSP: Gesturally-Realized Audio, Speech and Song Performance. Proceedings of the International Conference on New Interfaces for Musical Expression. DOI: 10.5281/zenodo.1176987
BibTeX Entry:
@inproceedings{Pritchard2006,
  abstract = {We describe the implementation of an environment for Gesturally-Realized Audio, Speech and Song Performance (GRASSP), which includes a glove-based interface, a mapping/training interface, and a collection of Max/MSP/Jitter bpatchers that allow the user to improvise speech, song, sound synthesis, sound processing, sound localization, and video processing. The mapping/training interface provides a framework for performers to specify by example the mapping between gesture and sound or video controls. We demonstrate the effectiveness of the GRASSP environment for gestural control of musical expression by creating a gesture-to-voice system that is currently being used by performers.},
  address = {Paris, France},
  author = {Pritchard, Bob and Fels, Sidney S.},
  booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
  doi = {10.5281/zenodo.1176987},
  issn = {2220-4806},
  keywords = {Speech synthesis, parallel formant speech synthesizer, gesture control, Max/MSP, Jitter, Cyberglove, Polhemus, sound diffusion, UBC Toolbox, Glove-Talk},
  pages = {272--276},
  title = {GRASSP: Gesturally-Realized Audio, Speech and Song Performance},
  url = {http://www.nime.org/proceedings/2006/nime2006_272.pdf},
  year = {2006}
}