Automatic Recognition of Soundpainting for the Generation of Electronic Music Sounds
David Antonio Gómez Jáuregui, Irvin Dongo, and Nadine Couture
Proceedings of the International Conference on New Interfaces for Musical Expression
- Year: 2019
- Location: Porto Alegre, Brazil
- Pages: 59–64
- DOI: 10.5281/zenodo.3672866
- PDF: http://www.nime.org/proceedings/2019/nime2019_paper012.pdf
Abstract:
This work aims to explore the use of a new gesture-based interaction built on automatic recognition of Soundpainting structured gestural language. In the proposed approach, a composer (called Soundpainter) performs Soundpainting gestures facing a Kinect sensor. Then, a gesture recognition system captures gestures that are sent to a sound generator software. The proposed method was used to stage an artistic show in which a Soundpainter had to improvise with 6 different gestures to generate a musical composition from different sounds in real time. The accuracy of the gesture recognition system was evaluated as well as Soundpainter's user experience. In addition, a user evaluation study for using our proposed system in a learning context was also conducted. Current results open up perspectives for the design of new artistic expressions based on the use of automatic gestural recognition supported by Soundpainting language.
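To give a concrete sense of the pipeline described above (Kinect-based gesture recognition feeding a sound generator in real time), the sketch below forwards a recognized gesture label to a sound generator as an OSC message. This is a minimal illustration under stated assumptions, not the authors' implementation: the paper does not specify the transport or the sound software, and the gesture names, OSC address, and port used here are hypothetical placeholders (the example assumes the python-osc package).

```python
# Minimal sketch (not the paper's implementation): forward a recognized
# Soundpainting gesture label to a sound generator over OSC.
# Assumes the python-osc package; gesture names, OSC address, and port
# below are hypothetical placeholders.
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical set of six gesture labels, standing in for the six
# gestures improvised with during the show.
GESTURES = {"whole_group", "play", "stop", "volume_up", "volume_down", "long_tone"}

# Sound generator assumed to listen for OSC messages on localhost:9000.
client = SimpleUDPClient("127.0.0.1", 9000)

def on_gesture_recognized(label: str, confidence: float) -> None:
    """Called by the gesture recognizer each time a gesture is classified."""
    if label in GESTURES and confidence > 0.8:
        # Send the label so the sound generator can trigger the mapped sound.
        client.send_message("/soundpainting/gesture", [label, confidence])

# Example: simulate one recognition event.
on_gesture_recognized("play", 0.93)
```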
Citation:
David Antonio Gómez Jáuregui, Irvin Dongo, and Nadine Couture. 2019. Automatic Recognition of Soundpainting for the Generation of Electronic Music Sounds. Proceedings of the International Conference on New Interfaces for Musical Expression, Porto Alegre, Brazil, pp. 59–64. DOI: 10.5281/zenodo.3672866
BibTeX Entry:
@inproceedings{GomezJauregui2019,
  abstract = {This work aims to explore the use of a new gesture-based interaction built on automatic recognition of Soundpainting structured gestural language. In the proposed approach, a composer (called Soundpainter) performs Soundpainting gestures facing a Kinect sensor. Then, a gesture recognition system captures gestures that are sent to a sound generator software. The proposed method was used to stage an artistic show in which a Soundpainter had to improvise with 6 different gestures to generate a musical composition from different sounds in real time. The accuracy of the gesture recognition system was evaluated as well as Soundpainter's user experience. In addition, a user evaluation study for using our proposed system in a learning context was also conducted. Current results open up perspectives for the design of new artistic expressions based on the use of automatic gestural recognition supported by Soundpainting language.},
  address = {Porto Alegre, Brazil},
  author = {David Antonio Gómez Jáuregui and Irvin Dongo and Nadine Couture},
  booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
  doi = {10.5281/zenodo.3672866},
  editor = {Marcelo Queiroz and Anna Xambó Sedó},
  issn = {2220-4806},
  month = {June},
  pages = {59--64},
  publisher = {UFRGS},
  title = {Automatic Recognition of Soundpainting for the Generation of Electronic Music Sounds},
  url = {http://www.nime.org/proceedings/2019/nime2019_paper012.pdf},
  year = {2019}
}