Me & My Musical AI "Toddler"

Cagri Erdem and Alexander Refsum Jensenius

Proceedings of the International Conference on New Interfaces for Musical Expression

Abstract

This performance features the coadaptive audiovisual instrument CAVI in a collaborative human–machine improvisation. The system details are presented in a related paper submission for this year’s NIME, entitled “CAVI: A Coadaptive Audiovisual Instrument–Composition”. Briefly, CAVI tracks muscle and motion data from a performer’s actions and uses deep learning to generate control signals for a live sound-processing system built on layered time-based effects modules. CAVI also has a virtual body that is visually present on stage. The artistic motivation of the project concerns how elements of surprise can emerge between a human performer and a computer-based musical agent. We explore this through CAVI, which builds on a dataset collected in a previous laboratory study of the sound-producing actions of guitarists. The dataset used in this project consists of electromyogram (EMG) and acceleration (ACC) data from thirty-three guitarists performing a set of basic sound-producing actions (impulsive, sustained, and iterative) and free improvisations. In the performance setup, CAVI continuously monitors 4-channel EMG and 3-channel ACC data streamed from a Myo armband worn on the guitarist’s right forearm. From these streams, it generates new control signals that anticipate what the performer is likely to do next. CAVI is concerned with (1) how musical agents can interact with a performer’s body motion and (2) how artists can diversify performance repertoires using AI technologies. The result is serendipitous performances based on the interaction between the human guitarist and the artificial agent. For this NIME performance, CAVI’s creator will perform with the system, with improved mappings, an optimized model, and a refined sound mix and spatialization. We will also include the Self-playing Guitars that were presented as part of an online installation, Strings On-Line, during NIME 2020. In doing so, we aim to address this year’s special call for music option, “NIME with a story,” and enhance the piece’s multi-agent structure with acoustic guitars that interact autonomously with their environment via Bela boards, actuators, and sound and motion sensors. Thus, the final performance setup comprises a human performer on electric guitar, a virtual agent responsible for live sound processing, and six self-playing guitars.
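To make the processing chain described above concrete, below is a minimal, hypothetical sketch in Python: a small recurrent network consumes a window of the 4-channel EMG and 3-channel ACC stream and emits control values that are forwarded to a live sound-processing host via OSC. The model architecture, window size, number of control signals, OSC addresses, and the placeholder sensor reader are all illustrative assumptions, not CAVI’s actual implementation, which is detailed in the companion paper.

# Hypothetical sketch of an EMG/ACC-to-control-signal pipeline.
# Assumptions (not from the source): GRU architecture, window size,
# control count, OSC addresses, and the stubbed sensor reader.
import numpy as np
import torch
import torch.nn as nn
from pythonosc.udp_client import SimpleUDPClient

N_CHANNELS = 7   # 4 EMG + 3 ACC channels, as stated in the abstract
N_CONTROLS = 8   # number of effect-control signals (assumed)
WINDOW = 50      # sensor frames per analysis window (assumed)

class ControlPredictor(nn.Module):
    """Predicts the next frame of control signals from a sensor window."""
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_CONTROLS)

    def forward(self, x):                        # x: (batch, WINDOW, N_CHANNELS)
        _, h = self.rnn(x)                       # h: (1, batch, hidden)
        return torch.sigmoid(self.head(h[-1]))   # controls scaled to [0, 1]

def read_sensor_window():
    """Placeholder for the Myo data stream; returns random frames here."""
    return np.random.randn(WINDOW, N_CHANNELS).astype(np.float32)

model = ControlPredictor()                    # in practice: load trained weights
client = SimpleUDPClient("127.0.0.1", 9000)   # live sound-processing host (assumed)

with torch.no_grad():
    window = torch.from_numpy(read_sensor_window()).unsqueeze(0)
    controls = model(window).squeeze(0).tolist()
    for i, value in enumerate(controls):
        # One OSC message per time-based effect parameter (addresses assumed).
        client.send_message(f"/cavi/effect/{i}", value)

In the real system, such a model would be trained on the guitarist dataset described above and would run continuously on the live stream rather than on a single window.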

Citation

Cagri Erdem and Alexander Refsum Jensenius. 2022. Me & My Musical AI "Toddler". Proceedings of the International Conference on New Interfaces for Musical Expression. DOI: 10.21428/92fbeb44.1d9762bd

BibTeX Entry

@inproceedings{nime2022_music_22,
 abstract = {This performance features the coadaptive audiovisual instrument CAVI in a collaborative human–machine improvisation. The system details are presented in a related paper submission for this year’s NIME, entitled “CAVI: A Coadaptive Audiovisual Instrument–Composition”. Briefly, CAVI tracks muscle and motion data from a performer’s actions and uses deep learning to generate control signals for a live sound-processing system built on layered time-based effects modules. CAVI also has a virtual body that is visually present on stage. The artistic motivation of the project concerns how elements of surprise can emerge between a human performer and a computer-based musical agent. We explore this through CAVI, which builds on a dataset collected in a previous laboratory study of the sound-producing actions of guitarists. The dataset used in this project consists of electromyogram (EMG) and acceleration (ACC) data from thirty-three guitarists performing a set of basic sound-producing actions (impulsive, sustained, and iterative) and free improvisations. In the performance setup, CAVI continuously monitors 4-channel EMG and 3-channel ACC data streamed from a Myo armband worn on the guitarist’s right forearm. From these streams, it generates new control signals that anticipate what the performer is likely to do next. CAVI is concerned with (1) how musical agents can interact with a performer’s body motion and (2) how artists can diversify performance repertoires using AI technologies. The result is serendipitous performances based on the interaction between the human guitarist and the artificial agent. For this NIME performance, CAVI’s creator will perform with the system, with improved mappings, an optimized model, and a refined sound mix and spatialization. We will also include the Self-playing Guitars that were presented as part of an online installation, Strings On-Line, during NIME 2020. In doing so, we aim to address this year’s special call for music option, “NIME with a story,” and enhance the piece’s multi-agent structure with acoustic guitars that interact autonomously with their environment via Bela boards, actuators, and sound and motion sensors. Thus, the final performance setup comprises a human performer on electric guitar, a virtual agent responsible for live sound processing, and six self-playing guitars.},
 address = {Auckland, New Zealand},
 articleno = {22},
 author = {Cagri Erdem and Alexander Refsum Jensenius},
 booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
 doi = {10.21428/92fbeb44.1d9762bd},
 editor = {Raul Masu},
 issn = {2220-4806},
 month = {jun},
 title = {Me \& My Musical AI ``Toddler''},
 track = {Music},
 url = {https://doi.org/10.21428/92fbeb44.1d9762bd},
 year = {2022}
}