A Live Coding Session With the Cloud and a Virtual Agent

Anna Xambó

Proceedings of the International Conference on New Interfaces for Musical Expression

Abstract

This live coding performance is a collaboration between a human live coder and a virtual agent (VA). MIRLCa is a self-built SuperCollider extension and the follow-up to MIRLC, an earlier self-built SuperCollider extension. The system combines machine learning algorithms with music information retrieval techniques to retrieve crowdsourced sounds from the online database Freesound.org, resulting in a sound-based music style. In this performance, the live coder will explore the online database by retrieving only the sounds that the system's retrieval methods predict to be “good” sounds. This approach aims to facilitate serendipity, rather than randomness, in the retrieval of crowdsourced sounds. The VA has been trained to learn the musical preferences of a live coder through context-dependent decisions, or ‘situated musical actions’. A binary classifier based on a multilayer perceptron (MLP) neural network is used for sound prediction. The themes of legibility, agency and negotiability in performance will be explored through the collaboration between the human live coder, the virtual agent live coder and the audience. This project has been funded by the EPSRC HDI Network Plus Grant - Art, Music, and Culture theme.
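
The abstract does not detail the sound-prediction step, but its core idea (a binary “good”/“bad” classifier over audio features of crowdsourced sounds) can be illustrated with a small sketch. The Python/scikit-learn code below is an assumption for illustration only, not the MIRLCa SuperCollider implementation: it supposes each candidate Freesound result is summarised as a fixed-length feature vector (for example, mean MFCCs from Freesound's content analysis) and that the live coder has labelled a small training set of sounds as good or bad.

# Illustrative sketch of the "good sound" predictor described in the abstract.
# This is a hypothetical Python/scikit-learn example, NOT the actual MIRLCa
# (SuperCollider) implementation; feature shapes and data are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical training data: one 13-dimensional feature vector per sound
# (e.g., mean MFCCs), with labels given by the live coder:
# 1 = "good" sound, 0 = "bad" sound.
features = rng.normal(size=(200, 13))
labels = rng.integers(0, 2, size=200)

x_train, x_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

# Binary classifier based on a multilayer perceptron, as in the abstract.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(x_train, y_train)
print("held-out accuracy:", clf.score(x_test, y_test))

# At performance time, candidate sounds returned by a query would be filtered
# so that only those predicted as "good" are loaded and played.
candidate = rng.normal(size=(1, 13))
if clf.predict(candidate)[0] == 1:
    print("keep this sound")
else:
    print("discard and query again")

In performance, a query (for instance by tag or by similarity) would return candidate sounds, and only those the trained model predicts as good would be brought into the session, which is how serendipitous rather than random retrieval is obtained.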

Citation

Anna Xambó. 2021. A Live Coding Session With the Cloud and a Virtual Agent. Proceedings of the International Conference on New Interfaces for Musical Expression. DOI: 10.21428/92fbeb44.7299536b

BibTeX Entry

@inproceedings{nime2021_music_24,
 abstract = {This live coding performance is a collaboration between a human live coder and a virtual agent (VA). MIRLCa is a self-built SuperCollider extension and the follow-up to MIRLC, an earlier self-built SuperCollider extension. The system combines machine learning algorithms with music information retrieval techniques to retrieve crowdsourced sounds from the online database Freesound.org, resulting in a sound-based music style. In this performance, the live coder will explore the online database by retrieving only the sounds that the system's retrieval methods predict to be “good” sounds. This approach aims to facilitate serendipity, rather than randomness, in the retrieval of crowdsourced sounds. The VA has been trained to learn the musical preferences of a live coder through context-dependent decisions, or ‘situated musical actions’. A binary classifier based on a multilayer perceptron (MLP) neural network is used for sound prediction. The themes of legibility, agency and negotiability in performance will be explored through the collaboration between the human live coder, the virtual agent live coder and the audience. This project has been funded by the EPSRC HDI Network Plus Grant - Art, Music, and Culture theme.},
 address = {Shanghai, China},
 articleno = {24},
 author = {Anna Xambó},
 booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
 doi = {10.21428/92fbeb44.7299536b},
 editor = {Eric Parren and Wei Chen},
 issn = {2220-4806},
 month = {June},
 title = {A Live Coding Session With the Cloud and a Virtual Agent},
 track = {Music},
 url = {https://doi.org/10.21428/92fbeb44.7299536b},
 year = {2021}
}