Live Coding with the Cloud and a Virtual Agent

Anna Xambó, Gerard Roma, Sam Roig, and Eduard Solaz

Proceedings of the International Conference on New Interfaces for Musical Expression

Abstract:

The use of crowdsourced sounds in live coding can be seen as an example of asynchronous collaboration. It is not uncommon for crowdsourced databases to return unexpected results to the queries submitted by a user. In such a situation, a live coder is likely to require some degree of additional filtering to adapt the results to her/his musical intentions. We refer to these context-dependent decisions as situated musical actions. Here, we present directions for designing a customisable virtual companion to help live coders in their practice. In particular, we introduce a machine learning (ML) model that, based on a set of examples provided by the live coder, filters the crowdsourced sounds retrieved from the Freesound online database at performance time. We evaluated a first illustrative model using objective and subjective measures. We tested a more generic live coding framework in two performances and two workshops, where several ML models were trained and used. We discuss the promising results for ML in education, live coding practices and the design of future NIMEs.

Citation:

Anna Xambó, Gerard Roma, Sam Roig, and Eduard Solaz. 2021. Live Coding with the Cloud and a Virtual Agent. Proceedings of the International Conference on New Interfaces for Musical Expression. DOI: 10.21428/92fbeb44.64c9f217

BibTeX Entry:

@inproceedings{NIME21_40,
 abstract = {The use of crowdsourced sounds in live coding can be seen as an example of asynchronous collaboration. It is not uncommon for crowdsourced databases to return unexpected results to the queries submitted by a user. In such a situation, a live coder is likely to require some degree of additional filtering to adapt the results to her/his musical intentions. We refer to these context-dependent decisions as situated musical actions. Here, we present directions for designing a customisable virtual companion to help live coders in their practice. In particular, we introduce a machine learning (ML) model that, based on a set of examples provided by the live coder, filters the crowdsourced sounds retrieved from the Freesound online database at performance time. We evaluated a first illustrative model using objective and subjective measures. We tested a more generic live coding framework in two performances and two workshops, where several ML models were trained and used. We discuss the promising results for ML in education, live coding practices and the design of future NIMEs.},
 address = {Shanghai, China},
 articleno = {40},
 author = {Xambó, Anna and Roma, Gerard and Roig, Sam and Solaz, Eduard},
 booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
 doi = {10.21428/92fbeb44.64c9f217},
 issn = {2220-4806},
 month = {June},
 presentation-video = {https://youtu.be/F4UoH1hRMoU},
 title = {Live Coding with the Cloud and a Virtual Agent},
 url = {https://nime.pubpub.org/pub/zpdgg2fg},
 year = {2021}
}