Moon via Spirit (2019) for hybrid analogue/digital live electronics

Lauren Hayes

Proceedings of the International Conference on New Interfaces for Musical Expression

Abstract

The theme of NIME 2021 is “Learning to Play, Playing to Learn”, and this performance explores how novel machine learning and audio decomposition tools can be integrated into a NIME that is already well established and has been used in public performance for fourteen years. While much NIME research has focused on novelty within musical human-computer interaction, a significant body of work has demonstrated that much can be learnt from examining not only issues of longevity within NIME, but also how the performance practices of musicians working within the NIME field can provide fruitful sites of knowledge. This piece was created using new tools from the Fluid Corpus Manipulation (FluCoMa) project at the University of Huddersfield. The project studies how creative coders and technologists work with and incorporate new digital tools for deconstructing audio in novel ways: “FluCoMa instigates new musical ways of exploiting ever-growing banks of sound and gestures within the digital composition process, by bringing breakthroughs of signal decomposition DSP and machine learning to the toolset of techno-fluent computer composers, creative coders and digital artists”. In this piece, I explore these tools through an embodied approach to segmentation, slicing, and layering of sound in real time. The piece makes extensive use of pulsar synthesis, a microsound technique, explored here through tangible controllers. Using the FluCoMa toolkit, I was able to incorporate novel machine learning techniques in Max for exploring large corpora of sound files. Specifically, this work uses machine learning, among other relevant AI techniques, to train on my preferences; to sort and select sounds based on audio descriptors; and to concatenate percussion sounds from a large collection of drum machine samples. More broadly, the improvisation instrument that I have been developing and performing with since 2007 relies heavily on machine listening techniques such as transient detection and pitch detection. While the former is linked not only to the instrument’s origins in the hybrid piano [3], but also to its heavily percussive, attack-based aesthetics, the latter has always afforded an element of unpredictability, given the sonic material that I work with. Using FluCoMa’s toolkit, I was able to explore not only transient detection, but also other amplitude-based models. Furthermore, pitch detection delivered ‘confidence’ estimates rather than simply reporting values. In general, my approach to improvisation involves designing mutually affecting networks between my hardware and software. By introducing machine learning, I hope to explore this further so that performance remains less about decision making and control, and more about navigation, vulnerability, and play.
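
As a rough illustration of the descriptor-based workflow the abstract describes (analyse a corpus, describe each sound, then sort and select by nearest neighbours), the following Python sketch may be helpful. It is not the author’s Max patch, which uses the FluCoMa objects directly; the folder layout, descriptor choices, and file names here are assumptions for illustration, and librosa, numpy, and scipy are required.

# Hypothetical sketch of descriptor-based corpus navigation, in the spirit
# of the FluCoMa workflow: analyse -> describe -> query nearest neighbours.
# Assumes a local folder of drum samples (not part of the original work).
import glob
import numpy as np
import librosa
from scipy.spatial import cKDTree

def describe(path, sr=44100):
    """Reduce one sample to a small descriptor vector:
    mean RMS loudness and mean spectral centroid."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    rms = float(np.mean(librosa.feature.rms(y=y)))
    centroid = float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr)))
    return np.array([rms, centroid])

# Build the corpus: one descriptor vector per sample file.
paths = sorted(glob.glob("drum_samples/*.wav"))  # assumed folder layout
corpus = np.stack([describe(p) for p in paths])

# Normalise each descriptor so loudness and centroid are comparable.
mean, std = corpus.mean(axis=0), corpus.std(axis=0) + 1e-9
tree = cKDTree((corpus - mean) / std)

# Query: find the five corpus sounds closest to a target sample,
# e.g. candidates to concatenate or layer in performance.
target = (describe("target_hit.wav") - mean) / std
_, idx = tree.query(target, k=5)
for i in idx:
    print(paths[i])

In a real-time setting such as the one described, this kind of lookup would run against precomputed analyses, with queries driven by live machine listening rather than a fixed target file.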

Citation

Lauren Hayes. 2021. Moon via Spirit (2019) for hybrid analogue/digital live electronics. Proceedings of the International Conference on New Interfaces for Musical Expression. DOI: 10.21428/92fbeb44.f3163249

BibTeX Entry

@inproceedings{nime2021_music_22,
 abstract = {The theme of NIME 2021 is “Learning to Play, Playing to Learn”, and this performance explores how novel machine learning and audio decomposition tools can be integrated into a NIME that is already well established and has been used in public performance for fourteen years. While much NIME research has focused on novelty within musical human-computer interaction, a significant body of work has demonstrated that much can be learnt from examining not only issues of longevity within NIME, but also how the performance practices of musicians working within the NIME field can provide fruitful sites of knowledge. This piece was created using new tools from the Fluid Corpus Manipulation (FluCoMa) project at the University of Huddersfield. The project studies how creative coders and technologists work with and incorporate new digital tools for deconstructing audio in novel ways: “FluCoMa instigates new musical ways of exploiting ever-growing banks of sound and gestures within the digital composition process, by bringing breakthroughs of signal decomposition DSP and machine learning to the toolset of techno-fluent computer composers, creative coders and digital artists”. In this piece, I explore these tools through an embodied approach to segmentation, slicing, and layering of sound in real time. The piece makes extensive use of pulsar synthesis, a microsound technique, explored here through tangible controllers. Using the FluCoMa toolkit, I was able to incorporate novel machine learning techniques in Max for exploring large corpora of sound files. Specifically, this work uses machine learning, among other relevant AI techniques, to train on my preferences; to sort and select sounds based on audio descriptors; and to concatenate percussion sounds from a large collection of drum machine samples. More broadly, the improvisation instrument that I have been developing and performing with since 2007 relies heavily on machine listening techniques such as transient detection and pitch detection. While the former is linked not only to the instrument’s origins in the hybrid piano [3], but also to its heavily percussive, attack-based aesthetics, the latter has always afforded an element of unpredictability, given the sonic material that I work with. Using FluCoMa’s toolkit, I was able to explore not only transient detection, but also other amplitude-based models. Furthermore, pitch detection delivered ‘confidence’ estimates rather than simply reporting values. In general, my approach to improvisation involves designing mutually affecting networks between my hardware and software. By introducing machine learning, I hope to explore this further so that performance remains less about decision making and control, and more about navigation, vulnerability, and play.},
 address = {Shanghai, China},
 articleno = {22},
 author = {Lauren Hayes},
 booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
 doi = {10.21428/92fbeb44.f3163249},
 editor = {Eric Parren and Wei Chen},
 issn = {2220-4806},
 month = {June},
 title = {Moon via Spirit (2019) for hybrid analogue/digital live electronics},
 track = {Music},
 url = {https://doi.org/10.21428/92fbeb44.f3163249},
 year = {2021}
}