Al-terity: Non-Rigid Musical Instrument with Artificial Intelligence Applied to Real-Time Audio Synthesis

Koray Tahiroğlu, Miranda Kastemaa, and Oskar Koli

Proceedings of the International Conference on New Interfaces for Musical Expression

Abstract:

A deformable musical instrument can take numerous distinct shapes through its non-rigid features. Building an audio synthesis module for such interface behaviour can be challenging. In this paper, we present Al-terity, a non-rigid musical instrument that comprises a deep learning model with a generative adversarial network (GAN) architecture and uses it to generate audio samples for real-time audio synthesis. The particular deep learning model we use for this instrument was trained on an existing data set for purposes of further experimentation. The main benefits of the model are its ability to reproduce the realistic range of timbres in the training data set and its ability to generate new audio samples in real time, in the moment of playing, with the characteristics of sounds the performer has never heard before. We argue that these advanced intelligence features at the audio synthesis level could allow us to explore performing music with particular response features that define the instrument's digital idiomaticity, and to reinvent the instrument in the act of music performance.
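As a concrete illustration of the synthesis approach the abstract describes, the sketch below (not from the paper, and not the authors' implementation) shows how a GAN generator can be sampled and interpolated in latent space to produce audio buffers on demand. ToyGenerator, LATENT_DIM, and the buffer length are hypothetical placeholders standing in for a pretrained generator such as the one the instrument uses; a real deployment would load trained weights and stream each buffer to an audio callback.

    import numpy as np
    import torch
    import torch.nn as nn

    LATENT_DIM = 128     # hypothetical latent size
    NUM_SAMPLES = 16000  # one second of mono audio at 16 kHz (assumed)

    class ToyGenerator(nn.Module):
        """Stand-in for a pretrained GAN generator: maps a latent
        vector to a fixed-length audio buffer in [-1, 1]."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(LATENT_DIM, 512),
                nn.Tanh(),
                nn.Linear(512, NUM_SAMPLES),
                nn.Tanh(),  # bound output like normalised audio
            )

        def forward(self, z):
            return self.net(z)

    generator = ToyGenerator().eval()  # would carry trained weights in practice

    @torch.no_grad()
    def generate(z: torch.Tensor) -> np.ndarray:
        """One latent vector -> one mono float32 audio buffer."""
        return generator(z.unsqueeze(0)).squeeze(0).numpy()

    def lerp(z_a: torch.Tensor, z_b: torch.Tensor, t: float) -> torch.Tensor:
        """Linear interpolation between two latent points, i.e. a
        sweep between two timbres of the trained sound space."""
        return (1.0 - t) * z_a + t * z_b

    # Draw two random timbres and generate buffers along the path
    # between them, standing in for generation "in the moment of playing".
    z_a, z_b = torch.randn(LATENT_DIM), torch.randn(LATENT_DIM)
    for t in np.linspace(0.0, 1.0, num=5):
        buffer = generate(lerp(z_a, z_b, float(t)))
        # a live instrument would hand `buffer` to its audio callback here

Latent interpolation is one common way GAN-based synthesizers navigate timbre; whether Al-terity maps the interface's deformation to latent coordinates in exactly this way is not specified in the abstract.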

Citation:

Koray Tahiroğlu, Miranda Kastemaa, and Oskar Koli. 2020. Al-terity: Non-Rigid Musical Instrument with Artificial Intelligence Applied to Real-Time Audio Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression. DOI: 10.5281/zenodo.4813402

BibTeX Entry:

@inproceedings{NIME20_65,
  abstract = {A deformable musical instrument can take numerous distinct shapes through its non-rigid features. Building an audio synthesis module for such interface behaviour can be challenging. In this paper, we present Al-terity, a non-rigid musical instrument that comprises a deep learning model with a generative adversarial network (GAN) architecture and uses it to generate audio samples for real-time audio synthesis. The particular deep learning model we use for this instrument was trained on an existing data set for purposes of further experimentation. The main benefits of the model are its ability to reproduce the realistic range of timbres in the training data set and its ability to generate new audio samples in real time, in the moment of playing, with the characteristics of sounds the performer has never heard before. We argue that these advanced intelligence features at the audio synthesis level could allow us to explore performing music with particular response features that define the instrument's digital idiomaticity, and to reinvent the instrument in the act of music performance.},
  address = {Birmingham, UK},
  author = {Tahiroğlu, Koray and Kastemaa, Miranda and Koli, Oskar},
  booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
  doi = {10.5281/zenodo.4813402},
  editor = {Romain Michon and Franziska Schroeder},
  issn = {2220-4806},
  month = {July},
  pages = {337--342},
  presentation-video = {https://youtu.be/giYxFovZAvQ},
  publisher = {Birmingham City University},
  title = {Al-terity: Non-Rigid Musical Instrument with Artificial Intelligence Applied to Real-Time Audio Synthesis},
  url = {https://www.nime.org/proceedings/2020/nime2020_paper65.pdf},
  year = {2020}
}