AI-terity 2.0: An Autonomous NIME Featuring GANSpaceSynth Deep Learning Model

Koray Tahiroğlu, Miranda Kastemaa, and Oskar Koli

Proceedings of the International Conference on New Interfaces for Musical Expression

Abstract:

In this paper we present the recent developments in the AI-terity instrument. AI-terity is a deformable, non-rigid musical instrument that comprises a particular artificial intelligence (AI) method for generating audio samples for real-time audio synthesis. As an improvement, we developed the control interface structure with additional sensor hardware. In addition, we implemented a new hybrid deep learning architecture, GANSpaceSynth, in which we applied the GANSpace method to the GANSynth model. Following the deep learning model improvement, we developed new autonomous features for the instrument that aim at keeping the musician in an active and uncertain state of exploration. Through these new features, the instrument enables more accurate control over the GAN latent space. Further, we intend to investigate the current developments through a musical composition that idiomatically reflects the new autonomous features of the AI-terity instrument. We argue that the present technology of AI is suitable for enabling alternative autonomous features in the audio domain for the creative practices of musicians.
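The core of the GANSpace method referenced above is finding interpretable directions in a GAN's latent space via principal component analysis (PCA), so that moving a latent vector along a principal axis gives controlled variation in the output. The following is a minimal NumPy sketch of that idea only, not the authors' GANSpaceSynth implementation; the function names, latent dimensionality, and use of raw latent samples (rather than intermediate generator activations) are illustrative assumptions:

```python
import numpy as np

def principal_directions(latents, n_components=3):
    """PCA over sampled latent vectors via SVD.

    Returns the top principal axes as unit-norm row vectors,
    analogous to the edit directions GANSpace exposes.
    """
    centered = latents - latents.mean(axis=0)
    # Rows of vt are the principal directions, ordered by variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_components]

def move_along(z, direction, amount):
    """Shift a latent vector along one principal direction."""
    return z + amount * direction

# Hypothetical setup: 512 random latents of dimension 256,
# standing in for samples drawn for a GANSynth-style generator.
rng = np.random.default_rng(0)
latents = rng.normal(size=(512, 256))
dirs = principal_directions(latents)

z = rng.normal(size=256)          # a starting point in latent space
z_edit = move_along(z, dirs[0], 2.0)  # steer along the first component
```

In the instrument, such directions could map sensor input to movements in latent space, which is one way the "more accurate control over the GAN latent space" described in the abstract can be realized.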

Citation:

Koray Tahiroğlu, Miranda Kastemaa, and Oskar Koli. 2021. AI-terity 2.0: An Autonomous NIME Featuring GANSpaceSynth Deep Learning Model. Proceedings of the International Conference on New Interfaces for Musical Expression. DOI: 10.21428/92fbeb44.3d0e9e12

BibTeX Entry:

@inproceedings{NIME21_80,
  abstract = {In this paper we present the recent developments in the AI-terity instrument. AI-terity is a deformable, non-rigid musical instrument that comprises a particular artificial intelligence (AI) method for generating audio samples for real-time audio synthesis. As an improvement, we developed the control interface structure with additional sensor hardware. In addition, we implemented a new hybrid deep learning architecture, GANSpaceSynth, in which we applied the GANSpace method to the GANSynth model. Following the deep learning model improvement, we developed new autonomous features for the instrument that aim at keeping the musician in an active and uncertain state of exploration. Through these new features, the instrument enables more accurate control over the GAN latent space. Further, we intend to investigate the current developments through a musical composition that idiomatically reflects the new autonomous features of the AI-terity instrument. We argue that the present technology of AI is suitable for enabling alternative autonomous features in the audio domain for the creative practices of musicians.},
  address = {Shanghai, China},
  articleno = {80},
  author = {Tahiroğlu, Koray and Kastemaa, Miranda and Koli, Oskar},
  booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
  doi = {10.21428/92fbeb44.3d0e9e12},
  issn = {2220-4806},
  month = {June},
  presentation-video = {https://youtu.be/WVAIPwI-3P8},
  title = {AI-terity 2.0: An Autonomous NIME Featuring GANSpaceSynth Deep Learning Model},
  url = {https://nime.pubpub.org/pub/9zu49nu5},
  year = {2021}
}