Melia: An Expressive Harmonizer at the Limits of AI

Matthew Caren and Joshua Bennett

Proceedings of the International Conference on New Interfaces for Musical Expression

Abstract

We present Melia, a digital harmonizer instrument that explores how common failure modes of machine learning and artificial intelligence (ML/AI) systems can be used in expressive and musical ways. The instrument is anchored by an audio-to-audio neural network trained on a hand-curated dataset to perform pitch-shifting and dynamic filtering. Biased training data and poor out-of-distribution generalization are deliberately leveraged as musical devices and sources of instrument-defining idiosyncrasies. Melia features a custom hardware interface with a MIDI keyboard that polyphonically allocates instances of the model to harmonize live audio input, as well as controls that manipulate model parameters and various audio effects in real-time. This paper presents an overview of related work, the instrument itself, and a discussion of how audio-to-audio AI models might fit into the long-standing tradition of musicians, artists, and instrument-makers finding inspiration in a medium's shortcomings.
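To illustrate the polyphonic allocation described above, the following is a minimal sketch (not the authors' code) of the core idea: each held MIDI key allocates an instance of an audio-to-audio model that harmonizes the live input toward that key, and the outputs of all active voices are mixed. The HarmonizerModel class and its interface are hypothetical placeholders for the paper's neural network.

import numpy as np


class HarmonizerModel:
    """Stand-in for the neural audio-to-audio model (assumption)."""

    def __init__(self, target_midi_note: int):
        self.target = target_midi_note

    def process(self, block: np.ndarray) -> np.ndarray:
        # A real model would pitch-shift and filter the block toward
        # self.target; here the audio is passed through as a placeholder.
        return block


class PolyphonicHarmonizer:
    """Allocates one model instance per held MIDI note and mixes their outputs."""

    def __init__(self):
        self.voices: dict[int, HarmonizerModel] = {}

    def note_on(self, note: int):
        self.voices.setdefault(note, HarmonizerModel(note))

    def note_off(self, note: int):
        self.voices.pop(note, None)

    def process(self, block: np.ndarray) -> np.ndarray:
        if not self.voices:
            return block
        # Sum each voice's harmonized output; a real instrument would also
        # apply per-voice gain and the global effects mentioned in the paper.
        return sum(v.process(block) for v in self.voices.values()) / len(self.voices)


# Usage: hold a C-major triad over a 256-sample input block.
harm = PolyphonicHarmonizer()
for n in (60, 64, 67):
    harm.note_on(n)
out = harm.process(np.zeros(256, dtype=np.float32))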

Citation

Matthew Caren and Joshua Bennett. 2025. Melia: An Expressive Harmonizer at the Limits of AI. Proceedings of the International Conference on New Interfaces for Musical Expression. DOI: 10.5281/zenodo.15698990

BibTeX Entry

@article{nime2025_93,
 abstract = {We present Melia, a digital harmonizer instrument that explores how common failure modes of machine learning and artificial intelligence (ML/AI) systems can be used in expressive and musical ways. The instrument is anchored by an audio-to-audio neural network trained on a hand-curated dataset to perform pitch-shifting and dynamic filtering. Biased training data and poor out-of-distribution generalization are deliberately leveraged as musical devices and sources of instrument-defining idiosyncrasies. Melia features a custom hardware interface with a MIDI keyboard that polyphonically allocates instances of the model to harmonize live audio input, as well as controls that manipulate model parameters and various audio effects in real-time. This paper presents an overview of related work, the instrument itself, and a discussion of how audio-to-audio AI models might fit into the long-standing tradition of musicians, artists, and instrument-makers finding inspiration in a medium's shortcomings.},
 address = {Canberra, Australia},
 articleno = {93},
 author = {Matthew Caren and Joshua Bennett},
 booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
 doi = {10.5281/zenodo.15698990},
 editor = {Doga Cavdir and Florent Berthaut},
 issn = {2220-4806},
 month = {June},
 numpages = {3},
 pages = {632--634},
 title = {Melia: An Expressive Harmonizer at the Limits of AI},
 track = {Paper},
 url = {http://nime.org/proceedings/2025/nime2025_93.pdf},
 year = {2025}
}