Simulated EEG-Driven Audio Information Mapping Using Inner Hair-Cell Model and Spiking Neural Network

Pasquale Mainolfi

Proceedings of the International Conference on New Interfaces for Musical Expression

Abstract

This study presents a framework for mapping audio information into simulated neural signals and dynamic control maps. The system is built on a biologically inspired architecture that traces the auditory pathway from the cochlea to the auditory cortex, transforming acoustic features into neural representations by integrating Meddis's Inner Hair-Cell (IHC) model with a spiking neural network (SNN). The mapping process occurs in three phases: first, the IHC model converts sound waves into neural impulses, simulating hair-cell mechano-electrical transduction. These impulses are then encoded into spatio-temporal patterns by an Izhikevich-based neural network, where spike-timing-dependent plasticity (STDP) mechanisms allow activation structures to emerge that reflect the complexity of the acoustic information. Finally, these patterns are mapped into both EEG-like signals and continuous control maps for real-time interactive performance control. This approach bridges neural dynamics and signal processing, offering a new paradigm for representing sound information. The generated control maps provide a natural interface between acoustic and parametric domains, enabling applications from generative sound design to adaptive performance control, where neuromorphological sound translation opens new forms of audio-driven interaction.
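The three-phase pipeline lends itself to a compact illustration. The Python/NumPy sketch below is a minimal, hypothetical rendering of the abstract's stages, not the paper's implementation: the hair-cell stage is approximated by half-wave rectification and a one-pole low-pass rather than the full Meddis transmitter-pool model, a single regular-spiking Izhikevich neuron stands in for the STDP-trained network, and the EEG-like and control signals are produced by kernel-smoothing the spike train. All function names, rates, and parameter values are illustrative assumptions.

import numpy as np

FS = 16000   # audio sample rate in Hz (assumed)
DT = 1.0     # SNN integration step in ms

def ihc_drive(audio, fs=FS, cutoff=1000.0):
    # Crude stand-in for IHC transduction: half-wave rectification
    # plus a one-pole low-pass approximates the receptor-potential
    # envelope (the Meddis model additionally simulates transmitter-
    # pool dynamics, omitted here).
    rect = np.maximum(audio, 0.0)
    alpha = np.exp(-2.0 * np.pi * cutoff / fs)
    env, acc = np.empty_like(rect), 0.0
    for i, x in enumerate(rect):
        acc = (1.0 - alpha) * x + alpha * acc
        env[i] = acc
    return env[:: int(fs * DT / 1000.0)]   # one sample per DT ms

def izhikevich_spikes(I, a=0.02, b=0.2, c=-65.0, d=8.0, gain=80.0):
    # One regular-spiking Izhikevich neuron; in the full system an
    # STDP-trained population would replace this single unit.
    v, u = c, b * c
    spikes = np.zeros(len(I), dtype=bool)
    for t, drive in enumerate(I):
        v += DT * (0.04 * v * v + 5.0 * v + 140.0 - u + gain * drive)
        u += DT * a * (b * v - u)
        if v >= 30.0:            # spike: reset v, bump recovery u
            spikes[t] = True
            v, u = c, u + d
    return spikes

def eeg_like(spikes, tau=30.0):
    # Alpha-kernel smoothing of the spike train yields a slow,
    # EEG-like field signal.
    t = np.arange(0.0, 6.0 * tau, DT)
    kernel = (t / tau) * np.exp(1.0 - t / tau)
    return np.convolve(spikes.astype(float), kernel)[: len(spikes)]

def control_map(spikes, win_ms=100.0):
    # Windowed spike rate normalised to [0, 1]: a continuous control
    # stream to map onto synthesis or performance parameters.
    win = int(win_ms / DT)
    rate = np.convolve(spikes.astype(float), np.ones(win) / win)
    rate = rate[: len(spikes)]
    return rate / (rate.max() + 1e-9)

if __name__ == "__main__":
    t = np.arange(0.0, 1.0, 1.0 / FS)
    tone = 0.8 * np.sin(2.0 * np.pi * 220.0 * t)   # 1 s test tone
    spikes = izhikevich_spikes(ihc_drive(tone))
    print(spikes.sum(), "spikes;",
          "eeg samples:", len(eeg_like(spikes)),
          "ctrl samples:", len(control_map(spikes)))

Running the script on a one-second 220 Hz tone prints the spike count and the lengths of the derived signals; in a real-time setting the same stages would run block-wise on the live audio input.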

Citation

Pasquale Mainolfi. 2025. Simulated EEG-Driven Audio Information Mapping Using Inner Hair-Cell Model and Spiking Neural Network. Proceedings of the International Conference on New Interfaces for Musical Expression. DOI: 10.5281/zenodo.15698778

BibTeX Entry

@inproceedings{nime2025_3,
 abstract = {This study presents a framework for mapping audio information into simulated neural signals and dynamic control maps. The system is built on a biologically inspired architecture that traces the auditory pathway from the cochlea to the auditory cortex, transforming acoustic features into neural representations by integrating Meddis's Inner Hair-Cell (IHC) model with a spiking neural network (SNN). The mapping process occurs in three phases: first, the IHC model converts sound waves into neural impulses, simulating hair-cell mechano-electrical transduction. These impulses are then encoded into spatio-temporal patterns by an Izhikevich-based neural network, where spike-timing-dependent plasticity (STDP) mechanisms allow activation structures to emerge that reflect the complexity of the acoustic information. Finally, these patterns are mapped into both EEG-like signals and continuous control maps for real-time interactive performance control. This approach bridges neural dynamics and signal processing, offering a new paradigm for representing sound information. The generated control maps provide a natural interface between acoustic and parametric domains, enabling applications from generative sound design to adaptive performance control, where neuromorphological sound translation opens new forms of audio-driven interaction.},
 address = {Canberra, Australia},
 articleno = {3},
 author = {Pasquale Mainolfi},
 booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
 doi = {10.5281/zenodo.15698778},
 editor = {Doga Cavdir and Florent Berthaut},
 issn = {2220-4806},
 month = {June},
 numpages = {9},
 pages = {17--25},
 title = {Simulated EEG-Driven Audio Information Mapping Using Inner Hair-Cell Model and Spiking Neural Network},
 track = {Paper},
 url = {http://nime.org/proceedings/2025/nime2025_3.pdf},
 year = {2025}
}