Neuron-modeled Audio Synthesis: Nonlinear Sound and Control
Jeff Snyder, Aatish Bhatia, and Michael R Mulshine
Proceedings of the International Conference on New Interfaces for Musical Expression
- Year: 2018
- Location: Blacksburg, Virginia, USA
- Pages: 394–397
- DOI: 10.5281/zenodo.1302639
Abstract:
This paper describes a project to create a software instrument using a biological model of neuron behavior for audio synthesis. The translation of the model to a usable audio synthesis process is described, and a piece for laptop orchestra created using the instrument is discussed.
Citation:
Jeff Snyder, Aatish Bhatia, and Michael R Mulshine. 2018. Neuron-modeled Audio Synthesis: Nonlinear Sound and Control. In Proceedings of the International Conference on New Interfaces for Musical Expression, 394–397. DOI: 10.5281/zenodo.1302639
BibTeX Entry:
@inproceedings{Snyderb2018,
  abstract = {This paper describes a project to create a software instrument using a biological model of neuron behavior for audio synthesis. The translation of the model to a usable audio synthesis process is described, and a piece for laptop orchestra created using the instrument is discussed.},
  address = {Blacksburg, Virginia, USA},
  author = {Jeff Snyder and Aatish Bhatia and Michael R Mulshine},
  booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
  doi = {10.5281/zenodo.1302639},
  editor = {Luke Dahl and Douglas Bowman and Thomas Martin},
  isbn = {978-1-949373-99-8},
  issn = {2220-4806},
  month = {June},
  pages = {394--397},
  publisher = {Virginia Tech},
  title = {Neuron-modeled Audio Synthesis: Nonlinear Sound and Control},
  url = {http://www.nime.org/proceedings/2018/nime2018_paper0088.pdf},
  year = {2018}
}