Pipeline for recording datasets and running neural networks on the Bela embedded hardware platform

Teresa Pelinski, Rodrigo Diaz, Adan L. Benito Temprano, and Andrew McPherson

Proceedings of the International Conference on New Interfaces for Musical Expression

  • Year: 2023
  • Location: Mexico City, Mexico
  • Track: Papers
  • Pages: 160–166
  • Article Number: 22
  • PDF: http://nime.org/proceedings/2023/nime2023_22.pdf

Abstract:

Deploying deep learning models on embedded devices is an arduous task: oftentimes, there exist no platform-specific instructions, and compilation times can be considerably large due to the limited computational resources available on-device. Moreover, many music-making applications demand real-time inference. Embedded hardware platforms for audio, such as Bela, offer an entry point for beginners into physical audio computing; however, the need for cross-compilation environments and low-level software development tools for deploying embedded deep learning models imposes high entry barriers on non-expert users. We present a pipeline for deploying neural networks in the Bela embedded hardware platform. In our pipeline, we include a tool to record a multichannel dataset of sensor signals. Additionally, we provide a dockerised cross-compilation environment for faster compilation. With this pipeline, we aim to provide a template for programmers and makers to prototype and experiment with neural networks for real-time embedded musical applications.

Citation:

Teresa Pelinski, Rodrigo Diaz, Adan L. Benito Temprano, and Andrew McPherson. 2023. Pipeline for recording datasets and running neural networks on the Bela embedded hardware platform. In Proceedings of the International Conference on New Interfaces for Musical Expression, 160–166.

BibTeX Entry:

@inproceedings{nime2023_22,
 abstract = {Deploying deep learning models on embedded devices is an arduous task: oftentimes, there exist no platform-specific instructions, and compilation times can be considerably large due to the limited computational resources available on-device. Moreover, many music-making applications demand real-time inference. Embedded hardware platforms for audio, such as Bela, offer an entry point for beginners into physical audio computing; however, the need for cross-compilation environments and low-level software development tools for deploying embedded deep learning models imposes high entry barriers on non-expert users.

We present a pipeline for deploying neural networks in the Bela embedded hardware platform. In our pipeline, we include a tool to record a multichannel dataset of sensor signals. Additionally, we provide a dockerised cross-compilation environment for faster compilation. With this pipeline, we aim to provide a template for programmers and makers to prototype and experiment with neural networks for real-time embedded musical applications.},
 address = {Mexico City, Mexico},
 articleno = {22},
 author = {Teresa Pelinski and Rodrigo Diaz and Adan L. Benito Temprano and Andrew McPherson},
 booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
 editor = {Miguel Ortiz and Adnan Marquez-Borbon},
 issn = {2220-4806},
 month = {May},
 numpages = {7},
 pages = {160--166},
 title = {Pipeline for recording datasets and running neural networks on the Bela embedded hardware platform},
 track = {Papers},
 url = {http://nime.org/proceedings/2023/nime2023_22.pdf},
 year = {2023}
}