Sonifyd:Colormatrics

Woohun Joo

Proceedings of the International Conference on New Interfaces for Musical Expression

Abstract

Based on my real-time line-based sonification engine developed in 2016, Colormatrics was newly created as a set of three sonification-driven audiovisual works designed specifically for the Cyclorama of the Cube at the Moss Arts Center, Virginia Tech, in November 2021. Colormatrics converts generative visual patterns into sound and, conversely, visualizes the sonification process in real time. First, Colormatrics_01 generates additive-synthesis-based ambient or pad-like sounds that fit the immersive atmosphere of the space. Second, Colormatrics_02 creates additive-synthesis-based beat music following self-changing graphic patterns that grow more complex. These patterns comprise 18 steps in total and loop back to the beginning after the last step. Lastly, Colormatrics_03 creates timbre by additive synthesis and spatializes sound, with the vertical and horizontal positions of the graphics mapped onto the 128-speaker system in the Cube. The number of sine waves used for additive synthesis is flexible, depending on the image size, which also determines the length of the scan line. In the color-sound mapping for each pixel, hue values are tuned to musical pitch, saturation values detune the original pitch by up to -100 cents, and brightness values determine amplitude levels. The visual scores generated in Processing are transmitted to Max/MSP via Syphon for sonification.
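The per-pixel color-to-sound mapping described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the author's implementation: the base frequency, the chromatic quantization of hue, and the one-octave hue range are assumptions (the abstract only states that hue is tuned to pitch, saturation detunes by up to -100 cents, and brightness sets amplitude).

```python
import colorsys

def pixel_to_partial(r, g, b, base_freq=220.0):
    """Map one RGB pixel to a (frequency, amplitude) pair for one
    sine-wave partial of the additive synthesis.

    Assumed for illustration: hue spans one chromatic octave above
    base_freq; the paper does not specify the tuning details.
    """
    # Convert 0-255 RGB to hue/saturation/value in [0, 1].
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

    # Hue tuned to a musical pitch: quantize to a chromatic semitone
    # (assumption -- any scale quantization would fit the description).
    semitone = round(h * 12)

    # Saturation detunes the original pitch by up to -100 cents.
    detune_cents = -100.0 * s

    # Equal-tempered frequency with the detune folded in.
    freq = base_freq * 2.0 ** ((semitone + detune_cents / 100.0) / 12.0)

    # Brightness (value) determines the amplitude level.
    amp = v
    return freq, amp
```

One such partial would be computed per pixel along the scan line, so the image width sets the number of sine waves, matching the abstract's note that the oscillator count follows the image size.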

Citation

Woohun Joo. 2022. Sonifyd:Colormatrics. Proceedings of the International Conference on New Interfaces for Musical Expression. DOI: 10.21428/92fbeb44.e0105d02

BibTeX Entry

@inproceedings{nime2022_installations_3,
 abstract = {Based on my real-time line-based sonification engine developed in 2016, Colormatrics was newly created as a set of three sonification-driven audiovisual works designed specifically for the Cyclorama of the Cube at the Moss Arts Center, Virginia Tech, in November 2021. Colormatrics converts generative visual patterns into sound and, conversely, visualizes the sonification process in real time. First, Colormatrics_01 generates additive-synthesis-based ambient or pad-like sounds that fit the immersive atmosphere of the space. Second, Colormatrics_02 creates additive-synthesis-based beat music following self-changing graphic patterns that grow more complex. These patterns comprise 18 steps in total and loop back to the beginning after the last step. Lastly, Colormatrics_03 creates timbre by additive synthesis and spatializes sound, with the vertical and horizontal positions of the graphics mapped onto the 128-speaker system in the Cube. The number of sine waves used for additive synthesis is flexible, depending on the image size, which also determines the length of the scan line. In the color-sound mapping for each pixel, hue values are tuned to musical pitch, saturation values detune the original pitch by up to -100 cents, and brightness values determine amplitude levels. The visual scores generated in Processing are transmitted to Max/MSP via Syphon for sonification.},
 address = {Auckland, New Zealand},
 articleno = {3},
 author = {Woohun Joo},
 booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
 doi = {10.21428/92fbeb44.e0105d02},
 editor = {Meg Schedel and Paul Dunham},
 issn = {2220-4806},
 month = {jun},
 title = {Sonifyd:Colormatrics},
 track = {Installations},
 url = {https://doi.org/10.21428/92fbeb44.e0105d02},
 year = {2022}
}