LOLOL: Laugh Out Loud On Laptop
Jieun Oh and Ge Wang
Proceedings of the International Conference on New Interfaces for Musical Expression
- Year: 2013
- Location: Daejeon, Republic of Korea
- Pages: 190–195
- Keywords: laughter, vocalization, synthesis model, real-time controller, interface for musical expression
- DOI: 10.5281/zenodo.1178626
Abstract:
Significant progress in the domains of speech- and singing-synthesis has enhanced the communicative potential of machines. To make computers more vocally expressive, however, we need a deeper understanding of how nonlinguistic social signals are patterned and perceived. In this paper, we focus on laughter expressions: how a phrase of vocalized notes that we call "laughter" may be modeled and performed to implicate nuanced meaning imbued in the acoustic signal. In designing our model, we emphasize (1) using high-level descriptors as control parameters, (2) enabling real-time performable laughter, and (3) prioritizing expressiveness over realism. We present an interactive system implemented in ChucK that allows users to systematically play with the musical ingredients of laughter. A crowdsourced study on the perception of synthesized laughter showed that our model is capable of generating a range of laughter types, suggesting an exciting potential for expressive laughter synthesis.
Citation:
Jieun Oh and Ge Wang. 2013. LOLOL: Laugh Out Loud On Laptop. Proceedings of the International Conference on New Interfaces for Musical Expression. DOI: 10.5281/zenodo.1178626
BibTeX Entry:
@inproceedings{Oh2013,
  abstract = {Significant progress in the domains of speech- and singing-synthesis has enhanced the communicative potential of machines. To make computers more vocally expressive, however, we need a deeper understanding of how nonlinguistic social signals are patterned and perceived. In this paper, we focus on laughter expressions: how a phrase of vocalized notes that we call ``laughter'' may be modeled and performed to implicate nuanced meaning imbued in the acoustic signal. In designing our model, we emphasize (1) using high-level descriptors as control parameters, (2) enabling real-time performable laughter, and (3) prioritizing expressiveness over realism. We present an interactive system implemented in ChucK that allows users to systematically play with the musical ingredients of laughter. A crowdsourced study on the perception of synthesized laughter showed that our model is capable of generating a range of laughter types, suggesting an exciting potential for expressive laughter synthesis.},
  address = {Daejeon, Republic of Korea},
  author = {Jieun Oh and Ge Wang},
  booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
  doi = {10.5281/zenodo.1178626},
  issn = {2220-4806},
  keywords = {laughter, vocalization, synthesis model, real-time controller, interface for musical expression},
  month = {May},
  pages = {190--195},
  publisher = {Graduate School of Culture Technology, KAIST},
  title = {LOLOL: Laugh Out Loud On Laptop},
  url = {http://www.nime.org/proceedings/2013/nime2013_86.pdf},
  year = {2013}
}