Computational Modeling of Mind and Music
Cited by: 0
Authors:
Verschure, Paul F. M. J. [1]
Manzolli, Jonatas [2]
Affiliations:
[1] Univ Pompeu Fabra, Ctr Autonomous Syst & Neurorobot, SPECS, Barcelona 08018, Spain
[2] Univ Estadual Campinas, Inst Arts, Dept Mus, NIC, BR-13087500 Campinas, SP, Brazil
Keywords:
DOI:
Not available
Chinese Library Classification:
B84 [Psychology];
Subject Classification Codes:
04; 0402;
Abstract:
Music can be defined as organized sound material in time. This chapter explores the links between the development of ideas about music and those driven by the concept of the embodied mind. Music composition has evolved from symbolically notated pitches to converge onto the expression of sound filigrees driven by new techniques of instrumental practice and composition associated with the development of new interfaces for musical expression. The notion of the organization of sound material in time adds new dimensions to musical information and its symbolic representations. To illustrate our point of view, a number of music systems are presented that have been realized as exhibitions and performances. We consider these synthetic music compositions to be experiments in situated aesthetics. These examples follow the philosophy that a theory of mind, including one of creativity and aesthetics, will be critically dependent on its realization as a real-world artifact, because only in this way can such a theory of an open and interactive system as the mind be fully validated. Examples considered include RoBoser, a real-world composition system that was developed in the context of a theory of mind and brain called distributed adaptive control (DAC), and "ADA: intelligent space," where the process of music expression was transported from a robot arena to a large-scale interactive space that established communication with its visitors through multi-modal composition. Subsequently, re(per)curso, a mixed reality hybrid human-machine performance, is analyzed for ways of integrating the development of music and narrative. Finally, the chapter concludes with an examination of how multimodal control structures driven by brain-computer interfaces (BCI) can give rise to a brain orchestra that controls complex sound production without the use of physical interfaces. All these examples show that the production of sound material in time that is appreciated by human observers does not need to depend on the symbolically notated pitches of a single human composer but can emerge from the interaction between machines, driven by simple rules and their environment.
Pages: 393-416
Page count: 24