Here and Now: Creating Improvisational Dance Movements with a Mixed Reality Mirror

Cited by: 7
Authors
Zhou, Qiushi [1 ]
Grebel, Louise [2 ]
Irlitti, Andrew [1 ]
Minaai, Julie Ann [1 ]
Goncalves, Jorge [1 ]
Velloso, Eduardo [1 ]
Affiliations
[1] Univ Melbourne, Melbourne, Australia
[2] Univ Paris Saclay, Orsay, France
Keywords
dance; mixed reality; augmented reality; mirror; improvisation; body; reflections; thinking
DOI
10.1145/3544548.3580666
CLC number
TP [automation technology; computer technology]
Discipline code
0812
Abstract
This paper explores using mixed reality (MR) mirrors to support improvisational dance making. Motivated by the prevalence of mirrors in dance studios and inspired by Forsythe's Improvisation Technologies, we conducted workshops with 13 dancers and choreographers to inform the design of future MR visualisation and annotation tools for dance. The workshops involved using a prototype MR mirror as a technology probe that reveals the spatial and temporal relationships between the reflected dancing body and its surroundings during improvisation; speed-dating group interviews around future design ideas; and follow-up surveys and extended interviews with a digital media dance artist and a dance educator. Our findings highlight how the MR mirror enriches dancers' temporal and spatial perception, creates multi-layered presence, and affords appropriation by dancers. We also discuss the unique place of MR mirrors in the theoretical context of dance and in the history of movement visualisation, and distil lessons for broader HCI research.
Pages: 16
Related papers
50 records
  • [31] Challenges and Findings in Creating Smart Assistants for Mixed Reality Training Apps
    Chuan, Ching-Hua
    Soriano, Erik
    HHAI 2023: AUGMENTING HUMAN INTELLECT, 2023, 368 : 404 - 406
  • [32] REALational Perspectives: Strategies for Expanding beyond the Here and Now in Mobile Augmented Reality (AR) Art
    Efrat, Liron
    LEONARDO, 2020, 53 (04) : 374 - 379
  • [33] The McNorm library: creating and validating a new library of emotionally expressive whole body dance movements
    Smith, Rebecca A.
    Cross, Emily S.
    PSYCHOLOGICAL RESEARCH-PSYCHOLOGISCHE FORSCHUNG, 2023, 87 (02): : 484 - 508
  • [35] Now Watch Me Dance: Responding to critical observations, constructions, and performances of race on reality television
    Hopson, Mark C.
    CRITICAL STUDIES IN MEDIA COMMUNICATION, 2008, 25 (04): : 441 - 446
  • [36] In the here and now: Enhanced motor corticospinal excitability in novices when watching live compared to video recorded dance
    Jola, Corinne
    Grosbras, Marie-Helene
    COGNITIVE NEUROSCIENCE, 2013, 4 (02) : 90 - 98
  • [37] A prototype dance training support system with motion capture and mixed reality technologies
    Hachimura, K
    Kato, H
    Tamura, H
    RO-MAN 2004: 13TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, PROCEEDINGS, 2004, : 217 - 222
  • [38] Creating and manipulating 3D paths with mixed reality spatial interfaces
    Pospick, Courtney Hutton
    Rosenberg, Evan Suma
    FRONTIERS IN VIRTUAL REALITY, 2023, 4
  • [39] Just talking about art - Creating virtual storytelling experiences in mixed reality
    Spierling, U
    Iurgel, I
    VIRTUAL STORYTELLING, PROCEEDINGS: USING VIRTUAL REALITY TECHNOLOGIES FOR STORYTELLING, 2003, 2897 : 179 - 188
  • [40] Mirrorlabs - creating accessible Digital Twins of robotic production environment with Mixed Reality
    Aschenbrenner, Doris
    Rieder, Jonas S. I.
    van Tol, Danielle
    van Dam, Joris
    Rusak, Zoltan
    Blech, Jan Olaf
    Azangoo, Mohammad
    Salo, Panu
    Kruusamae, Karl
    Masnavi, Houman
    Rybalskii, Igor
    Aabloo, Alvo
    Petry, Marcelo
    Teixeira, Gustavo
    Thiede, Bastian
    Pedrazzoli, Paolo
    Ferrario, Andrea
    Foletti, Michele
    Confalonieri, Matteo
    Bertaggia, Daniele
    Togias, Thodoris
    Makris, Sotiris
    2020 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND VIRTUAL REALITY (AIVR 2020), 2020, : 43 - 48