Simple Wearable Device to Reduce Stress When Delivering a Speech without Pre-training

Cited: 1
Authors
Yamane, Takahiro [1 ]
Nakadoi, Yuma [2 ]
Takagi, Mina [2 ]
Morita, Mizuki [1 ,2 ]
Affiliations
[1] Okayama Univ, Grad Sch Interdisciplinary Sci & Engn Hlth Syst, Okayama, Japan
[2] Okayama Univ, Fac Hlth Sci, Med Sch, Okayama, Japan
Keywords
Wearable Electronic Device; Speech; Anxiety; Breathing Exercises; Respiration; PERFORMANCE; RESPONSES;
DOI
10.4258/hir.2021.27.3.231
CLC Classification Number
R-058
Subject Classification Number
Abstract
Objectives: There are many occasions in modern life when people must deliver presentations in front of audiences. Most people feel nervous before and while giving a speech. If there were a simple way to ease their stress, speakers would be able to perform better and their quality of life would improve. Consequently, this study aimed to alleviate the stress of speakers giving speeches by regulating their breathing with a simple device. Methods: To this end, a popular device, the Apple Watch, was chosen. Twenty-eight participants were divided into two groups: a Breathe app group and a non-Breathe app group. The Breathe app group regulated their breathing using the Breathe app installed on an Apple Watch before speech preparation. The non-Breathe app group instead listened to an explanation of the experiment, which prevented them from adopting stress-easing strategies of their own. Participants prepared speeches about themselves and delivered them in front of the researcher. Results: Participants' cardiac activity indicated that the Breathe app exercise eased stress during the exercise itself and during the preparation phase of the speech task. However, stress was not alleviated during speech delivery. Conclusions: Based on the experimental setting and results of this study, together with the findings of previous studies, introducing pre-training sessions, and performing stress-easing tasks before and/or during a speech (such as sending vibrations to participants' wearable devices), might be an effective way to reduce stress when a speech is delivered immediately after the breath-regulating task.
Pages: 231-240 (10 pages)
Related Papers (50 records)
  • [31] Multi-task Pre-training for Lhasa-Tibetan Speech Recognition
    Liu, Yigang
    Zhao, Yue
    Xu, Xiaona
    Xu, Liang
    Zhang, Xubei
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT IX, 2023, 14262 : 78 - 90
  • [32] GUIDED CONTRASTIVE SELF-SUPERVISED PRE-TRAINING FOR AUTOMATIC SPEECH RECOGNITION
    Khare, Aparna
    Wu, Minhua
    Bhati, Saurabhchand
    Droppo, Jasha
    Maas, Roland
    2022 IEEE SPOKEN LANGUAGE TECHNOLOGY WORKSHOP, SLT, 2022, : 174 - 181
  • [33] Unsupervised Green Object Tracker (GOT) without Offline Pre-training
    Zhou, Zhiruo
    You, Suya
    Kuo, C.-C. Jay
    APSIPA TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING, 2024, 13 (01)
  • [34] Ship detector in SAR images based on EfficientDet without pre-training
    Bao Z.
    Zhao X.
    Beijing Hangkong Hangtian Daxue Xuebao/Journal of Beijing University of Aeronautics and Astronautics, 2021, 47 (08): : 1664 - 1672
  • [35] Speech Model Pre-training for End-to-End Spoken Language Understanding
    Lugosch, Loren
    Ravanelli, Mirco
    Ignoto, Patrick
    Tomar, Vikrant Singh
    Bengio, Yoshua
    INTERSPEECH 2019, 2019, : 814 - 818
  • [36] COMPARISON OF SELF-SUPERVISED SPEECH PRE-TRAINING METHODS ON FLEMISH DUTCH
    Poncelet, Jakob
    Hamme, Hugo Van
    2021 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU), 2021, : 169 - 176
  • [37] PROSOSPEECH: ENHANCING PROSODY WITH QUANTIZED VECTOR PRE-TRAINING IN TEXT-TO-SPEECH
    Ren, Yi
    Lei, Ming
    Huang, Zhiying
    Zhang, Shiliang
    Chen, Qian
    Yan, Zhijie
    Zhao, Zhou
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 7577 - 7581
  • [38] Pre-Training of DNN-Based Speech Synthesis Based on Bidirectional Conversion between Text and Speech
    Sone, Kentaro
    Nakashika, Toru
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2019, E102D (08) : 1546 - 1553
  • [39] DenseCL: A simple framework for self-supervised dense visual pre-training
    Wang, Xinlong
    Zhang, Rufeng
    Shen, Chunhua
    Kong, Tao
    VISUAL INFORMATICS, 2023, 7 (01) : 30 - 40
  • [40] SimMTM: A Simple Pre-Training Framework for Masked Time-Series Modeling
    Dong, Jiaxiang
    Wu, Haixu
    Zhang, Haoran
    Zhang, Li
    Wang, Jianmin
    Long, Mingsheng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,