Performing predefined tasks using the human-robot interaction on speech recognition for an industrial robot

Cited by: 46
Authors
Bingol, Mustafa Can [1 ]
Aydogmus, Omur [1 ]
Institutions
[1] Firat Univ, Technol Fac, Mechatron Engn, Elazig, Turkey
Keywords
Deep neural networks; Intelligent robots; Human-robot interaction; Robotic vision; Turkish speech recognition; SYSTEM; FRAMEWORK; SELECTION;
DOI
10.1016/j.engappai.2020.103903
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
People who are not experts in robotics can easily implement complex robotic applications by using human-robot interaction (HRI). HRI systems require many complex operations such as robot control, image processing, natural speech recognition, and decision making. In this study, interactive control of an industrial robot was performed using speech recognition software for the Turkish language. The collected voice data were converted to text by an automatic speech recognition module based on deep neural networks (DNN). The proposed DNN (p-DNN) was compared with classic classification algorithms. The converted text data were then processed in another module to select the operation to be applied. According to the selected operation, position data were determined using image processing. The determined position information was sent to the robot through a fuzzy controller. The developed HRI system was implemented on a KUKA KR Agilus KR6 R900 sixx robot manipulator. The word accuracy rate of the p-DNN model was measured as 90.37%. The developed image processing module and fuzzy controller worked with minimal errors. The contribution of this study is that an industrial robot can easily be programmed using this software by people who are not experts in robotics and who know Turkish.
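The pipeline described in the abstract (speech recognition, command selection, vision-based position estimation, controller-driven motion) can be sketched as follows. This is a minimal illustrative skeleton, not the authors' implementation: the function names, the keyword-matching command selector, and the proportional step standing in for the fuzzy controller are all assumptions for demonstration.

```python
# Hypothetical sketch of the HRI pipeline stages from the abstract.
# The DNN ASR, task-selection logic, and fuzzy controller in the paper
# are replaced here by simple stand-ins.

def speech_to_text(audio):
    # Stand-in for the DNN-based Turkish ASR module (p-DNN in the paper);
    # here the recognition result is assumed to be available already.
    return audio["transcript"]

def select_process(text):
    # Map recognized Turkish words to predefined robot tasks
    # (illustrative vocabulary, not the paper's command set).
    commands = {"al": "pick", "birak": "place", "dur": "stop"}
    for word, task in commands.items():
        if word in text.lower().split():
            return task
    return "idle"

def target_position(image):
    # Stand-in for the image-processing module: centroid of the
    # nonzero pixels in a binary image.
    pts = [(x, y) for y, row in enumerate(image)
                  for x, v in enumerate(row) if v]
    if not pts:
        return None
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def controller_step(current, target, gain=0.5):
    # Proportional step toward the target position; the paper uses
    # a fuzzy controller for this stage.
    return tuple(c + gain * (t - c) for c, t in zip(current, target))

# Example run through all stages
audio = {"transcript": "robot al"}          # "robot, pick"
task = select_process(speech_to_text(audio))
image = [[0, 0, 0], [0, 1, 1], [0, 1, 1]]   # toy binary image
pos = target_position(image)
step = controller_step((0.0, 0.0), pos)
```

Each stage is independent, which mirrors the modular structure the abstract describes: the ASR, selection, vision, and control modules can be developed and tested separately.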
Pages: 13