Towards Robust Human-Robot Collaborative Manufacturing: Multimodal Fusion

Cited: 54
Authors
Liu, Hongyi [1 ]
Fang, Tongtong [2 ]
Zhou, Tianyu [2 ]
Wang, Lihui [1 ]
Affiliations
[1] KTH Royal Inst Technol, Dept Prod Engn, SE-10044 Stockholm, Sweden
[2] KTH Royal Inst Technol, Dept Software & Comp Syst, SE-10044 Stockholm, Sweden
Source
IEEE ACCESS, 2018, Vol. 6
Funding
EU Horizon 2020
Keywords
Deep learning; human-robot collaboration; multimodal fusion; intelligent manufacturing systems; NEURAL-NETWORKS; RECOGNITION; INTERFACE;
DOI
10.1109/ACCESS.2018.2884793
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Intuitive and robust multimodal robot control is key to human-robot collaboration (HRC) in manufacturing systems. Multimodal robot control methods have been introduced in previous studies; they allow human operators to control robots intuitively without programming brand-specific code. However, most multimodal robot control methods are unreliable because feature representations are not shared across modalities. To address this problem, this paper proposes a deep learning-based multimodal fusion architecture for robust multimodal HRC manufacturing systems. The proposed architecture covers three modalities: speech command, hand motion, and body motion. Three unimodal models are first trained to extract features, which are then fused for representation sharing. Experiments show that the proposed multimodal fusion model outperforms the three unimodal models. These results indicate great potential for applying the proposed multimodal fusion architecture to robust HRC manufacturing systems.
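The abstract describes three unimodal feature extractors (speech command, hand motion, body motion) whose outputs are fused for representation sharing before command classification. The PyTorch sketch below illustrates one way such a fusion classifier could be structured; the encoder layouts, input dimensions (40-d speech features, 63-d hand pose, 75-d body pose), and the number of command classes are illustrative assumptions, not details taken from the paper.

    # Minimal illustrative sketch of a feature-level fusion model in PyTorch.
    # Layer sizes, input dimensions, and the number of command classes are
    # assumptions for illustration only; they are not taken from the paper.
    import torch
    import torch.nn as nn

    class UnimodalEncoder(nn.Module):
        """Encodes one modality (speech, hand motion, or body motion) into a feature vector."""
        def __init__(self, in_dim: int, feat_dim: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 128),
                nn.ReLU(),
                nn.Linear(128, feat_dim),
                nn.ReLU(),
            )

        def forward(self, x):
            return self.net(x)

    class MultimodalFusion(nn.Module):
        """Concatenates per-modality features and predicts a robot command class."""
        def __init__(self, speech_dim=40, hand_dim=63, body_dim=75, feat_dim=64, n_commands=8):
            super().__init__()
            self.speech_enc = UnimodalEncoder(speech_dim, feat_dim)
            self.hand_enc = UnimodalEncoder(hand_dim, feat_dim)
            self.body_enc = UnimodalEncoder(body_dim, feat_dim)
            self.fusion = nn.Sequential(
                nn.Linear(3 * feat_dim, 128),
                nn.ReLU(),
                nn.Linear(128, n_commands),
            )

        def forward(self, speech, hand, body):
            # Fuse unimodal features by concatenation so later layers share representations.
            feats = torch.cat(
                [self.speech_enc(speech), self.hand_enc(hand), self.body_enc(body)], dim=-1
            )
            return self.fusion(feats)  # logits over command classes

    # Example forward pass with random data (batch of 4 samples).
    model = MultimodalFusion()
    logits = model(torch.randn(4, 40), torch.randn(4, 63), torch.randn(4, 75))
    print(logits.shape)  # torch.Size([4, 8])

Concatenation is the simplest form of representation sharing; the unimodal encoders could equally be pretrained separately and frozen before training the fusion layers, which matches the two-stage training order described in the abstract.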
Pages: 74762-74771
Page count: 10
Related Papers
50 records in total
  • [1] A Robust Multimodal Fusion Framework for Command Interpretation in Human-Robot Cooperation
    Cacace, Jonathan
    Finzi, Alberto
    Lippiello, Vincenzo
    2017 26TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2017, : 372 - 377
  • [2] Multimodal perception-fusion-control and human-robot collaboration in manufacturing: a review
    Duan, Jianguo
    Zhuang, Liwen
    Zhang, Qinglei
    Zhou, Ying
    Qin, Jiyun
    INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY, 2024, 132 (3-4): : 1071 - 1093
  • [3] Multimodal Information Fusion for Human-Robot Interaction
    Luo, Ren C.
    Wu, Y. C.
    Lin, P. H.
    2015 IEEE 10TH JUBILEE INTERNATIONAL SYMPOSIUM ON APPLIED COMPUTATIONAL INTELLIGENCE AND INFORMATICS (SACI), 2015, : 535 - 540
  • [4] Collaborative manufacturing with physical human-robot interaction
    Cherubini, Andrea
    Passama, Robin
    Crosnier, Andre
    Lasnier, Antoine
    Fraisse, Philippe
    ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, 2016, 40 : 1 - 13
  • [5] Multimodal fusion and human-robot interaction control of an intelligent robot
    Gong, Tao
    Chen, Dan
    Wang, Guangping
    Zhang, Weicai
    Zhang, Junqi
    Ouyang, Zhongchuan
    Zhang, Fan
    Sun, Ruifeng
    Ji, Jiancheng Charles
    Chen, Wei
    FRONTIERS IN BIOENGINEERING AND BIOTECHNOLOGY, 2024, 11
  • [6] Evaluation of Robot Degradation on Human-Robot Collaborative Performance in Manufacturing
    Nguyen, Vinh
    Marvel, Jeremy
    SMART AND SUSTAINABLE MANUFACTURING SYSTEMS, 2022, 6 (01): : 23 - 36
  • [7] An Extensible Architecture for Robust Multimodal Human-Robot Communication
    Rossi, Silvia
    Leone, Enrico
    Fiore, Michelangelo
    Finzi, Alberto
    Cutugno, Francesco
    2013 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2013, : 2208 - 2213
  • [8] A FRAMEWORK FOR HUMAN-ROBOT INTERACTION IN COLLABORATIVE MANUFACTURING ENVIRONMENTS
    Zhao, Ran
    Sidobre, Daniel
    INTERNATIONAL JOURNAL OF ROBOTICS & AUTOMATION, 2019, 34 (06): : 668 - 677
  • [9] Cybersecurity Metrics for Human-Robot Collaborative Automotive Manufacturing
    Rahman, S. M. Mizanoor
    2021 IEEE INTERNATIONAL WORKSHOP ON METROLOGY FOR AUTOMOTIVE (METROAUTOMOTIVE), 2021, : 254 - 259
  • [10] Human-Robot Interaction and Collaborative Manipulation with Multimodal Perception Interface for Human
    Huang, Shouren
    Ishikawa, Masatoshi
    Yamakawa, Yuji
    PROCEEDINGS OF THE 7TH INTERNATIONAL CONFERENCE ON HUMAN-AGENT INTERACTION (HAI'19), 2019, : 289 - 291