Shared environment representation for a human-robot team performing information fusion

Cited by: 25
Authors
Kaupp, Tobias [1 ]
Douillard, Bertrand [1 ]
Ramos, Fabio [1 ]
Makarenko, Alexei [1 ]
Upcroft, Ben [1 ]
Affiliation
[1] Univ Sydney, ARC Ctr Excellence Autonomous Syst CAS, Sydney, NSW 2006, Australia
Keywords
DOI
10.1002/rob.20201
Chinese Library Classification (CLC)
TP24 [Robotics];
Discipline codes
080202 ; 1405 ;
Abstract
This paper addresses the problem of building a shared environment representation by a human-robot team. Rich environment models are required in real applications, both for the autonomous operation of robots and to support human decision-making. Two probabilistic models are used to describe outdoor environment features such as trees: geometric (position in the world) and visual. The visual representation is used to improve data association and to classify features. Both models are able to incorporate observations from robotic platforms and human operators. Physically, humans and robots form a heterogeneous sensor network. In our experiments, the human-robot team consists of an unmanned air vehicle, a ground vehicle, and two human operators. They are deployed for an information gathering task and perform information fusion cooperatively. All aspects of the system, including the fusion algorithms, are fully decentralized. Experimental results are presented in the form of the acquired multi-attribute feature map, information exchange patterns demonstrating human-robot information fusion, and a quantitative model evaluation. Lessons learned from deploying the system in the field are also presented. (C) 2007 Wiley Periodicals, Inc.
Pages: 911 - 942
Page count: 32
Related papers
50 items total
  • [21] Safety barrier functions and multi-camera tracking for human-robot shared environment
    Ferraguti, Federica
    Landi, Chiara Talignani
    Costi, Silvia
    Bonfe, Marcello
    Farsoni, Saverio
    Secchi, Cristian
    Fantuzzi, Cesare
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2020, 124
  • [22] Understanding human-robot teams in light of all-human teams: Aspects of team interaction and shared cognition
    Demir, Mustafa
    McNeese, Nathan J.
    Cooke, Nancy J.
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, 2020, 140
  • [23] Learning fusion feature representation for garbage image classification model in human-robot interaction
    Li, Xi
    Li, Tian
    Li, Shaoyi
    Tian, Bin
    Ju, Jianping
    Liu, Tingting
    Liu, Hai
    INFRARED PHYSICS & TECHNOLOGY, 2023, 128
  • [24] Graphical Narrative Interfaces: Representing Spatiotemporal Information for a Highly Autonomous Human-Robot Team
    Nakano, Hiroaki
    Goodrich, Michael A.
    2015 24TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2015, : 634 - 639
  • [25] Breaking the Human-Robot Deadlock: Surpassing Shared Control Performance Limits with Sparse Human-Robot Interaction
    Trautman, Pete
    ROBOTICS: SCIENCE AND SYSTEMS XIII, 2017
  • [26] Human-Robot Interaction by Information Sharing
    Anzai, Yuichiro
    PROCEEDINGS OF THE 8TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI 2013), 2013, : 65 - 66
  • [27] Digital Representation of Skills for Human-Robot Interaction
    Avizzano, Carlo Alberto
    Ruffaldi, Emanuele
    Bergamasco, Massimo
    RO-MAN 2009: THE 18TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, VOLS 1 AND 2, 2009, : 347 - 352
  • [28] Human-Robot Teaming with Human Intent Prediction and Shared Control
    Jin, Zongyao
    Pagilla, Prabhakar R.
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS II, 2020, 11413
  • [29] Spatial knowledge representation for human-robot interaction
    Moratz, R
    Tenbrink, T
    Bateman, J
    Fischer, K
    SPATIAL COGNITION III, 2003, 2685 : 263 - 286
  • [30] From Perception to Semantics: An environment representation model based on human-robot interactions
    Breux, Yohan
    Druon, Sebastien
    Zapata, Rene
    2018 27TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (IEEE RO-MAN 2018), 2018, : 672 - 677