Object segmentation in cluttered environment based on gaze tracing and gaze blinking

Cited by: 0
Authors
Photchara Ratsamee
Yasushi Mae
Kazuto Kamiyama
Mitsuhiro Horade
Masaru Kojima
Tatsuo Arai
Affiliations
[1] Osaka University, Graduate School of Information Science and Technology
[2] Kansai University, Graduate School of Engineering
[3] Takenaka Corporation, Takenaka Research & Development Institute
[4] National Defense Academy, Department of Mechanical Systems Engineering, School of Systems Engineering
[5] Osaka University, Graduate School of Engineering Science
[6] The University of Electro-Communications
Keywords
Gaze interface; Human–robot interaction; Object segmentation;
DOI
Not available
Abstract
People with disabilities, such as patients with motor paralysis, often cannot move any part of the body except the eyes and therefore lack independence. Supportive robot technology is highly beneficial for these patients. We propose gaze-informed, location-based (gaze-based) object segmentation, a core module of successful patient–robot interaction in an object-search task (i.e., a situation in which a robot must search for and deliver a target object to the patient). We introduce the concepts of gaze tracing (GT) and gaze blinking (GB), which are integrated into our proposed segmentation technique to yield accurate visual segmentation of unknown objects in a complex scene. Gaze-tracing information serves as a clue to where the target object is located in the scene, and gaze blinking then confirms the target's position. The effectiveness of the proposed method was demonstrated with a humanoid robot in experiments on several types of highly cluttered scenes. Using only limited gaze guidance from the user, we achieved an 85% F-score for unknown-object segmentation in an unknown environment.
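The two-stage interaction the abstract describes (gaze tracing to localize, a deliberate blink to confirm) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: `GazeSample`, the eye-closure flag, and the thresholds are all hypothetical, and the returned seed point would in practice be handed to a segmentation routine.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class GazeSample:
    x: float           # gaze point in image coordinates (hypothetical tracker output)
    y: float
    eyes_closed: bool  # True while the user's eyes are shut

def confirmed_fixation(samples, min_blink_frames=3, tail_len=10):
    """Return an (x, y) seed confirmed by a deliberate blink, or None.

    Gaze tracing: the open-eye samples trace where the user is looking.
    Gaze blinking: a blink lasting >= min_blink_frames frames confirms
    the target; the seed is the median of the last few gaze points
    recorded before the blink began.
    """
    run = 0  # length of the current run of closed-eye frames
    for i, s in enumerate(samples):
        run = run + 1 if s.eyes_closed else 0
        if run >= min_blink_frames:
            # Samples recorded before the blink started.
            before = [p for p in samples[:i - run + 1] if not p.eyes_closed]
            if not before:
                return None
            tail = before[-tail_len:]
            return (median(p.x for p in tail), median(p.y for p in tail))
    return None  # no confirming blink observed
```

In a full system the seed returned here would initialize the unknown-object segmentation step; the median over the fixation tail makes the seed robust to small saccades and tracker noise.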
Related papers (50 total)
  • [41] Target Selection Based on Gaze Likelihood in Gaze Input Interface
    Kobayashi, Fumihiro
    Takahashi, Hiroki
    2016 8TH INTERNATIONAL CONFERENCE ON KNOWLEDGE AND SMART TECHNOLOGY (KST), 2016, : 249 - 252
  • [42] Model-based gaze direction estimation in office environment
    Jung, Do Joon
    Kwon, Kyung Su
    Park, Se Hyun
    Kim, Jong Bae
    Kim, Hang Joon
    DELTA 2008: FOURTH IEEE INTERNATIONAL SYMPOSIUM ON ELECTRONIC DESIGN, TEST AND APPLICATIONS, PROCEEDINGS, 2008, : 470 - +
  • [43] Gaze-based interaction on multiple displays in an automotive environment
    Poitschke, Tony
    Laquai, Florian
    Stamboliev, Stilyan
    Rigoll, Gerhard
    2011 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2011, : 543 - 548
  • [44] Gaze interaction: anticipation-based control of the gaze of others
    Riechelmann, Eva
    Raettig, Tim
    Boeckler, Anne
    Huestegge, Lynn
    PSYCHOLOGICAL RESEARCH-PSYCHOLOGISCHE FORSCHUNG, 2021, 85 (01): 302 - 321
  • [45] Gaze-Driven Object Tracking Based on Optical Flow Estimation
    Bazyluk, Bartosz
    Mantiuk, Radoslaw
    COMPUTER VISION AND GRAPHICS, ICCVG 2014, 2014, 8671 : 84 - 91
  • [46] Topology for gaze analyses - Raw data segmentation
    Hein, Oliver
    Zangemeister, Wolfgang H.
    JOURNAL OF EYE MOVEMENT RESEARCH, 2017, 10 (01):
  • [47] GaTector: A Unified Framework for Gaze Object Prediction
    Wang, Binglu
    Hu, Tao
    Li, Baoshan
    Chen, Xiaojuan
    Zhang, Zhijie
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 19566 - 19575
  • [48] Object Referring in Videos with Language and Human Gaze
    Vasudevan, Arun Balajee
    Dai, Dengxin
    Van Gool, Luc
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 4129 - 4138
  • [49] Gaze in the Dark: Gaze Estimation in a Low-Light Environment with Generative Adversarial Networks
    Kim, Jung-Hwa
    Jeong, Jin-Woo
    SENSORS, 2020, 20 (17) : 1 - 20
  • [50] Gesture, speech, and gaze cues for discourse segmentation
    Quek, F
    McNeill, D
    Bryll, R
    Kirbas, C
    Arslan, H
    McCullough, KE
    Furuyama, N
    Ansari, R
    IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, PROCEEDINGS, VOL II, 2000, : 247 - 254