Vision-Based Object Manipulation for Activities of Daily Living Assistance Using Assistive Robot

Cited: 0
Authors
Shahria, Md Tanzil [1 ,5 ]
Ghommam, Jawhar [2 ]
Fareh, Raouf [3 ]
Rahman, Mohammad Habibur [1 ,4 ]
Affiliations
[1] Univ Wisconsin, Comp Sci, Milwaukee, WI 53211 USA
[2] Sultan Qaboos Univ, Elect & Comp Engn, Muscat 123, Oman
[3] Univ Sharjah, Elect & Comp Engn, Sharjah 27272, U Arab Emirates
[4] Univ Wisconsin, Mech Engn, Milwaukee, WI 53211 USA
[5] Univ Wisconsin, Biorobot Lab, 115 East Reindl Way,USR 281, Milwaukee, WI 53212 USA
Source
AUTOMATION | 2024, Vol. 5, No. 2
Keywords
activities of daily living (ADLs); assistive robot; localization; pre-trained deep learning model; robotic assistance; vision-based object manipulation;
DOI
10.3390/automation5020006
Chinese Library Classification: TP [automation technology; computer technology]
Subject classification code: 0812
Abstract
The increasing prevalence of upper and lower extremity (ULE) functional deficiencies presents a significant challenge, as it restricts individuals' ability to perform daily tasks independently. Robotic devices are emerging as assistive tools for individuals with limited ULE function in activities of daily living (ADLs). While assistive manipulators are available, manual control through traditional interfaces such as joysticks can be cumbersome, particularly for individuals with severe hand impairments and vision limitations. Autonomous and semi-autonomous control of robotic assistive devices for ADL tasks therefore remains an open research problem. This study addresses the need to foster independence in ADLs by proposing a vision-based control system for a six-degrees-of-freedom (DoF) robotic manipulator designed for semi-autonomous "pick-and-place" tasks, one of the most common ADL activities. Our approach involves selecting and training a deep-learning-based object detection model on a dataset of 47 ADL objects, which forms the basis of a 3D ADL object localization algorithm. The proposed vision-based control system integrates this localization technique to identify and manipulate ADL objects (e.g., apples, oranges, capsicums, and cups) in real time, returning them to specified locations to complete the "pick-and-place" task. Experimental validation with an xArm6 (six-DoF) robot from UFACTORY in diverse settings demonstrates the system's adaptability and effectiveness, achieving an overall 72.9% success rate in detecting, localizing, and executing ADL tasks. This research contributes to the growing field of autonomous assistive devices, enhancing independence for individuals with functional impairments.
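The abstract describes turning 2D detections from the trained model into 3D object positions for the manipulator. The paper's actual localization algorithm is not reproduced here; the sketch below only illustrates the standard pinhole-camera back-projection that such pipelines typically rely on, assuming a hypothetical RGB-D setup where a detection bounding box and an aligned depth map are available (function name, intrinsics, and inputs are illustrative, not from the paper):

```python
import numpy as np

def localize_object(bbox, depth_map, fx, fy, cx, cy):
    """Back-project the center of a 2D detection into a 3D point
    in the camera frame using the pinhole camera model.

    bbox      -- (x_min, y_min, x_max, y_max) in pixels
    depth_map -- 2D array of depth values in meters, aligned to the RGB image
    fx, fy    -- focal lengths in pixels
    cx, cy    -- principal point in pixels
    """
    u = (bbox[0] + bbox[2]) / 2.0          # bounding-box center, pixel column
    v = (bbox[1] + bbox[3]) / 2.0          # bounding-box center, pixel row
    z = float(depth_map[int(v), int(u)])   # depth at the center pixel (m)
    x = (u - cx) * z / fx                  # pinhole back-projection
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```

A point computed this way is in the camera frame; a real pick-and-place system would additionally apply a calibrated camera-to-robot-base transform before commanding the end effector.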
Pages: 68-89 (22 pages)
Related Papers (50 total)
  • [1] VIBI: Assistive Vision-Based Interface for Robot Manipulation
    Quintero, Camilo Perez
    Ramirez, Oscar
    Jaegersand, Martin
    [J]. 2015 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2015, : 4458 - 4463
  • [2] Robot Manipulation of Dynamic Object with Vision-based Reinforcement Learning
    Liu, Chenchen
    Zhang, Zhengshen
    Zhou, Lei
    Liu, Zhiyang
    Ang, Marcelo H., Jr.
    Lu, Wenfeng
    Tay, Francis E. H.
    [J]. 2024 9TH INTERNATIONAL CONFERENCE ON CONTROL AND ROBOTICS ENGINEERING, ICCRE 2024, 2024, : 21 - 26
  • [3] Vision-based Automatic Control of a 5-Fingered Assistive Robotic Manipulator for Activities of Daily Living
    Wang, Chen
    Freer, Daniel
    Lui, Jindong
    Yang, Guang-Zhong
    [J]. 2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 627 - 633
  • [4] Vision-based Belt Manipulation by Humanoid Robot
    Qin, Yili
    Escande, Adrien
    Tanguy, Arnaud
    Yoshida, Eiichi
    [J]. 2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, : 3547 - 3552
  • [5] Vision-based manipulation with the humanoid robot Romeo
    Claudio, Giovanni
    Spindler, Fabien
    Chaumette, Francois
    [J]. 2016 IEEE-RAS 16TH INTERNATIONAL CONFERENCE ON HUMANOID ROBOTS (HUMANOIDS), 2016, : 286 - 293
  • [6] Vision-based Teleoperation of A Mobile Robot with Visual Assistance
    Kubota, Naoyuki
    Koudu, Daisuke
    Kamijima, Shinichi
    Taniguchi, Kazuhiko
    Nogawa, Yasutsugu
    [J]. INTELLIGENT AUTONOMOUS SYSTEMS 9, 2006, : 365 - +
  • [7] Graceful User Following for Mobile Balance Assistive Robot in Daily Activities Assistance
    Wang, Yifan
    Yuan, Meng
    Li, Lei
    Chua, Karen Sui Geok
    Wee, Seng Kwee
    Ang, Wei Tech
    [J]. IFAC PAPERSONLINE, 2023, 56 (02): : 1139 - 1144
  • [8] Recognition of vision-based activities of daily living using linear predictive coding of histogram of directional derivative
    Sidharth B. Bhorge
    Ramchandra R. Manthalkar
    [J]. Journal of Ambient Intelligence and Humanized Computing, 2019, 10 : 199 - 214
  • [9] Recognition of vision-based activities of daily living using linear predictive coding of histogram of directional derivative
    Bhorge, Sidharth B.
    Manthalkar, Ramchandra R.
    [J]. JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING, 2019, 10 (01) : 199 - 214
  • [10] Object manipulation by learning stereo vision-based robots
    Nguyen, MC
    Graefe, V
    [J]. IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4: EXPANDING THE SOCIETAL ROLE OF ROBOTICS IN THE NEXT MILLENNIUM, 2001, : 146 - 151