A Voice-Controlled Motion Reproduction Using Large Language Models for Polishing Robots

Citations: 0
Authors
Tanaka, Yuki [1 ]
Katsura, Seiichiro [1 ]
Affiliations
[1] Keio Univ, Dept Syst Design Engn, Keio, Japan
Keywords
motion control; motion reproduction system; natural language processing; large language models; polishing robot; transfer of skills; human-robot interaction; robot teaching
DOI
10.1109/ICM54990.2023.10101966
CLC classification
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
In recent years, the shortage of skilled professionals in industrial fields has become a major social problem. To address it, the transfer of human skills to robots has attracted much attention. However, skilled workers are generally unfamiliar with robot control, and it is hard for them to teach robots their skills through numerical commands or program source code. Many studies have therefore pursued more user-friendly human-robot interaction. In previous research, robot task processes are pre-defined and cannot be changed during task execution. We developed a robot system that combines the motion-copying system with GPT-3, one of the Large Language Models. This system can not only reproduce a saved motion but also modify it during execution using natural-language commands. We evaluated the proposed system by applying it to polishing robots and confirmed that the surface of the workpieces changed according to the input commands.
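The pipeline described in the abstract (replay saved motion data, and let a language model map a spoken command to a modification of the saved references mid-execution) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: `parse_command` is a hypothetical keyword-based stub standing in for the GPT-3 call, and the scaling factors and motion format are assumptions.

```python
def parse_command(command: str) -> dict:
    """Stand-in for the GPT-3 call in the paper: map a natural-language
    command to scaling factors for the saved force and velocity references."""
    cmd = command.lower()
    factors = {"force": 1.0, "velocity": 1.0}
    if "harder" in cmd or "stronger" in cmd:
        factors["force"] = 1.5
    if "softer" in cmd or "gentler" in cmd:
        factors["force"] = 0.5
    if "faster" in cmd:
        factors["velocity"] = 1.5
    if "slower" in cmd:
        factors["velocity"] = 0.5
    return factors

def reproduce_motion(saved_motion, commands):
    """Replay saved (force, velocity) samples, applying any command that
    arrives at a given step to the remainder of the motion."""
    factors = {"force": 1.0, "velocity": 1.0}
    output = []
    for step, (force, velocity) in enumerate(saved_motion):
        if step in commands:  # a command was received mid-execution
            update = parse_command(commands[step])
            factors = {k: factors[k] * update[k] for k in factors}
        output.append((force * factors["force"],
                       velocity * factors["velocity"]))
    return output

# Saved polishing motion: constant 10 N normal force, 0.02 m/s feed.
motion = [(10.0, 0.02)] * 6
modified = reproduce_motion(motion, {3: "polish harder and slower"})
```

In this sketch the first three samples replay unchanged, and from the step where the command arrives onward the force reference is scaled up and the feed rate scaled down, mirroring the paper's idea of modifying a saved motion during reproduction rather than re-teaching it.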
Pages: 6
Related papers (50 total)
  • [1] Learning Visual-Audio Representations for Voice-Controlled Robots
    Chang, Peixin
    Liu, Shuijing
    McPherson, D. Livingston
    Driggs-Campbell, Katherine
    2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023), 2023, : 9508 - 9514
  • [2] Complex Motion Planning for Quadruped Robots Using Large Language Models
    Zhang, Xiang
    He, Run
    Tong, Kai
    Man, Shuquan
    Tong, Jingyu
    Li, Haodong
    Zhuang, Huiping
    2024 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS 2024, 2024,
  • [3] Voice-Controlled Autonomous Vehicle Using IoT
    Sachdev, Sumeet
    Macwan, Joel
    Patel, Chintan
    Doshi, Nishant
    10TH INT CONF ON EMERGING UBIQUITOUS SYST AND PERVAS NETWORKS (EUSPN-2019) / THE 9TH INT CONF ON CURRENT AND FUTURE TRENDS OF INFORMAT AND COMMUN TECHNOLOGIES IN HEALTHCARE (ICTH-2019) / AFFILIATED WORKOPS, 2019, 160 : 712 - 717
  • [4] Designing of voice-controlled drone using BT-voice control for Arduino
    Ambika, Farook
    Renuka, K.
    Shifa, Praveen
    Badiuddin, Farook
    Sneha
    Shenoy, Praveen
    JOURNAL OF ADVANCED APPLIED SCIENTIFIC RESEARCH, 2023, 5 (03): : 50 - 56
  • [5] Development of a Voice-Controlled Home Automation Using Zigbee Module
    Cubukcu, Aykut
    Kuncan, Melih
    Kaplan, Kaplan
    Ertunc, H. Metin
    2015 23RD SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2015, : 1801 - 1804
  • [6] VOICE-CONTROLLED PICK AND PLACE ROBOT USING ARDUINO UNO
    Vinay, Kukati
    Jyothish, Ravuru Sai
    Islam, Shaik Moyinul
    SenthamilSelvan, K.
    INTERNATIONAL JOURNAL OF EARLY CHILDHOOD SPECIAL EDUCATION, 2022, 14 (05) : 261 - 269
  • [7] Using Large Language Models to Shape Social Robots' Speech
    Sevilla-Salcedo, Javier
    Fernandez-Rodicio, Enrique
    Martin-Galvan, Laura
    Castro-Gonzalez, Alvaro
    Castillo, Jose C.
    Salichs, Miguel A.
    INTERNATIONAL JOURNAL OF INTERACTIVE MULTIMEDIA AND ARTIFICIAL INTELLIGENCE, 2023, 8 (03): : 6 - 20
  • [8] Voice-Controlled Intelligent Personal Assistant for Call-Center Automation in the Uzbek Language
    Mukhamadiyev, Abdinabi
    Khujayarov, Ilyos
    Cho, Jinsoo
    ELECTRONICS, 2023, 12 (23)
  • [9] Development of a Voice-controlled Intelligent Wheelchair System using Raspberry Pi
    Alim, Muhammad Azlan
    Setumin, Samsul
    Rosli, Anis Diyana
    Ani, Adi Izhar Che
    11TH IEEE SYMPOSIUM ON COMPUTER APPLICATIONS & INDUSTRIAL ELECTRONICS (ISCAIE 2021), 2021, : 274 - 278
  • [10] High-Capacity Robots in Early Education: Developing Computational Thinking with a Voice-Controlled Collaborative Robot
    Castro, Angela
    Aguilera, Cristhian
    Yang, Weipeng
    Urrutia, Brigida
    EDUCATION SCIENCES, 2024, 14 (08):