Goal Recognition Using Deep Learning in a Planetary Exploration Rover Developed for a Contest

Cited by: 0
Authors:
Akiyama, Miho [1 ]
Saito, Takuya [2 ]
Affiliations:
[1] Shonan Inst Technol, Grad Sch Elect & Informat Engn, Fujisawa, Kanagawa, Japan
[2] Shonan Inst Technol, Fac Engn, Fujisawa, Kanagawa, Japan
Keywords:
DOI:
10.1109/icce-taiwan49838.2020.9258310
CLC number:
TP18 [Artificial Intelligence Theory];
Subject classification codes:
081104 ; 0812 ; 0835 ; 1405 ;
Abstract:
We participated in the "A Rocket Launch for International Student Satellites (ARLISS)" competition, in which originally designed planetary exploration rovers compete to get as close as possible to a target under autonomous control. In this competition, the rovers of the various teams approached the target position using the Global Positioning System (GPS); however, because of GPS positioning error, they could only come within a few meters of the target. Our rover instead recognized a red traffic cone placed at the goal point by its color, and in the Tanegashima Rocket Contest 2018 it was controlled to a distance of 0 m from the goal. However, recognizing goal objects by color is unstable under changes in ambient lighting caused, for example, by changing weather. We therefore attempted to resolve this problem with deep learning. A general deep learning model, however, requires too much computation time on the small onboard computer of a planetary exploration rover and thus cannot be applied as is. We therefore proposed a deep learning model with a short computation time and high recognition accuracy. With the proposed method, a recognition rate of over 99% was achieved within a few seconds. Furthermore, a rover using the proposed method won the contest, demonstrating the method's effectiveness.
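
The paper itself does not include code, but the two techniques the abstract contrasts can be sketched briefly. The color-based baseline is typically implemented as an HSV threshold; the minimal Python sketch below assumes OpenCV, and the threshold values and the cone_detected helper are hypothetical rather than the authors'. Its dependence on fixed thresholds is precisely why recognition becomes unstable when ambient lighting changes.

    import cv2
    import numpy as np

    def red_cone_mask(bgr_image):
        # Red wraps around OpenCV's 0-179 hue axis, so two hue ranges
        # are combined. These fixed thresholds are illustrative; in
        # practice they must be retuned whenever the lighting shifts,
        # which is the instability the abstract describes.
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        low_reds = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
        high_reds = cv2.inRange(hsv, (170, 120, 70), (179, 255, 255))
        return cv2.bitwise_or(low_reds, high_reds)

    def cone_detected(bgr_image, min_area_ratio=0.005):
        # Declare a detection when enough of the frame is "red".
        mask = red_cone_mask(bgr_image)
        return np.count_nonzero(mask) / mask.size >= min_area_ratio

The abstract does not specify the architecture of the proposed lightweight model, so the following is only a sketch of the general idea: a deliberately small convolutional network whose parameter count keeps inference within seconds on a small onboard computer. The layer sizes, the 64x64 input resolution, and the two-class (cone / background) output are assumptions, not the authors' design.

    import tensorflow as tf

    def build_tiny_cone_classifier(input_shape=(64, 64, 3)):
        # A few narrow convolution blocks and a small dense head: a
        # hypothetical stand-in for the paper's unspecified model.
        return tf.keras.Sequential([
            tf.keras.Input(shape=input_shape),
            tf.keras.layers.Conv2D(8, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(16, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(2, activation="softmax"),
        ])

Such a model would be trained offline and deployed on the rover; the trade-off between model size and inference time is the balance the proposed method aims at.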
Pages: 2