Action Generative Networks Planning for Deformable Object with Raw Observations

Cited by: 0
Authors
Sheng, Ziqi [1 ]
Jin, Kebing [1 ]
Ma, Zhihao [1 ]
Zhuo, Hankz-Hankui [1 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou 510006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
AI planning; contrastive learning; action model;
DOI
10.3390/s21134552
CLC number
O65 [Analytical Chemistry];
Subject classification codes
070302; 081704;
Abstract
Synthesizing plans for a deformable object to transition from initial observations to goal observations, both represented by high-dimensional ("raw") data, is challenging due to the difficulty of learning abstract state representations of raw data and transition models over continuous states and continuous actions. Although some approaches have made remarkable progress on this planning problem, they often neglect the actions between observations and are unable to generate action sequences from initial observations to goal observations. In this paper, we propose a novel algorithmic framework, AGN. We first learn a state-abstractor model to abstract states from raw observations, a state-generator model to generate raw observations from states, a heuristic model to predict the action to be executed in the current state, and a transition model to transform the current state into the next state after executing a specific action. We then generate plans for a deformable object by applying the four models. We evaluate our approach in continuous domains and show that it is effective in comparison with state-of-the-art algorithms.
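The abstract names four learned models that are chained at planning time. As a rough illustration of how such a pipeline fits together, the sketch below wires the four components into a simple greedy planning loop. All function bodies here are hypothetical linear stand-ins invented for this example; the actual model architectures and planning procedure are defined in the paper itself.

```python
import numpy as np

# Hypothetical stand-ins for the four models named in the abstract.
# In AGN these are learned networks; here they are toy linear maps.

def state_abstractor(obs):
    """Abstract a raw observation into a low-dimensional state."""
    return obs.mean(axis=-1)

def state_generator(state):
    """Generate a raw observation back from an abstract state."""
    return np.repeat(state[..., None], 4, axis=-1)

def heuristic(state, goal_state):
    """Predict an action to execute in the current state."""
    return goal_state - state

def transition(state, action):
    """Predict the next state after executing an action."""
    return state + action

def plan(init_obs, goal_obs, max_steps=10, tol=1e-3):
    """Chain the four models: abstract both endpoints, then roll the
    heuristic and transition models forward until the goal state is
    (approximately) reached, decoding intermediate observations."""
    state = state_abstractor(init_obs)
    goal = state_abstractor(goal_obs)
    actions, states = [], [state]
    for _ in range(max_steps):
        if np.linalg.norm(state - goal) < tol:
            break
        action = heuristic(state, goal)
        state = transition(state, action)
        actions.append(action)
        states.append(state)
    # Decode the planned state trajectory back into raw observations.
    return actions, [state_generator(s) for s in states]
```

With the toy linear models above, the heuristic closes the gap to the goal in a single step; with learned models, the loop would instead take many small continuous actions.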
Pages: 13