Piston Error Automatic Correction for Segmented Mirrors via Deep Reinforcement Learning

Cited by: 0
Authors
Li, Dequan [1]
Wang, Dong [1]
Yan, Dejie [1]
Affiliations
[1] Chinese Acad Sci, Changchun Inst Opt Fine Mech & Phys, Space Opt Dept, Changchun 130033, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
segmented mirrors; deep reinforcement learning; co-phase error; KECK TELESCOPES; DIVERSITY; SENSOR; SYSTEM;
DOI
10.3390/s24134236
CLC Classification Number
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
Compared with other approaches, segmented-mirror co-phase error identification based on supervised learning offers simple application conditions, no dependence on custom sensors, fast computation, and low computing-power requirements. In practice, however, it is often difficult to achieve high accuracy with this method because the trained model differs from the actual optical system. A reinforcement learning algorithm does not require a model of the real system while operating it, yet it retains the advantages of supervised learning. In this paper, we therefore place a mask on the pupil plane of the segmented telescope optical system and, using the broadband point spread function and modulation transfer function of the system together with deep reinforcement learning (and without modeling the optical system), propose a large-range, high-precision automatic piston error co-phasing method in which multiple submirrors are corrected in parallel. Finally, we carried out simulation experiments, and the results indicate that the method is effective.
Pages: 13
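
The abstract describes a control loop in which the observation comes from the broadband point spread function and modulation transfer function of the masked pupil, and the action is a vector of piston corrections applied to all submirrors in parallel, with no explicit model of the optics. The following is a minimal sketch of such a loop as a gym-style toy environment; it is not the authors' implementation, and the two-segment aperture, MTF feature summary, reward, and thresholds are illustrative assumptions.

# Minimal sketch (not the authors' code) of an RL environment for piston-error
# correction: observations are broadband MTF features, actions are per-segment
# piston corrections. Geometry, reward, and parameter values are assumptions.
import numpy as np

class SegmentedPistonEnv:
    """Toy two-segment aperture with reset/step for a continuous-action RL agent."""

    def __init__(self, n_segments=2, grid=128, wavelengths_um=(0.55, 0.65, 0.75),
                 max_piston_um=2.0, seed=0):
        self.n_segments = n_segments
        self.grid = grid
        self.wavelengths_um = wavelengths_um
        self.max_piston_um = max_piston_um
        self.rng = np.random.default_rng(seed)
        # Two side-by-side half-disc segments standing in for hexagonal submirrors.
        x = np.linspace(-1.0, 1.0, grid)
        xx, yy = np.meshgrid(x, x)
        aperture = (xx**2 + yy**2) <= 1.0
        self.masks = [aperture & (xx < 0.0), aperture & (xx >= 0.0)][:n_segments]
        self.piston_um = np.zeros(n_segments)

    def _mtf_features(self):
        """Average MTF over wavelengths, a stand-in for broadband PSF/MTF features."""
        feats = []
        for lam in self.wavelengths_um:
            pupil = np.zeros((self.grid, self.grid), dtype=complex)
            for mask, p in zip(self.masks, self.piston_um):
                pupil[mask] = np.exp(1j * 2.0 * np.pi * p / lam)  # piston phase per segment
            psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2  # incoherent PSF
            mtf = np.abs(np.fft.fft2(psf))
            mtf /= mtf.flat[0]                                    # normalize to DC
            feats.append(np.sort(mtf.ravel())[-64:])              # coarse summary of the MTF
        return np.concatenate(feats).astype(np.float32)

    def reset(self):
        # Random initial piston errors within the assumed capture range.
        self.piston_um = self.rng.uniform(-self.max_piston_um, self.max_piston_um,
                                          self.n_segments)
        return self._mtf_features()

    def step(self, action_um):
        """Apply piston corrections (one value per segment, in microns) in parallel."""
        self.piston_um = self.piston_um - np.asarray(action_um, dtype=float)
        residual = np.sqrt(np.mean(self.piston_um**2))
        reward = -residual                      # illustrative reward: negative RMS residual
        done = residual < 0.01                  # ~10 nm threshold, chosen arbitrarily here
        return self._mtf_features(), reward, done, {"residual_um": residual}

Any standard continuous-action deep RL agent could then be trained against reset/step alone, which is the model-free property the abstract emphasizes; a realistic study would replace the toy geometry, feature summary, and reward with the actual segment layout, detector sampling, and noise model.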