Modelling and controlling uncertainty in optimal disassembly planning through reinforcement learning

Cited by: 7
Authors
Reveliotis, SA [1 ]
Affiliation
[1] Georgia Inst Technol, Sch Ind & Syst Engn, Atlanta, GA 30332 USA
Keywords
DOI: 10.1109/ROBOT.2004.1307457
Chinese Library Classification (CLC): TP [Automation technology; computer technology]
Discipline Classification Code: 0812
Abstract
There is increasing consensus that one of the main issues differentiating remanufacturing from more traditional manufacturing processes is the need to effectively model and manage the high levels of uncertainty inherent in these newer processes. The work presented in this paper formally establishes that the theory of reinforcement learning, one of the most actively researched areas in computational learning theory, constitutes a rigorous and effectively implementable modelling framework for providing (near-)optimal solutions to the optimal disassembly planning (ODP) problem, one of the key problems arising in remanufacturing, in the face of the aforementioned uncertainties. The developed results are exemplified and validated through application to a case study borrowed from the relevant literature.
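The paper casts ODP as a sequential decision problem solved with reinforcement learning; since this record contains no code, the following is only a minimal illustrative sketch, not the author's formulation. It shows tabular Q-learning on a hypothetical three-part disassembly MDP in which each removal attempt can fail, a stand-in for the uncertainty the abstract refers to; all part names, salvage values, and failure probabilities are invented for the example.

```python
import random
from collections import defaultdict

# Hypothetical disassembly MDP (illustrative only): a state is the frozenset of
# parts still attached, and an action removes one part.  Removal can fail
# (e.g. the part is damaged during extraction), in which case no salvage value
# is recovered -- a stand-in for the process uncertainty discussed in the paper.
PARTS = ("cover", "board", "battery")
SALVAGE = {"cover": 1.0, "board": 5.0, "battery": 3.0}      # reward on success
FAIL_PROB = {"cover": 0.1, "board": 0.3, "battery": 0.2}    # chance of failure


def step(state, part):
    """Attempt to remove `part` from `state`; return (next_state, reward)."""
    next_state = state - {part}
    if random.random() < FAIL_PROB[part]:
        return next_state, 0.0           # removal failed, part is lost
    return next_state, SALVAGE[part]     # part recovered intact


def q_learning(episodes=5000, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning over (state, action) pairs with epsilon-greedy exploration."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state = frozenset(PARTS)
        while state:                      # episode ends when no parts remain
            actions = sorted(state)
            if random.random() < eps:
                part = random.choice(actions)
            else:
                part = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward = step(state, part)
            best_next = max((Q[(next_state, a)] for a in next_state), default=0.0)
            Q[(state, part)] += alpha * (reward + gamma * best_next - Q[(state, part)])
            state = next_state
    return Q


if __name__ == "__main__":
    Q = q_learning()
    # Read off the greedy disassembly sequence implied by the learned Q-values.
    state, plan = frozenset(PARTS), []
    while state:
        part = max(sorted(state), key=lambda a: Q[(state, a)])
        plan.append(part)
        state = state - {part}
    print("Greedy disassembly order:", plan)
```

In the actual paper the state and action spaces, rewards, and transition structure come from the product's disassembly model; the sketch only illustrates that Q-learning is model-free and needs no explicit transition probabilities, which is what makes reinforcement learning attractive under the uncertainty the abstract describes.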
Pages: 2625-2632 (8 pages)