Distributed Multi-Agent Reinforcement Learning for Cooperative Low-Carbon Control of Traffic Network Flow Using Cloud-Based Parallel Optimization

Cited: 0
Authors
Zhang, Yongnan [1 ]
Zhou, Yonghua [2 ]
Fujita, Hamido [3 ,4 ,5 ]
Affiliations
[1] Beijing Univ Technol, Coll Metropolitan Transportat, Beijing Key Lab Traff Engn, Beijing 100124, Peoples R China
[2] Beijing Jiaotong Univ, Sch Automation & Intelligence, Beijing 100044, Peoples R China
[3] Univ Teknol Malaysia, Malaysia Japan Int Inst Technol MJIIT, Kuala Lumpur 54100, Malaysia
[4] Univ Granada, Andalusian Res Inst Data Sci & Computat Intelligen, Granada 18012, Spain
[5] Iwate Prefectural Univ, Reg Res Ctr, Takizawa 020-0693, Japan
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Training; Computational modeling; Optimization; Roads; Carbon dioxide; Decision making; Process control; Distributed multi-agent reinforcement learning; graph convolutional network; self-attention value decomposition; parallel optimization; low-carbon control; traffic network flow;
DOI
10.1109/TITS.2024.3452430
CLC Number
TU [Building Science];
Subject Classification Code
0813 ;
Abstract
The escalating air pollution resulting from traffic congestion has necessitated a shift in traffic control strategies towards green and low-carbon objectives. In this study, a graph convolutional network and self-attention value decomposition-based multi-agent actor-critic (GSAVD-MAC) approach is proposed to cooperatively control traffic network flow, where vehicle carbon emissions and traffic efficiency are incorporated into the reward function to minimize carbon emissions and traffic congestion. In this method, we design a local coordination mechanism based on a graph convolutional network that guides the multi-agent decision-making process by extracting the spatial topology and traffic flow characteristics of adjacent intersections. This enables distributed agents to make low-carbon decisions that account not only for their own interactions with the environment but also for local cooperation with neighboring agents. Further, we design a global coordination mechanism based on self-attention value decomposition that guides the multi-agent learning process by assigning weights to distributed agents according to their contribution degrees. This enables distributed agents to learn a globally optimal low-carbon control strategy in a cooperative and adaptive manner. In addition, we design a cloud computing-based parallel optimization algorithm for the GSAVD-MAC model to reduce computation time. Simulation experiments based on real road networks verify the advantages of the proposed method in terms of computational efficiency and control performance.
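The record contains no pseudocode; a minimal sketch of the self-attention value decomposition idea described above, assuming a mixing scheme in which per-agent value estimates are combined with scaled dot-product attention weights (the function and variable names here are illustrative, not from the paper), might look like:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_value_decomposition(agent_values, agent_keys, state_query):
    """Mix per-agent values into one global value.

    agent_values: (n,)   per-agent value estimates V_i
    agent_keys:   (n, d) per-agent key embeddings k_i
    state_query:  (d,)   query vector derived from the global state
    The attention weight of each agent reflects its contribution degree.
    """
    # scaled dot-product attention scores, one per agent
    scores = agent_keys @ state_query / np.sqrt(agent_keys.shape[1])
    weights = softmax(scores)            # non-negative, sum to 1
    v_total = float(weights @ agent_values)
    return v_total, weights

# toy example with 3 agents (intersections)
values = np.array([1.0, 2.0, 3.0])
keys = np.eye(3)                         # 3 agents, embedding dim 3
query = np.array([0.0, 0.0, 5.0])        # state attends mostly to agent 2
v_tot, w = attention_value_decomposition(values, keys, query)
```

Because the weights are a softmax, the mixed value always lies between the smallest and largest per-agent values, and agents whose keys align with the global-state query receive larger weights during credit assignment.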
Citations
Pages: 14
Related Papers
50 records in total
  • [31] A cloud-based operation optimization of building energy systems using a hierarchical multi-agent control
    Kuempel, Alexander
    Storek, Thomas
    Baranski, Marc
    Schumacher, Markus
    Muller, Dirk
    [J]. CLIMATE RESILIENT CITIES - ENERGY EFFICIENCY & RENEWABLES IN THE DIGITAL ERA (CISBAT 2019), 2019, 1343
  • [32] Cooperative Multi-Agent Deep Reinforcement Learning for Dynamic Virtual Network Allocation With Traffic Fluctuations
    Suzuki, Akito
    Kawahara, Ryoichi
    Harada, Shigeaki
    [J]. IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2022, 19 (03): : 1982 - 2000
  • [33] Satellite Network Traffic Scheduling Algorithm Based on Multi-Agent Reinforcement Learning
    Zhang, Tingting
    Zhang, Mingqi
    Yang, Lintao
    Dong, Tao
    Yin, Jie
    Liu, Zhihui
    Wu, Jing
    Jiang, Hao
    [J]. 19TH IEEE INTERNATIONAL SYMPOSIUM ON PARALLEL AND DISTRIBUTED PROCESSING WITH APPLICATIONS (ISPA/BDCLOUD/SOCIALCOM/SUSTAINCOM 2021), 2021, : 761 - 768
  • [34] Sharing of Energy Among Cooperative Households Using Distributed Multi-Agent Reinforcement Learning
    Ebell, Niklas
    Guetlein, Moritz
    Pruckner, Marco
    [J]. PROCEEDINGS OF 2019 IEEE PES INNOVATIVE SMART GRID TECHNOLOGIES EUROPE (ISGT-EUROPE), 2019,
  • [35] Distributed Cooperative Spectrum Sharing in UAV Networks Using Multi-Agent Reinforcement Learning
    Shamsoshoara, Alireza
    Khaledi, Mehrdad
    Afghah, Fatemeh
    Razi, Abolfazl
    Ashdown, Jonathan
    [J]. 2019 16TH IEEE ANNUAL CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE (CCNC), 2019,
  • [36] CLlight: Enhancing representation of multi-agent reinforcement learning with contrastive learning for cooperative traffic signal control
    Fu, Xiang
    Ren, Yilong
    Jiang, Han
    Lv, Jiancheng
    Cui, Zhiyong
    Yu, Haiyang
[J]. EXPERT SYSTEMS WITH APPLICATIONS, 2025, 262
  • [37] Multi-Agent Deep Reinforcement Learning with Clustering and Information Sharing for Traffic Light Cooperative Control
    Du T.
    Wang B.
    Cheng H.
    Luo L.
    Zeng N.
    [J]. Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2024, 46 (02): : 538 - 545
  • [38] Communication Optimization for Multi-agent Reinforcement Learning-based Traffic Control System with Explainable Protocol
    Wang, Han
    Wu, Haochen
    Lu, Juanwu
    Tang, Fang
    Delle Monache, Maria Laura
    [J]. 2023 IEEE 26TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS, ITSC, 2023, : 6068 - 6073
  • [39] Application of Traffic Light Control in Oversaturated Urban Network Using Multi-Agent Deep Reinforcement Learning
    Ei Mon, Ei
    Ochiai, Hideya
    Aswakul, Chaodit
    [J]. IEEE ACCESS, 2024, 12 : 82384 - 82395
  • [40] A multi-agent reinforcement learning based approach for intelligent traffic signal control
    Benhamza, Karima
    Seridi, Hamid
    Agguini, Meriem
    Bentagine, Amel
    [J]. EVOLVING SYSTEMS, 2024, : 2383 - 2397