A self-sustained EV charging framework with N-step deep reinforcement learning

Cited by: 3
Authors
Sykiotis, Stavros [1]
Menos-Aikateriniadis, Christoforos [2,3]
Doulamis, Anastasios [1]
Doulamis, Nikolaos [1]
Georgilakis, Pavlos S. [2]
Affiliations
[1] Natl Tech Univ Athens, Sch Rural Surveying & Geoinformat Engn, 9 Iroon Polytech Str, Athens 15773, Greece
[2] Natl Tech Univ Athens, Sch Elect & Comp Engn, Heroon Polytech 9, Athens 15773, Greece
[3] Intracom SA Telecom Solut, Telco Software Dept, 19-7 Km Markopoulou Ave, Athens 19002, Greece
Source
Keywords
Smart grid; Smart charging; Demand response; Electric vehicle; Solar power; Self-consumption; SERVICES; STRATEGY;
DOI
10.1016/j.segan.2023.101124
Chinese Library Classification (CLC)
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering]
Discipline Code
0807; 0820
Abstract
Decarbonization of the transport sector is a major challenge in the transition towards net-zero emissions. Even though the penetration of electric vehicles (EVs) in the passenger vehicle fleet is increasing, the energy mix is not yet dominated by renewables. This leads to the use of fossil-based power generation for EV charging, especially during peak hours. In this work, we introduce a residential smart EV charging framework that prioritizes solar photovoltaic (PV) power self-consumption to accelerate the transition to a carbon-neutral passenger vehicle fleet. Our approach employs N-step deep reinforcement learning to charge the EV with clean energy from a PV system, without neglecting other major factors that influence end users' behavior, such as electricity cost or EV charging tendencies. Historical smart-meter data from the Pecan Street dataset on total consumption, EV demand and solar generation are used as input features to train the deep RL agent, which decides in real time whether or not to charge the EV, without requiring foresight of future observations. Experimental results on six residential houses validate that, compared to uncontrolled EV charging, the proposed method can increase the average self-consumption of solar energy for EV charging by 19.66%, reduce network stress by 7%, and lower the electricity bill by 10.3%. © 2023 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
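The abstract names N-step deep reinforcement learning as the training method but does not spell out the agent architecture or the reward. Below is a minimal sketch of the underlying idea under stated assumptions: a value-based agent with a binary charge/no-charge action, a hypothetical reward that credits charging from surplus PV power and penalises grid imports, and an N-step return that accumulates N observed rewards before bootstrapping from a value estimate. The helper names (reward, n_step_return, bootstrap_q) and all numeric values are illustrative, not the authors' implementation; only the input quantities (PV generation, total household consumption, electricity cost) mirror the features mentioned in the abstract.

```python
# Illustrative sketch only: an N-step return for a DQN-style agent whose
# binary action is to charge (True) or not charge (False) the EV.
# The reward shaping below is an assumption, not the paper's reward.

GAMMA = 0.99   # discount factor (assumed)
N_STEPS = 4    # the "N" of the N-step return (assumed)


def reward(pv_generation, total_consumption, price, charge_kw, charging):
    """Hypothetical reward: credit PV self-consumption, penalise grid cost."""
    if not charging:
        return 0.0
    surplus_pv = max(pv_generation - total_consumption, 0.0)  # PV left after the house load
    pv_used = min(surplus_pv, charge_kw)                      # part of the charge covered by PV
    grid_used = charge_kw - pv_used                           # remainder drawn from the grid
    return pv_used - price * grid_used


def n_step_return(rewards, bootstrap_q, gamma=GAMMA):
    """G_t = r_t + g*r_{t+1} + ... + g^(N-1)*r_{t+N-1} + g^N * max_a Q(s_{t+N}, a)."""
    g = 0.0
    for k, r in enumerate(rewards):
        g += (gamma ** k) * r
    return g + (gamma ** len(rewards)) * bootstrap_q


if __name__ == "__main__":
    # Toy rollout of N_STEPS charging decisions (made-up observations).
    rollout = [
        reward(pv_generation=4.0, total_consumption=1.0, price=0.20, charge_kw=3.0, charging=True),
        reward(pv_generation=3.0, total_consumption=1.5, price=0.20, charge_kw=3.0, charging=True),
        reward(pv_generation=0.5, total_consumption=1.0, price=0.35, charge_kw=3.0, charging=False),
        reward(pv_generation=0.0, total_consumption=1.2, price=0.35, charge_kw=3.0, charging=False),
    ]
    assert len(rollout) == N_STEPS
    # bootstrap_q stands in for max_a Q(s_{t+N}, a) from a target network.
    print(round(n_step_return(rollout, bootstrap_q=1.0), 4))
```

Compared with a one-step target, accumulating N observed rewards before bootstrapping lets the delayed benefit of waiting for midday PV surplus propagate back to earlier charging decisions in fewer updates, which is the usual motivation for the N-step variant.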
Pages: 10
Related Papers
50 records in total
  • [1] Synchronous n-Step Method for Independent Q-Learning in Multi-Agent Deep Reinforcement Learning
    Gong, Xudong
    Ding, Bo
    Xu, Jie
    Wang, Huaimin
    Zhou, Xing
    Jia, Hongda
    [J]. 2019 IEEE SMARTWORLD, UBIQUITOUS INTELLIGENCE & COMPUTING, ADVANCED & TRUSTED COMPUTING, SCALABLE COMPUTING & COMMUNICATIONS, CLOUD & BIG DATA COMPUTING, INTERNET OF PEOPLE AND SMART CITY INNOVATION (SMARTWORLD/SCALCOM/UIC/ATC/CBDCOM/IOP/SCI 2019), 2019, : 460 - 467
  • [2] Reinforcement learning control with n-step information for wastewater treatment systems
    Li, Xin
    Wang, Ding
    Zhao, Mingming
    Qiao, Junfei
    [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133
  • [3] Value Function Transfer for Deep Multi-Agent Reinforcement Learning Based on N-Step Returns
    Liu, Yong
    Hu, Yujing
    Gao, Yang
    Chen, Yingfeng
    Fan, Changjie
    [J]. PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 457 - 463
  • [4] Dynamic Pricing for EV Charging Stations: A Deep Reinforcement Learning Approach
    Zhao, Zhonghao
    Lee, Carman K. M.
[J]. IEEE TRANSACTIONS ON TRANSPORTATION ELECTRIFICATION, 2022, 8 (02): 2456 - 2468
  • [5] PROLIFIC: Deep Reinforcement Learning for Efficient EV Fleet Scheduling and Charging
    Ma, Junchi
    Zhang, Yuan
    Duan, Zongtao
    Tang, Lei
    [J]. SUSTAINABILITY, 2023, 15 (18)
  • [6] Constrained EV Charging Scheduling Based on Safe Deep Reinforcement Learning
    Li, Hepeng
    Wan, Zhiqiang
    He, Haibo
    [J]. IEEE TRANSACTIONS ON SMART GRID, 2020, 11 (03) : 2427 - 2439
  • [7] Promoting self-sustained learning in higher education: the ISEE framework
    Yang, Min
    [J]. TEACHING IN HIGHER EDUCATION, 2015, 20 (06) : 601 - 613
  • [8] Efficient Reinforcement Learning With the Novel N-Step Method and V-Network
    Zhang, Miaomiao
    Zhang, Shuo
    Wu, Xinying
    Shi, Zhiyi
    Deng, Xiangyang
    Wu, Edmond Q.
    Xu, Xin
    [J]. IEEE TRANSACTIONS ON CYBERNETICS, 2024,
  • [9] Optimal EV Fast Charging Station Deployment Based on a Reinforcement Learning Framework
    Zhao, Zhonghao
    Lee, Carman K. M.
    Ren, Jingzheng
    Tsang, Yung Po
    [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (08) : 8053 - 8065
  • [10] An N-step Look Ahead Algorithm Using Mixed (On and Off) Policy Reinforcement Learning
    Kuchibhotla, Vivek
    Harshitha, P.
    Goyal, Shobhit
    [J]. Proceedings of the 3rd International Conference on Intelligent Sustainable Systems, ICISS 2020, 2020, : 677 - 681