Distributed Learning Algorithm for Distributed PV Large-Scale Access to Power Grid Based on Machine Learning

Cited: 0
Authors
Lei, Zhen [1 ]
Yang, Yong-biao [2 ]
Xu, Xiao-hui [3 ]
Affiliations
[1] State Grid Jiangsu Elect Power Co, Nanjing, Peoples R China
[2] Southeast Univ, Nanjing, Peoples R China
[3] China Elect Power Res Inst, Nanjing, Peoples R China
Keywords
Operation efficiency; Photovoltaic capacity; Radial structure; Power system
DOI
10.1007/978-3-030-36402-1_47
Chinese Library Classification (CLC)
TP301 [Theory and Methods]
Discipline code
081202
Abstract
Because of its long prediction time and wide data-filtering range, the traditional algorithm yields low system operation efficiency. To address this, a distributed learning approach based on machine learning is applied to predict power grid output. First, a grid output prediction model is established to limit the system's line losses and transformer losses. Second, building on the distributed photovoltaic (PV) generation output prediction model, the vector moment method and the information method are used to narrow the search space. From the data concentration and the fitness function values, a formula for predicting the voltage output of distribution network nodes with distributed PV is derived, completing the grid output prediction algorithm. Finally, experiments show that the proposed output prediction algorithm for large-scale distributed PV access to the power grid can effectively improve system operation efficiency.
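The search-space-narrowing step described in the abstract (rank candidates by a fitness function, keep only the most promising region) can be sketched as follows. The paper's actual formulas are not reproduced in this record, so the quadratic-residual fitness function, the linear irradiance-to-power candidate model, and all numeric values below are illustrative assumptions, not the authors' method.

```python
import random

# Hypothetical fitness function: lower squared residual between a candidate
# linear model's prediction and the measured PV power means a fitter candidate.
def fitness(candidate, samples):
    a, b = candidate
    return sum((a * irr + b - p) ** 2 for irr, p in samples)

# Narrow the search space: rank candidate models by fitness and keep only the
# top fraction (a stand-in for the paper's vector-moment / information step).
def narrow_search_space(candidates, samples, keep_ratio=0.2):
    ranked = sorted(candidates, key=lambda c: fitness(c, samples))
    keep = max(1, int(len(ranked) * keep_ratio))
    return ranked[:keep]

# Toy measurements: (irradiance, PV active power) pairs following p = 0.8*irr + 1.
samples = [(irr, 0.8 * irr + 1.0) for irr in range(1, 11)]

random.seed(0)
candidates = [(random.uniform(0, 2), random.uniform(-2, 2)) for _ in range(200)]
best = narrow_search_space(candidates, samples)[0]
prediction = best[0] * 12 + best[1]  # predicted output at irradiance 12
```

Only the surviving candidates would then be refined further, which is the efficiency gain the abstract claims over searching the full parameter space.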
Pages: 439-447 (9 pages)