Active Simulation of Transient Wind Field in a Multiple-Fan Wind Tunnel via Deep Reinforcement Learning

Cited: 15
Authors
Li, Shaopeng [1 ]
Snaiki, Reda [1 ,2 ]
Wu, Teng [1 ]
Affiliations
[1] Univ Buffalo, Dept Civil Struct & Environm Engn, Buffalo, NY 14260 USA
[2] Univ Quebec, Dept Construct Engn, Ecole Technol Super, Montreal, PQ H3C 1K3, Canada
Keywords
Transient wind; Downburst; Multiple-fan wind tunnel; Reinforcement learning (RL); Deep learning; Active control; BOUNDARY-LAYER; DOWNBURST; FLOW; OUTFLOWS; MODEL; GUST;
DOI
10.1061/(ASCE)EM.1943-7889.0001967
CLC Number
TH [Machinery and Instrument Industry];
Subject Classification Code
0802;
Abstract
The transient wind field during a nonsynoptic wind event (e.g., a thunderstorm downburst) presents a time-varying mean and nonstationary fluctuating components, and hence cannot easily be reproduced in a conventional boundary-layer wind tunnel with passive devices (e.g., spires, roughness elements, and barriers). As a promising alternative, the actively controlled multiple-fan wind tunnel has emerged as an effective means of generating laboratory-scale, spatiotemporally varying wind flows. The tracking accuracy of target wind speed histories at selected locations in the multiple-fan wind tunnel depends on the control signals input to individual fans. Conventional hand-designed linear control schemes cannot ensure good performance due to the complicated fluid dynamics and nonlinear interactions inside the wind tunnel. In addition, determining the control parameters involves a time-consuming manual tuning process. In this paper, an accurate and efficient control scheme based on deep reinforcement learning (RL) is developed to realize a prescribed spatiotemporally varying wind field in a multiple-fan wind tunnel. Specifically, a fully connected deep neural network (DNN) is trained using the RL methodology to perform active flow control in the multiple-fan wind tunnel. Accordingly, the optimal parameters (network weights) of the DNN-based nonlinear controller are obtained through an automated trial-and-error process. The controller complexity needed for active simulation of transient winds can be well captured by a DNN due to its powerful function approximation ability, and the "model-free" and "automation" features of the RL paradigm eliminate the need for expensive modeling of the fluid dynamics and costly hand tuning of control parameters.
Numerical results for the transient winds during a moving downburst event (including nose-shaped vertical profiles, time-varying mean wind speeds, and nonstationary fluctuations) demonstrate the good performance of the proposed deep RL-based control strategy in a simulation environment of the multiple-fan wind tunnel at the University at Buffalo.
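The control loop described in the abstract (a DNN controller whose weights are found by automated trial and error against a tunnel simulation) can be illustrated with a minimal sketch. Everything below is hypothetical: a toy first-order-lag surrogate stands in for the actual multiple-fan tunnel dynamics, a simple random-search update stands in for the paper's deep RL algorithm, and all fan counts, gains, and targets are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy surrogate of a multiple-fan tunnel (hypothetical, not the UB facility):
# fan commands drive probe wind speeds through a first-order lag and a
# fan-to-probe mixing matrix.
N_FANS, N_PROBES, T = 4, 2, 50
ALPHA = 0.3                                       # lag coefficient (assumed)
MIX = rng.uniform(0.1, 0.5, (N_PROBES, N_FANS))   # fan influence (assumed)

def simulate(policy, target):
    """Roll out the controller; return probe speed history and tracking error."""
    u = np.zeros(N_PROBES)                        # current probe speeds
    history, err = [], 0.0
    for t in range(T):
        obs = np.concatenate([u, target[t]])      # state: speeds + current target
        volts = policy(obs)                       # DNN outputs fan commands
        u = (1 - ALPHA) * u + ALPHA * (MIX @ volts)
        history.append(u.copy())
        err += np.sum((u - target[t]) ** 2)
    return np.array(history), err

def make_policy(w):
    """Fully connected network with one hidden layer, non-negative outputs."""
    W1, b1, W2, b2 = w
    def policy(x):
        h = np.tanh(x @ W1 + b1)
        return np.clip(h @ W2 + b2, 0.0, None)
    return policy

def init_weights(n_in, n_h, n_out):
    return [rng.normal(0, 0.5, (n_in, n_h)), np.zeros(n_h),
            rng.normal(0, 0.5, (n_h, n_out)), np.zeros(n_out)]

# Transient target with a time-varying mean (rises then decays at each probe)
t_axis = np.linspace(0, 1, T)
target = np.outer(np.sin(np.pi * t_axis), np.ones(N_PROBES))

# Automated trial-and-error search over controller weights (random-search
# stand-in for the paper's deep RL training)
best_w = init_weights(2 * N_PROBES, 16, N_FANS)
_, best_err = simulate(make_policy(best_w), target)
init_err = best_err
for _ in range(300):
    cand = [p + rng.normal(0, 0.05, p.shape) for p in best_w]
    _, err = simulate(make_policy(cand), target)
    if err < best_err:                            # keep only improving trials
        best_w, best_err = cand, err

print(f"tracking MSE after search: {best_err / (T * N_PROBES):.4f}")
```

The sketch keeps only the structural idea from the abstract: the controller is a nonlinear function approximator, and its parameters are improved purely by interacting with a simulation, with no explicit model of the flow physics.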
Pages: 14
Related Papers
50 records in total
  • [21] Experimental study of improved HAWT performance in simulated natural wind by an active controlled multi-fan wind tunnel
    Kazuhiko Toshimitsu
    Takahiko Narihara
    Hironori Kikugawa
    Arata Akiyoshi
    Yuuya Kawazu
    [J]. Journal of Thermal Science, 2017, 26 : 113 - 118
  • [24] Simulation of Wind Speed in the Ventilation Tunnel for Surge Tanks in Transient Processes
    Yang, Jiandong
    Wang, Huang
    Guo, Wencheng
    Yang, Weijia
    Zeng, Wei
    [J]. ENERGIES, 2016, 9 (02):
  • [25] Deep Reinforcement Learning for Automatic Generation Control of Wind Farms
    Vijayshankar, Sanjana
    Stanfel, Paul
    King, Jennifer
    Spyrou, Evangelia
    Johnson, Kathryn
    [J]. 2021 AMERICAN CONTROL CONFERENCE (ACC), 2021, : 1796 - 1802
  • [26] Deep Reinforcement Learning Based on Proximal Policy Optimization for the Maintenance of a Wind Farm with Multiple Crews
    Pinciroli, Luca
    Baraldi, Piero
    Ballabio, Guido
    Compare, Michele
    Zio, Enrico
    [J]. ENERGIES, 2021, 14 (20)
  • [27] Optimal scheduling of a wind energy dominated distribution network via a deep reinforcement learning approach
    Zhu, Jiaoyiling
    Hu, Weihao
    Xu, Xiao
    Liu, Haoming
    Pan, Li
    Fan, Haoyang
    Zhang, Zhenyuan
    Chen, Zhe
    [J]. RENEWABLE ENERGY, 2022, 201 : 792 - 801
  • [28] Intelligent wind farm control via deep reinforcement learning and high-fidelity simulations
    Dong, Hongyang
    Zhang, Jincheng
    Zhao, Xiaowei
    [J]. APPLIED ENERGY, 2021, 292
  • [29] Fuzzy-based collective pitch control for wind turbine via deep reinforcement learning
    Nabeel, Abdelhamid
    Lasheen, Ahmed
    Elshafei, Abdel Latif
    Zahab, Essam Aboul
    [J]. ISA TRANSACTIONS, 2024, 148 : 307 - 325
  • [30] Optimal control of a wind farm in time-varying wind using deep reinforcement learning
    Kim, Taewan
    Kim, Changwook
    Song, Jeonghwan
    You, Donghyun
    [J]. ENERGY, 2024, 303