Shared-memory parallelization technology of unstructured CFD solver for multi-core CPU/many-core GPU architecture

Cited by: 0
Authors
Zhang J. [1, 2]
Li R. [2]
Deng L. [2]
Dai Z. [2]
Liu J. [1]
Xu C. [1]
Affiliations
[1] National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha
[2] Computational Aerodynamic Institute, China Aerodynamic Research and Development Center, Mianyang
Keywords
CFD; GPU; memory access optimization; shared memory parallelization; unstructured-grid;
DOI
10.7527/S1000-6893.2023.28888
Abstract
Shared-memory parallelization of unstructured CFD on modern high-performance computer architectures is key to improving floating-point efficiency and enabling large-scale fluid simulation. However, because of the complex topological relationships, poor data locality, and data write conflicts in unstructured CFD computation, parallelizing traditional algorithms in shared memory so as to efficiently exploit the hardware capabilities of multi-core CPUs and many-core GPUs has become a significant challenge. Starting from industrial-grade unstructured CFD software, a variety of shared-memory parallel algorithms are designed and implemented through in-depth analysis of the computing behavior and memory access patterns, and data-locality optimization techniques such as grid reordering, loop fusion, and multi-level memory access are applied to further improve performance. Specifically, two parallel modes, loop-based and task-based, are comprehensively studied for multi-core CPU architectures, and a novel reduction parallel strategy based on a multi-level memory access optimization method is proposed for the many-core GPU architecture. All the implemented parallel methods and optimization techniques are analyzed and evaluated with test cases of the M6 wing and the CHN-T1 airplane. The results show that the division-and-replication parallel strategy performs best on the CPU platform, and that Cuthill-McKee grid renumbering and loop fusion each improve performance by 10% through better memory access. On the GPU platform, the proposed reduction strategy combined with multi-level memory access optimization yields a significant acceleration: for the hot-spot subroutine with data races, the speed-up is further improved by a factor of 3, and the overall speed-up reaches 127. © 2024 Chinese Society of Astronautics. All rights reserved.
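To illustrate the division-and-replication strategy named in the abstract, the following is a minimal C++/OpenMP sketch, not the paper's actual code: the names Face, face_flux, and accumulate_residuals are assumptions introduced for the example. Each thread scatters face fluxes into its own private copy of the cell residual array, and the private copies are then reduced, so no two threads ever write the same cell concurrently.

// Hypothetical sketch of a "division and replication" shared-memory reduction
// for a face-based unstructured finite-volume residual accumulation.
#include <omp.h>
#include <vector>

struct Face {          // illustrative face record of an unstructured mesh
    int left, right;   // indices of the two cells sharing the face
};

void accumulate_residuals(const std::vector<Face>& faces,
                          const std::vector<double>& face_flux,
                          std::vector<double>& residual)   // size = n_cells
{
    const int n_cells   = static_cast<int>(residual.size());
    const int n_threads = omp_get_max_threads();

    // Replication: one private residual array per thread.
    std::vector<std::vector<double>> local(n_threads,
                                           std::vector<double>(n_cells, 0.0));

    #pragma omp parallel
    {
        std::vector<double>& mine = local[omp_get_thread_num()];

        // Division: faces are statically divided among threads; each thread
        // scatters fluxes only into its private copy, so there is no write
        // conflict in the face loop.
        #pragma omp for schedule(static)
        for (int f = 0; f < static_cast<int>(faces.size()); ++f) {
            mine[faces[f].left]  += face_flux[f];
            mine[faces[f].right] -= face_flux[f];
        }
        // Implicit barrier here: all private copies are complete.

        // Reduction: each thread sums a disjoint slice of cells over all
        // private copies, so the final write phase is also conflict-free.
        #pragma omp for schedule(static)
        for (int c = 0; c < n_cells; ++c)
            for (int t = 0; t < n_threads; ++t)
                residual[c] += local[t][c];
    }
}

The replicated arrays cost memory proportional to the thread count times the cell count, but they remove all atomics and locks from the face loop, which is consistent with the reported result that division and replication performs best on the CPU platform.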
References (27 in total)
  • [21] KARYPIS G, KUMAR V. A fast and high quality multilevel scheme for partitioning irregular graphs[J]. SIAM Journal on Scientific Computing, 1998, 20(1): 359-392.
  • [22] CUTHILL E, MCKEE J. Reducing the bandwidth of sparse symmetric matrices[C]. Proceedings of the 1969 24th National Conference, 1969: 157-172.
  • [23] FOURNIER Y, et al. Optimizing Code_Saturne computations on petascale systems[J]. Computers & Fluids, 2011, 45(1): 103-108.
  • [24] HUSBANDS P, et al. Effects of ordering strategies and programming paradigms on sparse matrix computations[J]. SIAM Review, 2002, 44(3): 373-393.
  • [25] LOHNER R. Cache-efficient renumbering for vectorization[J]. International Journal for Numerical Methods in Biomedical Engineering, 2010, 26(5): 628-636.
  • [26] BALOGH G D, REGULY I Z, et al. Locality optimized unstructured mesh algorithms on GPUs[J]. Journal of Parallel and Distributed Computing, 2019, 134: 50-64.
  • [27] YU Y G, ZHOU Z, HUANG J T, et al. Aerodynamic design of a standard model CHN-T1 for single-aisle passenger aircraft[J]. Acta Aerodynamica Sinica, 2018, 36(3): 505-513.