Enabling Highly Efficient k-Means Computations on the SW26010 Many-Core Processor of Sunway TaihuLight

Cited by: 8
|
Authors
Li, Min [1 ,2 ]
Yang, Chao [3 ,4 ,5 ]
Sun, Qiao [1 ]
Ma, Wen-Jing [1 ]
Cao, Wen-Long [1 ,2 ]
Ao, Yu-Long [3 ,4 ,5 ]
Affiliations
[1] Chinese Acad Sci, Inst Software, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[3] Peking Univ, Sch Math Sci, Beijing 100871, Peoples R China
[4] Peking Univ, Ctr Data Sci, Beijing 100871, Peoples R China
[5] Peng Cheng Lab, Shenzhen 518052, Peoples R China
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China;
Keywords
parallel k-means; performance optimization; SW26010 processor; Sunway TaihuLight; ALGORITHM; PERFORMANCE;
DOI
10.1007/s11390-019-1900-5
CLC classification
TP3 [computing technology, computer technology];
Discipline code
0812 ;
Abstract
With the advent of the big data era, the amounts of sampling data and the dimensions of data features are growing rapidly. Fast and efficient clustering of unlabeled samples based on feature similarities is therefore highly desired. As a fundamental primitive for data clustering, the k-means operation is receiving increasing attention today. To achieve high-performance k-means computations on modern multi-core/many-core systems, we propose a matrix-based fused framework that attains high performance by conducting computations on a distance matrix while improving memory reuse through the fusion of the distance-matrix computation and the nearest-centroid reduction. We implement and optimize the parallel k-means algorithm on the SW26010 many-core processor, the major horsepower of Sunway TaihuLight. In particular, we design a task mapping strategy for load-balanced task distribution, a data sharing scheme to reduce the memory footprint, and a register blocking strategy to increase data locality. Optimization techniques such as instruction reordering and double buffering are further applied to improve the sustained performance. Discussions on block-size tuning and performance modeling are also presented. We show by experiments on both randomly generated and real-world datasets that our parallel implementation of k-means on SW26010 can sustain a double-precision performance of over 348.1 Gflops, which is 46.9% of the peak performance and 84% of the theoretical performance upper bound on a single core group, and can achieve nearly ideal scalability to the whole SW26010 processor of four core groups. Performance comparisons with the previous state of the art on both CPU and GPU are also provided to show the superiority of our optimized k-means kernel.
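The matrix-based formulation the abstract describes can be illustrated in plain NumPy. This is a minimal sketch of the general idea, not the authors' SW26010 implementation: the squared distance expands as ||x - c||^2 = ||x||^2 - 2 x·c + ||c||^2, so the dominant cost becomes a GEMM-like product X·Cᵀ, and the nearest-centroid reduction can be fused with it by taking the argmin over partial distances (the per-sample ||x||^2 term is constant and can be dropped). Function names here are illustrative.

```python
import numpy as np

def assign_fused(X, C):
    """Matrix-based assignment step: for each row of X (n, d), find the
    index of the nearest centroid in C (k, d)."""
    cross = X @ C.T                     # (n, k) inner products: the GEMM-like core
    c_norm = np.sum(C * C, axis=1)      # (k,) squared centroid norms
    # Fused nearest-centroid reduction over partial distances;
    # ||x||^2 is omitted since it does not affect the argmin.
    return np.argmin(c_norm[None, :] - 2.0 * cross, axis=1)

def kmeans(X, k, iters=10, seed=0):
    """Lloyd-style k-means built on the matrix-based assignment above."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        labels = assign_fused(X, C)
        for j in range(k):              # centroid update
            pts = X[labels == j]
            if len(pts):
                C[j] = pts.mean(axis=0)
    return C, labels
```

On SW26010 the paper additionally blocks this computation across core groups and registers; the sketch only shows why casting assignment as a matrix product and fusing the reduction avoids materializing and re-reading the full n-by-k distance matrix.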
Pages: 77-93
Page count: 17
Related Papers
44 records
  • [41] Large-Scale Hierarchical k-means for Heterogeneous Many-Core Supercomputers
    Li, Liandeng
    Yu, Teng
    Zhao, Wenlai
    Fu, Haohuan
    Wang, Chenyu
    Tan, Li
    Yang, Guangwen
    Thomson, John
    PROCEEDINGS OF THE INTERNATIONAL CONFERENCE FOR HIGH PERFORMANCE COMPUTING, NETWORKING, STORAGE, AND ANALYSIS (SC'18), 2018,
  • [42] Large-Scale Automatic K-Means Clustering for Heterogeneous Many-Core Supercomputer
    Yu, Teng
    Zhao, Wenlai
    Liu, Pan
    Janjic, Vladimir
    Yan, Xiaohan
    Wang, Shicai
    Fu, Haohuan
    Yang, Guangwen
    Thomson, John
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2020, 31 (05) : 997 - 1008
  • [43] Publisher Correction: xMath2.0: a high-performance extended math library for SW26010-Pro many-core processor
    Liu, Fangfang
    Ma, Wenjing
    Zhao, Yuwen
    Chen, Daokun
    Hu, Yi
    Lu, Qinglin
    Yin, WanWang
    Yuan, Xinhui
    Jiang, Lijuan
    Yan, Hao
    Li, Min
    Wang, Hongsen
    Wang, Xinyu
    Yang, Chao
    CCF TRANSACTIONS ON HIGH PERFORMANCE COMPUTING, 2023, 5 (01) : 97 - 97
  • [44] xMath2.0: a high-performance extended math library for SW26010-Pro many-core processor (Oct, 10.1007/s42514-022-00126-8, 2022)
    Liu, Fangfang
    Ma, Wenjing
    Zhao, Yuwen
    Chen, Daokun
    Hu, Yi
    Lu, Qinglin
    Yin, WanWang
    Yuan, Xinhui
    Jiang, Lijuan
    Yan, Hao
    Li, Min
    Wang, Hongsen
    Wang, Xinyu
    Yang, Chao
    CCF TRANSACTIONS ON HIGH PERFORMANCE COMPUTING, 2023, 5 (01) : 97 - 97