LT Codes with Feedback: Accelerate the Distributed Matrix-Vector Multiplication with Stragglers

Times Cited: 0
Authors
Yang, Xiao [1 ]
Jiang, Ming [1 ,2 ]
Zhao, Chunming [1 ,2 ]
Affiliations
[1] Southeast University, National Mobile Communications Research Laboratory, Nanjing, China
[2] Purple Mountain Laboratories, Nanjing, China
Funding
National Natural Science Foundation of China;
Keywords
rateless feedback codes; distributed computation; stragglers;
DOI
10.1109/ipccc47392.2019.8958745
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
In this paper, we propose a coding scheme for distributed matrix-vector multiplication that builds upon Luby transform (LT) codes with feedback. The ideal soliton distribution is used in our LT coding scheme to encode the sub-matrices. In addition, the belief propagation (BP) decoding algorithm is modified to exploit the feedback information. Compared with other coded distributed computation schemes that tolerate straggling servers, our approach achieves lower computation latency when the overall delay incurred by the encoding, mapping, and decoding processes is considered. Furthermore, we compare the storage loads of different schemes and show that LT coding with feedback has a strong comparative advantage in these straggler-tolerant computation scenarios.
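To make the setting concrete, the following is a minimal Python sketch, under loose assumptions, of LT-coded distributed matrix-vector multiplication with ideal-soliton degrees and a peeling (BP-style) decoder. It is an illustration only, not the authors' implementation: the paper's feedback mechanism and modified BP decoder are not reproduced, and the names ideal_soliton, lt_encode, peel_decode and the parameters k and overhead are hypothetical.

import numpy as np


def ideal_soliton(k, rng):
    # Ideal soliton distribution over degrees 1..k:
    # P(1) = 1/k, P(d) = 1/(d*(d-1)) for d = 2..k (telescopes to 1).
    probs = np.array([1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)])
    return int(rng.choice(np.arange(1, k + 1), p=probs))


def lt_encode(A, k, n_coded, rng):
    # Split A into k equal row blocks (row count of A assumed divisible by k)
    # and build n_coded coded sub-matrices, each a sum of d random blocks.
    blocks = np.split(A, k, axis=0)
    coded, supports = [], []
    for _ in range(n_coded):
        d = ideal_soliton(k, rng)
        idx = rng.choice(k, size=d, replace=False)
        coded.append(sum(blocks[i] for i in idx))
        supports.append({int(i) for i in idx})
    return coded, supports


def peel_decode(results, supports, k):
    # Peeling (BP-style) decoder over the reals: results[i] is the product of
    # coded sub-matrix i with x; recover the k block products of A with x.
    results = [r.astype(float).copy() for r in results]
    supports = [set(s) for s in supports]
    decoded = [None] * k
    progress = True
    while progress and any(b is None for b in decoded):
        progress = False
        for r, s in zip(results, supports):
            if len(s) == 1:                      # degree-one coded symbol found
                j = next(iter(s))
                if decoded[j] is None:
                    decoded[j] = r.copy()
                    progress = True
        for r, s in zip(results, supports):      # subtract decoded blocks
            for j in [j for j in s if decoded[j] is not None]:
                r -= decoded[j]
                s.discard(j)
    if any(b is None for b in decoded):
        raise RuntimeError("peeling decoder stalled; more coded results needed")
    return np.concatenate(decoded)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k, overhead = 8, 2.0                         # illustrative parameters
    A = rng.standard_normal((64, 16))            # 64 rows -> 8 blocks of 8 rows
    x = rng.standard_normal(16)
    coded, supports = lt_encode(A, k, int(overhead * k), rng)
    worker_results = [c @ x for c in coded]      # one product per worker
    try:
        y = peel_decode(worker_results, supports, k)
        print("recovered A @ x:", np.allclose(y, A @ x))
    except RuntimeError as err:
        print("decoding failed:", err)

Note that with the ideal soliton distribution alone the peeling decoder can stall even at moderate overhead, which is one motivation for adding feedback (or a robust degree distribution) as the paper does; the try/except above only reports such a stall in this simplified sketch.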
Pages: 6