Matrix Sparsification for Coded Matrix Multiplication

Cited: 0
Authors
Suh, Geewon [1]
Lee, Kangwook [1]
Suh, Changho [1]
Affiliations
[1] Korea Adv Inst Sci & Technol, Sch EE, Daejeon, South Korea
Keywords
DOI
N/A
Chinese Library Classification
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
Coded computation is a framework for providing redundancy in distributed computing systems to make them robust to slower nodes, or stragglers. In a recent work of Lee et al., the authors propose a coded computation scheme for distributedly computing Ax in the presence of stragglers. The proposed algorithm first encodes the data matrix A to obtain an encoded matrix F. It then computes Fx using distributed processors, waits for some subset of the processors to finish their computations, and decodes Ax from the partial computation results. In another recent work, Dutta et al. explore a new tradeoff between the sparsity of the encoded matrix F and the number of processors one must wait for to compute Ax. They show that one can introduce a large number of zeros into F to reduce the computational overhead while keeping the number of processors to wait for relatively low; hence, one can potentially further speed up the distributed computation. In this work, motivated by this observation, we study the sparsity of the encoded matrix for coded computation. Our goal is to characterize the fundamental limits on the sparsity level. We first show that the Short-Dot scheme is optimal when the maximum distance separable (MDS) matrix is fixed. Further, by also designing this MDS matrix, we propose a new encoding scheme that achieves a strictly larger sparsity than the existing schemes. We also provide an information-theoretic upper bound on the sparsity.
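The encode-compute-decode pipeline described above can be sketched with a toy MDS code. The snippet below is a minimal illustration, not the paper's Short-Dot construction: all parameters (k, n, the Vandermonde generator, the choice of responding workers) are hypothetical. A is split row-wise into k blocks, encoded into n > k coded blocks F_i, each worker computes F_i x, and Ax is decoded from any k responses.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 2, 4                      # k data blocks, n workers (tolerates n - k stragglers)
m, d = 6, 5                      # A is m x d, with m divisible by k
A = rng.standard_normal((m, d))
x = rng.standard_normal(d)

blocks = np.split(A, k)          # k row-blocks of A, each (m//k) x d

# Vandermonde generator: any k of its n rows are linearly independent (MDS property).
G = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)  # n x k

# Encoded matrix F: worker i holds the coded block F_i = sum_j G[i, j] * blocks[j].
F = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]

# Each worker computes F_i @ x; suppose only workers 1 and 3 finish in time.
results = {i: F[i] @ x for i in (1, 3)}

# Decode: the k x k submatrix of G on the responding workers is invertible,
# so solving G_s Y = (partial results) recovers each blocks[j] @ x.
idx = sorted(results)
Gs = G[idx, :]                                  # k x k invertible submatrix
Y = np.stack([results[i] for i in idx])         # k partial result vectors
decoded = np.linalg.solve(Gs, Y).reshape(-1)    # concatenated block products = A @ x

assert np.allclose(decoded, A @ x)
```

With these parameters the system waits for any 2 of 4 workers, so up to 2 stragglers can be ignored; the sparsity question studied in the paper is how many entries of the coded blocks F_i can be made zero while preserving this recovery guarantee.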
Pages: 1271-1278
Page count: 8
Related Papers
50 records in total
  • [1] GRAPH SPARSIFICATION BY APPROXIMATE MATRIX MULTIPLICATION
    Charalambides, Neophytos
    Hero, Alfred O., III
    2023 IEEE STATISTICAL SIGNAL PROCESSING WORKSHOP, SSP, 2023, : 180 - 184
  • [2] Hierarchical Coded Matrix Multiplication
    Kiani, Shahrzad
    Ferdinand, Nuwan
    Draper, Stark C.
    2019 16TH CANADIAN WORKSHOP ON INFORMATION THEORY (CWIT), 2019,
  • [3] Private Coded Matrix Multiplication
    Kim, Minchul
    Yang, Heecheol
    Lee, Jungwoo
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2020, 15 : 1434 - 1443
  • [4] Coded Sparse Matrix Multiplication
    Wang, Sinong
    Liu, Jiashang
    Shroff, Ness
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [5] Coded Matrix Chain Multiplication
    Fan, Xiaodi
    Saldivia, Angel
    Soto, Pedro
    Li, Jun
    2021 IEEE/ACM 29TH INTERNATIONAL SYMPOSIUM ON QUALITY OF SERVICE (IWQOS), 2021,
  • [6] Hierarchical Coded Matrix Multiplication
    Kianidehkordi, Shahrzad
    Ferdinand, Nuwan
    Draper, Stark C.
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2021, 67 (02) : 726 - 754
  • [7] Variable Coded Batch Matrix Multiplication
    Tauz, Lev
    Dolecek, Lara
    2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021,
  • [8] Locally Recoverable Coded Matrix Multiplication
    Jeong, Haewon
    Ye, Fangwei
    Grover, Pulkit
    2018 56TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), 2018, : 715 - 722
  • [9] APPROXIMATE WEIGHTED CR CODED MATRIX MULTIPLICATION
    Charalambides, Neophytos
    Pilanci, Mert
    Hero, Alfred O., III
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 5095 - 5099
  • [10] On the Optimal Recovery Threshold of Coded Matrix Multiplication
    Dutta, Sanghamitra
    Fahim, Mohammad
    Haddadpour, Farzin
    Jeong, Haewon
    Cadambe, Viveck
    Grover, Pulkit
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2020, 66 (01) : 278 - 301