One-Step Forward and Backtrack: Overcoming Zig-Zagging in Loss-Aware Quantization Training

Cited by: 0
Authors
Ma, Lianbo [1 ]
Zhou, Yuee [1 ]
Ma, Jianlun [1 ]
Yu, Guo [2 ]
Li, Qing [3 ]
Affiliations
[1] Northeastern Univ, Software Coll, Shenyang, Peoples R China
[2] Nanjing Tech Univ, Inst Intelligent Mfg, Nanjing, Peoples R China
[3] Peng Cheng Lab, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
NEURAL-NETWORKS;
DOI
Not available
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Weight quantization is an effective technique for compressing deep neural networks so that they can be deployed on resource-limited edge devices. Traditional loss-aware quantization methods commonly replace the full-precision gradient with the quantized gradient. However, we find that the resulting gradient error leads to an unexpected zig-zagging issue during gradient descent: the gradient directions oscillate rapidly, which seriously slows model convergence. Accordingly, this paper proposes a one-step-forward-and-backtrack scheme for loss-aware quantization that yields a more accurate and stable gradient direction to counter this issue. During gradient descent, a one-step forward search finds the trial gradient of the next step, which is used to adjust the current-step gradient towards the direction of fast convergence. We then backtrack to the current step and update the full-precision and quantized weights using both the current-step gradient and the trial gradient. Theoretical analyses and experiments on benchmark deep models demonstrate the effectiveness and competitiveness of the proposed method, which is especially strong in convergence performance.
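The abstract outlines the procedure (quantize, take a trial step, read the trial gradient, then backtrack and update with a blended direction) but not the exact quantizer or combination rule. The following is a minimal illustrative sketch under assumed choices: a uniform symmetric quantizer and a hypothetical mixing coefficient `beta`; the paper's actual loss-aware quantizer and adjustment rule may differ.

```python
import numpy as np

def quantize(w, num_bits=4):
    # Uniform symmetric quantizer -- an illustrative stand-in for the
    # paper's loss-aware quantizer (exact form not given in the abstract).
    scale = np.max(np.abs(w)) / (2 ** (num_bits - 1) - 1) + 1e-12
    return np.round(w / scale) * scale

def forward_backtrack_step(w, grad_fn, lr=0.1, beta=0.5, num_bits=4):
    """One optimization step sketching the one-step-forward-and-backtrack idea.

    grad_fn(q) returns the loss gradient evaluated at quantized weights q.
    beta (hypothetical) blends the trial gradient with the current one.
    """
    g_cur = grad_fn(quantize(w, num_bits))           # current-step gradient
    w_trial = w - lr * g_cur                         # one-step forward: trial point
    g_trial = grad_fn(quantize(w_trial, num_bits))   # trial gradient of the next step
    g_mix = (1 - beta) * g_cur + beta * g_trial      # adjusted, more stable direction
    w_new = w - lr * g_mix                           # backtrack: update from the ORIGINAL w
    return w_new, quantize(w_new, num_bits)
```

On a toy quadratic loss, blending in the lookahead gradient damps the direction oscillation that pure quantized-gradient descent exhibits, which is the zig-zagging behavior the paper targets.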
Pages: 14246-14254 (9 pages)