Learned Image Compression with Fixed-point Arithmetic

Citations: 6
Authors
Sun, Heming [1 ,2 ,3 ]
Yu, Lu [2 ]
Katto, Jiro [1 ]
Affiliations
[1] Waseda Univ, Shinjuku City, Japan
[2] Zhejiang Univ, Hangzhou, Peoples R China
[3] JST PRESTO, Saitama, Japan
Source
2021 PICTURE CODING SYMPOSIUM (PCS) | 2021
Keywords
Image compression; neural networks; quantization; fixed-point; fine-tuning
DOI
10.1109/PCS50896.2021.9477496
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Codes
0808; 0809
Abstract
Learned image compression (LIC) has achieved coding performance superior to traditional image compression standards such as HEVC intra in terms of both PSNR and MS-SSIM. However, most LIC frameworks are based on floating-point arithmetic, which has two potential problems. First, traditional 32-bit floating-point incurs a large memory and computational cost. Second, decoding might fail because of floating-point errors that differ across encoding/decoding platforms. To solve these two problems: 1) We linearly quantize the weights in the main path to 8-bit fixed-point arithmetic and propose a fine-tuning scheme to reduce the coding loss caused by the quantization; the analysis transform and synthesis transform are fine-tuned layer by layer. 2) We exploit a look-up table (LUT) for the cumulative distribution function (CDF) to avoid floating-point error. When a latent node follows a non-zero-mean Gaussian distribution, we restrict its value to a certain range around the mean so that the CDF LUT can be shared across different mean values. As a result, 8-bit weight quantization achieves negligible coding loss compared with the 32-bit floating-point anchor. In addition, the proposed CDF LUT ensures correct coding on various CPU and GPU hardware platforms.
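As a sketch of the first idea in the abstract, the following Python illustrates symmetric per-layer linear quantization of weights to 8-bit integers. The max-abs scale choice, the function names, and the fine-tuning outline are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np

def quantize_weights_int8(w):
    """Symmetric linear quantization of one layer's weights to int8 (assumed max-abs scale)."""
    scale = np.max(np.abs(w)) / 127.0          # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Float reconstruction used while fine-tuning the remaining layers."""
    return q.astype(np.float32) * scale

# Layer-by-layer fine-tuning as described in the abstract (outline only):
# for each layer of the analysis/synthesis transform, quantize and freeze
# that layer, then retrain the still-floating layers to absorb the error.
```

The second idea, the shared CDF LUT, can be sketched similarly: clipping each latent to a window of radius R around its mean makes the coded residual zero-centered, so one integer CDF table per sigma value serves all means. R, the 16-bit precision, and every name below are likewise illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

R = 32           # assumed clipping radius around the mean
PRECISION = 16   # assumed integer precision of the stored CDF

def build_cdf_lut(sigma):
    """Integer CDF over residuals r in {-R, ..., R} for one sigma value."""
    grid = np.arange(-R, R + 1)
    pmf = norm.cdf(grid + 0.5, scale=sigma) - norm.cdf(grid - 0.5, scale=sigma)
    counts = np.maximum(np.round(pmf * (1 << PRECISION)), 1).astype(np.int64)
    return np.concatenate(([0], np.cumsum(counts)))  # identical integers on any platform

def symbol_index(y, mean):
    """Clip the latent to the window around its mean, then index the shared LUT."""
    r = int(np.clip(np.rint(y - mean), -R, R))
    return r + R
```

A production range coder would also renormalize the integer counts to sum exactly to 2^PRECISION; the point of the sketch is that identical integer tables on every platform remove the cross-platform floating-point drift the abstract describes.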
Pages: 106 - 110
Number of Pages: 5
Related Papers
50 records in total
  • [31] DIGITAL NOTCH FILTERS IMPLEMENTATION WITH FIXED-POINT ARITHMETIC
    Pinheiro, Eduardo
    Postolache, Octavian
    Girao, Pedro
    XIX IMEKO WORLD CONGRESS: FUNDAMENTAL AND APPLIED METROLOGY, PROCEEDINGS, 2009, : 491 - 496
  • [32] Fixed-Point Arithmetic for Implementing Massive MIMO Systems
    Tian, Mi
    Sima, Mihai
    McGuire, Michael
    2022 24TH INTERNATIONAL CONFERENCE ON ADVANCED COMMUNICATION TECHNOLOGY (ICACT): ARTIFICIAL INTELLIGENCE TECHNOLOGIES TOWARD CYBERSECURITY, 2022, : 1345 - 1355
  • [33] Feedback decoding of fixed-point arithmetic convolutional codes
    Redinbo, GR
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2004, 52 (06) : 857 - 860
  • [34] ON THE FIXED-POINT THEOREM OF CONE EXPANSION AND COMPRESSION
    GUO, DJ
    KEXUE TONGBAO, 1982, 27 (06): 685 - 685
  • [35] END-TO-END LEARNED IMAGE COMPRESSION WITH FIXED POINT WEIGHT QUANTIZATION
    Sun, Heming
    Cheng, Zhengxue
    Takeuchi, Masaru
    Katto, Jiro
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 3359 - 3363
  • [36] Accuracy-aware processor customisation for fixed-point arithmetic
    Vakili, Shervin
    Langlois, J. M. Pierre
    Bois, Guy
    IET COMPUTERS AND DIGITAL TECHNIQUES, 2016, 10 (01): 1 - 11
  • [37] Parametrizable Fixed-Point Arithmetic for HIL With Small Simulation Steps
    Sanchez, Alberto
    de Castro, Angel
    Garrido, Javier
    IEEE JOURNAL OF EMERGING AND SELECTED TOPICS IN POWER ELECTRONICS, 2019, 7 (04) : 2467 - 2475
  • [38] Optimizing math-intensive applications with fixed-point arithmetic
    Williams, Anthony
    DR DOBBS JOURNAL, 2008, 33 (04): 38+
  • [39] Accuracy evaluation of deep belief networks with fixed-point arithmetic
    Jiang, Jingfei
    Transport and Telecommunication Institute, Riga, Latvia (18)
  • [40] Provably Correct Posit Arithmetic with Fixed-Point Big Integer
    Chung, Shin Yee
    PROCEEDINGS OF THE CONFERENCE FOR NEXT GENERATION ARITHMETIC (CONGA'18), 2018