Mixed-Precision Continual Learning Based on Computational Resistance Random Access Memory

Cited by: 11
Authors
Li, Yi [1 ,2 ]
Zhang, Woyu [1 ,2 ]
Xu, Xiaoxin [1 ]
He, Yifan [3 ]
Dong, Danian [1 ]
Jiang, Nanjia [1 ]
Wang, Fei [1 ,2 ]
Guo, Zeyu [1 ,2 ]
Wang, Shaocong [4 ]
Dou, Chunmeng [1 ]
Liu, Yongpan [3 ]
Wang, Zhongrui [4 ]
Shang, Dashan [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Microelect, Key Lab Microelect Devices & Integrated Technol, Beijing 100029, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 101408, Peoples R China
[3] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
[4] Univ Hong Kong, Dept Elect & Elect Engn, Pok Fu Lam Rd, Hong Kong 999077, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
continual learning; in-memory computing; mixed precision; resistance random access memory;
DOI
10.1002/aisy.202200026
Chinese Library Classification
TP [automation technology; computer technology];
Discipline Code
0812;
Abstract
Artificial neural networks have achieved remarkable success in the field of artificial intelligence. However, they suffer from catastrophic forgetting when dealing with continual learning problems, i.e., the loss of previously learned knowledge upon learning new information. Although several continual learning algorithms have been proposed, implementing them efficiently on conventional digital systems remains a challenge due to the physical separation between memory and processing units. Herein, a software-hardware codesigned in-memory computing paradigm is proposed, in which a mixed-precision continual learning (MPCL) model is deployed on a hybrid analogue-digital hardware system equipped with a resistance random access memory chip. Software-wise, the MPCL model effectively alleviates catastrophic forgetting and circumvents the requirement for high-precision weights. Hardware-wise, the hybrid analogue-digital system exploits the colocation of memory and processing units, greatly improving energy efficiency. By combining the MPCL with an in situ fine-tuning method, high classification accuracies of 94.9% and 95.3% (software baselines 97.0% and 97.7%) are achieved on the 5-split-MNIST and 5-split-FashionMNIST tasks, respectively. Compared with conventional digital systems, the proposed system reduces the energy consumption of the multiply-and-accumulate operations during the inference phase by a factor of approximately 200. This work paves the way for future autonomous systems at the edge.
Pages: 9
Related Papers
50 records
  • [41] Optimizing Information Theory Based Bitwise Bottlenecks for Efficient Mixed-Precision Activation Quantization
    Zhou, Xichuan
    Liu, Kui
    Shi, Cong
    Liu, Haijun
    Liu, Ji
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 3590 - 3598
  • [42] Hessian-based mixed-precision quantization with transition aware training for neural networks
    Huang, Zhiyong
    Han, Xiao
    Yu, Zhi
    Zhao, Yunlan
    Hou, Mingyang
    Hu, Shengdong
    NEURAL NETWORKS, 2025, 182
  • [43] Experimental demonstration of magnetic tunnel junction-based computational random-access memory
    Yang Lv
    Brandon R. Zink
    Robert P. Bloom
    Hüsrev Cılasun
    Pravin Khanal
    Salonik Resch
    Zamshed Chowdhury
    Ali Habiboglu
    Weigang Wang
    Sachin S. Sapatnekar
    Ulya Karpuzcu
    Jian-Ping Wang
    npj Unconventional Computing, 1 (1):
  • [44] Effect of oxygen concentration on characteristics of NiOx-based resistance random access memory
    Lee, Ming-Daou
    Ho, Chia-Hua
    Lo, Chi-Kuen
    Peng, Tai-Yen
    Yao, Yeong-Der
    IEEE TRANSACTIONS ON MAGNETICS, 2007, 43 (02) : 939 - 942
  • [45] Toggle magneto resistance random access memory based on magneto statically coupled bilayers
    Wang, SY
    Fujiwara, H
    Sun, M
    JOURNAL OF MAGNETISM AND MAGNETIC MATERIALS, 2005, 295 (03) : 246 - 250
  • [46] Effects of the oxygen vacancy concentration in InGaZnO-based resistance random access memory
    Kim, Moon-Seok
    Hwang, Young Hwan
    Kim, Sungho
    Guo, Zheng
    Moon, Dong-Il
    Choi, Ji-Min
    Seol, Myeong-Lok
    Bae, Byeong-Soo
    Choi, Yang-Kyu
    APPLIED PHYSICS LETTERS, 2012, 101 (24)
  • [47] Dynamic Memory-Based Continual Learning with Generating and Screening
    Tao, Siying
    Huang, Jinyang
    Zhang, Xiang
    Sun, Xiao
    Gu, Yu
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT III, 2023, 14256 : 365 - 376
  • [48] Spintronics based random access memory: a review
    Bhatti, Sabpreet
    Sbiaa, Rachid
    Hirohata, Atsufumi
    Ohno, Hideo
    Fukami, Shunsuke
    Piramanayagam, S. N.
    MATERIALS TODAY, 2017, 20 (09) : 530 - 548
  • [49] The Observation of "Conduction Spot" on NiO Resistance Random Access Memory
    Kondo, Hirofumi
    Arita, Masashi
    Fujii, Takashi
    Kaji, Hiromichi
    Moniwa, Masahiro
    Yamaguchi, Takeshi
    Fujiwara, Ichiro
    Yoshimaru, Masaki
    Takahashi, Yasuo
    JAPANESE JOURNAL OF APPLIED PHYSICS, 2011, 50 (08)
  • [50] An In-Memory-Computing Design of Multiplier Based on Multilevel-Cell of Resistance Switching Random Access Memory
    Dai Lan
    Guo Hong
    Lin Qipeng
    Xia Yongxin
    Zhang Xiaobo
    Zhang Feng
    Fan Dongyu
    CHINESE JOURNAL OF ELECTRONICS, 2018, 27 (06) : 1151 - 1157