Statistical mechanics of continual learning: Variational principle and mean-field potential

Cited: 1
Authors
Li, Chan [1 ]
Huang, Zhenye [2 ]
Zou, Wenxuan [1 ]
Huang, Haiping [1 ,3 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Phys, PMI Lab, Guangzhou 510275, Peoples R China
[2] Chinese Acad Sci, CAS Key Lab Theoret Phys, Inst Theoret Phys, Beijing 100190, Peoples R China
[3] Sun Yat sen Univ, Guangdong Prov Key Lab Magnetoelectr Phys & Device, Guangzhou 510275, Peoples R China
Funding
National Natural Science Foundation of China
关键词
NETWORKS;
DOI
10.1103/PhysRevE.108.014309
CLC classification
O35 [fluid mechanics]; O53 [plasma physics]
Subject classification codes
070204; 080103; 080704
Abstract
Continual learning of multiple tasks of a different nature is an obstacle to artificial general intelligence. Recently, various heuristic tricks, from both machine-learning and neuroscience angles, have been proposed, but they lack a unified theoretical foundation. Here, we focus on continual learning in single-layered and multilayered neural networks with binary weights. We propose a variational Bayesian learning setting in which the networks are trained in a field space, rather than in the discrete-weight space where gradients are ill defined; moreover, weight uncertainty is naturally incorporated and modulates synaptic resources among tasks. From a physics perspective, we translate variational continual learning into a Franz-Parisi thermodynamic-potential framework, where knowledge of previous tasks serves both as a prior probability and as a reference configuration. We thus interpret continual learning of the binary perceptron in a teacher-student setting as a Franz-Parisi potential computation. The learning performance can then be studied analytically with mean-field order parameters, whose predictions coincide with numerical experiments using stochastic gradient descent. Based on the variational principle and a Gaussian-field approximation of the internal preactivations in hidden layers, we also derive a learning algorithm that accounts for weight uncertainty; it solves continual learning with binary weights in multilayered neural networks and outperforms the currently available metaplasticity algorithm, in which binary synapses carry hidden continuous states and synaptic plasticity is modulated by a heuristic regularization function. Our principled frameworks also connect to elastic weight consolidation, weight-uncertainty-modulated learning, and neuroscience-inspired metaplasticity, providing a theoretically grounded method for real-world multitask learning with deep networks.
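The field-space idea sketched in the abstract can be illustrated with a minimal toy example (not the paper's actual algorithm): a teacher-student binary perceptron in which the student's binary weights are parameterized by continuous fields theta, with mean weight tanh(theta) and variance 1 - tanh(theta)^2, so that confident (low-uncertainty) weights receive small updates. The mistake-driven rule, dimensions, and learning rate below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 201, 1000  # odd N keeps teacher preactivations nonzero

# Teacher perceptron with binary weights; random binary inputs.
w_teacher = rng.choice([-1, 1], size=N)
X = rng.choice([-1.0, 1.0], size=(P, N))
y = np.sign(X @ w_teacher)

# Student fields theta parameterize a distribution over binary weights:
# mean m = tanh(theta), variance 1 - m^2. Large |theta| means a
# consolidated weight, which the (1 - m^2) factor protects from updates.
theta = 0.1 * rng.standard_normal(N)
lr = 0.1

for epoch in range(20):
    for x, t in zip(X, y):
        m = np.tanh(theta)
        if np.sign(x @ m) != t:                # mistake-driven update
            theta += lr * t * x * (1.0 - m**2)  # uncertainty-modulated plasticity

w_binary = np.sign(theta)                       # binarize the trained fields
acc = np.mean(np.sign(X @ w_binary) == y)       # training accuracy
overlap = np.mean(w_binary == w_teacher)        # agreement with teacher
print(f"train accuracy: {acc:.3f}, teacher overlap: {overlap:.3f}")
```

The multiplicative factor 1 - m^2 plays the role the abstract assigns to weight uncertainty: weights the student is already sure about change little, which is also the intuition behind the metaplasticity baseline mentioned above.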
Pages: 24