EEG extended source imaging with structured sparsity and L1-norm residual

Cited: 0
Authors
Xu, Furong [1 ]
Liu, Ke [1 ]
Yu, Zhuliang [2 ,3 ]
Deng, Xin [1 ]
Wang, Guoyin [1 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Chongqing Key Lab Computat Intelligence, Chongqing 400065, Peoples R China
[2] South China Univ Technol, Sch Automat Sci & Engn, Guangzhou 510641, Peoples R China
[3] Pazhou Lab, Guangzhou 510335, Peoples R China
Source
NEURAL COMPUTING & APPLICATIONS | 2021, Vol. 33, No. 14
Funding
National Natural Science Foundation of China
Keywords
EEG source imaging; Outliers; Structured sparsity; ADMM; CORTICAL CURRENT-DENSITY; SOURCE RECONSTRUCTION; LOCALIZATION; PERFORMANCE; ALGORITHM; EFFICIENT; FIELD;
DOI
10.1007/s00521-020-05603-1
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Reconstructing the locations and extents of cortical neural activity from electroencephalogram (EEG) recordings is a long-standing challenge, especially when the EEG signals contain strong background activity and outlier artifacts. In this work, we propose a robust source imaging method called L1R-SSSI. To alleviate the effect of outliers in EEG, L1R-SSSI employs the L1-loss to model the residual error. To obtain locally smooth and globally sparse estimates, L1R-SSSI adopts a structured sparsity constraint that imposes L1-norm regularization in both the variation domain and the original source domain. The L1R-SSSI estimates are computed efficiently with the alternating direction method of multipliers (ADMM). Results on simulated and experimental data demonstrate that L1R-SSSI effectively suppresses the effect of outlier artifacts in EEG. L1R-SSSI outperforms the traditional L2-norm-based methods (e.g., wMNE, LORETA) and SISSY, which combines an L2-norm loss with structured sparsity, as indicated by larger AUC (average AUC > 0.80) and smaller SD (average SD < 50 mm), DLE (average DLE < 10 mm), and RMSE (average RMSE < 1.75) values under all numerically simulated conditions. L1R-SSSI also provides better estimates of extended sources than methods combining an L1-loss with an Lp-norm regularization term (e.g., LAPPS).
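The objective described in the abstract, an L1 residual loss plus L1 penalties in the original and variation source domains, can be sketched with a generic ADMM splitting. This is not the authors' implementation: the function name, penalty weights, stopping rule, and the first-difference variation operator `V` are illustrative assumptions for a small dense problem.

```python
import numpy as np

def soft(v, t):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_residual_admm(A, y, V, lam1=0.1, lam2=0.1, rho=1.0, n_iter=300):
    """ADMM sketch for  min_x ||y - A x||_1 + lam1 ||x||_1 + lam2 ||V x||_1.

    Splitting variables: z1 = y - A x (residual), z2 = x, z3 = V x,
    with scaled dual variables u1, u2, u3.
    """
    m, n = A.shape
    x = np.zeros(n)
    z1, u1 = np.zeros(m), np.zeros(m)
    z2, u2 = np.zeros(n), np.zeros(n)
    z3, u3 = np.zeros(V.shape[0]), np.zeros(V.shape[0])
    # The x-update solves a fixed linear system; factor/invert it once.
    H_inv = np.linalg.inv(A.T @ A + np.eye(n) + V.T @ V)
    for _ in range(n_iter):
        rhs = A.T @ (y - z1 + u1) + (z2 - u2) + V.T @ (z3 - u3)
        x = H_inv @ rhs
        Ax, Vx = A @ x, V @ x
        z1 = soft(y - Ax + u1, 1.0 / rho)   # prox of the L1 residual loss
        z2 = soft(x + u2, lam1 / rho)       # prox of the source-domain L1 penalty
        z3 = soft(Vx + u3, lam2 / rho)      # prox of the variation-domain L1 penalty
        u1 += y - Ax - z1                   # scaled dual ascent steps
        u2 += x - z2
        u3 += Vx - z3
    return x

# Hypothetical toy problem: a piecewise-constant "extended source" and
# a few large outliers in the measurements.
rng = np.random.default_rng(0)
m, n = 40, 20
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[5:10] = 2.0                           # one extended (blockwise) patch
V = np.diff(np.eye(n), axis=0)               # first-difference variation operator
y = A @ x_true + 0.05 * rng.standard_normal(m)
y[::10] += 8.0                               # outlier artifacts
x_hat = l1_residual_admm(A, y, V)
```

The L1 residual term keeps the outlier entries from dominating the fit, while the two soft-thresholding steps enforce sparsity in the source and variation domains, which is what yields locally smooth, globally sparse patch estimates.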
Pages: 8513-8524 (12 pages)