Optimal portfolio selections via l1,2-norm regularization

Cited: 0
Authors
Zhao, Hongxin [1 ]
Kong, Lingchen [1 ]
Qi, Hou-Duo [2 ]
Affiliations
[1] Beijing Jiaotong Univ, Dept Appl Math, Beijing 100044, Peoples R China
[2] Univ Southampton, Sch Math Sci, Southampton SO17 1BJ, Hants, England
Keywords
Portfolio optimization; Minimum variance portfolio; l(1,2)-norm regularization; Proximal augmented Lagrange method; Out-of-sample performance; SPARSE; OPTIMIZATION; MINIMIZATION;
DOI
10.1007/s10589-021-00312-4
Chinese Library Classification (CLC)
C93 [Management Science]; O22 [Operations Research]
Subject Classification Codes
070105; 12; 1201; 1202; 120202
Abstract
There has been much research on regularizing optimal portfolio selection with the l(1)-norm and/or the squared l(2)-norm. The common consensus is that (i) the l(1)-norm leads to sparse portfolios, and a theoretical bound exists that limits extreme shorting of assets; (ii) the squared l(2)-norm stabilizes the computation by improving the condition number of the problem, resulting in strong out-of-sample performance; and (iii) efficient numerical algorithms with closed-form solutions at each step exist for these regularized portfolios. When the two are combined, as in the well-known elastic-net regularization, it is difficult to derive theoretical bounds that limit extreme shorting of assets. In this paper, we propose a minimum variance portfolio regularized by the l(1)-norm and l(2)-norm combined (namely, the l(1,2)-norm). The new regularization enjoys the best of both the l(1)-norm and squared l(2)-norm regularizations. In particular, we derive a theoretical bound that limits short sales and develop a closed-form formula for the proximal term of the l(1,2)-norm. A fast proximal augmented Lagrangian method is applied to solve the l(1,2)-norm regularized problem. Extensive numerical experiments on six datasets confirm that, compared with several existing models, the new model often achieves a high Sharpe ratio, low turnover, and a small amount of short selling.
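The abstract states that the proximal term of the l(1,2)-norm admits a closed-form formula. The paper's own formula is not reproduced in this record, but a minimal sketch can be given under a standard assumption: for a regularizer of the form lam*||x||_1 + mu*||x||_2 (l1-norm plus unsquared l2-norm), the proximal operator is known to be the composition of soft-thresholding with block (l2-norm) shrinkage. The function name `prox_l12` and the parameters `lam`, `mu` below are illustrative choices, not the paper's notation.

```python
import numpy as np

def prox_l12(v, lam, mu):
    """Sketch of the prox of lam*||x||_1 + mu*||x||_2 (illustrative, not
    the paper's exact formula): soft-threshold, then l2-norm shrinkage."""
    # Step 1: componentwise soft-thresholding (prox of the l1 term).
    u = np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    # Step 2: block shrinkage (prox of the unsquared l2-norm term).
    norm_u = np.linalg.norm(u)
    if norm_u <= mu:
        return np.zeros_like(u)
    return (1.0 - mu / norm_u) * u

# Example: small entries are zeroed out, the rest are shrunk.
v = np.array([0.5, -0.1, 1.2])
x = prox_l12(v, lam=0.2, mu=0.1)
```

Each step has a closed form, which is what makes proximal-type methods such as the proximal augmented Lagrangian method in the paper cheap per iteration.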
Pages: 853-881
Page count: 29