On Bayesian predictive density estimation for skew-normal distributions

Author
Kortbi, Othmane [1 ]
Affiliation
[1] UAE Univ, Dept Stat & Business Analyt, Al Ain, U Arab Emirates
Keywords
Skew-normal distributions; Predictive densities; Minimax estimators; Admissibility; Kullback-Leibler loss; Bayes estimators;
DOI
10.1007/s00184-024-00946-4
Chinese Library Classification (CLC)
O21 [Probability theory and mathematical statistics]; C8 [Statistics]
Discipline codes
020208; 070103; 0714
Abstract
This paper is concerned with prediction for skew-normal models, and more specifically with the Bayes estimation of a predictive density for $$Y \mid \mu \sim \mathcal{S}\mathcal{N}_p(\mu, v_y I_p, \lambda)$$ under Kullback-Leibler loss, based on $$X \mid \mu \sim \mathcal{S}\mathcal{N}_p(\mu, v_x I_p, \lambda)$$ with known dependence and skewness parameters. We obtain representations for Bayes predictive densities, including the minimum risk equivariant predictive density $$\hat{p}_{\pi_0}$$, which is the Bayes predictive density with respect to the noninformative prior $$\pi_0 \equiv 1$$. George et al. (Ann Stat 34:78-91, 2006) used the parallel between the problem of point estimation and the problem of estimation of predictive densities to establish a connection between the differences of risks in the two problems. Developing a similar connection allows us to determine sufficient conditions for dominance over $$\hat{p}_{\pi_0}$$ and for minimaxity. First, we show that $$\hat{p}_{\pi_0}$$ is a minimax predictive density under KL risk for the skew-normal model. Then, for dimensions $$p \ge 3$$, we obtain classes of Bayesian minimax densities that improve on $$\hat{p}_{\pi_0}$$ under KL loss for the subclass of skew-normal distributions with small values of the skewness parameter.
Moreover, for dimensions $$p \ge 4$$, we obtain classes of Bayesian minimax densities that improve on $$\hat{p}_{\pi_0}$$ under KL loss for the whole class of skew-normal distributions. Examples of proper priors, including generalized Student priors, that generate Bayesian minimax densities improving on $$\hat{p}_{\pi_0}$$ under KL loss are constructed for $$p \ge 5$$. These findings extend the results of Liang and Barron (IEEE Trans Inf Theory 50(11):2708-2726, 2004), George et al. (Ann Stat 34:78-91, 2006) and Komaki (Biometrika 88(3):859-864, 2001) to a subclass of asymmetric distributions.
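For context, the criteria referenced in the abstract take the following standard form (a minimal sketch of the usual definitions; the paper's exact skew-normal parametrization is not reproduced here). The Kullback-Leibler loss of a predictive density $$\hat{p}(\cdot \mid x)$$ for the true density $$p(\cdot \mid \mu)$$, and the corresponding risk obtained by averaging over $$X$$, are

$$L_{\mathrm{KL}}\bigl(\mu, \hat{p}(\cdot \mid x)\bigr) = \int p(y \mid \mu) \log \frac{p(y \mid \mu)}{\hat{p}(y \mid x)}\, dy, \qquad R_{\mathrm{KL}}(\mu, \hat{p}) = \int L_{\mathrm{KL}}\bigl(\mu, \hat{p}(\cdot \mid x)\bigr)\, p(x \mid \mu)\, dx,$$

and the Bayes predictive density with respect to a prior $$\pi$$ is the posterior mixture

$$\hat{p}_{\pi}(y \mid x) = \int p(y \mid \mu)\, \pi(\mu \mid x)\, d\mu = \frac{\int p(y \mid \mu)\, p(x \mid \mu)\, \pi(\mu)\, d\mu}{\int p(x \mid \mu)\, \pi(\mu)\, d\mu},$$

so the minimum risk equivariant density $$\hat{p}_{\pi_0}$$ discussed above is the special case obtained with the uniform prior $$\pi_0 \equiv 1$$.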
Pages: 14