Parameter Estimation From Quantized Observations in Multiplicative Noise Environments

Cited by: 39
Authors
Zhu, Jiang [1 ,2 ]
Lin, Xiaokang [1 ,2 ]
Blum, Rick S. [3 ]
Gu, Yuantao [1 ]
Affiliations
[1] Tsinghua Univ, TNList, Dept Elect Engn, Beijing 100084, Peoples R China
[2] Tsinghua Univ, Grad Sch Shenzhen, Shenzhen 518055, Peoples R China
[3] Lehigh Univ, Elect & Comp Engn Dept, Bethlehem, PA 18015 USA
Funding
National Natural Science Foundation of China;
Keywords
CRLB; multiplicative noise; parameter estimation; quantization; WIRELESS SENSOR NETWORKS; CONSTRAINED DISTRIBUTED ESTIMATION; UNIVERSAL DECENTRALIZED ESTIMATION; GENERALIZED OBSERVATION MODEL; MAXIMUM-LIKELIHOOD-ESTIMATION; LOCALLY OPTIMUM DETECTION; PART I; SIGNALS; SYSTEM;
DOI
10.1109/TSP.2015.2436359
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Code
0808; 0809;
Abstract
The problem of distributed parameter estimation from binary quantized observations is studied when the unquantized observations are corrupted by combined multiplicative and additive Gaussian noise. These results apply to sensor networks in which the sensors observe a parameter in combined additive and nonadditive noise, and to settings in which dispersed receivers obtain analog communications over fading channels and apply binary quantization before noise-free digital transmission to a fusion center. We first discuss the case in which all the quantizers use an identical threshold. The parameter identifiability condition is given, and, surprisingly, it is shown that unless the common threshold is chosen properly and the parameter lies in an open interval, the parameter will not generally be identifiable, in contrast to the additive noise case. The best achievable mean square error (MSE) performance is characterized by deriving the corresponding Cramér-Rao lower bound (CRLB). A closed-form expression for the corresponding maximum likelihood (ML) estimator is presented. The stability of the ML estimator's performance is improved when a nonidentical-threshold strategy is used to estimate the unknown parameter. The thresholds are designed by maximizing the minimum asymptotic relative efficiency (ARE) between the quantized and unquantized ML estimators. Although the ML estimation problem is nonconvex, it is shown that the optimization can be relaxed to a convex one, and the solution of the relaxed problem is used to initialize a gradient algorithm for the original problem. Next, the case in which the variances of both the additive and multiplicative noise are unknown is studied. The corresponding CRLB is obtained, and the ML estimation problem is transformed into a convex optimization that can be solved efficiently. Finally, numerical simulations are performed to substantiate the theoretical analysis.
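As a rough illustration of the setting described in the abstract, the Python sketch below simulates observations corrupted by combined multiplicative and additive Gaussian noise, quantizes them with a common threshold, and numerically maximizes the resulting one-bit likelihood for the unknown parameter. The unit-mean multiplicative noise, the particular variances, the threshold, and the bounded search interval are illustrative assumptions and not necessarily the paper's exact formulation; the article itself gives the closed-form ML estimator, the CRLB, and the threshold design.

# Minimal sketch (not the authors' implementation), assuming the model
#   x_k = h_k * theta + w_k,  h_k ~ N(1, sigma_h^2),  w_k ~ N(0, sigma_w^2),
#   b_k = 1{x_k > tau}  with a common threshold tau at every sensor.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
theta_true, sigma_h, sigma_w, tau, K = 1.5, 0.5, 1.0, 1.0, 2000  # illustrative values

# Simulate unquantized observations and their one-bit quantization.
h = 1.0 + sigma_h * rng.standard_normal(K)      # multiplicative noise
w = sigma_w * rng.standard_normal(K)            # additive noise
b = (h * theta_true + w > tau).astype(float)    # binary observations

def neg_log_likelihood(theta):
    # Under the assumed model, P(b_k = 1) = Q((tau - theta) / sqrt(theta^2 sigma_h^2 + sigma_w^2)).
    scale = np.sqrt(theta**2 * sigma_h**2 + sigma_w**2)
    p = norm.sf((tau - theta) / scale)
    p = np.clip(p, 1e-12, 1 - 1e-12)            # numerical safety
    return -(b * np.log(p) + (1 - b) * np.log(1 - p)).sum()

# One-dimensional numerical maximization of the quantized likelihood; the
# bounded interval restricts theta to a region where it is identifiable.
res = minimize_scalar(neg_log_likelihood, bounds=(0.1, 5.0), method="bounded")
print("true theta:", theta_true, " ML estimate:", res.x)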
Pages: 4037-4050
Page count: 14