Applications of computer-intensive statistical methods to environmental research

Cited by: 20
Authors: Pitt, DG [1]; Kreutzweiser, DP [1]
Institution: [1] Canadian Forestry Serv, Sault Ste Marie, ON P6A 5M7, Canada
DOI: 10.1006/eesa.1997.1619
Chinese Library Classification: X [Environmental Science, Safety Science]
Subject Classification: 08; 0830
Abstract
Conventional statistical approaches rely heavily on the properties of the central limit theorem to bridge the gap between the characteristics of a sample and some theoretical sampling distribution. Problems associated with nonrandom sampling, unknown population distributions, heterogeneous variances, small sample sizes, and missing data jeopardize the assumptions of such approaches and cast doubt on the resulting conclusions. Conventional nonparametric alternatives offer freedom from distribution assumptions, but design limitations and loss of power can be serious drawbacks. With the data-processing capacity of today's computers, a new class of distribution-free statistical methods has evolved that addresses many of the limitations of conventional parametric and nonparametric methods. Computer-intensive statistical methods involve reshuffling, resampling, or simulating a data set thousands of times to empirically define a sampling distribution for a chosen test statistic. The only assumption necessary for valid results is the random assignment of experimental units to the test groups or treatments. Application to a real data set illustrates the advantages of these methods, including freedom from distribution assumptions without loss of power, complete freedom in the choice of test statistic, easy adaptation to design complexities and missing data, and considerable intuitive appeal. The illustrations also reveal that computer-intensive methods can be more time-consuming than conventional methods, and that the amount of computer code required to orchestrate reshuffling, resampling, or simulation procedures can be appreciable. (C) 1998 Academic Press.
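The reshuffling procedure the abstract describes corresponds to a permutation (randomization) test. A minimal Python sketch of the idea, assuming a simple two-group comparison; the data values below are hypothetical illustrations, not taken from the paper:

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical response values for two treatment groups (illustrative only).
control = np.array([4.2, 5.1, 3.8, 6.0, 4.9, 5.5])
treated = np.array([6.3, 7.1, 5.9, 6.8, 7.4, 6.2])

# Observed test statistic: difference in group means.
observed = treated.mean() - control.mean()

pooled = np.concatenate([control, treated])
n_iter = 10_000
exceed = 0
for _ in range(n_iter):
    rng.shuffle(pooled)  # reshuffle group labels under the null hypothesis
    diff = pooled[:treated.size].mean() - pooled[treated.size:].mean()
    if abs(diff) >= abs(observed):
        exceed += 1

# Empirical two-sided p-value from the reshuffled sampling distribution.
p_value = exceed / n_iter
print(f"observed difference = {observed:.3f}, permutation p = {p_value:.4f}")

Because the null distribution is built from the data themselves, no assumption about the population distribution is required; only the random assignment of experimental units to groups, as the abstract notes, must hold.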
Pages: 78-97 (20 pages)