Putting benchmarks in their rightful place: The heart of computational biology

Cited: 0
Authors
Peters, Bjoern [1 ]
Brenner, Steven E. [2 ]
Wang, Edwin [3 ]
Slonim, Donna [4 ]
Kann, Maricel G. [5 ]
Affiliations
[1] La Jolla Inst Allergy & Immunol, La Jolla, CA 92037 USA
[2] Univ Calif Berkeley, Dept Plant & Microbial Biol, Berkeley, CA 94720 USA
[3] Univ Calgary, Cumming Sch Med, Calgary, AB, Canada
[4] Tufts Univ, Dept Comp Sci & Genet, Medford, MA 02155 USA
[5] Univ Maryland, Dept Biol Sci, College Pk, MD 20742 USA
DOI
10.1371/journal.pcbi.1006494
Chinese Library Classification
Q5 [Biochemistry];
Discipline Codes
071010 ; 081704 ;
Abstract
Research in computational biology has given rise to a vast number of methods developed to solve scientific problems. In areas where many approaches exist, researchers have a hard time deciding which tool to select for a given scientific challenge, because essentially every publication introducing a new method claims better performance than all others. Not all of these claims can be correct. For the same reason, developers struggle to demonstrate convincingly that they have created a new and superior algorithm or implementation, and the developer community often has difficulty discerning which new approaches constitute true scientific advances for the field. The obvious answer to this conundrum is to develop benchmarks, meaning standard points of reference that facilitate evaluating the performance of different tools, allowing both users and developers to compare multiple tools in an unbiased fashion.
Pages: 3