Tutorial: assessing metagenomics software with the CAMI benchmarking toolkit

Citations: 0
Authors
Fernando Meyer
Till-Robin Lesker
David Koslicki
Adrian Fritz
Alexey Gurevich
Aaron E. Darling
Alexander Sczyrba
Andreas Bremges
Alice C. McHardy
Affiliations
[1] Computational Biology of Infection Research, Helmholtz Centre for Infection Research
[2] German Center for Infection Research (DZIF)
[3] Computer Science and Engineering, Biology, and The Huck Institutes of the Life Sciences, Penn State University
[4] Center for Algorithmic Biotechnology, St. Petersburg State University
[5] The ithree institute, University of Technology Sydney
[6] Faculty of Technology and Center for Biotechnology, Bielefeld University
Source
Nature Protocols | 2021 / Vol. 16
DOI: not available
Abstract
Computational methods are key in microbiome research, and obtaining a quantitative and unbiased performance estimate is important for method developers and applied researchers. For meaningful comparisons between methods, to identify best practices and common use cases, and to reduce overhead in benchmarking, it is necessary to have standardized datasets, procedures and metrics for evaluation. In this tutorial, we describe emerging standards in computational meta-omics benchmarking derived and agreed upon by a larger community of researchers. Specifically, we outline recent efforts by the Critical Assessment of Metagenome Interpretation (CAMI) initiative, which supplies method developers and applied researchers with exhaustive quantitative data about software performance in realistic scenarios and organizes community-driven benchmarking challenges. We explain the most relevant evaluation metrics for assessing metagenome assembly, binning and profiling results, and provide step-by-step instructions on how to generate them. The instructions use simulated mouse gut metagenome data released in preparation for the second round of CAMI challenges and showcase the use of a repository of tool results for CAMI datasets. This tutorial will serve as a reference for the community and facilitate informative and reproducible benchmarking in microbiome research.
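
Illustrative note: two of the binning metrics the abstract alludes to, purity and completeness, can be sketched in a few lines of Python. This is a minimal sketch, not the CAMI toolkit itself (in the protocol, the AMBER tool computes these and other metrics from standardized binning files); the function name and the three dictionaries below are hypothetical stand-ins for a predicted contig-to-bin mapping, a gold-standard contig-to-genome mapping, and a table of contig lengths.

from collections import defaultdict

def purity_and_completeness(pred_bins, gold_genomes, contig_len):
    """Per-bin purity and completeness, with each bin matched to the
    genome contributing the most base pairs to it.
    pred_bins:    contig id -> predicted bin id
    gold_genomes: contig id -> true source genome id
    contig_len:   contig id -> contig length in bp"""
    overlap = defaultdict(lambda: defaultdict(int))  # bin -> genome -> bp
    bin_size = defaultdict(int)     # total bp assigned to each bin
    genome_size = defaultdict(int)  # total bp of each gold-standard genome
    for contig, b in pred_bins.items():
        overlap[b][gold_genomes[contig]] += contig_len[contig]
        bin_size[b] += contig_len[contig]
    for contig, g in gold_genomes.items():
        genome_size[g] += contig_len[contig]
    result = {}
    for b, genomes in overlap.items():
        g, bp = max(genomes.items(), key=lambda kv: kv[1])  # majority genome
        result[b] = (bp / bin_size[b],       # purity: how clean the bin is
                     bp / genome_size[g])    # completeness: genome recovered
    return result

# Toy data: contigs c1 and c2 stem from genome A, c3 from genome B;
# all three were placed in a single bin.
print(purity_and_completeness(
    {"c1": "bin1", "c2": "bin1", "c3": "bin1"},
    {"c1": "A", "c2": "A", "c3": "B"},
    {"c1": 600, "c2": 400, "c3": 250},
))  # -> {'bin1': (0.8, 1.0)}

Weighting by base pairs rather than contig counts means large contigs influence the scores more, consistent with how binning metrics are reported in the protocol; the full evaluation additionally handles unbinned contigs and averages the scores over bins and genomes.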
Pages: 1785 - 1801
Number of pages: 16
Related articles
50 items in total (items 21-30 shown)
  • [21] Toolkit for software developers
    [No author listed]
    NAVAL ARCHITECT, 2001, : 6 - 6
  • [22] Embedded tutorial: IC test cost benchmarking
    Luther, Klaus
    ETS 2007: 12TH IEEE EUROPEAN TEST SYMPOSIUM, PROCEEDINGS, 2007, : 200 - 200
  • [23] Tutorial on Data Balancing: Application to Benchmarking Clinicians
    Alemi, Roshan
    Elrafey, Amr
    Neuhauser, Duncan
    Alemi, Farrokh
    QUALITY MANAGEMENT IN HEALTH CARE, 2019, 28 (01) : 1 - 7
  • [24] Tutorial on Benchmarking Big Data Analytics Systems
    Ivanov, Todor
    Singhal, Rekha
    ICPE'20: COMPANION OF THE ACM/SPEC INTERNATIONAL CONFERENCE ON PERFORMANCE ENGINEERING, 2020, : 50 - 53
  • [25] DeskBench: Flexible Virtual Desktop Benchmarking Toolkit
    Rhee, Junghwan
    Kochut, Andrzej
    Beaty, Kirk
    2009 IFIP/IEEE INTERNATIONAL SYMPOSIUM ON INTEGRATED NETWORK MANAGEMENT (IM 2009) VOLS 1 AND 2, 2009, : 622 - +
  • [26] Rival: A New Benchmarking Toolkit for Recommender Systems
    Said, Alan
    Bellogin, Alejandro
    ERCIM NEWS, 2014, (99): : 27 - 28
  • [27] Benchmarking software organizations
    Card, D
    Zubrow, D
    IEEE SOFTWARE, 2001, 18 (05) : 16 - 17
  • [28] Assessing performance benchmarking
    Andrews, B. H.
    Schumann, P. D.
    Gowen, T. L.
    JOURNAL AMERICAN WATER WORKS ASSOCIATION, 1999, 91 (11): 56 - 64
  • [29] SOFTWARE: A TUTORIAL INTRODUCTION.
    Saffady, William
    Software Review, 1982, 1 (01): : 5 - 10
  • [30] Assessing performance benchmarking
    Andrews, BH
    Schumann, PD
    Gowen, TL
    JOURNAL AMERICAN WATER WORKS ASSOCIATION, 1999, 91 (11): : 56 - 64