Quick Annotator: an open-source digital pathology based rapid image annotation tool

Cited by: 11

Authors
Miao, Runtian [1 ]
Toth, Robert [2 ]
Zhou, Yu [1 ]
Madabhushi, Anant [1 ,3 ]
Janowczyk, Andrew [1 ,4 ]
Affiliations
[1] Case Western Reserve Univ, Dept Biomed Engn, Cleveland, OH 44106 USA
[2] Toth Technol LLC, Dover, NJ USA
[3] Louis Stokes Vet Adm Med Ctr, Cleveland, OH USA
[4] Lausanne Univ Hosp, Precis Oncol Ctr, Lausanne, Switzerland
Funding
US National Institutes of Health
Keywords
digital pathology; computational pathology; deep learning; active learning; annotations; open-source tool; nuclei; epithelium; tubules; efficiency
DOI
10.1002/cjp2.229
CLC classification
R36 [Pathology]
Subject classification code
100104
Abstract
Image-based biomarker discovery typically requires accurate segmentation of histologic structures (e.g. cell nuclei, tubules, and epithelial regions) in digital pathology whole slide images (WSIs). Unfortunately, annotating each structure of interest is laborious and often intractable even in moderately sized cohorts. Here, we present an open-source tool, Quick Annotator (QA), designed to improve annotation efficiency of histologic structures by orders of magnitude. While the user annotates regions of interest (ROIs) via an intuitive web interface, a deep learning (DL) model is concurrently optimized using these annotations and applied to the ROI. The user iteratively reviews DL results to either (1) accept accurately annotated regions or (2) correct erroneously segmented structures to improve subsequent model suggestions, before transitioning to other ROIs. We demonstrate the effectiveness of QA over comparable manual efforts via three use cases. These include annotating (1) 337,386 nuclei in 5 pancreatic WSIs, (2) 5,692 tubules in 10 colorectal WSIs, and (3) 14,187 regions of epithelium in 10 breast WSIs. Efficiency gains in terms of annotations per second of 102x, 9x, and 39x were, respectively, witnessed while retaining f-scores >0.95, suggesting that QA may be a valuable tool for efficiently fully annotating WSIs employed in downstream biomarker studies.
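The abstract describes an iterative human-in-the-loop workflow: the DL model proposes segmentations for each ROI, the user accepts or corrects them, and the corrected annotations are fed back to refine the model before moving to the next ROI. The Python sketch below only illustrates that loop and is not the Quick Annotator implementation; the placeholder segmenter (a simple intensity threshold), the review() stub, and the random ROIs are assumptions standing in for the real deep-learning model, web interface, and WSI patches.

import numpy as np


class PlaceholderSegmenter:
    """Stand-in for the DL model; 'training' here just re-estimates a threshold."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

    def predict(self, roi: np.ndarray) -> np.ndarray:
        # Suggest a binary mask for the ROI.
        return (roi > self.threshold).astype(np.uint8)

    def fine_tune(self, roi: np.ndarray, mask: np.ndarray) -> None:
        # Refine the model using the accepted/corrected annotation.
        if mask.any():
            self.threshold = float(roi[mask.astype(bool)].min())


def review(roi: np.ndarray, suggestion: np.ndarray) -> np.ndarray:
    # Placeholder for the human step: accept the suggestion as-is,
    # or return a corrected mask. Here it is simply accepted.
    return suggestion


def annotate(rois, model):
    """Iterate over ROIs: predict, review/correct, then refine the model."""
    annotations = []
    for roi in rois:
        suggestion = model.predict(roi)
        corrected = review(roi, suggestion)
        model.fine_tune(roi, corrected)
        annotations.append(corrected)
    return annotations


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rois = [rng.random((64, 64)) for _ in range(3)]  # stand-ins for ROI image patches
    masks = annotate(rois, PlaceholderSegmenter())
    print([int(m.sum()) for m in masks])

In the tool described by the abstract, the efficiency gain comes from this feedback loop: each accepted or corrected ROI improves the model's next suggestion, so the manual correction effort shrinks as annotation proceeds.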
Pages: 542-547
Page count: 6