Michigan ZoomIN: Validating Crowd-Sourcing to Identify Mammals from Camera Surveys

Times Cited: 3
Authors
Gadsden, Gabriel I. [1,3]
Malhotra, Rumaan [1]
Schell, Justin [2]
Carey, Tiffany [1,4]
Harris, Nyeema C. [1]
Affiliations
[1] University of Michigan, Applied Wildlife Ecology Lab, Ecology & Evolutionary Biology, 1105 N University Ave, Ann Arbor, MI 48109, USA
[2] University of Michigan, Shapiro Design Lab, Ann Arbor, MI 48109, USA
[3] University of Michigan, Urban Energy Justice Lab, School for Environment and Sustainability, 440 Church St, Ann Arbor, MI 48109, USA
[4] National Wildlife Federation, Great Lakes Regional Center, Ann Arbor, MI, USA
Source
WILDLIFE SOCIETY BULLETIN | 2021, Vol. 45, Issue 2
Funding
U.S. National Science Foundation
Keywords
accuracy; carnivores; citizen science; community; engagement; geographic variation; Michigan; mustelid; validation; CITIZEN SCIENCE DATA; CONSERVATION; TRAPS; BIODIVERSITY;
DOI
10.1002/wsb.1175
Chinese Library Classification (CLC)
X176 [Biodiversity Conservation];
Discipline Code
090705;
Abstract
Camera trap studies have become a popular medium to assess many ecological phenomena, including population dynamics, patterns of biodiversity, and monitoring of endangered species. In conjunction with the benefit to scientists, camera traps present an unprecedented opportunity to involve the public in scientific research via image classifications. However, this engagement strategy comes with a myriad of complications. Volunteers vary in their familiarity with wildlife; thus, the accuracy of user-derived classifications may be biased by the commonness or popularity of species and by user experience. From an extensive multi-site camera trap study across Michigan, USA, we compiled and classified images through a public science platform called Michigan ZoomIN. We aggregated responses from 15 independent users per image using multiple consensus methods and assessed accuracy by comparing to species identifications completed by wildlife experts. We also evaluated how different factors, including consensus algorithms, study area, wildlife species, user support, and camera type, influenced the accuracy of user-derived classifications. Overall accuracy of user-derived classification was 97%, although several canid (e.g., Canis lupus, Vulpes vulpes) and mustelid (e.g., Neovison vison) species were repeatedly difficult for users to identify and had lower accuracy. When validating user-derived classifications, we found that study area, consensus method, and user support best explained accuracy. To overcome hesitancy associated with data collected by untrained participants, we demonstrated their value by showing that the accuracy of volunteers was comparable to that of experts when classifying North American mammals. Our hierarchical workflow, which integrated multiple consensus methods, led to more image classifications without extensive training, even when the expertise of the volunteer was unknown. Ultimately, adopting such an approach can harness broader participation, expedite future camera trap data synthesis, and improve allocation of resources by scholars to enhance performance of public participants and increase accuracy of user-derived data. © 2021 The Wildlife Society.
Pages: 221-229
Number of pages: 9
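
The abstract describes aggregating 15 independent volunteer classifications per image with consensus algorithms and scoring the result against expert identifications. The sketch below illustrates one such approach, a simple plurality (majority) vote with per-species accuracy computed against expert labels. It is a minimal sketch only: the function names, data structure, and example labels are hypothetical and stand in for whichever consensus methods the study actually used.

```python
from collections import Counter

def plurality_consensus(user_labels):
    """Return the most frequent label among a list of volunteer classifications.
    Ties are broken arbitrarily by Counter ordering."""
    return Counter(user_labels).most_common(1)[0][0]

def accuracy_by_species(images):
    """images: list of dicts with hypothetical keys 'user_labels'
    (the ~15 volunteer classifications for one image) and 'expert_label'
    (the wildlife expert's identification). Returns per-species accuracy
    of the plurality consensus relative to the expert labels."""
    correct, total = Counter(), Counter()
    for img in images:
        species = img["expert_label"]
        total[species] += 1
        if plurality_consensus(img["user_labels"]) == species:
            correct[species] += 1
    return {species: correct[species] / total[species] for species in total}

# Hypothetical example: a coyote image most users identify correctly,
# and a mink image where confusion with a weasel flips the consensus.
example_images = [
    {"user_labels": ["coyote"] * 12 + ["gray wolf"] * 3, "expert_label": "coyote"},
    {"user_labels": ["mink"] * 6 + ["long-tailed weasel"] * 9, "expert_label": "mink"},
]
print(accuracy_by_species(example_images))  # {'coyote': 1.0, 'mink': 0.0}
```

Plurality voting is only one possible consensus rule; a threshold-based rule (e.g., accepting a label only when it exceeds a minimum proportion of agreeing users) could be swapped into plurality_consensus without changing the accuracy bookkeeping.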