REVISE: A Tool for Measuring and Mitigating Bias in Visual Datasets

Cited by: 0
Authors
Angelina Wang
Alexander Liu
Ryan Zhang
Anat Kleiman
Leslie Kim
Dora Zhao
Iroha Shirai
Arvind Narayanan
Olga Russakovsky
Institutions
[1] Princeton University
Source
International Journal of Computer Vision, 2022, 130(7): 1790–1810
Keywords
Computer vision datasets; Bias mitigation; Tool;
DOI
Not available
Abstract
Machine learning models are known to perpetuate and even amplify the biases present in the data. However, these data biases frequently do not become apparent until after the models are deployed. Our work tackles this issue and enables the preemptive analysis of large-scale datasets. REvealing VIsual biaSEs (REVISE) is a tool that assists in the investigation of a visual dataset, surfacing potential biases along three dimensions: (1) object-based, (2) person-based, and (3) geography-based. Object-based biases relate to the size, context, or diversity of the depicted objects. Person-based metrics focus on analyzing the portrayal of people within the dataset. Geography-based analyses consider the representation of different geographic locations. These three dimensions are deeply intertwined in how they interact to bias a dataset, and REVISE sheds light on this; the responsibility then lies with the user to consider the cultural and historical context, and to determine which of the revealed biases may be problematic. The tool further assists the user by suggesting actionable steps that may be taken to mitigate the revealed biases. Overall, the key aim of our work is to tackle the machine learning bias problem early in the pipeline. REVISE is available at https://github.com/princetonvisualai/revise-tool.
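To make the object-based dimension concrete, the sketch below computes one metric in the spirit of those the abstract describes: the distribution of relative object sizes (bounding-box area over image area) per class, which can reveal classes that are depicted at systematically atypical scales. This is an illustrative example only, not the REVISE tool's actual API; the function name, the annotation keys (`class`, `bbox`, `img_w`, `img_h`), and the `(x, y, w, h)` box convention are all hypothetical.

```python
from collections import defaultdict

def object_size_stats(annotations):
    """Mean relative object size (bbox area / image area) per class.

    `annotations` is a list of dicts with hypothetical keys:
    'class', 'bbox' = (x, y, w, h) in pixels, 'img_w', 'img_h'.
    """
    sizes = defaultdict(list)
    for ann in annotations:
        x, y, w, h = ann['bbox']
        rel = (w * h) / (ann['img_w'] * ann['img_h'])
        sizes[ann['class']].append(rel)
    # A class whose instances are systematically tiny (or huge) relative
    # to the image may be portrayed in a biased context or scale.
    return {cls: sum(v) / len(v) for cls, v in sizes.items()}

anns = [
    {'class': 'dog', 'bbox': (0, 0, 50, 50), 'img_w': 100, 'img_h': 100},
    {'class': 'dog', 'bbox': (0, 0, 10, 10), 'img_w': 100, 'img_h': 100},
    {'class': 'cup', 'bbox': (0, 0, 5, 5), 'img_w': 100, 'img_h': 100},
]
print(object_size_stats(anns))  # dogs average ~0.13 of the frame, cups ~0.0025
```

A real analysis would compare such per-class distributions against each other (or against a reference dataset) rather than inspect raw means, which is closer to how a tool surfacing size or context bias would present results.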
Pages: 1790–1810
Page count: 20
Related Papers (50 results)
  • [1] REVISE: A Tool for Measuring and Mitigating Bias in Visual Datasets
    Wang, Angelina
    Liu, Alexander
    Zhang, Ryan
    Kleiman, Anat
    Kim, Leslie
    Zhao, Dora
    Shirai, Iroha
    Narayanan, Arvind
    Russakovsky, Olga
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2022, 130 (07) : 1790 - 1810