Outlier bias: AI classification of curb ramps, outliers, and context

Cited by: 1
Author
Deitz, Shiloh [1 ]
Affiliation
[1] Rutgers State Univ, Edward J Bloustein Sch Planning & Publ Policy, New Brunswick, NJ 08854 USA
Keywords
artificial intelligence; deep learning; bias; accessibility; disability; United States;
DOI
10.1177/20539517231203669
CLC Classification
C [Social Sciences, General]
Discipline Codes
03; 0303
Abstract
Technologies in the smart city, such as autonomous vehicles and delivery robots, promise to increase the mobility and freedom of people with disabilities. These technologies have also failed to "see" or comprehend wheelchair riders, people walking with service animals, and people walking with bicycles, all outliers to machine learning models. Big data and algorithms have been amply critiqued for their biases (harmful and systematic errors), but the harms that arise from AI's inherent inability to handle nuance, context, and exception have been largely overlooked. In this paper, I run two machine learning models across nine cities in the United States to attempt to fill a gap in data about the location of curb ramps. I find that while curb ramp prediction models may achieve up to 88% accuracy, accuracy varied across contexts in ways both predictable and unpredictable. I look closely at cases of unpredictable error (outlier bias) by triangulating with aerial and street view imagery. The sampling of cases shows that while it may be possible to conjecture about patterns in these errors, there is nothing clearly systematic. While more data and bigger models might improve the accuracy somewhat, I propose that a bias against outliers is fundamental to machine learning models, which gravitate to the mean and require unbiased, non-missing data. I conclude by arguing that universal design, or design for the outliers, is imperative for justice in the smart city, where algorithms and data are increasingly embedded as infrastructure.
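The abstract's central observation, that a headline accuracy figure such as 88% can mask much worse performance on atypical cases, can be sketched as follows. This is a hypothetical illustration, not the paper's method: the `accuracy_by_context` function, the context labels, and the toy data are all invented for exposition.

```python
# Hypothetical sketch: stratifying accuracy by context to surface "outlier bias".
# Aggregate accuracy looks high, while an atypical context fails badly.
from collections import defaultdict

def accuracy_by_context(records):
    """records: iterable of (context, predicted, actual) tuples.
    Returns {context: accuracy} for each context observed."""
    totals = defaultdict(lambda: [0, 0])  # context -> [correct, total]
    for context, predicted, actual in records:
        totals[context][1] += 1
        if predicted == actual:
            totals[context][0] += 1
    return {c: correct / total for c, (correct, total) in totals.items()}

# Toy data: 100 "standard" intersections classified at 88% accuracy,
# 10 "atypical" intersections classified at only 40% accuracy.
data = (
    [("standard_intersection", 1, 1)] * 88
    + [("standard_intersection", 0, 1)] * 12
    + [("atypical_intersection", 1, 1)] * 4
    + [("atypical_intersection", 0, 1)] * 6
)

per_context = accuracy_by_context(data)
overall = sum(p == a for _, p, a in data) / len(data)  # ~0.84 overall
```

The aggregate figure (roughly 84% here) tells a very different story from the 40% accuracy on the atypical context, which is the pattern the paper investigates with aerial and street view imagery.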
Pages: 14
Related Papers
8 records
  • [1] Empirical study of outlier impact in classification context
    Khan, Hufsa
    Rasheed, Muhammad Tahir
    Zhang, Shengli
    Wang, Xizhao
    Liu, Han
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 256
  • [2] Probabilistic context integration-based aircraft behaviour intention classification at airport ramps
    Lee, Soomok
    Seo, Seung-Woo
    IET INTELLIGENT TRANSPORT SYSTEMS, 2022, 16 (06) : 725 - 738
  • [3] A new classification of uncertainty orientation: Exploring the susceptibility to the hindsight bias in a gambling context
    Farrell, C
    Cowley, E
    Edwardson, M
    ADVANCES IN CONSUMER RESEARCH, VOLUME XXXI, 2004, 31 : 246 - 247
  • [4] Is artificial intelligence (AI) research biased and conceptually vague? A systematic review of research on bias and discrimination in the context of using AI in human resource management
    Kekez, Ivan
    Lauwaert, Lode
    Redep, Nina Begicevic
    TECHNOLOGY IN SOCIETY, 2025, 81
  • [5] Bias reduction using combined stain normalization and augmentation for AI-based classification of histological images
    Franchet, Camille
    Schwob, Robin
    Bataillon, Guillaume
    Syrykh, Charlotte
    Pericart, Sarah
    Frenois, Francois-Xavier
    Penault-Llorca, Frederique
    Lacroix-Triki, Magali
    Arnould, Laurent
    Lemonnier, Jerome
    Alliot, Jean-Marc
    Filleron, Thomas
    Brousset, Pierre
    COMPUTERS IN BIOLOGY AND MEDICINE, 2024, 171
  • [6] Predictive modeling for breast cancer classification in the context of Bangladeshi patients by use of machine learning approach with explainable AI
    Islam, Taminul
    Sheakh, Md. Alif
    Tahosin, Mst. Sazia
    Hena, Most. Hasna
    Akash, Shopnil
    Bin Jardan, Yousef A.
    Fentahunwondmie, Gezahign
    Nafidi, Hiba-Allah
    Bourhia, Mohammed
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [7] Data-Driven, 3-D Classification of Person-Object Relationships and Semantic Context Clustering for Robotics and AI Applications
    Zapf, Marc Patrick
    Gupta, Astha
    Saiki, Luis Yoichi Morales
    Kawanabe, Motoaki
    2018 27TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (IEEE RO-MAN 2018), 2018, : 180 - 187
  • [8] Benchmarking Political Bias Classification with In-Context Learning: Insights from GPT-3.5, GPT-4o, LLaMA-3, and Gemma-2
    Kotze, Eduan
    Senekal, Burgert A.
    ARTIFICIAL INTELLIGENCE RESEARCH, SACAIR 2024, 2025, 2326 : 161 - 175