From Bad Users and Failed Uses to Responsible Technologies: A Call to Expand the AI Ethics Toolkit

Cited: 5
Authors
Neff, Gina [1 ]
Affiliation
[1] Univ Oxford, Oxford Internet Inst, Oxford, England
Keywords
Work and organizations; data work; feminist theory; STS; social sciences; theory; AI ethics
DOI
10.1145/3375627.3377141
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Recent advances in artificial intelligence applications have sparked scholarly and public attention to the challenges of the ethical design of technologies. These conversations about ethics have been targeted largely at technology designers and concerned with helping to inform the building of better and fairer AI tools and technologies. This approach, however, addresses only a small part of the problem of responsible use and will not be adequate for describing or redressing the problems that will arise as more types of AI technologies are more widely used. Many of the tools being developed today have potentially enormous and historic impacts on how people work, how society organizes, stores and distributes information, where and how people interact with one another, and how people's work is valued and compensated. And yet, our ethical attention has looked at a fairly narrow range of questions about expanding access to, fairness of, and accountability for existing tools. Instead, I argue that scholars should develop much broader questions about the reconfiguration of societal power, for which AI technologies form a crucial component. This talk will argue that AI ethics needs to expand its theoretical and methodological toolkit in order to move away from prioritizing notions of good design that privilege the work of good and ethical technology designers. Instead, using approaches from feminist theory, organization studies, and science and technology studies, I argue for expanding how we evaluate uses of AI. This approach begins with the assumption of socially informed technological affordances, or "imagined affordances" [1], shaping how people understand and use technologies in practice. It also gives centrality to the power of social institutions for shaping technologies-in-practice. Such a framework for evaluating the benefits of AI would include the following five questions [2]: 1) What and whose goals are being achieved or promised through 2) what structured performance, using 3) what division of labor, 4) under whose control, and 5) at whose expense? Using such a framework would resolve several current conundrums for AI ethics. First, such a move would shift accountability away from technology designers to the evaluation of the political and economic environments that pattern how technologies are adopted, modified, and used. Second, such moves encourage thinking through the challenges of capitalism and the systems that structure technological affordances, rather than the individual actions of so-called bad users. Third, it reimagines the practice of use in organizational and institutional context, enabling predictions of failure at the interface between technologies and their uses. Finally, such a framework highlights places for intervention. Long global supply chains of AI systems, from data labeling work to engineering work to the front-line use of dashboard systems, often mask the opportunities that people have for intervening in these systems, make it hard for people to contest their results, and blur the lines of accountability and responsibility. A focus on ethical action highlights the choices available to individuals.
Expanding the AI ethics toolkit theoretically and methodologically to include attention to social structural and institutional configurations that enable and constrain individual action will ultimately result in more robust ways of describing what people actually do with AI tools and more pathways for influencing the design, application, modification and use of responsible AI technologies.
Pages: 5-6
Page count: 2