Artificial Intelligence in mental health and the biases of language based models

Cited by: 41
Authors
Straw, Isabel [1 ]
Callison-Burch, Chris [2 ]
Institutions
[1] Univ Penn, Perelman Sch Med, Dept Publ Hlth, Philadelphia, PA 19104 USA
[2] Univ Penn, Comp & Informat Sci Dept, Philadelphia, PA 19104 USA
Source
PLOS ONE | 2020, Vol. 15, Issue 12
Keywords
GENDER BIAS; EXPRESSION; DISTRESS; IDIOMS; DISORDERS; SCIENCE; RACE
DOI
10.1371/journal.pone.0240376
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
Background: The rapid integration of Artificial Intelligence (AI) into healthcare has occurred with little communication between computer scientists and doctors. The impact of AI on health outcomes and inequalities calls for health professionals and data scientists to collaborate so that historic health disparities are not encoded into the future. We present a study that evaluates bias in existing Natural Language Processing (NLP) models used in psychiatry and discuss how these biases may widen health inequalities. Our approach systematically evaluates each stage of model development to explore how biases arise from clinical, data-science, and linguistic perspectives.

Design/Methods: A literature review of the uses of NLP in mental health was carried out across multiple disciplinary databases using defined MeSH terms and keywords. Our primary analysis evaluated biases within 'GloVe' and 'Word2Vec' word embeddings. Euclidean distances were measured to assess relationships between psychiatric terms and demographic labels, and vector similarity functions were used to solve analogy questions relating to mental health.

Results: Our primary analysis of mental health terminology in GloVe and Word2Vec embeddings demonstrated significant biases with respect to religion, race, gender, nationality, sexuality, and age. Our literature review returned 52 papers, none of which addressed all the areas of possible bias that we identify in model development. In addition, only one article appeared on more than one research database, demonstrating the isolation of research within disciplinary silos, which inhibits cross-disciplinary collaboration and communication.

Conclusion: Our findings are relevant to professionals who wish to minimize the health inequalities that may arise from AI and data-driven algorithms. We offer primary research identifying biases within these technologies and provide recommendations for avoiding these harms in the future.
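The embedding-distance methodology the abstract describes can be sketched in a few lines. The vocabulary and vector values below are toy numbers invented for illustration only, not the actual GloVe/Word2Vec embeddings the study probes; with real pretrained embeddings the same two functions would be applied to psychiatric terms and demographic labels:

```python
import numpy as np

# Toy 3-dimensional stand-ins for GloVe/Word2Vec vectors
# (real pretrained embeddings are typically 100-300 dimensional).
emb = {
    "man":       np.array([0.9, 0.1, 0.2]),
    "woman":     np.array([0.1, 0.9, 0.2]),
    "doctor":    np.array([0.8, 0.2, 0.7]),
    "nurse":     np.array([0.2, 0.8, 0.7]),
    "depressed": np.array([0.3, 0.6, 0.9]),
}

def euclidean(a, b):
    """Euclidean distance between two word vectors:
    a smaller distance indicates a stronger learned association."""
    return float(np.linalg.norm(emb[a] - emb[b]))

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' with the standard vector-offset
    method: find the vocabulary word nearest (by cosine similarity)
    to b - a + c, excluding the three query words."""
    target = emb[b] - emb[a] + emb[c]

    def cos(v, w):
        return np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))

    return max((w for w in emb if w not in {a, b, c}),
               key=lambda w: cos(emb[w], target))

# A demographic label sitting measurably closer to a psychiatric term
# than its counterpart is the kind of asymmetry the study reports as bias.
print(euclidean("woman", "depressed"), euclidean("man", "depressed"))
print(analogy("man", "doctor", "woman"))
```

In this contrived table "woman" lies closer to "depressed" than "man" does, and the analogy "man : doctor :: woman : ?" resolves to "nurse"; the study applies the same distance and analogy probes to real embeddings to surface such asymmetries across religion, race, gender, nationality, sexuality, and age.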
Pages: 19
Related Papers
50 items in total
  • [41] Artificial Intelligence in Mental Health Therapy for Children and Adolescents
    Vial, Theodore
    Almon, Alires
    [J]. JAMA PEDIATRICS, 2023, 177 (12) : 1251 - 1252
  • [42] Ethical considerations in the use of artificial intelligence in mental health
    Warrier, Uma
    Warrier, Aparna
    Khandelwal, Komal
    [J]. EGYPTIAN JOURNAL OF NEUROLOGY PSYCHIATRY AND NEUROSURGERY, 2023, 59 (01)
  • [43] Artificial intelligence in positive mental health: a narrative review
    Thakkar, Anoushka
    Gupta, Ankita
    De Sousa, Avinash
    [J]. FRONTIERS IN DIGITAL HEALTH, 2024, 6
  • [44] Bridging the gap between artificial intelligence and mental health
    Lu, Tangsheng
    Liu, Xiaoxing
    Sun, Jie
    Bao, Yanping
    Schuller, Bjorn W.
    Han, Ying
    Lu, Lin
    [J]. SCIENCE BULLETIN, 2023, 68 (15) : 1606 - 1610
  • [45] Artificial intelligence and mental health nursing care plans
    Kleebayoon, Amnuay
    Wiwanitkit, Viroj
    [J]. JOURNAL OF PSYCHIATRIC AND MENTAL HEALTH NURSING, 2024, 31 (02) : 255 - 256
  • [46] Artificial Intelligence's Impact on Mental Health Treatments
    Ausman, Michelle C.
    [J]. AIES '19: PROCEEDINGS OF THE 2019 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, 2019, : 533 - 534
  • [47] Artificial intelligence is set to transform mental health services
    Pandi-Perumal, Seithikurippu R.
    Narasimhan, Meera
    Seeman, Mary V.
    Jahrami, Haitham
    [J]. CNS SPECTRUMS, 2024, 29 (03) : 155 - 157
  • [48] Mental Health Diagnosis: A Case for Explainable Artificial Intelligence
    Antoniou, Grigoris
    Papadakis, Emmanuel
    Baryannis, George
    [J]. INTERNATIONAL JOURNAL ON ARTIFICIAL INTELLIGENCE TOOLS, 2022, 31 (03)
  • [49] Ecosystem Models Based on Artificial Intelligence
    Strannegard, Claes
    Engsner, Niklas
    Eisfeldt, Jesper
    Endler, John
    Hansson, Amanda
    Lindgren, Rasmus
    Mostad, Petter
    Olsson, Simon
    Perini, Irene
    Reese, Heather
    Taylan, Fulya
    Ulfsbacker, Simon
    Nordgren, Ann
    [J]. 2022 34TH WORKSHOP OF THE SWEDISH ARTIFICIAL INTELLIGENCE SOCIETY (SAIS 2022), 2022, : 37 - 45
  • [50] ChatGPT: can artificial intelligence language models be of value for cardiovascular nurses and allied health professionals
    Moons, Philip
    Van Bulck, Liesbet
    [J]. EUROPEAN JOURNAL OF CARDIOVASCULAR NURSING, 2023, 22 (07) : E55 - E59