Evaluating causal psychological models: A study of language theories of autism using a large sample

Cited: 4
Authors
Tang, Bohao [1 ]
Levine, Michael [2 ]
Adamek, Jack H. [2 ]
Wodka, Ericka L. [2 ,3 ]
Caffo, Brian S. [1 ]
Ewen, Joshua B. [2 ,3 ,4 ]
Affiliations
[1] Johns Hopkins Univ, Bloomberg Sch Publ Hlth, Baltimore, MD USA
[2] Kennedy Krieger Inst, Baltimore, MD 21205 USA
[3] Johns Hopkins Univ, Sch Med, Baltimore, MD 21218 USA
[4] Kennedy Krieger Inst, Neurol & Dev Med, Baltimore, MD 21205 USA
Source
FRONTIERS IN PSYCHOLOGY | 2023 / Vol. 14
Keywords
language; social withdrawal; autism (ASD); psychological theory; large data analysis; causal inference; network analysis; INFANTILE-AUTISM; SPECTRUM DISORDERS; BROADER PHENOTYPE; DEFICITS; CHILDHOOD; CHILDREN; SCHIZOPHRENIA; EXPLANATION; IMPAIRMENTS; BEHAVIOR;
DOI
10.3389/fpsyg.2023.1060525
Chinese Library Classification (CLC)
B84 [Psychology];
Discipline classification codes
04; 0402;
Abstract
We used a large convenience sample (n = 22,223) from the Simons Powering Autism Research (SPARK) dataset to evaluate causal, explanatory theories of core autism symptoms. In particular, the data items collected supported the testing of theories that posited altered language abilities as a cause of social withdrawal, as well as alternative theories that competed with these language theories. Our results using this large dataset converge with the evolution of the field in the decades since these theories were first proposed, namely supporting primary social withdrawal (in some cases of autism) as a cause of altered language development, rather than vice versa.

To accomplish the above empirical goals, we used a highly theory-constrained approach, one which differs from current data-driven modeling trends but is coherent with a very recent resurgence in theory-driven psychology. In addition to careful explication and formalization of theoretical accounts, we propose three principles for future work of this type: specification, quantification, and integration. Specification refers to constraining models with pre-existing data, from both outside and within autism research, with more elaborate models and more veridical measures, and with longitudinal data collection. Quantification refers to using continuous measures of both psychological causes and effects, as well as weighted graphs. This approach avoids "universality and uniqueness" tests, which hold that a single cognitive difference could be responsible for a heterogeneous and complex behavioral phenotype. Integration of multiple explanatory paths within a single model helps the field examine multiple contributors to a single behavioral feature or to multiple behavioral features. It also allows integration of explanatory theories across multiple current-day diagnoses as well as typical development.
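As an informal illustration of the model-evaluation logic summarized above (not the authors' actual analysis pipeline), the sketch below simulates continuous symptom measures and compares two competing causal directions through the conditional-independence patterns they imply. The variable names (W for social withdrawal, L for language ability, S for a hypothetical third symptom measure) and all simulation coefficients are assumptions introduced here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000  # roughly the order of the SPARK convenience sample

# Assumed ground truth for this simulation only: primary social withdrawal (W)
# drives both altered language (L) and a third, hypothetical symptom measure (S).
W = rng.normal(size=n)
L = 0.7 * W + rng.normal(scale=0.7, size=n)
S = 0.5 * W + rng.normal(scale=0.9, size=n)

def partial_corr(x, y, z):
    """Correlation of x and y after linearly regressing both on z."""
    Z = np.column_stack([np.ones(len(z)), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Theory A ("language-first"):   L -> W and L -> S  implies  W independent of S given L.
# Theory B ("withdrawal-first"): W -> L and W -> S  implies  L independent of S given W.
print("partial corr(W, S | L):", round(partial_corr(W, S, L), 3))  # clearly nonzero: strains Theory A
print("partial corr(L, S | W):", round(partial_corr(L, S, W), 3))  # near zero: consistent with Theory B
```

With only two Gaussian variables the competing causal directions would be statistically indistinguishable; the sketch therefore brings in a third measure so that the rival theories make distinct testable predictions, echoing the abstract's call for more elaborate, theory-constrained models with quantified, continuous measures.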
Pages: 19