Exploring the Potential of Variational Autoencoders for Modeling Nonlinear Relationships in Psychological Data

Times Cited: 0
Authors
Milano, Nicola [1 ]
Casella, Monica [1 ]
Esposito, Raffaella [1 ]
Marocco, Davide [1 ]
Affiliations
[1] Univ Naples Federico II, Dept Humanist Studies, Nat & Artificial Cognit Lab Orazio Miglino, I-80133 Naples, Italy
Keywords
machine learning; variational autoencoders; factor analysis; dimensionality reduction; MAXIMUM-LIKELIHOOD-ESTIMATION; PRINCIPAL COMPONENT ANALYSIS; NEURAL-NETWORKS;
DOI
10.3390/bs14070527
Chinese Library Classification
B84 [Psychology];
Discipline Classification Codes
04; 0402;
Abstract
Latent variable analysis is an important part of psychometric research. In this context, factor analysis and related techniques have been widely applied to investigate the internal structure of psychometric tests. However, these methods perform a linear dimensionality reduction under a series of assumptions that cannot always be verified in psychological data. Predictive techniques, such as artificial neural networks, can complement and improve the exploration of the latent space, overcoming the limits of traditional methods. In this study, we explore the latent space generated by a particular artificial neural network: the variational autoencoder. This autoencoder performs a nonlinear dimensionality reduction and encourages the latent features to follow a predefined distribution (usually a normal distribution), learning the most important relationships hidden in the data. We investigate the capacity of autoencoders to model item-factor relationships in simulated data that encompass both linear and nonlinear associations, and we extend the investigation to a real dataset. Results on simulated data show that the variational autoencoder performs similarly to factor analysis when the relationships between observed and latent variables are linear, and that it is able to reproduce the factor scores. Moreover, results on nonlinear data show that, unlike factor analysis, it can also learn to reproduce nonlinear relationships between observed variables and factors, and its factor score estimates are more accurate than those of factor analysis. The results on the real dataset confirm the potential of the autoencoder to reduce dimensionality with mild assumptions on the input data and to recognize the function that links observed and latent variables.
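To make the approach sketched in the abstract concrete, the listing below shows a minimal variational autoencoder for item-level responses written in PyTorch. The specific choices (12 items, 2 latent factors, tanh hidden layers, Gaussian reconstruction loss, and the toy training loop) are illustrative assumptions for this sketch, not the architecture or hyperparameters used in the paper.

# Minimal VAE sketch for item-level data (assumed architecture, not the authors' exact model).
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, n_items: int = 12, n_factors: int = 2, hidden: int = 32):
        super().__init__()
        # Encoder maps item responses to the parameters of a Gaussian
        # posterior over the latent factors.
        self.encoder = nn.Sequential(nn.Linear(n_items, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, n_factors)
        self.logvar = nn.Linear(hidden, n_factors)
        # Nonlinear decoder reconstructs the items from the latent factors,
        # allowing nonlinear item-factor relationships.
        self.decoder = nn.Sequential(
            nn.Linear(n_factors, hidden), nn.Tanh(), nn.Linear(hidden, n_items)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: differentiable sampling of latent scores.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction error plus a KL term that pushes the latent factors
    # toward a standard normal distribution.
    recon = ((x - x_hat) ** 2).sum(dim=1).mean()
    kl = (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()
    return recon + kl

if __name__ == "__main__":
    x = torch.randn(256, 12)  # placeholder item responses (simulated data)
    model = VAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        x_hat, mu, logvar = model(x)
        vae_loss(x, x_hat, mu, logvar).backward()
        opt.step()
    # The posterior means mu can be read out as (nonlinear) factor-score
    # estimates for each respondent.

In this setup the encoder's posterior means play the role that factor scores play in factor analysis, while the nonlinear decoder lets the model capture the nonlinear item-factor relationships discussed in the abstract.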
Pages: 23