Efficient Low-rank Multimodal Fusion with Modality-Specific Factors

Cited by: 0
Authors
Liu, Zhun [1 ]
Shen, Ying [1 ]
Lakshminarasimhan, Varun Bharadhwaj [1 ]
Liang, Paul Pu [1 ]
Zadeh, Amir [1 ]
Morency, Louis-Philippe [1 ]
Affiliations
[1] Carnegie Mellon Univ, Sch Comp Sci, Pittsburgh, PA 15213 USA
Funding
National Science Foundation (USA);
Keywords
DOI
Not available
Chinese Library Classification
TP39 [Computer Applications];
Discipline Codes
081203; 0835;
Abstract
Multimodal research is an emerging field of artificial intelligence, and one of its main research problems is multimodal fusion: the process of integrating multiple unimodal representations into one compact multimodal representation. Previous research in this field has exploited the expressiveness of tensors for multimodal representation. However, these methods often suffer from an exponential increase in dimensionality and computational complexity introduced by transforming the input into a tensor. In this paper, we propose the Low-rank Multimodal Fusion method, which performs multimodal fusion using low-rank tensors to improve efficiency. We evaluate our model on three different tasks: multimodal sentiment analysis, speaker trait analysis, and emotion recognition. Our model achieves competitive results on all these tasks while drastically reducing computational complexity. Additional experiments also show that our model performs robustly across a wide range of low-rank settings, and is indeed much more efficient in both training and inference than other methods that utilize tensor representations.
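The abstract describes fusing unimodal representations via a tensor outer product whose weight tensor is factorised into modality-specific low-rank factors, so the fused vector can be computed without ever materialising the full tensor. The sketch below illustrates that idea in plain NumPy; the dimensions, the three-modality setup, and the names (`W_a`, `W_v`, `W_t`, `low_rank_fusion`) are illustrative assumptions for this note, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: three unimodal representations, a shared
# fused output dimension, and a small decomposition rank.
d_audio, d_video, d_text, d_out, rank = 16, 32, 64, 8, 4

# One low-rank, modality-specific factor per modality: `rank` slices of
# shape (d_out, d_m + 1). The +1 accounts for appending a constant 1 to
# each input so lower-order (unimodal/bimodal) interactions are kept.
W_a = rng.standard_normal((rank, d_out, d_audio + 1))
W_v = rng.standard_normal((rank, d_out, d_video + 1))
W_t = rng.standard_normal((rank, d_out, d_text + 1))

def low_rank_fusion(z_a, z_v, z_t):
    """Fuse three unimodal vectors without forming their outer-product tensor."""
    # Append 1 to each modality vector.
    z_a = np.append(z_a, 1.0)
    z_v = np.append(z_v, 1.0)
    z_t = np.append(z_t, 1.0)
    # Sum over rank of element-wise products of per-modality projections.
    h = np.zeros(d_out)
    for r in range(rank):
        h += (W_a[r] @ z_a) * (W_v[r] @ z_v) * (W_t[r] @ z_t)
    return h

h = low_rank_fusion(rng.standard_normal(d_audio),
                    rng.standard_normal(d_video),
                    rng.standard_normal(d_text))
print(h.shape)  # (8,)
```

In this sketch the per-example cost scales with rank times the sum of the modality dimensions, instead of with their product as in an explicit tensor outer product followed by a linear layer; that difference is the efficiency gain the abstract refers to.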
Pages: 2247 - 2256
Page count: 10
Related Papers
Total: 50 records
  • [41] Yang, Yuedong; Chiang, Hung-Yueh; Li, Guihong; Marculescu, Diana; Marculescu, Radu. Efficient Low-rank Backpropagation for Vision Transformer Adaptation. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
  • [42] Cheng, Yuan; Yang, Jing; Liang, Yingbin. Provably Efficient Algorithm for Nonstationary Low-Rank MDPs. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
  • [43] Shank, Stephen D.; Simoncini, Valeria; Szyld, Daniel B. Efficient low-rank solution of generalized Lyapunov equations. Numerische Mathematik, 2016, 134(2): 327-342.
  • [44] Mahabadi, Rabeeh Karimi; Henderson, James; Ruder, Sebastian. COMPACTER: Efficient Low-Rank Hypercomplex Adapter Layers. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021, 34.
  • [45] Kobayashi, Takumi; Otsu, Nobuyuki. Efficient Optimization for Low-Rank Integrated Bilinear Classifiers. Computer Vision - ECCV 2012, Pt. II, 2012, 7573: 474-487.
  • [46] Nguyen, Nam H.; Do, Thong T.; Tran, Trac D. A Fast and Efficient Algorithm for Low-rank Approximation of a Matrix. STOC'09: Proceedings of the 2009 ACM Symposium on Theory of Computing, 2009: 215-224.
  • [47] Dogariu, Laura-Maria; Paleologu, Constantin; Benesty, Jacob; Ciochina, Silviu. An efficient Kalman filter for the identification of low-rank systems. Signal Processing, 2020, 166.
  • [48] Rangan, Aaditya V. Efficient methods for grouping vectors into low-rank clusters. Journal of Computational Physics, 2011, 230(14): 5684-5703.
  • [49] Schamberg, Gabriel; Ba, Demba; Wagner, Mark; Coleman, Todd. Efficient Low-Rank Spectrotemporal Decomposition using ADMM. 2016 IEEE Statistical Signal Processing Workshop (SSP), 2016.
  • [50] Lv, Junrui; Luo, Xuegang; Wang, Juan. Multimodal Low-Rank Tensor Subspace Learning for Hyperspectral Image Restoration. IEEE Geoscience and Remote Sensing Letters, 2023, 20.