Adaptive Transformers for Learning Multimodal Representations

Cited: 0
Authors: Bhargava, Prajjwal
Affiliation: (not listed)
Keywords: (not listed)
DOI: (not available)
CLC number: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
The use of transformers has grown from learning language semantics to forming meaningful visiolinguistic representations. These architectures are often over-parameterized, requiring large amounts of computation. In this work, we extend adaptive approaches to learn more about model interpretability and computational efficiency. Specifically, we study adaptive attention spans, sparse attention, and structured dropout to help understand how the attention mechanism extends to vision-and-language tasks. We further show that these approaches can help us learn how the network perceives the complexity of input sequences, its sparsity preferences for different modalities, and other related phenomena.
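One of the methods the abstract names, adaptive attention spans (Sukhbaatar et al., 2019), limits how far back each head attends via a learnable soft mask. Below is a minimal NumPy sketch of that masking idea, not the paper's actual implementation; the function names, the ramp width, and the renormalization step are illustrative assumptions.

```python
import numpy as np

def adaptive_span_mask(span_z, max_span, ramp=32):
    """Soft span mask: 1 for positions within the learned span z,
    then a linear ramp of width `ramp` down to 0 for distant positions.
    (Illustrative sketch of the adaptive-span masking function.)"""
    distances = np.arange(max_span)  # distance of each past position
    return np.clip((ramp + span_z - distances) / ramp, 0.0, 1.0)

def masked_attention(scores, span_z, ramp=32):
    """Apply the soft span mask to raw attention scores, then
    renormalize so each row of weights sums to 1."""
    mask = adaptive_span_mask(span_z, scores.shape[-1], ramp)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True)) * mask
    return weights / weights.sum(axis=-1, keepdims=True)
```

Because the mask is differentiable in the span parameter z, each head can shrink its own span during training, which is what makes the method useful both for efficiency and for inspecting how much context different modalities actually use.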
Pages: 1-7 (7 pages)
Related Papers
(10 of 50 shown)
  • [1] Are Vision-Language Transformers Learning Multimodal Representations? A Probing Perspective
    Salin, Emmanuelle
    Farah, Badreddine
    Ayache, Stephane
    Favre, Benoit
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 11248 - 11257
  • [2] Multimodal Learning With Transformers: A Survey
    Xu, Peng
    Zhu, Xiatian
    Clifton, David A.
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (10) : 12113 - 12132
  • [3] MIRTT: Learning Multimodal Interaction Representations from Trilinear Transformers for Visual Question Answering
    Wang, Junjie
    Ji, Yatai
    Sun, Jiaqi
    Yang, Yujiu
    Sakai, Tetsuya
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 2280 - 2292
  • [4] Learning to Learn Better Unimodal Representations via Adaptive Multimodal Meta-Learning
    Sun, Ya
    Mai, Sijie
    Hu, Haifeng
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14 (03) : 2209 - 2223
  • [5] Learning Multimodal Representations for Unseen Activities
    Piergiovanni, A. J.
    Ryoo, Michael S.
    2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2020, : 506 - 515
  • [6] Learning Multimodal Representations for Drowsiness Detection
    Qian, Kun
    Koike, Tomoya
    Nakamura, Toru
    Schuller, Bjoern
    Yamamoto, Yoshiharu
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (08) : 11539 - 11548
  • [7] LEARNING TO FUSE LATENT REPRESENTATIONS FOR MULTIMODAL DATA
    Oyedotun, Oyebade K.
    Aouada, Djamila
    Ottersten, Bjoern
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 3122 - 3126
  • [8] Learning Disentangled Multimodal Representations for the Fashion Domain
    Saha, Amrita
    Nawhal, Megha
    Khapra, Mitesh M.
    Raykar, Vikas C.
    2018 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2018), 2018, : 557 - 566
  • [9] Multimodal Information Bottleneck: Learning Minimal Sufficient Unimodal and Multimodal Representations
    Mai, Sijie
    Zeng, Ying
    Hu, Haifeng
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 4121 - 4134
  • [10] DeepTraSynergy: drug combinations using multimodal deep learning with transformers
    Rafiei, Fatemeh
    Zeraati, Hojjat
    Abbasi, Karim
    Ghasemi, Jahan B.
    Parsaeian, Mahboubeh
    Masoudi-Nejad, Ali
    BIOINFORMATICS, 2023, 39 (08)