Knowledge Distillation on Cross-Modal Adversarial Reprogramming for Data-Limited Attribute Inference

Cited by: 3
Authors
Li, Quan [1 ]
Chen, Lingwei [2 ]
Jing, Shixiong [1 ]
Wu, Dinghao [1 ]
Affiliations
[1] Penn State Univ, University Pk, PA 16802 USA
[2] Wright State Univ, Dayton, OH 45435 USA
Keywords
Attribute Inference; Adversarial Reprogramming; Data-limited Learning; Knowledge Distillation;
DOI
10.1145/3543873.3587313
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Social media generates a rich source of text data with intrinsic user attributes (e.g., age, gender), which different parties benefit from disclosing. Attribute inference can be cast as a text classification problem, which, however, suffers from labeled-data scarcity. To address this challenge, we propose a data-limited learning model that distills knowledge on adversarial reprogramming of a vision transformer (ViT) for attribute inference. Not only does this novel cross-modal model transfer the powerful learning capability of ViT, but it also leverages unlabeled texts to reduce the demand for labeled data. Experiments on social media datasets demonstrate the state-of-the-art performance of our model on data-limited attribute inference.
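The core mechanism the abstract describes, cross-modal adversarial reprogramming, repurposes a frozen pretrained image classifier for a text task: a learned embedding maps each token to a pixel patch, the patches are tiled into an "image," and the image classifier's output labels are remapped to the target attribute classes. The sketch below illustrates only this pipeline shape; all dimensions and names are hypothetical toy choices, and the frozen ViT is stubbed as a fixed random linear map rather than a real pretrained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative only, not from the paper).
VOCAB, PATCH, IMG = 100, 4, 16          # vocabulary, patch side, image side
N_IMG_CLASSES, N_ATTR_CLASSES = 10, 2   # source labels, target attribute labels

# Trainable part of reprogramming: one PATCH x PATCH pixel block per token.
token_patches = rng.normal(size=(VOCAB, PATCH, PATCH))

def reprogram(text_ids):
    """Embed a token-id sequence into an IMG x IMG 'image' by tiling patches."""
    img = np.zeros((IMG, IMG))
    per_row = IMG // PATCH
    for i, t in enumerate(text_ids[: per_row * per_row]):
        r, c = divmod(i, per_row)
        img[r * PATCH:(r + 1) * PATCH, c * PATCH:(c + 1) * PATCH] = token_patches[t]
    return img

# Frozen pretrained image classifier, stubbed as a fixed random linear map.
W_frozen = rng.normal(size=(IMG * IMG, N_IMG_CLASSES))

def vision_model(img):
    return img.reshape(-1) @ W_frozen   # logits over source image classes

# Label remapping: each source image class is assigned to an attribute class.
label_map = np.arange(N_IMG_CLASSES) % N_ATTR_CLASSES

def infer_attribute(text_ids):
    """Reprogrammed forward pass: text -> image -> frozen model -> attribute."""
    logits = vision_model(reprogram(text_ids))
    attr_scores = np.zeros(N_ATTR_CLASSES)
    np.add.at(attr_scores, label_map, logits)   # aggregate remapped logits
    return int(np.argmax(attr_scores))
```

In training, only `token_patches` (and optionally the label mapping) would be optimized while the vision model stays frozen; the paper's distillation step then uses this reprogrammed model's predictions on unlabeled text as soft targets for a student, reducing the need for labeled data.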
Pages: 65-68
Page count: 4
Related Papers
50 records in total
  • [1] Cross-modal Adversarial Reprogramming
    Neekhara, Paarth
    Hussain, Shehzeen
    Du, Jinglong
    Dubnov, Shlomo
    Koushanfar, Farinaz
    McAuley, Julian
    [J]. 2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 2898 - 2906
  • [2] CROSS-MODAL KNOWLEDGE DISTILLATION FOR ACTION RECOGNITION
    Thoker, Fida Mohammad
    Gall, Juergen
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 6 - 10
  • [3] Acoustic NLOS Imaging with Cross-Modal Knowledge Distillation
    Shin, Ui-Hyeon
    Jang, Seungwoo
    Kim, Kwangsu
    [J]. PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 1405 - 1413
  • [4] Unsupervised Deep Cross-Modal Hashing by Knowledge Distillation for Large-scale Cross-modal Retrieval
    Li, Mingyong
    Wang, Hongya
    [J]. PROCEEDINGS OF THE 2021 INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL (ICMR '21), 2021, : 183 - 191
  • [5] Adversarial Cross-Modal Retrieval
    Wang, Bokun
    Yang, Yang
    Xu, Xing
    Hanjalic, Alan
    Shen, Heng Tao
    [J]. PROCEEDINGS OF THE 2017 ACM MULTIMEDIA CONFERENCE (MM'17), 2017, : 154 - 162
  • [6] Cross-Modal Knowledge Distillation with Dropout-Based Confidence
    Cho, Won Ik
    Kim, Jeunghun
    Kim, Nam Soo
    [J]. PROCEEDINGS OF 2022 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2022, : 653 - 657
  • [7] Semi-Supervised Knowledge Distillation for Cross-Modal Hashing
    Su, Mingyue
    Gu, Guanghua
    Ren, Xianlong
    Fu, Hao
    Zhao, Yao
    [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 662 - 675
  • [8] Cross-modal knowledge distillation for continuous sign language recognition
    Gao, Liqing
    Shi, Peng
    Hu, Lianyu
    Feng, Jichao
    Zhu, Lei
    Wan, Liang
    Feng, Wei
    [J]. NEURAL NETWORKS, 2024, 179
  • [9] Progressive Cross-modal Knowledge Distillation for Human Action Recognition
    Ni, Jianyuan
    Ngu, Anne H. H.
    Yan, Yan
    [J]. PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 5903 - 5912
  • [10] HardVD: High-capacity cross-modal adversarial reprogramming for data-efficient vulnerability detection
    Tian, Zhenzhou
    Li, Haojiang
    Sun, Hanlin
    Chen, Yanping
    Chen, Lingwei
    [J]. INFORMATION SCIENCES, 2025, 686