Reflective Instruction Tuning: Mitigating Hallucinations in Large Vision-Language Models

Cited: 0
Authors
Zhang, Jinrui [1 ]
Wang, Teng [1 ,2 ]
Zhang, Haigang [3 ]
Lu, Ping [4 ]
Zheng, Feng [1 ,5 ]
Affiliations
[1] Southern Univ Sci & Technol, Shenzhen, Peoples R China
[2] Univ Hong Kong, Hong Kong, Peoples R China
[3] Shenzhen Polytech Univ, Shenzhen, Peoples R China
[4] ZTE Corp, Cloud Comp & IT Inst, Shenzhen, Peoples R China
[5] Peng Cheng Lab, Res Inst Multiple Agents & Embodied Intelligence, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Large Vision Language Models; Visual Instruction Tuning; Hallucination Mitigation;
DOI
10.1007/978-3-031-73113-6_12
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Large vision-language models (LVLMs) have shown promising performance on a variety of vision-language tasks. However, they remain susceptible to hallucinations, generating outputs misaligned with visual content or instructions. While various mitigation strategies have been proposed, they often neglect a key contributor to hallucinations: the lack of fine-grained reasoning supervision during training. Without intermediate reasoning steps, models may establish superficial shortcuts between instructions and responses, failing to internalize the inherent reasoning logic. To address this challenge, we propose reflective instruction tuning, which integrates rationale learning into visual instruction tuning. Unlike previous methods that learn only from responses, our approach has the model predict rationales justifying why responses are correct or incorrect. This fosters deeper engagement with the fine-grained reasoning underlying each response, thus enhancing the model's reasoning proficiency. To facilitate this approach, we propose REVERIE, the first large-scale instruction-tuning dataset with ReflEctiVE RatIonalE annotations. REVERIE comprises 115k machine-generated reasoning instructions, each meticulously annotated with a corresponding pair of correct and confusing responses, alongside comprehensive rationales elucidating why each response is correct or erroneous. Experimental results on multiple LVLM benchmarks reveal that reflective instruction tuning with the REVERIE dataset yields noticeable performance gains over the baseline model, demonstrating the effectiveness of reflecting on the rationales. The project page is at https://zjr2000.github.io/projects/reverie
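The abstract sketches a training recipe: each instruction is paired with a correct and a confusing response, plus rationales for why each one is right or wrong, and the model is supervised on the rationales as well as the response. Below is a minimal Python sketch of how one such record could be expanded into (prompt, target) supervision pairs; the field names, prompt templates, and example content are illustrative assumptions, not the actual REVERIE schema.

from dataclasses import dataclass

@dataclass
class ReverieRecord:
    # Hypothetical field names; the real REVERIE annotation schema may differ.
    image_id: str
    instruction: str
    correct_response: str
    confusing_response: str
    positive_rationale: str   # why the correct response is right
    negative_rationale: str   # why the confusing response is wrong

def expand_to_training_pairs(rec: ReverieRecord) -> list[tuple[str, str]]:
    """Expand one annotated record into (prompt, target) pairs.

    Beyond the standard instruction -> response target of visual instruction
    tuning, reflective tuning adds rationale-prediction targets, so the model
    must justify correctness and incorrectness rather than learn a shortcut
    from instruction to answer.
    """
    return [
        # Standard visual instruction tuning target.
        (rec.instruction, rec.correct_response),
        # Positive reflection: justify the correct response.
        (f"{rec.instruction}\nResponse: {rec.correct_response}\n"
         "Explain why this response is correct.",
         rec.positive_rationale),
        # Negative reflection: diagnose the confusing response.
        (f"{rec.instruction}\nResponse: {rec.confusing_response}\n"
         "Explain why this response is incorrect.",
         rec.negative_rationale),
    ]

if __name__ == "__main__":
    rec = ReverieRecord(
        image_id="example_0001",  # made-up identifier for illustration
        instruction="How many dogs are in the image?",
        correct_response="There are two dogs.",
        confusing_response="There are three dogs.",
        positive_rationale="Two dogs are visible, one on the grass and one near the bench.",
        negative_rationale="Only two dogs appear in the scene; no third dog is present.",
    )
    for prompt, target in expand_to_training_pairs(rec):
        print(prompt, "->", target, sep="\n")
        print()

In an actual fine-tuning run, each pair would presumably be serialized into the LVLM's chat template together with the image, with the loss computed on the target tokens as in standard visual instruction tuning.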
Pages: 196-213
Page count: 18