Reflective Instruction Tuning: Mitigating Hallucinations in Large Vision-Language Models

Cited by: 0
Authors
Zhang, Jinrui [1 ]
Wang, Teng [1 ,2 ]
Zhang, Haigang [3 ]
Lu, Ping [4 ]
Zheng, Feng [1 ,5 ]
Affiliations
[1] Southern Univ Sci & Technol, Shenzhen, Peoples R China
[2] Univ Hong Kong, Hong Kong, Peoples R China
[3] Shenzhen Polytech Univ, Shenzhen, Peoples R China
[4] ZTE Corp, Cloud Comp & IT Inst, Shenzhen, Peoples R China
[5] Peng Cheng Lab, Res Inst Multiple Agents & Embodied Intelligence, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Large Vision Language Models; Visual Instruction Tuning; Hallucination Mitigation;
DOI
10.1007/978-3-031-73113-6_12
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large vision-language models (LVLMs) have shown promising performance on a variety of vision-language tasks. However, they remain susceptible to hallucinations, generating outputs misaligned with visual content or instructions. While various mitigation strategies have been proposed, they often neglect a key contributor to hallucinations: the lack of fine-grained reasoning supervision during training. Without intermediate reasoning steps, models may establish superficial shortcuts between instructions and responses, failing to internalize the inherent reasoning logic. To address this challenge, we propose reflective instruction tuning, which integrates rationale learning into visual instruction tuning. Unlike previous methods that learn only from responses, our approach entails the model predicting rationales justifying why responses are correct or incorrect. This fosters deeper engagement with the fine-grained reasoning underlying each response, thereby enhancing the model's reasoning proficiency. To facilitate this approach, we propose REVERIE, the first large-scale instruction-tuning dataset with ReflEctiVE RatIonalE annotations. REVERIE comprises 115k machine-generated reasoning instructions, each meticulously annotated with a corresponding pair of correct and confusing responses, alongside comprehensive rationales elucidating why each response is correct or erroneous. Experimental results on multiple LVLM benchmarks reveal that reflective instruction tuning with the REVERIE dataset yields a noticeable performance gain over the baseline model, demonstrating the effectiveness of reflecting on the rationales. The project page is at https://zjr2000.github.io/projects/reverie
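As a concrete illustration of the reflective instruction tuning recipe the abstract describes, the minimal sketch below (not the authors' released code; all field names such as correct_response and positive_rationale are hypothetical) shows how one REVERIE-style annotated record, an instruction paired with a correct and a confusing response plus a rationale for each, might be expanded into three supervision targets: the standard response prediction, plus positive and negative rationale prediction.

```python
# A minimal sketch (assumed data schema, not the authors' code) of expanding
# one REVERIE-style record into reflective instruction-tuning samples.
from typing import Dict, List

def build_reflective_samples(record: Dict) -> List[Dict]:
    """Expand one annotated record into three (prompt -> target) samples."""
    return [
        # 1) Standard visual instruction tuning: predict the correct response.
        {
            "image": record["image"],
            "prompt": record["instruction"],
            "target": record["correct_response"],
        },
        # 2) Positive rationale: justify why the correct response is right.
        {
            "image": record["image"],
            "prompt": (f"{record['instruction']}\n"
                       f"Response: {record['correct_response']}\n"
                       "Explain why this response is correct."),
            "target": record["positive_rationale"],
        },
        # 3) Negative rationale: explain why the confusing response is wrong.
        {
            "image": record["image"],
            "prompt": (f"{record['instruction']}\n"
                       f"Response: {record['confusing_response']}\n"
                       "Explain why this response is incorrect."),
            "target": record["negative_rationale"],
        },
    ]

# Toy usage with a made-up record:
record = {
    "image": "example.jpg",
    "instruction": "Is there a dog in the image?",
    "correct_response": "No, the image shows only a cat on a sofa.",
    "confusing_response": "Yes, a dog is lying on the sofa.",
    "positive_rationale": "The only animal visible is a cat; no dog appears.",
    "negative_rationale": "It asserts a dog that is not present in the image.",
}
for sample in build_reflective_samples(record):
    print(sample["prompt"], "->", sample["target"])
```

Under this reading, training on targets 2 and 3 in addition to target 1 is what distinguishes reflective tuning from response-only instruction tuning: the model must also produce the fine-grained reasoning behind correct and hallucinated answers.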
Pages: 196 - 213
Page count: 18
Related Papers
50 records in total
  • [1] Mitigating Hallucinations in Large Vision-Language Models with Instruction Contrastive Decoding
    Wang, Xintong
    Pan, Jingheng
    Ding, Liang
    Biemann, Chris
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 15840 - 15853
  • [2] Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models
    Luo, Gen
    Zhou, Yiyi
    Ren, Tianhe
    Chen, Shengxin
    Sun, Xiaoshuai
    Ji, Rongrong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [3] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding
    Leng, Sicong
    Zhang, Hang
    Chen, Guanzheng
    Li, Xin
Lu, Shijian
    Miao, Chunyan
    Bing, Lidong
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 13872 - 13882
  • [4] Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models
    Ma, Chengcheng
    Liu, Yang
    Deng, Jiankang
    Xie, Lingxi
    Dong, Weiming
    Xu, Changsheng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (09) : 4616 - 4629
  • [5] InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning
    Dai, Wenliang
    Li, Junnan
    Li, Dongxu
    Tiong, Anthony Meng Huat
    Zhao, Junqi
    Wang, Weisheng
    Li, Boyang
    Fung, Pascale
    Hoi, Steven
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [6] Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models
    Wu, Junfei
    Liu, Qiang
    Wang, Ding
    Zhang, Jinghao
    Wu, Shu
    Wang, Liang
    Tan, Tieniu
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 6944 - 6962
  • [7] Adversarial Prompt Tuning for Vision-Language Models
    Zhang, Jiaming
    Ma, Xingjun
    Wang, Xin
    Qiu, Lingyu
    Wang, Jiaqi
    Jiang, Yu-Gang
    Sang, Jitao
    COMPUTER VISION - ECCV 2024, PT XLV, 2025, 15103 : 56 - 72
  • [8] Task Residual for Tuning Vision-Language Models
    Yu, Tao
    Lu, Zhihe
    Jin, Xin
    Chen, Zhibo
    Wang, Xinchao
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 10899 - 10909
  • [9] SkyEyeGPT: Unifying remote sensing vision-language tasks via instruction tuning with large language model
    Zhan, Yang
    Xiong, Zhitong
    Yuan, Yuan
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2025, 221 : 64 - 77
  • [10] Exploiting Semantic Reconstruction to Mitigate Hallucinations in Vision-Language Models
    Kim, Minchan
    Kim, Minyeong
    Bae, Junik
    Choi, Suhwan
    Kim, Sungkyung
Chang, Buru
    COMPUTER VISION - ECCV 2024, PT LXXXVI, 2025, 15144 : 236 - 252