Development of an Adaptive User Support System Based on Multimodal Large Language Models

Cited by: 0
Authors
Wang, Wei [1 ]
Li, Lin [2 ]
Wickramathilaka, Shavindra [1 ]
Grundy, John [1 ]
Khalajzadeh, Hourieh [3 ]
Obie, Humphrey O. [1 ]
Madugalla, Anuradha [1 ]
Affiliations
[1] Monash Univ, Dept Software Syst & Cybersecur, Melbourne, Vic, Australia
[2] RMIT Univ, Dept Informat Syst & Business Analyt, Melbourne, Vic, Australia
[3] Deakin Univ, Sch Informat Technol, Melbourne, Vic, Australia
Keywords
Adaptive User Support; User Interface; Multimodal Large Language Models (MLLMs)
DOI
10.1109/VL/HCC60511.2024.00044
Abstract
As software systems become more complex, some users find it challenging to use these tools efficiently, leading to frustration and decreased productivity. We address the shortcomings of conventional user support mechanisms in software by creating and assessing a user support system that integrates Multimodal Large Language Models (MLLMs) to produce support messages. Our system first segments the user interface so that its elements can serve as references for selection, and asks users to specify their preferences for support messages. The system then generates a personalised support message for each user. We propose that user support systems enhanced with MLLMs can provide more efficient and bespoke assistance than conventional methods.
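The workflow the abstract describes (segment the interface, collect the user's support-message preferences, then prompt an MLLM for a personalised message) can be sketched as follows. This is an illustrative outline only, not the authors' implementation: `segment_ui`, `UserPreferences`, `build_prompt`, and the stubbed `call_mllm` are hypothetical names, and the MLLM call is replaced by a stub so the sketch is self-contained.

```python
from dataclasses import dataclass

@dataclass
class UserPreferences:
    """Support-message preferences a user specifies up front (hypothetical fields)."""
    tone: str    # e.g. "step-by-step" or "conversational"
    detail: str  # e.g. "brief" or "detailed"

def segment_ui(regions):
    """Index UI regions so the user can select one as a reference.

    Stubbed here: a real system would use a vision model to detect
    and label interface elements from a screenshot."""
    return {i: region for i, region in enumerate(regions)}

def build_prompt(segments, selected_id, prefs):
    """Compose the prompt text sent to the MLLM for the selected element."""
    target = segments[selected_id]
    return (f"Explain the '{target}' element of this interface. "
            f"Tone: {prefs.tone}; detail level: {prefs.detail}.")

def call_mllm(prompt):
    """Stub for the MLLM call; a real system would send the prompt
    together with the UI screenshot to a multimodal model API."""
    return f"[support message generated from prompt: {prompt}]"

def generate_support_message(regions, selected_id, prefs):
    """End-to-end pipeline: segment, build prompt, query the model."""
    segments = segment_ui(regions)
    prompt = build_prompt(segments, selected_id, prefs)
    return call_mllm(prompt)

msg = generate_support_message(
    ["search bar", "export button", "filter panel"],
    selected_id=1,
    prefs=UserPreferences(tone="step-by-step", detail="brief"),
)
print(msg)
```

The key design point implied by the abstract is that personalisation enters in two places: the segmented UI element the user selects, and the stated preferences that shape the prompt.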
Pages: 344-347
Page count: 4