Mobile User Interface Adaptation Based on Usability Reward Model and Multi-Agent Reinforcement Learning

Cited by: 0
Authors
Vidmanov, Dmitry [1 ]
Alfimtsev, Alexander [1 ]
Affiliations
[1] Bauman Moscow State Technical University, Information Systems and Telecommunications, Moscow 105005, Russia
Keywords
deep learning; reinforcement learning; multi-agent systems; adaptive systems; mobile user interface; usability; user experience
DOI
10.3390/mti8040026
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Today, reinforcement learning is one of the most effective machine learning approaches for automatically adapting computer systems to user needs. However, integrating this technology into a digital product requires addressing a key challenge: defining the reward model in the digital environment. This paper proposes a usability reward model for multi-agent reinforcement learning. Well-known mathematical formulas for measuring usability metrics were analyzed in detail and incorporated into the reward model, which can employ any neural-network-based multi-agent reinforcement learning algorithm as its underlying learner. The paper presents a study using independent and actor-critic reinforcement learning algorithms to investigate their impact on the usability metrics of a mobile user interface. Computational experiments and usability tests were conducted in a purpose-built multi-agent environment for mobile user interfaces, enabling the implementation of various usage scenarios and real-time adaptation.
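The abstract refers to well-known usability formulas without reproducing them. The sketch below is a minimal Python illustration of how such a usability reward might be assembled, assuming ISO 9241-11 style metrics: effectiveness as task completion rate and time-based efficiency as completed tasks per unit time. All names, weights, and the error-rate penalty are illustrative assumptions, not the authors' published model.

```python
# Illustrative sketch of a usability-based reward signal for multi-agent RL.
# The metric definitions follow standard ISO 9241-11 style formulas; the
# weights and the error penalty are assumptions, not the paper's values.
from dataclasses import dataclass


@dataclass
class EpisodeStats:
    """Usage statistics gathered while a (simulated) user interacts with the UI."""
    tasks_completed: int   # tasks the user finished successfully
    tasks_attempted: int   # tasks the user started
    total_time: float      # seconds spent on all attempted tasks
    error_count: int       # erroneous interactions (mis-taps, backtracks)


def effectiveness(s: EpisodeStats) -> float:
    """Task completion rate in [0, 1]."""
    return s.tasks_completed / s.tasks_attempted if s.tasks_attempted else 0.0


def efficiency(s: EpisodeStats) -> float:
    """Time-based efficiency: completed tasks per second."""
    return s.tasks_completed / s.total_time if s.total_time > 0 else 0.0


def usability_reward(s: EpisodeStats,
                     w_eff: float = 0.5,
                     w_time: float = 0.3,
                     w_err: float = 0.2) -> float:
    """Weighted scalar reward combining the metrics; weights are illustrative."""
    error_rate = s.error_count / max(s.tasks_attempted, 1)
    return w_eff * effectiveness(s) + w_time * efficiency(s) - w_err * error_rate


if __name__ == "__main__":
    # Example: an adaptation episode where 4 of 5 tasks succeeded in 60 s.
    stats = EpisodeStats(tasks_completed=4, tasks_attempted=5,
                         total_time=60.0, error_count=2)
    print(f"reward = {usability_reward(stats):.3f}")
```

In a multi-agent setting, a scalar reward of this form could be returned to each interface-element agent at the end of an interaction episode, regardless of whether the underlying learners are independent or actor-critic agents as in the study.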
Pages: 21