Effect-driven Dynamic Selection of Physical Media for Visual IoT Services using Reinforcement Learning

Cited by: 3
Authors
Baek, KyeongDeok [1 ]
Ko, In-Young [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Sch Comp, Daejeon, South Korea
Funding
National Research Foundation of Singapore;
Keywords
effect-driven dynamic medium selection; visual service effectiveness; quality of experience; reinforcement learning; internet of things;
DOI
10.1109/ICWS.2019.00019
CLC number
TP39 [Computer applications];
Discipline codes
081203 ; 0835 ;
Abstract
Recent advances in Internet of Things (IoT) technologies have encouraged web services to expand their provision boundary into physical environments by utilizing IoT devices. IoT services that generate and deliver physical effects to users through a space use such IoT devices as media to interact with the physical environment. Existing studies on dynamic service selection have considered only network-level quality of service (QoS) attributes, which cannot be used to evaluate the quality of the delivery of physical effects from a user's perspective. Furthermore, to provide services in a continuous manner, dynamic selection of physical media is essential. Herein, we propose a new metric called visual service effectiveness to evaluate how well a visual effect, generated using an IoT device as a medium, can be delivered to a user. Based on this metric, we also propose an effect-driven dynamic medium selection agent (EDMS-Agent) that conducts medium selection at runtime to maximize the visual service effectiveness and can be trained using reinforcement learning algorithms. We evaluated the EDMS-Agent in several experiments in simulated IoT environments. The results show that a simple distance-based metric is insufficient for measuring the quality of physical effects from the user's perspective, and that the EDMS-Agent outperforms the baselines in terms of effectiveness by learning the optimal policy for selecting media.
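The abstract describes an agent that learns, via reinforcement learning, which physical medium (e.g., which display) to select as the user moves, so as to maximize visual service effectiveness over time. As an illustrative sketch only, the following toy tabular Q-learning loop shows the general idea; the display positions, the distance-decaying effectiveness function, the state discretization, and all hyperparameters below are invented assumptions for illustration, not the authors' EDMS-Agent.

```python
import random
from collections import defaultdict

random.seed(0)

# Assumed setup: three fixed displays along a 1-D corridor (positions are invented).
MEDIA_POSITIONS = [0.0, 5.0, 10.0]
N_ACTIONS = len(MEDIA_POSITIONS)
ALPHA, GAMMA, EPSILON = 0.2, 0.5, 0.2  # illustrative hyperparameters

def effectiveness(user_pos, medium):
    # Toy stand-in for visual service effectiveness: decays with user-medium distance.
    return 1.0 / (1.0 + abs(user_pos - MEDIA_POSITIONS[medium]))

Q = defaultdict(float)  # tabular action-value function Q[(state, action)]

def select(state):
    # epsilon-greedy medium selection
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[(state, a)])

for episode in range(2000):
    pos = 0.0
    for _ in range(20):                 # the user walks along the corridor
        s = round(pos)                  # discretized user position as the state
        a = select(s)                   # choose which medium delivers the effect
        r = effectiveness(pos, a)       # reward = effectiveness of the chosen medium
        pos = min(pos + 0.5, 10.0)      # user moves; transition is action-independent
        s2 = round(pos)
        best_next = max(Q[(s2, a2)] for a2 in range(N_ACTIONS))
        # Standard Q-learning update toward reward plus discounted best next value.
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

# Greedy policy per discretized position: which medium to use where.
policy = {s: max(range(N_ACTIONS), key=lambda a: Q[(s, a)]) for s in range(11)}
print(policy)
```

In this toy setting the learned greedy policy should hand the user over from one display to the next as the effectiveness of the current medium decays with distance, which is the continuous-provision behavior the dynamic medium selection aims for.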
Pages: 41 - 49
Page count: 9
Related papers
50 records in total
  • [1] Dynamic and Effect-Driven Output Service Selection for IoT Environments Using Deep Reinforcement Learning
    Baek, KyeongDeok
    Ko, In-Young
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (04) : 3339 - 3355
  • [2] Effect-Driven Selection of Web of Things Services in Cyber-Physical Systems Using Reinforcement Learning
    Baek, KyeongDeok
    Ko, In-Young
    [J]. WEB ENGINEERING (ICWE 2019), 2019, 11496 : 554 - 559
  • [3] Dynamic Algorithm Selection Using Reinforcement Learning
    Armstrong, Warren
    Christen, Peter
    McCreath, Eric
    Rendell, Alistair P.
    [J]. AIDM 2006: INTERNATIONAL WORKSHOP ON INTEGRATING AI AND DATA MINING, 2006, : 18 - +
  • [4] Trust-driven reinforcement selection strategy for federated learning on IoT devices
    Rjoub, Gaith
    Wahab, Omar Abdel
    Bentahar, Jamal
    Bataineh, Ahmed
    [J]. COMPUTING, 2024, 106 (04) : 1273 - 1295
  • [5] Dynamic assembly sequence selection using reinforcement learning
    Lowe, G
    Shirinzadeh, B
    [J]. 2004 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-5, PROCEEDINGS, 2004, : 2633 - 2638
  • [6] Correlation Filter Selection for Visual Tracking Using Reinforcement Learning
    Xie, Yanchun
    Xiao, Jimin
    Huang, Kaizhu
    Thiyagalingam, Jeyarajan
    Zhao, Yao
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30 (01) : 192 - 204
  • [7] Dynamic pricing of regulated field services using reinforcement learning
    Mandania, Rupal
    Oliveira, Fernando S.
    [J]. IISE TRANSACTIONS, 2023, 55 (10) : 1022 - 1034
  • [8] Pattern Driven Dynamic Scheduling Approach using Reinforcement Learning
    Wei Yingzi
    Jiang Xinli
    Hao Pingbo
    Gu Kanfeng
    [J]. 2009 IEEE INTERNATIONAL CONFERENCE ON AUTOMATION AND LOGISTICS (ICAL 2009), VOLS 1-3, 2009, : 514 - +
  • [9] Optimal resource allocation using reinforcement learning for IoT content-centric services
    Gai, Keke
    Qiu, Meikang
    [J]. APPLIED SOFT COMPUTING, 2018, 70 : 12 - 21