Q-Learning Cell Selection for Femtocell Networks: Single- and Multi-User Case

Cited by: 0
Authors
Dhahri, Chaima [1 ]
Ohtsuki, Tomoaki [1 ]
Affiliations
[1] Keio Univ, Dept Comp & Informat Sci, Yokohama, Kanagawa 223, Japan
Keywords
DOI: not available
CLC Number: TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes: 0808; 0809
Abstract
In this paper, we focus on user-centered handover decision making in open-access non-stationary femtocell networks. Traditionally, such a handover mechanism is based on a measured channel/cell quality metric such as the channel capacity between the user and the target cell. However, the throughput experienced by the user is time-varying because of channel conditions, i.e., propagation effects and receiver location. In this context, the user's decision can depend not only on the current state of the network but also on possible future states (the horizon). To this end, we need a learning algorithm that can predict, based on past experience, the best performing cell in the future. We present in this paper a reinforcement learning (RL) framework as a generic solution to the cell selection problem in a non-stationary femtocell network: without prior knowledge of the environment, it selects a target cell by exploring past cell behavior and predicting each cell's potential future state using the Q-learning algorithm. Our algorithm aims to balance the number of handovers against user capacity, taking into account the dynamic change of the environment. Simulation results demonstrate that our solution offers opportunistic-like capacity performance with fewer handovers.
Pages: 4975-4980
Page count: 6
Related Papers
50 records in total
  • [1] Adaptive Q-Learning Cell Selection Method for Open-Access Femtocell Networks: Multi-User Case
    Dhahri, Chaima
    Ohtsuki, Tomoaki
    [J]. IEICE TRANSACTIONS ON COMMUNICATIONS, 2014, E97B (08) : 1679 - 1688
  • [2] A Cooperative Q-learning Approach for Distributed Resource Allocation in Multi-user Femtocell Networks
    Saad, Hussein
    Mohamed, Amr
    ElBatt, Tamer
    [J]. 2014 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2014, : 1490 - 1495
  • [3] Cell Selection Mechanism Based on Q-learning Environment in Femtocell LTE-A Networks
    Bathich, Ammar
    Suliman, Saiful Izwan
    Mansor, Hj Mohd Asri Hj
    Ali, Sinan Ghassan Abid
    Abdulla, Raed
    [J]. JOURNAL OF ICT RESEARCH AND APPLICATIONS, 2021, 15 (01) : 56 - 70
  • [4] Cell Selection in Two-Tier Femtocell Networks Using Q-Learning Algorithm
    Tan, Xu
    Luan, Xi
    Cheng, Yuxin
    Liu, Aimin
    Wu, Jianjun
    [J]. 2014 16TH INTERNATIONAL CONFERENCE ON ADVANCED COMMUNICATION TECHNOLOGY (ICACT), 2014, : 1031 - 1035
  • [5] Multi-agent Q-Learning of Channel Selection in Multi-user Cognitive Radio Systems: A Two by Two Case
    Li, Husheng
    [J]. 2009 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC 2009), VOLS 1-9, 2009, : 1893 - 1898
  • [6] A Cooperative Uplink Transmission Technique for the Single- and Multi-User Case
    Tsinos, Christos G.
    Berberidis, Kostas
    [J]. 2010 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS - ICC 2010, 2010
  • [7] Global Q-Learning Approach for Power Allocation in Femtocell Networks
    Alenezi, Abdulmajeed M.
    Hamdi, Khairi
    [J]. INTELLIGENT DATA ENGINEERING AND AUTOMATED LEARNING - IDEAL 2019, PT I, 2019, 11871 : 220 - 228
  • [8] Pricing Scheme based Nash Q-Learning Flow Control for Multi-user Network
    Li, Xin
    Yu, Haibin
    [J]. MATERIALS, MECHATRONICS AND AUTOMATION, PTS 1-3, 2011, 467-469 : 847 - 852
  • [9] Cell Selection Using Distributed Q-Learning in Heterogeneous Networks
    Kudo, Toshihito
    Ohtsuki, Tomoaki
    [J]. 2013 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA), 2013
  • [10] Multi-User Mm Wave Beam Tracking via Multi-Agent Deep Q-Learning
    MENG Fan
    HUANG Yongming
    LU Zhaohua
    XIAO Huahua
    [J]. ZTE Communications, 2023, 21 (02) : 53 - 60