Energy-Efficient Object Tracking Using Adaptive ROI Subsampling and Deep Reinforcement Learning

Cited by: 1
Authors
Katoch, Sameeksha [1]
Iqbal, Odrika [1]
Spanias, Andreas [1]
Jayasuriya, Suren [1]
Affiliations
[1] Arizona State University, School of Electrical, Computer and Energy Engineering, Tempe, AZ 85281, USA
Keywords
Image sensors; energy efficiency; object tracking; Kalman filters; cameras; visualization; target tracking; reinforcement learning; energy optimization; adaptive subsampling; ROI tracking; compression; algorithms; networks; model
DOI
10.1109/ACCESS.2023.3270776
Chinese Library Classification (CLC) number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Recent innovations in region-of-interest (ROI) camera systems have opened an avenue for exploring energy optimization techniques such as adaptive subsampling. Image frame capture and readout generally demand high power. ROI camera systems make it possible to exploit the direct relationship between spatiotemporal pixel readout and energy consumption (reading out fewer pixels consumes less energy) to optimize the power efficiency of the image sensor. To this end, we develop a reinforcement learning (RL) based adaptive subsampling framework that predicts ROI trajectories and reconfigures the image sensor on the fly for improved power efficiency of the image sensing pipeline. In our proposed framework, a pre-trained convolutional neural network (CNN) extracts rich visual features from incoming frames, and a long short-term memory (LSTM) network predicts the ROI and subsampling pattern for the subsequent image frame. Depending on the application and the difficulty of the object's motion trajectory, the user can apply either the predicted ROI or the coarse subsampling pattern to switch off pixels during sequential frame capture, thereby saving energy. We validate our proposed method by adapting existing trackers to the adaptive subsampling framework and evaluating them as competing baselines. As a proof of concept, our method outperforms the baselines, achieving an average AUC score of 0.5090 across three benchmark datasets. We also characterize the energy-accuracy tradeoff of our method against the baselines and show that our approach is best suited for applications that demand both high visual tracking precision and low power consumption. On the TB100 dataset, our method achieves the highest AUC score (0.5113) of all competing algorithms while requiring a medium-level power consumption of approximately 4 W under a generic energy model and an energy consumption of 1.9 mJ under a mobile-system energy model. Although some baselines consume less power, they are ill-suited for applications that require considerable tracking precision, making our method the ideal candidate in terms of the power-accuracy tradeoff.
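To make the pipeline in the abstract concrete, the following is a minimal PyTorch sketch of the CNN-plus-LSTM ROI predictor and the subsampling step. Everything here is illustrative rather than the authors' implementation: the names (RoiPredictor, apply_roi_mask, readout_energy), the ResNet-18 backbone, the hidden size, the (cx, cy, w, h) ROI parameterization, and the linear energy-per-pixel model are all assumptions; the paper's RL training loop and its two published energy models are not reproduced.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class RoiPredictor(nn.Module):
    """CNN feature extractor + LSTM that predicts the next frame's ROI."""

    def __init__(self, hidden_size=256):
        super().__init__()
        # Pre-trained CNN backbone (ResNet-18 here, an assumption) extracts
        # visual features from each incoming frame.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        # LSTM accumulates temporal context across the frame sequence.
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size,
                            batch_first=True)
        # Head regresses the ROI as (cx, cy, w, h), normalized to [0, 1].
        self.head = nn.Sequential(nn.Linear(hidden_size, 4), nn.Sigmoid())

    def forward(self, frame, state=None):
        feat = self.features(frame).flatten(1).unsqueeze(1)  # (B, 1, 512)
        out, state = self.lstm(feat, state)
        return self.head(out[:, -1]), state  # ROI for the *next* frame

def apply_roi_mask(frame, roi):
    """Emulate 'switching off' the pixels outside the predicted ROI."""
    _, _, h, w = frame.shape
    cx, cy, bw, bh = roi[0]
    x0, x1 = int((cx - bw / 2) * w), int((cx + bw / 2) * w)
    y0, y1 = int((cy - bh / 2) * h), int((cy + bh / 2) * h)
    mask = torch.zeros_like(frame)
    mask[:, :, max(y0, 0):min(y1, h), max(x0, 0):min(x1, w)] = 1.0
    return frame * mask, mask

def readout_energy(mask, joules_per_pixel=1e-9):
    """Toy linear energy model (an assumption, not the paper's two models):
    readout energy scales with the number of pixels actually read out."""
    return mask[0, 0].sum().item() * joules_per_pixel

# Usage: predict the ROI from frame t, then subsample frame t+1 with it.
predictor = RoiPredictor().eval()
with torch.no_grad():
    roi, state = predictor(torch.rand(1, 3, 224, 224))            # frame t
frame_t1, mask = apply_roi_mask(torch.rand(1, 3, 224, 224), roi)  # frame t+1
print(f"estimated readout energy: {readout_energy(mask):.2e} J")
```

In an actual ROI camera the predicted mask would be written to the sensor's readout configuration so that masked pixels are never digitized; zeroing pixels in software, as above, only emulates the readout pattern and saves no sensor energy by itself.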
Pages: 41995–42011
Page count: 17
Related Papers
50 items in total
  • [41] Energy-Efficient Cooperative Secure Communications in mmWave Vehicular Networks Using Deep Recurrent Reinforcement Learning
    Ju, Ying; Gao, Zipeng; Wang, Haoyu; Liu, Lei; Pei, Qingqi; Dong, Mianxiong; Mumtaz, Shahid; Leung, Victor C. M.
    IEEE Transactions on Intelligent Transportation Systems, 2024, 25(10): 14460–14475
  • [42] Energy-Efficient Resource Allocation with Dynamic Cache Using Reinforcement Learning
    Hu, Zeyu; Li, Zexu; Li, Yong
    2019 IEEE Globecom Workshops (GC Wkshps), 2019
  • [43] Energy-efficient Clock-Synchronization in IoT Using Reinforcement Learning
    Assylbek, Damir; Nadirkhanova, Aizhuldyz; Zorbas, Dimitrios
    2024 20th International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT), 2024: 244–248
  • [44] Energy-efficient heating control for nearly zero energy residential buildings with deep reinforcement learning
    Qin, Haosen; Yu, Zhen; Li, Tailu; Liu, Xueliang; Li, Li
    Energy, 2023, 264
  • [45] Energy-Efficient and Accelerated Resource Allocation in O-RAN Slicing Using Deep Reinforcement Learning and Transfer Learning
    Sherif, Heba; Ahmed, Eman; Kotb, Amira M.
    Cybernetics and Information Technologies, 2024, 24(3): 132–150
  • [46] Deep-LK for Efficient Adaptive Object Tracking
    Wang, Chaoyang; Galoogahi, Hamed Kiani; Lin, Chen-Hsuan; Lucey, Simon
    2018 IEEE International Conference on Robotics and Automation (ICRA), 2018: 627–634
  • [47] Deep Reinforcement Learning for Energy-Efficient Computation Offloading in Mobile-Edge Computing
    Zhou, Huan; Jiang, Kai; Liu, Xuxun; Li, Xiuhua; Leung, Victor C. M.
    IEEE Internet of Things Journal, 2022, 9(2): 1517–1530
  • [48] Energy-Efficient Motion Planning and Control for Robotic Arms via Deep Reinforcement Learning
    Shen, Tan; Liu, Xing; Dong, Yunlong; Yuan, Ye
    2022 34th Chinese Control and Decision Conference (CCDC), 2022: 5502–5507
  • [49] Deep Reinforcement Learning for Secrecy Energy-Efficient UAV Communication with Reconfigurable Intelligent Surface
    Tham, Mau-Luen; Wong, Yi Jie; Iqbal, Amjad; Bin Ramli, Nordin; Zhu, Yongxu; Dagiuklas, Tasos
    2023 IEEE Wireless Communications and Networking Conference (WCNC), 2023
  • [50] Energy-Efficient Mobile Crowdsensing by Unmanned Vehicles: A Sequential Deep Reinforcement Learning Approach
    Piao, Chengzhe; Liu, Chi Harold
    IEEE Internet of Things Journal, 2020, 7(7): 6312–6324