Energy-Efficient Object Tracking Using Adaptive ROI Subsampling and Deep Reinforcement Learning

Cited by: 1
Authors
Katoch, Sameeksha [1 ]
Iqbal, Odrika [1 ]
Spanias, Andreas [1 ]
Jayasuriya, Suren [1 ]
Affiliations
[1] Arizona State Univ, Sch Elect Comp & Energy Engn, Tempe, AZ 85281 USA
Keywords
Image sensors; Energy efficiency; Object tracking; Kalman filters; Cameras; Visualization; Target tracking; Reinforcement learning; energy optimization; adaptive subsampling; ROI tracking; COMPRESSION; ALGORITHMS; NETWORKS; MODEL;
DOI
10.1109/ACCESS.2023.3270776
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Recent innovations in ROI camera systems have opened up the avenue for exploring energy optimization techniques like adaptive subsampling. Generally speaking, image frame capture and read-out demand high power consumption. ROI camera systems make it possible to exploit the inverse relation between energy consumption and spatiotemporal pixel readout to optimize the power efficiency of the image sensor. To this end, we develop a reinforcement learning (RL) based adaptive subsampling framework which predicts ROI trajectories and reconfigures the image sensor on-the-fly for improved power efficiency of the image sensing pipeline. In our proposed framework, a pre-trained convolutional neural network (CNN) extracts rich visual features from incoming frames and a long short-term memory (LSTM) network predicts the region of interest (ROI) and subsampling pattern for the consecutive image frame. Based on the application and the difficulty level of object motion trajectory, the user can utilize either the predicted ROI or coarse subsampling pattern to switch off the pixels for sequential frame capture, thus saving energy. We have validated our proposed method by adapting existing trackers for the adaptive subsampling framework and evaluating them as competing baselines. As a proof-of-concept, our method outperforms the baselines and achieves an average AUC score of 0.5090 on three benchmarking datasets. We also characterize the energy-accuracy tradeoff of our method vs. the baselines and show that our approach is best suited for applications that demand both high visual tracking precision and low power consumption. On the TB100 dataset, our method achieves the highest AUC score of 0.5113 out of all the competing algorithms and requires a medium-level power consumption of approximately 4 W as per a generic energy model and an energy consumption of 1.9 mJ as per a mobile system energy model. 
Although other baselines are shown to have better performance in terms of power consumption, they are ill-suited for applications that require considerable tracking precision, making our method the ideal candidate in terms of power-accuracy tradeoff.
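The abstract's core energy argument, that sensor power scales with spatiotemporal pixel readout, so reading only a predicted ROI saves energy, can be illustrated with a short sketch. The snippet below is not the authors' implementation; it is a minimal NumPy illustration with hypothetical names (`roi_readout_mask`, `readout_fraction`) and an assumed `(x, y, w, h)` ROI format, showing how a predicted ROI translates into a binary pixel-readout mask and an estimated readout fraction.

```python
import numpy as np

def roi_readout_mask(frame_shape, roi, pad=0):
    """Build a binary readout mask for the next frame:
    1 = pixel is read out, 0 = pixel is switched off.
    roi = (x, y, w, h) in pixel coordinates (assumed format);
    pad optionally widens the ROI to tolerate prediction error."""
    h_img, w_img = frame_shape
    x, y, w, h = roi
    mask = np.zeros((h_img, w_img), dtype=np.uint8)
    x0, y0 = max(0, x - pad), max(0, y - pad)
    x1, y1 = min(w_img, x + w + pad), min(h_img, y + h + pad)
    mask[y0:y1, x0:x1] = 1
    return mask

def readout_fraction(mask):
    """Fraction of pixels read out; under the abstract's model,
    sensor readout energy scales roughly with this fraction."""
    return float(mask.mean())

# Example: a 64x48 predicted ROI on a 640x480 frame reads ~1% of pixels.
mask = roi_readout_mask((480, 640), (100, 120, 64, 48))
print(readout_fraction(mask))  # → 0.01
```

In the paper's pipeline this mask would be produced per frame from the LSTM's predicted ROI (or a coarser subsampling pattern) and used to reconfigure the image sensor before the next capture.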
Pages: 41995-42011
Page count: 17
Related Papers (50 records)
  • [31] Energy-Efficient and Interference-Aware VNF Placement with Deep Reinforcement Learning
    Mu, Yanyan
    Wang, Lei
    Zhao, Jin
    2021 IFIP NETWORKING CONFERENCE AND WORKSHOPS (IFIP NETWORKING), 2021,
  • [32] Delay-Sensitive Energy-Efficient UAV Crowdsensing by Deep Reinforcement Learning
    Dai, Zipeng
    Liu, Chi Harold
    Han, Rui
    Wang, Guoren
    Leung, Kin K.
    Tang, Jian
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2023, 22 (04) : 2038 - 2052
  • [33] Deep Reinforcement Learning for Energy-Efficient Edge Caching in Mobile Edge Networks
    Meng Deng
    Zhou Huan
    Jiang Kai
    Zheng Hantong
    Cao Yue
    Chen Peng
    China Communications, 2024, 21 (11) : 243 - 256
  • [34] Efficient Object Detection in Large Images Using Deep Reinforcement Learning
    Uzkent, Burak
    Yeh, Christopher
    Ermon, Stefano
    2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2020, : 1813 - 1822
  • [35] A Hybrid Spiking Neural Network Reinforcement Learning Agent for Energy-Efficient Object Manipulation
    Oikonomou, Katerina Maria
    Kansizoglou, Ioannis
    Gasteratos, Antonios
    MACHINES, 2023, 11 (02)
  • [36] REINFORCEMENT LEARNING FOR ENERGY-EFFICIENT WIRELESS TRANSMISSION
    Mastronarde, Nicholas
    van der Schaar, Mihaela
    2011 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2011, : 3452 - 3455
  • [37] An Energy-Efficient Object Tracking Algorithm in Sensor Networks
    Ren, Qianqian
    Gao, Hong
    Jiang, Shouxu
    Li, Jianzhong
    WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS, PROCEEDINGS, 2008, 5258 : 237 - 248
  • [38] Trace Pheromone-Based Energy-Efficient UAV Dynamic Coverage Using Deep Reinforcement Learning
    Cheng, Xu
    Jiang, Rong
    Sang, Hongrui
    Li, Gang
    He, Bin
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2024, 10 (03) : 1063 - 1074
  • [39] Energy-Efficient Multi-UAV Network using Multi-Agent Deep Reinforcement Learning
    Ju, Hyungyu
    Shim, Byonghyo
    2022 IEEE VTS ASIA PACIFIC WIRELESS COMMUNICATIONS SYMPOSIUM, APWCS, 2022, : 70 - 74
  • [40] Comfortable and energy-efficient speed control of autonomous vehicles on rough pavements using deep reinforcement learning
    Du, Yuchuan
    Chen, Jing
    Zhao, Cong
    Liu, Chenglong
    Liao, Feixiong
    Chan, Ching-Yao
    TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2022, 134