Low-loss data compression using deep learning framework with attention-based autoencoder

Cited by: 1
Authors
Sriram, S. [1 ]
Chitra, P. [1 ]
Sankar, V. Vijay [1 ]
Abirami, S. [1 ]
Durai, S. J. Rethina [1 ]
Affiliations
[1] Vellore Inst Technol, Sch Comp Sci & Engn, Chennai, Tamil Nadu, India
Keywords
deep learning; multi-layer autoencoder; compression ratio; attention; reconstruction loss; ALGORITHM;
DOI
10.1504/IJCSE.2023.129150
CLC classification
TP39 [Computer applications];
Subject classification codes
081203; 0835;
Abstract
With the rapid development of media, data compression plays a vital role in efficient data storage and transmission. Deep learning can advance compression research by offering technical avenues that overcome the limitations of traditional Windows archivers. The proposed work first investigates multi-layer autoencoder models, which achieve higher compression rates than traditional Windows archivers but suffer from reconstruction loss. To address this, an attention layer is introduced into the autoencoder to reduce both the difference between the encoder and decoder latent representations of an input and the difference between the original input and the reconstructed output. The proposed attention-based autoencoder is extensively evaluated on atmospheric and oceanic data obtained from the Centre for Development of Advanced Computing (CDAC). The results show that the proposed model achieves roughly an 89.7% higher compression rate than a traditional Windows archiver and 25% lower reconstruction loss than the multi-layer autoencoder.
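The core idea in the abstract, an attention-weighted latent code trained with a combined objective (input-vs-reconstruction error plus encoder-vs-decoder latent difference), can be sketched minimally. The following is an illustrative NumPy toy, not the paper's actual architecture: the linear encoder/decoder weights, the magnitude-softmax attention, and all dimensions are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 32-d input, 8-d latent code.
d_in, d_lat = 32, 8

# Toy linear encoder/decoder weights standing in for the trained
# multi-layer autoencoder described in the abstract.
W_enc = rng.normal(scale=0.1, size=(d_in, d_lat))
W_dec = rng.normal(scale=0.1, size=(d_lat, d_in))

def attention(z):
    """Soft attention over latent dimensions: re-weight each latent
    feature by a softmax of its magnitude (an illustrative choice)."""
    e = np.exp(np.abs(z))
    return z * (e / e.sum(axis=-1, keepdims=True))

def combined_loss(x):
    z_enc = x @ W_enc                 # encoder latent representation
    z_att = attention(z_enc)          # attention-refined latent code
    x_hat = z_att @ W_dec             # reconstructed output
    z_dec = x_hat @ W_enc             # latent of the reconstruction
    recon = np.mean((x - x_hat) ** 2)        # input vs. output term
    latent = np.mean((z_enc - z_dec) ** 2)   # encoder vs. decoder latent term
    return recon + latent

x = rng.normal(size=(4, d_in))       # a batch of 4 toy samples
loss = combined_loss(x)
```

Minimizing `combined_loss` over the weights would penalize both reconstruction error and the drift between encoder and decoder latent representations, which is the role the abstract assigns to the attention layer.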
Pages: 90-100
Page count: 11