Image classification with quantum pre-training and auto-encoders

Cited by: 7
Authors
Piat, Sebastien [1 ]
Usher, Nairi [2 ]
Severini, Simone [2 ]
Herbster, Mark [2 ]
Mansi, Tommaso [1 ]
Mountney, Peter [1 ]
Affiliations
[1] Siemens Healthineers, Medical Imaging Technologies, Princeton, NJ 08540, USA
[2] UCL, Department of Computer Science, London, England
Funding
Innovate UK; Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Quantum computing; machine learning; medical imaging; quantum machine learning
DOI
10.1142/S0219749918400099
Chinese Library Classification (CLC)
TP301 [Theory and Methods]
Subject classification code
081202
Abstract
Computer vision has a wide range of applications, from medical image analysis to robotics. Over the past few years, the field has been transformed by machine learning and stands to benefit from potential advances in quantum computing. The main challenge for processing images on current and near-term quantum devices is the limited amount of data such devices can handle: images can be large, multidimensional and have multiple color channels. Current machine learning approaches to computer vision that exploit quantum resources require significant manual pre-processing of the images to fit them onto the device. This paper proposes a framework for processing large-scale data on small quantum devices. The framework requires no dataset-specific processing or information, works on large grayscale and RGB images, and is capable of scaling to larger quantum hardware architectures as they become available. In the proposed approach, a classical autoencoder is trained to compress the image data to a size that can be loaded onto a quantum device. A Restricted Boltzmann Machine (RBM) is then trained on the D-Wave device using the compressed data, and the weights from the RBM are used to initialize a neural network for image classification. Results are demonstrated on two MNIST datasets and two medical imaging datasets.
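The abstract describes a three-stage pipeline: classical autoencoder compression, RBM pre-training, and weight transfer into a classifier. The PyTorch sketch below is a minimal illustration of that flow, not the authors' implementation: the layer sizes, the 16-unit latent code, and the use of one step of classical contrastive divergence (CD-1) in place of sampling from the D-Wave annealer are all assumptions made here for demonstration.

```python
# Minimal sketch of the pipeline outlined in the abstract (illustrative only).
# Assumptions: 784-pixel inputs, 16-dim latent code, classical CD-1 standing in
# for D-Wave sampling; none of these are the paper's actual configuration.
import torch
import torch.nn as nn

LATENT = 16  # assumed compressed size small enough to load onto a quantum device

# Stage 1: a classical autoencoder compresses images to LATENT features.
class AutoEncoder(nn.Module):
    def __init__(self, n_pixels=784):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_pixels, 128), nn.ReLU(),
                                     nn.Linear(128, LATENT), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(),
                                     nn.Linear(128, n_pixels), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Stage 2: an RBM pre-trained on the compressed data. The paper draws samples
# from a D-Wave device; one contrastive-divergence step stands in for that here.
class RBM(nn.Module):
    def __init__(self, n_visible=LATENT, n_hidden=LATENT):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(n_visible, n_hidden))
        self.b_v = nn.Parameter(torch.zeros(n_visible))
        self.b_h = nn.Parameter(torch.zeros(n_hidden))

    def cd1_step(self, v0, lr=0.1):
        h0 = torch.sigmoid(v0 @ self.W + self.b_h)          # positive phase
        v1 = torch.sigmoid(torch.bernoulli(h0) @ self.W.t() + self.b_v)
        h1 = torch.sigmoid(v1 @ self.W + self.b_h)          # negative phase
        with torch.no_grad():
            self.W += lr * (v0.t() @ h0 - v1.t() @ h1) / v0.shape[0]
            self.b_v += lr * (v0 - v1).mean(0)
            self.b_h += lr * (h0 - h1).mean(0)

# Stage 3: a classifier whose first layer is initialized from the RBM weights.
def make_classifier(rbm, n_classes=10):
    first = nn.Linear(LATENT, LATENT)
    with torch.no_grad():
        first.weight.copy_(rbm.W.t())
        first.bias.copy_(rbm.b_h)
    return nn.Sequential(first, nn.Sigmoid(), nn.Linear(LATENT, n_classes))

if __name__ == "__main__":
    x = torch.rand(32, 784)            # stand-in batch of flattened images
    ae, rbm = AutoEncoder(), RBM()
    z = ae.encoder(x).detach()         # compressed representation
    rbm.cd1_step(torch.bernoulli(z))   # one pre-training step on binarized codes
    clf = make_classifier(rbm)
    print(clf(z).shape)                # torch.Size([32, 10])
```

The point the abstract hinges on is the final step: the RBM's weights and hidden biases become the classifier's first layer, so the (quantum-trained) generative model serves purely as an initializer before classical fine-tuning.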
Pages: 14