Data-Independent Feature Learning with Markov Random Fields in Convolutional Neural Networks

Cited: 3
|
Authors
Peng, Yao [1 ]
Hankins, Richard [1 ]
Yin, Hujun [1 ]
Affiliations
[1] Univ Manchester, Sch Elect & Elect Engn, Manchester M13 9PL, Lancs, England
Keywords
Convolutional neural networks; Image representation; Markov random fields; Gibbs distribution; Self-organising maps; Image classification; Image features; SPATIAL-INTERACTION; CONVERGENCE; MODELS; SPACE;
DOI
10.1016/j.neucom.2019.03.107
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In image classification, deriving robust image representations is a key process that determines the performance of vision systems. Numerous image features and descriptors have been developed manually over the years. As an alternative, deep neural networks, in particular convolutional neural networks (CNNs), have become popular for learning image features or representations from data and have demonstrated remarkable performance in many real-world applications. But CNNs typically require huge amounts of labelled data, which may be prohibitive in many applications, as well as long training times. This paper considers an alternative, data-independent means of obtaining features for CNNs. The proposed framework makes use of the Markov random field (MRF) and the self-organising map (SOM) to generate basic features and model both intra- and inter-image dependencies. Various MRF textures are first synthesised and then clustered by a convolutional, translation-invariant SOM to form generic image features. These features can be applied directly as early convolutional filters of the CNN, leading to a new way of deriving effective features for image classification. The MRF framework also offers a theoretical and transparent way to examine and determine the influence of image features on CNN performance. Comprehensive experiments on the MNIST, rotated MNIST, CIFAR-10 and CIFAR-100 datasets were conducted, with results outperforming most state-of-the-art models of similar complexity. (C) 2019 Elsevier B.V. All rights reserved.
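The pipeline the abstract describes (synthesise MRF textures, cluster texture patches with a SOM, use the learned prototypes as convolutional filters) can be sketched roughly as follows. This is a simplified illustration, not the paper's method: it uses a binary Ising-type MRF sampled by Gibbs sampling and a plain 1-D SOM in place of the paper's convolutional translation-invariant SOM, and all parameter values (`beta`, patch size, unit counts) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_mrf_texture(size=32, beta=0.8, sweeps=30):
    """Sample a binary Ising-type MRF texture via Gibbs sampling.

    beta controls the strength of spatial coupling between neighbouring
    pixels; values here are illustrative, not the paper's parameters.
    """
    x = rng.choice([-1.0, 1.0], size=(size, size))
    for _ in range(sweeps):
        for i in range(size):
            for j in range(size):
                # Sum over the 4-neighbourhood (toroidal boundary).
                s = (x[(i - 1) % size, j] + x[(i + 1) % size, j]
                     + x[i, (j - 1) % size] + x[i, (j + 1) % size])
                # Conditional probability P(x_ij = +1 | neighbours)
                # under the Gibbs distribution.
                p = 1.0 / (1.0 + np.exp(-2.0 * beta * s))
                x[i, j] = 1.0 if rng.random() < p else -1.0
    return x

def extract_patches(img, k=5, n=200):
    """Randomly sample n flattened k x k patches from one texture."""
    h, w = img.shape
    ys = rng.integers(0, h - k, n)
    xs = rng.integers(0, w - k, n)
    return np.stack([img[y:y + k, x:x + k].ravel() for y, x in zip(ys, xs)])

def train_som(data, n_units=8, epochs=10, lr0=0.5, sigma0=2.0):
    """Train a simple 1-D SOM over patch vectors.

    The unit prototypes (weight vectors) are what become the
    data-independent convolutional filters.
    """
    dim = data.shape[1]
    w = rng.standard_normal((n_units, dim)) * 0.1
    idx = np.arange(n_units)
    t_max = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for v in rng.permutation(data):
            bmu = np.argmin(((w - v) ** 2).sum(axis=1))  # best-matching unit
            lr = lr0 * (1 - t / t_max)                   # decaying learning rate
            sigma = sigma0 * (1 - t / t_max) + 0.5       # shrinking neighbourhood
            h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
            w += lr * h[:, None] * (v - w)
            t += 1
    return w

# Pipeline: MRF textures -> patches -> SOM prototypes -> 5x5 filters.
textures = [gibbs_mrf_texture(beta=b) for b in (0.4, 0.8)]
patches = np.concatenate([extract_patches(tx) for tx in textures])
filters = train_som(patches).reshape(-1, 5, 5)
print(filters.shape)  # -> (8, 5, 5)
```

The resulting `filters` array could then be loaded as the fixed weights of a CNN's first convolutional layer, which is the sense in which the features are data-independent: no labelled images are used to learn them.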
Pages: 24 / 35
Number of pages: 12
Related papers
50 records total
  • [21] Extended Siamese Convolutional Neural Networks for Discriminative Feature Learning
    Lee, Sangyun
    Hong, Sungjun
    INTERNATIONAL JOURNAL OF FUZZY LOGIC AND INTELLIGENT SYSTEMS, 2022, 22 (04) : 339 - 349
  • [22] Ensemble feature learning for material recognition with convolutional neural networks
    Bian, Peng
    Li, Wanwan
    Jin, Yi
    Zhi, Ruicong
    EURASIP JOURNAL ON IMAGE AND VIDEO PROCESSING, 2018,
  • [23] Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks
    Dosovitskiy, Alexey
    Fischer, Philipp
    Springenberg, Jost Tobias
    Riedmiller, Martin
    Brox, Thomas
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2016, 38 (09) : 1734 - 1747
  • [25] Segmentation of sonar imagery using convolutional neural networks and Markov random field
    Liu, Peng
    Song, Yan
    MULTIDIMENSIONAL SYSTEMS AND SIGNAL PROCESSING, 2020, 31 (01) : 21 - 47
  • [26] Relational Neural Markov Random Fields
    Chen, Yuqiao
    Natarajan, Sriraam
    Ruozzi, Nicholas
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151
  • [27] Loss networks and Markov random fields
    Zachary, S
    Ziedins, I
    JOURNAL OF APPLIED PROBABILITY, 1999, 36 (02) : 403 - 414
  • [28] Functional data learning using convolutional neural networks
    Galarza, J.
    Oraby, T.
    MACHINE LEARNING-SCIENCE AND TECHNOLOGY, 2024, 5 (01):
  • [29] LEARNING IN GAUSSIAN MARKOV RANDOM FIELDS
    Riedl, Thomas J.
    Singer, Andrew C.
    Choi, Jun Won
    2010 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2010, : 3070 - 3073
  • [30] Multiscale Bayesian texture segmentation using neural networks and Markov random fields
    Kim, Tae Hyung
    Eom, Il Kyu
    Kim, Yoo Shin
    Neural Computing and Applications, 2009, 18 : 141 - 155