STEREOPSIS BY CONSTRAINT LEARNING FEEDFORWARD NEURAL NETWORKS

Cited by: 10
Authors
KHOTANZAD, A
BOKIL, A
LEE, YW
Affiliation
[1] Image Processing and Analysis Laboratory, Electrical Engineering Department, Southern Methodist University, Dallas, TX
DOI
10.1109/72.207620
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents a novel neural network (NN) approach to the problem of stereopsis. The correspondence problem (finding the correct matches between pixels of the epipolar lines of the stereo pair from among all possible matches) is posed as a noniterative many-to-one mapping. Two multilayer feedforward NNs are used to learn and encode this complex nonlinear mapping via the back-propagation learning rule and a training set. The first NN is a conventional fully connected net, while the second is a sparsely connected NN with a fixed number of hidden-layer nodes. Three variations of the sparsely connected NN are considered. An important aspect of this technique is that none of the typical constraints, such as uniqueness and continuity, is explicitly imposed. All applicable constraints are learned and internally coded by the NNs, making them more flexible and more accurate than existing methods. The approach is successfully tested on several random-dot stereograms. It is shown that the nets can generalize their learned mappings to cases outside their training sets and to noisy images. Advantages over the Marr-Poggio algorithm are discussed, and the NNs' performance is shown to be superior.
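The core idea in the abstract — learning the match/no-match mapping from examples instead of hand-coding matching constraints — can be illustrated with a toy sketch. This is not the authors' implementation: the window width, network size, and training details below are illustrative assumptions. In a binary random-dot stereogram, a candidate correspondence on an epipolar line is correct exactly when a small left-image window equals the corresponding right-image window; here a one-hidden-layer feedforward net trained by plain back-propagation is left to discover that rule on its own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: compare 3-pixel binary windows from the two
# epipolar lines; label 1 ("match") iff the windows are identical.
W = 3
# Enumerate every left-window/right-window bit pattern as one training input.
X = np.array([[int(b) for b in format(i, f"0{2 * W}b")]
              for i in range(2 ** (2 * W))], dtype=float)
y = (X[:, :W] == X[:, W:]).all(axis=1).astype(float)[:, None]

# Per-example weights to balance the rare "match" class against "no match",
# so the net cannot settle for predicting "no match" everywhere.
n_pos = y.sum()
n_neg = len(y) - n_pos
w = np.where(y == 1.0, 0.5 / n_pos, 0.5 / n_neg)

# One-hidden-layer sigmoid MLP trained by back-propagation (cross-entropy).
H = 24
W1 = rng.normal(0.0, 1.0, (2 * W, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 1.0, (H, 1));     b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for _ in range(20000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = w * (out - y)                 # cross-entropy output gradient
    d_h = (d_out @ W2.T) * h * (1.0 - h)  # backprop through hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
accuracy = float((pred == y).mean())
```

The "uniqueness" and "continuity" constraints the abstract mentions are deliberately absent here, mirroring the paper's point: whatever regularities govern correct matches are absorbed into the learned weights rather than imposed as explicit rules.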
Pages: 332-342
Page count: 11
Related Papers (50 total)
  • [41] Towards a Mathematical Understanding of the Difficulty in Learning with Feedforward Neural Networks
    Shen, Hao
    [J]. 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 811 - 820
  • [42] Avoiding local minima in feedforward neural networks by simultaneous learning
    Atakulreka, Akarachai
    Sutivong, Daricha
    [J]. AI 2007: ADVANCES IN ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 2007, 4830 : 100 - +
  • [43] Parallel Learning of Feedforward Neural Networks Without Error Backpropagation
    Bilski, Jaroslaw
    Wilamowski, Bogdan M.
    [J]. ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, ICAISC 2016, 2016, 9692 : 57 - 69
  • [44] A Learning Algorithm for Feedforward Neural Networks Based on Fuzzy Controller
    Yan, Chen
    Yan, Liang
    Jun, Zhai
    Zhou, Zhou
    [J]. PROCEEDINGS OF THE 2008 INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE AND DESIGN, VOL 1, 2008, : 348 - 351
  • [45] A Comparative Study of Inductive and Transductive Learning with Feedforward Neural Networks
    Bianchini, Monica
    Belahcen, Anas
    Scarselli, Franco
    [J]. AI*IA 2016: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2016, 10037 : 283 - 293
  • [46] OPTIMAL FILTERING ALGORITHMS FOR FAST LEARNING IN FEEDFORWARD NEURAL NETWORKS
    SHAH, S
    PALMIERI, F
    DATUM, M
    [J]. NEURAL NETWORKS, 1992, 5 (05) : 779 - 787
  • [47] Learning Properties of Feedforward Neural Networks Using Dual Numbers
    Okawa, Yuto
    Nitta, Tohru
    [J]. 2021 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2021, : 187 - 192
  • [48] A new modified hybrid learning algorithm for feedforward neural networks
    Han, F
    Huang, DS
    Cheung, YM
    Huang, GB
    [J]. ADVANCES IN NEURAL NETWORKS - ISNN 2005, PT 1, PROCEEDINGS, 2005, 3496 : 572 - 577
  • [49] Statistical Online Learning in Recurrent and Feedforward Quantum Neural Networks
    Zuev, S. V.
    [J]. DOKLADY MATHEMATICS, 2023, 108 (SUPPL 2) : S317 - S324
  • [50] Automatic scaling using gamma learning for feedforward neural networks
    Engelbrecht, AP
    Cloete, I
    Geldenhuys, J
    Zurada, JM
    [J]. FROM NATURAL TO ARTIFICIAL NEURAL COMPUTATION, 1995, 930 : 374 - 381