Cross-Domain Correspondence for Sketch-Based 3D Model Retrieval Using Convolutional Neural Network and Manifold Ranking

Cited by: 2
Authors
Jiao, Shichao [1 ]
Han, Xie [1 ]
Xiong, Fengguang [1 ]
Sun, Fusheng [1 ]
Zhao, Rong [1 ]
Kuang, Liqun [1 ]
Affiliations
[1] North Univ China, Sch Data Sci & Technol, Taiyuan 030051, Peoples R China
Source
IEEE ACCESS | 2020, Vol. 8
Funding
National Natural Science Foundation of China;
Keywords
Sketch; 3D model retrieval; deep learning; semantic labels; manifold ranking; convolutional neural network; shape retrieval; features;
DOI
10.1109/ACCESS.2020.3006585
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Due to the large difference in representation between sketches and 3D models, sketch-based 3D model retrieval is a challenging problem in graphics and computer vision. Some state-of-the-art approaches extract features from 2D sketches, produce multiple projection views of each 3D model, and then select one view to match against the sketch. However, it is hard to find "the best view", and views of a 3D model from different perspectives may be completely different. Other methods apply learned features to retrieve 3D models from a 2D sketch; yet sketches are abstract images that are usually drawn subjectively, so their features are difficult to learn accurately. To address these problems, we propose a cross-domain correspondence method for sketch-based 3D model retrieval based on manifold ranking. Specifically, we first extract learned features of sketches and 3D models with a two-part CNN structure. Subsequently, we build cross-domain undirected graphs from these features and semantic labels to establish correspondence between sketches and 3D models. Finally, the retrieval results are computed by manifold ranking. Experimental results on the SHREC 13 and SHREC 14 datasets show superior performance on all seven standard metrics compared to state-of-the-art approaches.
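The abstract's final step, manifold ranking over the cross-domain graph, follows a well-known closed form. Below is a minimal Python sketch of that step under stated assumptions: the paper builds its undirected graph from CNN features together with semantic labels, whereas this sketch uses a plain Gaussian kernel over feature distances; the function name manifold_ranking, the median-distance bandwidth, and the value alpha=0.99 are illustrative choices, not taken from the paper.

import numpy as np

def manifold_ranking(features, query_idx, alpha=0.99):
    """Rank all graph nodes against the query via manifold ranking,
    using the closed form f* = (I - alpha*S)^(-1) y."""
    n = len(features)

    # Pairwise Euclidean distances between feature vectors (n x n).
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)

    # Affinity matrix W from a Gaussian kernel; the median-distance
    # bandwidth is an assumption, not the paper's graph construction.
    sigma = np.median(dists) + 1e-12
    W = np.exp(-dists ** 2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # undirected graph, no self-loops

    # Symmetric normalization S = D^(-1/2) W D^(-1/2).
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    # Query indicator vector: 1 at the sketch node, 0 elsewhere.
    y = np.zeros(n)
    y[query_idx] = 1.0

    # Solve (I - alpha*S) f = y directly rather than iterating.
    f = np.linalg.solve(np.eye(n) - alpha * S, y)
    return np.argsort(-f)  # node indices, best match first

With sketch and 3D-model features stacked into one matrix, e.g. order = manifold_ranking(np.vstack([sketch_feat[None, :], model_feats]), query_idx=0), the top-ranked model rows would serve as the retrieval results.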
Pages: 121584-121595
Number of pages: 12
Related Papers
50 records in total
  • [1] Ranking on Cross-Domain Manifold for Sketch-based 3D Model Retrieval
    Furuya, Takahiko
    Ohbuchi, Ryutarou
    [J]. 2013 INTERNATIONAL CONFERENCE ON CYBERWORLDS (CW), 2013, : 274 - 281
  • [2] 3D Sketch-based 3D Model Retrieval with Convolutional Neural Network
    Ye, Yuxiang
    Li, Bo
    Lu, Yijuan
    [J]. 2016 23RD INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2016, : 2936 - 2941
  • [3] Learning Cross-Domain Neural Networks for Sketch-Based 3D Shape Retrieval
    Zhu, Fan
    Xie, Jin
    Fang, Yi
    [J]. THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 3683 - 3689
  • [4] Novel Sketch-Based 3D Model Retrieval via Cross-domain Feature Clustering and Matching
    Gao, Kai
    Zhang, Jian
    Li, Chen
    Wang, Changbo
    He, Gaoqi
    Qin, Hong
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2020, PT I, 2020, 12396 : 299 - 311
  • [5] Sketch-based 3D Shape Retrieval using Convolutional Neural Networks
    Wang, Fang
    Kang, Le
    Li, Yi
    [J]. 2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2015, : 1875 - 1883
  • [6] Local features and manifold ranking coupled method for sketch-based 3D model retrieval
    Tan, Xiaohui
    Fan, Yachun
    Guo, Ruiliang
    [J]. FRONTIERS OF COMPUTER SCIENCE, 2018, 12 (05) : 1000 - 1012
  • [7] Sketch-Based Cross-Domain Image Retrieval Via Heterogeneous Network
    Zhang, Hao
    Zhang, Chuang
    Wu, Ming
    [J]. 2017 IEEE VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 2017,
  • [8] Hybrid cross-domain joint network for sketch-based image retrieval
    Li, Qizhen
    Zhou, Yuan
    Li, Chuo
    Peng, Yinan
    Liang, Xianming
    [J]. Harbin Gongye Daxue Xuebao/Journal of Harbin Institute of Technology, 2022, 54 (05) : 64 - 73
  • [9] Sketch-based Image Retrieval Using Cross-domain Modeling and Deep Fusion Network
    Yu, Deng
    Liu, Yu-Jie
    Xing, Min-Min
    Li, Zong-Min
    Li, Hua
    [J]. Ruan Jian Xue Bao/Journal of Software, 2019, 30 (11) : 3567 - 3577