Deep Texture Exemplar Extraction Based on Trimmed T-CNN

Cited: 0
Authors
Wu, Huisi [1 ]
Yan, Wei [1 ]
Li, Ping [2 ]
Wen, Zhenkun [1 ]
Affiliations
[1] College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
[2] Department of Computing, The Hong Kong Polytechnic University, Kowloon, Hong Kong
Funding
National Natural Science Foundation of China
Keywords
Computer graphics; Deep neural networks; Convolution; Filter banks; Textures
DOI
Not available
Abstract
Texture exemplars have been widely used to synthesize 3D movie scenes and the appearances of virtual objects. Unfortunately, conventional texture synthesis methods usually emphasize generating optimal target textures of arbitrary size or with diverse effects, and pay little attention to automatic texture exemplar extraction. Obtaining texture exemplars is still a labor-intensive task that usually requires careful cropping and post-processing. In this paper, we present an automatic texture exemplar extraction method based on a Trimmed Texture Convolutional Neural Network (Trimmed T-CNN). Specifically, our Trimmed T-CNN applies filter banks to texture exemplar classification and recognition. It is trained on a standard ideal-exemplar dataset containing thousands of desired texture exemplars, which were collected and cropped by our invited artists. To efficiently identify exemplar candidates in an input image, we employ a selective search algorithm to extract potential texture exemplar patches. We then feed all candidates into our Trimmed T-CNN, which recognizes ideal texture exemplars using the learned filter banks. Finally, optimal texture exemplars are selected with a scoring and ranking scheme. Our method is evaluated on various kinds of textures and through user studies. Comparisons with different feature-based methods and different deep CNN architectures (AlexNet, VGG-M, Deep-TEN and FV-CNN) are also conducted to demonstrate its effectiveness. © 1999-2012 IEEE.
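As a rough illustration of the pipeline sketched in the abstract (candidate patch proposal, CNN-based scoring, and ranking), the short sketch below implements a simplified stand-in in PyTorch. The TextureScorer network, the propose_regions tiling helper, and all parameter values are illustrative assumptions rather than the authors' code; in the paper, the scorer corresponds to the Trimmed T-CNN trained on the curated artist-cropped exemplar dataset, and the tiling helper would be replaced by selective search over the input image.

import torch
import torch.nn as nn

class TextureScorer(nn.Module):
    # Small CNN mapping a fixed-size RGB patch to an "exemplar-ness" score in [0, 1].
    # A stand-in for the paper's Trimmed T-CNN, which would be trained on curated exemplars.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)

    def forward(self, x):
        return torch.sigmoid(self.classifier(self.features(x).flatten(1)))

def propose_regions(image, patch=64):
    # Placeholder for selective search: tile the image into square candidate boxes (y, x, size).
    _, h, w = image.shape
    return [(y, x, patch)
            for y in range(0, h - patch + 1, patch)
            for x in range(0, w - patch + 1, patch)]

def extract_exemplars(image, model, top_k=3, patch=64):
    # Score every candidate patch and return the top-k (box, score) pairs.
    boxes = propose_regions(image, patch)
    batch = torch.stack([image[:, y:y + s, x:x + s] for (y, x, s) in boxes])
    with torch.no_grad():
        scores = model(batch).squeeze(1)
    order = torch.argsort(scores, descending=True)[:top_k]
    return [(boxes[i], scores[i].item()) for i in order.tolist()]

if __name__ == "__main__":
    img = torch.rand(3, 256, 256)  # dummy input image; a real run would load a photograph
    print(extract_exemplars(img, TextureScorer().eval()))

With an untrained scorer the ranking is arbitrary; the sketch only shows the shape of the propose-score-rank loop described in the abstract.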
Pages: 4502-4514
Related Papers (50 in total)
  • [1] Deep Texture Exemplar Extraction Based on Trimmed T-CNN
    Wu, Huisi
    Yan, Wei
    Li, Ping
    Wen, Zhenkun
    IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 23 : 4502 - 4514
  • [2] T-CNN time series classification method based on Gram matrix
    Wang J.-L.
    Li S.
    Ji W.-T.
    Jiang T.
    Song B.-Y.
    Zhejiang Daxue Xuebao (Gongxue Ban)/Journal of Zhejiang University (Engineering Science), 2023, 57 (02): 267 - 276
  • [3] A T-CNN time series classification method based on Gram matrix
    Wang, Junlu
    Li, Su
    Ji, Wanting
    Jiang, Tian
    Song, Baoyan
    SCIENTIFIC REPORTS, 2022, 12 (01)
  • [4] Automatic Texture Exemplar Extraction Based on a Novel Textureness Metric
    Wu, Huisi
    Jiang, Junrong
    Li, Ping
    Wen, Zhenkun
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2017, PT II, 2018, 10736 : 798 - 806
  • [5] Automatic texture exemplar extraction based on global and local textureness measures
    Wu, Huisi
    Lyu, Xiaomeng
    Wen, Zhenkun
    Computational Visual Media, 2018, 4 (2) : 173 - 184
  • [6] Tube Convolutional Neural Network (T-CNN) for Action Detection in Videos
    Hou, Rui
    Chen, Chen
    Shah, Mubarak
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 5823 - 5832
  • [7] Exemplar based surface texture
    Haro, A
    Essa, I
    VISION, MODELING, AND VISUALIZATION 2003, 2003: 95+
  • [8] T-CNN: Tubelets With Convolutional Neural Networks for Object Detection From Videos
    Kang, Kai
    Li, Hongsheng
    Yan, Junjie
    Zeng, Xingyu
    Yang, Bin
    Xiao, Tong
    Zhang, Cong
    Wang, Zhe
    Wang, Ruohui
    Wang, Xiaogang
    Ouyang, Wanli
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2018, 28 (10) : 2896 - 2907