Foreground-based Depth Map Generation for 2D-to-3D Conversion

Cited by: 0
Authors
Lee, Ho Sub [1 ]
Cho, Sung In [1 ]
Bae, Gyu Jin [1 ]
Kim, Young Hwan [1 ]
Kim, Hi-Seok [2 ]
Affiliations
[1] Pohang Univ Sci & Technol, Dept Elect Engn, Pohang, South Korea
[2] Cheongju Univ, Elect & Informat Engn, Cheongju, South Korea
Source
2015 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS) | 2015
Keywords
depth map generation; 2D-to-3D conversion; scene classification; background modeling; foreground extraction;
DOI
Not available
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Codes
0808; 0809;
Abstract
This paper proposes a foreground-based approach to generating a depth map for 2D-to-3D conversion. For a given input image, the proposed approach determines whether the image is an object-view (OV) scene or a non-object-view (NOV) scene, depending on the existence of foreground objects that are clearly distinguishable from the background. If the input image is an OV scene, the proposed approach extracts the foreground using block-wise background modeling and performs segmentation using adaptive background region selection and color modeling. It then performs segment-wise depth merging and cross bilateral filtering (CBF) to generate the final depth map. For an NOV scene, on the other hand, the proposed approach uses a conventional color-based depth map generation method [9], which requires only simple operations but provides a 3D depth map of good quality. Human viewers are usually more sensitive to depth map quality, and hence to 3D image quality, for OV scenes than for NOV scenes, so the proposed approach can improve the depth map quality for OV scenes compared with using the conventional methods alone. The performance of the proposed approach was evaluated through subjective evaluation after 2D-to-3D conversion on a 3D display, and it provided the best depth quality and visual comfort among the benchmark methods.
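The abstract outlines a two-branch pipeline: classify each input as OV or NOV from the extracted foreground, then either run the foreground-based path (block-wise background modeling, segment-wise depth merging, cross bilateral filtering) or fall back to a color-based depth map. The following sketch is a minimal Python/OpenCV reconstruction of that flow under stated assumptions: the block size, all thresholds, the simple color cue standing in for the method of [9], and the use of cv2.ximgproc.jointBilateralFilter (from opencv-contrib-python) for the CBF step are illustrative choices, not details taken from the paper.

import cv2
import numpy as np

BLOCK = 16          # block size for background modeling (assumed, not from the paper)
FG_RATIO_TH = 0.05  # foreground ratio above which a scene is treated as OV (assumed)

def blockwise_background_model(gray):
    # Approximate the background as smoothed per-block means and mark
    # blocks that deviate strongly from it as foreground.
    h, w = gray.shape
    bh, bw = h // BLOCK, w // BLOCK
    blocks = gray[:bh * BLOCK, :bw * BLOCK].reshape(bh, BLOCK, bw, BLOCK)
    means = blocks.mean(axis=(1, 3)).astype(np.float32)
    background = cv2.GaussianBlur(means, (7, 7), 0)
    fg_blocks = np.abs(means - background) > 15.0   # deviation threshold (assumed)
    mask = np.zeros((h, w), dtype=bool)
    mask[:bh * BLOCK, :bw * BLOCK] = np.repeat(
        np.repeat(fg_blocks, BLOCK, axis=0), BLOCK, axis=1)
    return mask

def color_based_depth(img):
    # Stand-in for the conventional color-based method [9]: lower image
    # rows are assumed nearer, modulated by a simple warm-color cue.
    h, w = img.shape[:2]
    vertical = np.tile(np.linspace(0.0, 1.0, h, dtype=np.float32)[:, None], (1, w))
    warmth = img[..., 2].astype(np.float32) / 255.0   # red channel of a BGR image
    return 0.7 * vertical + 0.3 * warmth

def foreground_based_depth(img, fg_mask):
    # OV branch: start from the color-based map, merge depth segment-wise
    # so each foreground object gets one (near) depth, then refine object
    # boundaries with a cross (joint) bilateral filter guided by the image.
    depth = color_based_depth(img)
    n_labels, labels = cv2.connectedComponents(fg_mask.astype(np.uint8))
    for lab in range(1, n_labels):                      # segment-wise depth merging
        seg = labels == lab
        depth[seg] = max(float(depth[seg].mean()), 0.8) # pull segment to the front
    depth8 = (depth * 255.0).clip(0, 255).astype(np.uint8)
    refined = cv2.ximgproc.jointBilateralFilter(img, depth8, 9, 25.0, 9.0)
    return refined.astype(np.float32) / 255.0

def generate_depth(img):
    # Scene classification: OV if a clearly distinguishable foreground exists.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    fg_mask = blockwise_background_model(gray)
    if fg_mask.mean() > FG_RATIO_TH:
        return foreground_based_depth(img, fg_mask)     # OV scene
    return color_based_depth(img)                       # NOV scene: fall back to [9]

A call such as generate_depth(cv2.imread("frame.png")) then yields a depth map in [0, 1] suitable for depth-image-based rendering; the branch test mirrors the paper's rationale that viewers are more sensitive to depth quality in OV scenes, so the costlier foreground path is applied only there.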
Pages: 1210-1213
Page count: 4
Related Papers
50 records in total
  • [41] A Block-based 2D-to-3D Conversion System with Bilateral Filter
    Cheng, Chao-Chung
    Li, Chung-Te
    Huang, Po-Sen
    Lin, Tsung-Kai
    Tsai, Yi-Min
    Chen, Liang-Gee
    2009 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS, 2009, : 393 - 394
  • [42] A novel method for 2D-to-3D video conversion based on boundary information
    Tsai, Tsung-Han
    Huang, Tai-Wei
    Wang, Rui-Zhi
    EURASIP JOURNAL ON IMAGE AND VIDEO PROCESSING, 2018,
  • [43] Learning-Based, Automatic 2D-to-3D Image and Video Conversion
    Konrad, Janusz
    Wang, Meng
    Ishwar, Prakash
    Wu, Chen
    Mukherjee, Debargha
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2013, 22 (09) : 3485 - 3496
  • [45] Object-Based 2D-to-3D Video Conversion for Effective Stereoscopic Content Generation in 3D-TV Applications
    Feng, Yue
    Ren, Jinchang
    Jiang, Jianmin
    IEEE TRANSACTIONS ON BROADCASTING, 2011, 57 (02) : 500 - 509
  • [46] 2D-to-3D method via semantic depth transfer
    Yuan, H. (yuanhx@mail.ustc.edu.cn), Institute of Computing Technology (26)
  • [47] SEMI-AUTOMATIC 2D-TO-3D VIDEO CONVERSION BASED ON DEPTH PROPAGATION FROM KEY-FRAMES
    Lin, Guo-Shiang
    Huang, Jian-Fa
    Lie, Wen-Nung
    2013 20TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2013), 2013, : 2202 - 2206
  • [48] IMPROVED 2D-TO-3D VIDEO CONVERSION BY FUSING OPTICAL FLOW ANALYSIS AND SCENE DEPTH LEARNING
    Herrera, Jose L.
    del-Blanco, Carlos R.
    Garcia, Narciso
    2016 3DTV-CONFERENCE: THE TRUE VISION - CAPTURE, TRANSMISSION AND DISPLAY OF 3D VIDEO (3DTV-CON), 2016,
  • [49] A Novel 2D-to-3D Video Conversion Method Using Time-Coherent Depth Maps
    Yin, Shouyi
    Dong, Hao
    Jiang, Guangli
    Liu, Leibo
    Wei, Shaojun
    SENSORS, 2015, 15 (07) : 15246 - 15264
  • [50] 2D-to-3D Conversion by Using Visual Attention Analysis
    Kim, Jiwon
    Baik, Aron
    Jung, Yong Ju
    Park, Dusik
    STEREOSCOPIC DISPLAYS AND APPLICATIONS XXI, 2010, 7524