Multi-Scale Flow-Based Occluding Effect and Content Separation for Cartoon Animations

Cited: 2
Authors
Xu, Cheng [1 ]
Qu, Wei [1 ]
Xu, Xuemiao [1 ,2 ,3 ]
Liu, Xueting [4 ]
Affiliations
[1] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510641, Guangdong, Peoples R China
[2] Minist Educ, State Key Lab Subtrop Bldg Sci, Key Lab Big Data & Intelligent Robot, Guangzhou 510006, Peoples R China
[3] Guangdong Prov Key Lab Computat Intelligence & Cyb, Guangzhou 510006, Peoples R China
[4] Caritas Inst Higher Educ, Hk, Peoples R China
Keywords
Cartoon effect-content separation; cartoon effect removal; optical flow; INTRINSIC IMAGE DECOMPOSITION; REMOVAL; MODEL;
DOI
10.1109/TVCG.2022.3174656
CLC number
TP31 [Computer software];
Subject classification codes
081202 ; 0835 ;
Abstract
Occluding effects are frequently used to depict weather conditions and environments in cartoon animations, such as rain, snow, moving leaves, and moving petals. While these effects greatly enrich the visual appeal of cartoon animations, they may also cause undesired occlusions on the content area, which significantly complicate the analysis and processing of cartoon animations. In this article, we make the first attempt to separate the occluding effects and content for cartoon animations. The major challenge of this problem is that, unlike natural effects, which are realistic and small-sized, the effects in cartoons are usually stylistic and large-sized. Moreover, effects in cartoons are manually drawn, so their motions are more unpredictable than those of realistic effects. To separate occluding effects and content for cartoon animations, we propose to leverage the difference in the motion patterns of the effects and the content, and to capture the locations of the effects with a multi-scale flow-based effect prediction (MFEP) module. A dual-task learning system is designed to extract the effect video and reconstruct the effect-removed content video at the same time. We apply our method to a large number of cartoon videos with different content and effects. Experiments show that our method significantly outperforms existing methods. We further demonstrate how the separated effects and content facilitate the analysis and processing of cartoon videos through different applications, including segmentation, inpainting, and effect migration.
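The abstract's core idea is that effect pixels (rain, snow, petals) move differently from the underlying content, and that this inconsistency can be detected across multiple flow scales. The following is a minimal, hypothetical numpy sketch of that intuition only; the function name `effect_mask_from_flow`, the median-flow approximation of content motion, and the majority-vote fusion are illustrative assumptions, not the paper's actual MFEP module, which is a learned network.

```python
import numpy as np

def effect_mask_from_flow(flows, threshold=2.0):
    """Toy illustration of the multi-scale flow-based idea:
    pixels whose optical flow deviates strongly from the
    dominant (content) motion at most scales are flagged as
    likely occluding-effect pixels.

    flows: list of (H, W, 2) optical-flow fields, one per scale,
           each already upsampled to a common resolution.
    Returns a boolean (H, W) mask of likely effect pixels.
    """
    votes = np.zeros(flows[0].shape[:2])
    for flow in flows:
        # Approximate the dominant content motion by the median flow.
        dominant = np.median(flow.reshape(-1, 2), axis=0)
        # Per-pixel deviation from that dominant motion.
        deviation = np.linalg.norm(flow - dominant, axis=-1)
        votes += (deviation > threshold)
    # A pixel counts as "effect" if it is inconsistent at a
    # majority of the scales.
    return votes > len(flows) / 2
```

In the paper, this hand-crafted voting is replaced by a learned prediction module, and the separation is coupled with a reconstruction branch so that effect extraction and content recovery supervise each other.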
Pages: 4001-4014
Page count: 14