Adaptation of facial and body animation for MPEG-based architectures

Cited: 0
Authors
Di Giacomo, T [1]
Joslin, C [1]
Garchery, S [1]
Magnenat-Thalmann, N [1]
Affiliations
[1] Univ Geneva, MIRALab, CH-1211 Geneva 4, Switzerland
Keywords
DOI
10.1109/CYBER.2003.1253458
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
While level-of-detail (LoD) methods for the representation of 3D models are efficient and established tools to manage the trade-off between rendering speed and quality, LoD for animation has not yet been intensively studied by the community, and virtual human animation in particular has received little attention in the past. Animation, a major step towards immersive and credible virtual environments, involves heavy computations, and as such its complexity must be controllable before it can be embedded into real-time systems. Today, providing such control becomes even more critical and necessary with the emergence of powerful new mobile devices and their increasing use for cyberworlds. With the help of suitable middleware solutions, executables are becoming more and more multi-platform. However, the adaptation of content for various network and terminal capabilities, as well as for different user preferences, is still a key feature that needs to be investigated. It would ensure the adoption of the "Multiple Target Devices Single Content" concept for virtual environments and would, in theory, make such virtual worlds possible under any conditions without the need for multiple versions of the content. It is this issue we focus on, with a particular emphasis on 3D objects and animation. This paper presents methods for adapting a virtual human's representation and animation stream, covering both skeleton-based body animation and deformation-based facial animation. We also discuss practical details of the integration of our methods into MPEG-21 and MPEG-4 architectures.
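The abstract describes adapting a virtual human's skeleton-based body animation and deformation-based facial animation to the capabilities of the receiving terminal. As a rough illustration of that general idea only, the Python sketch below selects an animation level of detail from a hypothetical terminal profile and truncates each animation frame accordingly; the class, thresholds, parameter counts, and frame layout are assumptions made for illustration and are not taken from the paper.

```python
# Illustrative sketch only: the profile fields, thresholds, parameter counts,
# and frame layout are assumptions, not the authors' implementation.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class TerminalProfile:
    """Hypothetical description of a target device, in the spirit of an
    MPEG-21 DIA usage-environment description."""
    cpu_score: float        # normalized 0..1, 1.0 = desktop-class
    bandwidth_kbps: float   # available downstream bandwidth


def select_animation_lod(profile: TerminalProfile) -> Dict[str, int]:
    """Choose how many body joints and facial parameters to keep per frame."""
    if profile.cpu_score > 0.7 and profile.bandwidth_kbps > 512:
        return {"body_joints": 75, "facial_params": 68}  # full sets (counts illustrative)
    if profile.cpu_score > 0.3:
        return {"body_joints": 30, "facial_params": 20}  # reduced skeleton and face
    return {"body_joints": 12, "facial_params": 6}       # coarse LoD for handhelds


def adapt_stream(frames: List[Dict[str, List[float]]],
                 profile: TerminalProfile) -> List[Dict[str, List[float]]]:
    """Keep only the highest-priority parameters of each frame, assuming the
    per-frame lists are already ordered by importance."""
    lod = select_animation_lod(profile)
    return [{"body": frame["body"][:lod["body_joints"]],
             "face": frame["face"][:lod["facial_params"]]}
            for frame in frames]


if __name__ == "__main__":
    phone = TerminalProfile(cpu_score=0.2, bandwidth_kbps=128)
    frames = [{"body": [0.0] * 75, "face": [0.0] * 68} for _ in range(3)]
    print(select_animation_lod(phone))                  # coarse LoD is selected
    print(len(adapt_stream(frames, phone)[0]["body"]))  # 12 joints kept per frame
```

In an actual MPEG-21/MPEG-4 pipeline the profile would come from the terminal's capability description rather than being hard-coded, but the same capability-to-LoD mapping idea applies.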
Pages: 221 - 228
Number of pages: 8
Related Papers
50 records in total
  • [41] MPEG-based performance comparison between network-on-chip and AMBA MPSoC
    Shafik, Rishad A.
    Rosinger, Paul
    Al-Hashimi, Bashir M.
    2008 IEEE WORKSHOP ON DESIGN AND DIAGNOSTICS OF ELECTRONIC CIRCUITS AND SYSTEMS, PROCEEDINGS, 2008, : 98 - 103
  • [42] Adaptive streaming of MPEG-based audio/video content over wireless networks
    Burza, Marek
    Kang, Jeffrey
    van der Stok, Peter
Journal of Multimedia, 2007, 2 (02): 17 - 27
  • [43] Pseudo-modeling of MPEG-based variable-bit-rate video
    Feng, WC
    MULTIMEDIA COMPUTING AND NETWORKING 2000, 2000, 3969 : 52 - 64
  • [44] Compression of MPEG-4 facial animation parameters for transmission of talking heads
    Tao, H
    Chen, HH
    Wu, W
    Huang, TS
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 1999, 9 (02) : 264 - 276
  • [45] Automatic Facial Animation Parameters extraction in MPEG-4 visual communication
    Yang, CG
    Gong, WW
    Yu, L
    VISUAL COMMUNICATIONS AND IMAGE PROCESSING 2002, PTS 1 AND 2, 2002, 4671 : 396 - 405
  • [46] Efficient support for interactive scanning operations in MPEG-based video-on-demand systems
    Krunz, M
    Apostolopoulos, G
    MULTIMEDIA SYSTEMS, 2000, 8 (01) : 20 - 36
  • [47] MPEG-4 face and body animation coding applied to HCI
    Petajan, E
    REAL-TIME VISION FOR HUMAN-COMPUTER INTERACTION, 2005, : 249 - 268
  • [48] Statistical learning based facial animation
    Xu, Shibiao
    Ma, Guanghui
    Meng, Weiliang
    Zhang, Xiaopeng
Journal of Zhejiang University: Science C, 2013, 14 (07): 542 - 550
  • [50] Constraint-based facial animation
    Ruttkay Z.
Constraints, 2001, 6 : 85 - 113