A hand-pose estimation for vision-based human interfaces

Cited by: 73
Authors
Ueda, E [1 ]
Matsumoto, Y
Imai, M
Ogasawara, T
Affiliations
[1] Nara Inst Sci & Technol, Robot Lab, Nara 6300192, Japan
[2] Tottori Univ Environm Studies, Dept Informat Syst, Tottori 6891111, Japan
Keywords
hand-pose estimation; model fitting; silhouette image; vision-based human interface; voxel model;
DOI
10.1109/TIE.2003.814758
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
This paper proposes a novel method for hand-pose estimation that can be used for vision-based human interfaces. The aim of this method is to estimate all joint angles. In this method, the hand regions are extracted from multiple images obtained by a multiviewpoint camera system. By integrating these multiviewpoint silhouette images, a hand pose is reconstructed as a "voxel model." Then, all joint angles are estimated using three-dimensional model fitting between the hand model and the voxel model. The following two experiments were performed: 1) estimation of joint angles from the silhouette images produced by the hand-pose simulator and 2) hand-pose estimation using real hand images. The experimental results indicate the feasibility of the proposed algorithm for vision-based interfaces, although the algorithm requires a faster implementation for real-time processing.
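The voxel-model reconstruction described in the abstract is a shape-from-silhouette (visual hull) procedure: a voxel is kept only if its projection falls inside the hand silhouette in every camera view. The following is a minimal sketch of that carving step, not the authors' implementation; the `carve_voxels` helper, the unit-cube voxel grid, and the 3x4 projection matrices are illustrative assumptions.

```python
import numpy as np

def carve_voxels(silhouettes, projections, grid_size=32):
    """Shape-from-silhouette sketch (hypothetical helper, not the paper's code).

    silhouettes : list of 2D boolean arrays (True = inside hand region)
    projections : list of 3x4 camera projection matrices, one per view
    Returns a boolean occupancy grid of shape (grid_size,)*3.
    """
    # Voxel centers on a regular grid inside the unit cube, in homogeneous coords.
    lin = (np.arange(grid_size) + 0.5) / grid_size
    xs, ys, zs = np.meshgrid(lin, lin, lin, indexing="ij")
    pts = np.stack([xs, ys, zs, np.ones_like(xs)], axis=-1).reshape(-1, 4)

    occupied = np.ones(len(pts), dtype=bool)
    for sil, P in zip(silhouettes, projections):
        uvw = pts @ P.T                               # project voxel centers
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(pts), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]]
        occupied &= hit                               # carve: must be in every silhouette
    return occupied.reshape(grid_size, grid_size, grid_size)
```

In the paper's pipeline, the resulting voxel model is then used as the target of a 3D model-fitting step that adjusts the joint angles of an articulated hand model until it matches the carved volume.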
Pages: 676-684
Number of pages: 9