Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model


The KIPS Transactions:PartB, Vol. 9, No. 5, pp. 563-570, Oct. 2002
DOI: 10.3745/KIPSTB.2002.9.5.563

Abstract

Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face model encoding is described. The facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. Twenty-three facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using a paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to demonstrate the performance of the proposed algorithm. The recovered 3D motion information is transformed into the global motion parameters of the MPEG-4 FAP (Face Animation Parameters) to synchronize a generic face model with a real face.
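
The shape-and-motion recovery step relies on the rank deficiency of the tracked-feature measurement matrix under a paraperspective (affine-family) camera. As a rough illustration of that core factorization idea only, the Python sketch below centers a 2F x P matrix of feature tracks and takes its rank-3 SVD factorization; it omits the metric-upgrade constraints specific to the paraperspective model, and all function and variable names are illustrative, not the paper's implementation.

```python
import numpy as np

def factorize_measurements(W):
    """Illustrative rank-3 factorization of a 2F x P measurement matrix W
    (F frames, P tracked features) into an affine motion matrix M (2F x 3)
    and a shape matrix S (3 x P)."""
    # Register each frame's measurements to the feature centroid; under an
    # affine/paraperspective camera the registered matrix has rank <= 3.
    W_reg = W - W.mean(axis=1, keepdims=True)

    # Rank-3 truncated SVD: W_reg ~= U3 @ diag(s3) @ Vt3.
    U, s, Vt = np.linalg.svd(W_reg, full_matrices=False)
    U3, s3, Vt3 = U[:, :3], s[:3], Vt[:3, :]

    # Split the singular values between motion and shape. A metric upgrade
    # (solving for a 3x3 mixing matrix from the paraperspective camera
    # constraints) would follow here but is omitted in this sketch.
    M = U3 * np.sqrt(s3)
    S = np.sqrt(s3)[:, None] * Vt3
    return M, S

# Toy check with noise-free synthetic data: 23 features (as in the paper),
# 10 frames, random affine cameras.
rng = np.random.default_rng(0)
P, F = 23, 10
S_true = rng.normal(size=(3, P))
W = np.vstack([rng.normal(size=(2, 3)) @ S_true + rng.normal(size=(2, 1))
               for _ in range(F)])
M, S = factorize_measurements(W)
print(np.allclose(M @ S, W - W.mean(axis=1, keepdims=True)))  # True
```

Mapping the recovered motion rows to the MPEG-4 FAP global motion parameters, as the paper describes, is outside the scope of this sketch.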



Cite this article
[IEEE Style]
S. H. Kim, "Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model," The KIPS Transactions:PartB, vol. 9, no. 5, pp. 563-570, 2002. DOI: 10.3745/KIPSTB.2002.9.5.563.

[ACM Style]
Sang Hoon Kim. 2002. Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model. The KIPS Transactions:PartB, 9, 5, (2002), 563-570. DOI: 10.3745/KIPSTB.2002.9.5.563.