Today I read a paper titled “Fully Automatic Expression-Invariant Face Correspondence”.
The abstract is:
We consider the problem of computing accurate point-to-point correspondences among a set of human face scans with varying expressions.
Our fully automatic approach does not require any manually placed markers on the scans.
Instead, the approach learns the locations of a set of landmarks present in a database and uses this knowledge to automatically predict the locations of these landmarks on a newly available scan.
The predicted landmarks are then used to compute point-to-point correspondences between a template model and the newly available scan.
To accurately fit the expression of the template to the expression of the scan, we use a blendshape model as the template.
Our algorithm was tested on a database of human faces of different ethnic groups with strongly varying expressions.
Experimental results show that the obtained point-to-point correspondence is both highly accurate and consistent for most of the tested 3D face models.
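The core fitting step the abstract describes, adapting the template's expression to the scan with a blendshape model, can be posed as a linear least-squares problem: a blendshape face is the neutral shape plus a weighted sum of per-expression offsets, and the weights are solved so the template's landmarks match the predicted landmarks on the scan. The sketch below is my own illustration of that idea under simplifying assumptions (landmark-only fit, unconstrained weights); the function names are mine, not the paper's.

```python
import numpy as np

def fit_blendshape_weights(scan_landmarks, neutral, deltas):
    """Solve for blendshape weights w minimizing
    || neutral + sum_k w_k * deltas[k] - scan_landmarks ||^2.

    scan_landmarks: (L, 3) predicted landmark positions on the new scan
    neutral:        (L, 3) template landmarks in the neutral expression
    deltas:         (K, L, 3) per-blendshape landmark offsets from neutral
    """
    K = deltas.shape[0]
    # Stack each blendshape's offsets into a column: A has shape (3L, K).
    A = deltas.reshape(K, -1).T
    # Residual the blendshapes must explain, flattened to (3L,).
    b = (scan_landmarks - neutral).ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def apply_weights(neutral, deltas, w):
    """Reconstruct landmark positions from blendshape weights."""
    return neutral + np.tensordot(w, deltas, axes=1)
```

In practice a system like the paper's would fit the full template mesh, not just landmarks, and would likely constrain the weights (e.g. to [0, 1]) to keep expressions plausible; the unconstrained solve above is only the minimal version of the idea.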