Method: Three-dimensional facial images of a sample of 80 individuals were recorded at rest and during speech. Subjects were asked to pronounce four bilabial words in a relaxed manner and were scanned using the 3dMDFace™ Dynamic System at 48 frames per second. Six lip landmarks were identified at rest, and the landmark displacement vectors at the frame of maximal lip movement were recorded for all six visemes. Principal component analysis was applied to isolate the relationship between lip traits and their registered coordinates. Eight specific resting morphological lip traits were identified for each individual. The principal component (PC) scores for each viseme were labelled by lip morphological trait and were graphically visualized as ellipses to discriminate any differences in lip movement.
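The analysis described above can be sketched as follows. This is an illustrative reconstruction only, not the study's actual code: the displacement data are synthetic, and the landmark count (6 landmarks × 3 coordinates) is taken from the description in the Method.

```python
import numpy as np

# Illustrative sketch of PCA on lip-landmark displacement vectors.
# Data are synthetic; the study recorded 6 landmarks in 3D per subject,
# giving an 18-dimensional displacement vector at maximal lip movement.
rng = np.random.default_rng(0)
n_subjects, n_features = 80, 18  # 6 landmarks x 3 coordinates

# Synthetic displacement matrix (subjects x features)
X = rng.normal(size=(n_subjects, n_features))

# Centre the data, then use the SVD to obtain the principal components
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Proportion of total variance explained by each PC (sorted descending)
explained = S**2 / np.sum(S**2)

# PC scores: projection of each subject's displacement onto the components;
# scores for each viseme could then be plotted as ellipses by lip trait
scores = Xc @ Vt.T

print(np.round(explained[:5], 3))
```

With real displacement data, the `explained` vector corresponds to the variance proportions reported in the Results, and `scores` are the PC scores visualized as ellipses.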
Results: The first five PCs accounted for up to 95% of the total variance in lip shape during movement, with PC1 accounting for at least 38%. There was no clear discrimination between PC1, PC2 and PC3 for any of the resting morphological lip traits.
Conclusion: Lip shapes during movement are relatively uniform across individuals, and resting morphological lip shape does not influence movement of the lips.
OBJECTIVES: In this manuscript, the Robotic Facial Recognition System using the Compound Classifier (RERS-CC) is introduced to improve the recognition rate of human faces. The process is divided into classification, detection, and recognition phases that employ principal component analysis (PCA)-based learning. In this learning process, image-processing errors identified from the extracted features are used for error classification and for improving accuracy.
RESULTS: The performance of the proposed RERS-CC is validated experimentally on an input image dataset in MATLAB. The results show that the proposed method improves detection and recognition accuracy with fewer errors and shorter processing time.
CONCLUSION: The input image is processed with knowledge of the features and errors observed at different orientations and time instances. With the help of the matching dataset and similarity-index verification, the proposed method precisely identifies human faces with increased true positives and an improved recognition rate.
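The RERS-CC implementation itself is not public, so the following is only a minimal sketch of the generic PCA-based matching step such a pipeline builds on: images are projected into a learned eigen-space and a probe is identified by a similarity index (here, nearest Euclidean distance) against the gallery. The gallery size, image size, and component count are all assumptions, and the data are synthetic.

```python
import numpy as np

# Eigenface-style sketch (NOT the RERS-CC method): learn a PCA basis
# from a gallery of flattened face images, then match a probe by its
# similarity to gallery projections in the low-dimensional space.
rng = np.random.default_rng(1)
n_gallery, n_pixels = 20, 64 * 64  # hypothetical gallery of 64x64 images

gallery = rng.normal(size=(n_gallery, n_pixels))
mean_face = gallery.mean(axis=0)

# Principal components of the centred gallery via SVD
_, _, Vt = np.linalg.svd(gallery - mean_face, full_matrices=False)
components = Vt[:10]  # keep the first 10 eigenvectors

def project(img):
    """Project a flattened image into the learned eigen-space."""
    return components @ (img - mean_face)

gallery_codes = np.array([project(g) for g in gallery])

def identify(probe):
    """Return the gallery index with the smallest Euclidean distance
    in eigen-space -- a simple similarity-index verification."""
    distances = np.linalg.norm(gallery_codes - project(probe), axis=1)
    return int(np.argmin(distances))

# A lightly perturbed copy of gallery image 3 should match index 3
probe = gallery[3] + 0.01 * rng.normal(size=n_pixels)
print(identify(probe))
```

Nearest-neighbour matching in eigen-space is the classical baseline; a compound classifier as described in the abstract would replace this single distance rule with multiple combined decision stages.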