Multibiometric human recognition using 3D ear and face features
We present automatic extraction of local 3D features (L3DF) from ear and face biometrics and their combination at the feature and score levels for robust identification. To the best of our knowledge, this paper is the first to present feature-level fusion of 3D features extracted from ear and frontal face data. Scores from L3DF-based matching are also fused with iterative closest point (ICP) algorithm based matching using a weighted sum rule. We achieve identification and verification (at 0.001 FAR) rates of 99.0% and 99.4%, respectively, with neutral facial expressions, and 96.8% and 97.1% with non-neutral facial expressions, on the largest public databases of 3D ear and face.
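The weighted sum rule mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the min-max normalization step and the weight value 0.6 are assumptions for the example, since the abstract does not specify how scores are normalized or weighted.

```python
# Hedged sketch of weighted-sum score-level fusion of two matchers
# (L3DF-based and ICP-based), as the abstract describes. Normalization
# scheme and weight are illustrative assumptions, not the paper's values.

def min_max_normalize(scores):
    """Map raw matching scores to [0, 1] so the two matchers are comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def weighted_sum_fusion(l3df_scores, icp_scores, w=0.6):
    """Fuse per-candidate scores with a weighted sum.

    w weights the L3DF scores; (1 - w) weights the ICP scores.
    The default 0.6 is a placeholder, not a tuned value.
    """
    a = min_max_normalize(l3df_scores)
    b = min_max_normalize(icp_scores)
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

# Identification: pick the gallery identity with the best fused score.
l3df = [0.9, 0.4, 0.7]   # toy per-candidate similarity scores
icp = [0.8, 0.5, 0.6]
fused = weighted_sum_fusion(l3df, icp)
best = max(range(len(fused)), key=fused.__getitem__)
```

For verification at a fixed FAR, the fused score would instead be compared against a threshold chosen on a development set.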
Showing items related by title, author, creator and subject.
Dincer, Tuna (2000). Lactose is the major carbohydrate in milk. The presence of lactose in whey constitutes a significant pollution problem for dairy factories. At the same time, there is an increasing market for high quality crystalline ...

Classifying pretended and evoked facial expressions of positive and negative affective states using infrared measurement of skin temperature. Khan, Masood Mehmood; Ward, R. D.; Ingleby, M. (2009). Earlier researchers were able to extract the transient facial thermal features from thermal infrared images (TIRIs) to make binary distinctions between the expressions of affective states. However, effective human-computer ...

Savage, R.; Lipp, Ottmar; Craig, B.; Becker, S.; Horstmann, G. (2013). Previous research has provided inconsistent results regarding visual search for emotional faces, yielding evidence for either anger superiority (i.e., more efficient search for angry faces) or happiness superiority effects ...