Invited Speaker---Assoc. Prof. Nick Pears


Department of Computer Science, University of York, UK


Biography: Nick Pears was awarded both a BSc in Engineering Science and a PhD in Robotics (1990) by Durham University, UK. He then worked in the Robotics Research Group, University of Oxford and the Speech, Vision and Robotics Research Group (now Machine Intelligence Lab), University of Cambridge, where he was a fellow of Girton College. In 1998 he joined the Computer Science Department, University of York, UK, where he works as a Senior Lecturer (Associate Professor) in Computer Vision and Machine Learning, with emphasis on applications in 3D imaging, visual surveillance and visual human-computer interaction. Recently he co-edited a graduate text on 3D Imaging, Analysis and Applications and was awarded a Senior Research Fellowship by the Royal Academy of Engineering and Leverhulme Trust for 3D craniofacial modelling. Currently his work on 3D morphable models is supported by a Google Faculty Award.

Speech Title: Automatic Construction of 3D Morphable Models of Shape from Large Scale Datasets
Abstract: Morphable models of 3D surface shape have many applications in medical image analysis, biometrics and creative media. Traditional model-building pipelines have used manual landmarking to generate surface correspondences and to initialise surface alignment procedures. However, this is extremely time-consuming and laborious for large-scale datasets. Here we present a fully automatic approach and apply it to a large dataset of 3D images of the human head, thus generating the first 3D morphable model of the full craniofacial region that models both shape and texture variation. Our approach employs automatic 2D landmarking of the face, which is projected to 3D using the known 2D-to-3D registration generated by the image capture system. This facilitates normalisation of head pose to a canonical position, which matches that of our template model. We then employ a hierarchical parts-based template morphing procedure, based on Coherent Point Drift. By morphing the same template to every 3D image in the dataset, we achieve full surface vertex correspondence across the whole dataset. Generalised Procrustes Analysis is employed to place each scale-normalised template into a common alignment, and Principal Component Analysis is then used to generate our statistical model. We demonstrate the ability of the model to represent a wide range of faces, and we present a case study of its use in the analysis of the pre- and post-operative cranial shape of a set of craniosynostosis patients.
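
The final two stages of the pipeline, alignment and statistical modelling, can be illustrated with a short sketch. The following is a minimal NumPy sketch, not the authors' implementation: it assumes the morphing stage has already placed all meshes in vertex correspondence, and the array shapes and function names are illustrative assumptions.

    import numpy as np

    def procrustes_align(meshes, n_iters=10):
        """Generalised Procrustes Analysis over corresponded meshes.

        meshes: illustrative array of shape (n_subjects, n_vertices, 3),
        all in vertex correspondence (the same template morphed onto
        every scan). Returns scale-normalised, rigidly aligned copies.
        """
        # Centre each mesh and normalise its scale (centroid size).
        aligned = meshes - meshes.mean(axis=1, keepdims=True)
        aligned = aligned / np.linalg.norm(aligned, axis=(1, 2), keepdims=True)

        mean = aligned[0]
        for _ in range(n_iters):
            # Rotate every mesh onto the current mean shape
            # (orthogonal Procrustes via SVD).
            for i, m in enumerate(aligned):
                u, _, vt = np.linalg.svd(m.T @ mean)
                r = u @ vt
                if np.linalg.det(r) < 0:   # avoid reflections
                    u[:, -1] *= -1
                    r = u @ vt
                aligned[i] = m @ r
            mean = aligned.mean(axis=0)
            mean /= np.linalg.norm(mean)
        return aligned

    def build_shape_model(aligned, n_components=50):
        """PCA on the aligned shapes: mean shape plus principal modes."""
        n = aligned.shape[0]
        x = aligned.reshape(n, -1)          # (n_subjects, 3 * n_vertices)
        mean_shape = x.mean(axis=0)
        # Economy-size SVD of the centred data gives the principal
        # components without forming the full covariance matrix.
        _, s, vt = np.linalg.svd(x - mean_shape, full_matrices=False)
        return mean_shape, vt[:n_components], (s[:n_components] ** 2) / (n - 1)

A new head shape is then represented as the mean shape plus a weighted sum of the principal components, which is what allows such a model to span a wide range of faces with a small number of parameters.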