PhotoID matching

The other key part of digital identity verification is matching the applicant's photo on an ID card, or a selfie image, with the selfie video. The Dubipa PhotoID matching (face matching) API/SDK evaluates whether two face images belong to the same person. Face verification performs a one-to-one (1:1) match between a face image captured at the time of verification and an image taken from a trusted credential such as a driver's license or passport.
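A 1:1 face match of this kind is commonly implemented by extracting an embedding vector for each face and comparing the two vectors. The sketch below illustrates that general idea only; the embeddings, the cosine-similarity measure, and the threshold value are illustrative assumptions, not details of the Dubipa implementation:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb_credential, emb_selfie, threshold=0.6):
    # The threshold is a placeholder; a real deployment tunes it on a
    # labelled verification set to hit a target false-accept rate.
    return cosine_similarity(emb_credential, emb_selfie) >= threshold

# Toy vectors standing in for the network's embedding output.
id_vec = [0.1, 0.9, 0.2]
selfie_vec = [0.12, 0.88, 0.25]
print(same_person(id_vec, selfie_vec))  # similar vectors are accepted
```

In practice the embeddings would come from the deep network described below, and the threshold trades off false accepts against false rejects.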

In our solution

We have built a deep convolutional neural network that extracts high-level facial features for each person. It was trained on a large-scale face image database drawn from many sources, e.g. web crowdsourcing and our in-house dataset, covering a wide variety of ethnicities. The model's inference time is 115 ms on an Intel Core i7-6700K processor. In our solution, the client SDK saves a number of images selected by our own algorithm; this makes the pairing stronger. In addition, we record the checksum of each video to detect duplicate submissions. Our method has shown promising results across wide variations in appearance, e.g. pose, age gaps, skin tone, glasses, makeup, and beards.

The gaze pattern is used to check that the user's eyes are open (i.e. that the user is not asleep). Head pose verification limits unwanted pose variation in the data, which makes our dynamic liveness model easier to train. Detecting gaze and pose adds no extra gestures for the user; on the contrary, it secures the system. Throughout the procedure, the user must fit their face inside an oval that we draw at a random position on the screen. Displaying the oval at a random position has two advantages: first, it prevents injection attacks; second, the resulting movement produces strong signals for detecting spoofing attacks.

In addition, we collect 30 client-side images. On the server side, our dynamic face anti-spoofing model, a convolutional network trained on our data, takes 30 frames from the video, estimates a depth map, and detects spoofing attacks based on that depth information. The main challenge was providing data to train the models. Our team collected a large dataset from our first attendance app, and we also gathered a large set of video data from social media, which we cleaned and used to create spoof data. SDKs are available for iOS, Android, Linux, and Windows.
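Duplicate detection via video checksums can be sketched as follows. This is a minimal illustration, not the production code: the choice of SHA-256 and the streaming read are our assumptions.

```python
import hashlib

def video_checksum(path, chunk_size=1 << 20):
    # Stream the file in 1 MiB chunks so large videos need not fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def is_duplicate(path, seen_checksums):
    # seen_checksums is a set of hex digests of previously submitted videos.
    return video_checksum(path) in seen_checksums
```

A server would store each accepted video's digest and reject any submission whose digest has already been seen.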
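Placing the face oval at a random on-screen position, as described above, might be sketched like this; the size fractions and aspect ratio are purely illustrative assumptions.

```python
import random

def random_oval(screen_w, screen_h, min_frac=0.35, max_frac=0.55):
    # Width is a random fraction of the screen width (fractions are
    # placeholders); the position is chosen so the oval stays fully
    # on screen, which keeps the face capture usable.
    w = int(screen_w * random.uniform(min_frac, max_frac))
    h = min(int(w * 1.3), screen_h)  # faces are taller than wide
    x = random.randint(0, screen_w - w)
    y = random.randint(0, screen_h - h)
    return x, y, w, h
```

Because the position changes on every session, a pre-recorded or injected stream cannot anticipate where the face must move, which is what makes the randomness useful against injection and replay attacks.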
If you want to use our face liveness detection software development kit, please send us a request.