Facial recognition algorithms must now be protected against master face attacks – researchers

Biometrics researchers say master face attacks pose “a serious security threat” to underprotected facial recognition algorithms.

Four IEEE scientists studied a way to create so-called master faces for presentation attacks, using a latent variable evolution algorithm to learn, among other things, how best to generate strong master faces. Each generated or transformed image matches the enrolled templates of several different users in a facial recognition system, so a single master face can be used in presentation attacks against many accounts.
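The idea behind latent variable evolution can be illustrated with a toy sketch. The snippet below is an assumption-laden stand-in, not the paper's method: the real work evolves the latent vector of a face generator (such as a GAN) against a trained face matcher, whereas here the "generator" is a fixed random linear map, the "gallery" of enrolled templates is random unit vectors, and a simple (1+λ) evolution strategy replaces the paper's optimizer. The goal it shows is the same: evolve one latent vector so the resulting embedding matches as many enrolled templates as possible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions for illustration, not the paper's models):
# - G: a fixed random linear "generator" mapping a latent vector to an embedding
# - gallery: enrolled face templates, here random unit vectors
LATENT_DIM, EMBED_DIM, GALLERY_SIZE = 32, 64, 200
G = rng.normal(size=(EMBED_DIM, LATENT_DIM))
gallery = rng.normal(size=(GALLERY_SIZE, EMBED_DIM))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
THRESHOLD = 0.25  # illustrative cosine-similarity accept threshold

def coverage(z):
    """Fraction of enrolled templates the generated embedding would match."""
    emb = G @ z
    emb /= np.linalg.norm(emb)
    return float(np.mean(gallery @ emb >= THRESHOLD))

# Simple (1+lambda) evolution strategy over the latent vector: keep the
# parent, sample mutated children, and adopt the best child if it matches
# at least as many templates as the current best.
z = rng.normal(size=LATENT_DIM)
init_cov = coverage(z)
best = init_cov
for _ in range(300):
    children = z + 0.3 * rng.normal(size=(16, LATENT_DIM))
    scores = [coverage(c) for c in children]
    i = int(np.argmax(scores))
    if scores[i] >= best:
        z, best = children[i], scores[i]

print(f"initial coverage: {init_cov:.2%}, evolved master-face coverage: {best:.2%}")
```

Because the search keeps only non-worsening candidates, coverage is monotone non-decreasing; against a real matcher the same loop would instead query the matcher's similarity score for each candidate face image.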

In their paper, the scientists urge developers and others to be aware of master faces and the threat they pose to facial recognition systems. One of the main targets would be phones protected by facial recognition locks.

They write that the attacks could be mitigated using algorithms with “a well-designed objective function trained on a large database balanced with a fake image detector”.

According to the paper, the objective functions used to train facial recognition models need to be improved. Simply enlarging the training database made the algorithms more robust, but not invulnerable.

Article topics

algorithms | biometric authentication | biometrics | biometric search | facial recognition | Idiap | IEEE | presentation attack detection

Sharon D. Cole