The scientists at Facebook, in the interests of pure research, have been working to improve the ability of software to recognize the same person's face in two different photos. They now report that their DeepFace software has reached 97.25 percent accuracy.

The software rotates facial images to correct for the different angles at which they were taken, to produce a consistent numerical map of features that can be compared from one photo to another. As machines continue to study the celebrity-photograph dataset known as Labeled Faces in the Wild, the scientists write, their performance "marches steadily toward the human performance of over 97.55."
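The pipeline the paper describes — align the face, reduce it to a numerical feature vector, then compare vectors across photos — can be sketched in miniature. This is an illustration only: the toy "embedding" below just normalizes pixel values, and the similarity threshold is an assumption, nothing like the deep network Facebook's researchers actually trained.

```python
import math

def embed(face_pixels):
    # Toy stand-in for a learned feature map: in DeepFace, a deep neural
    # network turns an aligned face image into a feature vector. Here we
    # just scale the raw pixel values to unit length.
    norm = math.sqrt(sum(p * p for p in face_pixels)) or 1.0
    return [p / norm for p in face_pixels]

def verify(face_a, face_b, threshold=0.9):
    # Facial verification: decide whether two images show the SAME face
    # by comparing feature vectors (cosine similarity here). The 0.9
    # threshold is an illustrative assumption, not a published value.
    a, b = embed(face_a), embed(face_b)
    similarity = sum(x * y for x, y in zip(a, b))
    return similarity >= threshold

# Two photos of the "same" face differ slightly; a third is different.
photo1 = [200, 180, 90, 40]
photo2 = [198, 182, 88, 43]   # same face, slightly different lighting
photo3 = [40, 90, 180, 200]   # a different face

print(verify(photo1, photo2))  # → True
print(verify(photo1, photo3))  # → False
```

Note that nothing in this comparison produces a name; it only answers "same face or not," which is the distinction the researchers draw.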

Steadily, the machines of Facebook are marching toward mastery of the relationship between faces. The MIT Technology Review, reporting this news, makes sure to remind humans that this is strictly laboratory knowledge, at present. DeepFace:

performs what researchers call facial verification (it recognizes that two images show the same face), not facial recognition (putting a name to a face).

Obviously these are very different tasks. The fact that a face in one photo is the same as a face in another photo does nothing, in itself, to identify the person in either photograph. Without a name attached to one of the photographs, the whole exercise would be a dead end.
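To make the distinction concrete: recognition is verification run against a database of faces that already have names attached. The sketch below is hypothetical — the toy `embed` function, the threshold, and the names are all illustrative assumptions, not Facebook's system.

```python
import math

def embed(face_pixels):
    # Toy stand-in for a learned feature map (see the verification sketch):
    # scale the raw pixel values to unit length.
    norm = math.sqrt(sum(p * p for p in face_pixels)) or 1.0
    return [p / norm for p in face_pixels]

def recognize(face, labeled_database, threshold=0.9):
    # Facial recognition: compare the unknown face against every
    # name-labeled face on file and return the best match's name,
    # or None if nothing clears the (assumed) similarity threshold.
    best_name, best_similarity = None, threshold
    for name, known_face in labeled_database.items():
        a, b = embed(face), embed(known_face)
        similarity = sum(x * y for x, y in zip(a, b))
        if similarity >= best_similarity:
            best_name, best_similarity = name, similarity
    return best_name

# Hypothetical database of photographs linked with names.
database = {
    "Alice": [200, 180, 90, 40],
    "Bob": [40, 90, 180, 200],
}

print(recognize([198, 182, 88, 43], database))  # → Alice
```

The point is structural: without the `labeled_database` — photographs already linked to names — the function has nothing to return.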

Before this technology could ever begin to move from the realm of theory to any real-world application, Facebook would need to possess a large database of photographs linked with names.

[Image via Facebook]