
Singularity as a service

Facebook's DeepFace is a sign of how we are developing better algorithms to bring singularity within reach.

By Jon Tullett, Editor: News analysis
Johannesburg, 20 Mar 2014

Researchers at Facebook recently announced a breakthrough in facial recognition, achieving accuracy much higher than previously possible. Engineers Taigman, Yang, Ranzato and Wolf published a paper disclosing a system dubbed DeepFace, which has achieved a 97.25% accuracy rate against sample images of faces.

This is a stunning achievement: more than a 25% reduction in error rate over the previous best solutions. More to the point, it is within touching distance of the average human's own ability to recognise faces: 97.53%.
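
The arithmetic behind that claim is worth spelling out: near-perfect accuracies are best compared by error rate, where small gains loom large. A quick illustration in Python, assuming a prior best of roughly 96.3% accuracy (a figure inferred from the reported improvement, not quoted from the paper):

    # Compare error rates, not accuracies: 97.25% vs ~96.3% accuracy
    # amounts to cutting the error rate by roughly a quarter.
    prev_accuracy = 0.963        # assumed previous best (illustrative)
    deepface_accuracy = 0.9725   # reported by the DeepFace team

    prev_error = 1 - prev_accuracy
    deepface_error = 1 - deepface_accuracy
    reduction = (prev_error - deepface_error) / prev_error
    print(f"relative error reduction: {reduction:.0%}")  # ~26%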

There are two major contributors to the success. The first is a clever new algorithm, which builds a 3D model of the human face from the images provided, then uses that model to identify a sample. The second is data: when DeepFace moves beyond research and into deployment, it will be Facebook's enormous dataset of tagged images that delivers usable results.
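
In outline, such a pipeline has three stages: warp the face to a canonical frontal pose using the 3D model, turn the aligned face into a compact feature vector with a deep network trained on that tagged dataset, and compare the vector against those of known faces. A minimal sketch of the idea, with the alignment and embedding steps passed in as stand-ins (an illustration of the approach, not Facebook's code):

    import numpy as np

    def identify(photo, gallery, align, embed, threshold=0.8):
        """Identify a face, DeepFace-style: align, embed, compare.

        align:   callable warping a face image to a frontal pose (the 3D step)
        embed:   callable mapping an aligned face to a feature vector (the deep net)
        gallery: dict of person name -> stored feature vector
        """
        query = embed(align(photo))

        def cosine(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        # The best match wins, but only if it clears a confidence threshold.
        name, score = max(((n, cosine(query, v)) for n, v in gallery.items()),
                          key=lambda pair: pair[1])
        return name if score >= threshold else None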

Cognitive computing goes big data

Facial recognition has advanced tremendously in the last few years, but it is big data which has really taken it to new heights. About a decade ago, a leading facial recognition product, in a controlled demo environment, identified me with a high degree of confidence as an elderly Asian lady, much to the amusement of my young, male, Caucasian self.

It's a big step from there to Facebook's 97.25% accuracy, and with much more difficult sample data too - DeepFace can handle images of faces at angles by applying its 3D transformations.

Although we've shrunk computers from the size of a building to something you can embed in a watch, data has grown even faster than hardware has shrunk. "Big data" describes a problem: having data which is either too big to handle, or which cannot be processed fast enough to deliver results. Cognitive computing has typically been seen as a speed problem - having enough CPU horsepower to mimic intelligence. Turning it into a big data problem achieves a profound change: we have tools for big data, and they are getting better all the time.

Elementary, my dear Watson

IBM has been pushing the boundaries of human-vs-computer competition for years. In the late 90s, its Deep Blue supercomputer achieved the first competition-level victory against chess grandmaster Garry Kasparov. That victory was largely a horsepower achievement - modern chess software can achieve similar levels of competence on a decent personal computer.

In 2011, IBM pulled off a much more difficult feat when its Watson computer system, combining plenty of compute power with a prodigious amount of data, won the general knowledge quiz show Jeopardy!

If Deep Blue was mainly a research project, Watson has quickly turned into commercial reality, thanks to the cloud. Earlier this year, IBM announced it was developing a new business unit to bring the Watson system to market as a hosted service. The system will be used in fields such as medicine.

'Dear Aunt, let's set so double the killer...'

Voice recognition is another field where we have seen tremendous progress, and again the cloud is a major factor. Voice dictation software has been notoriously difficult and error-prone for decades, and although it has improved, it has failed to win mainstream acceptance. Futurists have always assumed it would happen, though: Bill Gates predicted voice interfaces in his book "The Road Ahead" in 1995, and a decade later Microsoft's public demo of speech recognition in Windows Vista went hilariously, terribly wrong. Although the software was not really as bad as that demo suggested, it was still very limited.


Jump forward a few years, and we have Apple's Siri, bringing usable speech recognition to everyday consumers. And it does so via the cloud: the handset uploads speech samples, a data centre (which is definitely not small enough to fit in your pocket) processes them, and the results are sent back. Other vendors have followed suit, and some have achieved on-handset voice processing on par with PC software. But it is eminently clear that the real smarts are in the cloud.
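
The round trip is easy to picture: record a short clip, post it to a recognition service, and display whatever text comes back, with all the heavy modelling staying server-side. A bare-bones sketch of that exchange, with a made-up endpoint and response shape standing in for any real vendor's API:

    import requests

    def transcribe(audio_path):
        # Ship the recorded audio to the cloud; the handset does no
        # recognition work itself.
        with open(audio_path, "rb") as f:
            response = requests.post(
                "https://speech.example.com/v1/recognise",  # hypothetical endpoint
                files={"audio": f},
            )
        response.raise_for_status()
        # Assumed response shape: {"transcript": "..."}
        return response.json()["transcript"]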

Singularity will not be televised

The notion of a "singularity" - a point where computer intelligence surpasses human intelligence - was anticipated by John von Neumann and popularised by author and technologist Ray Kurzweil, whose background lies in human-computer interfaces.

Kurzweil has more recently been exploring ways to mine big data to improve language comprehension for artificial intelligence. It should come as no surprise that he accepted a directorship at Google to further his research.

Futurists like Kurzweil and Vernor Vinge have targeted the 2040s as their candidate timeframe for the singularity - the moment that cognitive computing achieves better-than-human performance.

Facebook's DeepFace is a sign of how we are developing better algorithms to bring that moment within reach, but also of how the underlying data lends itself to future developments.

The big players will be those with both the analytical research and the massive data to hand. It is perhaps indicative of the times that the first AI will probably be born within a social network.
