
Defending the human

The Turing Test says less about computers' intelligence than it does about what it means to be human.

By Lezette Engelbrecht, ITWeb online features editor
Johannesburg, 08 Jun 2011

How would you prove that you are human? Stripped of everything but your ability to communicate, how would you convince a group of fellow humans - highly trained ones - that you are more Homo sapiens than your AI program competitor?

The question calls up powerful assumptions about what it means to be a human being, and lies at the heart of The Most Human Human, a new book by US poet and author Brian Christian that follows his experience of doing just that: figuring out how to be the most human human.

Christian competed in the 2009 running of the Turing Test - a challenge conceived by mathematician Alan Turing in 1950, in which a computer program tries to fool judges into thinking it's human. To prove his authenticity, Christian joined three fellow humans and four computer programs, each taking part in a five-minute IM chat with the judges. He also competed for one of the contest's more intriguing prizes - the Loebner award for the most human human.

Part of what drove Christian to become a confederate (one of the “defendants” trying to prevent the machines from winning) was Turing's prediction, made in 1950, that computers would be passing the test - fooling the judges at least 30% of the time - by the year 2000.

In 2008, one vote was all that separated the computers from achieving this very feat.
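
For concreteness, here is what that pass criterion amounts to - a minimal sketch of the vote arithmetic, where the 12-judge panel size is used purely to illustrate why a single vote can make the difference, not as a claim about any particular year's contest:

```python
# Minimal sketch of Turing's pass criterion: a program "passes" if it
# fools at least 30% of the judges. The vote counts below are purely
# illustrative.

PASS_THRESHOLD = 0.30

def passes(judges_fooled: int, total_judges: int) -> bool:
    """True if the program fooled at least 30% of the judges."""
    return judges_fooled / total_judges >= PASS_THRESHOLD

# With a 12-judge panel, 3 votes is 25% (one vote short of passing),
# while 4 votes is 33% (a pass).
print(passes(3, 12))  # False
print(passes(4, 12))  # True
```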

Christian went about researching how to save humanity some face and become a more believable version of what he already was. In this instance, the advice given to him by one of the organisers - “Just be yourself; you are, after all, human” - proved inadequate preparation.

This is where the test's legitimacy comes into question. It can never be a test of individuality (what reasonably could be?), only of how successfully one mimics certain characteristics the judges consider “human”. And therein lies the rub: what it means to be human is a subjective perception. Researchers can agree on a set of indicators, such as the ability to reason or to communicate verbally, but these are not necessarily more accurate measures than, say, the capacity for compassion, or cruelty.

One of the previous Loebner award contestants, Wired columnist Charles Platt, said he outsmarted the machines by being “moody, irritable, and obnoxious” - which certainly reflects the rather human approach of “when you can't beat 'em, get nasty”. Yet the thought that our only defence, when wit or knowledge fails, is our distinguishing penchant for ill temper is a rather depressing one.

Where else to look, though? The AI programs have already mastered some of our more endearing traits, such as a sense of humour. The 2008 transcripts reveal the computers engaging in happy emoticon banter with the judges, while the human competitors are placated with apologies for talking about something as banal as the weather.

And even if we could engineer bots to possess just enough “human” qualities to serve a functional role, this conjures up some worrying scenarios. If AI programs are used for administrative purposes in future, for example, could a person be denied access to a certain procedure for failing a test of “humanness”? Again, what “human” criteria would such a test use?

Imagine an evolution of those voice-prompt systems in which, if your tone or intonation or syllable emphasis doesn't match the sounds the system has been programmed to recognise, you're forced to repeat the process until your mimicry of the expected voice is satisfactory. Failing that, you're connected to an operator, because you're too poor an imitation of a human. So, in essence, we could end up trying to fool machines into thinking we're human, not vice versa.
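
To make the thought experiment concrete, the gatekeeping logic might look something like the sketch below. Everything in it is invented for illustration - there is no real system behind it, and the match_score stand-in and its threshold are hypothetical:

```python
# Hypothetical sketch of the imagined voice-prompt gatekeeper described
# above. Nothing here models a real system; match_score and the
# threshold are invented for illustration.

MAX_ATTEMPTS = 3
MATCH_THRESHOLD = 0.8

def match_score(utterance: str) -> float:
    """Stand-in for a recogniser scoring how closely the caller's tone,
    intonation and syllable emphasis fit the programmed template."""
    return 0.0  # placeholder; a real recogniser would score the audio

def route_caller(attempts: list[str]) -> str:
    """Let the caller through if any attempt sounds 'human enough';
    otherwise hand them over to an operator."""
    for utterance in attempts[:MAX_ATTEMPTS]:
        if match_score(utterance) >= MATCH_THRESHOLD:
            return "proceed"  # accepted as human
    return "operator"  # too poor an imitation of a human
```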

New age Frankenstein

The abilities of these super machines will ultimately depend on the skills of their programmers, which means programs will only ever reflect an advanced, but limited, aspect of human nature. The nuances of people's behaviour played out in real life are too diverse, too unique, too transient ever to be translated into coded form.

How many intangible yet undeniably human experiences - ones intrinsic to people's overall “intelligence” - take place outside the scope of observation? What about shows of compassion or empathy, the instinctive knowing of when to reach out and when to hold back, spontaneity, pride, faith, despair, hope - are these not as central to human experience as the ability to be a savvy communicator?

If a core element of human thinking involves one's individual identity, woven together from a set of factors so unique they're impossible to reproduce, then any machine's claim to humanness is merely a copy of various generalisations and stereotypes.

The format of the Turing Test also poses problems: it bases human intelligence on the ability to engage with topics of a predominantly western nature, in a communication style typical of only certain groups.

Of course, the obsession with trying to create a human-like being - the old Frankenstein narrative - is perhaps one of the most human traits of all. While wild animals procreate for strictly biological reasons, in people there's an additional element of curious narcissism: what would it be like to reproduce a form of oneself? It's a fascination we also see played out in the field of cloning.

In trying to determine computers' ability to think, then, the goalposts will constantly shift, just as our own understanding of what it means to be human shifts. In Christian's words: “There's a sense in which this contest, which we invented as a means for measuring the machines, actually turns out to be a means of measuring ourselves.”
