Magazine article AI Magazine

Beyond the Turing Test

Article excerpt

Alan Turing's renowned test of intelligence, commonly known as the Turing test, is an inescapable signpost in AI. To people outside the field, the test - which hinges on the ability of machines to fool people into thinking that they (the machines) are people - is practically synonymous with the quest to create machine intelligence. Within the field, the test is widely recognized as a pioneering landmark, but it is also now seen as a distraction: designed over half a century ago, and too crude to really measure intelligence. Intelligence is, after all, a multidimensional variable, and no single test could ever definitively measure it. Moreover, the original test, at least in its standard implementations, has turned out to be highly gameable, arguably an exercise in deception rather than a true measure of anything especially correlated with intelligence. The much ballyhooed 2014 Turing test winner Eugene Goostman, for instance, pretends to be a thirteen-year-old foreigner and proceeds mainly by ducking questions and returning canned one-liners; it cannot see, it cannot think, and it is certainly a long way from genuine artificial general intelligence.

Our hope is to see a new suite of tests, part of what we have dubbed the Turing Championships, each designed in some way to move the field forward, toward previously unconquered territory. Most of the articles in this special issue stem from our first workshop toward creating such an event, held during the AAAI Conference on Artificial Intelligence in January 2015 in Austin, Texas.

The articles in this special issue can be broadly divided into those that propose specific tests, and those that look at the challenges inherent in building robust, valid, and reliable tests for advancing the state of the art in artificial intelligence.

In the article My Computer is an Honor Student - But How Intelligent Is It? Standardized Tests as a Measure of AI, Peter Clark and Oren Etzioni argue that standardized tests developed for children offer one starting point for testing machine intelligence.

Ernest Davis, in his article How to Write Science Questions That Are Easy for People and Hard for Computers, proposes an alternative test called SQUABU (science questions appraising basic understanding), which poses questions that are easy for people but hard for computers.

In Toward a Comprehension Challenge, Using Crowdsourcing as a Tool, Praveen Paritosh and Gary Marcus propose a crowdsourced comprehension challenge, in which machines will be asked to answer open-ended questions about movies, YouTube videos, podcasts, and stories.

The article The Social-Emotional Turing Challenge, by William Jarrold and Peter Z. Yeh, considers the importance of social-emotional intelligence and proposes a methodology for designing tests that assess the ability of machines to infer things like motivations and desires (often referred to in the psychological literature as theory of mind).

In Artificial Intelligence to Win the Nobel Prize and Beyond: Creating the Engine for Scientific Discovery, Hiroaki Kitano urges the field to build AI systems that can make significant, even Nobel-worthy, scientific discoveries.

In Planning, Executing, and Evaluating the Winograd Schema Challenge, Leora Morgenstern, Ernest Davis, and Charles L. …
