Newspaper article: St. Louis Post-Dispatch (MO)

Despite Musk's Dark Warning, Artificial Intelligence Is More Benefit Than Threat

Article excerpt

We expect scary predictions about the technological future from philosophers and science fiction writers, not famous technologists.

Elon Musk, though, turns out to have an imagination just as dark as that of Arthur C. Clarke and Stanley Kubrick, who created the sentient and ultimately homicidal computer HAL 9000 in "2001: A Space Odyssey."

Musk, who leads Tesla, SpaceX and SolarCity, among other ventures, and who proposed the Hyperloop concept, spoke to the National Governors Association last week on a variety of technology topics. When he got to artificial intelligence, the field of building computer systems to take over tasks that once required humans, such as decision-making and speech recognition, his words turned apocalyptic.

He called artificial intelligence, or AI, a "fundamental risk to the existence of human civilization." For example, Musk said, an unprincipled user of AI could start a war by spoofing email accounts and creating "fake news" to whip up tension.

Then Musk did something unusual for a businessman who has described himself as "somewhat libertarian": He urged the governors to be proactive in regulating AI. If we wait for the technology to develop and then try to rein it in, he said, we might be too late.

Are scientists that close to creating an uncontrollable, HAL-like intelligence? Sanmay Das, associate professor of computer science and engineering at Washington University, doesn't think so.

"This idea of AI being some kind of super-intelligence, becoming smarter than humans, I don't think anybody would subscribe to that happening in the next 100 years," Das said.

Society does have to face some regulatory questions about AI, he added, but they're not the sort of civilization-ending threat Musk was talking about.

The pressing issues are more like the one ProPublica raised last year in its "Machine Bias" investigation. States are using algorithms to tell them which convicts are likely to become repeat offenders, and the software may be biased against African-Americans. …
