Friday, November 14, 2014

The Mythology of AI


"The distinction between a corporation and an algorithm is fading. Does that make an algorithm a person?"

- Jaron Lanier 

By Jaron Lanier

A lot of us were appalled a few years ago when the American Supreme Court decided, out of the blue, to take up a question it hadn't been asked and declare that corporations are people. That's a cover for making it easier for big money to have an influence in politics. But there's another angle to it, which I don't think has been considered as much: the tech companies, which are becoming the most profitable, the fastest rising, the richest companies, with the most cash on hand, are essentially people for a different reason than that. They might be people because the Supreme Court said so, but essentially they're algorithms.

If you look at a company like Google or Amazon and many others, they do a little bit of device manufacture, but the only reason they do is to create a channel between people and algorithms. And the algorithms run on these big cloud computer facilities.

The distinction between a corporation and an algorithm is fading. Does that make an algorithm a person? Here we have an interesting confluence between two totally different worlds: the world of money and politics and the so-called conservative Supreme Court, meeting this other world of what we can call artificial intelligence, which is a movement within the technical culture to find an equivalence between computers and people. In both cases, there's an intellectual tradition that goes back many decades. Previously they'd been separated; they'd been worlds apart. Now, suddenly, they've been intertwined.

The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even before. There has always been a question about whether a program is something alive or not, since it intrinsically has some kind of autonomy at the very least, or it wouldn't be a program. There has been a domineering subculture—the most wealthy, prolific, and influential subculture in the technical world—that for a long time has promoted not only the idea that there's an equivalence between algorithms and life, and between certain algorithms and people, but a historical determinism: that we're inevitably making computers that will be smarter and better than us and will take over from us.

That mythology, in turn, has spurred a reactionary, perpetual spasm from people who are horrified by what they hear. You'll have a figure say, "The computers will take over the Earth, but that's a good thing, because people had their chance and now we should give it to the machines." Then you'll have other people say, "Oh, that's horrible, we must stop these computers." Most recently, some of the most beloved and respected figures in the tech and science world, including Stephen Hawking and Elon Musk, have taken that position of: "Oh my God, these things are an existential threat. They must be stopped."

In the past, all kinds of different figures have proposed that this kind of thing will happen, using different terminology. Some of them like the idea of the computers taking over, and some of them don't. What I'd like to do here today is propose that the whole basis of the conversation is itself askew, and confuses us, and does real harm to society and to our skills as engineers and scientists.

A good starting point might be the latest round of anxiety about artificial intelligence, which has been stoked by some figures who I respect tremendously, including Stephen Hawking and Elon Musk. And the reason it's an interesting starting point is that it's one entry point into a knot of issues that can be understood in a lot of different ways, but it might be the right entry point for the moment, because it's the one that's resonating with people.

The usual sequence of thoughts you have here is something like: "so-and-so," who's a well-respected expert, is concerned that the machines will become smart, they'll take over, they'll destroy us, something terrible will happen. They're an existential threat, whatever scary language there is. My feeling about that is it's a kind of non-optimal, silly way of expressing anxiety about where technology is going. The particular thing about it that isn't optimal is the way it talks about an end of human agency.

But it's a call for increased human agency, so in that sense maybe it's functional. Still, I want to go a little deeper into it by proposing that the biggest threat of AI is probably the one that's due to AI not actually existing, to the idea being a fraud, or at least such a poorly constructed idea that it's phony. In other words, what I'm proposing is that if AI were a real thing, then it probably would be less of a threat to us than it is as a fake thing.

What do I mean by AI being a fake thing? That it adds a layer of religious thinking to what otherwise should be a technical field. Now, if we talk about the particular technical challenges that AI researchers might be interested in, we end up with something that sounds a little duller and makes a lot more sense.

For instance, we can talk about pattern classification. Can you get programs that recognize faces, that sort of thing? And that's a field where I've been active. I was the chief scientist of the company Google bought that got them into that particular game some time ago. And I love that stuff. It's a wonderful field, and it's been wonderfully useful.
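To make concrete what pattern classification means in this duller, non-mythological sense, here is a minimal sketch of one classical technique, a nearest-centroid classifier, using invented toy feature vectors. The labels and numbers are purely illustrative assumptions; real face recognition systems use learned features and far more sophisticated models.

```python
# A toy nearest-centroid classifier: an illustration of pattern
# classification stripped to its bare bones. The feature vectors and
# labels below are invented for the example.

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labeled_examples):
    """Map each label to the centroid of its example vectors."""
    by_label = {}
    for label, vec in labeled_examples:
        by_label.setdefault(label, []).append(vec)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def classify(model, vec):
    """Return the label whose centroid is nearest in squared distance."""
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(model[label], vec))
    return min(model, key=dist2)

# Hypothetical training data: two "face" vectors, two "not_face" vectors.
examples = [
    ("face",     [0.9, 0.8, 0.1]),
    ("face",     [0.8, 0.9, 0.2]),
    ("not_face", [0.1, 0.2, 0.9]),
    ("not_face", [0.2, 0.1, 0.8]),
]
model = train(examples)
print(classify(model, [0.85, 0.85, 0.15]))  # prints "face"
```

The point of the sketch is only that, at this level, there is nothing mystical going on: averaging and distance comparisons, nothing that resembles a person.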

But when you add to it this religious narrative that's a version of the Frankenstein myth, where you say well, but these things are all leading to a creation of life, and this life will be superior to us and will be dangerous ... when you do all of that, you create a series of negative consequences that undermine engineering practice, and also undermine scientific method, and also undermine the economy.

The problem I see isn't so much with the particular techniques, which I find fascinating and useful, and am very positive about, and should be explored more and developed, but the mythology around them which is destructive. I'm going to go through a couple of layers of how the mythology does harm...



Jaron Lanier is a computer scientist, author, and composer. He is one of the most celebrated technology writers in the world, and is known for charting a humanistic approach to technology appreciation and criticism. He was awarded the Peace Prize of the German Book Trade in 2014. His book "Who Owns the Future?" won Harvard's Goldsmith Book Prize in 2014.