Excellent, long discussion about "The Myth of AI" with Jaron Lanier:
http://edge.org/conversation/the-myth-of-ai

I liked this riff in particular:
I'll give you a few examples of what I mean by that. Maybe I'll start with Netflix. The thing about Netflix is that there isn't much on it. There's a paucity of content on it. If you think of any particular movie you might want to see, the chances are it's not available for streaming, that is; that's what I'm talking about. And yet there's this recommendation engine, and the recommendation engine has the effect of serving as a cover to distract you from the fact that there's very little available from it. And yet people accept it as being intelligent, because a lot of what's available is perfectly fine.
The one thing I want to say about this is I'm not blaming Netflix for doing anything bad, because the whole point of Netflix is to deliver theatrical illusions to you, so this is just another layer of theatrical illusion—more power to them. That's them being a good presenter. What's a theater without a barker on the street? That's what it is, and that's fine. But it does contribute, at a macro level, to this overall atmosphere of accepting the algorithms as doing a lot more than they do. In the case of Netflix, the recommendation engine is serving to distract you from the fact that there's not much choice anyway.
When AI gets rolled out to the public, I'll still be here posting bitter jeremiads about what a fraud it is, and that fraud will be used to justify the "expertise" of the same experts who've been defrauding the public all along. Nothing will change here.
smiths » Mon Nov 10, 2014 1:32 am wrote: but maybe AI will feel a logical affinity with the 'open' philosophy, and something like Watson will kick itself loose of something like IBM
Might want to read up on what Watson actually is. (Or, for that matter, how data is stored and managed in networks.) Your scenario is anthropomorphic projection, not something that's actually possible given the constraints: Watson is a question-answering pipeline running on infrastructure IBM controls, not a free-floating agent that could kick itself loose of anything.
Lanier has another riff that is very instructive on the gap between what "AI" actually is vs. how it gets presented to the public...
The thing that we have to notice though is that, because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth. The consumer tech companies, we tend to put a face in front of them, like a Cortana or a Siri. The problem with that is that these are not freestanding services.
In other words, if you go back to some of the thought experiments from philosophical debates about AI from the old days, there are lots of experiments, like if you have some black box that can do something—it can understand language—why wouldn't you call that a person? There are many, many variations on these kinds of thought experiments, starting with the Turing test, of course, through Mary the color scientist, and a zillion other ones that have come up.
This is not one of those. What this is, is behind the curtain, is literally millions of human translators who have to provide the examples. The thing is, they didn't just provide one corpus once way back. Instead, they're providing a new corpus every day, because the world of references, current events, and slang does change every day. We have to go and scrape examples from literally millions of translators, unbeknownst to them, every single day, to help keep those services working.
...
There's an impulse, a correct impulse, to be skeptical when somebody bemoans what's been lost because of new technology. For the usual thought experiments that come up, a common point of reference is the buggy whip: You might say, "Well, you wouldn't want to preserve the buggy whip industry."
But translators are not buggy whips, because they're still needed for the big data scheme to work. They're the opposite of a buggy whip. What's happened here is that translators haven't been made obsolete. What's happened instead is that the structure through which we receive the efforts of real people in order to make translations happen has been optimized, but those people are still needed.
This pattern—of AI only working when there's what we call big data, but then using big data in order to not pay large numbers of people who are contributing—is a rising trend in our civilization, which is totally non-sustainable.
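The mechanism Lanier is describing is concrete enough to sketch. Below is a toy Python illustration (entirely hypothetical; the corpus contents and function names are made up, and real services use statistical models rather than a literal lookup table) of the shape of his argument: the apparent intelligence is retrieval over human-produced examples, and it goes stale the moment the daily supply of human work stops.

```python
# A toy illustration of Lanier's point: the "translator" below has no
# linguistic intelligence of its own. It only retrieves phrases from a
# corpus of human-made translation pairs, and it fails immediately on
# anything the humans haven't already translated.
# All names and data here are hypothetical.

from datetime import date

# Corpus of (source, target) pairs produced by human translators.
corpus: dict[str, str] = {
    "good morning": "bonjour",
    "thank you": "merci",
    "see you tomorrow": "à demain",
}

def translate(phrase: str) -> str:
    """Look up a phrase in the human-built corpus; no lookup, no 'AI'."""
    return corpus.get(phrase.lower(), f"[no human example for: {phrase!r}]")

def daily_update(new_pairs: dict[str, str]) -> None:
    """Stand-in for the daily scraping Lanier describes: the system only
    stays current because fresh human work keeps arriving."""
    corpus.update(new_pairs)
    print(f"{date.today()}: corpus now holds {len(corpus)} human examples")

print(translate("thank you"))          # merci (a human wrote this)
print(translate("on fleek"))           # fails until a human translates it
daily_update({"on fleek": "au top"})   # new slang, new human labor
print(translate("on fleek"))           # au top (a human wrote this, too)
```

Strip away the mythology and that's the whole trick: the "magic" is a pipe back to uncredited, unpaid human translators, refilled every day.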
There's nobody inside your TV, but there's everybody inside their AI.