The AI Revolution: The Road to Superintelligence
By Tim Urban
Note: The reason this post took three weeks to finish is that as I dug into research on Artificial Intelligence, I could not believe what I was reading. It hit me pretty quickly that what’s happening in the world of AI is not just an important topic, but by far THE most important topic for our future. So I wanted to learn as much as I could about it, and once I did that, I wanted to make sure I wrote a post that really explained this whole situation and why it matters so much. Not shockingly, that became outrageously long, so I broke it into two parts. This is Part 1—Part 2 will go up next week.
_______________
We are on the edge of change comparable to the rise of human life on Earth. — Vernor Vinge
What does it feel like to stand here?
It seems like a pretty intense place to be standing—but then you have to remember something about what it’s like to stand on a time graph: you can’t see what’s to your right. So here’s how it actually feels to stand there:
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.
Superintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian Economics. In our world, smart means a 130 IQ and stupid means an 85 IQ—we don’t have a word for an IQ of 12,952.
What we do know is that the history of humans on the Earth suggests a clear rule: with intelligence comes power. Which means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim—and this might happen in the next few decades.
If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time—everything we consider magic, every power we imagine a supreme God to have will be as mundane an activity for the ASI as flipping on a light switch is for us. Creating the technology to reverse human aging, curing disease and hunger and even mortality, reprogramming the weather to protect the future of life on Earth—all suddenly possible. Also possible is the immediate end of all life on Earth. As far as we're concerned, if an ASI comes into being, there is now an omnipotent God on Earth—and the all-important question for us is:
Will it be a nice God?
Wombaticus Rex » 24 Jan 2015 08:48 wrote:
Tim Urban means well but, what the fuck? "I've been reading about AI for three weeks?" Damn, let's get you onstage at TED post haste, daug!
I guess I shouldn't be surprised that a cartoonist/blogger didn't exactly nail a book report on Superintelligence reassembled from stolen infographics. My heuristic algorithms should have seen that coming. I would strongly recommend anyone with an interest in this subject read that book.
Or, you know, dig into this: http://edge.org/responses/q2015

The fears of runaway AI systems either conquering humans or making them irrelevant are not even remotely well grounded. Misled by suitcase words, people are making category errors in fungibility of capabilities. These category errors are comparable to seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner.
Linear calculations just sort lists. "Intelligence" may prove slightly more complex.
WRex: Linear calculations just sort lists. "Intelligence" may prove slightly more complex.
Elon Musk ✔ @elonmusk
Good primer on the exponential advancement of technology, particularly AI http://waitbutwhy.com/2015/01/artificia ... ion-1.html …
12:46 PM - 23 Jan 2015
justdrew » Sat Jan 24, 2015 1:01 pm wrote:
I think there's a possibility that there are real limits on what we call "intelligence" - and wouldn't it be a good idea to nail the definition of it down before trying to build machines/software to duplicate it?
We can back up petabytes of sili-brains perfectly in seconds, but transfer of information between carbo-brains takes decades and the similarity between the copies is barely recognizable. Some speculate that we could translate from carbo to sili, and even get the sili version to behave like the original. However, such a task requires much deeper understanding than merely making a copy. We harnessed the immune system via vaccines in 10th-century China and 18th-century Europe, long before we understood cytokines and T-cell receptors. We do not yet have a medical nanorobot of comparable agility or utility. It may turn out that making a molecularly adequate copy of a 1.2 kg brain (or 100 kg body) is easier than understanding how it works (or than copying my brain to a room of students "multitasking" with smartphone cat videos and emails). This is far more radical than human cloning, yet does not involve embryos.
Via George Church: http://edge.org/response-detail/26027
Because of mistakes, we have a view of natural reality which is too flat, and this is the origin of the confusion. The world is more or less just a large collection of particles, arranged in various manners. This is just factually true. But if we then try to conceive the world precisely as we conceive an amorphous and disorganised bunch of atoms, we fail to understand the world, because the virtually unlimited combinatorics of these atoms is rich enough to include stones, water, clouds, trees, galaxies, rays of light, the colours of the sunset, the smiles of the girls in the spring, and the immense black starry night, as well as our emotions and our thinking about all this, which are so hard to conceive in terms of the combinatorics of atoms, not because some black magic intervenes from outside nature, but because these thinking machines that we ourselves are have such limited thinking capacities.
Via Carlo Rovelli: http://edge.org/response-detail/26026
But here is the key point. The limits of each intelligence are an engine of evolution. Mimicry, camouflage, deception, parasitism—all are effects of an evolutionary arms race between different forms of intelligence sporting different strengths and suffering different limits.
Only recently has the stage been set for AIs to enter this race. As our computing resources expand and become better connected, more niches will appear in which AIs can reproduce, compete and evolve. The chaotic nature of evolution makes it impossible to predict precisely what new forms of AI will emerge. We can confidently predict, however, that there will be surprises and mysteries, strengths where we have weaknesses, and weaknesses where we have strengths.
Via Donald Hoffman: http://edge.org/response-detail/26036
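Hoffman's arms-race framing is easy to see in miniature. Below is a toy co-evolution sketch in Python; everything in it (the population size, the mimic/detector framing, the mutation rate) is an invented illustration, not anything from the Edge essay. Two populations whose fitness is defined only relative to each other end up chasing one another, so neither side's "strength" means anything in isolation.

# Toy co-evolution sketch: two populations whose fitness depends on each
# other, loosely illustrating an evolutionary arms race between different
# forms of intelligence. All names and numbers are illustrative assumptions.
import random

POP, GENS, MUT = 30, 50, 0.1

def evolve(pop, fitness):
    # One generation: score, keep the top half, refill with mutated copies.
    ranked = sorted(pop, key=fitness, reverse=True)
    survivors = ranked[: POP // 2]
    children = [x + random.gauss(0, MUT) for x in survivors]
    return survivors + children

# Mimics try to match the detectors' average threshold; detectors try to
# move away from the mimics' average signal. Each side's fitness is defined
# only relative to the other side's current state.
mimics = [random.random() for _ in range(POP)]
detectors = [random.random() for _ in range(POP)]

for gen in range(GENS):
    d_mean = sum(detectors) / POP
    m_mean = sum(mimics) / POP
    mimics = evolve(mimics, lambda x: -abs(x - d_mean))
    detectors = evolve(detectors, lambda x: abs(x - m_mean))

print(f"final mimic mean {sum(mimics)/POP:.2f}, "
      f"detector mean {sum(detectors)/POP:.2f}")

Run it a few times: the mimic population tracks the detector mean while the detectors drift away from the mimics, a crude version of "strengths where we have weaknesses, and weaknesses where we have strengths."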
Wombaticus Rex wrote:
Also: my favorite overall was by Satyajit Das.