Artificial Intelligence / Digital life / Skynet megathread

Moderators: Elvis, DrVolin, Jeff

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Ben D » Tue Dec 23, 2014 12:30 am

"The Dominant Life Form in the Cosmos Is Probably Superintelligent Robots"

No, it's probably not.

It's most definitely something we haven't conceived of yet, just as we hadn't conceived of robots or AI a century ago.

This is not meant to convert anyone, nor steer the discussion towards religion; it is just a plain statement of how I understand it. The dominant life form is not in the Cosmos, but IS the Cosmos. The holarchy of Cosmic forms that constitute the Cosmos, from multiverses to galaxies, to stars, to planets, etc., gives sentience to the entities that exist at each level. Every life form lives its life in some greater life, from the infinitesimally small to the infinitely large, i.e. the one absolute Cosmos. In fact there is really only one life...everything else is an aspect of it...and since nothing can ever be lost from the Cosmos...it alone is eternal while all else is relatively temporal.
There is That which was not born, nor created, nor evolved. If it were not so, there would never be any refuge from being born, or created, or evolving. That is the end of suffering. That is God**.

** or Nirvana, Allah, Brahman, Tao, etc...
Ben D
 
Posts: 2005
Joined: Sun Aug 12, 2007 8:10 pm
Location: Australia
Blog: View Blog (3)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby slimmouse » Tue Dec 23, 2014 5:17 am

Whilst labouring around language and our numerous understandings of what a term actually means, I might suggest that human beings are certainly artificial, by any reasonable definition of the term. Our bodies are like our own individual dinky little biological suits.

For me it's the intelligence (consciousness) emanating from within each of our own little custom-made suits that we probably need to consider more dutifully.
slimmouse
 
Posts: 6129
Joined: Fri May 20, 2005 7:41 am
Location: Just outside of you.
Blog: View Blog (3)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby coffin_dodger » Wed Jul 01, 2015 2:44 pm

Google apologises for Photos app's racist blunder BBC News 1 Jul 2015

Google says it is "appalled" that its new Photos app mistakenly labelled a black couple as being "gorillas".

Its product automatically tags uploaded pictures using its own artificial intelligence software.

http://www.bbc.co.uk/news/technology-33347866


Who knew AI would be racist, eh?
coffin_dodger
 
Posts: 2216
Joined: Thu Jun 09, 2011 6:05 am
Location: UK
Blog: View Blog (14)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby DrEvil » Wed Jul 01, 2015 3:53 pm

Well, it was trained on material supplied by humans all over the world.
Same reason Google maps would direct you to the White House if you googled "nigger king" a while ago.
"I only read American. I want my fantasy pure." - Dave
DrEvil
 
Posts: 3981
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby coffin_dodger » Wed Jul 01, 2015 4:38 pm

DrEvil wrote:Well, it was trained on material supplied by humans all over the world.


Indeed, but it's a circular problem, isn't it?

Any AI is going to be initially (or completely) programmed by humans.

Any AI capable of then determining its own cognitive functions without human intervention - scary.
coffin_dodger
 
Posts: 2216
Joined: Thu Jun 09, 2011 6:05 am
Location: UK
Blog: View Blog (14)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby DrEvil » Wed Jul 01, 2015 5:25 pm

coffin_dodger » Wed Jul 01, 2015 10:38 pm wrote:
DrEvil wrote:Well, it was trained on material supplied by humans all over the world.


Indeed, but it's a circular problem, isn't it?

Any AI is going to be initially (or completely) programmed by humans.

Any AI capable of then determining its own cognitive functions without human intervention - scary.


I'm not sure if you could say that the "racist" AI was programmed by humans. It learns on its own using data collected by Google. What exactly it learns isn't planned out in advance. The same algorithm labeled people of all colors as dogs at one point, too.

Obviously the underlying algorithms for how it learns are made by humans, but from there on out the programmers act more like chaperones than engineers. It's a bit like a fractal: the underlying algorithm (made by humans) is pretty simple, but the patterns that emerge aren't in any way designed by humans. They're an emergent property of the algorithm.
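To make the "chaperones, not engineers" point concrete, here's a toy sketch (in no way Google's actual system): a nearest-centroid classifier. The code contains no labels or rules of its own; everything it "knows" comes from the examples it is fed, so feeding it different (or mislabeled) data changes its behavior without changing a line of code.

```python
def train(examples):
    """examples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for vec, label in examples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict(centroids, vec):
    """Return the label whose centroid is closest to vec."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, vec))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

# Toy 2-D "image features"; the labels come entirely from the data.
data = [([0.0, 0.1], "cat"), ([0.1, 0.0], "cat"),
        ([0.9, 1.0], "dog"), ([1.0, 0.9], "dog")]
model = train(data)
print(predict(model, [0.95, 0.95]))  # -> dog
```

Swap the example labels around and the exact same code confidently produces the opposite answers, which is the whole problem: the mistakes live in the data, not in anything a programmer wrote.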

As a side note: I remember reading an interview with Google engineers where they talked about how some of their larger databases were starting to act almost as if they were alive, doing things no-one had predicted. :shock:
"I only read American. I want my fantasy pure." - Dave
DrEvil
 
Posts: 3981
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby coffin_dodger » Wed Jul 01, 2015 7:06 pm

Dr Evil said:
I'm not sure if you could say that the "racist" AI was programmed by humans


But you laid the blame on humans for supplying the material it learned from in your initial response. :starz:

AI is one of the biggest scams ever perpetrated on 'brilliant' minds.

We live in a society of don'ts. The list of 'don'ts' is constantly being added to, almost on a daily basis.

One should pity poor racist Google AI - it compared images of humans from all over the world and found (due to fractals) that some black people look more similar to gorillas than Chinese, Indians or Pale-Faces. That's a pretty big fucking faux-pas, right there.

Many more to come from AI, before it's finally packed away into the annals of history as a rather embarrassing sidenote.
coffin_dodger
 
Posts: 2216
Joined: Thu Jun 09, 2011 6:05 am
Location: UK
Blog: View Blog (14)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby DrEvil » Sun Jul 05, 2015 3:33 pm

coffin_dodger » Thu Jul 02, 2015 1:06 am wrote:Dr Evil said:
I'm not sure if you could say that the "racist" AI was programmed by humans


But you laid the blame on humans for supplying the material it learned from in your initial response. :starz:


It learned from us, sure, but no-one specifically programmed it to be racist. Nit-picking, I know, but there is a difference. And it wasn't really racist, of course; it was just an unfortunate side effect of a less-than-perfect image recognition algorithm.

AI is one of the biggest scams ever perpetrated on 'brilliant' minds.


No, not really. AI has been around for decades. Phone routing software is (was? Not sure if it still is) based on ant path-finding behavior, for example. Apple's Siri, Google Now, Cortana and whatever the Baidu one is called are all possible because of AI (note: there is a difference between dumb AI, available now, and smart AI, i.e. AGI (Artificial General Intelligence), available in The Future(tm)), and a whole host of other things small and large. In a few years pretty much everything with a chip will be connected to an AI in some way or another (the fabled Internet of Things - IoT).

We live in a society of don'ts. The list of 'don'ts' is constantly being added to, almost on a daily basis.

One should pity poor racist Google AI - it compared images of humans from all over the world and found (due to fractals) that some black people look more similar to gorillas than Chinese, Indians or Pale-Faces. That's a pretty big fucking faux-pas, right there.


Not really. It was an algorithm that made a simple mistake, which is now being corrected.

Many more to come from AI, before it's finally packed away into the annals of history as a rather embarrassing sidenote.


I don't think you realize just how widespread and useful AI already is. It's not going away, unless they declare a Butlerian Jihad or something, and not even then if they win. :)
"I only read American. I want my fantasy pure." - Dave
DrEvil
 
Posts: 3981
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Nordic » Sun Jul 05, 2015 3:56 pm

DrEvil » Wed Jul 01, 2015 4:25 pm wrote:
coffin_dodger » Wed Jul 01, 2015 10:38 pm wrote:
DrEvil wrote:Well, it was trained on material supplied by humans all over the world.


Indeed, but it's a circular problem, isn't it?

Any AI is going to be initially (or completely) programmed by humans.

Any AI capable of then determining its own cognitive functions without human intervention - scary.


I'm not sure if you could say that the "racist" AI was programmed by humans. It learns on its own using data collected by Google. What exactly it learns isn't planned out in advance. The same algorithm labeled people of all colors as dogs at one point, too.

Obviously the underlying algorithms for how it learns are made by humans, but from there on out the programmers act more like chaperones than engineers. It's a bit like a fractal: the underlying algorithm (made by humans) is pretty simple, but the patterns that emerge aren't in any way designed by humans. They're an emergent property of the algorithm.

As a side note: I remember reading an interview with Google engineers where they talked about how some of their larger databases were starting to act almost as if they were alive, doing things no-one had predicted. :shock:


And that should scare the shit out of everybody. And lead us, if we had any real intelligence at all, to shut it all down until we can come to terms with it.

We've gotten by very well for thousands and thousands of years without AI.

With AI that could change very quickly.

Just because you CAN do something doesn't mean you SHOULD.

The Nazis sorta proved that.
"He who wounds the ecosphere literally wounds God" -- Philip K. Dick
Nordic
 
Posts: 14230
Joined: Fri Nov 10, 2006 3:36 am
Location: California USA
Blog: View Blog (6)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby zangtang » Mon Jul 06, 2015 12:14 pm

that sounds like an interview worth reading.....if you can find it i'll prepare some new underwear.
zangtang
 
Posts: 1247
Joined: Fri Jun 10, 2005 2:13 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Luther Blissett » Wed Jul 22, 2015 2:10 pm

We Spoke to a Researcher Working to Stop AI from Killing Us All
July 21, 2015
by Dan Nulley

Most of us aren't afraid of killer robots because we're adults and we know they aren't real. But earlier this month the Future of Life Institute—a group backed by famed tech entrepreneur Elon Musk—handed out 37 grants to projects focused on keeping artificial intelligence "robust and beneficial to humanity." In other words, they've devoted millions of dollars to making sure that the machines don't rise up and kill us all.

Among other things, the funded projects aim to keep AI systems aligned with human morals and limit the independence of autonomous weapons. One of the recipients was Ben Rubinstein, a senior lecturer in computing and information systems at Melbourne University who received $100,000 to make sure computers don't turn on us and breach important security systems.

VICE caught up with him to ask how in God's name he's going to do that.

VICE: Hey Ben, so movies love the idea of AI overtaking human intelligence. Is that a real concern?
Ben Rubinstein: Personally, I don't think it's inevitable. From the outside it looks like we are moving really fast, but from the inside it doesn't look that way. When I look at AI, I see lots of things it can't do. There's this thing called Moravec's paradox, and with some exceptions, it basically says humans and computers aren't good at the same things.

I take it morality is one of the things computers aren't good at. How do you implant morals and ethics into a machine brain?
When AI becomes a level above what it is now, we need to have value alignment. The problem is, what if the utility doesn't align with a human's utility function? Isaac Asimov was a science fiction writer, and he wrote three laws of robotics: Robots shouldn't injure a human or allow a human to come into harm. Robots should obey orders from a human unless it violates [law] number one. And robots should protect themselves unless they violate laws one and two. These laws make for good reading, and make a lot of sense, but the problem is they are very vague.

How do you make them less vague?
One way some of the research projects are trying to do this is by having the AI learn human judgments. Simply get the AI to watch humans, feed that into a machine, and then design an algorithm so it can observe the actions we might take. Have a model of the world and ascribe to it values that can explain what we are doing, if that makes sense.

Kind of...
Basically it's inverting the process: instead of going from values to actions, you observe the actions we take and try to reverse-engineer us to figure out what our values are.
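A toy sketch of that inversion (entirely hypothetical; the options, features, and candidate weightings below are invented for illustration, not the funded research itself): enumerate candidate value weightings and keep the ones that would have produced the choice the human was observed making.

```python
# Each option is described by feature values: (helps_human, saves_time).
options = {"assist": (1, 0), "rush": (0, 1)}
observed_choice = "assist"  # what the human was seen doing

def best_option(weights):
    """Pick the option that maximizes the weighted sum of its features."""
    return max(options, key=lambda o: sum(w * f for w, f in zip(weights, options[o])))

# Candidate value functions the observer considers: how much weight
# each puts on "helps_human" vs. "saves_time".
candidates = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]

# Keep only the weightings consistent with the observed behavior.
inferred = [w for w in candidates if best_option(w) == observed_choice]
print(inferred)
```

Real inverse reinforcement learning works over sequences of actions and probabilistic models rather than a brute-force enumeration, but the direction of inference is the same: from behavior back to values.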

Tell me about your project. What will the grant allow you to do?
I'll be focusing on machine learning. For a short-term problem, I want to find out if machine learning can be misled. When you design a machine learning system, you have something in mind that you want it to do: it's going to extract patterns from data and accurately predict something, maybe about customer retention or predicting a disease from a medical diagnostic.

But say you were to feed a machine learning system slightly incorrect data on purpose: how much would it influence the system in the wrong way? This is particularly relevant when you are talking about cybersecurity. Imagine a sophisticated adversary that doesn't try to hack into your system by exploiting a bug in your code, but instead misleads the machine learning algorithm to make it seem like something is happening when it's not, like autonomous weapons going off randomly.
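A toy illustration of that kind of poisoning (my own sketch, not Rubinstein's actual work): a one-dimensional classifier whose decision threshold is the midpoint of the two class means. Deliberately flipping the label of a single borderline training point shifts the threshold enough to change a prediction.

```python
def boundary(samples):
    """samples: list of (x, label in {0, 1}). Returns the decision threshold
    as the midpoint of the two class means."""
    class0 = [x for x, y in samples if y == 0]
    class1 = [x for x, y in samples if y == 1]
    return (sum(class0) / len(class0) + sum(class1) / len(class1)) / 2

clean = [(0.0, 0), (1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1), (10.0, 1)]
# Adversary flips one borderline label: the point at 8.0 now claims class 0.
poisoned = [(0.0, 0), (1.0, 0), (2.0, 0), (8.0, 0), (9.0, 1), (10.0, 1)]

x = 6.0
print(x > boundary(clean))     # True:  x lands in class 1
print(x > boundary(poisoned))  # False: same x now lands in class 0
```

One mislabeled point out of six was enough to flip the classification of a test input, which is exactly why this matters when the inputs come from an adversary rather than an honest data pipeline.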

Anywhere machine learning is being used to make important decisions about something like someone's health—such as monitors in hospitals—is somewhere my research is relevant. It's not just about hacking into the system anymore.

Is a Terminator scenario likely at any point?
Unlikely in my lifetime, and I am in my mid 30s. But surveys have been conducted with international AI experts, asking when AI might be able to do the general things humans can do. They say about 2040 or 2050. But AI researchers are notoriously bad at estimating how far AI is going to come, so I would take these predictions with a grain of salt.

But you're not ruling it out. Does that mean you're saying it's a theoretical possibility?
Yes, it is. AI is improving, and one day it will be there. But when you look at Terminator-style science fiction, it always looks kind of hopeless for humans and the only way out is to get a time machine. But the problem with this sci-fi is it often looks at this current society and says, What would it look like if AI became super intelligent now?

If we had Terminators or Cylons walking through the streets today we would be in trouble. But before we get there, AI is going to progress, and AI is going to be given more and more responsibility to act in our world. I think we will see small-scale accidents happen first. For example, with autonomous driving or elderly care robots there will be an accident. And any accident, even on a small scale, will significantly rein in the ability of AI to be used.

That's why it makes it hard to predict what we will do when super intelligence is around. But I am feeling pretty optimistic about the whole thing.
The Rich and the Corporate remain in their hundred-year fever visions of Bolsheviks taking their stuff - JackRiddler
Luther Blissett
 
Posts: 4990
Joined: Fri Jan 02, 2009 1:31 pm
Location: Philadelphia
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby DrEvil » Wed Jul 22, 2015 7:11 pm

zangtang » Mon Jul 06, 2015 6:14 pm wrote:that sounds like an interview worth reading.....if you can find it i'll prepare some new underwear.


I posted it in a different thread. Not exactly an interview, but interesting nonetheless:

viewtopic.php?f=8&t=39072&p=569419&hilit=It%27s+Alive!#p569419
"I only read American. I want my fantasy pure." - Dave
DrEvil
 
Posts: 3981
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby zangtang » Thu Jul 23, 2015 5:57 am

just re-read that - not frightening at all.
if i understand it rather than merely thinking i understand it,
the part responsible for scheduling & apportioning tasks
is responding to ever-increasing complexity by making increasingly complex
decisions which appear....unpredictable.

gatling gun safety return to //on
zangtang
 
Posts: 1247
Joined: Fri Jun 10, 2005 2:13 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby divideandconquer » Sat Jul 25, 2015 8:51 am

I'm not sure if you could say that the "racist" AI was programmed by humans. It learns on its own using data collected by Google. What exactly it learns isn't planned out in advance. The same algorithm labeled people of all colors as dogs at one point, too.

Obviously the underlying algorithms for how it learns are made by humans, but from there on out the programmers act more like chaperones than engineers. It's a bit like a fractal: the underlying algorithm (made by humans) is pretty simple, but the patterns that emerge aren't in any way designed by humans. They're an emergent property of the algorithm.

As a side note: I remember reading an interview with Google engineers where they talked about how some of their larger databases were starting to act almost as if they were alive, doing things no-one had predicted.


Yes, but isn't it possible to create biased computer algorithms? Or to manipulate the input data in order to exploit specific vulnerabilities of learning algorithms? To redirect the computation of the underlying algorithms in some way?

I mean, we trust algorithms because we think of them as objective, whereas the reality is that biased humans create these algorithms and can embed in them all sorts of biases and perspectives. In other words, a computer algorithm is unbiased in its execution, but this doesn't mean that there is no bias encoded within it.

Google is intensely resistant to releasing the computer algorithms they use to process and adjust the data, so how is it possible to know that Google isn't using algorithms that assign exaggerated weight to whatever will ensure the results they want?
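A hypothetical sketch of that point (the page names, feature names, and weights below are all invented for illustration): the ranking code runs the same deterministic way every time, yet which result comes out on top is decided entirely by the weights someone chose to bake in.

```python
# Two candidate results, each scored on invented features.
results = {
    "page_a": {"relevance": 0.9, "advertiser": 0.1},
    "page_b": {"relevance": 0.6, "advertiser": 0.9},
}

def rank(weights):
    """Sort results by weighted feature score, best first."""
    def score(r):
        return sum(weights[k] * v for k, v in results[r].items())
    return sorted(results, key=score, reverse=True)

neutral = {"relevance": 1.0, "advertiser": 0.0}
skewed  = {"relevance": 1.0, "advertiser": 1.0}
print(rank(neutral)[0])  # -> page_a
print(rank(skewed)[0])   # -> page_b
```

Both runs are "unbiased in execution" in the sense that the code does exactly what it says; the bias, if any, lives in the weight table, which is precisely the part an outsider never gets to see.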
'I see clearly that man in this world deceives himself by admiring and esteeming things which are not, and neither sees nor esteems the things which are.' — St. Catherine of Genoa
divideandconquer
 
Posts: 1021
Joined: Mon Dec 24, 2012 3:23 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Elvis » Sat Jul 25, 2015 9:23 am

I noticed that those Google 'pattern recognition'-processed photos include a lot of dogs -- dogs popping out of everything. I suppose that's because of the disproportionate number of dog photos on the Internet. So why no cats? It's dogs, dogs, dogs in those images.
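One plausible mechanism, sketched here with made-up numbers (this is a guess about class imbalance, not a description of Google's pipeline): if dog photos heavily outnumber everything else in the training data, the class prior alone can pull an ambiguous image toward "dog".

```python
from collections import Counter

# Imbalanced training set: far more dog examples than cat examples.
training_labels = ["dog"] * 90 + ["cat"] * 10
priors = {lab: n / len(training_labels)
          for lab, n in Counter(training_labels).items()}

# An ambiguous image: the likelihoods are equal, so the prior decides.
likelihood = {"dog": 0.5, "cat": 0.5}
posterior = {lab: priors[lab] * likelihood[lab] for lab in priors}
print(max(posterior, key=posterior.get))  # -> dog
```

On a genuinely fifty-fifty image, the 90/10 prior tips the answer to "dog" every time, so a net trained on a dog-heavy Internet would hallucinate dogs everywhere.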
“The purpose of studying economics is not to acquire a set of ready-made answers to economic questions, but to learn how to avoid being deceived by economists.” ― Joan Robinson
Elvis
 
Posts: 7434
Joined: Fri Apr 11, 2008 7:24 pm
Blog: View Blog (0)
