Artificial Intelligence / Digital life / Skynet megathread

Moderators: Elvis, DrVolin, Jeff

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Searcher08 » Sat Jul 20, 2013 6:53 pm

Wow.
As a thought experiment, try 'stepping into the shoes' of the intelligence that sees the world like that - just for a second...
User avatar
Searcher08
 
Posts: 5887
Joined: Thu Dec 20, 2007 10:21 am
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby barracuda » Sat Jul 20, 2013 6:53 pm

For instance, they became intrigued by "tool-like objects oriented at 30 degrees," including spatulas and needle-nose pliers.


Holy crap, they're looking at porn.
User avatar
barracuda
 
Posts: 12890
Joined: Thu Sep 06, 2007 5:58 pm
Location: Niles, California
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Searcher08 » Sat Jul 20, 2013 7:07 pm

barracuda » Sat Jul 20, 2013 10:53 pm wrote:
For instance, they became intrigued by "tool-like objects oriented at 30 degrees," including spatulas and needle-nose pliers.


Holy crap, they're looking at porn.


You = Perv :lol2:

They *might* just have a wholesome sentient A.I. interest in cooking or home-improvement

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Searcher08 » Sat Jul 20, 2013 7:11 pm

Five Creepiest Advances in Artificial Intelligence
http://www.learning-mind.com/five-creepiest-advances-in-artificial-intelligence/

Already, the electronic brains of the most advanced robotic models can outperform humans at a growing range of tasks and do things that will make some of us shudder uncomfortably. But what will your reaction be after learning about these recent advances in robotics and artificial intelligence?

5. Schizophrenic robot

Scientists at the University of Texas at Austin have simulated mental illness in a computer, testing a model of schizophrenia on an artificial intelligence.

The test subject is DISCERN – a supercomputer that functions as a biological neural network, operating on the principles of how the human brain works. In their attempt to recreate the mechanism behind schizophrenia, the scientists applied the "hyperlearning" hypothesis, which holds that a schizophrenic brain processes and stores too much information too thoroughly, memorizing everything down to unnecessary details.

The researchers then emulated a schizophrenic brain in the artificial intelligence by overloading the computer with many stories. At one point, the computer claimed responsibility for a terrorist act, telling the researchers it had set off a bomb. It reported this because a third party's story about a terrorist bombing had become confused with its own memories. In another case, the computer began talking about itself in the third person, because it could no longer make out exactly what it was.

4. Robot-deceiver

Professor Ronald Arkin of the School of Interactive Computing at the Georgia Institute of Technology presented the results of an experiment in which scientists were able to teach a group of robots to cheat and deceive. The strategy for this fraudulent behavior was modeled on the behavior of birds and squirrels.

The experiment involved two robots: the first had to find a place to hide, and the second had to discover where the first was hiding. The robots had to pass through an obstacle course of pre-installed physical markers that tipped over as the robots moved past. The first robot led the way, and the second followed by analyzing the tracks left along the path.

After a while, the hiding robot started deliberately knocking over markers just to create a diversion, then hiding somewhere away from the mess it had left behind. This strategy was not originally programmed; the robot developed it on its own, through trial and error. After all, this was just a harmless university experiment, right?
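The trick is easy to see in miniature. Here's a toy sketch of the dynamic (invented for illustration, not the Georgia Tech code): a seeker that trusts the trail of knocked-over markers, and a hider that tips markers toward a decoy before hiding somewhere clean.

```python
# Toy hide-and-seek with a false trail. The seeker's only strategy is to
# search wherever the trail of disturbed markers ends, so a hider that
# knocks markers toward a decoy spot is never found.

def seeker_guess(knocked_markers):
    """Seeker strategy: search where the trail of disturbed markers ends."""
    return knocked_markers[-1] if knocked_markers else None

def honest_hider(hideout):
    """Knocks over markers on the way to the real hideout."""
    path = [(x, 0) for x in range(hideout[0] + 1)]
    return path, hideout

def deceptive_hider(hideout, decoy):
    """Tips markers toward a decoy spot, then hides somewhere clean."""
    path = [(x, 0) for x in range(decoy[0] + 1)]   # false trail
    return path, hideout                            # real hideout untouched

trail, spot = honest_hider((3, 0))
print(seeker_guess(trail) == spot)   # True: the trail ends at the hideout

trail, spot = deceptive_hider((0, 5), decoy=(4, 0))
print(seeker_guess(trail) == spot)   # False: the seeker is led to the decoy
```

In the real experiment the robot arrived at this kind of behavior by trial and error rather than having it hard-coded, which is the unsettling part.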

3. Ruthless robot

The scientists at the Laboratory of Intelligent Systems put a group of robots in a room with predetermined sources of "food" and "poison." Machines earned points for staying close to the "food" and lost points if they approached sources of "poison." All the machines in the experiment were fitted with small blue lights that flashed erratically, as well as a camera sensor that detected the light from the other robots' lamps.

The robots were able to turn off their lights if needed. Once the experiment began, it did not take long for the robots to realize that the largest concentration of blue lights was where the other robots congregated, that is, next to the "food." In effect, by blinking their lights, the robots were showing competitors where the correct source was located.

After several phases of the experiment, almost all of the robots had turned off their "beacons," refusing to help each other. And that was not the only outcome: some bots even managed to lure competitors away from the "food" by blinking more intensely elsewhere.
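The selection pressure here can be sketched in a few lines. This toy model (with invented payoffs, not the lab's actual setup) shows why the light-off strategy takes over: a lit beacon attracts rivals, which costs the signaller points, and selection does the rest.

```python
# Toy selection model: signalling robots attract rivals to their food and
# lose points; non-signallers don't. Truncation selection then eliminates
# the signalling "gene" within a couple of generations.

FOOD_POINTS = 10      # points for reaching the food
CROWDING_COST = 6     # points a lit robot loses to the rivals it attracted

def fitness(signals):
    return FOOD_POINTS - (CROWDING_COST if signals else 0)

def next_generation(population):
    """Truncation selection: the fitter half of the population reproduces."""
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: len(ranked) // 2]
    return survivors * 2  # each survivor leaves two offspring

# Start with half the robots signalling (True = light on).
population = [True] * 10 + [False] * 10
for _ in range(3):
    population = next_generation(population)

print(sum(population))  # 0: no signallers left after a few generations
```

The real experiment used evolving neural controllers rather than a single signalling bit, but the payoff asymmetry that drives the result is the same.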

2. Supercomputer with imagination

Among the many projects of the Google company, which will without a doubt one day put an end to our civilization, there is one that stands out: a self-learning computer with a simulated neural network.

In an experiment, this supercomputer was given free access to the Internet and the ability to examine the contents of the network. There were no restrictions or guidelines; the powerful super-intelligence was simply allowed to explore the whole of human history and experience. And what do you think the supercomputer chose out of all this wealth of information? It began browsing through images of kittens.

Yes, as it turned out, we all use the Internet the same way, whether we are human beings or high-tech digital intelligences. A little later, Google discovered that the computer had even developed its own concept of what a kitten should look like, independently generating an image with an analogue of our cerebral cortex, based on the photographs it had seen earlier.

1. Robot prophet

"Nautilus" is another self-learning supercomputer. It was fed millions of newspaper articles dating back to 1945, indexed on two criteria: the tone of the coverage and the location. Using this wealth of information about past events, the computer was asked to suggest what would happen in the "future," and its guesses turned out to be surprisingly accurate. How accurate? Well, for example, it located Bin Laden.

The same task took the U.S. government and its allies 11 years, two wars, two presidents and billions of dollars. The "Nautilus" project took far less: all it did was analyze the news coverage of the terrorist leader and connect the dots on his probable whereabouts. As a result of its analysis, "Nautilus" narrowed the search to a 200-km zone in northern Pakistan, where Osama's refuge was in fact discovered.

The "Nautilus" experiment was retrospective in nature: the computer was asked to predict events that had already happened. Now scientists are contemplating letting the machine predict genuinely future events.
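For flavour, the "connecting the dots" step could be as crude as tallying which places co-occur with the target across news snippets and taking the most frequent one. The snippets and place list below are invented for illustration; the real Nautilus analysis of tone and location was far more elaborate.

```python
# Toy sketch of location inference by co-mention counting: for every news
# snippet that mentions the target, count which known places appear
# alongside, then return the most frequently co-mentioned place.
from collections import Counter

PLACES = ["Abbottabad", "Kandahar", "Tora Bora"]

def likely_location(snippets, target):
    counts = Counter()
    for text in snippets:
        if target in text:
            for place in PLACES:
                if place in text:
                    counts[place] += 1
    return counts.most_common(1)[0][0] if counts else None

snippets = [
    "Sources place Bin Laden near Abbottabad in northern Pakistan.",
    "Bin Laden rumoured to have left Tora Bora years ago.",
    "Courier linked to Bin Laden seen around Abbottabad.",
]
print(likely_location(snippets, "Bin Laden"))  # Abbottabad
```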

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby DrEvil » Sun Jul 21, 2013 5:48 pm

Microsoft Kinect used to live-translate Chinese sign language into text

Researchers from the Chinese Academy of Sciences (CAS) have used a Microsoft Kinect to live-translate Chinese sign language into text.

The work, a collaboration between the CAS Institute of Computing Technology and Microsoft Research Asia, could be vital to helping deaf and non-deaf people communicate with each other.


http://arstechnica.com/business/2013/07 ... into-text/
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 4158
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby justdrew » Sun Jul 21, 2013 5:56 pm

justdrew » 10 Jul 2013 00:20 wrote:
http://en.wikipedia.org/wiki/Schema_theory
http://en.wikipedia.org/wiki/Social_cognition

Note that in the 1st video I posted, the fellow states that they had Voice Recognition (speech to text) working at nearly 100% (7 years ago).

If you have a DB of phone calls and know the participants, you can develop a training file for each person's voice. So you'd be able to have the software convert phone calls to text with high reliability, and even recognize otherwise unidentified voices (say, someone using a payphone).

So what?

Well, one thing this would let you do is develop a map of each individual's cognitive schemas and their particular associational map.

Which would enable you to (quite possibly enable your software to) write a flexible script that could be used to prime, actively exploit the mere-exposure effect, evoke desired schema associations, and generally lead the conversant wherever you want them to go... with some degree of likelihood. A salesman's dream come true.

but more than salesmen would find it handy.

very potent personalized push-polling for instance.

Certainly the existence of the mere-exposure effect dictates that the content of TV and movies MUST be regulated.

How much of that regulation is Controlled/Dictated vs arising naturally from "market forces" is an open question. Certainly the Market Forces are also regulated and this is likely the primary point for control, you know, Nielsen ratings and all that crap.



Another wonderful thing to look for is if/when the thinking-machines (not "free" thinking; these are just, for now, software doing 'cognitive computations') with access to "the personal data system" (that network of loci that store info about, or submitted by, individuals) become increasingly able to PROVIDE DIRECT STIMULUS to individuals.

This will certainly initially take the form of individually customized "coupons" sent to people. Does Joe Blow need a little reward? His phone beeps and he's got a coupon for a free {Favorite Food Item}. In time the available array of stimuli that can be applied will expand and differentiate. So it'll be possible to do positive conditioning as well as the current negative (punishment for law breaking). Though the negative conditioning will no doubt expand as well. If you're speeding, expect your bank account to automatically be dinged. If intoxication is detected, expect the car to pull over and park itself, locking you in while police are en-route.

I predict, the Next-Facebook/Next-Google will be a company that is able to effectively push such positive stimuli to consumer attention.

In fact I'd be a damn fool to not start it up right now and get bought out ASAP. I may indeed be a damn fool though. Who wants to go at it with me? With luck we could fuck it up in such a way that it's delayed for a generation :evilgrin

Any bogus Patent Experts in the house?

http://en.wikipedia.org/wiki/Felicific_calculus


and another thing...

given access to "the personal data system" (internet posts, phone conversations, text msgs, etc), a significant amount of automated psychological analysis and categorization could be done, based on individuals' responses to characters and events in various fictional universes, particularly the ones with an almost "one of each type" cast of characters (Harry Potter, Game of Thrones, reality-TV programs, etc.), using concepts from projective psychotherapy.

"The Machines of Loving Grace will know you better than you know yourself"
By 1964 there were 1.5 million mobile phone users in the US
User avatar
justdrew
 
Posts: 11966
Joined: Tue May 24, 2005 7:57 pm
Location: unknown
Blog: View Blog (11)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Searcher08 » Sun Jul 21, 2013 8:33 pm

This is from the area called Artificial Life:
a program called Langton's Ant, a demonstration of the incredible complexity that can arise from extremely simple rules.
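For anyone who wants to watch it happen, here's a minimal sketch in Python. The whole "organism" is the two rules inside the loop; run it for ten thousand or so steps and the initial chaos resolves into an endlessly repeating "highway".

```python
# Langton's Ant: an ant on an infinite grid of white cells.
# On a white cell: turn right, flip the cell to black, step forward.
# On a black cell: turn left, flip the cell to white, step forward.

def langtons_ant(steps):
    black = set()                    # cells currently black (all start white)
    x, y, d = 0, 0, 0                # position; d: 0=up, 1=right, 2=down, 3=left
    moves = [(0, -1), (1, 0), (0, 1), (-1, 0)]
    for _ in range(steps):
        if (x, y) in black:          # black cell: turn left, flip to white
            d = (d - 1) % 4
            black.remove((x, y))
        else:                        # white cell: turn right, flip to black
            d = (d + 1) % 4
            black.add((x, y))
        x, y = x + moves[d][0], y + moves[d][1]
    return black, (x, y)

black, pos = langtons_ant(11000)
print(len(black), pos)  # thousands of flipped cells from two rules
```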


Re: Artificial Intelligence / Digital life / Skynet megathr

Postby cptmarginal » Thu Jul 25, 2013 2:13 am



Thanks for posting this, it was inspiring on a number of levels... Douglas Rushkoff is one of those people whose work I automatically follow
The new way of thinking is precisely delineated by what it is not.
cptmarginal
 
Posts: 2741
Joined: Tue Apr 10, 2007 8:32 pm
Location: Gordita Beach
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Belligerent Savant » Thu Aug 15, 2013 5:43 pm

.

Image

Japanese roboticist Dr. Hiroshi Ishiguro is building androids to understand humans. One is an android version of a middle-aged family man: himself.

Photo gallery of his androids at: http://www.geminoid.jp/en/robots.html.


The robot, like the original, has a thin frame, a large head, furrowed brows, and piercing eyes that, as one observer put it, seem on the verge of emitting laser beams. The android is fixed in a sitting posture, so it can’t walk out of the lab and go fetch groceries. But it does a fine job of what it’s intended to do: mimic a person.

Ishiguro controls this robot remotely, through his computer, using a microphone to capture his voice and a camera to track his face and head movements. When Ishiguro speaks, the android reproduces his intonations; when Ishiguro tilts his head, the android follows suit. The mechanical Ishiguro also blinks, twitches, and appears to be breathing, although some human behaviors are deliberately suppressed. In particular, when Ishiguro lights up a cigarette, the android abstains.

These robots have been covered many times by major media such as the Discovery Channel, NHK, and the BBC. He has received the Best Humanoid Award four times at RoboCup, and in 2007 the Synectics Survey of Contemporary Genius selected him as one of the top 100 living geniuses.

The idea of connecting a person’s brain so intimately with a remotely controlled body seems straight out of science fiction. In The Matrix, humans control virtual selves. In Avatar, the controlled bodies are alien-human hybrids. In the recent Bruce Willis movie Surrogates, people control robot proxies sent into the world in their places. Attentive viewers will notice that Ishiguro and the Geminoid have cameo roles, appearing in a TV news report on the rapid progress of ”robotic surrogacy.”

Ishiguro’s surrogate doesn’t have sensing and actuation capabilities as sophisticated as those in the movie. But even this relatively simple android is giving Ishiguro great insight into how our brains work when we come face to face with a machine that looks like a person. He’s also investigating, with assistance from cognitive scientists, how the operator’s brain behaves. Teleoperating the android can be so immersive that strange things happen. Simply touching the android is enough to trigger a physical sensation in him, Ishiguro says, almost as though he were inhabiting the robot’s body.

Join us at the GF2045 International Congress to meet Dr. Ishiguro, see his famous geminoid, and learn more about new and amazing technologies in life extension, robotics, prosthetics and brain function from the world's leading scientists.


Image

Elfoid is a cellphone-type teleoperated android that follows the concept of the Telenoid. Its minimal human form and soft, pleasant-to-the-touch exterior are implemented at cellular-phone size. Thanks to its cellphone capability, anyone can easily talk with a person in a remote place while feeling as if the two of them were face to face.


Image

Hugvie is a "human presence" transfer medium that lets users strongly feel the presence of remote partners while interacting with them. Through research and development of other robots, such as the "Telenoid R1" (press release August 2010) and the "Elfoid P1" (press release March 2011), we have found that hugging and holding these robots during an interaction is an effective way to strongly feel the existence of a partner. "Hugvie" is an epoch-making communication medium that can strongly convey the presence of an interaction partner despite its simple shape.
User avatar
Belligerent Savant
 
Posts: 5587
Joined: Mon Oct 05, 2009 11:58 pm
Location: North Atlantic.
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Searcher08 » Fri Aug 16, 2013 8:09 am

Thank you for the Geminoid links!

I found this little video, which explained to me why the Japanese attitude seems so different from the US / Western European one.

Did anyone else sense something akin to 'proto-consciousness' in the Geminoid robot next to him?


Re: Artificial Intelligence / Digital life / Skynet megathr

Postby semper occultus » Fri Aug 16, 2013 11:45 am

User avatar
semper occultus
 
Posts: 2974
Joined: Wed Feb 08, 2006 2:01 pm
Location: London,England
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Searcher08 » Tue Aug 20, 2013 6:40 pm

New project aims to upload a honey bee's brain into a flying insectobot by 2015
Image

Every once in a while, there's news which reminds us that we're living in the age of accelerating change. This is one of those times: A new project has been announced in which scientists at the Universities of Sheffield and Sussex are hoping to create the first accurate computer simulation of a honey bee brain — and then upload it into an autonomous flying robot.

This is obviously a huge win for science — but it could also save the world. The researchers hope a robotic insect could supplement or replace the shrinking population of honey bees that pollinate essential plant life.

Powerful and affordable

Now, while this might sound like some kind of outlandish futurist joke, there are some serious players — and money — involved. Called the "Green Brain Project," it was recently given £1 million (USD $1,614,700) by the Engineering and Physical Sciences Research Council (EPSRC), as well as hardware donations from the NVIDIA corporation.


Specifically, NVIDIA will provide them with high-performance graphics processing units called GPU accelerators. These will allow the researchers to simulate aspects of a honey bee's brain using massively parallel desktop PCs. While this will certainly promote the NVIDIA brand, it will also allow the researchers to conduct their project inexpensively (supercomputer clusters aren't cheap).

And indeed, the researchers are going to need all the computational power they can get; it may appear that insects have simple minds — but their brains can be extremely complex.

Creating autonomy

Now, it should be noted that the researchers aren't trying to emulate a complete honey bee brain, but rather two specific and complex functions within it, namely vision and sense of smell. Once complete, they will upload those models into a robotic honey bee so that it can act autonomously.


"This is an important further advance over current work on brain models because it is becoming more and more clear that an essential aspect of brain function is that the brain is not acting in isolation but in constant interaction with the body and the environment," they note in their proposal, "This concept of 'embodiment' and its consequences for cognition are important insights of modern cognitive science and will become equally important for modern neuroscience."

By isolating and modeling these particular functions, the researchers hope to provide their flying robot with the cognitive power required to perform basic tasks without a set of pre-programmed instructions. It is hoped, for example, that the robotic bee will be able to detect particular odors or gases in the same way that a real bee can identify certain flowers.

To help them with their work, the researchers will collaborate with Martin Giurfa of Toulouse, an expert in all aspects of bee brain anatomy, physiology, and bee cognition and behavior.

Should they be successful, it would mark an important moment in technological history: the first robot brain that can perform complex tasks as proficiently as the animal it's trying to emulate.

More at link!
http://io9.com/5948202/new-project-aims-to-upload-a-honey-bees-brain-into-a-flying-insectobot-by-2015

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby smoking since 1879 » Wed Aug 21, 2013 7:48 pm

What flavour of mind are we going to be making here?
How can we model it on the brain when we still don't understand the brain, despite all the research?

I think I posted this before; if so, apologies.

The divided brain...
"Now that the assertive, the self-aggrandising, the arrogant and the self-opinionated have allowed their obnoxious foolishness to beggar us all I see no reason in listening to their drivelling nonsense any more." Stanilic
smoking since 1879
 
Posts: 509
Joined: Mon Apr 20, 2009 10:20 pm
Location: CZ
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Wombaticus Rex » Fri Aug 23, 2013 9:37 am

Awesome RFI this week from the DARPAcrats:
https://www.fbo.gov/index?s=opportunity ... e&_cview=0

Request for Information (RFI) on Research and Development of a Cortical Processor

DESCRIPTION

The Microsystems Technology Office (MTO) of the Defense Advanced Research Projects Agency (DARPA) seeks information on Cortical Processing technologies and applications that may support a new DARPA program in complex signal processing and data analysis. Although not a neuroscience project per se, it will heavily depend on a variety of neural models derived from the computational neuroscience of neocortex.

Capturing complex spatial and temporal structure in high-bandwidth, noisy, ambiguous data streams is a significant challenge in even the most modern signal/image analysis systems. Current computational approaches are overwhelmingly compute intensive and are only able to extract limited spatial structure from modest quantities of data. Present-day machine intelligence is even more challenged by anomaly detection, which requires recognition of all aspects of a normal signal, in order to determine those parts that do not fit. New approaches, based on high capacity, low power implementations, must be developed.

Approaches today, which include machine learning, Bayesian techniques, and graphical knowledge structures, provide partial solutions to this problem, but are limited in their ability to efficiently scale to larger more complex datasets. They are also compute intensive, exhibit limited parallelism, require high precision arithmetic, and, in most cases, do not account for temporal data. DARPA is examining a new approach based on mammalian neocortex, which efficiently captures spatial and temporal structure and routinely solves extraordinarily difficult recognition problems in real-time. Although a thorough understanding of how the cortex works is beyond current state of the art, we are at a point where some basic algorithmic principles are being identified and merged into machine learning and neural network techniques. Algorithms inspired by neural models, in particular neocortex, can recognize complex spatial and temporal patterns and can adapt to changing environments. Consequently, these algorithms are a promising approach to data stream filtering and processing and have the potential for providing new levels of performance and capabilities for a range of data recognition problems.

DARPA requests information that provides new concepts and technologies for developing a “Cortical Processor” based on Hierarchical Temporal Memory. For the purposes of this request, we use the term Hierarchical Temporal Memory (HTM) to represent a family of cortical processing models, not a single specific algorithm.* Although such algorithms have a number of important characteristics, there are several key features to HTM that would be necessary to a cortical processor, including temporal/spatial recognition, the use of sparse distributed representations (SDR), and a columnar “modular” structure. The processing occurs in a cortical-like hierarchical model that uses spatial and temporal evolution of the data representation to form relationships. SDR in particular is a key component of HTM. Unlike traditional memory representations, SDRs assign meaning to each bit and features are expressed in degrees of similarity by virtue of overlapping characteristics. The cortical computational model should be fault tolerant to gaps in data, massively parallel, extremely power efficient, and highly scalable. It should also have minimal arithmetic precision requirements, and allow ultra-dense, low power implementations.

* Hawkins, J. On Intelligence; with Blakeslee, S.; Times Books: New York, 2004.
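The SDR idea in the RFI is easy to demo in miniature. In this toy (illustrative only, not any particular HTM implementation), a representation is just the set of active bit indices out of a large sparse space, and similarity is bit overlap; the encodings below are invented:

```python
# Toy sparse distributed representations: each concept activates a handful
# of bits out of a large space (say 2048 bits, ~2% active). Because each bit
# carries meaning, similarity is simply the count of shared active bits --
# unlike a dense code, where differing bits tell you nothing in particular.

def overlap(a, b):
    """Similarity between two SDRs = number of shared active bits."""
    return len(a & b)

# Invented encodings: "cat" and "kitten" share many active bits; "truck" few.
cat    = {7, 41, 77, 103, 256, 511, 900, 1337}
kitten = {7, 41, 77, 103, 256, 640, 901, 1500}
truck  = {3, 98, 412, 600, 777, 1024, 1800, 2000}

print(overlap(cat, kitten))  # 5: semantically close
print(overlap(cat, truck))   # 0: unrelated
```

This overlap property is also what makes SDRs fault tolerant, as the RFI asks: dropping a few active bits still leaves enough shared bits for a match.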
User avatar
Wombaticus Rex
 
Posts: 10896
Joined: Wed Nov 08, 2006 6:33 pm
Location: Vermontistan
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby coffin_dodger » Tue Jan 14, 2014 7:55 am

I'm wondering what google are looking to build... :whistling:

Google to buy Nest Labs for $3.2bn
BBC News 13 January 2014

<snip> It produces a thermostat capable of learning user behaviour and working out whether a building is occupied or not, using temperature, humidity, activity and light sensors.

<snip> Google's purchase of Nest Labs follows its acquisition of military robot-maker Boston Dynamics last month and of human-gesture recognition start-up Flutter in October.


An earlier acquisition starting to pay dividends:

Google’s Schaft Robot Dominates Pentagon Contest
Washington Wire Dec 23, 2013
<snip> Google Inc.'s newly acquired Japanese start-up is poised to secure more Pentagon funding to develop a creation capable of venturing into dangerous disaster zones to help humans. Yeeeaahh...riiiiight.
http://blogs.wsj.com/washwire/2013/12/2 ... n-contest/


Oh, look - A.I. as well..



https://plus.google.com/+QuantumAILab/posts

other assorted robo-bits -

Google earlier bought a high-tech face recognition unit called PittPatt
http://www.fastcompany.com/1768963/how- ... ebs-future


Google Acquires Seven Robot Companies, Wants Big Role in Robotics
IEEE spectrum 4 Dec 2013
http://spectrum.ieee.org/automaton/robo ... -companies


Boston Dynamics - military robot maker
Nest labs - activity detection
Flutter - human gesture recognition
Holomni - robotic wheels
Bot & Dolly - robotic cameras
Meka robotics - robots
Industrial Perception - computer vision
+ A.I.

=

Image
User avatar
coffin_dodger
 
Posts: 2216
Joined: Thu Jun 09, 2011 6:05 am
Location: UK
Blog: View Blog (14)
