Artificial Intelligence / Digital life / Skynet megathread

Moderators: Elvis, DrVolin, Jeff

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Grizzly » Mon May 15, 2017 5:23 pm

Elvis » Mon May 15, 2017 11:00 am wrote:
Some things to think about here. Among them—I'd like to see VALCRI applied to the crimes of 9/11. :?


smoking since 1879 » Mon May 15, 2017 9:49 am
... i'll second that, and add the square mile in good olde london town.


Excellent find, and I concur with these thoughts; however, I can't shake the idea that these very same data sets/skills could just as well, PARADOXICALLY, be used to create crimes as well as to solve them. Aka the "who watches the watchers" axiom. Who holds the keys to the gold? It's long been known that most cops, LEOs, what have you, also have the same criminal mindset as the "criminals" they catch.
“The more we do to you, the less you seem to believe we are doing it.”

― Joseph Mengele
User avatar
Grizzly
 
Posts: 4722
Joined: Wed Oct 26, 2011 4:15 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Grizzly » Mon May 15, 2017 5:28 pm

or maybe it's all a roll of the dice.../s

Image
“The more we do to you, the less you seem to believe we are doing it.”

― Joseph Mengele
User avatar
Grizzly
 
Posts: 4722
Joined: Wed Oct 26, 2011 4:15 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Belligerent Savant » Thu Aug 03, 2017 11:34 am

.

Foreboding...

http://www.telegraph.co.uk/technology/2 ... -language/


Facebook shuts down robots after they invent their own language

Facebook shut down a pair of its artificial intelligence robots after they invented their own language.

Researchers at Facebook Artificial Intelligence Research built a chatbot earlier this year that was meant to learn how to negotiate by mimicking human trading and bartering.

But when the social network paired two of the programs, nicknamed Alice and Bob, to trade against each other, they started to learn their own bizarre form of communication.
The chatbot conversation "led to divergence from human language as the agents developed their own language for negotiating," the researchers said.

Facebook's AI language

Bob: i can i i everything else . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to me

Bob: you i everything else . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to

Bob: i . . . . . . . . . . . . . . . . . . .


The two bots were supposed to be learning to trade balls, hats and books, assigning value to the objects then bartering them between each other.

But since Facebook's team assigned no reward for conducting the trades in English, the chatbots quickly developed their own terms for deals.

"There was no reward to sticking to English language," Dhruv Batra, Facebook researcher, told FastCo. "Agents will drift off understandable language and invent codewords for themselves.

"Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands."

After shutting down the incomprehensible conversation between the programs, Facebook said the project marked an important step towards "creating chatbots that can reason, converse, and negotiate, all key steps in building a personalized digital assistant".

Facebook said when the chatbots conversed with humans most people did not realise they were speaking to an AI rather than a real person.

The researchers said it wasn't possible for humans to crack the AI language and translate it back into English. "It’s important to remember, there aren’t bilingual speakers of AI and human languages," said Batra.
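
As a rough illustration of the incentive Batra describes, here is a minimal Python sketch (a toy formulation of my own, not FAIR's actual code; the names negotiation_reward and language_weight are invented) showing how a self-play reward with no weight on staying in English leaves nothing to stop the agents' vocabulary drifting into private codewords.

```python
# Toy sketch, not FAIR's code: the reward for one negotiation episode, with an
# optional term that would anchor the agents' messages to English.

def negotiation_reward(deal_value, english_log_likelihood, language_weight=0.0):
    """deal_value: points scored from the agreed split of balls/hats/books.
    english_log_likelihood: how plausible the agent's utterances are under a
        fixed English language model (higher = more human-like).
    language_weight: 0.0 mirrors the setup described in the article, so the
        optimizer has no reason to keep the messages readable."""
    return deal_value + language_weight * english_log_likelihood

# With language_weight = 0.0, two agents optimizing only deal_value are free to
# repurpose repetition ("to me to me to me ...") as a private shorthand for
# quantities, which is the drift the researchers observed.
```

Raising language_weight above zero, or interleaving supervised training on human dialogues, is the kind of fix that keeps such agents anchored to natural language.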
User avatar
Belligerent Savant
 
Posts: 5214
Joined: Mon Oct 05, 2009 11:58 pm
Location: North Atlantic.
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby smoking since 1879 » Thu Aug 03, 2017 12:06 pm

Belligerent Savant » Thu Aug 03, 2017 4:34 pm wrote:.

Foreboding...

http://www.telegraph.co.uk/technology/2 ... -language/


Facebook shuts down robots after they invent their own language

Facebook shut down a pair of its artificial intelligence robots after they invented their own language.

Researchers at Facebook Artificial Intelligence Research built a chatbot earlier this year that was meant to learn how to negotiate by mimicking human trading and bartering.

But when the social network paired two of the programs, nicknamed Alice and Bob, to trade against each other, they started to learn their own bizarre form of communication.
The chatbot conversation "led to divergence from human language as the agents developed their own language for negotiating," the researchers said.

Facebook's AI language

Bob: i can i i everything else . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to me

Bob: you i everything else . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to

Bob: i . . . . . . . . . . . . . . . . . . .


The two bots were supposed to be learning to trade balls, hats and books, assigning value to the objects then bartering them between each other.

But since Facebook's team assigned no reward for conducting the trades in English, the chatbots quickly developed their own terms for deals.

"There was no reward to sticking to English language," Dhruv Batra, Facebook researcher, told FastCo. "Agents will drift off understandable language and invent codewords for themselves.

"Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands."

After shutting down the incomprehensible conversation between the programs, Facebook said the project marked an important step towards "creating chatbots that can reason, converse, and negotiate, all key steps in building a personalized digital assistant".

Facebook said when the chatbots conversed with humans most people did not realise they were speaking to an AI rather than a real person.

The researchers said it wasn't possible for humans to crack the AI language and translate it back into English. "It’s important to remember, there aren’t bilingual speakers of AI and human languages," said Batra.



it would be so much more satisfying if the researcher's name were "Dhruv Barta" :moresarcasm
"Now that the assertive, the self-aggrandising, the arrogant and the self-opinionated have allowed their obnoxious foolishness to beggar us all I see no reason in listening to their drivelling nonsense any more." Stanilic
smoking since 1879
 
Posts: 509
Joined: Mon Apr 20, 2009 10:20 pm
Location: CZ
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Elvis » Wed Aug 23, 2017 9:56 pm

Walking down the road today I was passed by one of these:

Image
"Mysterious unmarked vans roaming the Bay Area have been linked to Apple, and are likely generating detailed 3D maps for robot cars."


The one I saw was clearly marked "Apple Way," which I later Googled.
(I hate to say it, but it's pertinent to this thread—I've given up on search engines other than Google; they just don't cut it.)

https://maps.apple.com/vehicles/

Apple Maps vehicles

Apple is driving vehicles around the world to collect data which will be used to improve Apple Maps. Some of this data will be published in future Apple Maps updates.

We are committed to protecting your privacy while collecting this data. For example, we will blur faces and license plates on collected images prior to publication. If you have comments or questions about this process, please contact us.

See below for where we’re driving our vehicles next.

Driving Locations for August 14 – August 27



http://mashable.com/2017/06/27/apple-maps-car-could-be-capturing-3d-street-view-map-data-/#9PN3sDwS05qF

Apple is currently driving a tricked-out, super-data-sucking car all over the world, but no one really knows why.

Details about where in the world to find the Apple Maps car are here, but beyond promising that the data will be used to improve Apple Maps (which many people believe needs improving), there’s no information about what Apple is planning or what these cars can do.



From last year—links and more pics at link:
https://www.cultofmac.com/435571/mystery-vans-likely-making-3-d-road-maps-for-apples-self-driving-car/
Mystery vans likely making 3-D road maps for Apple’s self-driving car
By Leander Kahney • 5:30 am, June 30, 2016

Some new data-gathering vehicles are roaming the streets of San Francisco. They’re unmarked, but are suspected to be Apple’s. They are laden with sensors, but what kind of data are they gathering, and what for?

Experts contacted by Cult of Mac say the mystery vans are next-generation mapping vehicles capable of capturing VR-style, 360-degree street photos. Plus, the vans use Lidar to create extraordinarily precise “point clouds,” a prerequisite for self-driving cars. Mesh those two databases together and you’ve laid the groundwork for an autonomous vehicle’s navigation system.

During a drive across the Golden Gate Bridge last weekend, a Business Insider reporter spotted an unmarked Ford van. In April, an identical van was spotted near Apple office buildings in Sunnyvale, Calif. by a Tech Radar reporter. (Note: The vehicles look the same but the plates are different.)

The vans look similar to Apple’s mapping minivans but are a different make (Ford versus Chrysler) and have a different configuration of sensors. And unlike the mapping vehicles, the new ones are unmarked.

It’s an “open secret” in Silicon Valley that Apple is working on a car. It’s likely to be electric like Tesla’s, and may be autonomous. Apple’s so-called Project Titan automotive initiative appears to be quite advanced, employing up to 600 staffers and moving beyond the prototype stage and into the early stages of production. Apple hasn’t confirmed anything, of course, but CEO Tim Cook recently offered a juicy non-denial when asked directly.

While the newly spotted Ford vans are almost certainly not prototype autonomous vehicles, they do appear to be gathering data that will be essential to an autonomous driving project.

It’s not an autonomous vehicle

Image
The wheel encoder and GPS keep track of the vehicle’s movements and provide “ground truth” to the maps being generated by other sensors.


Some have speculated the Ford vans could be a prototype self-driving vehicle, given that Lidar on the roof and other sensors resemble the self-driving vehicles from Google and others.

However, Ryan Eustice, associate professor at the University of Michigan’s Perceptual Robotics Lab, said the combination of sensors on the vans points to a mapping vehicle. The vehicle’s cameras, Velodyne Lidar sensors, wheel encoder and GPS all point to mapping.

Paul Godsmark, chief technology officer of the Canadian Automated Vehicles Centre of Excellence, said the same thing. And self-driving car expert Brad Templeton, who runs the Robocar blog, also agreed. “It’s a mapping/street-scanning car, not a self-driving one.”

Ultra-precise Lidar maps are a prerequisite for autonomous vehicles

The four Velodyne Lidar sensors mounted on each corner are likely generating a "point cloud," an ultra-precise 3-D scan of the road that will be used to navigate self-driving cars.

Even though the vans are almost undoubtedly mere mapping vehicles, the configuration of rooftop equipment is different from Apple’s Maps minivans. Apple’s mapping minivans have four cameras on the corners and two Lidar sensors at the front and back. The new vehicles have four Lidar sensors on the corners and the cameras have been moved from the corners to the sides. Additional cameras pointing upward have been added.

According to Godsmark, the dominance of the Lidar units over the cameras is telling. “There is a good chance that the purpose of the mapping is related to autonomous vehicles in some way,” he said. “The way the Lidar are angled at each corner means that they provide an accurate view of the ground up close to the vehicle.”

Lidar (LIght Detection And Ranging) is a form of laser scanning. It’s similar to radar (RAdio Detection And Ranging) but uses light instead of radio waves to detect objects and their distances. Made by Velodyne, the Lidar sensors on each corner of the Ford van measure precisely the road and its surroundings, including curbs, drainage channels, even potholes — to within millimeters.

The Lidar is likely being used to create “point clouds” — ultra-precise 3-D scans of the road and its surroundings — that are assembled into “prior maps.” A prior map is a rich, 3-D map of the road that’s used by an autonomous vehicle to know precisely where it is at any time. Using onboard Lidar, an autonomous vehicle compares what it’s detecting in real time to a prior map loaded into memory. Point cloud prior maps are so accurate, autonomous vehicles know their road position with millimeter precision. Such data is “key for an autonomous vehicle to navigate safely,” said Godsmark.
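
To make the "compare the live scan to a prior map" step concrete, here is a toy Python sketch (my own illustration with made-up numbers, not Apple's or anyone's production pipeline): candidate vehicle poses are scored by how closely the transformed live scan lands on the stored point cloud, and the best-scoring pose is taken as the vehicle's position.

```python
# Toy 2-D "prior map" localization: score candidate poses by how well the live
# lidar scan, transformed by each pose, lines up with the stored point cloud.
# Production systems use dense 3-D clouds and iterative solvers (ICP/NDT).

import numpy as np
from scipy.spatial import cKDTree

def localize(prior_map_xy, live_scan_xy, candidate_poses):
    """Return the (x, y, heading) candidate whose transformed scan sits closest
    to the prior map, measured by mean nearest-neighbour distance."""
    tree = cKDTree(prior_map_xy)

    def score(pose):
        x, y, heading = pose
        c, s = np.cos(heading), np.sin(heading)
        world = live_scan_xy @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
        return tree.query(world)[0].mean()

    return min(candidate_poses, key=score)

# Made-up example: a straight kerb line as the map, and a scan taken while the
# vehicle is actually 0.2 m to one side of where it thinks it is.
kerb = np.column_stack([np.linspace(0, 50, 500), np.zeros(500)])
scan = np.column_stack([np.linspace(10, 20, 100), np.full(100, -0.2)])
print(localize(kerb, scan, [(0, 0, 0), (0, 0.2, 0), (0, -0.2, 0)]))  # (0, 0.2, 0)
```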

Image
This visualization from Lidar data shows what a 3-D point cloud prior map might look like.
Photo: Alex Kushleyev


360-degree cameras for virtual reality street view

The array of cameras captures a 360-degree, hemispherical view of the road. As in virtual reality, the viewer will be able to look all around them and even upward.

The cameras, on the other hand, appear to be capturing an immersive, virtual-reality-style view of the road. The vans have an array of eight or more cameras positioned to capture a 360-degree hemispherical view of the road.

The vans have six cameras pointing outward — one at the front, one at the back, and two on each side; and two more cameras angled toward the sky.

The camera mounted on front is likely a fisheye.

According to Godsmark, each camera likely has a 60-to-120-degree field of view, and the cameras are aligned at 60-degree angles to each other. When stitched together, they will provide "a 360 degree/hemispherical type virtual reality view of the world," he said.

Godsmark noted that the vans couldn’t use a single virtual reality unit — like the ones used to capture VR movies — as the Lidar units on the van’s corners would obstruct the view. “This arrangement is an efficient way of providing a 360-degree view around the car and allowing the Lidar to achieve 360-degree coverage too.”

Image
This is the camera mounted on the side of the vehicle. It has a pair of cameras aimed sideways (pointed across each other) and a third camera aimed upward.


The combination of Lidar data and 360-degree street views resemble mapping cars from other autonomous driving companies. For example, the test car from Uber’s Advanced Technologies Center in Pittsburgh combines Lidar and cameras; as does a fleet of vehicles from Bosch/TomTom that are currently mapping freeways in Germany.

The Bosch/TomTom vehicles are making multilayered maps for automated driving. They combine Lidar with 360-degree hemispherical cameras, which are clustered atop a pole on top of the vehicles. The multilayered maps meld ultra-precise 3D road information with visual cues about things like lane markers, traffic signs and speed limits.

Image
A fleet of vehicles from Bosch/TomTom are creating multilayered maps for autonomous driving.


Whose vehicle is it?

Super-secretive Apple won’t say if it’s the company behind the recently spotted Ford vans, and the vehicle could belong to any of the numerous Silicon Valley companies exploring self-driving cars.

However, when the Apple Maps minivans sparked feverish speculation, Apple began marking the vans with "Apple Maps" and "maps.apple.com" stickers. The company also posted a web page about the vehicles, with information about their purpose and a list of where they are operating. (The list hasn't been updated in quite a while.)

Godsmark said he had no idea “whatsoever” what organization is operating the vehicles.

“I do find the lack of license plate interesting,” he said, “which reminded me of this old article of yours: ‘Why Steve Jobs’ Mercedes never had a license plate.'”



“The purpose of studying economics is not to acquire a set of ready-made answers to economic questions, but to learn how to avoid being deceived by economists.” ― Joan Robinson
User avatar
Elvis
 
Posts: 7411
Joined: Fri Apr 11, 2008 7:24 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby stefano » Fri Nov 17, 2017 5:00 am



On the plus side it probably can't go very long without plugging in

for now
User avatar
stefano
 
Posts: 2672
Joined: Mon Apr 21, 2008 1:50 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby smoking since 1879 » Fri Nov 17, 2017 7:15 am

stefano » Fri Nov 17, 2017 10:00 am wrote:

On the plus side it probably can't go very long without plugging in

for now


one of these days the robot makers will realise they forgot the articulated toes, then we are in trouble ;)
"Now that the assertive, the self-aggrandising, the arrogant and the self-opinionated have allowed their obnoxious foolishness to beggar us all I see no reason in listening to their drivelling nonsense any more." Stanilic
smoking since 1879
 
Posts: 509
Joined: Mon Apr 20, 2009 10:20 pm
Location: CZ
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby cptmarginal » Fri Jan 12, 2018 10:12 am

https://research.googleblog.com/2018/01 ... ck-on.html

The Google Brain Team — Looking Back on 2017 (Part 1 of 2)

Thursday, January 11, 2018

Posted by Jeff Dean, Google Senior Fellow, on behalf of the entire Google Brain Team

The Google Brain team works to advance the state of the art in artificial intelligence by research and systems engineering, as one part of the overall Google AI effort. Last year we shared a summary of our work in 2016. Since then, we’ve continued to make progress on our long-term research agenda of making machines intelligent, and have collaborated with a number of teams across Google and Alphabet to use the results of our research to improve people’s lives. This first of two posts will highlight some of our work in 2017, including some of our basic research work, as well as updates on open source software, datasets, and new hardware for machine learning. In the second post we’ll dive into the research we do in specific domains where machine learning can have a large impact, such as healthcare, robotics, and some areas of basic science, as well as cover our work on creativity, fairness and inclusion, and tell you a bit more about who we are.

Core Research

A significant focus of our team is pursuing research that advances our understanding and improves our ability to solve new problems in the field of machine learning. Below are several themes from our research last year.

AutoML

The goal of automating machine learning is to develop techniques for computers to solve new machine learning problems automatically, without the need for human machine learning experts to intervene on every new problem. If we’re ever going to have truly intelligent systems, this is a fundamental capability that we will need. We developed new approaches for designing neural network architectures using both reinforcement learning and evolutionary algorithms, scaled this work to state-of-the-art results on ImageNet classification and detection, and also showed how to learn new optimization algorithms and effective activation functions automatically. We are actively working with our Cloud AI team to bring this technology into the hands of Google customers, as well as continuing to push the research in many directions.
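
The evolutionary flavour of that search can be sketched in a few lines. The toy below is entirely my own illustration: an "architecture" is just a tuple of layer widths, and proxy_fitness stands in for the expensive real step of training each candidate network and measuring its validation accuracy.

```python
# Toy evolutionary architecture search: mutate candidate layer-width tuples and
# keep the fittest. Real AutoML systems evaluate fitness by training each child.

import random

def mutate(arch):
    """Randomly widen, narrow, add, or drop one layer."""
    arch = list(arch)
    op = random.choice(["widen", "narrow", "add", "drop"])
    i = random.randrange(len(arch))
    if op == "widen":
        arch[i] *= 2
    elif op == "narrow":
        arch[i] = max(8, arch[i] // 2)
    elif op == "add":
        arch.insert(i, random.choice([16, 32, 64]))
    elif op == "drop" and len(arch) > 1:
        arch.pop(i)
    return tuple(arch)

def proxy_fitness(arch):
    """Stand-in for 'train the network and measure validation accuracy'."""
    capacity = sum(arch)
    return min(1.0, capacity / 512) - 0.0005 * capacity  # prefer smaller models

def evolve(generations=200, population_size=16):
    population = [(32, 32)] * population_size
    for _ in range(generations):
        parents = sorted(population, key=proxy_fitness, reverse=True)[:population_size // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=proxy_fitness)

print(evolve())
```

In the reinforcement-learning variant, a controller network proposes architectures instead of random mutation and is rewarded with the trained child model's accuracy.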



Speech Understanding and Generation

Another theme is on developing new techniques that improve the ability of our computing systems to understand and generate human speech, including our collaboration with the speech team at Google to develop a number of improvements for an end-to-end approach to speech recognition, which reduces the relative word error rate over Google’s production speech recognition system by 16%. One nice aspect of this work is that it required many separate threads of research to come together (which you can find on Arxiv: 1, 2, 3, 4, 5, 6, 7, 8, 9).

Image

Components of the Listen-Attend-Spell end-to-end model for speech recognition


We also collaborated with our research colleagues on Google’s Machine Perception team to develop a new approach for performing text-to-speech generation (Tacotron 2) that dramatically improves the quality of the generated speech. This model achieves a mean opinion score (MOS) of 4.53 compared to a MOS of 4.58 for professionally recorded speech like you might find in an audiobook, and 4.34 for the previous best computer-generated speech system. You can listen for yourself.



Much more at link!

Image

Cloud TPUs deliver up to 180 teraflops of machine learning acceleration


Image

Cloud TPU Pods deliver up to 11.5 petaflops of machine learning acceleration


-

On a side note related to image machine learning, here's some fairly interesting recent research:

A Deep-Dream Virtual Reality Platform for Studying Altered Perceptual Phenomenology - Published online: 22 November 2017

Image
cptmarginal
 
Posts: 2741
Joined: Tue Apr 10, 2007 8:32 pm
Location: Gordita Beach
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Grizzly » Thu Jan 25, 2018 1:12 am

A long-time Google engineer, Steve Yegge, quits, saying the company is '100% competitor focused' and 'can no longer innovate' (cnbc.com)
https://www.reddit.com/r/technology/com ... gge_quits/

Good discussion in comments...

Also see
Better than holograms: A new 3-D projection into thin air
https://apnews.com/c5eeea98b4b2430b979f ... o-thin-air
“The more we do to you, the less you seem to believe we are doing it.”

― Joseph Mengele
User avatar
Grizzly
 
Posts: 4722
Joined: Wed Oct 26, 2011 4:15 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby stickdog99 » Thu Jan 25, 2018 4:16 am

When I first read the Musk Vanity Fair article, I had mixed feelings about it.

But what if the most powerful people on Earth were to succeed in finally creating intelligence in their own image?
stickdog99
 
Posts: 6302
Joined: Tue Jul 12, 2005 5:42 am
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby DrEvil » Wed Apr 18, 2018 8:42 am

https://www.theverge.com/tldr/2018/4/17 ... e-buzzfeed


https://www.youtube.com/watch?time_cont ... Q54GDm1eL0

This video was made with an AI face-swapping tool (FakeApp) that anyone can download and run on their desktop computer. Predictably, its main use is adding celebrity faces to porn, but it can also be used for stuff like this.
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 3971
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Belligerent Savant » Sun May 13, 2018 9:57 pm

.


https://promarket.org/10-years-surveill ... e-illegal/

"The Search Engine Is The Most Powerful Source Of Mind Control Ever Invented..."

Google CEO Sundar Pichai caused a worldwide sensation earlier this week when he unveiled Duplex, an AI-driven digital assistant able to mimic human speech patterns (complete with vocal tics) to such a convincing degree that it managed to have real conversations with ordinary people without them realizing they were actually talking to a robot.

While Google presented Duplex as an exciting technological breakthrough, others saw something else: a system able to deceive people into believing they were talking to a human being, an ethical red flag (and a surefire way to get to robocall hell). Following the backlash, Google announced on Thursday that the new service will be designed “with disclosure built-in.” Nevertheless, the episode created the impression that ethical concerns were an “after-the-fact consideration” for Google, despite the fierce public scrutiny it and other tech giants faced over the past two months. “Silicon Valley is ethically lost, rudderless and has not learned a thing,” tweeted Zeynep Tufekci, a professor at the University of North Carolina at Chapel Hill and a prominent critic of tech firms.

The controversial demonstration was not the only sign that the global outrage has yet to inspire the profound rethinking critics hoped it would bring to Silicon Valley firms. In Pichai’s speech at Google’s annual I/O developer conference, the ethical concerns regarding the company’s data mining, business model, and political influence were briefly addressed with a general, laconic statement: “The path ahead needs to be navigated carefully and deliberately and we feel a deep sense of responsibility to get this right.”

Google’s fellow FAANGs (an acronym for the market's five most popular and best-performing tech stocks, namely Facebook, Apple, Amazon, Netflix and Alphabet’s Google) also seem eager to put the “techlash” of the past two years behind them. Facebook, its shares now fully recovered from the Cambridge Analytica scandal, is already charging full-steam ahead into new areas like dating and blockchain.

But the techlash likely isn’t going away soon. The rise of digital platforms has had profound political, economic, and social effects, many of which are only now becoming apparent, and their sheer size and power makes it virtually impossible to exist on the Internet without using their services. As Stratechery’s Ben Thompson noted in the opening panel of the Stigler Center’s annual antitrust conference last month, Google and Facebook—already dominating search and social media and enjoying a duopoly in digital advertising—own many of the world’s top mobile apps. Amazon has more than 100 million Prime members, for whom it is usually the first and last stop for shopping online.

Many of the mechanisms that allowed for this growth are opaque and rooted in manipulation. What are those mechanisms, and how should policymakers and antitrust enforcers address them? These questions, and others, were the focus of the Stigler Center panel, which was moderated by the Economist’s New York bureau chief, Patrick Foulis.

The Race to the Bottom of the Brainstem

“The way to win in Silicon Valley now is by figuring out how to capture human attention. How do you manipulate people’s deepest psychological instincts, so you can get them to come back?” said Tristan Harris, a former design ethicist at Google who has since become one of Silicon Valley’s most influential critics. Harris, who co-founded the Center for Humane Technology, an organization seeking to change the culture of the tech industry, described the tech industry as an “arms race for basically who’s good at getting attention and who’s better in the race to the bottom of the brainstem to hijack the human animal.”

The proliferation of AI, Harris said, creates an asymmetric relationship between platforms and users. “When someone uses a screen, they don’t really realize they’re walking into an environment where there’s 1,000 engineers on the other side of the screen who asymmetrically know way more about their mind [and] their psychology, have 10 years about what’s ever gotten them to click, and use AI prediction engines to play chess against that person’s mind. The reason you land on YouTube and wake up two hours later asking ‘What the hell just happened?’ is that Alphabet and Google are basically deploying the best supercomputers in the world—not at climate change, not at solving cancer, but at basically hijacking human animals and getting them to stay on screens.”



More at link.
User avatar
Belligerent Savant
 
Posts: 5214
Joined: Mon Oct 05, 2009 11:58 pm
Location: North Atlantic.
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Elvis » Fri Jul 13, 2018 9:33 am

Some mindbending revelations in this Harper's book excerpt about AI.

I'm interested in knowing more about Friedrich Hayek, if anyone is familiar with him. Offhand, I'd say I fundamentally disagree with Hayek's belief in what Theodore Roszak called "the myth of objective consciousness.” But, while humans may never acquire it, what are the implications if an artificial neural network becomes the first truly "objective consciousness"?

Check it out:

Readings — From the July 2018 issue
Known Unknowns

By James Bridle

By James Bridle, from New Dark Age, which was published this month by Verso. Bridle is a writer and artist.


Here’s a story about how machines learn. Say you are the US Army and you want to be able to locate enemy tanks in a forest. The tanks are painted with camouflage, parked among trees, and covered in brush. To the human eye, the blocky outlines of the tanks are indistinguishable from the foliage. But you develop another way of seeing: you train a machine to identify the tanks. To teach the machine, you take a hundred photos of tanks in the forest, then a hundred photos of the empty forest. You show half of each set to a neural network, a piece of software designed to mimic a human brain. The neural network doesn’t know anything about tanks and forests; it just knows that there are fifty pictures with something important in them and fifty pictures without that something, and it tries to spot the difference. It examines the photos from multiple angles, tweaks and judges them, without any of the distracting preconceptions inherent in the human brain.

When the network has finished learning, you take the remaining photos—fifty of tanks, fifty of empty forest—which it has never seen before, and ask it to sort them. And it does so, perfectly. But once out in the field, the machine fails miserably. In practice it turns out to be about as good at spotting tanks as a coin toss. What happened?

The story goes that when the US Army tried this exercise, it made a crucial error. The photos of tanks were taken in the morning, under clear skies. Then the tanks were removed, and by afternoon, when the photos of the empty forest were taken, the sky had clouded over. The machine hadn’t learned to discern the presence or absence of tanks, but merely whether it was sunny or not.
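
The failure is easy to reproduce synthetically. In the sketch below (all numbers invented purely for illustration), the training labels are perfectly confounded with brightness, so a simple classifier scores almost perfectly on data like its training set and falls back toward chance once the weather no longer tracks the labels.

```python
# Synthetic re-creation of the (probably apocryphal) tank story: the classifier
# learns "sunny vs. cloudy" because brightness is confounded with the labels.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

def make_set(confounded):
    labels = rng.integers(0, 2, n)                    # 1 = tank present
    tank_cue = labels + rng.normal(0, 1.5, n)         # weak, noisy tank signal
    if confounded:
        brightness = labels + rng.normal(0, 0.05, n)  # sunny iff tank (training)
    else:
        brightness = rng.uniform(0, 1, n)             # weather varies (field)
    return np.column_stack([brightness, tank_cue]), labels

X_train, y_train = make_set(confounded=True)
X_field, y_field = make_set(confounded=False)

model = LogisticRegression().fit(X_train, y_train)
print("training-style data:", model.score(X_train, y_train))  # near 1.0
print("field data:         ", model.score(X_field, y_field))  # much closer to chance
```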

This cautionary tale, repeated often in the academic literature on machine learning, is probably apocryphal, but it illustrates an important question about artificial intelligence: What can we know about what a machine knows? Whatever artificial intelligence might come to be, it will be fundamentally different from us, and ultimately inscrutable. Despite increasingly sophisticated systems of computation and visualization, we do not truly understand how machine learning does what it does; we can only adjudicate the results.

The first neural network, developed in the Fifties for the United States Office of Naval Research, was called the Perceptron. Like many early computers, it was a physical machine: a set of four hundred light-detecting cells randomly connected by wires to switches. The idea behind the Perceptron was connectionism: the belief that intelligence comes from the connections between neurons, and that by imitating these winding pathways of the brain, machines might be induced to think.
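
Stripped of its photocells and wires, the Perceptron's learning rule is tiny by modern standards. Here is a minimal software rendering (the standard textbook rule, not a reconstruction of Rosenblatt's hardware): nudge the weights toward examples the machine gets wrong, and leave them alone when it is right.

```python
# Minimal perceptron: a weighted sum, a threshold, and an error-driven update.

import numpy as np

def train_perceptron(inputs, targets, epochs=20, lr=0.1):
    """inputs: (n_samples, n_features) array of 0/1 'photocell' readings.
    targets: array of 0/1 class labels."""
    weights = np.zeros(inputs.shape[1])
    bias = 0.0
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            prediction = int(weights @ x + bias > 0)
            error = t - prediction          # -1, 0, or +1
            weights += lr * error * x
            bias += lr * error
    return weights, bias

# Tiny demo: learn the logical AND of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([int(w @ x + b > 0) for x in X])      # -> [0, 0, 0, 1]
```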

One of the advocates of connectionism was Friedrich Hayek, best known today as the father of neoliberalism. Hayek believed in a fundamental separation between the sensory world of the mind—unknowable, unique to each individual—and the “natural,” external world. Thus the task of science was the construction of a model of the world that ignored human biases. Hayek’s neoliberal ordering of economics, where an impartial and dispassionate market directs the action, offers a clear parallel.

The connectionist model of artificial intelligence fell out of favor for several decades, but it reigns supreme again today. Its primary proponents are those who, like Hayek, believe that the world has a natural order that can be examined and computed without bias.

In the past few years, several important advances in computing have spurred a renaissance of neural networks and led to a revolution in expectations for artificial intelligence. One of the greatest champions of AI is Google; cofounder Sergey Brin once said, “You should presume that someday, we will be able to make machines that can reason, think, and do things better than we can.”

A typical first task for testing intelligent systems is image recognition, something that is relatively easy for companies like Google, which builds ever-larger networks of ever-faster processors while harvesting ever-greater volumes of data from its users. In 2011, Google revealed a project called Google Brain, and soon announced that it had created a neural network using a cluster of a thousand machines containing some 16,000 processors. This network was fed 10 million unlabeled images culled from YouTube videos, and developed the ability to recognize human faces (and cats) with no prior knowledge about what those things signified. Facebook, which had developed a similar program, used 4 million user images to create a piece of software called DeepFace, which can recognize individuals with 97 percent accuracy.

Soon this software will be used not only to recognize but to predict. Two researchers from Shanghai Jiao Tong University recently trained a neural network with the ID photos of 1,126 people with no criminal record and 730 photos of convicted criminals. In a paper published in 2016, they claimed that the software could tell the difference between criminal and noncriminal faces—that is, it used photos of faces to make inferences about criminality.

The paper provoked an uproar on technology blogs, in international newspapers, and among academics. The researchers were accused of reviving nineteenth-century theories of criminal physiognomy and attacked for developing a facial recognition method that amounted to digital phrenology. Appalled at the backlash, they responded, “Like most technologies, machine learning is neutral,” and insisted that if machine learning “can be used to reinforce human biases in social computing problems . . . then it can also be used to detect and correct human biases.” But machines don’t correct our flaws—they replicate them.

Technology does not emerge from a vacuum; it is the reification of the beliefs and desires of its creators. It is assembled from ideas and fantasies developed through evolution and culture, pedagogy and debate, endlessly entangled and enfolded. The very idea of criminality is a legacy of nineteenth-century moral philosophy, and the neural networks used to “infer” it are, as we’ve seen, the products of Hayek’s worldview: the apparent separation of the mind and the world, the apparent neutrality of this separation. The belief in an objective schism between technology and the world is nonsense, and one that has very real outcomes.

Encoded biases are frequently found hidden in new devices: cameras unwittingly optimized for Caucasian eyes, say, or light skin. These biases, given time and thought, can be detected, understood, and corrected for. But there are further consequences of machine learning that we cannot recognize or understand, because they are produced by new models of automated thought, by cognitive processes utterly unlike our own.

Machine thought now operates at a scale beyond human understanding. In 2016, the Google Translate system started using a neural network developed by Google Brain, and its abilities improved exponentially. Ever since it was launched in 2006, the system had used a technique called statistical language inference, which compared a vast corpus of similar texts in different languages, with no attempt to understand how languages actually worked. It was clumsy, the results too literal, more often a source of humor than a sophisticated intelligence.

Reprogrammed by Google Brain, the Translate network no longer simply cross-references loads of texts and produces a set of two-dimensional connections between words, but rather builds its own model of the world: a map of the entire territory. In this new architecture, words are encoded by their distance from one another in a mesh of meaning that only the computer can comprehend. While a human can draw a line between the words “tank” and “water” easily enough, it quickly becomes impossible to add the lines between “tank” and “revolution,” between “water” and “liquidity,” and all the emotions and inferences that cascade from those connections. The Translate network’s map does it easily because it is multidimensional, extending in more directions than the human mind can conceive. Thus the space in which machine learning creates its meaning is, to us, unseeable.
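
A hand-made toy gives a feel for "distance in a mesh of meaning." The vectors and dimension labels below are invented for illustration and are nothing like the real Translate embeddings, which have hundreds of opaque dimensions; the point is only that the same word can sit near different neighbours along different directions at once.

```python
# Invented toy word vectors: similarity along different "directions of meaning".

import numpy as np

# dimensions (made up): [container-ness, military-ness, liquid-ness, finance-ness]
vectors = {
    "tank":       np.array([0.9, 0.8, 0.3, 0.0]),
    "water":      np.array([0.2, 0.0, 0.9, 0.1]),
    "revolution": np.array([0.0, 0.7, 0.0, 0.2]),
    "liquidity":  np.array([0.0, 0.0, 0.6, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for w1, w2 in [("tank", "water"), ("tank", "revolution"), ("water", "liquidity")]:
    print(f"{w1:10s} ~ {w2:10s}: {cosine(vectors[w1], vectors[w2]):.2f}")
```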

Our inability to visualize is also an inability to understand. In 1997, when Garry Kasparov, the world chess champion, was defeated by the supercomputer Deep Blue, he claimed that some of the computer’s moves were so intelligent and creative that they must have been the result of human intervention. But we know quite well how Deep Blue made those moves: it was capable of analyzing 200 million board positions per second. Kasparov was not outthought; he was outgunned by a machine that could hold more potential outcomes in its mind.

By 2016, when Google’s AlphaGo software defeated Lee Sedol, one of the highest-ranked go players in the world, something crucial had changed. In their second game, AlphaGo stunned Sedol and spectators by placing one of its stones on the far side of the board, seeming to abandon the battle in progress. Fan Hui, another professional go player watching the game, was initially mystified. He later commented, “It’s not a human move. I’ve never seen a human play this move.” He added: “So beautiful.” Nobody in the history of the 2,500-year-old game had ever played in such a fashion. AlphaGo went on to win the game, and the series.

AlphaGo’s engineers developed the software by feeding a neural network millions of moves by expert go players, then having it play itself millions of times, rapidly, learning new strategies that outstripped those of human players. Those strategies are, moreover, unknowable—we can see the moves AlphaGo makes, but not how it decides to make them.

The same process that Google Translate uses to connect and transform words can be applied to anything described mathematically, such as images. Given a set of photographs of smiling women, unsmiling women, and unsmiling men, a neural network can produce entirely new images of smiling men, as shown in a paper published in 2015 by Facebook researchers.

A similar process is already at work in your smartphone. In 2014, Robert Elliott Smith, an artificial intelligence researcher at University College London, was browsing through family vacation photos on Google+ when he noticed an anomaly. In one image, he and his wife were seated at a table in a restaurant, both smiling at the camera. But this photograph had never been taken. His father had held the button down on his iPhone a little long, resulting in a burst of images of the same scene. In one of them, Smith was smiling, but his wife was not; in another, his wife was smiling, but he was not. From these two images, taken fractions of a second apart, Google’s photo-sorting algorithms had conjured a third: a composite in which both subjects were smiling. The algorithm was part of a package later renamed Assistant, which performs a range of tweaks on uploaded images: applying nostalgic filters, making charming animations, and so forth. In this case, the result was a photograph of a moment that had never happened: a false memory, a rewriting of history. Though based on algorithms written by humans, this photo was not imagined by them—it was purely the invention of a machine’s mind.

Machines are reaching further into their own imaginary spaces, to places we cannot follow. After the activation of Google Translate’s neural network, researchers realized that the system was capable of translating not merely between languages but across them. For example, a network trained on Japanese–English and English–Korean text is capable of generating Japanese–Korean translations without ever passing through English. This is called zero-shot translation, and it implies the existence of an interlingual representation: a metalanguage known only to the computer.

In 2016 a pair of researchers at Google Brain decided to see whether neural networks could develop cryptography. Their experiment was modeled on the use of an adversary, an increasingly common component of neural network designs wherein two competing elements attempt to outperform and outguess each other, driving further improvement. The researchers set up three networks called, in the tradition of cryptographic experiments, Alice, Bob, and Eve. Their task was to learn how to encrypt information. Alice and Bob both knew a number—a key, in cryptographic terms—that was unknown to Eve. Alice would perform some operation on a string of text and send it to Bob and Eve. If Bob could decode the message, Alice’s score increased, but if Eve could also decode it, Alice’s score decreased. Over thousands of iterations, Alice and Bob learned to communicate without Eve cracking their code; they developed a private form of encryption like that used in emails today. But as with the other neural networks we’ve seen, we can’t fully understand how this encryption works. What is hidden from Eve is also hidden from us. The machines are learning to keep their secrets.
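
The scoring structure Bridle describes maps roughly onto the losses in the underlying Google Brain paper (Abadi and Andersen, 2016). The sketch below writes out only the general shape of those losses, with the three networks left abstract, so it illustrates the objective rather than the experiment itself.

```python
# Sketch of the adversarial objectives: Eve minimizes her own reconstruction
# error; Alice and Bob minimize Bob's error plus a term that is smallest when
# Eve does no better (and no worse) than random guessing.

import numpy as np

def bit_distance(recovered, plaintext):
    """Number of effectively wrong bits, with messages encoded as +/-1."""
    return np.sum(np.abs(recovered - plaintext)) / 2.0

def eve_loss(eve_guess, plaintext):
    return bit_distance(eve_guess, plaintext)

def alice_bob_loss(bob_guess, eve_guess, plaintext):
    n = plaintext.size
    bob_term = bit_distance(bob_guess, plaintext)
    # Quadratic penalty centred on "Eve gets half the bits wrong": being
    # reliably wrong would leak information just as surely as being right.
    eve_term = ((n / 2 - bit_distance(eve_guess, plaintext)) ** 2) / (n / 2) ** 2
    return bob_term + eve_term

# Made-up 8-bit example: Bob decodes perfectly, Eve guesses the exact inverse,
# which is penalized just as heavily as a perfect crack.
p = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)
print(alice_bob_loss(bob_guess=p, eve_guess=-p, plaintext=p))
```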

Isaac Asimov’s three laws of robotics, formulated in the Forties, state that a robot may not injure a human being or allow a human being to come to harm, that a robot must obey the orders given it by human beings, and that a robot must protect its own existence. To these we might consider adding a fourth: a robot—or any intelligent machine—must be able to explain itself to humans. Such a law must intervene before the others. Given that it has, by our own design, already been broken, so will the others. We face a world, not in the future but today, where we do not understand our own creations. The result of such opacity is always and inevitably violence.

When Kasparov was defeated by Deep Blue, he left the game in disbelief. But he channeled his frustration into finding a way to rescue chess from the dominance of machines. He returned a year later with a form of chess he called Advanced Chess.

In Advanced Chess, a human and a computer play as a team against another human-computer pair. The results have been revolutionary, opening up fields and strategies of play previously unseen in the game. Blunders are eliminated, and the human players can analyze their own potential movements to such an extent that it results in perfect tactical play and more rigorously deployed strategic plans.

But perhaps the most extraordinary outcome of Advanced Chess is seen when human and machine play against a solo machine. Since Deep Blue, many computer programs have been developed that can beat humans with ease. But even the most powerful program can be defeated by a skilled human player with access to a computer—even a computer less powerful than the opponent. Cooperation between human and machine turns out to be a more potent strategy than trusting to the computer alone.

This strategy of cooperation, drawing on the respective skills of human and machine rather than pitting one against the other, may be our only hope for surviving life among machines whose thought processes are unknowable to us. Nonhuman intelligence is a reality—it is rapidly outstripping human performance in many disciplines, and the results stand to be catastrophically destructive to our working lives. These technologies are becoming ubiquitous in everyday devices, and we do not have the option of retreating from or renouncing them. We cannot opt out of contemporary technology any more than we can reject our neighbors in society; we are all entangled. To move forward, we need an ethics of transparency and cooperation. And perhaps we’ll learn from such interactions how to live better with these other entities—human and nonhuman—that we share the planet with.




https://harpers.org/archive/2018/07/known-unknowns/
“The purpose of studying economics is not to acquire a set of ready-made answers to economic questions, but to learn how to avoid being deceived by economists.” ― Joan Robinson
User avatar
Elvis
 
Posts: 7411
Joined: Fri Apr 11, 2008 7:24 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby guruilla » Fri Jul 13, 2018 3:23 pm

I haven't kept on this thread but, based on the title, this is relevant:

Keynote speaker Martine Rothblatt, PhD, Co-CEO of United Therapeutics, delivered a virtual keynote address based on her books From Transgender to Transhuman and Virtually Human: The Promise—and the Peril—of Digital Immortality, in which she lays out her vision for a future in which gender dimorphism becomes obsolete, human bodies become optional, and human consciousness has the potential to become immortal through advancements in artificial intelligence. The title of her talk is "From Transgender to Transhuman to Virtually Human."


It is a lot easier to fool people than show them how they have been fooled.
User avatar
guruilla
 
Posts: 1460
Joined: Mon Dec 13, 2010 3:13 am
Location: Canada
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby dada » Fri Jul 13, 2018 11:18 pm

Funny, this idea that 'immortal' means a really long time. Immortality, not being mortal, by my reckoning, is timelessness. Meaning outside of time.

The technocratic dream is to extend the duration of consciousness in mortal time. That's all well and good, but it doesn't have quite the same divine ring to it as immortality. Technocrats don't want to appear banal to the meatspacers, it might undermine their worshipful status. Calling it immortality sounds profound.

From there, it's a short step to becoming hypnotized by their own bullshit. And if they're 'successful' in their quest for immortality in time, it will mean achieving the opposite, getting stuck in time. Immortality continues to elude the poor technocrat.

I'm reminded of the picture of a cat in a bottle, with the caption: "Cat having worked very hard to get somewhere, now wondering where it is he really got."
Both his words and manner of speech seemed at first totally unfamiliar to me, and yet somehow they stirred memories - as an actor might be stirred by the forgotten lines of some role he had played far away and long ago.
User avatar
dada
 
Posts: 2600
Joined: Mon Dec 24, 2007 12:08 am
Blog: View Blog (0)
