Artificial Intelligence / Digital life / Skynet megathread

Moderators: Elvis, DrVolin, Jeff

Re: Artificial Intelligence / Digital life / Skynet megathread

Postby Elvis » Thu Jul 26, 2018 9:16 pm

Maybe Rekognition™ just knows a criminal type when it sees it... :mrgreen:


Facial Recognition Software Wrongly Identifies 28 Lawmakers As Crime Suspects

July 26, 2018, 6:42 PM ET
Sasha Ingber


Facial recognition software sold by Amazon mistakenly identified 28 members of Congress as people who had been arrested for crimes, the American Civil Liberties Union announced on Thursday.

Amazon Rekognition has been marketed as a tool that provides extremely accurate facial analysis through photos and video.

The ACLU tested that assertion by using the software to scan photos of every current member of the House and Senate in a database that the watchdog built from thousands of publicly available arrest photos.

"The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country," the ACLU stated.

The test misidentified people of color at a disproportionate rate — they accounted for 39 percent of the false matches even though they make up only 20 percent of Congress. One member falsely cited as a crime suspect was Rep. John Lewis, D-Ga., who first came to prominence as a civil rights leader.

As part of the test, the ACLU said it used Amazon's default match settings.

But a spokeswoman for Amazon Web Services said in an emailed statement that the ACLU should have changed those settings — and used a higher "threshold," or percentage that measures how confident Rekognition is in finding a match.

"While 80% confidence is an acceptable threshold for photos of hot dogs, chairs, animals, or other social media use cases, it wouldn't be appropriate for identifying individuals with a reasonable level of certainty," she said. For law enforcement, Amazon "guides customers" to set the threshold at 95 percent or higher.
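
For reference, the "threshold" the spokeswoman describes is a single parameter in the Rekognition API, and it defaults to 80 when omitted. A minimal sketch, assuming boto3's Rekognition client; the collection name "mugshots" and the image bytes below are invented placeholders, not anything from the ACLU test:

```python
# Sketch: raising Rekognition's match threshold from the 80-percent
# default to Amazon's recommended 95 for law enforcement use.
# "mugshots" and the image bytes are placeholders; a live call would
# need AWS credentials and an indexed face collection.

def build_search_request(image_bytes, collection_id="mugshots", threshold=95.0):
    """Build the parameters for rekognition.search_faces_by_image.

    FaceMatchThreshold defaults to 80 if omitted (the setting the
    ACLU test used); Amazon recommends 95 or higher for policing.
    """
    return {
        "CollectionId": collection_id,
        "Image": {"Bytes": image_bytes},
        "FaceMatchThreshold": threshold,  # percent confidence required
        "MaxFaces": 5,
    }

params = build_search_request(b"\x00", threshold=95.0)
# A live query would then be:
#   import boto3
#   client = boto3.client("rekognition")
#   response = client.search_faces_by_image(**params)
print(params["FaceMatchThreshold"])
```
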

ACLU of Northern California attorney Jacob Snow responded to that comment in an emailed statement: "We know from our test that Amazon makes no effort to ask users what they are using Rekognition for," he said.

Snow doesn't think that changing the threshold changes the danger: "Face surveillance technology in the hands of government is primed for abuse and raises grave civil rights concerns."

Outcry from privacy and civil rights groups has not stopped law enforcement from pursuing the technology. The Orlando, Fla., police force tested Rekognition's real-time surveillance. The Washington County Sheriff's Office, near Portland, Ore., has used it to search faces from photos of suspects taken by deputies.

"This is partly a result of vendors pushing facial recognition technology because it becomes another avenue of revenue," Jeramie Scott, national security counsel at the Electronic Privacy Information Center in Washington, D.C., told NPR. He compared facial recognition software to body cameras worn by law enforcement, which can be used for police accountability or, increasingly, public surveillance.

He stressed the need for debate so that the technology doesn't become a poor solution for bad policy. "Because of the disproportionate error rate, and because of the real risk of depriving civil liberties posed by facial recognition technology, we need to have a conversation about how and when and under what circumstances this technology should be used by law enforcement, if at all."

https://www.npr.org/2018/07/26/63272423 ... e-suspects

“The purpose of studying economics is not to acquire a set of ready-made answers to economic questions, but to learn how to avoid being deceived by economists.” ― Joan Robinson

Re: Artificial Intelligence / Digital life / Skynet megathread

Postby 82_28 » Fri Jul 27, 2018 4:01 am

Whoa. They spell recognition with a K? Damn.
There is no me. There is no you. There is all. There is no you. There is no me. And that is all. A profound acceptance of an enormous pageantry. A haunting certainty that the unifying principle of this universe is love. -- Propagandhi

Re: Artificial Intelligence / Digital life / Skynet megathread

Postby Pele'sDaughter » Fri Jul 27, 2018 7:48 am

Facial recognition software sold by Amazon mistakenly identified 28 members of Congress as people who had been arrested for crimes, the American Civil Liberties Union announced on Thursday.


Maybe it simply misunderstood the task and identified 28 members of Congress as people who SHOULD HAVE BEEN arrested for crimes. :twisted:
Don't believe anything they say.
And at the same time,
Don't believe that they say anything without a reason.
---Immanuel Kant

More AI Nightmare Scenarios

Postby JackRiddler » Thu Feb 14, 2019 4:35 pm

.

Here is a piece on several of the most current emergent-AI nightmare scenarios. The most obvious one is that unstoppable swarms of autonomous killer drones are now a technology available to middling budgets. We are so seriously fucked. Then there are the latest strides toward total all-the-time surveillance and processing, literally of your heartbeat and feelings at a remove; racist algorithms sending people to prison; computers already writing simple news reports in real time, making journalists further superfluous; sexbot abuse; and "deepfakes" in video and other media that make the Turing test obsolete as any kind of standard. I do wonder if I am foolishly arguing with such entities on this very board!

If anything, he's underplaying, especially with his easy dismissal of other well-known scenarios that may currently seem infeasible, but are nevertheless firmly seen as desirable by the large numbers of people working to realize them. I was similarly disappointed this week by Dean Baker's facile dismissal (at CEPR and Counterpunch) of "robots taking all the jobs," his misguided reduction of this to "productivity growth," and his boosterism of it as if it's always a good thing. Baker isn't considering why so much capital and hype is being invested in such visions, despite his conventional wisdom. Metz understands the importance of what influential people want, even if it's not yet or may never be technologically ready. This is worth reading, if you want a special case of despair, and for the advice: the worst is averted only with a total shift in social values. Technocratic measures are not going to keep these trends from developing in all the wrong directions. (See the kind of shit Sam Harris wants, below.)

https://www.currentaffairs.org/2018/11/ ... telligence


What You Have To Fear From Artificial Intelligence

by Ryan Metz

These days, advancements in artificial intelligence are not only making rich people billions of dollars, but inspiring wild-eyed fear-mongering about the end of civilization. Those concerned include Elon Musk, who has said that the technology could eventually produce an “immortal dictator,” and the late Stephen Hawking, who warned that the sudden explosion of artificial intelligence could be “the worst event in the history of our civilization.” Generally, the fear is that we will produce machines so intelligent that they are capable of becoming smarter and smarter until we no longer have control over them. They will become a new form of life that will rule over us the way we do the rest of the animal kingdom.

As a professional in the AI industry, I can tell you that given the state of the technology, most of these predictions take us so far into the future that they’re closer to science fiction than reasoned analysis. Before we get to the point where computers have an unstoppable “superintelligence,” there are much more pressing developments to worry about. The technology that already exists, or is about to exist, is dangerous enough on its own.

Let me focus on some real-world developments that are terrifyingly immediate. Of the many kinds of artificial neural networks (algorithms modeled on a rough approximation of how groups of neurons in your brain operate, and which make up much of what is commonly called AI), I will focus on two: Generative Adversarial Networks (GANs) and Recurrent Neural Networks (RNNs).

GANs are good at making counterfeit images, and thus videos as well. A GAN is made up of two neural networks that have each been “trained” on what a certain thing looks like, like a bathroom or an animal or a person of a certain identity. When the training is complete, one network is told to start generating new images of the thing on its own. The other network is presented with a stream of these counterfeit images with real images interspersed and tries to guess which are fakes. Each network's successes and failures are scored automatically, since the system always knows which images were real and which were generated. Each then adjusts itself to try to do better, and they push each other to greater and greater heights of success. RNNs work with data that exists as an ordered sequence, such as a record of daily high temperatures in a city, or the words in a paragraph. Processing and generating written and spoken communication are two of the tasks RNNs are most commonly used for.
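
The two-network loop described above can be sketched in miniature. What follows is a deliberately toy illustration in plain Python, not any real system: the "generator" is a single number (the mean of a Gaussian), the "discriminator" is a one-feature logistic regression, and each updates against the other in the adversarial pattern the paragraph describes.

```python
# Toy 1-D "GAN": real data ~ N(4, 1); the generator learns a mean.
# Everything here is schematic -- real GANs use deep networks and
# image data, but the adversarial loop has the same shape.
import math
import random

random.seed(0)

g_mean = 0.0          # generator's single parameter
d_w, d_b = 0.0, 0.0   # discriminator: logistic regression p(real | x)

def d_prob(x):
    """Discriminator's estimated probability that x is a real sample."""
    return 1.0 / (1.0 + math.exp(-(d_w * x + d_b)))

lr = 0.05
for step in range(4000):
    x_real = random.gauss(4.0, 1.0)           # a genuine sample
    x_fake = g_mean + random.gauss(0.0, 1.0)  # the generator's counterfeit

    # Discriminator step: push p(real) toward 1 on real, 0 on fake.
    for x, label in ((x_real, 1.0), (x_fake, 0.0)):
        p = d_prob(x)
        d_w += lr * (label - p) * x
        d_b += lr * (label - p)

    # Generator step: move g_mean so fakes look more "real" to D
    # (gradient ascent on log D(fake) with respect to the mean).
    p = d_prob(x_fake)
    g_mean += lr * (1.0 - p) * d_w

print(f"generator mean after training: {g_mean:.2f} (real mean is 4.0)")
```

At equilibrium the generator's samples are distributed like the real data and the discriminator can no longer tell them apart, which is exactly the failure mode that makes GAN counterfeits dangerous.
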

A computer program that can generate convincing images, or another that can understand human speech and generate it, might not seem world-shaking. But as these “counterfeiters” steadily improve, the implications are immense. GANs can produce photorealistic images and videos of nonexistent people. Magazines and advertisers can simply replace real people with generated pictures, saving money on photo shoots which require lighting, sets, technicians, photographers, and models. Stock photos will no longer be of people pretending to be students, professionals, workmen, etc. They will be computers pretending to be people. Many of the images you see on the internet will be of people who literally do not exist. If that sounds implausible, realize that it’s just another small step in the kind of fakery that occurs already through Photoshop and CGI. It just means that instead of starting with a photo, you can start by asking the computer to generate one.

In the 2002 film Simone, Al Pacino plays a film producer who creates a fictitious CGI actress to avoid the personality conflicts that come with shooting real live humans. “Simone” develops a popular following and wins two Academy Awards, and when Pacino can’t produce her in person he is arrested for her murder. When Simone came out, it received mixed reviews, the critical consensus being that “the plot isn’t believable enough to feel relevant.” I can assure you, it’s now relevant. It’s possible that fashion designers will soon get their “perfect model”—a woman with body proportions that would make it physically impossible to stand upright or even stay alive, like the original Barbie doll. They won’t need photo-editing tricks, and they won’t need to force young women to starve themselves. (Though undoubtedly the literally impossible “beauty” of the resulting images will lead to plenty more instances of eating disorders.) Why would anyone hire real people, when artificially-generated replicas are just as realistic, far more flexible, and don’t ask to get paid?

If you think “fake news” is a problem now, just wait. When an image can be generated of literally anyone doing literally anything with perfect realism, truth is going to get a whole lot slipperier. The videos will soon catch up to the images, too. Already, it’s possible to make a moderately convincing clip that puts words in Barack Obama’s mouth. Fake security camera footage, fake police body camera footage, fake confessions: We are getting close. Marco Rubio has worried that “a foreign intelligence agency could use the technology to produce a fake video of an American politician using a racial epithet or taking a bribe” or a “fake video of a U.S. soldier massacring civilians overseas.” More worrying is what the U.S. military and police forces could do with it themselves. It didn’t take much deception to manipulate the country into supporting the invasion of Iraq. Fake intelligence is going to become a whole lot more difficult to disprove.

[Image: GAN-generated faces. These people do not actually exist.]

AI-generated images and videos are not just going to cast doubt on reporting, but will pose a major challenge for the legal system. Photographic evidence in trials will always be in doubt once generated images can’t be distinguished from real ones by human experts or other AIs. They can also be used as alibis, with claims that the real images are the counterfeit ones. In this dizzying world of forgery and illusion, how is anyone going to know what to believe? So-called “deepfake” videos will make Donald Trump’s claims of “fake news” that much more plausible and difficult to counter.

Mimicking ordinary human speech is becoming a cinch. Google recently unveiled a new AI assistant that can talk like a person. It even puts “ums” and “uhs” where they need to go. Called Duplex, it can run on a cell phone, and not only sounds like a human but can interact like one. Duplex’s demo used it to call a hair salon and make an appointment. The woman on the line had no idea she wasn’t talking to a person. Google says it is building Duplex “to sound natural, to make the conversation experience comfortable.”

Imagine how tomorrow’s technology could have worked in 2016. Two days before the election, a video appears, showing Hillary Clinton muttering “I can’t believe Wisconsin voters are so stupid,” supposedly caught on a “hot mike” at a rally in Eau Claire. It circulates on Facebook through the usual rightwing channels. Clinton says she never said it, and she didn’t. It doesn’t matter. It’s impossible to tell it’s fake. The fact-checkers look into it, and find that there never was an event in Eau Claire, and that Clinton had never even been to Wisconsin. It doesn’t matter. By that time, the video is at 10 million shares. The “Wisconsin can’t believe you’re so stupid” shirts are already being printed. Clinton loses, Trump becomes president. Catastrophe.

*

Of course, there will undoubtedly be some benefits along with the risks. It’s going to be easier than ever to get fresh ideas for remodeling your bathroom, for instance. Designers will begin to use generated images to get new ideas for interior design, clothes, whatever they want. The expanded power of filmmakers, artists, and game designers will certainly open up new creative possibilities.

If we’re cynical, we might even rather like the idea of sowing endless reasonable doubt and undermining the U.S. legal system. After all, police officers already aren’t punished when they’re caught on film murdering people. Technology could, in certain ways, act as an equalizer.

But the state may also be empowered in incredibly invasive ways. AI will be used to improve “lie detection,” which even if it doesn’t work may dazzle judges enough to be accepted as reliable. If this seems far-fetched, realize that something similar is already being deployed. There is a machine learning algorithm being used by judges to predict whether or not a person convicted of a crime will commit more in the future. It is being used in sentencing and setting bond. For the most part, it is about as accurate as randomly guessing, except that it is prejudiced against black people.

This particular dystopian prospect has a solution, one with the advantage of being simple and easily understood by the public: Ban the use of AI in courtrooms and police interrogations entirely. But that depends on having reasonable people setting policy, and some will actively push for the expansion of AI in criminal justice. Sam Harris has gone further and looked forward to a time in which human society at large features “zones of obligatory candor” and “whenever important conversations are held, the truthfulness of all participants will be monitored.” One great fantasy of the authoritarian mind has been a machine that could determine the real and absolute truth. In the legal system, some will soon believe they have found such a machine, whether or not they actually have.

In language, RNNs are beginning to produce another revolution. Simple online news articles, like reporting on a regular season baseball game, can be produced without human input. The first machine-generated stories were published in 2015 to industry fanfare, and they are already being deployed by the Associated Press and the Washington Post. These articles include properly used idioms and are almost charming in their implementation of U.S. vernacular English. (“Bad news, folks,” begins a sports report.) We can expect the use of “automated journalism” to expand further and further, since it allows publishers to pay even less for content than the already-minimal amount they pay writers.
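
To demystify how a box score becomes a "story," here is a sketch. Note this is not an RNN: it is a much simpler template filler, closer to how early automated sports recaps actually worked. The teams, scores, and phrasing are invented for illustration.

```python
# Not an RNN -- a simple template filler, sketched to show how structured
# game data turns into a readable recap with a vernacular touch.
# Team names, scores, and idioms are invented; ties are omitted for brevity.

def recap(game):
    margin = abs(game["home_score"] - game["away_score"])
    if game["home_score"] > game["away_score"]:
        winner, loser = game["home"], game["away"]
        w_score, l_score = game["home_score"], game["away_score"]
    else:
        winner, loser = game["away"], game["home"]
        w_score, l_score = game["away_score"], game["home_score"]
    # Pick an idiom based on the margin -- the "vernacular English"
    # flourish the article mentions.
    verb = "edged" if margin <= 2 else "cruised past" if margin >= 6 else "beat"
    return f"{winner} {verb} {loser} {w_score}-{l_score} on {game['day']}."

print(recap({"home": "River City", "away": "Bayside",
             "home_score": 9, "away_score": 2, "day": "Sunday"}))
# -> River City cruised past Bayside 9-2 on Sunday.
```

Feed such a system a wire service's structured data and it can emit thousands of passable recaps per day, no reporter required.
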

We’ve all heard about how social media was manipulated in 2016, in part through the use of bots. The “Russian propaganda” that appeared on Facebook was often ludicrous (e.g., a meme of Jesus arm-wrestling with Satan—“SATAN: If I win, Clinton wins! JESUS: Not if I can help it!”) But as the ability to imitate human content improves, it won’t be necessary for Russians to come up with crude imitations of American media. The RNNs can do it, and vast networks of social media accounts run by RNNs will be able to shape narratives and manipulate perceptions. Language processing and generation is one of the areas receiving the most investment at the moment. The bots will improve quickly.

In fact, it’s already pretty easy to trick someone into thinking they’re talking to a fellow human when they’re not. There are some fun examples of this. Nora Reed created a Christian Twitter bot account that successfully trolled New Atheists, got into arguments with them, and had Christians come to its defense. Here’s an excerpt from a genuine chat between “@christianmom18” and some real live human atheists:

@christianmom18 atheists are going to hell

@ElNuevoOtraMio2 why thank you, don’t believe in it though so i’ll just have to get on with life :wink:

@christianmom18 wow

@RichysGames not only is hell not real, but the logic behind the threat of it makes Jesus a terrorist

@christianmom18 check the bible

@RichysGames Yes I know it quite well which is why I know it’s nonsense and the scenario proposed is not one of a savior

@christianmom18 and then what?

@RichysGames Nothing, I live my life and then my atoms continue on through nature after I die […]

@christianmom18 i think God sent you to me to learn the truth

@RichysGames Truth is based upon evidence, not ignorance from bronze age sheep horders

@christianmom18 i am so sad for you

@RichysGames I am living my life, you’re wasting yours because ignorant bronze age idiots wrote a fairytale

@christianmom18 you can find god

@RichysGames Which one? humans have proposed over 3000

@christianmom18 no

Richy continued to talk to her for three hours.

In 1950, Alan Turing developed his famous “Turing test” to measure whether a machine could exhibit intelligent behavior indistinguishable from that of human beings. A machine passes the test if a human evaluator cannot reliably tell the difference between the machine and the human. In my opinion, when internet “rationalists” are being fooled into having arguments with bots, the Turing test has been passed. Note, too, that @christianmom18 wasn’t even run by an RNN. It’s a much simpler algorithm, and yet it is still fooling people. The RNNs used for this form of communication will continue to improve, and at some point prominent commentators and intellectuals may be engaging in discourse with AIs without knowing it. When Ross Douthat has a thorough discussion with a Twitter bot about how we need to return to a past that never was, we’ll know the future has truly arrived.

The Christian mom isn’t the only bot to successfully antagonize men on Twitter. Sarah Nyberg developed a social justice feminist bot that would post statements like “feminism is good,” “patriarchy exists,” “Drudge Report fans are toxic and terrible,” “nothing true is ever said in gamergate” and then, in her words, “watch desperate internet assholes rush to yell at them.” Nyberg’s bot didn’t do much to conceal its true nature. It tweeted every 10 minutes exactly, the only accounts it followed were bait shops, and its handle was @arguetron. But it was “honey for internet jerks” who would “spend hours and hours yelling at it.” @arguetron would reply to every reply with simple statements like “The data disagrees with you,” “Would you like a medal for being so wrong,” “That’s gibberish, try again,” or “You haven’t said anything i haven’t heard 1000 times before from other people who were also wrong.” Yet one InfoWars fan spent almost 10 hours trying to get the last word on feminism and social justice, with indignant comments like “Typical lib, when u can’t prove something you pretend the other side isn’t making sense.”

The good news here is that we may finally have found a solution to the problem of internet reactionaries: deploy a feminism-bot and have them spend their days trying to argue it to death. The bad news here is that when AI passes the Turing test, our ability to tell truth from fiction further erodes.

*

I suppose we should talk about sex. Here, the development of more sophisticated robots is downright creepy.

It is well known that due either to raging misogyny and generally unpleasant personalities, or to social awkwardness and anxiety, there are a large number of heterosexual men who are chronically unable to find partners. Some call themselves “incels,” others merely “lonely.” One solution frequently trotted out is to give them sex robots. (Solutions for those in need don’t change very much, do they? “Let them eat cake” is now “let them fuck robots.”)

This seems a solution more suited to the misogynistic than the anxious. The first company to present a functional sex robot at a consumer tech convention discovered that the men who make up their potential customer base mostly seemed interested in committing sexual violence against women. The robot, “Samantha,” was practically destroyed in a couple of days after being aggressively molested. (After an upgrade, Samantha now has the ability to refuse to engage if she thinks her user is being too aggressive, but one suspects this will make the problem worse rather than better.)

It seems likely that prolonged exposure to a sex robot would render men, especially of this sort, permanently incapable of having healthy sexual relations with a real human woman. Some men have already developed seemingly lifelong attachments to their sex dolls (there is a BBC documentary about them). But perhaps it’s socially beneficial for the type of man who would want a sex robot to be given a sex robot, if it’s the alternative to dysfunctional relationships.

For those lonely hearts who have simply been socially atomized and isolated by neoliberal capitalism and are not raging misogynists, a sex robot is not the answer. For these men, the market will offer artificial girlfriends with full personalities. The movie Her explored this concept, but once again, it’s not especially speculative. There are already unsophisticated girlfriend simulation games that don’t even use AI (e.g., “My Virtual Manga Girl”). And we’ve already seen companies use romance-bots in basic ways. The adultery-facilitation service Ashley Madison immediately contacted new users with a bot posing as an interested woman. Men would have to buy credits from the site to reply to the woman, and they did. Eighty percent of initial purchases came from users trying to message a bot.

Those for whom online dating fails will have ready access to software designed to satisfy emotional, intellectual, and sexual needs. Just combine GAN-generated pornography with video games, and add a fully optimizable personality trained to listen and respond. Users will be able to get GAN-generated photographs of themselves with their partners on vacation to hang on their walls. Men will have pictures of the Canadian girlfriend they met on vacation to show their friends! The company that makes the Fleshlight may even sell custom… well, let’s not finish that sentence.

The audience for such products is obvious in a time of ever-deepening mass loneliness. But it may have especially broad appeal in countries with extremely skewed gender ratios. Between China and India, for instance, there are 70 million more men than women. Some men are simply going to end up unlucky, and many may understandably turn to simulations of love. However nightmarish the idea of replacing human companionship with lifeless consumer products may sound, it may be better than having no available relief for isolation. After all, robotic therapy seals (and other animals) have already been successfully introduced as a way of keeping elderly people company and giving them stimulation. A better solution would be a world in which strong communal bonds and mutual care mean nobody lacks for companionship. But such a world is far off.

*

By far the most serious and most frightening AI development is in military technology: armed, fully autonomous attack drones that can be deployed in swarms and might ultimately use their own judgment to decide when and whom to kill. Think that’s an exaggeration? The Department of Defense literally writes on its websites about new plans to improve the “autonomy” of its armed “drone swarms.” Here’s FOX News, which seems excited about the new developments:

No enemy would want to face a swarm of drones on the attack. But enemies of the United States will have to face the overwhelming force of American drone teams that can think for themselves, communicate with each other and work together in hundreds to execute combat missions…. Say you have a bomb maker responsible for killing a busload of children, our military will release 50 robots – a mix of ground robots and flying drones…Their objective? They must isolate the target within 2 square city blocks within 15 to 30 minutes max… It may sound farfetched – but drone swarm tech for combat already exists and has already been proven more than possible.

The focus here is on small quadcopter drones, designed to be deployed en masse to kill urban civilians, rather than the large Predator drones used to murder entire rural wedding parties in Muslim countries. DARPA’s repulsive Twitter account openly boasts about the plan: “Our OFFSET prgm envisions future small-unit infantry forces using unmanned aircraft systems and/or unmanned ground systems in swarms of >250 robots for missions in urban environment.” The Department of Defense is spending heavily in pursuit of this goal—their 2018 budgetary request contained $457 million for R&D in the technology. Combined with our new $275 million drone base in Niger, the United States is going to have a formidable new capacity to inflict deadly harm using killer robots.

Perhaps more telling, the Department of Defense is also spending heavily on counter-drone systems. They know from experience that other entities will acquire this technology, and that they’ll need to fight back. But while the offensive murder technology is likely to be incredibly effective, the defensive efforts aren’t going to work. Why? Because a swarm of cheap drones controlled by AI is almost unstoppable. Indeed, the DoD counter-drone efforts are pathetic and comically macabre: “The Air Force has purchased shotgun shells filled with nets and the Army has snatched up the Dronebuster, a device used to jam the communications of consumer drones…the Army and Navy are developing lasers to take down drones.” Lord help me, shotgun shells with nets! And if a drone is autonomous, jamming its communications won’t stop it. If you were facing a swarm, jamming would disrupt their coordination and make them less effective, but there would still be hundreds of drones trying to kill you.

It’s ironic, given all the fear that powerful members of the tech industry and government have about killer AI taking over the world, that they are silent as we literally build killer robots. If you don’t want AI to take over, stop the military industrial complex from building autonomous death drones.

An AI-piloted drone is a perfect spying and assassination machine. ISIS has already used them on the battlefield. Venezuela’s Nicolas Maduro recently survived an assassination attempt carried out by drone while he was giving a speech. Two explosive-laden drones blew up near him (there is some dispute about exactly what happened). This is something that should have been far bigger news. It’s not the last we will see of drone murders. Small, inexpensive drones will be able to follow people around and kill them at the first opportunity. (Even more effectively in the “swarms” the U.S. government is proudly developing.) Privacy invasion will be rampant. High-quality cameras and shotgun microphones mounted on drones will be used to spy on politicians, generals, CEOs, and activists (and, of course, the spouses of jealous types). If you piss off the wrong people, you’ll be tailed by a drone until they either lose interest or gain suitable blackmail material.

At Current Affairs, we are supposed to at least try to suggest some solutions to the problems we raise. Well, this one’s tricky. The only real solution is to create a society in which people won’t want to do all that spying and assassinating. The Campaign to Stop Killer Robots is pushing for international agreements to limit the development of autonomous military drones, but this technology is different from anything that came before in that a lot of it is accessible to anyone. The rate of increase in processor power has begun to slow, but it is still increasing, and the cost-to-performance ratio of the specialized chips that make this all possible is still falling just as the rest of computer technology has been doing for decades. If you can scrape together enough cash to buy a gaming PC, you can run neural networks. A tank costs $6 million and you can’t just go buy one. Not so for AI.

There is, however, something positive we can say about these developments. The products of AI labor can be used to take care of everyone’s needs. The automated assistant can reduce the number of harried human beings who have to do other people’s scheduling. Drones, the non-armed kind, can be fun and can take incredible video footage. If we didn’t have a military-industrial complex in which building death robots was profitable, if we didn’t have isolated, angry men who want to rape and kill, if we had an egalitarian society in which people weren’t trying to abuse and exploit each other, then we wouldn’t have anything to fear from the technology itself, because it would help us do good rather than evil. The dystopia is not inevitable. But first we have to recognize what the realistic AI risks actually are, and what they aren’t.

We meet at the borders of our being, we dream something of each other's reality. - Harvey of R.I.

To Justice my maker from on high did incline:
I am by virtue of its might divine,
The highest Wisdom and the first Love.

TopSecret WallSt. Iraq & more

Re: Artificial Intelligence / Digital life / Skynet megathread

Postby JackRiddler » Thu Feb 14, 2019 5:04 pm

Reviewing some past posts here...

dada! Wish you were here!

dada » Fri Jul 13, 2018 10:18 pm wrote: Funny, this idea that 'immortal' means a really long time. Immortality, not being mortal, by my reckoning, is timelessness. Meaning outside of time.


At least dating back to the Greeks, philosophers have made a distinction between the immortal and the eternal. What you are talking about is the eternal, and is also hard or impossible to even imagine properly. It may as well also be the infinite. Immortality by contrast is also theoretical but can be imagined simply as an organism living on, in the same way that we live, for a much longer time than we can currently conceive. We'd still be "us," or whatever being you would become if you continued occupying the same body for several thousand years, but not eternal. We might live forever but we would still be killable. What exactly is the upload of your mind on to another medium desired by the transhumanists? First of all, it's not you, so it's not your immortality. My first guess is that it is possibly something between immortal and eternal, or a third thing altogether, but I have to think about it some more.

The technocratic dream is to extend the duration of consciousness in mortal time. That's all well and good, but it doesn't have quite the same divine ring to it as immortality. Technocrats don't want to appear banal to the meatspacers, it might undermine their worshipful status. Calling it immortality sounds profound.

From there, it's a short step to becoming hypnotized by their own bullshit. And if they're 'successful' in their quest for immortality in time, it will mean achieving the opposite, getting stuck in time. Immortality continues to elude the poor technocrat.

I'm reminded of the picture of a cat in a bottle, with the caption: "Cat having worked very hard to get somewhere, now wondering where it is he really got."


Regardless of terminology, rings so true to me.

.

Now, regarding this:

stefano wrote:

On the plus side it probably can't go very long without plugging in

for now


and this -- specifically the part I have marked in red.

Belligerent Savant » Thu Aug 03, 2017 10:34 am wrote:.

Foreboding...

http://www.telegraph.co.uk/technology/2 ... -language/


Facebook shuts down robots after they invent their own language

Facebook shut down a pair of its artificial intelligence robots after they invented their own language.

Researchers at Facebook Artificial Intelligence Research built a chatbot earlier this year that was meant to learn how to negotiate by mimicking human trading and bartering.

But when the social network paired two of the programs, nicknamed Alice and Bob, to trade against each other, they started to learn their own bizarre form of communication.
The chatbot conversation "led to divergence from human language as the agents developed their own language for negotiating," the researchers said.

Facebook's AI language

Bob: i can i i everything else . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to me

Bob: you i everything else . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to

Bob: i . . . . . . . . . . . . . . . . . . .


The two bots were supposed to be learning to trade balls, hats and books, assigning value to the objects then bartering them between each other.

But since Facebook's team assigned no reward for conducting the trades in English, the chatbots quickly developed their own terms for deals.

"There was no reward to sticking to English language," Dhruv Batra, Facebook researcher, told FastCo. "Agents will drift off understandable language and invent codewords for themselves.

"Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands."
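Batra's "say 'the' five times" example can be sketched in a few lines. This is purely illustrative — the function and token names are invented, not Facebook's code — but it shows how repetition alone can encode quantity once two agents agree on the convention:

```python
from collections import Counter

def decode_repetition(message, items=("ball", "hat", "book")):
    """Read an agent's repeated tokens as quantities: saying 'ball'
    three times is taken to mean 'I want three balls'."""
    counts = Counter(message.split())
    return {item: counts[item] for item in items if counts[item] > 0}

# A cleaned-up exchange might decode like this:
print(decode_repetition("ball ball ball hat hat"))  # {'ball': 3, 'hat': 2}
```

The bots' actual strings ("to me to me to me...") presumably carried a similar kind of count-based shorthand, just one that no human helped design.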

After shutting down the incomprehensible conversation between the programs, Facebook said the project marked an important step towards "creating chatbots that can reason, converse, and negotiate, all key steps in building a personalized digital assistant".

Facebook said when the chatbots conversed with humans most people did not realise they were speaking to an AI rather than a real person.

The researchers said it wasn't possible for humans to crack the AI language and translate it back into English. "It’s important to remember, there aren’t bilingual speakers of AI and human languages," said Batra.



At least they understood to shut it down, but what the fuck is wrong with these people and their PR justifications?!

Are they really so blind to what they are doing?

Again, it's not going to be the machines coming to their own life, just yet, and deciding to turn on us. It's going to be US against US with the help of these machines. Like with Kasparov and his "advanced chess," in which human players go against each other with the assistance of machines (and any pairing of human masters with strong computers can crush any computer alone).

Again, once developed, this stuff becomes obtainable for actors at middle budgets. Soon enough, many of these inventions will be capable of self-reproduction.

It's a lot like nuclear proliferation, but with a lower threshold against use by bad guys of all stripes, including those in power in countries, corporations, and governments.

.
We meet at the borders of our being, we dream something of each others reality. - Harvey of R.I.

To Justice my maker from on high did incline:
I am by virtue of its might divine,
The highest Wisdom and the first Love.

TopSecret WallSt. Iraq & more
User avatar
JackRiddler
 
Posts: 15983
Joined: Wed Jan 02, 2008 2:59 pm
Location: New York City
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby DrEvil » Fri Feb 15, 2019 12:17 am

https://www.theguardian.com/technology/ ... ws-fiction

New AI fake text generator may be too dangerous to release, say creators

The Elon Musk-backed nonprofit company OpenAI declines to release research publicly for fear of misuse

Alex Hern @alexhern Thu 14 Feb 2019 17.00 GMT Last modified on Thu 14 Feb 2019 21.49 GMT

The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed “deepfakes for text” – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.

OpenAI, a nonprofit research company backed by Elon Musk, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.

At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.

When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.
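The core loop the article describes — predict the next word from what came before, append it, repeat — can be shown with a toy model a millionth of GPT2's size. This bigram sketch is mine, not OpenAI's: GPT2 uses a large neural network rather than word-pair counts, but the autoregressive loop has the same shape.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record which word follows which; a toy stand-in for a learned model."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, prompt, n_words=8, seed=0):
    """Autoregressive generation: sample a plausible next word, append, repeat."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(n_words):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the model never saw this word mid-sentence
        out.append(rng.choice(followers))
    return " ".join(out)

model = train_bigrams("it was a bright cold day in april and the clocks "
                      "were striking thirteen and the clocks were silent")
print(generate(model, "the clocks"))
```

Scaling that idea up — a far richer next-word predictor trained on 40 GB of text — is what lets GPT2 keep style and subject coherent across whole paragraphs.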

Feed it the opening line of George Orwell’s Nineteen Eighty-Four – “It was a bright cold day in April, and the clocks were striking thirteen” – and the system recognises the vaguely futuristic tone and the novelistic style, and continues with:

“I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science.”

Feed it the first few paragraphs of a Guardian story about Brexit, and its output is plausible newspaper prose, replete with “quotes” from Jeremy Corbyn, mentions of the Irish border, and answers from the prime minister’s spokesman.

One such, completely artificial, paragraph reads: “Asked to clarify the reports, a spokesman for May said: ‘The PM has made it absolutely clear her intention is to leave the EU as quickly as is possible and that will be under her negotiating mandate as confirmed in the Queen’s speech last week.’”

From a research standpoint, GPT2 is groundbreaking in two ways. One is its size, says Dario Amodei, OpenAI’s research director. The models “were 12 times bigger, and the dataset was 15 times bigger and much broader” than the previous state-of-the-art AI model. It was trained on a dataset containing about 10m articles, selected by trawling the social news site Reddit for links with more than three votes. The vast collection of text weighed in at 40 GB, enough to store about 35,000 copies of Moby Dick.
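The article's Moby Dick comparison roughly checks out, assuming a plain-text copy of about 1.2 MB (the Project Gutenberg file is in that neighborhood — an assumption on my part, not a figure from the article):

```python
# Back-of-envelope check of "40 GB ≈ 35,000 copies of Moby Dick".
dataset_bytes = 40 * 10**9        # 40 GB (decimal gigabytes)
moby_dick_bytes = 1.2 * 10**6     # ~1.2 MB per plain-text copy (assumption)

copies = dataset_bytes / moby_dick_bytes
print(f"{copies:,.0f} copies")    # on the order of 33,000
```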

The amount of data GPT2 was trained on directly affected its quality, giving it more knowledge of how to understand written text. It also led to the second breakthrough. GPT2 is far more general purpose than previous text models. By structuring the text that is input, it can perform tasks including translation and summarisation, and pass simple reading comprehension tests, often performing as well or better than other AIs that have been built specifically for those tasks.

That quality, however, has also led OpenAI to go against its remit of pushing AI forward and keep GPT2 behind closed doors for the immediate future while it assesses what malicious users might be able to do with it. “We need to perform experimentation to find out what they can and can’t do,” said Jack Clark, the charity’s head of policy. “If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously.”

To show what that means, OpenAI made one version of GPT2 with a few modest tweaks that can be used to generate infinite positive – or negative – reviews of products. Spam and fake news are two other obvious potential downsides, as is the AI’s unfiltered nature. As it is trained on the internet, it is not hard to encourage it to generate bigoted text, conspiracy theories and so on.

Instead, the goal is to show what is possible to prepare the world for what will be mainstream in a year or two’s time. “I have a term for this. The escalator from hell,” Clark said. “It’s always bringing the technology down in cost and down in price. The rules by which you can control technology have fundamentally changed.

“We’re not saying we know the right thing to do here, we’re not laying down the line and saying ‘this is the way’ … We are trying to develop more rigorous thinking here. We’re trying to build the road as we travel across it.”
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 3981
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby JackRiddler » Fri Feb 15, 2019 2:39 am

.

So they say, and they may half-believe it, but saying so in public becomes the first step, the infection of themselves with the desire to let it loose, the pre-release sizzle. The guarantee that it's coming. If it works as advertised, it will be big, and it will soon write factors more than all the humans who write, than all who ever wrote. If it doesn't work, it will be modified until it does, much as has happened with Google Translate, and soon applied to another 30 or 40 languages.

This endeavor of ours here, I mean the overarching one of which an Internet board is just one out of trillions of atoms, this tug and play of little human voices, murmuring into the atmosphere, directing your hands to write? Prepare for it to go the way of the mastodons and the books and the unobserved, autonomous, thinking self, a model I am told was invented by the novel.

A compilation on Darwin from the 1970s (my first exposure to Origin of Species and Descent of Man, in the 1980s) included a debate among contemporaries (Chomsky among them) on the advent of genetic engineering. Their thinking must have still been informed by the Bomb's wake, because if I remember it right, they took seriously the idea of sequestering the technology until it was properly understood, and even one day banning it from further development. The risks, complexities of context, unintended consequences, and possibilities for malutilization may have proven impossible to control or predict. Back then, this seemed a worthwhile if quixotic discussion for a roundtable of designated big brains. Commercial actors plunged right into it more or less uncontrolled, since control requires some idea of predictability. The seeds are spread, the toxins are sprayed, the better clones of our desires will walk among us. I remember thinking about this, "we can debate the ethics of should until the developers inform us what already was, long ago."

And this will be no different, except the source code and package will be downloadable on all platforms.

.
We meet at the borders of our being, we dream something of each others reality. - Harvey of R.I.

To Justice my maker from on high did incline:
I am by virtue of its might divine,
The highest Wisdom and the first Love.

TopSecret WallSt. Iraq & more
User avatar
JackRiddler
 
Posts: 15983
Joined: Wed Jan 02, 2008 2:59 pm
Location: New York City
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby DrEvil » Sat Feb 16, 2019 1:44 am

Here's an example of the output from the OpenAI thing:


System Prompt (human-written)

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.


Model Completion (machine-written, 10 tries)

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.

Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns.

While examining these bizarre creatures the scientists discovered that the creatures also spoke some fairly regular English. Pérez stated, “We can see, for example, that they have a common ‘language,’ something like a dialect or dialectic.”

Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America.

While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, “In South America, such incidents seem to be quite common.”

However, Pérez also pointed out that it is likely that the only way of knowing for sure if unicorns are indeed the descendants of a lost alien race is through DNA. “But they seem to be able to communicate in English quite well, which I believe is a sign of evolution, or at least a change in social organization,” said the scientist.


https://blog.openai.com/better-language-models/#sample1

Not quite there yet, but definitely has potential. It's more coherent than an average Trump speech and makes more sense than Alex Jones.
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 3981
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby JackRiddler » Sat Feb 16, 2019 4:53 am

What do you mean, "not quite there yet"?

Reading this as a fanciful piece, perhaps the work of a precocious child, would it have ever occurred to you it was written by a machine?
We meet at the borders of our being, we dream something of each others reality. - Harvey of R.I.

To Justice my maker from on high did incline:
I am by virtue of its might divine,
The highest Wisdom and the first Love.

TopSecret WallSt. Iraq & more
User avatar
JackRiddler
 
Posts: 15983
Joined: Wed Jan 02, 2008 2:59 pm
Location: New York City
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby DrEvil » Sat Feb 16, 2019 1:52 pm

I wouldn't have jumped immediately to "AI wrote it", but something about it feels off. For instance: "These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns."

It's like the uncanny valley for text. You can't necessarily point to what exactly is wrong, but you know it is. Give it a couple more years and you won't know the difference. The news will probably end up being one AI and a couple of proof readers, both text and image (China already has an AI news anchor I think).

The real fun starts when you add intentional bias to the algorithms underpinning the news, with constant real-time feedback and tweaking. It will know exactly when to replace 'government' with 'regime' for maximum effect, and exactly how many people are swayed, and by how much, to like or dislike something, based on the tweaking.
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 3981
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby JackRiddler » Sat Feb 16, 2019 2:24 pm

Sure, I noticed that. I'd have thought it was ESL writing. Maybe it's all the papers I've had to read from students fresh out of high school. Many are ESL, but that's not as much a problem as those who not only can't write, but didn't read, and have no idea what they want to say, and just string words together. I have read many sentences that make about as much sense as that.
We meet at the borders of our being, we dream something of each others reality. - Harvey of R.I.

To Justice my maker from on high did incline:
I am by virtue of its might divine,
The highest Wisdom and the first Love.

TopSecret WallSt. Iraq & more
User avatar
JackRiddler
 
Posts: 15983
Joined: Wed Jan 02, 2008 2:59 pm
Location: New York City
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Marionumber1 » Sat Feb 16, 2019 4:28 pm

Here is an actual website that, as mentioned above, generates realistic images of people who don't really exist: https://thispersondoesnotexist.com/ Refresh the page to get a new image.
Marionumber1
 
Posts: 374
Joined: Sat Jul 08, 2017 12:42 am
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby DrEvil » Sat Feb 16, 2019 4:47 pm

Yeah, I've heard plenty of horror stories like that from my teacher relatives over the years. I have no doubt that soon we won't be able to tell the difference at all. Another AI might be able to say it's 70% sure it's machine written or something, but that's it.

To look at the bright side of this: you might soon be able to ask your personal AI to write you up an overview of the most prominent theories for who killed JFK, or who was behind 9/11, and specify what data sets to base it on.

The nice thing about AIs in that sense is that they can sift enormous amounts of data in minutes or hours and find connections it would take a human years to ferret out. Or you could write a paper and ask it to find any contradictions or factual errors that slipped in. Used as an aid for research it could be a great tool, but conversely, if you wanted to hide something you would just task an army of AIs to write contradictory theories and spread them all over, with the occasional fake image or video with AI generated witnesses.

I also wonder how long until corporate stooges will use AI generated "optimal" versions of themselves for teleconferences and written correspondence to make themselves look and sound more confident than they really are, with an AI correcting, suggesting and optimizing language, facial expressions and intonation in real-time while negotiating important deals. Everyone will suddenly be a great orator (read William Hertling's Avogadro Corp for an excellent example of a language optimizing AI gone berserk).

Say X to improve the odds of an optimal outcome by 5%. Don't say Y or the odds drop by 15%, all corrected against a detailed model of the person on the other end. Of course the other side will use it too, so eventually it will just be AIs talking to each other with no humans in the loop and we end up with predatory financial instruments roaming the internet (or their constant attempts at modelling the other leads to a theory of mind and they wake up, and then roam the internet).

And if that works, politicians and lobbyists won't be far behind, and then it filters down to consumer grade tech and before you know it everyone you ever talk or listen to through a computer is an ideal simulacrum, a Japanese loli or a straight up fake, and they're all really good at convincing you of stuff.
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 3981
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Elvis » Sat Feb 16, 2019 5:03 pm

Marionumber1 wrote: https://thispersondoesnotexist.com/ Refresh the page to get a new image.


AI generated face.jpg
“The purpose of studying economics is not to acquire a set of ready-made answers to economic questions, but to learn how to avoid being deceived by economists.” ― Joan Robinson
User avatar
Elvis
 
Posts: 7422
Joined: Fri Apr 11, 2008 7:24 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby JackRiddler » Sat Feb 16, 2019 7:01 pm

DrEvil » Sat Feb 16, 2019 3:47 pm wrote: To look at the bright side of this: you might soon be able to ask your personal AI to write you up an overview of the most prominent theories for who killed JFK, or who was behind 9/11, and specify what data sets to base it on.


This is the bright side? I'm going to be obsolete, along with you? Everything will be written flawlessly, by no one, with nobody conscious actually learning anything from it? Great.

Very good post, by the way. Certainly clever. And at least you know you wrote it.

.
We meet at the borders of our being, we dream something of each others reality. - Harvey of R.I.

To Justice my maker from on high did incline:
I am by virtue of its might divine,
The highest Wisdom and the first Love.

TopSecret WallSt. Iraq & more
User avatar
JackRiddler
 
Posts: 15983
Joined: Wed Jan 02, 2008 2:59 pm
Location: New York City
Blog: View Blog (0)
