The creepiness that is Facebook

Moderators: Elvis, DrVolin, Jeff

Re: The creepiness that is Facebook

Postby seemslikeadream » Sun Sep 24, 2017 9:50 pm

Obama tried to give Zuckerberg a wake-up call over fake news on Facebook

Facebook CEO Mark Zuckerberg’s company recently said it would turn over to Congress more than 3,000 politically themed advertisements that were bought by suspected Russian operatives. (Eric Risberg/AP)
By Adam Entous, Elizabeth Dwoskin and Craig Timberg September 24 at 8:44 PM

Nine days after Facebook chief executive Mark Zuckerberg dismissed as “crazy” the idea that fake news on his company’s social network played a key role in the U.S. election, President Barack Obama pulled the youthful tech billionaire aside and delivered what he hoped would be a wake-up call.

For months leading up to the vote, Obama and his top aides quietly agonized over how to respond to Russia’s brazen intervention on behalf of the Donald Trump campaign without making matters worse. Weeks after Trump’s surprise victory, some of Obama’s aides looked back with regret and wished they had done more.

Now huddled in a private room on the sidelines of a meeting of world leaders in Lima, Peru, two months before Trump’s inauguration, Obama made a personal appeal to Zuckerberg to take the threat of fake news and political disinformation seriously. Unless Facebook and the government did more to address the threat, Obama warned, it would only get worse in the next presidential race.

[Mark Zuckerberg denies that fake news on Facebook influenced the elections]

Zuckerberg acknowledged the problem posed by fake news. But he told Obama those messages weren’t widespread on Facebook and that there was no easy remedy, according to people briefed on the exchange, who spoke on the condition of anonymity to share details of a private conversation.

[Video: Facebook to turn over Russian ad sales from the 2016 election]

Facebook announced on Sept. 21 that it would turn over copies of 3,000 political ads bought by Russian accounts during the 2016 election, after previously showing some to congressional investigators. (The Washington Post)

The conversation on Nov. 19 was a flashpoint in a tumultuous year in which Zuckerberg came to recognize the magnitude of a new threat — a coordinated assault on a U.S. election by a shadowy foreign force that exploited the social network he created.

Like the U.S. government, Facebook didn’t foresee the wave of disinformation that was coming and the political pressure that followed. The company then grappled with a series of hard choices designed to shore up its own systems without impinging on free discourse for its users around the world.

One outcome of those efforts was Zuckerberg’s admission on Thursday that Facebook had indeed been manipulated and that the company would now turn over to Congress more than 3,000 politically themed advertisements that were bought by suspected Russian operatives.


But that highly public moment came after months of maneuvering behind the scenes that has thrust Facebook, one of the world’s most valuable companies — and one that’s used by one-third of the world’s population each month — into a multi-sided Washington power struggle in which the company has much to lose.

Some critics say Facebook dragged its feet and is acting only now because of outside political pressure.


“There’s been a systematic failure of responsibility” on Facebook’s part, said Zeynep Tufekci, an associate professor at the University of North Carolina at Chapel Hill who studies social media companies’ impact on society and governments. “It’s rooted in their overconfidence that they know best, their naivete about how the world works, their extensive effort to avoid oversight, and their business model of having very few employees so that no one is minding the store.”

Facebook says it responded appropriately.

“We believe in the power of democracy, which is why we’re taking this work on elections integrity so seriously, and have come forward at every opportunity to share what we’ve found,” said Elliot Schrage, vice president for public policy and communications. A spokesperson for Obama declined to comment.

This account — based on interviews with more than a dozen people involved in the government’s investigation and Facebook’s response — provides the first detailed backstory of a 16-month journey in which the company came to terms with an unanticipated foreign attack on the U.S. political system and its search for tools to limit the damage.


Among the revelations is how Facebook detected elements of the Russian information operation in June 2016 and then notified the FBI. Yet in the months that followed, the government and the private sector struggled to work together to diagnose and fix the problem.

The growing political drama over these issues has come at a time of broader reckoning for Facebook, as Zuckerberg has wrestled with whether to take a more active role in combatting an emerging dark side on the social network — including fake news and suicides on live video, and allegations that the company was censoring political speech.

[Facebook wanted ‘visceral’ live video. It’s getting live-streaming killers and suicides.]

These issues have forced Facebook and other Silicon Valley companies to weigh core values, including freedom of speech, against the problems created when malevolent actors use those same freedoms to pump messages of violence, hate and disinformation.

There has been a rising bipartisan clamor, meanwhile, for new regulation of a tech industry that, amid a historic surge in wealth and power over the past decade, has largely had its way in Washington despite concerns raised by critics about its behavior.


In particular, momentum is building in Congress and elsewhere in the federal government for a law requiring tech companies — like newspapers, television stations and other traditional carriers of campaign messages — to disclose who buys political ads and how much they spend on them.

“There is no question that the idea that Silicon Valley is the darling of our markets and of our society — that sentiment is definitely turning,” said Tim O’Reilly, an adviser to tech executives and chief executive of the influential Silicon Valley-based publisher O’Reilly Media.

Thwarting the Islamic State
The encounter in Lima was not the first time Obama had sought Facebook’s help.

In the aftermath of the December 2015 shooting in San Bernardino, Calif., the president dispatched members of his national security team — including Chief of Staff Denis McDonough, Homeland Security Secretary Jeh Johnson and top counterterrorism adviser Lisa Monaco — to huddle with leading Silicon Valley executives over ways to thwart the Islamic State’s practice of using U.S.-based technology platforms to recruit members and inspire attacks.


The result was a Jan. 8, 2016, summit attended by one of Zuckerberg’s top deputies, Chief Operating Officer Sheryl Sandberg. The outreach effort paid off in the view of the Obama administration when Facebook agreed to set up a special unit to develop tools for finding Islamic State messages and blocking their dissemination.

Facebook’s efforts were aided in part by the relatively transparent ways in which the extremist group sought to build its global brand. Most of its propaganda messages on Facebook incorporated the Islamic State’s distinctive black flag — the kind of image that software programs can be trained to automatically detect.

In contrast, the Russian disinformation effort has proven far harder to track and combat, because Russian operatives took advantage of Facebook’s core functions, using shared content and targeted native ads to shape the political environment in an unusually contentious political season, say people familiar with Facebook’s response.

Unlike the Islamic State’s propaganda, what Russian operatives posted on Facebook was, for the most part, indistinguishable from legitimate political speech. The difference was that the accounts set up to spread the misinformation and hate were illegitimate.

A Russian operation
It turned out that Facebook, without realizing it, had stumbled into the Russian operation as it was getting underway in June 2016.

At the time, cybersecurity experts at the company were tracking a Russian hacker group known as APT28, or Fancy Bear, which U.S. intelligence officials considered an arm of the Russian military intelligence service, the GRU, according to people familiar with Facebook’s activities.

Members of the Russian hacker group were best known for stealing military plans and data from political targets, so the security experts assumed that they were planning some sort of espionage operation — not a far-reaching disinformation campaign designed to shape the outcome of the U.S. presidential race.

Facebook executives shared with the FBI their suspicions that a Russian espionage operation was in the works, a person familiar with the matter said. An FBI spokesperson had no comment.

Soon thereafter, Facebook’s cyber experts found evidence that members of APT28 were setting up a series of shadowy accounts — including a persona known as Guccifer 2.0 and a Facebook page called DCLeaks — to promote stolen emails and other documents during the presidential race. Facebook officials once again contacted the FBI to share what they had seen.

After the November election, Facebook began to look more broadly at the accounts that had been created during the campaign.

A review by the company found that most of the groups behind the problematic pages had clear financial motives, which suggested that they weren’t working for a foreign government.

But amid the mass of data the company was analyzing, the security team did not find clear evidence of Russian disinformation or ad purchases by Russian-linked accounts.

Nor did any U.S. law enforcement or intelligence officials visit the company to lay out what they knew, said people familiar with the effort, even after the nation’s top intelligence official, James R. Clapper Jr., testified on Capitol Hill in January that the Russians had waged a massive propaganda campaign online.

[Top U.S. intelligence official: Russia meddled in election by hacking, spreading of propaganda]

The sophistication of the Russian tactics caught Facebook off guard. Its highly regarded security team had erected formidable defenses against traditional cyberattacks but failed to anticipate that Facebook users, deploying easily available automated tools such as ad micro-targeting, could pump skillfully crafted propaganda through the social network without setting off any alarm bells.

Political post-mortem
As Facebook struggled to find clear evidence of Russian manipulation, the idea was gaining credence in other influential quarters.

In the electrified aftermath of the election, aides to Hillary Clinton and Obama pored over polling numbers and turnout data, looking for clues to explain what they saw as an unnatural turn of events.

One of the theories to emerge from their post-mortem was that Russian operatives who were directed by the Kremlin to support Trump may have taken advantage of Facebook and other social media platforms to direct their messages to American voters in key demographic areas in order to increase enthusiasm for Trump and suppress support for Clinton.

These former advisers didn’t have hard evidence that Russian trolls were using Facebook to micro-target voters in swing districts — at least not yet — but they shared their theories with the House and Senate intelligence committees, which launched parallel investigations into Russia’s role in the presidential campaign in January.

[Congressional investigations into alleged Russian hacking begin without end in sight]

Sen. Mark R. Warner, vice chairman of the Senate Intelligence Committee, initially wasn’t sure what to make of Facebook’s role. U.S. intelligence agencies had briefed the Virginia Democrat and other members of the committee about alleged Russian contacts with the Trump campaign and about how the Kremlin leaked Democratic emails to WikiLeaks to undercut Clinton.

But the intelligence agencies had little data on Russia’s use of Facebook and other U.S.-based social media platforms, in part because of rules designed to protect the privacy of communications between Americans.

Facebook’s effort to understand Russia’s multifaceted influence campaign continued as well.

Zuckerberg announced in a 6,000-word blog post in February that Facebook needed to play a greater role in controlling its dark side.

“It is our responsibility,” he wrote, “to amplify the good effects [of the Facebook platform] and mitigate the bad — to continue increasing diversity while strengthening our common understanding so our community can create the greatest positive impact on the world.”

‘A critical juncture’
The extent of Facebook’s internal self-examination became clear in April, when Facebook Chief Security Officer Alex Stamos co-authored a 13-page white paper detailing the results of a sprawling research effort that included input from experts from across the company, who in some cases also worked to build new software aimed specifically at detecting foreign propaganda.

“Facebook sits at a critical juncture,” Stamos wrote in the paper, adding that the effort focused on “actions taken by organized actors (governments or non-state actors) to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome.” He described how the company had used a technique known as machine learning to build specialized data-mining software that can detect patterns of behavior — for example, the repeated posting of the same content — that malevolent actors might use.
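
Stamos’s description of the data-mining software is high-level, and the underlying machine-learning system is not public. Purely as a sketch of the simplest signal he names, repeated posting of the same content, the heuristic below fingerprints normalized post text and flags content pushed by many distinct accounts; the input format, function names, and threshold are all assumptions for illustration, not Facebook’s method.

```python
import hashlib
from collections import defaultdict

def normalize(text: str) -> str:
    """Collapse case and whitespace so trivially edited copies hash identically."""
    return " ".join(text.lower().split())

def flag_repeated_posters(posts, min_accounts=5):
    """Return content fingerprints pushed by suspiciously many distinct accounts.

    `posts` is an iterable of (account_id, text) pairs (a hypothetical input
    format, not Facebook's actual data model).
    """
    accounts_by_fingerprint = defaultdict(set)
    for account_id, text in posts:
        fingerprint = hashlib.sha256(normalize(text).encode()).hexdigest()
        accounts_by_fingerprint[fingerprint].add(account_id)
    # Keep only fingerprints that many different accounts posted verbatim.
    return {fp: accounts for fp, accounts in accounts_by_fingerprint.items()
            if len(accounts) >= min_accounts}
```

A real system would also need fuzzier matching (near-duplicates, images, shortened links), which is presumably where the machine learning Stamos describes comes in.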

The software tool was given a secret designation, and Facebook is now deploying it and others in the run-up to elections around the world. It was used in the French election in May, where it helped disable 30,000 fake accounts, the company said. It was put to the test again on Sunday when Germans went to the polls. Facebook declined to share the software tool’s code name. Another recently developed tool shows users when articles have been disputed by third-party fact checkers.

Notably, Stamos’s paper did not raise the topic of political advertising — an omission that was noticed by Capitol Hill investigators. Facebook, worth $495 billion, is the world’s second-largest online advertising company, after Google. Although not mentioned explicitly in the report, Stamos’s team had searched extensively for evidence of foreign purchases of political advertising but had come up short.

A few weeks after the French election, Warner flew out to California to visit Facebook in person. It was an opportunity for the senator to press Stamos directly on whether the Russians had used the company’s tools to disseminate anti-Clinton ads to key districts.

Officials said Stamos underlined to Warner the magnitude of the challenge Facebook faced in policing political content that looked legitimate.

Stamos told Warner that Facebook had found no Russian-linked accounts that had used advertising, but agreed with the senator that some probably existed. The difficulty for Facebook was finding them.

Finally, Stamos appealed to Warner for help: If U.S. intelligence agencies had any information about the Russian operation or the troll farms it used to disseminate misinformation, they should share it with Facebook. The company is still waiting, people involved in the matter said.

Breakthrough moment
For months, a team of engineers at Facebook had been searching through accounts, looking for signs that they were set up by operatives working on behalf of the Kremlin. The task was immense.

Warner’s visit spurred the company to make some changes in how it conducted its internal investigation. Instead of searching through impossibly large batches of data, Facebook decided to focus on a subset of political ads.

Technicians then searched for “indicators” that would link those ads to Russia. To narrow down the search further, Facebook zeroed in on a Russian entity known as the Internet Research Agency, which had been publicly identified as a troll farm.

“They worked backwards,” a U.S. official said of the process at Facebook.

The breakthrough moment came just days after a Facebook spokesman on July 20 told CNN that “we have seen no evidence that Russian actors bought ads on Facebook in connection with the election.”

Facebook’s talking points were about to change.

By early August, Facebook had identified more than 3,000 ads addressing social and political issues that ran in the United States between 2015 and 2017 and that appear to have come from accounts associated with the Internet Research Agency.

After making the discovery, Facebook reached out to Warner’s staff to share what they had learned.

Congressional investigators say the disclosure only scratches the surface. One called Facebook’s discoveries thus far “the tip of the iceberg.” Nobody really knows how many accounts are out there and how to prevent more of them from being created to shape the next election — and turn American society against itself.
https://www.washingtonpost.com/business ... 059dad47d6




Mark Zuckerberg Can’t Stop You From Reading This Because The Algorithms Have Already Won
And the machines are running the asylum.

Posted on September 24, 2017, at 7:18 p.m.
Charlie Warzel

There’s a decent chance that Facebook CEO Mark Zuckerberg will see this story. It's relevant to his interests and nominally about him and the media and advertising industries his company has managed to upend and dominate. So the odds that it will appear in his Facebook News Feed are reasonably good. And should that happen, Zuckerberg might wince at this story’s headline or roll his eyes in frustration at its thesis. He might even cringe at the idea that others might see it on Facebook as well. And some almost certainly will. Because if Facebook works as designed, there's a chance this article will also be routed or shared to their News Feeds. And there's little the Facebook CEO can do to stop it, because he's not really in charge of his platform — the algorithms are.

This has been true for some time now, but it's been spotlit in recent months following a steady drumbeat of reports about Facebook as a channel for fake news and propaganda and, more recently, the company's admission that it sold roughly $100,000 worth of ads to a Russian troll farm in 2016. The gist of the coverage follows a familiar narrative for Facebook since Trump’s surprise presidential win: that social networks as vast and pervasive as Facebook are among the most important engines of social power, with unprecedented and unchecked influence. It’s part of a Big Tech political backlash that’s gained considerable currency in recent months — enough that the big platforms like Facebook are scrambling to avoid regulation and bracing themselves for congressional testimony.

Should Zuckerberg or Twitter CEO Jack Dorsey be summoned to Congress and peppered with questions about the inner workings of their companies, they may well be ill-equipped to answer them. Because while they might be in control of the broader operations of their respective companies, they do not appear to be fully in control of the automated algorithmic systems calibrated to drive engagement on Facebook and Twitter. And they have demonstrably proven that they lacked the foresight to imagine and understand the now clear real-world repercussions of those systems — fake news, propaganda, and dark targeted advertising linked to foreign interference in a US presidential election.

Among tech industry critics, every advancement from Alexa to AlphaGo to autonomous vehicles is winkingly dubbed a harbinger of a dystopian future powered by artificial intelligence. Tech moguls like Tesla and SpaceX founder Elon Musk and futurists like Stephen Hawking warn against nightmarish scenarios that vary from the destruction of the human race to the more likely threat that our lives will be subject to the whims of advanced algorithms that we’ve been happily feeding with our increasingly personal data. In 2014, Musk remarked that artificial intelligence is “potentially more dangerous than nukes” and warned that humanity might someday become a “biological boot loader for digital superintelligence.”


But if you look around, some of that dystopian algorithmic future has already arrived. Complex technological systems orchestrate many — if not most — of the consequential decisions in your life. We entrust our romantic lives to apps and algorithms — chances are you know somebody who’s swiped right or matched with a stranger and then slept with, dated, or married them. A portion of our daily contact with our friends and families is moderated via automated feeds painstakingly tailored to our interests. To navigate our cities, we’re jumping into cars with strangers assigned to us via robot dispatchers and sent down the quickest route to our destination based on algorithmic analysis of traffic patterns. Our fortunes are won and lost as the result of financial markets largely dictated by networks of high-frequency trading algorithms. Meanwhile, the always-learning AI-powered technology behind our search engines and our newsfeeds quietly shapes and reshapes the information we discover and even how we perceive it. And there’s mounting evidence that suggests it might even be capable of influencing the outcome of our elections.

Put another way, the algorithms increasingly appear to have more power to shape lives than the people who designed and maintain them. This shouldn’t come as a surprise, if only because Big Tech’s founders have been saying it for years now — in fact, it’s their favorite excuse — “we’re just a technology company” or “we’re only the platform.” And though it’s a convenient cop-out for the unintended consequences of their own creations, it’s also — from the perspectives of technological complexity and scale — kind of true. Facebook and Google and Twitter designed their systems, and they tweak them rigorously. But because the platforms themselves — the technological processes that inform decisions for billions of people every second of the day — are largely automated, they’re enormously difficult to monitor.

Facebook acknowledged this in its response to a ProPublica report this month that showed the company allowed advertisers to target users with anti-Semitic keywords. According to the report, Facebook’s anti-Semitic categories “were created by an algorithm rather than by people.”

And Zuckerberg pointed to similar monitoring difficulties just this week while addressing Facebook’s role in protecting elections. “Now, I'm not going to sit here and tell you we're going to catch all bad content in our system,” he explained during a Facebook Live session last Thursday. “I wish I could tell you we're going to be able to stop all interference, but that wouldn't be realistic.” Beneath Zuckerberg’s video, a steady stream of commenters remarked on his speech. Some offered heart emojis of support. Others mocked his demeanor and delivery. Some accused him of treason. He was powerless to stop it.


Facebook’s response to accusations about its role in the 2016 election since Nov. 9 bears this out, most notably Zuckerberg’s public comments immediately following the election that the claim that fake news influenced the US presidential election was “a pretty crazy idea.” In April, when Facebook released a white paper detailing the results of its investigation into fake news on its platform during the election, the company insisted it did not know the identity of the malicious actors using its network. And after recent revelations that Facebook had discovered Russian ads on its platform, the company maintained that as of April 2017, it was unaware of any Russian involvement. “When asked we said there was no evidence of Russian ads. That was true at the time,” Facebook told Mashable earlier this month.

Some critics of Facebook speak about the company’s leadership almost like an authoritarian government — a sovereign entity with virtually unchecked power and domineering ambition. So much so, in fact, that Zuckerberg is now frequently mentioned as a possible presidential candidate despite his public denials. But perhaps a better comparison might be the United Nations — a group of individuals endowed with the almost impossible responsibility of policing a network of interconnected autonomous powers. Just take Zuckerberg’s statement this week, in which he sounded strikingly like an embattled secretary-general: “It is a new challenge for internet communities to deal with nation-states attempting to subvert elections. But if that’s what we must do, we are committed to rising to the occasion,” he said.

“I wish I could tell you we're going to be able to stop all interference, but that wouldn't be realistic” isn’t just a carefully hedged pledge to do better, it's a tacit admission that the effort to do better may well be undermined by a system of algorithms and processes that the company doesn't fully understand or control at scale. Add to this Facebook's mission as a business — drive user growth; drive user engagement; monetize that growth and engagement; innovate in a ferociously competitive industry; oh, and uphold ideals of community and free speech — and you have a balance that’s seemingly impossible to maintain.

Facebook’s power and influence are vast, and the past year has shown that true understanding of the company’s reach and application is difficult; as CJR’s Pete Vernon wrote this week, “What other CEO can claim, with a straight face, the power to ‘proactively…strengthen the democratic process?’” But perhaps “power” is the wrong word to describe Zuckerberg's — and other tech moguls’ — position. In reality, it feels more like a responsibility. At the New York Times, Kevin Roose described it as Facebook’s Frankenstein problem — the company created a monster it can’t control. And in terms of responsibility, the metaphor is almost too perfect. After all, people always forget that Dr. Frankenstein was the creator, not the monster.
https://www.buzzfeed.com/charliewarzel/ ... .ynbKye6yE
Mazars and Deutsche Bank could have ended this nightmare before it started.
They could still get him out of office.
But instead, they want mass death.
Don’t forget that.

Re: The creepiness that is Facebook

Postby seemslikeadream » Mon Sep 25, 2017 8:56 am

Will Mark Zuckerberg ‘Like’ This Column?

Maureen Dowd SEPT. 23, 2017

Mark Zuckerberg may be learning what it’s like to be Dr. Frankenstein. Credit Justin Sullivan/Getty Images
WASHINGTON — The idea of Mark Zuckerberg running for president was always sort of scary.

But now it’s really scary, given what we’ve discovered about the power of his little invention to warp democracy.

All these years, the 33-year-old founder of Facebook has been dismissive of the idea that social media and A.I. could be used for global domination — or even that they should be regulated.

Days after Donald Trump pulled out his disorienting win, Zuckerberg told a tech conference that the contention that fake news had influenced the election was “a pretty crazy idea,” showing a “profound lack of empathy” toward Trump voters.

But all the while, the company was piling up the rubles and turning a blind eye as the Kremlin’s cyber hit men weaponized anti-Hillary bots on Facebook to sway the U.S. election. Russian agents also used Facebook and Twitter trolls, less successfully, to try to upend the French election.

Finally on Thursday, speaking on Facebook Live, Zuckerberg said he would give Congress more than 3,000 ads linked to Russia. As one Facebooker posted: “Why did it take EIGHT MONTHS to get here?”

Hillary is right that this $500 billion company has a lot to answer for in allowing the baby-photo-sharing site, along with Twitter, to be turned into what The Times’s Scott Shane called “engines of deception and propaganda.”

Robert Mueller’s team, as well as House and Senate investigators, are hotly pursuing the trail of Russian fake news. On Friday, the Department of Homeland Security told 21 states, including Wisconsin and Ohio, that Russian agents had tried to hack their election systems during the campaign.

As Vanity Fair pointed out, Mueller’s focus on social media during the campaign could spell trouble for Jared Kushner, who once bragged that he had called his Silicon Valley friends to get a tutorial in Facebook microtargeting and brought in Cambridge Analytica — Robert Mercer is a big investor — to help build a $400 million operation for his father-in-law’s campaign.

Some lawmakers suspect that the Russians had help in figuring out which women and blacks to target in precincts in Wisconsin and Michigan.

Senator Martin Heinrich, a New Mexico Democrat on the Senate Intelligence Committee looking into Russia’s intervention in 2016, has a suspect in mind. “Paul Manafort made an awful lot of money coming up with a game plan for how Russian interests could be pushed in Western countries and Western elections,” Heinrich told Vanity Fair.

ProPublica broke the news that, until it asked about it recently, Facebook had “enabled advertisers to direct their pitches to the news feeds of almost 2,300 people who expressed interest in the topics of ‘Jew hater,’ ‘How to burn jews,’ or, ‘History of “why jews ruin the world.”’”

Sheryl Sandberg, Facebook’s C.O.O., apologized for this on Wednesday and promised to fix the ad-buying tools, noting, “We never intended or anticipated this functionality being used this way — and that is on us.”

The Times’s Kevin Roose called this Facebook’s “Frankenstein moment,” like when Mary Shelley’s scientist, Victor Frankenstein, says, “I had been the author of unalterable evils, and I lived in daily fear lest the monster whom I had created should perpetrate some new wickedness.”

Roose noted that in addition to the Russian chicanery, “In Myanmar, activists are accusing Facebook of censoring Rohingya Muslims, who are under attack from the country’s military. In Africa, the social network faces accusations that it helped human traffickers extort victims’ families by leaving up abusive videos.”

The Sandberg admission was also game, set and match for Elon Musk, who has been sounding the alarm for years about the danger of Silicon Valley’s creations and A.I. mind children getting out of control and hurting humanity. His pleas for safeguards and regulations have been mocked as “hysterical” and “pretty irresponsible” by Zuckerberg.

Zuckerberg, whose project last year was building a Jarvis-style A.I. butler for his home, likes to paint himself as an optimist and Musk as a doomsday prophet. But Sandberg’s comment shows that Musk is right: The digerati at Facebook and Google are either being naïve or cynical and greedy in thinking that it’s enough just to have a vague code of conduct that says “Don’t be evil,” as Google does.

As Musk told me when he sat for a Vanity Fair piece: “It’s great when the emperor is Marcus Aurelius. It’s not so great when the emperor is Caligula.”

In July, the chief of Tesla and SpaceX told a meeting of governors that they should adopt A.I. legislation before robots start “going down the street killing people.” In August, he tweeted that A.I. going rogue represents “vastly more risk than North Korea.” And in September, he tweeted out a Gizmodo story headlined “Hackers Have Already Started to Weaponize Artificial Intelligence,” reporting that researchers proved that A.I. hackers were better than humans at getting Twitter users to click on malicious links.

(Musk also tweeted that it was a cautionary tale when Microsoft’s chatbot, Tay, had to be swiftly shut down when Twitter users taught her how to reply with racist, misogynistic and anti-Semitic slurs, talking approvingly about Hitler.)

Vladimir Putin has denied digital meddling in the U.S. elections. But he understands the possibilities and threat of A.I. In a recent address, the Russian president told schoolchildren, “Whoever becomes the leader in this sphere will become the ruler of the world.” Musk agreed on Twitter that competition for A.I. superiority would be the “most likely cause of WW3.”

On Thursday, touring the Moscow tech firm Yandex, Putin asked the company’s chief how long it would be before superintelligent robots “eat us.”

Zuckerberg scoffs at such apocalyptic talk. His project this year was visiting all 50 states, a trip designed by former Obama strategist David Plouffe, which sparked speculation that he might be the next billionaire to seek the Oval Office.

As Bloomberg Businessweek wrote in a cover story a few days ago, Zuckerberg has hired Plouffe, other senior Obama officials and Hillary’s pollster. He has said he is no longer an atheist and he changed Facebook’s charter to allow him to maintain control in the hypothetical event he runs for office.

Yep. Very scary.
https://www.nytimes.com/2017/09/23/opin ... ml?mcubz=0



WaPo: Obama Asked Zuckerberg To Address Fake News On Facebook In Nov.

By CAITLIN MACNEAL Published SEPTEMBER 25, 2017 8:20 AM

After President Donald Trump won the presidential election in November, then-President Barack Obama asked Facebook founder Mark Zuckerberg to address the uptick in fake news on the social media website, the Washington Post reported Sunday night, citing unnamed people briefed on the conversation.

As federal investigators and reporters dig deeper into Russia’s attempt to interfere in the 2016 presidential election, Facebook has come under scrutiny for the amount of fake news that flourished on the social platform. Reports this month revealed that a Russian troll farm spent $100,000 on Facebook ads during the 2016 election and that a Russian-linked Facebook group promoted pro-Trump rallies.

After the 2016 election, amid concerns that Russia tried to interfere in the election, Zuckerberg said it was “crazy” to think that fake content on Facebook influenced the outcome of the 2016 race.

After those comments, on Nov. 19, Obama spoke to Zuckerberg about fake news and Facebook on the sidelines of a meeting with world leaders in Peru, according to the Washington Post. Obama told Zuckerberg that he needed to do more to address fake news and its influence on elections, per the Post.

However, Zuckerberg was resistant and told Obama that fake news was not widespread on Facebook and that it would be hard to address, according to the Washington Post.

Read the Washington Post’s full report here.
http://talkingpointsmemo.com/livewire/o ... s-facebook

Re: The creepiness that is Facebook

Postby seemslikeadream » Tue Sep 26, 2017 8:05 am

BUSINESS
09.26.17 07:00 AM
WHAT WE KNOW—AND DON'T KNOW—ABOUT FACEBOOK, TRUMP, AND RUSSIA


Special counsel Robert Mueller leaves after a closed meeting with members of the Senate Judiciary Committee June 21, 2017, at the Capitol in Washington, DC. ALEX WONG/GETTY IMAGES
FACEBOOK IS NOW enmeshed in several investigations into Russia’s interference in the 2016 election. Last week, the company agreed to give Congress 3,000 political ads linked to Russian actors that it sold and ran during the 2016 election cycle; it previously had handed that information to special counsel Robert Mueller. But the details of how the social-networking giant found itself at the center of all of this, and, crucially, what that could mean for President Trump, can easily get lost amid competing headlines around healthcare, hurricanes, and a steadily escalating nuclear standoff with North Korea.
To help, we’re here to walk you through everything we know—and don’t know—about Facebook’s role in the 2016 election, and the subsequent investigations. We’ll update this list of questions and answers as we learn more.
What did Facebook give Mueller?

In early September, Facebook said it had identified $150,000 of political ads purchased by fake accounts linked to Russia. It attributed about $100,000 of the total, or 3,000 ads, to 470 accounts related to a Russian propaganda group called Internet Research Agency. It found another 2,000 ads worth $50,000 by searching for ads purchased through US internet addresses whose accounts were set to the Russian language. The ads touched on hot-button social issues such as immigration and LGBT rights and, according to a report from The Washington Post, included content aimed at stoking racial resentment against blacks and Muslims. About 25 percent of the ads geographically targeted certain regions of the United States. The majority of these ads ran in 2015.
After suspending the accounts and writing a vague blog post on the subject, Facebook remained largely silent about what the ads contained, whom they reached, or how they were discovered. But on Sept. 21, Facebook confirmed it had shared the ads with Mueller's team and would do the same with Congressional investigators. Facebook has not yet agreed to meet with Congress for further questioning.
How did Facebook find these ads?

The only detail Facebook has shared publicly is that it looked for American IP addresses set to the Russian language, then “fanned out” from there, as the Facebook spokesperson put it. That makes it impossible to know whether Facebook has identified all suspect ads, or just those the Russians were laziest about hiding. The Facebook spokesperson declined to comment on whether Mueller's team has access to the company's investigative process.
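
Facebook hasn't explained the mechanics of that fan-out, so any concrete rendering is guesswork. As a minimal sketch, assuming account metadata with hypothetical ip_country and locale fields and a precomputed graph linking accounts that share attributes (payment instruments, page admins, devices), the expansion would amount to a breadth-first search from the seed accounts:

```python
from collections import deque

def seed_accounts(accounts):
    """Seed on the one public criterion: a US IP address with a Russian locale.

    `accounts` maps account_id to a metadata dict; the field names are
    assumptions, not Facebook's schema.
    """
    return [aid for aid, meta in accounts.items()
            if meta.get("ip_country") == "US"
            and meta.get("locale", "").startswith("ru")]

def fan_out(links, seeds):
    """Breadth-first expansion across shared-attribute links between accounts.

    `links` maps account_id to a set of account_ids connected by shared
    payment methods, admins, etc. (again, a hypothetical structure).
    """
    suspect, queue = set(seeds), deque(seeds)
    while queue:
        current = queue.popleft()
        for neighbor in links.get(current, ()):
            if neighbor not in suspect:
                suspect.add(neighbor)
                queue.append(neighbor)
    return suspect

# Usage: suspects = fan_out(links, seed_accounts(accounts))
```

A search seeded this narrowly can only surface operations that share traits with the seeds; anything else stays invisible, which is exactly the limitation discussed next.
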
It’s likely, however, that Facebook's search has not covered everything. On Sept. 21, during a Facebook Live address, CEO Mark Zuckerberg admitted as much, saying, "We may find more, and if we do, we will continue to work with the government." We know, for instance, that Internet Research Agency, the propaganda group, has officially shut down. But similar firms, including one called Glavset, operate with the same people at the same addresses. The Facebook spokesperson would not discuss whether its investigation would have caught these other shell companies.


During his Facebook Live, Zuckerberg outlined how the company plans to overhaul its election-integrity processes. For starters, it will require political advertisers to disclose, on the ads themselves, who paid for the ad. It will also require political advertisers to publicly catalog all of the variations of ads that they target to different Facebook audiences. The goal here is to make it easier for the public to see when politicians send different messages to different groups of people. President Trump has been criticized for using so-called "dark posts" to send messages about the border wall to core supporters that conflict with his more public statements. That kind of targeted advertising is par for the course in the internet age, but now, Facebook says it will ensure that when it's used in politics, the public has more visibility into those messages. Facebook also said it would add 250 people to its election-integrity team to more thoroughly vet who's buying political ads.
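
To make the disclosure-and-catalog scheme concrete, here is one hypothetical shape a public ad record could take. The field names are invented for illustration; Facebook has not published a schema:

```python
from dataclasses import dataclass, field

@dataclass
class PoliticalAdRecord:
    """One entry in a hypothetical public archive of the kind Zuckerberg described."""
    advertiser: str      # name shown on the ad itself
    paid_for_by: str     # funding entity disclosed on the ad
    creative_text: str   # the ad copy for this variation
    targeting: dict      # audience criteria for this variation
    impressions: int = 0
    variations: list = field(default_factory=list)  # sibling creatives in the same buy
```

Cataloging every variation is what would expose the "dark post" pattern: the same buyer sending conflicting messages to different audiences.
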
And yet, the question remains: What constitutes a political ad? Are campaigns and Super PACs the only ones subject to this disclosure on Facebook? Or will anyone who wants to advertise about a political issue be subject to the same scrutiny? And what about fake news publishers that pay to boost their own articles? Facebook isn't providing much detail about how it will implement its plan, but answers to those questions are critical to understanding how effective this self-regulation will be.
Could Russians have placed other ads that Facebook hasn’t yet identified?

Absolutely. In the case of the $150,000 in ads, one digital breadcrumb led to the next, until Facebook uncovered a cohesive effort by the Internet Research Agency to spread misleading information to American voters. It’s easier to spot such a coordinated campaign than it is to find every ally of Vladimir Putin who might have spent a few thousand dollars to give a fake news story some extra exposure. Facebook sold $27 billion in ads in 2016. Combing through that pile of cash for signs of Russian dirty work is a tremendously complex, if not impossible, task.


Is there anything the government can do about this?

Facebook has never been particularly welcoming of government intervention. In 2011, it asked the Federal Election Commission to exempt it from rules requiring political advertisers to disclose in an ad who paid for that ad. Facebook argued its ads should be regulated as "small items," like campaign buttons. The FEC failed to reach a decision on the issue, so Facebook and other platforms have been allowed to host political ads with no disclosures.
Now, some members of Congress are looking to change that. Democratic Senators Mark Warner and Amy Klobuchar are working on a bill that would require those disclosures and also require tech platforms with more than 1 million users to publicly track all "electioneering communication" purchased by anyone spending at least $10,000 on the platform. The FEC defines electioneering communication as ads "that refer to a federal candidate, are targeted to voters and appear within 30 days of a primary or 60 days of a general election." For now, the term applies only to broadcast ads.
The FEC is also re-opening comments on rules related to online political ads—the same rules the FEC failed to clarify back in 2011. Last week, 20 members of Congress sent a letter to the FEC urging the agency to develop guidelines for platforms like Facebook.
Were these Facebook ads the only way that foreigners tried to influence the 2016 election?

Hardly. Earlier this year, WIRED investigated a wave of fake news sites that emerged in Macedonia last year. The fake news creators wrote phony blogs about Hillary Clinton’s health or the Pope endorsing Trump, and then posted them in key Facebook groups to attract attention. Once the posts drew sufficient traffic, their creators placed Google Ads on their sites to make some extra money. These mostly teenage hoaxsters never needed to touch Facebook’s advertising tools.
Did the Russians use other platforms as well?

Yes. The group Securing Democracy tracks 600 Russia-linked Twitter accounts and analyzes the role they play in promoting certain hashtags. When Twitter meets with Congressional investigators, the use of bots by foreign agents will be central to the conversation. Google, meanwhile, has said it found no evidence of Russians buying ads. But Facebook told WIRED the same thing earlier this summer, before its recent disclosure.

How did the Russians decide which Americans to target with the Facebook ads?

The short answer is we don’t know. There are suspicions that the Russians might have had help from the Trump campaign or its allies. But the Russians may not have needed more than the targeting tools Facebook offers to every advertiser.
Facebook allows any advertiser to upload lists of names or email addresses that it would like to target. In most states, voter files are publicly available for free or for purchase. Advertisers can then design so-called lookalike audiences that have lots in common with the original list. They can target ads based on geography, profession, and interest. Facebook knows the news you read, the posts you like, and what you shop for, along with a million other things about you. The company stitches this information together to make educated guesses about what kind of person you are.
Armed with so much data, a Russian operative would hardly need to call in help. That doesn’t mean they didn’t. It just means we have no evidence so far that they did.
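
The lookalike-audience mechanics described above reduce, at their core, to similarity scoring against an uploaded seed list. The sketch below is a minimal version using cosine similarity over sparse interest vectors; the data shapes and the threshold are assumptions for illustration, not Facebook's actual algorithm:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts of feature -> weight)."""
    dot = sum(w * v.get(f, 0.0) for f, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def lookalike_audience(seed_profiles, candidates, threshold=0.8):
    """Rank candidate users by similarity to the centroid of an uploaded seed list.

    `seed_profiles` is a list of feature dicts (e.g. built from a voter-file
    upload); `candidates` maps user_id to a feature dict. Both are hypothetical.
    """
    centroid = {}
    for profile in seed_profiles:
        for feature, weight in profile.items():
            centroid[feature] = centroid.get(feature, 0.0) + weight / len(seed_profiles)
    scored = {uid: cosine(centroid, prof) for uid, prof in candidates.items()}
    return sorted((uid for uid, s in scored.items() if s >= threshold),
                  key=lambda uid: -scored[uid])
```

The point of the sketch is that nothing here requires inside help: a seed list plus the platform's own profile data is enough.
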
What kind of evidence would there be?

One way to find out if the Trump campaign helped Internet Research Agency would be to compare the targeting criteria the campaign used on Facebook to the targeting criteria the Russian propagandists used. If both groups targeted the same audience, that's worth looking into. Investigators could do the same with any further suspicious accounts Facebook unearths.
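
That comparison boils down to measuring overlap between two sets of targeting criteria. A minimal sketch, with entirely hypothetical criteria, shows how an investigator might quantify it:

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between two criteria sets: 0.0 means disjoint, 1.0 means identical."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical criteria for illustration; real investigators would compare
# the actual ad-targeting records held by Facebook.
campaign_criteria = {"state:MI", "state:WI", "interest:immigration", "age:45-65"}
ira_criteria = {"state:MI", "state:WI", "interest:immigration", "interest:gun-rights"}

overlap = jaccard(campaign_criteria, ira_criteria)  # 3 shared of 5 total = 0.6
```

A high score would not prove coordination, but it would tell investigators where to dig.
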
What about Cambridge Analytica? Could it have been involved?

Cambridge Analytica was President Trump’s data-mining firm during the 2016 election. The Trump team, led by digital director Brad Parscale, worked with Cambridge, as well as the Republican National Committee, to analyze data about the American electorate to guide decisions about where and how to advertise on television and online. That’s not unusual. Hillary Clinton’s campaign tapped similar analyses from a data-analytics firm called BlueLabs, as well as the Democratic National Committee.

What is unusual about Cambridge Analytica is its backstory. The company, which is an American spinoff of the UK-based firm SCL Elections, is financially backed by billionaire financier Robert Mercer, who spends liberally to advance his fiercely conservative views.
Cambridge has also been accused of amassing data from Facebook users—such as what they like on the site and who their friends are—via silly personality quizzes. (Facebook has since closed this privacy gap.) Cambridge combined those results with data from elsewhere to sort people into categories based on their personality types, so advertisers could send them specially tailored messages. Cambridge calls this approach psychographic targeting, as opposed to demographic targeting.
During the election cycle, some Republican operatives outside the Trump campaign accused the company of overselling its technical wizardry. Now, Cambridge’s approach is viewed by some, including Hillary Clinton, as a form of ugly psychological warfare that was unleashed on the American electorate.
Its parent company, SCL, has been known to use questionable methods in other countries’ elections. In Trinidad, it reportedly staged graffiti to give voters the impression that SCL's client had the support of Trinidadian youth. And Cambridge is currently being investigated in the UK for the role it may have played in swaying voters to support Brexit. It’s worth noting, though, that the UK has stricter laws around how citizens’ data can be used near elections. The US does not have the same protections.
Is Cambridge involved with these Russian ads on Facebook?

Not as far as we know. While Cambridge helped the Trump campaign target its own advertisements, there's no evidence so far that Cambridge did the same for any Russians. Whether any connection exists, of course, is a key question both Mueller's team and Congress will continue to investigate.
In a recent BBC interview, Theresa Hong, the former digital content director for the Trump campaign, said Facebook, Google, and Twitter had offices inside the Trump campaign headquarters during the campaign. Is that normal?

Tech companies regularly assign dedicated staffers to political campaigns that advertise on their platforms. Clinton’s campaign also worked closely with Facebook and other tech companies, if not physically side-by-side.

Still, perhaps the least secretive part of the whole affair is the outsized role digital advertising played in the Trump campaign’s strategy. Shortly after the election, Parscale told WIRED, “Facebook and Twitter were the reason we won this thing. Twitter for Mr. Trump. And Facebook for fundraising.” The Trump campaign ran as many as 50,000 variants of its ads each day on Facebook, tweaking the look and messaging to see which got the most traction. Days after the election, Andrew Bleeker, who ran digital advertising for the Clinton campaign, acknowledged that the Trump team used digital platforms “extremely well.” He said the Trump campaign “spent a higher percentage of their spending on digital than we did.”
Could Facebook have prevented this?

That's complicated. Obviously, Facebook was bluffing when it told the FEC in 2011 that disclosing who paid for campaign ads right on the ad would be impractical; that is exactly what Zuckerberg recently announced Facebook will now do.
Still, it’s unclear if those steps would have prevented Russia from spreading misinformation on Facebook. For starters, while the ads Internet Research Agency purchased were about election issues, they weren’t explicitly about the 2016 election. It's not clear those would have been considered election ads under FEC guidelines. And even if they were, the Supreme Court has given nonprofit groups wide latitude to raise money to influence elections both online and offline without revealing their donors. That's why it's called dark money.
Senator Warner recently said, “[Facebook] took down 50,000 accounts in France. I find it hard to believe they’ve only been able to identify 470 accounts in America.” What did he mean, and does he have a point?

Yes and no. In April, Facebook disclosed that it suspended 30,000 accounts, not 50,000, that were spreading fake news in France ahead of elections there. It did not explicitly tie those accounts to Russian actors. Instead, it identified those accounts after updating its tools for identifying fake accounts, adding flags on accounts that, for instance, repeatedly post the same content or suddenly produce a spike in activity.
That means the French example is not directly comparable to the election ads purchased by accounts that Facebook connected to Russia. Facebook is not asserting that those 470 accounts represent the totality of fake accounts on the platform. They’re merely the accounts Facebook has so far linked to Russia. That said, Warner's point is well taken: without more information on how Facebook found those accounts, it’s impossible to know what the company may have missed.
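
Both flags described above, repeated content and sudden activity spikes, are easy to illustrate in miniature. Repeated content can be caught with content fingerprinting; the sketch below shows a toy z-score rule for the spike flag. The window and threshold are invented, and Facebook has not disclosed its actual method:

```python
from statistics import mean, stdev

def activity_spike_days(daily_counts, window=14, z_threshold=3.0):
    """Flag days where an account's post count jumps far above its recent baseline.

    `daily_counts` is a list of posts-per-day for one account; a day is flagged
    when it sits more than `z_threshold` standard deviations above the trailing
    `window`-day mean. All parameters are illustrative assumptions.
    """
    flagged = []
    for day in range(window, len(daily_counts)):
        baseline = daily_counts[day - window:day]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (daily_counts[day] - mu) / sigma > z_threshold:
            flagged.append(day)
    return flagged
```
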
ProPublica recently found that it’s possible to target ads on Facebook to categories of people who identify as “Jew haters” and other anti-Semitic terms. How does that relate to this?

These are distinct issues, but there is some overlap. ProPublica recently reported that it had purchased $30 of ads, targeted at users Facebook thought might be interested in terms like “Jew hater,” “how to burn Jews,” and “why Jews ruin the world.” Facebook’s advertising tool had scraped these terms from users' profiles and turned them into categories advertisers could target. Those categories comprised a tiny subset of the 2 billion Facebook users, but ProPublica showed that it could assemble such a cohort and send its members targeted ads in 15 minutes. Facebook temporarily changed its ad tool to prevent these user-generated terms from being turned into advertising categories.
The company views this as a separate issue from Russian ads. And yet, both incidents point to a lack of oversight of Facebook’s advertising platform. The reason Russians could easily buy political ads to sway American voters is the same reason anyone can target ads to neo-Nazis: Facebook’s advertising systems are largely automated, and anyone can set up an ad campaign with little human oversight from Facebook.
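
The failure mode ProPublica exposed can be illustrated in a few lines: when free-text profile fields become targetable ad categories with no human review, hateful entries flow straight into the tool. Everything in this sketch (the field name, the blocklist, the threshold) is an assumption for illustration, not Facebook's pipeline:

```python
def build_ad_categories(profiles, blocklist, min_users=1):
    """Turn user-entered profile fields into targetable ad categories.

    `profiles` is a list of dicts with a free-text "interests" list; the naive
    version exposes every term, which is the gap ProPublica demonstrated.
    """
    counts = {}
    for profile in profiles:
        for term in profile.get("interests", []):
            key = term.strip().lower()
            counts[key] = counts.get(key, 0) + 1
    # The missing safeguard: screen user-generated terms before offering them
    # to advertisers, instead of exposing whatever users typed.
    return {term: n for term, n in counts.items()
            if n >= min_users and not any(bad in term for bad in blocklist)}
```

Even this crude blocklist is only a stopgap; Sandberg's announced fix adds human review, described below.
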
Last week, Facebook Chief Operating Officer Sheryl Sandberg issued a statement saying Facebook had restored the ability of advertisers to target user-generated terms, but had taken measures to weed out the bad ones. It's also adding additional human oversight to the process of selling ads, and is setting up a system through which anyone can report abuses of the ad tool. Something tells us they're in for an onslaught.
https://www.wired.com/story/what-we-kno ... nd-russia/


Russians targeted Black Lives Matter and other hot-button issues in Facebook ads

The Facebook ads that Russian operatives purchased to try to influence U.S. voters during the 2016 election will be shared with congressional investigators in a matter of days, said a person with knowledge of the situation, who requested anonymity in exchange for sharing details on the advertising. The ads included ones highlighting the Black Lives Matter movement and other hot-button, divisive issues, this person said.

The content of the ads was previously reported by The Washington Post.

Facebook said earlier this month that an internal investigation had found that groups with ties to Russia had spent $100,000 on ads designed to influence the attitudes of U.S. voters in the last presidential campaign.

The investigation found approximately $100,000 in ad spending from June 2015 to May 2017 associated with roughly 3,000 ads.

That was a reversal from late last year, when CEO Mark Zuckerberg argued that fake news on Facebook did not play a key role in the election's outcome.

Facebook had contacted the FBI during the summer of 2016 when it first suspected Russian involvement but was unable to confirm its suspicions until recently.

Sen. Richard Burr, the chairman of the Senate Intelligence Committee, said last week that a hearing on the matter is a question of "when."
https://www.cnbc.com/2017/09/25/russian ... ction.html

Re: The creepiness that is Facebook

Postby seemslikeadream » Wed Sep 27, 2017 8:38 am

ELECTION 2016
Was Facebook Fooled by the Russians, or Did It Know All Along?
Facebook's role in influencing the 2016 election is only now being understood.
By Steven Rosenfeld / AlterNet September 26, 2017, 2:26 PM GMT

Facebook’s political troubles do not appear to be anywhere near ending, despite mea culpas by founder Mark Zuckerberg and COO Sheryl Sandberg acknowledging that the global social media giant’s platform was used by Russian troll accounts to influence the 2016 election and that its automated advertising platform can be gamed to foment racist messaging.

The past two weeks' media revelations about how, as one New York Times piece put it, Zuckerberg created a 21st-century Frankenstein, a behemoth he cannot control, read like a screenplay from the latest Netflix political thriller. Last weekend, the Washington Post reported that Facebook discovered a Russian-based operation “as it was getting underway” in June 2016, using its platform to spread anti-Democratic Party propaganda. Facebook alerted the FBI. After Facebook traced “a series of shadowy accounts” that were promoting the stolen emails and other Democratic campaign documents, it “once again contacted the FBI.”

But Facebook “did not find clear evidence of Russian disinformation or ad purchases by Russian-linked accounts,” the Post reported. “Nor did any U.S. law enforcement or intelligence officials visit the company to lay out what they knew.” Instead, it was preoccupied with a rash of highly propagandistic partisan pages, both left and right, that came out of nowhere in 2016, the Post reported. These websites stole content from real news sites and twisted it into incendiary claims, drawing readers and shares that exploited Facebook's royalty-producing business model. “The company found that most of the groups behind the problematic pages had clear financial motives, which suggested that they weren’t working for a foreign government,” the Post said.

This messaging fog prompted Zuckerberg to say it was “crazy” for anyone to suggest that fake news on Facebook played a role in Trump’s electoral victory and the GOP triumph. The Post’s biggest scoop—after noting that Facebook was telling federal agencies during the election about Russian trolling activities, even if it misread them—was President Obama pulling Zuckerberg aside at an international conference, where “Obama made a personal appeal to Zuckerberg to take the threat of fake news and political disinformation seriously... [or] it would only get worse in the next presidential race.”

The Post’s account is a remarkable example of Washington-based reporting. Sources inside Facebook, law enforcement and intelligence agencies are saying that they held in their hands the dots that are only being connected today—much like the federal agents who were tracking some of the 9/11 hijackers before the terrorist attack. Facebook has since changed its tune, giving special counsel Robert Mueller’s investigation of Russia-Trump campaign collusion, and the congressional inquiries, 3,000 Facebook ads placed by one Russian front group. Zuckerberg also issued an online video last week, in which he said, “I don’t want anyone to use our tools to undermine democracy,” and pledged Facebook would now disclose the names of businesses that place political ads.

Meanwhile, after ProPublica this month reported it could use Facebook’s automated ad placement service to target people describing themselves as “Jew haters” or who used terms like “how to burn Jews,” Sandberg announced the colossus had badly erred, and would revamp its ad filtering and targeting system. “The fact that hateful terms were even offered as options was totally inappropriate and a fail on our part,” she said. “Hate has no place on Facebook, and as a Jew, as a mother and as a human being, I know the damage that can come from hate.”

But even as Zuckerberg makes public commitments about supporting American democracy, and Sandberg makes heartfelt declarations against enabling hate, top technology writers and editorial pages aren’t quite buying Facebook’s mea culpas. The most sympathetic pieces say there was no willful malice on Facebook's part. They add that when Facebook asked the feds to help it figure out the Russia puzzle, it was met with silence from federal law enforcement agencies. That deer-in-the-headlights narrative has led to characterizing its trials as “Facebook’s Frankenstein moment.” As New York Times business writer Kevin Roose quoted a former Facebook advertising executive, “The reality is that if you’re at the helm of a machine that has two billion screaming, whiny humans, it’s basically impossible to predict each and every nefarious use case… It’s a whack-a-mole problem.”

The Times editorial page was less forgiving, calling Zuckerberg and Sandberg’s awakening “belated,” noting that Facebook has opposed federal regulation of online political messaging, and that Zuckerberg’s remedy of disclosing names of businesses that place ads is easily evaded by campaign operatives. “Disclosing the name of Facebook business accounts placing political ads, for instance, will be of little value if purchasers can disguise their real identity—calling themselves, say, Americans for Motherhood and Apple Pie,” the Times said. “Further, even if Facebook succeeds in driving away foreign propaganda, the same material could pop up on Twitter or other social media sites.”

Actually, the Post reported that Facebook has recently deployed software that was able to “disable 30,000 fake accounts” in May's French national election, and that software was successfully used last weekend in Germany’s national election. That disclosure by the Post, and other investigative reporting by the Times about how Facebook has worked with foreign governments to censor posts by critics and posted pro-regime propaganda, suggests Facebook is not quite the innocent bystander it professes to be.

The Times ran an extensive piece on how Facebook’s future lies with finding hundreds of millions of new users overseas, including in countries where governments want to control the media. Part of trying to access markets like China, where Facebook has been banned, includes allowing Chinese state media outlets to buy pro-government ads targeting Facebook's Hong Kong users. In other words, its ad sales business model has looked past political propaganda to cash in, which Russia adroitly exploited in 2016. Of course, there is a double standard here: Russia used Facebook to aim at U.S. elections, upsetting America’s political establishment, whereas when China and other nations use Facebook for political purposes, it's apparently okay.

Last week Jim Rutenberg, the Times' “Mediator” columnist, wrote there’s a veritable mountain of detail that still has not been made public by Facebook concerning 2016’s election. This goes far beyond releasing the 3,000 ads bought by a single Russian troll account it shared with Mueller and congressional committees. So far, we know the ads amplified “divisive social and political messages,” that the users who bought the ads were fabricated, and that some ads targeted specific states and voter segments. But what we don’t know, Rutenberg noted, is what those ads looked like, what they specifically said, whose accounts sent them, how many people saw and shared them, which states and counties were targeted, and what actions the ads urged people to take. The Daily Beast reported that at least one ad organized an anti-refugee rally in Idaho, and another report said Russian trolls promoted 17 Trump rallies in Florida.

On Monday afternoon, the Post reported it had spoken to congressional sources familiar with the contents of the 3,000 ads, who said they used references to groups like Black Lives Matter to incite different blocs of voters. "The Russian campaign—taking advantage of Facebook’s ability to simultaneously send contrary messages to different groups of users based on their political and demographic characteristics—also sought to sow discord among religious groups. Other ads highlighted support for Democrat Hillary Clinton among Muslim women," the Post said.

For these reasons and others, Facebook’s political troubles do not appear to be ending soon. Predictably, some Democratic lawmakers are saying it’s time to require anyone who buys an online political ad to disclose it. But that notion, apart from going nowhere in a GOP-majority Congress, only scratches the surface of what’s going on. Campaign finance laws have proven to be utterly incapable of stopping so-called dark money in recent years, such as front groups created by the Koch brothers or state chambers of commerce. These laws can only regulate explicit political speech, such as ads telling people to vote for or against a certain candidate. How are they going to prevent innuendo-filled messaging, from fake messengers, on a deregulated internet?

Companies like Facebook, which track and parse the behavior of many millions of Americans online and sell ads based on those metrics, have embraced all the benefits of their business model. But they have avoided taking the lead in preventing nefarious uses of their platforms until they're shamed in public, as with ProPublica’s recent outing of Facebook’s automated ad platform that can be gamed by anti-Semites, or disclosures like the Post report that Obama tried to give Zuckerberg a wakeup call last November.

Internet “companies act as if they own our data. There’s no reason why that should be the case…That data is an x-ray of our soul,” Franklin Foer, author of the new book, World Without Mind: The Existential Threat of Big Tech, told KQED-FM in San Francisco on Monday. But these companies aren’t regulated in the U.S. The firms own vast files on virtually anyone who is likely to vote, let alone shop. And their automated systems rolled out the red carpet to anyone seeking to target 2016’s voters, from the presidential campaigns to Russian trolls.
http://www.alternet.org/election-2016/w ... s-election
Mazars and Deutsche Bank could have ended this nightmare before it started.
They could still get him out of office.
But instead, they want mass death.
Don’t forget that.
User avatar
seemslikeadream
 
Posts: 32090
Joined: Wed Apr 27, 2005 11:28 pm
Location: into the black
Blog: View Blog (83)

Re: The creepiness that is Facebook

Postby seemslikeadream » Wed Sep 27, 2017 10:20 am

Facebook's underclass: as staffers enjoy lavish perks, contractors barely get by

The social network has an army of behind-the-scenes employees who can’t afford to live in an area with out-of-control housing costs


Tuesday 26 September 2017 05.00 EDT Last modified on Tuesday 26 September 2017 13.18 EDT
As one of the most desirable employers in Silicon Valley, Facebook has built a small town square for staff at its headquarters in Menlo Park. After leaving the car with a valet attendant, employees can work out at the gym, take their bikes for a tune-up, drop off their dry cleaning, pop by the company dentist or doctor’s office, play video games in the arcade, or even sit down for a trim at the barber’s shop.

But keeping all of those amenities running requires an army of subcontracted contingent workers, including bicycle mechanics, security guards and janitors.


Maria Gonzalez, a janitor at Facebook, is part of that battalion. She said she liked working at Facebook and didn’t resent the engineers and product managers she cleans up after. “I know that they are the ones that are making the money,” she said in Spanish. “They are the ones doing the hard job and getting fair pay.”

But it does strike her as ironic that the most highly paid workers at Facebook are also the ones who get all the free amenities.

“They have free laundry, haircuts, free food at any time, free gym, all the regular things that you have to pay for, but they have it for free,” said Gonzalez, who with her husband spends more than half their combined income on rent in nearby San Jose. “It’s not the same for janitors. We just leave with the check.”

The $500bn company has been conscientious about ensuring that its subcontracted workers are relatively well paid. In May 2015, amid a nationwide movement to raise the minimum wage, the company established a $15 an hour minimum for its contractors, as well as benefits like paid sick leave, vacation and a $4,000 new-child benefit.

But those wages only go so far in a region with out-of-control housing costs. San Francisco and San Jose ranked first and third in the nation in a recent analysis of rents, with one-bedroom apartments in San Jose going for $2,378. The extreme cost of housing is why California has the highest poverty rate in the country, according to a US Census figure that takes into account a region’s cost of living.

Maria Gonzalez is a janitor at Facebook. Photograph: Julia Carrie Wong for the Guardian
“You work for a company that makes so much money, and the pay that they give you is not affordable to live out here,” said Jiovanny Martinez, a security guard at Facebook’s main campus. “You still have to have a second job. You’ll probably never be able to afford a home. It’s a struggle.”

Martinez, 30, actually works three jobs to support himself and his family. His Facebook shift goes from 1.15pm to 10.00pm, so he drives for Lyft in the mornings. On weekends, he picks up shifts as a park ranger. All that work affords him a three-room house that is home to four adults and four children: his wife, their two daughters, his mother-in-law, his wife’s sister, and her two children. The sister and her children sleep in the garage.

Facebook is taking some steps to address the housing crisis. The company’s planned expansion to a new campus includes the construction of 1,500 units of housing, of which 225 will be below market rate.

In the meantime, Unique Parsha will continue driving her home to work.

In July, a Facebook employee alerted security that there was a dog in a car in the parking lot on a hot day. The employee was concerned for the dog’s welfare, but for Parsha, explaining why her miniature poodle was left in the car while she worked her shift as a contractor at Facebook required a very personal disclosure: she’s homeless. Since April, when she left an abusive relationship, the 47-year-old has been sleeping in the parking lot of a 24 Hour Fitness gym when she gets off work at midnight.

“When I get dressed at the gym, I’ll be laughing,” Parsha said. “I look all cute and stuff, and I’m homeless. That’s hecka funny. No one would ever know.”

Parsha is keenly aware of the incongruity of going to work each day at one of the richest companies in the world after sleeping in the back seat of her car. “Sometimes people ask me, ‘Where do you live? What city do you live in?’” she said. “I just feel ashamed.”

Parsha earns well above Facebook’s minimum wage – she is a content specialist, moderating live videos and other content – but she still hasn’t been able to find an apartment or room that she can afford alongside her student loans and other bills. She’s set up a GoFundMe page, and shared it on Facebook.

“It’s not enough pay to survive based on the rent that’s out there. How can people survive? A one-bedroom is at least $1800,” she said, underestimating what she would likely have to shell out for an apartment of her own.

“That’s my whole check right there.”
https://www.theguardian.com/technology/ ... collection
Mazars and Deutsche Bank could have ended this nightmare before it started.
They could still get him out of office.
But instead, they want mass death.
Don’t forget that.
User avatar
seemslikeadream
 
Posts: 32090
Joined: Wed Apr 27, 2005 11:28 pm
Location: into the black
Blog: View Blog (83)

Re: The creepiness that is Facebook

Postby seemslikeadream » Mon Oct 02, 2017 10:42 am

Facebook to Deliver 3,000 Russia-Linked Ads to Congress on Monday
By MIKE ISAAC and SCOTT SHANE OCT. 1, 2017

Mark Zuckerberg speaking at Harvard last May. “It is a new challenge for internet communities to deal with nation-states attempting to subvert elections,” he said last week. “But if that’s what we must do, we are committed to rising to the occasion.” Credit Brian Snyder/Reuters
SAN FRANCISCO — Under intensifying scrutiny from federal investigators and the public, Facebook said on Sunday that it planned to turn over more than 3,000 Russian-linked advertisements to congressional investigators on Monday.

The decision, which comes after a week of scathing calls from Congress for details about Facebook’s advertising system, is the latest attempt by a major technology company to disclose the scope of Russian interference in the 2016 presidential election.

Last week, Mark Zuckerberg, the chief executive of Facebook, vowed to work with investigators and other technology companies in an attempt to snuff out the spread of false news stories and bogus accounts across their sites. It is a growing threat that Facebook and similar companies have begun to come to terms with only in recent months.

“It is a new challenge for internet communities to deal with nation-states attempting to subvert elections,” Mr. Zuckerberg said in a live video address on Facebook last week. “But if that’s what we must do, we are committed to rising to the occasion.”

Facebook has yet to disclose the types of advertisements and content the company will hand over. But news reports have linked the posts to issues such as religion, race, gun ownership and other politically charged topics.


Mark Warner, Democrat of Virginia and vice chairman of the Senate Intelligence Committee, has been a fierce critic of Facebook and other companies for not disclosing the extent to which foreign agents had a hand in shaping the outcome of the 2016 election.

Congress has also raised the possibility of regulation of political advertising across social media sites. Last month, congressional Democrats asked the Federal Election Commission to advise on ways to prevent illicit foreign influence on American elections via social media. Facebook, Twitter and Google — unlike television, print and radio — are not currently bound by law to disclose who purchases their ads.

In an attempt to pre-empt such regulation, Facebook has pledged to overhaul its advertising systems to give more insight into the identity of those who purchase political ads on the network. In the future, Mr. Zuckerberg said, there would be ways for users to view all of the political ads connected to a particular advertiser on Facebook.

Facebook is not the only technology company under such scrutiny. In a meeting with congressional investigators last week, Twitter disclosed that it had deleted hundreds of Russian-linked accounts on its platform. Google, too, is in the midst of conducting an internal investigation into whether its advertising products played a role in Russian interference in the election.
https://www.nytimes.com/2017/10/01/tech ... a-ads.html


Why Russia Is Threatening to Block Facebook
http://www.hollywoodreporter.com/news/w ... ok-1044763
Mazars and Deutsche Bank could have ended this nightmare before it started.
They could still get him out of office.
But instead, they want mass death.
Don’t forget that.
User avatar
seemslikeadream
 
Posts: 32090
Joined: Wed Apr 27, 2005 11:28 pm
Location: into the black
Blog: View Blog (83)

Re: The creepiness that is Facebook

Postby seemslikeadream » Wed Oct 04, 2017 8:19 am

WEAPONIZING FACEBOOK FOR RUSSIA'S INVASION OF UKRAINE

Michael MacKay, Radio Lemberg, 04.10.2017


Mark Zuckerberg turned a billion people into unpaid data entry clerks and then weaponized Facebook for Russia. The Internet is supposed to be about using technology for human liberation and for free communication, but the Facebook piece of it has become a medium for Russia to wage information warfare and subvert democracy all around the world.

Prominent in the news now is the way that Russian intelligence services (FSB, GRU, SVR) and Facebook worked together to falsify the United States presidential election that was held on 8 November 2016. But the Russia-Facebook alliance goes back at least to the start of Russia’s invasion of Ukraine, which began its military phase on 20 February 2014.

Ukrainian Facebook users have been prolific in posting reports, photos, and video about the Russian invasion of Crimea and of Donbas, and commenting extensively about what was happening in their homeland. Needless to say, the words of Ukrainian Facebook users were less-than-kind when talking about the foreign invaders from Muscovy who had come to Crimea and Donbas to attack them in their homes. When your friends and relatives are killed or injured, when your home is damaged or destroyed, and when you are forced to become a refugee in your own country, you are likely to say very harsh things about the people who did this to you.

At the same time Russian information warriors were posting voluminous fake news, such as the infamous “crucified boy” false report that was first put out by Russian propaganda TV on 12 July 2014. Using huge numbers of “sock puppet” accounts, the Russians amplified fake news to put it at the top of “most viewed right now” lists on social media and news aggregators. The Russian troll army also worked in large numbers to suppress or ban pro-Ukrainian accounts. En masse, they would report to Facebook any posting critical of Russia’s invasion of Ukraine as being either “nudity/pornography” or “hate speech” against an identifiable group. It turns out the category of the complaint didn’t matter, as the algorithm used by Facebook responded to the volume of complaints, not the substance of them. Ukrainian accounts accurately reporting the war were punished with banishment, and Russian accounts spreading fake news were rewarded with prominence.
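
The volume-over-substance flaw the author describes is easy to sketch. Below is a minimal, hypothetical Python illustration (not Facebook's actual code; the threshold, function name, and report labels are all invented) of why a moderation rule that only counts complaints can be gamed by coordinated mass reporting:

[code]
from collections import Counter

REPORT_THRESHOLD = 100  # hypothetical auto-action cutoff

def should_suspend(reports):
    """Volume-only rule: act once total reports cross the threshold,
    regardless of which category each report claims."""
    return len(reports) >= REPORT_THRESHOLD

# A coordinated brigade files 150 bogus reports under mixed labels.
brigade = ["nudity"] * 75 + ["hate_speech"] * 75
print(Counter(brigade))         # the category mix is irrelevant...
print(should_suspend(brigade))  # ...True: the account is punished anyway
[/code]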

On 13 May 2015, Maksym Savanevsky, co-founder of the Ukrainian Crisis Media Centre, wrote a letter to Mark Zuckerberg, chairman and chief executive officer of Facebook. Savanevsky illustrated how Facebook had ceased to be a space of free communication for Ukrainians. Facebook was aggressively blocking accounts of Ukrainian activists and patriots, and deleting web sites that were documenting Russian war crimes and human rights violations. The President of Ukraine, Petro Poroshenko, encouraged Zuckerberg to open a Ukraine office, so that Facebook employees who understood Ukraine and its people, as well as the nuance of language, would be moderating content from where it was being generated.

On 14 May 2015, Mark Zuckerberg hosted a “town hall” meeting at Facebook headquarters. The question about Facebook suppressing pro-Ukraine content received the most votes for consideration. Zuckerberg dismissed the complaint out-of-hand, admitting only to a minor flaw in the algorithm that identified “hate speech” as “nudity.” He ignored the key point that Russia was invading Ukraine, and persisted in defining criticism of Russians for what they were doing as hate speech against an identifiable group. He stuck to his guns and insisted that Facebook would continue to censor Ukrainian Facebook users using Russian-speaking – and Russo-centric – employees working out of a Dublin office. Facebook would not open (and still hasn’t opened) an office in Ukraine, the largest country that is wholly within Europe.

Adding insult to injury, Zuckerberg laughed. He laughed at the cry for help from Ukrainians who only wanted to have their stories heard. The questioner at the Facebook “town hall” laughed, and the Facebook employees laughed right along with Zuckerberg. It was a great joke to Facebook that Internet users in a time of war would take being banned and being censored as the life-and-death issue that it truly is. Maksym Savanevsky had told Facebook why Ukrainians needed to get their story out. He told them about the Heavenly Hundred who were murdered on Maidan by the Yanukovych regime and about the hastily mobilized volunteer battalions that had saved Europe by stopping the Russian invaders in the east. He had told them of the suffering and sacrifice of Ukrainians for their own freedom and for Europe’s security. To that, Zuckerberg and Facebook laughed.

Mark Zuckerberg went on to extend the weaponization of Facebook from Russia’s invasion of Europe in Ukraine to Russia’s falsification of the US presidential election.

In a big chunk of cyberspace, Facebook has turned the Internet promise of liberation and free speech into a reality of captivity and loss of voice. Vint Cerf and Bob Kahn and Tim Berners-Lee and the other Internet pioneers might be looking at Mark Zuckerberg and saying (as the first telegraph message did in 1844): “What hath God wrought.”
http://radiolemberg.com/ua-articles/ua- ... of-ukraine


And There It Is

By JOSH MARSHALL Published OCTOBER 3, 2017 9:50 PM

Just out from CNN: Russia Facebook campaign specifically targeted Michigan and Wisconsin and key demographic groups within those states.

From CNN …

A number of Russian-linked Facebook ads specifically targeted Michigan and Wisconsin, two states crucial to Donald Trump’s victory last November, according to four sources with direct knowledge of the situation.


Some of the Russian ads appeared highly sophisticated in their targeting of key demographic groups in areas of the states that turned out to be pivotal, two of the sources said. The ads employed a series of divisive messages aimed at breaking through the clutter of campaign ads online, including promoting anti-Muslim messages, sources said.

It has been unclear until now exactly which regions of the country were targeted by the ads. And while one source said that a large number of ads appeared in areas of the country that were not heavily contested in the elections, some clearly were geared at swaying public opinion in the most heavily contested battlegrounds.

Here’s the whole story.
http://talkingpointsmemo.com/edblog/and ... re-1087249


Exclusive: Russian-linked Facebook ads targeted Michigan and Wisconsin
By Manu Raju, Dylan Byers and Dana Bash, CNN
Updated 6:57 AM ET, Wed October 4, 2017



(CNN)A number of Russian-linked Facebook ads specifically targeted Michigan and Wisconsin, two states crucial to Donald Trump's victory last November, according to four sources with direct knowledge of the situation.

Some of the Russian ads appeared highly sophisticated in their targeting of key demographic groups in areas of the states that turned out to be pivotal, two of the sources said. The ads employed a series of divisive messages aimed at breaking through the clutter of campaign ads online, including promoting anti-Muslim messages, sources said.
It has been unclear until now exactly which regions of the country were targeted by the ads. And while one source said that a large number of ads appeared in areas of the country that were not heavily contested in the elections, some clearly were geared at swaying public opinion in the most heavily contested battlegrounds.
Michigan saw the closest presidential contest in the country -- Trump beat Democratic nominee Hillary Clinton by about 10,700 votes out of nearly 4.8 million ballots cast. Wisconsin was also one of the tightest states, and Trump won there by only about 22,700 votes. Both states, which Trump carried by less than 1%, were key to his victory in the Electoral College.
The sources did not specify when in 2016 the ads ran in Michigan and Wisconsin.

As part of their investigations, both special counsel Robert Mueller and congressional committees are seeking to determine whether the Russians received any help from Trump associates in where to target the ads.
White House officials could not be reached for comment on this story. The President and senior White House officials have long insisted there was never any collusion with Russia, with Trump contending the matter is a "hoax."
The focus on Michigan and Wisconsin also adds more evidence that the Russian group tied to the effort was employing a wide range of tactics potentially aimed at interfering in the election.
Facebook previously has acknowledged that about one quarter of the 3,000 Russian-bought ads were targeted to specific geographic locations, without detailing the locations. The company said of the ads that were geographically targeted "more ran in 2015 than 2016." In all, Facebook estimates the entire Russian effort was seen by 10 million people.
Rep. Adam Schiff, the top Democrat on the House Intelligence Committee, told CNN the panel was still assessing the full geographical breakdown of the Russian ads and whether there was any assistance from individuals associated with the Trump campaign.
"Obviously, we're looking at any of the targeting of the ads, as well as any targeting of efforts to push out the fake or false news or negative accounts against Hillary Clinton, to see whether they demonstrate a sophistication that would be incompatible with not having access to data analytics from the campaign," Schiff said Tuesday evening. "At this point, we still don't know."
One person with direct knowledge of the matter said that some of the ads were aimed at reaching voters who may be susceptible to anti-Muslim messages, even suggesting that Muslims were a threat to the American way of life. Such messaging could presumably appeal to voters attracted to Trump's hard-line stance against immigration and calls to ban Muslims from entering the United States.
Schiff said that the committee was planning to investigate ads that suggested Muslims supported Clinton, and how those were geared to people who had been searching online for the Muslim Brotherhood and other items to suggest they were critical of Islam.


The ads were part of roughly 3,000 that Facebook turned over to congressional investigators this week as part of the multiple Capitol Hill inquiries into Russia meddling in the 2016 elections.
CNN reported last week that at least one of the Facebook ads bought by Russians during the 2016 presidential campaign referenced Black Lives Matter and was specifically targeted to reach audiences in Ferguson, Missouri and Baltimore, according to sources with knowledge of the ads.
Lawmakers have only started to assess the scope of the data, and sources from both parties said the 3,000 ads touched on a range of polarizing topics, including the Second Amendment and civil rights issues. The ads were aimed at suppressing the vote and sowing discontent among the electorate, the sources said.
Members from both parties said that there was a clear sophistication in the Russian ad campaign, and they said they were only just beginning to learn the full extent of the social media efforts.
"It's consistent with everything else we've seen in terms of Russian active measures -- a combination of cyber, of propaganda and paid and social media," said Sen. John Cornyn, the No. 2 Republican who sits on both the Senate Intelligence and Judiciary panels, both of which received the Facebook ads. "So, we're just looking at the tip of the iceberg."
http://www.cnn.com/2017/10/03/politics/ ... index.html
Mazars and Deutsche Bank could have ended this nightmare before it started.
They could still get him out of office.
But instead, they want mass death.
Don’t forget that.
User avatar
seemslikeadream
 
Posts: 32090
Joined: Wed Apr 27, 2005 11:28 pm
Location: into the black
Blog: View Blog (83)

Re: The creepiness that is Facebook

Postby seemslikeadream » Mon Oct 09, 2017 8:44 am

GAME CHANGE
Russia Recruited YouTubers to Bash ‘Racist B*tch’ Hillary Clinton Over Rap Beats
Wannabe YouTube stars and diehard Donald Trump supporters ‘Williams & Kalvin’ totally swear they’re from Atlanta. In reality, they were working for the Kremlin.

BEN COLLINS
GIDEON RESNICK
SPENCER ACKERMAN
10.08.17 9:00 PM ET
According to the YouTube page for “Williams and Kalvin,” the Clintons are “serial killers who are going to rape the whole nation.” Donald Trump can’t be racist because he’s a “businessman.” Hillary Clinton’s campaign was “fund[ed] by the Muslim.”
These are a sample of the videos put together by two black video bloggers calling themselves Williams and Kalvin Johnson, whose social media pages investigators say are part of the broad Russian campaign to influence American politics. Across Facebook, Twitter, Instagram, and YouTube, they purported to offer “a word of truth” to African-American audiences.
“We, the black people, we stand in one unity. We stand in one to say that Hillary Clinton is not our candidate,” one of the men says in a November video that warned Clinton “is going to stand for the Muslim. We don’t stand for her.”

Williams and Kalvin’s content was pulled from Facebook in August after it was identified as a Russian government-backed propaganda account, The Daily Beast has confirmed with multiple sources familiar with the account and the reasons for its removal. Williams and Kalvin’s account was also suspended from Twitter in August. But the YouTube page for Williams and Kalvin remains live at press time.
It’s reminiscent of the Russian attempts to impersonate a California-based Muslim group and piggyback off of the Black Lives Matter protests to spread the Kremlin’s message. But this time, the Kremlin operation used real people, not just memes and hijacked hashtags.
The discovery of living, breathing, real-life avatars for Kremlin talking points deepens and complicates the emerging picture of how Russian propaganda reached what Facebook alone estimated last week were 10 million users in the United States—a number considered by many outside experts to be a lowball estimate.
‘This Woman is a Witch’

Videos published by Williams and Kalvin in late 2016, especially in October, often engaged in fever swamp theories about Hillary Clinton and in some cases promoted Donald Trump directly.
One specific video published in October, prior to the presidential election, refers to Hillary Clinton as an “old racist bitch.”
“She’s a fucking racist,” the host says over a subdued rap beat. “And this woman is a witch,” he says as a picture portrays Clinton in Wizard of Oz attire. He goes on to praise Julian Assange for releasing hacked emails. “This woman, she’s sick on her head.”
Other videos are more explicit about urging people to vote for Trump.
“This is time for change. This is why I say that let our vote go for Trump. Because this man is a businessman. He’s not a politician. We can have deal with him,” Williams says in a video published in August of 2016. “Because I don’t see him as a racist. Because any businessman cannot be a racist because when you are a racist, then your business is going down.” He then makes a black-power fist as he endorses Trump.

For good measure, the video also stated that Barack Obama’s legacy was “police brutality, injustice [and a] lack of education for our children,” illustrated with Obama’s face giving way to Samuel L. Jackson’s character from Django Unchained.
Williams and Kalvin’s content on one social network did not stay penned in there. While the videos only racked up hundreds of views on YouTube at press time, some Williams and Kalvin videos on Facebook reached thousands of people. Before the account was shuttered, Williams and Kalvin’s Facebook page had 48,000 fans.
Facebook and Twitter declined to comment for this article. As with previous Daily Beast investigations into Russian propaganda on Facebook, the company did not challenge The Daily Beast’s reporting. Facebook says that as a matter of policy, it shares information with its competitors.
“We have been working with many others in the technology industry, including with Google and Twitter, on a range of elements related to this investigation,” its vice president for policy and communications, Elliot Schrage, blogged late last week.
Google did not respond to specific questions about the Williams and Kalvin account.
“All videos uploaded to YouTube must comply with our Community Guidelines and we routinely remove videos flagged by our community that violate those policies. We also terminate the accounts of users who repeatedly violate our Guidelines or Terms of Service,” Google spokesperson Andrea Faville told The Daily Beast.

Sen. Mark Warner (D-VA), the vice chair of the intelligence committee, encouraged the social media companies to deepen their shared understanding of Russian propaganda and inform unsuspecting users of its presence.
“It’s incumbent upon each social media company to dedicate the resources necessary to conducting their own robust internal investigations about how Russians may have used their platforms to sow misinformation and propaganda during the 2016 election, and to work with Congress to put in place standards and safeguards to prevent the Russians or other bad actors from doing the same thing again in the future,” Warner told The Daily Beast.
“It’s also critically important for each of these companies to alert users exposed to this content—content created and disseminated to sow division, disinformation, and discord—that is associated with accounts managed by Russian actors.”
Requests for comment from Williams and Kalvin through their Facebook and Twitter accounts, which contained mostly political messages and almost no interaction with other people, went unreturned.
Efforts to reach out to Facebook friends of Williams and Kalvin, who also joined their sole Facebook group, “#BlackLivesMatter against #PoliceBrutality,” also did not receive a response. None of Williams or Kalvin’s Facebook friends are from Atlanta, where the two claim they live on Facebook and in videos. On Facebook, both Williams and Kalvin claim their hometown is Owerri in Nigeria.
In 2015, Williams Johnson claimed he had just spent his “first Thanksgiving with my LIL BROTHER in America!” and attached a since-deleted tweet from Alex Jay, who is a rapper and Instagram model. Jay told The Daily Beast he had never heard of Williams.
“My last name is Johnson, but nope,” he said. “Don’t know anyone with that name.”

‘I Support Bernie Sanders’

While Kalvin Johnson’s personal page appears to still be active on Facebook, the most recent post is from November 2015. One of the posts from that month includes a link to a story about Hillary Clinton wanting to censor New York’s Laugh Factory comedy club. “Hillary must be in prison for this!” the account wrote with the link attached.
The pair also promoted a shirt labeling Bill Clinton as a rapist in an October video called “A word of truth about a rapist’s wife.”
“To say the truth, Bill Clinton is a rapist. And there is a lot of fact to prove it,” the host says, before saying the Clintons are “serial killers and they are going to rape the whole nation.”
The video concludes with the line: “We have to do all we can to not allow this racist bitch to become the next president.”
In an August video, one of the hosts explicitly endorses the movie Clinton Cash and begins the video by saying, “I support Bernie Sanders.”
“Today is old bitch Clinton time,” the host says before a title card informs people watching that the film will premiere the day prior to the Democratic National Convention.
The rest of the video is an advertisement for the movie, which was based on a book mostly consisting of research from a nonprofit investigative research outfit founded by Breitbart editor Peter Schweizer and Breitbart CEO and ousted White House chief strategist Steve Bannon.
In another video, Williams and Kalvin push the conspiracy that Bill Clinton has an African-American son named Danney Williams, an idea often amplified by right-wing media with no corroborating evidence.
“A black guy who claims to be his son. So when I saw all those headlines, I thought that is kind of fake. And now it seems to be true. Man, this is fucking amazing,” Kalvin says. “Now we can see that he has a son. And his son is a black.”
“A word of truth about Danney Williams” was published three weeks before the election.
“It seems that Hillary has never been wanting sex at all,” he continues. “I wonder how she give birth to a daughter, Chelsea. I think that Bill have no time to fuck Hillary.”
‘Third Parties and Contract Cutouts’

According to Clint Watts, a former FBI counterterrorism agent who testified to the Senate Intelligence Committee on Russian cyberattacks, using third party contractors from both inside Russia and countries with cheap labor is a method used by the Kremlin to “muddy the waters on attribution” of propaganda.
“Often, (the Kremlin) will contract out entities to do this so they can say, ‘You can’t prove that it’s us,’” Watts told The Daily Beast. “It’s pretty routine for them to try to gain resources through third parties and contract cutouts.”
Williams and Kalvin’s videos are not particularly rigorous about nuances of American culture and geography. Kalvin, for example, claims that Baton Rouge is in “L.A.” Another video calls LeBron James the best “basket” player of the year.
Watts called the low quality videos a “weakness in their system.”
“In a normal influence campaign, you do these things called ‘audience analysis’ and ‘product testing’ to see what works before you put it out there. They didn’t. They try everything, then go with what works,” said Watts. “They skipped the product testing phase. They didn’t do it. And they also don’t care.”
Watts said Russia’s foray into YouTube influence is “probably new for them in the U.S. space” after years of social media campaigns across Eastern Europe.
“They thought they were going to do something really cool and amazing—and I’m sure they thought it came out amazing,” he said. “But it didn’t take off, and they showed their hand.”
https://www.thedailybeast.com/russia-re ... -rap-beats



How Facebook ads helped elect Trump
Trump campaign digital director Brad Parscale says Donald Trump won the election on Facebook with highly targeted ads -- and infrastructure was a key issue

President Donald Trump talked on Twitter, but Facebook was the crucial tool that helped elect him, says the man who directed the digital aspects of the Trump campaign. Brad Parscale tells Lesley Stahl how he fine-tuned political ads posted on Facebook to directly reach voters with the exact messages they cared most about – infrastructure key among them -- and had handpicked Republican Facebook employees to guide him. Stahl's report will be broadcast on 60 Minutes Sunday, Oct. 8 at 7 p.m. ET/PT.

"Twitter is how [Trump] talked to the people, Facebook was going to be how he won," Parscale tells Stahl. Parscale says he used the majority of his digital ad budget on Facebook ads and explained how efficient they could be, particularly in reaching the rural vote. "So now Facebook lets you get to…15 people in the Florida Panhandle that I would never buy a TV commercial for," says Parscale. And people anywhere could be targeted with the messages they cared about. "Infrastructure…so I started making ads that showed the bridge crumbling…that's micro targeting…I can find the 1,500 people in one town that care about infrastructure. Now, that might be a voter that normally votes Democrat," he says. Parscale says the campaign would average 50-60,000 different ad versions every day, some days peaking at 100,000 separate iterations – changing design, colors, backgrounds and words – all in an effort to refine ads and engage users.

Parscale received help utilizing Facebook's technology from employees provided by the company, who showed up to work at his office multiple days a week. He says they had to be partisan and he questioned them to make sure. "I wanted people who supported Donald Trump." Parscale calls these Facebook employees "embeds" who could teach him every aspect of the technology. "I want to know everything you would tell Hillary's campaign plus some," he says he told them.

Both campaigns used Facebook's advertising technology extensively to reach voters, but Parscale says the Clinton campaign didn't go as far as using "embeds." "I had heard that they did not accept any of [Facebook's] offers."

The conservative Parscale sees an irony in all this. "These social platforms are all invented by very liberal people on the West and East Coast. And we figure out how to use it to push conservative values. I don't think they thought that would ever happen," says Parscale.
https://www.cbsnews.com/news/how-facebo ... ect-trump/



From "World Without Mind" by Franklin Foer:

[book excerpt images]
Mazars and Deutsche Bank could have ended this nightmare before it started.
They could still get him out of office.
But instead, they want mass death.
Don’t forget that.
User avatar
seemslikeadream
 
Posts: 32090
Joined: Wed Apr 27, 2005 11:28 pm
Location: into the black
Blog: View Blog (83)

Re: The creepiness that is Facebook

Postby seemslikeadream » Wed Oct 11, 2017 6:08 pm

House intel committee to release Russia-linked Facebook ads
By Manu Raju and Jeremy Herb, CNN
Updated 4:58 PM ET, Wed October 11, 2017


The decision ends a stand-off over making the Russia-linked ads public

Facebook turned over 3,000 Russian-linked ads to the intelligence committees
(CNN)The House intelligence committee will release copies of the election-related Facebook ads that were purchased by Russian-linked accounts, the committee leaders said Wednesday.

Following a meeting with Facebook Chief Operating Officer Sheryl Sandberg, the leaders of the House Russia investigation -- Reps. Mike Conaway of Texas and Adam Schiff of California -- said they had reached an agreement to release the Russia-linked content.
"It will be released by our committee," said Schiff.
The decision for the House intelligence committee to release the ads ends a standoff between Congress and Facebook over making the ads public, after Facebook turned over 3,000 Russian-linked ads to the intelligence committees but bristled at the notion of releasing them, citing the company's privacy policy.

Facebook, along with Twitter and Google, is scheduled to testify before the House and Senate intelligence committees for back-to-back public hearings November 1 on Russian efforts to use social media platforms to influence the 2016 US elections.
Conaway said it was unlikely his committee would release the ads before that hearing.
But "my personal hope is we do this as quickly as we can," Conaway said.
The committee planned to work with Facebook to "scrub" personally identifiable information from the ads, Schiff said.
While Conaway and Schiff had previously expressed a desire to release the ads, Senate intelligence committee chairman Richard Burr of North Carolina said he did not want his committee to do so, arguing that any documents turned over to the committee were sensitive and should not be made public.
Conaway said it was too soon to say whether Facebook did enough to protect against Russian efforts to influence the US election through social media.
Any discussion about whether Americans were involved with the Russian-linked Facebook effort was beyond the scope of Wednesday's meeting, Schiff said.
Facebook told Congress last month that it had sold roughly 3,000 ads to 470 Russian accounts tied to the Internet Research Agency, a Kremlin-linked troll farm. Both lawmakers and Facebook representatives have said that the apparent goal of the ads was to amplify political discord by exploiting tensions over hot-button political issues like race, immigration and gun rights.
Sandberg was also meeting Wednesday with House Majority Leader Kevin McCarthy, Minority Leader Nancy Pelosi and Minority Whip Steny Hoyer. She is scheduled to meet with the Congressional Black Caucus on Thursday.
Asked what he wanted to know from Sandberg, Hoyer said: "What they knew and when they knew it."
CNN's Kristin Wilson and Dylan Byers contributed to this report.
http://www.cnn.com/2017/10/11/politics/ ... index.html
Mazars and Deutsche Bank could have ended this nightmare before it started.
They could still get him out of office.
But instead, they want mass death.
Don’t forget that.
User avatar
seemslikeadream
 
Posts: 32090
Joined: Wed Apr 27, 2005 11:28 pm
Location: into the black
Blog: View Blog (83)

Re: The creepiness that is Facebook

Postby seemslikeadream » Sat Oct 14, 2017 11:35 am

What Facebook Did to American Democracy

And why it was so hard to see it coming

ALEXIS C. MADRIGAL OCT 12, 2017 TECHNOLOGY



In the media world, as in so many other realms, there is a sharp discontinuity in the timeline: before the 2016 election, and after.

Things we thought we understood—narratives, data, software, news events—have had to be reinterpreted in light of Donald Trump’s surprising win as well as the continuing questions about the role that misinformation and disinformation played in his election.

Tech journalists covering Facebook had a duty to cover what was happening before, during, and after the election. Reporters tried to see past their often liberal political orientations and the unprecedented actions of Donald Trump to see how 2016 was playing out on the internet. Every component of the chaotic digital campaign has been reported on, here at The Atlantic, and elsewhere: Facebook’s enormous distribution power for political information, rapacious partisanship reinforced by distinct media information spheres, the increasing scourge of “viral” hoaxes and other kinds of misinformation that could propagate through those networks, and the Russian information ops agency.

But no one delivered the synthesis that could have tied together all these disparate threads. It’s not that this hypothetical perfect story would have changed the outcome of the election. The real problem—for all political stripes—is understanding the set of conditions that led to Trump’s victory. The informational underpinnings of democracy have eroded, and no one has explained precisely how.

* * *

We’ve known since at least 2012 that Facebook was a powerful, non-neutral force in electoral politics. In that year, a combined University of California, San Diego and Facebook research team led by James Fowler published a study in Nature, which argued that Facebook’s “I Voted” button had driven a small but measurable increase in turnout, primarily among young people.

Rebecca Rosen’s 2012 story, “Did Facebook Give Democrats the Upper Hand?” relied on new research from Fowler, et al., about the presidential election that year. Again, the conclusion of their work was that Facebook’s get-out-the-vote message could have driven a substantial chunk of the increase in youth voter participation in the 2012 general election. Fowler told Rosen that it was “even possible that Facebook is completely responsible” for the youth voter increase. And because a higher proportion of young people vote Democratic than the general population, the net effect of Facebook’s GOTV effort would have been to help the Dems.

The research showed that a small design change by Facebook could have electoral repercussions, especially with America’s electoral-college format in which a few hotly contested states have a disproportionate impact on the national outcome. And the pro-liberal effect it implied became enshrined as an axiom of how campaign staffers, reporters, and academics viewed social media.

In June 2014, Harvard Law scholar Jonathan Zittrain wrote an essay in New Republic called, “Facebook Could Decide an Election Without Anyone Ever Finding Out,” in which he called attention to the possibility of Facebook selectively depressing voter turnout. (He also suggested that Facebook be seen as an “information fiduciary,” charged with certain special roles and responsibilities because it controls so much personal data.)

In late 2014, The Daily Dot called attention to an obscure Facebook-produced case study on how strategists defeated a statewide measure in Florida by relentlessly focusing Facebook ads on Broward and Dade counties, Democratic strongholds. Working with a tiny budget that would have allowed them to send a single mailer to just 150,000 households, the digital-advertising firm Chong and Koster was able to obtain remarkable results. “Where the Facebook ads appeared, we did almost 20 percentage points better than where they didn’t,” testified a leader of the firm. “Within that area, the people who saw the ads were 17 percent more likely to vote our way than the people who didn’t. Within that group, the people who voted the way we wanted them to, when asked why, often cited the messages they learned from the Facebook ads.”

In April 2016, Robinson Meyer published “How Facebook Could Tilt the 2016 Election” after a company meeting in which some employees apparently put the stopping-Trump question to Mark Zuckerberg. Based on Fowler’s research, Meyer reimagined Zittrain’s hypothetical as a direct Facebook intervention to depress turnout among non-college graduates, who leaned Trump as a whole.

Facebook, of course, said it would never do such a thing. “Voting is a core value of democracy and we believe that supporting civic participation is an important contribution we can make to the community,” a spokesperson said. “We as a company are neutral—we have not and will not use our products in a way that attempts to influence how people vote.”

They wouldn’t do it intentionally, at least.

As all these examples show, though, the potential for Facebook to have an impact on an election was clear for at least half a decade before Donald Trump was elected. But rather than focusing specifically on the integrity of elections, most writers—myself included, some observers like Sasha Issenberg, Zeynep Tufekci, and Daniel Kreiss excepted—bundled electoral problems inside other, broader concerns like privacy, surveillance, tech ideology, media-industry competition, or the psychological effects of social media.

The same was true even of people inside Facebook. “If you’d come to me in 2012, when the last presidential election was raging and we were cooking up ever more complicated ways to monetize Facebook data, and told me that Russian agents in the Kremlin’s employ would be buying Facebook ads to subvert American democracy, I’d have asked where your tin-foil hat was,” wrote Antonio García Martínez, who managed ad targeting for Facebook back then. “And yet, now we live in that otherworldly political reality.”

Not to excuse us, but this was back on the Old Earth, too, when electoral politics was not the thing that every single person talked about all the time. There were other important dynamics to Facebook’s growing power that needed to be covered.

* * *

Facebook’s draw is its ability to give you what you want. Like a page, get more of that page’s posts; like a story, get more stories like that; interact with a person, get more of their updates. The way Facebook determines the ranking of the News Feed is the probability that you’ll like, comment on, or share a story. Shares are worth more than comments, which are both worth more than likes, but in all cases, the more likely you are to interact with a post, the higher up it will show in your News Feed. Two thousand kinds of data (or “features” in the industry parlance) get smelted in Facebook’s machine-learning system to make those predictions.
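
As a rough sketch of the ranking arithmetic described above, assuming only what the article states (the system predicts the probability of each interaction, and shares outweigh comments, which outweigh likes), here is a hypothetical scoring function. The weights and probabilities are invented for illustration:

[code]
# Hypothetical weights reflecting shares > comments > likes.
WEIGHTS = {"share": 3.0, "comment": 2.0, "like": 1.0}

def rank_score(p_like, p_comment, p_share):
    """Expected engagement for one story and one user. The real system
    reportedly derives these probabilities from ~2,000 features fed into
    a machine-learning model; here they are plain inputs."""
    return (WEIGHTS["like"] * p_like
            + WEIGHTS["comment"] * p_comment
            + WEIGHTS["share"] * p_share)

stories = {
    "outrage_bait":   rank_score(0.30, 0.20, 0.15),
    "nuanced_report": rank_score(0.25, 0.05, 0.02),
}
# Order the feed by predicted engagement, highest first.
print(sorted(stories, key=stories.get, reverse=True))
[/code]

On those invented numbers, the outrage bait takes the top slot, which is exactly the dynamic the following paragraphs describe.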

What’s crucial to understand is that, from the system’s perspective, success is correctly predicting what you’ll like, comment on, or share. That’s what matters. People call this “engagement.” There are other factors, as Slate’s Will Oremus noted in this rare story about the News Feed ranking team. But who knows how much weight they actually receive and for how long as the system evolves. For example, one change that Facebook highlighted to Oremus in early 2016—taking into account how long people look at a story, even if they don’t click it—was subsequently dismissed by Lars Backstrom, the VP of engineering in charge of News Feed ranking, as a “noisy” signal that’s also “biased in a few ways” making it “hard to use” in a May 2017 technical talk.

Facebook’s engineers do not want to introduce noise into the system. Because the News Feed, this machine for generating engagement, is Facebook’s most important technical system. Their success predicting what you’ll like is why users spend an average of more than 50 minutes a day on the site, and why even the former creator of the “like” button worries about how well the site captures attention. News Feed works really well.

But as far as “personalized newspapers” go, this one’s editorial sensibilities are limited. Most people are far less likely to engage with viewpoints that they find confusing, annoying, incorrect, or abhorrent. And this is true not just in politics, but in the broader culture.

That this could be a problem was apparent to many. Eli Pariser’s The Filter Bubble, which came out in the summer of 2011, became the most widely cited distillation of the effects Facebook and other internet platforms could have on public discourse.

Pariser began researching the book when he noticed that conservative people, whom he’d befriended on the platform despite his left-leaning politics, had disappeared from his News Feed. “I was still clicking my progressive friends’ links more than my conservative friends’ — and links to the latest Lady Gaga videos more than either,” he wrote. “So no conservative links for me.”

Through the book, he traces the many potential problems that the “personalization” of media might bring. Most germane to this discussion, he raised the point that if every one of the billion News Feeds is different, how can anyone understand what other people are seeing and responding to?

“The most serious political problem posed by filter bubbles is that they make it increasingly difficult to have a public argument. As the number of different segments and messages increases, it becomes harder and harder for the campaigns to track who’s saying what to whom,” Pariser wrote. “How does a [political] campaign know what its opponent is saying if ads are only targeted to white Jewish men between 28 and 34 who have expressed a fondness for U2 on Facebook and who donated to Barack Obama’s campaign?”

This did, indeed, become an enormous problem. When I was editor in chief of Fusion, we set about trying to track the “digital campaign” with several dedicated people. What we quickly realized was that there was both too much data—the noisiness of all the different posts by the various candidates and their associates—and too little. Targeting made it impossible to track the actual messaging that the campaigns were paying for. On Facebook, the campaigns could show ads only to the people they targeted, so we couldn’t see the messages that were actually reaching people in battleground areas. From the outside, knowing what ads were running on Facebook was a technical impossibility, one that the company had fought to keep intact.

Across the landscape, it began to dawn on people: Damn, Facebook owns us.
Pariser suggests in his book that “one simple solution to this problem would simply be to require campaigns to immediately disclose all of their online advertising materials and to whom each ad is targeted,” a requirement that could still be imposed on future campaigns.

Imagine if this had happened in 2016. If there were data sets of all the ads that the campaigns and others had run, we’d know a lot more about what actually happened last year. The Filter Bubble is obviously prescient work, but there was one thing that Pariser and most other people did not foresee: that Facebook would become completely dominant as a media distributor.

* * *

About two years after Pariser published his book, Facebook took over the news-media ecosystem. The company has never publicly admitted it, but in late 2013, it began to serve ads inviting users to “like” media pages. This caused a massive increase in the amount of traffic that Facebook sent to media companies. At The Atlantic and other publishers across the media landscape, it was as if a tide were carrying us to new traffic records. Without hiring anyone else, without changing strategy or tactics, without publishing more, suddenly everything was easier.

While traffic to The Atlantic from Facebook.com increased, most of the new traffic did not, in The Atlantic’s analytics, look like it was coming from Facebook at all. It showed up as “direct/bookmarked” or some variation, depending on the software. It looked like what I had called “dark social” back in 2012. But as BuzzFeed’s Charlie Warzel pointed out at the time, and as I came to believe, it was primarily Facebook traffic in disguise. Between August and October of 2013, BuzzFeed’s “partner network” of hundreds of websites saw a 69 percent jump in traffic from Facebook.
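
As a rough illustration of why that traffic was so hard to attribute, here is a toy version of the classification problem an analytics tool faces. The heuristic, the thresholds, and the function name are assumptions for the sake of the sketch, not how any real analytics package works.

```python
from typing import Optional

def classify_visit(referrer: Optional[str], path: str) -> str:
    """Crudely bucket a pageview by its referrer header."""
    if referrer:
        return "facebook" if "facebook.com" in referrer else "other-referral"
    # No referrer at all: a homepage hit plausibly came from a bookmark
    # or a typed URL, but a deep article page almost never does.
    if path in ("/", "/index.html"):
        return "direct"
    return "dark-social"

# A deep link with no referrer gets the mystery bucket:
print(classify_visit(None, "/technology/archive/2013/10/some-article/"))
# => "dark-social"
```

The likely mechanism, consistent with the experiments described below, is that Facebook's mobile app frequently passed no referrer, so visits that were really Facebook traffic landed in the last bucket.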

At The Atlantic, we ran a series of experiments that showed, pretty definitively from our perspective, that most of the stuff that looked like “dark social” was, in fact, traffic coming from within Facebook’s mobile app. Across the landscape, it began to dawn on people who thought about these kinds of things: Damn, Facebook owns us. They had taken over media distribution.

Why? This is a best guess, proffered by Robinson Meyer as it was happening: Facebook wanted to crush Twitter, which had drawn a disproportionate share of media and media-figure attention. Just as Instagram would later borrow Snapchat’s “Stories” to stall that app’s growth, Facebook decided it needed to own “news” to take the wind out of the newly IPO’d Twitter.

Videos changed the dynamics of the News Feed for anyone trying to understand what the hell was going on.
The first sign that this new system had some kinks came with “Upworthy-style” headlines. (And you’ll never guess what happened next!) Things didn’t just go kind of viral; they went ViralNova, a site that, like Upworthy itself, Facebook eventually smacked down. Many of the new sites had, like Upworthy, which was cofounded by Pariser, a progressive bent.

Less noticed was that a right-wing media was developing in opposition to and alongside these left-leaning sites. “By 2014, the outlines of the Facebook-native hard-right voice and grievance spectrum were there,” The New York Times’ media and tech writer John Herrman told me, “and I tricked myself into thinking they were a reaction/counterpart to the wave of soft progressive/inspirational content that had just crested. It ended up a Reaction in a much bigger and destabilizing sense.”

The other sign of algorithmic trouble was the wild swings that Facebook Video underwent. In the early days, just about any old video was likely to generate many, many, many views; the numbers were insane. Just as an example, a Fortune article noted that BuzzFeed’s video views “grew 80-fold in a year, reaching more than 500 million in April.” Suddenly, all kinds of video—good, bad, and ugly—were doing one, two, three million views.

As with news, Facebook’s video push was a direct assault on a competitor, YouTube. Videos changed the dynamics of the News Feed for individuals, for media companies, and for anyone trying to understand what the hell was going on.

Individuals were suddenly inundated with video. Media companies, despite having no business model for it, were forced to crank out video somehow or risk their pages and brands losing relevance as video posts crowded everything else out.

And on top of all that, scholars and industry observers were used to looking at articles to understand how information was flowing. Now, by far the most viewed media objects on Facebook, and therefore on the internet, were videos with no transcripts and no centralized repositories. In the early days, many successful videos were simply “freebooted” (i.e., stolen) from other places, or reposted. All of which served to confuse and obfuscate the transport mechanisms for information and ideas on Facebook.

By July, Breitbart had surpassed The New York Times’ main account in interactions.
Through this messy, chaotic, dynamic situation, a new media rose up on the Facebook boom to occupy the big filter bubbles. On the right, Breitbart is the center of a new conservative network. A study of 1.25 million election news articles found that “a right-wing media network anchored around Breitbart developed as a distinct and insulated media system, using social media as a backbone to transmit a hyper-partisan perspective to the world.”

Breitbart, of course, also lent its chief, Steve Bannon, to the Trump campaign, creating another feedback loop between the candidate and a rabid partisan press. Through 2015, Breitbart grew from a medium-sized site with a small Facebook page of 100,000 likes into a powerful force shaping the election, with almost 1.5 million likes. In the key metric for Facebook’s News Feed, its posts got 886,000 interactions from Facebook users in January. By July, Breitbart had surpassed The New York Times’ main account in interactions. By December, it was doing 10 million interactions per month, about 50 percent of Fox News’s total, even though Fox News had 11.5 million likes on its main page. Breitbart’s audience was hyper-engaged.

There is no precise equivalent to the Breitbart phenomenon on the left. Rather, the big news organizations are classified as, basically, center-left, with fringier left-wing sites showing far smaller followings than Breitbart commands on the right.

And this new, hyperpartisan media created the perfect conditions for another dynamic that influenced the 2016 election, the rise of fake news.


[Chart: Sites by partisan attention. Source: Yochai Benkler, Robert Faris, Hal Roberts, and Ethan Zuckerman]
* * *

In a December 2015 article for BuzzFeed, Joseph Bernstein argued that “the dark forces of the internet became a counterculture.” He called it “Chanterculture” after the trolls who gathered at 4chan, the meme-creating, often racist message board. Others ended up calling it the “alt-right.” This culture combined people who loved to perpetuate hoaxes with angry Gamergaters, with “free-speech” advocates like Milo Yiannopoulos, and with honest-to-God neo-Nazis and white supremacists. And these people loved Donald Trump.

“This year Chanterculture found its true hero, who makes it plain that what we’re seeing is a genuine movement: the current master of American resentment, Donald Trump,” Bernstein wrote. “Everywhere you look on ‘politically incorrect’ subforums and random chans, he looms.”

When you combine hyper-partisan media with a group of people who love to clown “normies,” you end up with things like Pizzagate, a patently ridiculous and widely debunked conspiracy theory that held that a pedophilia ring was somehow linked to Hillary Clinton. It was just the most bizarre thing in the entire world. And many of the figures in Bernstein’s story were all over it, including several whom the current president has consorted with on social media.

But Pizzagate was only the most Pynchonian of all the crazy misinformation and hoaxes that spread in the run-up to the election.

BuzzFeed, deeply attuned to the flows of the social web, was all over the story through reporter Craig Silverman. His best-known analysis happened after the election, when he showed that “in the final three months of the U.S. presidential campaign, the top-performing fake election-news stories on Facebook generated more engagement than the top stories from major news outlets such as The New York Times, The Washington Post, The Huffington Post, NBC News, and others.”

But he also tracked fake news before the election, as did other outlets such as The Washington Post, showing, for example, that Facebook’s “Trending” algorithm regularly promoted fake news. By September of 2016, even the Pope himself was talking about fake news, by which we mean actual hoaxes or lies perpetrated by a variety of actors.

The fake news generated a ton of engagement, which meant that it spread far and wide.
The longevity of Snopes shows that hoaxes are nothing new to the internet. As early as January 2015, Robinson Meyer reported on how Facebook was “cracking down on the fake news stories that plague News Feeds everywhere.”

What made the election cycle different was that all of these changes to the information ecosystem had made it possible to develop weird businesses around fake news. Some random website posting aggregated news about the election could not drive a lot of traffic. But some random website announcing that the Pope had endorsed Donald Trump definitely could. The fake news generated a ton of engagement, which meant that it spread far and wide.

A few days before the election, Silverman and fellow BuzzFeed contributor Lawrence Alexander traced 100 pro–Donald Trump sites to a town of 45,000 in Macedonia. Some teens there had realized they could make money off the election, and just like that, they became a node in the information network that helped Trump beat Clinton.

Whatever weird thing you imagine might happen, something weirder probably did happen. Reporters tried to keep up, but it was too strange. As Max Read put it in New York Magazine, Facebook is “like a four-dimensional object, we catch slices of it when it passes through the three-dimensional world we recognize.” No one can quite wrap their heads around what this thing has become, or all the things this thing has become.

“Not even President-Pope-Viceroy Zuckerberg himself seemed prepared for the role Facebook has played in global politics this past year,” Read wrote.

And we haven’t even gotten to the Russians.

* * *

Russia’s disinformation campaigns are well known. During his reporting for a story in The New York Times Magazine, Adrian Chen sat across the street from the headquarters of the Internet Research Agency, watching workaday Russian agents/internet trolls head inside. From a former employee, he heard how the place had “industrialized the art of trolling.” “Management was obsessed with statistics—page views, number of posts, a blog’s place on LiveJournal’s traffic charts—and team leaders compelled hard work through a system of bonuses and fines,” he wrote. Of course they wanted to maximize engagement, too!

There were reports that Russian trolls were commenting on American news sites. There were many, many reports of Russia’s propaganda offensive in Ukraine. Ukrainian journalists run a website called StopFake, dedicated to cataloging these disinformation attempts; it has hundreds of posts reaching back to 2014.

The influence campaign just happened on Facebook without anyone noticing.
A Guardian reporter who looked into Russian military doctrine around information war found a handbook that described how it might work. “The deployment of information weapons, [the book] suggests, ‘acts like an invisible radiation’ upon its targets: ‘The population doesn’t even feel it is being acted upon. So the state doesn’t switch on its self-defense mechanisms,’” wrote Peter Pomerantsev.

As more details about the Russian disinformation campaign come to the surface through Facebook’s continued digging, it’s fair to say that it’s not just the state that did not switch on its self-defense mechanisms. The influence campaign just happened on Facebook without anyone noticing.

As many people have noted, the 3,000 ads that have been linked to Russia are a drop in the bucket, even if they did reach millions of people. The real game is simply that Russian operatives created pages that reached people “organically,” as the saying goes. Jonathan Albright, research director of the Tow Center for Digital Journalism at Columbia University, pulled data on the six publicly known Russia-linked Facebook pages. He found that their posts had been shared 340 million times. And those were six of 470 pages that Facebook has linked to Russian operatives. You’re probably talking billions of shares, with who knows how many views, and with what kind of specific targeting.
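
The “billions of shares” figure is easy to sanity-check. Here is the back-of-envelope arithmetic, with the caveat that linear scaling from the six known pages, which were likely among the largest, overstates the true total; the point is only that even heavy discounting leaves the answer in the billions.

```python
# Albright's published numbers: 6 known Russia-linked pages, 340 million
# shares between them, out of 470 pages Facebook identified in total.
known_pages = 6
known_shares = 340_000_000
total_pages = 470

# Naive linear extrapolation (an upper bound, since the known six
# were probably among the biggest pages, not average ones).
naive_total = known_shares / known_pages * total_pages
print(f"{naive_total / 1e9:.1f} billion shares")  # ~26.6 billion
```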

The Russians are good at engagement! Yet, before the U.S. election, even after Hillary Clinton and the intelligence agencies accused Russian intelligence of meddling in the election, and even after news reports suggested that a disinformation campaign was afoot, nothing about the actual operations on Facebook came out.

In the aftermath of these discoveries, three Facebook security researchers, Jen Weedon, William Nuland, and Alex Stamos, released a white paper called Information Operations and Facebook. “We have had to expand our security focus from traditional abusive behavior, such as account hacking, malware, spam, and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people,” they wrote.

“These social platforms are all invented by very liberal people. And we figure out how to use it to push conservative values.”
One key theme of the paper is that Facebook’s security team was used to dealing with economic actors, who respond to costs and incentives. When it comes to operatives paid by a state to wage an influence campaign, those constraints no longer hold. “The area of information operations does provide a unique challenge,” they wrote, “in that those sponsoring such operations are often not constrained by per-unit economic realities in the same way as spammers and click fraudsters, which increases the complexity of deterrence.” They were not expecting that.

Add everything up. The chaos of a billion-person platform that competitively dominated media distribution. The known electoral efficacy of Facebook. The wild fake news and misinformation rampaging across the internet generally and Facebook specifically. The Russian info operations. All of these things were known.

And yet no one could quite put it all together: The dominant social network had altered the information and persuasion environment of the election beyond recognition while taking a very big chunk of the estimated $1.4 billion worth of digital advertising purchased during the election. There were hundreds of millions of dollars of dark ads doing their work. Fake news all over the place. Macedonian teens campaigning for Trump. Ragingly partisan media infospheres serving up only the news you wanted to hear. Who could believe anything? What room was there for policy positions when all this stuff was eating up News Feed space? Who the hell knew what was going on?

As late as August 20, 2016, The Washington Post could say this of the campaigns:

Hillary Clinton is running arguably the most digital presidential campaign in U.S. history. Donald Trump is running one of the most analog campaigns in recent memory. The Clinton team is bent on finding more effective ways to identify supporters and ensure they cast ballots; Trump is, famously and unapologetically, sticking to a 1980s-era focus on courting attention and voters via television.
Just a week earlier, Trump’s campaign had hired Cambridge Analytica. Soon, it had ramped up to $70 million a month in Facebook advertising spending. And the next thing you knew, Brad Parscale, Trump’s digital director, was doing the postmortem rounds, talking up his win.

“These social platforms are all invented by very liberal people on the west and east coasts,” Parscale said. “And we figure out how to use it to push conservative values. I don’t think they thought that would ever happen.”

And that was part of the media’s problem, too.

* * *

Before Trump’s election, the impact of internet technology generally and Facebook specifically was seen as favoring Democrats. Even a TechCrunch critique of Rosen’s 2012 article about Facebook’s electoral power argued, “the internet inherently advantages liberals because, on average, their greater psychological embrace of disruption leads to more innovation (after all, nearly every major digital breakthrough, from online fundraising to the use of big data, was pioneered by Democrats).”

Certainly, the Obama tech team that I profiled in 2012 thought this was the case. Of course, social media would benefit the (youthful, diverse, internet-savvy) left. And the political bent of just about all Silicon Valley companies runs Democratic. For all the talk about Facebook employees embedding with the Trump campaign, the former CEO of Google, Eric Schmidt, sat with the Obama tech team on Election Day 2012.

In June 2015, The New York Times ran an article about Republicans trying to ramp up their digital campaigns that began like this: “The criticism after the 2012 presidential election was swift and harsh: Democrats were light-years ahead of Republicans when it came to digital strategy and tactics, and Republicans had serious work to do on the technology front if they ever hoped to win back the White House.”

“Facebook is what propelled Breitbart to a massive audience. We know its power.”
It cited Sasha Issenberg, the most astute reporter on political technology. “The Republicans have a particular challenge,” Issenberg said, “which is, in these areas they don’t have many people with either the hard skills or the experience to go out and take on this type of work.”

University of North Carolina journalism professor Daniel Kreiss wrote a whole (good) book, Prototype Politics, showing that Democrats had an incredible personnel advantage. “Drawing on an innovative data set of the professional careers of 629 staffers working in technology on presidential campaigns from 2004 to 2012 and data from interviews with more than 60 party and campaign staffers,” Kreiss wrote, “the book details how and explains why the Democrats have invested more in technology, attracted staffers with specialized expertise to work in electoral politics, and founded an array of firms and organizations to diffuse technological innovations down ballot and across election cycles.”

Which is to say: It’s not that no journalists, internet-focused lawyers, or technologists saw Facebook’s looming electoral presence—it was undeniable—but all the evidence pointed to the structural change benefitting Democrats. And let’s just state the obvious: Most reporters and professors are probably about as liberal as your standard Silicon Valley technologist, so this conclusion fit into the comfort zone of those in the field.

By late October, the role that Facebook might be playing in the Trump campaign—and more broadly—was emerging. Joshua Green and Issenberg reported a long feature on the data operation then in motion. The Trump campaign was working to suppress “idealistic white liberals, young women, and African Americans,” and they’d be doing it with targeted “dark” Facebook ads. These ads are visible only to the buyer, the ad recipients, and Facebook. No one who hasn’t been targeted by them can see them. How was anyone supposed to know what was going on, when the key campaign terrain was literally invisible to outside observers?

Steve Bannon was confident in the operation. “I wouldn’t have come aboard, even for Trump, if I hadn’t known they were building this massive Facebook and data engine,” Bannon told them. “Facebook is what propelled Breitbart to a massive audience. We know its power.”

The very roots of the electoral system had been destabilized.
Issenberg and Green called it “an odd gambit” which had “no scientific basis.” Then again, Trump’s whole campaign had seemed like an odd gambit with no scientific basis. The conventional wisdom was that Trump was going to lose and lose badly. In the days before the election, The Huffington Post’s data team had Clinton’s election probability at 98.3 percent. A member of the team, Ryan Grim, went after Nate Silver for his more conservative probability of 64.7 percent, accusing him of skewing his data for “punditry” reasons. Grim ended his post on the topic, “If you want to put your faith in the numbers, you can relax. She’s got this.”

Narrator: She did not have this.

But the point isn’t that a Republican beat a Democrat. The point is that the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.

In the middle of the summer of the election, the former Facebook ad-targeting product manager, Antonio García Martínez, released an autobiography called Chaos Monkeys. He called his colleagues “chaos monkeys,” messing with industry after industry in their company-creating fervor. “The question for society,” he wrote, “is whether it can survive these entrepreneurial chaos monkeys intact, and at what human cost.” This is the real epitaph of the election.

The information systems that people use to process news have been rerouted through Facebook and, in the process, mostly broken and hidden from view. It wasn’t just liberal bias that kept the media from putting everything together. Much of the hundreds of millions of dollars spent during the election cycle came in the form of “dark ads.”

The truth is that while many reporters knew some things that were going on on Facebook, no one knew everything that was going on on Facebook, not even Facebook. And so, during the most significant shift in the technology of politics since the television, the first draft of history is filled with undecipherable whorls and empty pages. Meanwhile, the 2018 midterms loom.
https://www.theatlantic.com/technology/ ... urce=atltw








Re: The creepiness that is Facebook

Postby seemslikeadream » Thu Oct 26, 2017 12:26 am

FACEBOOK FAILED TO PROTECT 30 MILLION USERS FROM HAVING THEIR DATA HARVESTED BY TRUMP CAMPAIGN AFFILIATE
Mattathias Schwartz
March 30 2017, 1:01 p.m.

In 2014, traces of an unusual survey, connected to Facebook, began appearing on internet message boards. The boards were frequented by remote freelance workers who bid on “human intelligence tasks” in an online marketplace, called Mechanical Turk, controlled by Amazon. The “turkers,” as they’re known, tend to perform work that is rote and repetitive, like flagging pornographic images or digging through search engine results for email addresses. Most jobs pay between 1 and 15 cents. “Turking makes us our rent money and helps pay off debt,” one turker told The Intercept. Another turker has called the work “voluntary slave labor.”

The task posted by “Global Science Research” appeared ordinary, at least on the surface. The company offered turkers $1 or $2 to complete an online survey. But there were a couple of additional requirements as well. First, Global Science Research was only interested in American turkers. Second, the turkers had to download a Facebook app before they could collect payment. Global Science Research said the app would “download some information about you and your network … basic demographics and likes of categories, places, famous people, etc. from you and your friends.”

“Our terms of service clearly prohibit misuse,” said a spokesperson for Amazon Web Services, by email. “When we learned of this activity back in 2015, we suspended the requester for violating our terms of service.”

Although Facebook’s early growth was driven by closed, exclusive networks at colleges and universities, it has gradually herded users into agreeing to increasingly permissive terms of service. By 2014, anything a user’s friends could see was also potentially visible to the developers of any app the user chose to download. Some of the turkers noticed that the Global Science Research app appeared to be taking advantage of Facebook’s porousness. “Someone can learn everything about you by looking at hundreds of pics, messages, friends, and likes,” warned one, writing on a message board. “More than you realize.” Others were more blasé. “I don’t put any info on FB,” one wrote. “Not even my real name … it’s backwards that people put sooo much info on Facebook, and then complain when their privacy is violated.”

In late 2015, the turkers began reporting that the Global Science Research survey had abruptly shut down. The Guardian had published a report that exposed exactly who the turkers were working for. Their data was being collected by Aleksandr Kogan, a young lecturer at Cambridge University. Kogan founded Global Science Research in 2014, after the university’s psychology department refused to allow him to use its own pool of data for commercial purposes. The data collection that Kogan undertook independent of the university was done on behalf of a military contractor called Strategic Communication Laboratories, or SCL. The company’s election division claims to use “data-driven messaging” as part of “delivering electoral success.”

SCL has a growing U.S. spin-off, called Cambridge Analytica, which was paid millions of dollars by Donald Trump’s campaign. Much of the money came from committees funded by the hedge fund billionaire Robert Mercer, who reportedly has a large stake in Cambridge Analytica. For a time, one of Cambridge Analytica’s officers was Stephen K. Bannon, Trump’s senior adviser. Months after Bannon claimed to have severed ties with the company, checks from the Trump campaign for Cambridge Analytica’s services continued to show up at one of Bannon’s addresses in Los Angeles.

“You can say Mr. Mercer declined to comment,” said Jonathan Gasthalter, a spokesperson for Robert Mercer, by email.
[Photo: Facebook Elections signs in the media area at Quicken Loans Arena in Cleveland, Aug. 6, 2015, before the first Republican presidential debate of the 2016 election. John Minchillo/AP]
The Intercept interviewed five individuals familiar with Kogan’s work for SCL. All declined to be identified, citing concerns about an ongoing inquiry at Cambridge and fears of possible litigation. Two sources familiar with the SCL project told The Intercept that Kogan had arranged for more than 100,000 people to complete the Facebook survey and download an app. A third source with direct knowledge of the project said that Global Science Research obtained data from 185,000 survey participants as well as their Facebook friends. The source said that this group of 185,000 was recruited through a data company, not Mechanical Turk, and that it yielded 30 million usable profiles. No one in this larger group of 30 million knew that “likes” and demographic data from their Facebook profiles were being harvested by political operatives hired to influence American voters.

Kogan declined to comment. In late 2014, he gave a talk in Singapore in which he claimed to have “a sample of 50+ million individuals about whom we have the capacity to predict virtually any trait.” Global Science Research’s public filings for 2015 show the company holding 145,111 British pounds in its bank account. Kogan has since changed his name to Spectre. Writing online, he has said that he changed his name to Spectre after getting married. “My wife and I are both scientists and quite religious, and light is a strong symbol of both,” he explained.

The purpose of Kogan’s work was to develop an algorithm for the “national profiling capacity of American citizens” as part of SCL’s work on U.S. elections, according to an internal document signed by an SCL employee describing the research.

“We do not do any work with Facebook likes,” wrote Lindsey Platts, a spokesperson for Cambridge Analytica, in an email. The company currently “has no relationship with GSR,” Platts said.

“Cambridge Analytica does not comment on specific clients or projects,” she added when asked whether the company was involved with Global Science Research’s work in 2014 and 2015.

The Guardian, which was the first to report on Cambridge Analytica’s work on U.S. elections in late 2015, noted that the company drew on research “spanning tens of millions of Facebook users, harvested largely without their permission.” Kogan disputed this at the time, telling The Guardian that his turker surveys had collected no more than “a couple of thousand responses” for any one client. While it is unclear how many responses Global Science Research obtained through Mechanical Turk and how many it recruited through a data company, all five of the sources interviewed by The Intercept confirmed that Kogan’s work on behalf of SCL involved collecting data from survey participants’ networks of Facebook friends, individuals who had not themselves consented to give their data to Global Science Research and were not aware that they were the objects of Kogan’s study. In September 2016, Alexander Nix, Cambridge Analytica’s CEO, said that the company built a model based on “hundreds and hundreds of thousands of Americans” filling out personality surveys, generating a “model to predict the personality of every single adult in the United States of America.”

Shortly after The Guardian published its 2015 article, Facebook contacted Global Science Research and requested that it delete the data it had taken from Facebook users. Facebook’s policies give Facebook the right to delete data gathered by any app deemed to be “negatively impacting the Platform.” The company believes that Kogan and SCL complied with the request, which was made during the Republican primary, before Cambridge Analytica switched over from Ted Cruz’s campaign to Donald Trump’s. It remains unclear what was ultimately done with the Facebook data, or whether any models or algorithms derived from it wound up being used by the Trump campaign.

In public, Facebook continues to maintain that whatever happened during the run-up to the election was business as usual. “Our investigation to date has not uncovered anything that suggests wrongdoing,” a Facebook spokesperson told The Intercept.

Facebook appears not to have considered Global Science Research’s data collection to have been a serious ethical lapse. Joseph Chancellor, Kogan’s main collaborator on the SCL project and a former co-owner of Global Science Research, is now employed by Facebook Research. “The work that he did previously has no bearing on the work that he does at Facebook,” a Facebook spokesperson told The Intercept.

Chancellor declined to comment.

Cambridge Analytica has marketed itself as classifying voters using five personality traits known as OCEAN — Openness, Conscientiousness, Extroversion, Agreeableness, and Neuroticism — the same model used by University of Cambridge researchers for in-house, non-commercial research. The question of whether OCEAN made a difference in the presidential election remains unanswered. Some have argued that big data analytics is a magic bullet for drilling into the psychology of individual voters; others are more skeptical. The predictive power of Facebook likes is not in dispute. A 2013 study by three of Kogan’s former colleagues at the University of Cambridge showed that likes alone could predict race with 95 percent accuracy and political party with 85 percent accuracy. Less clear is their power as a tool for targeted persuasion; Cambridge Analytica has claimed that OCEAN scores can be used to drive voter and consumer behavior through “microtargeting,” meaning narrowly tailored messages. Nix has said that neurotic voters tend to be moved by “rational and fear-based” arguments, while introverted, agreeable voters are more susceptible to “tradition and habits and family and community.”
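
For a sense of how likes can predict a trait at those accuracy levels, here is a minimal sketch in the spirit of that 2013 study, which reduced a large user-by-like matrix to a few dimensions and fit simple regressions on them. Everything below is synthetic: the data, the planted signal, and the parameter choices are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_users, n_pages = 2000, 500

# One hidden "taste" dimension drives both which pages a user likes
# and the binary trait we try to recover (all synthetic).
taste = rng.normal(size=n_users)
page_loading = rng.normal(size=n_pages)
like_prob = 1 / (1 + np.exp(-np.outer(taste, page_loading)))
likes = (rng.random((n_users, n_pages)) < like_prob).astype(float)
trait = (taste + rng.normal(scale=0.5, size=n_users) > 0).astype(int)

# Step 1: compress the user-by-like matrix to a few dimensions.
components = TruncatedSVD(n_components=40, random_state=0).fit_transform(likes)

# Step 2: fit a plain logistic regression on the compressed features.
X_tr, X_te, y_tr, y_te = train_test_split(components, trait, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"Held-out AUC: {auc:.2f}")
```

Even this toy pipeline separates the classes well on synthetic data with one planted taste dimension; the study's striking result was that real likes carry signal of comparable strength for race, party, and personality.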

Dan Gillmor, director of the Knight Center at Arizona State University, said he was skeptical of the idea that the Trump campaign got a decisive edge from data analytics. But, he added, such techniques will likely become more effective in the future. “It’s reasonable to believe that sooner or later, we’re going to see widespread manipulation of people’s decision-making, including in elections, in ways that are more widespread and granular, but even less detectable than today,” he wrote in an email.


Trump’s circle has been open about its use of Facebook to influence the vote. Joel Pollak, an editor at Breitbart, writes in his campaign memoir about Trump’s “armies of Facebook ‘friends,’ … bypassing the gatekeepers in the traditional media.” Roger Stone, a longtime Trump adviser, has written in his own campaign memoir about “geo-targeting” cities to deliver a debunked claim that Bill Clinton had fathered a child out of wedlock, and narrowing down the audience “based on preferences in music, age range, black culture, and other urban interests.”

Clinton, of course, had her own analytics effort, and digital market research is a normal part of any political campaign. But the quantity of data compiled on individuals during the run-up to the election is striking. Alexander Nix, head of Cambridge Analytica, has claimed to “have a massive database of 4-5,000 data points on every adult in America.” Immediately after the election, the company tried to take credit for the win, claiming that its data helped the Trump campaign set the candidate’s travel schedule and place online ads that were viewed 1.5 billion times. Since then, the company has been de-emphasizing its reliance on psychological profiling.

The Information Commissioner’s Office, an official privacy watchdog within the British government, is now looking into whether Cambridge Analytica and similar companies might pose a risk to voters’ rights. The British inquiry was triggered by reports in The Observer of ties between Robert Mercer, Cambridge Analytica, and the Leave.EU campaign, which worked to persuade British voters to leave the European Union. While Nix has previously talked about the firm’s work for Leave.EU, Cambridge Analytica now denies that it had any paid role in the campaign.
[Photo: Leave.EU signage is displayed as members of Leave.EU and UKIP hand out leaflets in Twickenham, London, during a Grassroots Out action day on EU membership, March 5, 2016. Rex Features/AP Images]
In the U.S., where privacy laws are looser, there is no investigation. Cambridge Analytica is said to be pitching its products to several federal agencies, including the Joint Chiefs of Staff. SCL, its parent company, has new offices near the White House and has reportedly been advised by Gen. Michael Flynn, Trump’s former national security adviser, on how to increase its federal business. (A spokesperson for Flynn denied that he had done any work for SCL.)

Years before the arrival of Kogan’s turkers, Facebook founder Mark Zuckerberg tried to address privacy concerns around the company’s controversial Beacon program, which quietly funneled data from outside websites into Facebook, often without Facebook users being aware of the process. Reflecting on Beacon, Zuckerberg attributed part of Facebook’s success to giving “people control over what and how they share information.” He said that he regretted making Beacon an “opt-out system instead of opt-in … if someone forgot to decline to share something, Beacon went ahead and still shared it with their friends.”

Seven years later, Facebook appears to have made the same mistake, but with far greater consequences. In mid-2014, Facebook announced a new review process under which the company would make sure that new apps asked only for data they would actually use. “People want more control,” the company said at the time. “It’s going to make a huge difference with building trust with your app’s audience.” Existing apps were given a full year to switch over to having Facebook review how they handled user data. By that time, Global Science Research already had what it needed.
https://theintercept.com/2017/03/30/fac ... affiliate/

Re: The creepiness that is Facebook

Postby seemslikeadream » Mon Oct 30, 2017 6:43 pm

Facebook Says Russian-Backed Election Content Reached 126 Million Americans
by CAROL E. LEE

WASHINGTON — An estimated 126 million Americans, roughly one-third of the nation’s population, received Russian-backed content on Facebook during the 2016 campaign, according to prepared testimony the company submitted Monday to the Senate Judiciary Committee and obtained by NBC News.

Underscoring how widely content on the social media platform can spread, Facebook says in the testimony that while some 29 million Americans directly received material from 80,000 posts by 120 fake Russian-backed pages in their own news feeds, those posts were “shared, liked and followed by people on Facebook, and, as a result, three times more people may have been exposed to a story that originated from the Russian operation.”

The testimony by Facebook's general counsel, Colin Stretch, was submitted to the Judiciary Committee ahead of a hearing on Tuesday with executives from Facebook, Google and Twitter. The hearing is part of the congressional inquiry into Russia’s use of these platforms to try to influence last year’s U.S. presidential election.

By Facebook’s estimation, posts from Russian-backed Facebook accounts between January 2015 and August 2017 potentially reached half of the 250 million Americans who are eligible to vote. The 80,000 posts generated by fake Russian-backed pages do not include the 3,000 Facebook advertisements purchased by Russian entities, according to a person familiar with the issue.

The shared content that Facebook estimates reached 126 million Americans was likely hard, if not impossible, for users of the social media platform to identify as originating from Russia.


Stretch, in his prepared testimony, seeks to play down the significance of that level of exposure to content from Russian-backed accounts.

“Our best estimate is that approximately 126 million people may have been served one of their stories at some point during the two-year period,” Stretch says in prepared testimony. “This equals about four-thousandths of one percent (0.004%) of content in News Feed, or approximately 1 out of 23,000 pieces of content.”

The person familiar with the issue said: “Put another way, if each of these posts were a commercial on television, you'd have to watch more than 600 hours of television to see something from” the Russia-backed posts.
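
Both of Facebook's framings, 126 million people reached and roughly 0.004 percent of News Feed content, can be true at once, because one counts unique people and the other counts impressions. A back-of-envelope check, with the per-person feed volume below being our assumption rather than a Facebook disclosure:

```python
# "126 million people reached" is a count of unique people over roughly
# two years; "1 in 23,000" is a share of feed impressions. The feed
# volume below is an illustrative assumption, not a Facebook figure.
people_reached = 126_000_000
stories_per_person_per_day = 220      # assumed News Feed volume
days = 2 * 365                        # the roughly two-year window

total_impressions = people_reached * stories_per_person_per_day * days
russian_impressions = total_impressions / 23_000  # Facebook's stated ratio

print(f"Feed stories served to those users: {total_impressions:.1e}")
print(f"Implied Russian-content impressions: {russian_impressions:.1e}")
# ~2e13 total stories vs. ~9e8 Russian impressions: a sliver of the
# stream can still touch a third of the country at least once.
```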

Dave Karpf, a professor of media and technology at George Washington University, said the reach of the Russian-backed content is problematic but is unlikely to have affected the outcome of the election.

“It is a problem in that this is evidence that foreign nationals actively attempted to impact our election and they did manage to reach 126 million with messages,” Karpf said. “It’s going to be important for Facebook and Google and Twitter to get a handle on this stuff before the next election, hopefully with the help from our regulators, but what we should avoid is thinking, ‘Wow, 126 million people were duped into voting for Trump.’”

Facebook has said the Russian-backed entities violated the company’s policies because, Stretch says in his prepared testimony, they “came from a set of coordinated, inauthentic accounts.”

“We shut these accounts down and began trying to understand how they misused our platform,” the testimony says.

Stretch’s testimony also says that Facebook tried to mitigate threats “from actors with ties to Russia” by reporting them to U.S. law enforcement, including accounts belonging to a group the U.S. has linked to Russian military intelligence services. Stretch says that group, APT28, also created “fake personas that were then used to seed stolen information to journalists” and that those were “organized under the banner of an organization that called itself DC Leaks” whose accounts Facebook later removed.

He also plans to testify that Facebook has taken lessons from the 2016 campaign and applied them to identify fake accounts ahead of the French and German elections this year.
https://www.nbcnews.com/news/us-news/ru ... ys-n815791


FACEBOOK’S LAST-MINUTE EFFORT TO KEEP CONGRESS AT BAY


On Friday, Facebook announced stronger guidelines for transparency in advertising on the social network, part of a campaign to forestall government regulation. “We’re going to make advertising more transparent, and not just for political ads,” Rob Goldman, Facebook’s vice president of ads, said in a blog post.
The new rules included two key elements: a searchable archive of political ads related to federal elections, and demographic information, including location and gender, about the people shown such ads. Both appear to have been added by Facebook in the days before the Friday announcement, after earlier versions of its proposed transparency measures received a chilly reception on Capitol Hill, according to people familiar with the talks. In proposals Facebook shared earlier last week, “The ads were still not publicly accessible or searchable and Facebook was still not providing any information related to the targeting for ads,” said one of those people.

The incident dramatizes how Silicon Valley giants are scrambling to head off new regulations from Washington following revelations about how Russia used their tools to meddle in the 2016 US election. Twitter, for example, last week announced new rules that will allow users to see how long an ad has been running, the content of other ads by that advertiser, and which ads have been targeted at them. Executives from Facebook, Google, and Twitter are scheduled to appear Tuesday and Wednesday at congressional hearings examining the election and its aftermath.
Elisabeth Diana, Facebook’s corporate communications director, says the company always intended to allow users to view all political ads, but “made some tweaks” to its policies following talks with industry partners, congressional offices, Twitter, and the Interactive Advertising Bureau. “We went to the Hill proposing some things and they gave us feedback. We’ll keep talking to the Hill and keep talking to our partners,” says Diana, stressing that Facebook will reevaluate its rules based on its pilot test in Canada.

Diana declined to comment on whether Congress requested drastic changes from Facebook. “The two foundational elements of what we announced [on Friday] were transparency and authenticity,” says Diana. “Those two things we laid out almost a month ago and we are delivering on those.”

Facebook is not the only company to respond to criticism from Washington. Twitter’s moves last week, which are more sweeping than Facebook’s, followed criticism of Twitter’s initial response to reports of Russian meddling.
The iterative approach is similar to the way tech companies often launch products, releasing an early version and then tweaking it based on feedback from users. But applying the same process to public-policy questions could have lasting consequences for global tech giants, which are facing scrutiny over propaganda and election interference outside the U.S. as well.
A person familiar with Facebook’s internal discussions says the company’s response reflects executives’ concerns about the balance between increased transparency and effective policing of advertisers. For instance, this person says, organizations trying to stir up trouble on Facebook might use the database to figure out how Facebook polices advertisers and use that knowledge to evade restrictions. Some at Facebook were reluctant to create a searchable database of ads because they feared it would diminish the power of advertisers in favor of watchdogs, this person says.

So far, the additional transparency measures from Facebook and Twitter do not seem to have appeased the sponsors of a Senate bill that would require online platforms that run political ads to disclose who paid for an ad, and to maintain a publicly accessible database of all political ads. Sens. Mark Warner, Amy Klobuchar, and John McCain all have reiterated the need for the measure, dubbed the Honest Ads Act. “While it’s good to see Facebook is taking seriously its responsibility to provide greater transparency of its advertisements, much more needs to be done to give Americans full disclosure and prevent foreign interference in our elections,” McCain said in a statement. “I look forward to Facebook, Twitter, and other social-media platforms supporting our Honest Ads Act to ensure that existing campaign finance laws are updated and modernized.”
https://www.wired.com/story/facebooks-l ... ss-at-bay/

Re: The creepiness that is Facebook

Postby seemslikeadream » Tue Oct 31, 2017 3:00 pm

'Kill them all' -- Russian-linked Facebook accounts called for violence
by Curt Devine @CNNMoney
October 31, 2017: 12:31 PM ET


Facebook accounts run by Russian trolls repeatedly called for violence against different social and political groups in the U.S., including police officers, Black Lives Matter activists and undocumented immigrants.
Posts from three now-removed Facebook groups created by the Russian Internet Research Agency suggest Russia sought not only to meddle in U.S. politics but to encourage ideologically opposed groups to act out violently against one another. The posts are part of a database compiled by Jonathan Albright, the research director at Columbia University's Tow Center for Digital Journalism, who tracks and analyzes Russian propaganda.
For example, "Being Patriotic," a group that regularly posted content praising Donald Trump's candidacy, stated in an April 2016 post that Black Lives Matter activists who disrespected the American flag should "be immediately shot." The account accrued about 200,000 followers before it was shut down.
Another Russia-linked group, "Blacktivist," described police brutality in a November 2016 post weeks after the election, and stated, "Black people have to do something. An eye for an eye. The law enforcement officers keep harassing and killing us without consequences."
The group "Secured Borders" had the most violent rhetoric, some of it well after the presidential election. A post in March 2017 described the threat of "dangerous illegal aliens" and said, "The only way to deal with them is to kill them all." Another post about immigrants called for a draconian new law, saying, "if you get deported that's your only warning. You come back you get shot and rolled into a ditch... BANG, problem solved." And a post about refugees said, "the state department needs to be burned to the ground and the rubble reduced to ashes."
More than two dozen messages encouraging violence are among thousands of controversial posts from Russia-linked Facebook accounts that analysts say sought to increase hostility -- both ideological and physical -- in the U.S. in an effort to further divide American society along political, religious or racial lines.
Mark R. Jacobson, a Georgetown University professor and expert on Russian influence operations, said Russia strategically seeks to undermine U.S. political cohesion by promoting extremist views within opposing political or social groups, hoping that chaos -- and violence -- ensues.
"The Russians don't want groups like Black Lives Matter [and] the Alt-Right to sit there and have discussions and debates about the future of America. They want violent clashes," Jacobson said.
Jacobson noted that, during the Cold War, Russia sought to enhance extremist ideas within the civil rights movement in hopes of sparking race-based warfare in the U.S.
"If we start to see violent rallies... we should start to look for the hidden hand of Russian influence behind it," he said.
Columbia University's Albright said even if only a fraction of the accounts' posts called for physical violence, the overall messaging sought to push audiences toward more radical viewpoints that they would act on.
"These posts contained psychological calls to action toward both online and physical behavior," he said.
Some of the violent posts received tens of thousands of likes, comments, shares, or reactions, according to a database of messages Albright compiled from six now-deleted Russia-linked accounts, which included the accounts that posted the violent messages reviewed by CNN.
One post by Secured Borders shared in October 2016, which was interacted with more than 100,000 times, stated, "if Killary wins there will be riots nationwide, not seen since the times of Revolutionary war!!"
Albright said this post was likely amplified through paid advertising because the overwhelming majority of Secured Borders' messages received only a few thousand interactions.
Facebook has said it identified 3,000 ads tied to the Russian troll farm that ran between June 2015 and May 2017, though it's unclear if those ads included any of the messages calling for violence. Facebook shared those ads with Congress, but they have not yet been publicly released.
Susan Benesch, director of the Dangerous Speech Project and a faculty associate at Harvard's Berkman Klein Center for Internet & Society, said violent messages like this could increase the possibility of audiences condoning or participating in violence against members of targeted groups.
"People can be heavily influenced by content online even when they don't know where it comes from," Benesch said. "In these cases, we can't know if anyone was actually influenced toward violence, but this type of speech could increase that risk."
Facebook's terms of service prohibit content that is "hate speech, threatening, or... incites violence."
Asked for comment, a Facebook spokesperson told CNN, "We don't allow the promotion of violence on Facebook but know we need to do better. We are hiring thousands of new people to our review teams, building better tools to keep our community safe, and investing in new technologies to help locate more banned content and bad actors."
Facebook's Vice President of Policy and Communications, Elliot Schrage, has said the company is working to develop greater safeguards against election interference and other forms of abuse. In a blog post earlier this month, Schrage said Facebook is "still looking for abuse and bad actors on our platform — our internal investigation continues."
The Internet Research Agency, a secretive company based in St. Petersburg, which the US intelligence community has linked to the Kremlin, appears to be the source of 470 inauthentic Facebook accounts that shared a wide range of controversial messages. Documents obtained by CNN show the IRA included a "Department of Provocations" that sought to spread fake news and social divisions in the West.
http://money.cnn.com/2017/10/31/media/r ... -violence/



The Shift: Forget Washington. Facebook’s Problems Abroad Are Far More Disturbing.

Kevin Roose
Published 4:48 AM ET Mon, 30 Oct 2017 Updated 9:13 AM ET Mon, 30 Oct 2017
The New York Times

For months, Facebook’s headquarters in Menlo Park, Calif., has been in crisis mode, furiously attempting to contain the damage stemming from its role in last year’s presidential campaign. The company has mounted an all-out defense campaign ahead of this week’s congressional hearings on election interference in 2016, hiring three outside communications firms, taking out full-page newspaper ads, and mobilizing top executives, including Mark Zuckerberg and Sheryl Sandberg, to beat back accusations that it failed to prevent Russia from manipulating the outcome of the election.

No other predicament in Facebook’s 13-year history has generated this kind of four-alarm response. But while the focus on Russia is understandable, Facebook has been much less vocal about the abuse of its services in other parts of the world, where the stakes can be much higher than an election.

This past week, my colleagues at The Times reported on the ethnic cleansing of Rohingya Muslims, an ethnic minority in Myanmar that has been subjected to brutal violence and mass displacement. Violence against the Rohingya has been fueled, in part, by misinformation and anti-Rohingya propaganda spread on Facebook, which is used as a primary news source by many people in the country. Doctored photos and unfounded rumors have gone viral on Facebook, including many shared by official government and military accounts.

The information war in Myanmar illuminates a growing problem for Facebook. The company successfully connected the world to a constellation of real-time communication and broadcasting tools, then largely left it to deal with the consequences.

“In a lot of these countries, Facebook is the de facto public square,” said Cynthia Wong, a senior internet researcher for Human Rights Watch. “Because of that, it raises really strong questions about Facebook needing to take on more responsibility for the harms their platform has contributed to.”

In Myanmar, the rise in anti-Rohingya sentiment coincided with a huge boom in social media use that was partly attributable to Facebook itself. In 2016, the company partnered with MPT, the state-run telecom company, to give subscribers access to its Free Basics program. Free Basics includes a limited suite of internet services, including Facebook, that can be used without counting toward a cellphone data plan. As a result, the number of Facebook users in Myanmar has skyrocketed to more than 30 million today, from 2 million in 2014.

“We work hard to educate people about our services, highlight tools to help them protect their accounts and promote digital literacy,” said Debbie Frost, a Facebook spokeswoman. “To be more effective in these efforts, we are working with civil society, safety partners, and governments — an approach we have found to be particularly important and effective in countries where people are rapidly coming online and experiencing the internet for the first time through a mobile phone.”

In India, where internet use has also surged in recent years, WhatsApp, the popular Facebook-owned messaging app, has been inundated with rumors, hoaxes and false stories. In May, the Jharkhand region in Eastern India was destabilized by a viral WhatsApp message that falsely claimed that gangs in the area were abducting children. The message incited widespread panic and led to a rash of retaliatory lynchings, in which at least seven people were beaten to death. A local filmmaker, Vinay Purty, told the Hindustan Times that many of the local villagers simply believed the abduction myth was real, since it came from WhatsApp.

“Everything shared on the phone is regarded as true,” Mr. Purty said.

In a statement, WhatsApp said, “WhatsApp has made communications cheaper, easier and more reliable for millions of Indians — with all the benefits that brings. Though we understand that some people, sadly, have used WhatsApp to intimidate others and spread misinformation. It’s why we encourage people to report problematic messages to WhatsApp so that we can take action.”

Facebook is not directly responsible for violent conflict, of course, and viral misinformation is hardly unique to its services. Before social media, there were email hoaxes and urban legends passed from person to person. But the speed of Facebook’s growth in the developing world has made it an especially potent force among first-time internet users, who may not be appropriately skeptical of what they see online.

The company has made many attempts to educate users about the dangers of misinformation. In India and Malaysia, it has taken out newspaper ads with tips for spotting false news. In Myanmar, it has partnered with local organizations to distribute printed copies of its community standards, as well as created educational materials to teach citizens about proper online behavior.

But these efforts, as well-intentioned as they may be, have not stopped the violence, and Facebook does not appear to have made them a top priority. The company has no office in Myanmar, and neither Mr. Zuckerberg nor Ms. Sandberg has made any public statements about the Rohingya crisis.

Correcting misinformation is a thorny philosophical problem for Facebook, which imagines itself as a neutral platform that avoids making editorial decisions. Facebook’s community standards prohibit hate speech and threats, but many harmful viral posts — such as a WhatsApp thread in Southern India that spread false rumors about a government immunization campaign — are neither hateful nor directly threatening, and they wouldn’t be prohibited under Facebook’s community standards as long as they came from authentic accounts. Fighting misinformation is especially difficult on WhatsApp, an app for private messaging, since there is no public information trail to fact-check.

Facebook has argued that the benefits of providing internet access to international users will ultimately outweigh the costs. Adam Mosseri, a Facebook vice president who oversees the News Feed, told a journalism gathering this month, “In the end, I don’t think we as a human race will regret the internet.” Mr. Zuckerberg echoed that sentiment in a 2013 manifesto titled “Is Connectivity a Human Right?,” in which he said that bringing the world’s population online would be “one of the most important things we all do in our lifetimes.”

That optimism may be cold comfort to people in places like South Sudan. Despite being one of the poorest and least-wired countries in the world, with only around 20 percent of its citizens connected to the internet, the African nation has become a hotbed of social media misinformation. As BuzzFeed News has reported, political operatives inside and outside the country have used Facebook posts to spread rumors and incite anger between rival factions, fostering violence that threatens to escalate into a civil war. A United Nations report last year determined that in South Sudan, “social media has been used by partisans on all sides, including some senior government officials, to exaggerate incidents, spread falsehoods and veiled threats, or post outright messages of incitement.”

These are incredibly complex issues, and it may be impossible for Facebook — which is, remember, a technology company, not a global peacekeeping force — to solve them overnight. But as the company’s response to the Russia crisis has proved, it’s capable of acting swiftly and powerfully when it feels its interests are threatened.

Information wars in emerging markets may not represent as big a threat to Facebook’s business as angry lawmakers in Washington. But people are dying, and communities are tearing themselves apart with the tools Facebook has built. That should qualify as an even greater emergency in Menlo Park.
https://www.cnbc.com/2017/10/30/new-yor ... rbing.html
Mazars and Deutsche Bank could have ended this nightmare before it started.
They could still get him out of office.
But instead, they want mass death.
Don’t forget that.

Re: The creepiness that is Facebook

Postby seemslikeadream » Wed Nov 01, 2017 4:17 pm

Facebook now says the Russian troll army's content reached an estimated 150 million people, nearly HALF the US population.


Once Dismissive, Facebook Now Says 126 Million Users Shown Russian-Generated Election Propaganda
By Elliot Hannon
Then-Russian Prime Minister Dmitry Medvedev (L) and Facebook CEO Mark Zuckerberg (R) meet outside Moscow, on October 1, 2012.
YEKATERINA SHTUKINA/AFP/GettyImages

First, Mark Zuckerberg pooh-poohed the idea that viral misinformation helped tip the 2016 election. Then, earlier this month, the social network told congressional investigators that Russian-linked accounts had, in fact, bought several thousand ads that were seen by 10 million or so users during the 2016 campaign. On Tuesday, Facebook is set to inform lawmakers that over a two-year period leading up to the election, Russian operatives generated some 80,000 posts, and the real number of users—so far—who were shown content created by Russian operatives is more like 126 million—nearly half the U.S. population of voting age. Zero to 126 million in record time.

The disclosure is part of Facebook’s testimony set to be delivered before Congress this week. Representatives from Twitter and Google are also scheduled to appear before Congress, as each has admitted, upon further investigation, that Russian-created posts were far more prevalent than first thought.

From the Washington Post:

Google acknowledged for the first time Monday that it had found evidence that Russian operatives used the company’s platforms to influence American voters, saying in a blog post that it had found 1,108 videos with 43 hours of content related to the Russian effort on YouTube. It also found $4,700 worth of Russian search and display ads.
Twitter also plans to tell congressional investigators that it has identified 2,752 accounts controlled by Russian operatives and more than 36,000 bots that tweeted 1.4 million times during the election, according to a draft of Twitter’s testimony obtained by The Post. The company previously reported 201 accounts linked to Russia.
http://www.slate.com/blogs/future_tense ... ganda.html



Why Twitter Is the Best Social Media Platform for Disinformation
It is time for Twitter to confront bots, extremists, and hostile spies by owning up to its own values.

Thomas Rid
Nov 1 2017, 7:00am

Thomas Rid (@RIDT) is Professor of Strategic Studies at Johns Hopkins University/SAIS. Rid was a witness in one of the first open hearings on Russian disinformation of the Senate Select Committee on Intelligence in March, where he called out Twitter as an "unwitting agent" of adversarial intelligence services.

Twitter is the most open social media platform, which is partly why it's used by so many politicians, celebrities, journalists, tech types, conference goers, and experts working on fast-moving topics. As we learned over the past year, Twitter's openness was exploited by adversarial governments trying to influence elections. Twitter is marketing itself as a news platform, the go-to place to find out, in the words of its slogan, "What's happening?"

So what's happening with disinformation on Twitter? That is very hard to tell, because Twitter is actively making it easier to hide evidence of wrongdoing and making it harder to investigate abuse by limiting and monitoring third party research, and by forcing data companies to delete evidence as requested by users. The San Francisco-based firm has long been the platform of choice for adversarial intelligence agencies, malicious automated accounts (so-called bots), and extremists at the fringes. Driven by ideology and the market, the most open and liberal social media platform has become a threat to open and liberal democracy.

In the course of late 2016 and 2017, Facebook tried to confront abuse: by hiring a top-notch security team; by improving account authentication; and by tackling disinformation. Twitter has done the opposite—its security team is rudimentary and reclusive; the company seems to be in denial on the scope of disinformation; and it even optimised its platform for hiding bots and helping adversarial operators to delete incriminating evidence—not just from Twitter, but even from the archives of third party data providers. I spoke with half a dozen analysts from such intelligence companies with privileged access to Twitter data, all of whom asked for anonymity for fear of upsetting their existing relationship with Twitter. One analyst joked that he would cut off my feet if I mentioned him or his firm. Twitter twice declined to comment on the record for this story.

Twitter is libertarian to the core. The platform has always allowed users to register any available handle, on as many accounts as they want, anonymously, no real name required, in sharp contrast to Facebook. Users could always delete content, undo engagements, and suspend their accounts. There are strong privacy arguments in favor of giving users full control of their data, even after publication. From the beginning, Twitter has reflected those values and held on to them against pressure from undemocratic governments. But its openness, particularly the openness for deletion, anonymity, and automation, has made the platform easy to exploit.

Let's start with the bots. Twitter is teeming with automated accounts. "The total number of bots on Twitter runs into the millions, given that an individual bot action can involve over 100,000 accounts," Ben Nimmo, a bot hunter at the Atlantic Council, told me. A precise estimate is hard to come by. In March 2017, one study by researchers at Indiana University and the University of Southern California estimated that up to 15 percent of all Twitter accounts are bots. In September, another study from Rice University put the number at up to 23 percent, out of a global active user base of approximately 330 million.

Individual cases provide a better measure. At one point in the summer of 2016, around 17,000 bot accounts were put to work amplifying the Russian election interference, estimated one company with direct access to Twitter data. That number only takes into account highly repetitive posts that explicitly referred to Guccifer 2 and DC Leaks, two Russian front organizations called out by the US intelligence community, so the actual number of amplification bots was likely much higher.

A year later, the bot problem had not been contained. On August 29, 2017, Nimmo tried to trigger an "attack" by Russia-linked bots by mentioning the right keywords in one of his posts, in order to bring the problem to the attention of Twitter:


Ben Nimmo triggers a bot "attack" on one of his posts on August 30, 2017. Image: Screenshot
It worked: he got retweeted (and thus spammed) by more than 75,000 bots within hours. Twitter likely suspended more than 50,000 accounts, but as of last week, the posts still had around 18,000 automated spam engagements.

Twitter makes automating accounts easy. A moderately technical user can create and run a simple bot within hours. Camouflaging a primitive online robot as a blue collar worker from Wisconsin is trivial. Professionals can easily mass-produce bots, link them up, and commandeer them as a whole. Botnets, therefore, are pervasive. For example: one editorial in The New York Times, "Trump Is His Own Worst Enemy," was amplified and "attacked" by four different botnets in sequence, through RTs, likes, and @replies. Many of the accounts involved in these waves of amplification averaged well more than 1,500 tweets per day, an inhuman rate of activity.
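
That rate heuristic is simple enough to sketch. The fragment below is a minimal illustration, not anyone's production detector: it extrapolates an account's average tweets per day from its timestamps and flags anything approaching the inhuman pace described above. The 1,500-per-day threshold is borrowed from the article, not from Twitter.

Code: Select all
from datetime import datetime

# Hypothetical threshold taken from the article's "1,500 tweets per day"
# observation; Twitter's real heuristics are unknown.
SUSPICIOUS_TWEETS_PER_DAY = 1500

def average_daily_rate(timestamps):
    """Average tweets per day over the span of an account's activity."""
    if len(timestamps) < 2:
        return 0.0
    span_days = (max(timestamps) - min(timestamps)).total_seconds() / 86400
    return len(timestamps) / max(span_days, 1e-9)

def looks_automated(timestamps):
    return average_daily_rate(timestamps) >= SUSPICIOUS_TWEETS_PER_DAY

# Three tweets in two minutes extrapolates to a pace of ~2,160 per day.
ts = [datetime(2017, 8, 30, 12, 0),
      datetime(2017, 8, 30, 12, 1),
      datetime(2017, 8, 30, 12, 2)]
print(round(average_daily_rate(ts)))  # 2160
print(looks_automated(ts))            # True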

Spotting bots can be hard for laypersons. An experienced user may be able to recognize an individual account as fake without much effort. But recognizing fake engagement is much harder: if a post has, say, 4,368 retweets and 2,345 likes, even advanced and cautious users will intuitively ascribe more importance to the message—without ever manually checking if the retweets are real. Bots don't sleep, they don't get tired, and they don't ever lose focus. The volume of fake traffic is therefore higher than the volume of fake accounts.

Nimmo told me that estimating forged engagement is very difficult, and then cautiously added that fake activity could be as much as half the total traffic on Twitter. How many of the tens of thousands of nearly instant likes and retweets that each single post from Donald Trump received during the 2016 campaign, for example, were generated by genuine human Trump supporters? Probably a significant number: he won, after all. But we simply cannot tell how significant the automated Russian amplification of @realdonaldtrump has been; probably not even Twitter knows the precise answer.

Recognizing automated abuse is easy for analysts in principle. One data analytics expert with privileged access to Twitter data was particularly agitated when I asked him what he would ask Twitter in the upcoming Senate hearing on Tuesday: "Do they realize how easy it is to find this stuff?" He added: "It's beyond trivial." Patterns of behaviour are often a giveaway. But spot-the-bot should be even easier for Twitter, as it can see what's happening under the hood. Yet Twitter's internal analysis of Russia-linked automated activity during the 2016 election found only 36,746 accounts—a number that understates the abuse by an order of magnitude.

Twitter's methodology and internal telemetry are misleading: seeing what phone carrier or email address was used, what language settings an account has, or whether a user logs in from a Russian IP address is not enough, as even a moderately cautious adversary can easily camouflage such indicators. Only state-of-the-art network analysis that takes into account subtle patterns of automated behaviour to link bots and abusers to each other (as opposed to static country indicators) will shed light on the full extent of manipulation. Twitter's lowball number, as provided in its Senate testimony on 31 October 2017, also ignored camouflaged, deleted, and suspended bot activity.
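
What such behavioural network analysis might look like, in deliberately toy form: link accounts that retweet the same posts within seconds of each other, then surface the most heavily linked ones. The data layout and the five-second window below are illustrative assumptions; real systems use far richer features.

Code: Select all
from collections import defaultdict
from itertools import combinations

WINDOW_SECONDS = 5  # assumed co-activity window, for illustration only

def coordination_scores(retweets):
    """retweets: iterable of (account, tweet_id, unix_timestamp) tuples."""
    by_tweet = defaultdict(list)
    for account, tweet_id, ts in retweets:
        by_tweet[tweet_id].append((account, ts))

    # Count how often each pair of accounts hits the same tweet within
    # the window; repeated co-occurrence suggests coordination.
    pair_counts = defaultdict(int)
    for engagements in by_tweet.values():
        for (a1, t1), (a2, t2) in combinations(engagements, 2):
            if a1 != a2 and abs(t1 - t2) <= WINDOW_SECONDS:
                pair_counts[frozenset((a1, a2))] += 1

    scores = defaultdict(int)
    for pair, n in pair_counts.items():
        for account in pair:
            scores[account] += n
    return dict(scores)

rts = [("bot_a", 1, 100), ("bot_b", 1, 101), ("human", 1, 4000),
       ("bot_a", 2, 500), ("bot_b", 2, 502)]
print(coordination_scores(rts))  # {'bot_a': 2, 'bot_b': 2}; 'human' never links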

Meanwhile Twitter is granting the same level of privacy protection to hives of anonymous bots commandeered by authoritarian spy agencies as it grants to an American teenager tweeting under her real name from her sofa at home.

As Twitter grew in popularity, its data became more attractive and more valuable. A small number of companies got into the business of analysing and re-selling access to the full take of Twitter data, the so-called "firehose." But Twitter grew alarmed as it lost more control of its user-generated data. By April 2015, the company announced it would take steps "toward developing more direct relationships with data customers." Twitter wanted more control over what third parties do with its data.

One of the hardest questions had been left unresolved over the years of growth: should a post—or an account—be deleted not just from Twitter when the user deletes the content, but also from the databases and from the archives of third party providers? This practice is known as "cross-deletion" in the data analytics community.

Already in 2014, Twitter's policy for developers had a section named Respect Users' Control and Privacy. Developers, the policy stated back then, should "take all reasonable efforts" in order to "delete content that Twitter reports as deleted or expired." In developer jargon this policy is known as cross-deletion: removal of content across different platforms, public and private (other social media platforms like Instagram and Tumblr have similar policies).

The result: data and intelligence companies with access to the full Twitter firehose operated in a gray space on cross-deletion. Some of them kept the deleted data for analysis even after users deleted posts or accounts, or after users un-engaged with content. After all, deletions are likely to be of particular interest for follow-on analysis. If a user tries to hide something, or clean something up, that something is by definition more interesting.
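
Mechanically, honoring cross-deletion is trivial, which is part of what alarmed the analysts: an archive that obeys deletion notices can be sketched in a few lines. The notice shape below is an assumption, loosely modeled on the delete events Twitter's streaming API used to emit.

Code: Select all
# Toy in-memory archive of collected tweets, keyed by tweet ID.
archive = {
    1234: {"user_id": 3, "text": "since deleted by its author"},
    5678: {"user_id": 9, "text": "still live"},
}

def handle_notice(notice):
    """Purge a record when a deletion notice arrives (assumed format)."""
    status = notice.get("delete", {}).get("status")
    if status:
        # Compliance means the record goes, even if it was evidence.
        archive.pop(status["id"], None)

handle_notice({"delete": {"status": {"id": 1234, "user_id": 3}}})
print(archive)  # only tweet 5678 remains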

By mid-2016, Twitter was slowly clawing back full control of the firehose. Data companies now had to submit "use cases" to work with firehose data. Twitter soon was in a position to monitor search queries and analysis by third party providers. By the summer of 2016, some data companies I spoke with adjusted their policies to implement cross-deletion more thoroughly and more quickly for fear of losing access to the Twitter data stream.

A year later, by June 20, 2017—as the full extent of social media-amplified meddling in the 2016 election was in full public view—Twitter made the problem worse. In an update to its policy document (section C-3), Twitter made it even harder for analytics companies to keep deleted or modified content in their archives. No more grey areas to keep data. And Twitter was now surveilling independent analysis done with its firehose data. Every data firm with privileged access I spoke with was in a state of near panic that they could lose access in punishment for doing "forbidden research," as one concerned analyst said.

The result of the API policy change: Russian operators can clean up their traces from the public domain as well as from the archives, thus hiding evidence from ongoing investigations even more easily. Shadowbrokers, likely one of the most aggressive components of the wider disinformation operation, started deleting its posts in late June.

Twitter's poor market performance makes the problem worse. The social news platform, in contrast to Facebook or Google, has never made money. It therefore pays more attention to its shareholders. One of the most important metrics for its stock price is the "active user base." Millions of bots and fake accounts are boosting the numbers, making the active user base appear much larger than it actually is. The open market is thus creating an incentive to hide the bots. Were Twitter forced to admit the true extent of fake accounts and fake traffic on its platform, it could be the death of the little blue bird.

All of this makes Twitter a convenient disinformation platform. Jack Dorsey, the CEO, has built a news platform optimised for disinformation—not by intention, but in effect.

So what should be happening? Twitter should own up to its values, acknowledge it's a public news platform (in contrast to other social media networks), and stop editing the news. After all, Twitter designed its service in a way that gives every single regular post a publicly accessible and unique URL, no login required to view. Such openness has been a key ingredient in the company's recipe for success. Tweets are on the public record, indeed some have become the public record.
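
That openness is visible in the URL scheme itself: every regular tweet lives at a predictable, login-free address built from the author's handle and the status ID. A trivial sketch, with a made-up handle and ID:

Code: Select all
# The handle and status ID below are invented for illustration.
def tweet_url(handle, status_id):
    """Canonical, login-free address of a public tweet."""
    return "https://twitter.com/{}/status/{}".format(handle, status_id)

print(tweet_url("example_user", 926102619324358656))
# https://twitter.com/example_user/status/926102619324358656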

The Economist, in theory, could decide to give its readers the option to unpublish letters to the editor, as published in the news magazine. But it is unimaginable that The Economist would even dare to ask the Library of Congress to unpublish content from its archives. Yet this is what Twitter is enforcing—both figuratively and literally, as Twitter's Library of Congress archiving project, announced in 2010, has stalled.

The US President shouldn't be able to edit history. Russian spies and bots shouldn't be able to falsify the historical record. Nor should a Silicon Valley firm with a highly uncertain future be able to mess with how we remember our past.
https://motherboard.vice.com/en_us/arti ... nformation

Re: The creepiness that is Facebook

Postby 82_28 » Tue Nov 07, 2017 4:36 pm

Facebook to Fight Revenge Porn by Letting Potential Victims Upload Nudes in Advance

Facebook is testing new technology designed to help victims of revenge porn.

This new tool is currently under testing in Australia, and the company says it plans to expand it to other countries if everything goes well.

New tool modeled after anti-child-porn detection systems

This new protection system works similarly to the anti-child-porn detection systems in use at Facebook and other social media giants like Google, Twitter, and Instagram.

It works on a database of file hashes, cryptographic signatures computed for each file.

Facebook says that once an abuser tries to upload an image marked as "revenge porn" in its database, its system will block the upload process. This will work for images shared on the main Facebook service, but also for images shared privately via Messenger, Facebook's IM app.

Potential victims will need to upload nude photos of themselves

The weird thing is that in order to build a database of "revenge porn" file hashes, Facebook will rely on potential victims uploading a copy of the nude photo in advance.

This process involves the victim sending a copy of the nude photo to their own account, via Facebook Messenger. This means uploading a copy of the nude photo to Facebook Messenger, the very same act the victim is trying to prevent.

The victim can then report the photo to Facebook, which will create a hash of the image that the social network will use to block further uploads of the same photo.

This is possible because in April this year, Facebook modified its image reporting process to take into account images showing "revenge porn" acts.

Facebook says it's not storing a copy of the photo, but only computing the file's hash and adding it to its database of revenge porn imagery.
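
In outline, the blocking flow is a plain hash lookup. The sketch below assumes a straight SHA-256 over the file bytes; Facebook's actual matching is more likely PhotoDNA-style perceptual hashing, which survives resizing and re-encoding, whereas a cryptographic hash only catches byte-identical copies, as the last line demonstrates.

Code: Select all
import hashlib

blocked_hashes = set()  # the service keeps only hashes, never the photos

def file_hash(data):
    return hashlib.sha256(data).hexdigest()

def report_image(data):
    """Victim reports an image; only its hash is retained."""
    blocked_hashes.add(file_hash(data))

def allow_upload(data):
    """Reject any upload whose hash matches a reported image."""
    return file_hash(data) not in blocked_hashes

photo = b"\x89PNG...raw image bytes..."
report_image(photo)
print(allow_upload(photo))         # False: an exact copy is blocked
print(allow_upload(photo + b"!"))  # True: one changed byte defeats a plain hash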

Victims who fear that former or current partners may upload a nude photo online can proactively take this step to block the image from ever being uploaded to Facebook and shared among friends.

Australia one of four countries participating in test program

In Australia, where Facebook is currently testing this new program, possible victims can reach out to the Australian government's e-Safety Commissioner on Facebook to get help with the process.

Speaking to ABC (Australian Broadcasting Corporation), a Facebook spokesperson said Australia is one of four countries taking part in this pilot program.

ABC discovered Facebook's secret pilot program while investigating a high-profile revenge porn case in Australia, in which Australian rules footballer Nathan Broad shared a nude photo of a young woman wearing his recently won championship medal on her bare chest. Broad publicly apologized and the victim withdrew her legal complaint.

Back in 2015, Google started a similar program to fight revenge porn images that end up in search results.


https://www.bleepingcomputer.com/news/t ... n-advance/
There is no me. There is no you. There is all. There is no you. There is no me. And that is all. A profound acceptance of an enormous pageantry. A haunting certainty that the unifying principle of this universe is love. -- Propagandhi
