The creepiness that is Facebook

Moderators: DrVolin, 82_28, Elvis, Jeff

Re: The creepiness that is Facebook

Postby seemslikeadream » Wed Apr 04, 2018 7:25 pm

Facebook said the data of most of its 2 billion users has been collected and shared with outsiders


Facebook said Wednesday that most of its 2 billion users likely have had their public profiles scraped by outsiders without the users' explicit permission, dramatically raising the stakes in a privacy controversy that has dogged the company for weeks, spurred investigations in the United States and Europe, and sent the company's stock price tumbling.

The acknowledgment was part of a broader disclosure by Facebook on Wednesday about the ways in which various levels of user data have been taken by everyone from malicious actors to ordinary app developers.

"We’re an idealistic and optimistic company, and for the first decade, we were really focused on all the good that connecting people brings," Chief Executive Mark Zuckerberg said on a call with reporters Wednesday afternoon. "But it’s clear now that we didn’t focus enough on preventing abuse and thinking about how people could use these tools for harm as well."

As part of the disclosure, Facebook for the first time detailed the scale of the improper data collection for Cambridge Analytica, a political data consultancy hired by President Trump and other Republican candidates in the last two federal election cycles. The political consultancy gained access to Facebook information on up to 87 million users, 71 million of whom are Americans, Facebook said. Cambridge Analytica obtained the data to build “psychographic” profiles that would help deliver targeted messages intended to shape voter behavior in a wide range of U.S. elections.

But in research sparked by revelations from a Cambridge Analytica whistleblower last month, Facebook determined that the problem of third-party collection of user data was far larger still and, with the company's massive user base, likely affected a large cross-section of people in the developed world.



“Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped,” the company wrote in its blog post.

The scraping by malicious actors typically involved gathering profile information — including names, profile photos, hometown and any other information that was part of a user's public profile, according to Facebook — by using search and account recovery functions. Facebook said it has now disabled the search function and has restricted the account recovery tool. That personal data became more potent when malicious actors were able to match it with email addresses and phone numbers that they had obtained elsewhere, for instance on the dark web.

The data obtained by Cambridge Analytica was more detailed and extensive, including the names, home towns, work and educational histories, religious affiliations and Facebook “likes” of users, among other data. Other users affected were in countries including the Philippines, Indonesia, U.K., Canada and Mexico.

Facebook initially had sought to downplay the problem, saying in March only that 270,000 people had responded to a survey on an app created by the researcher in 2014. That netted Cambridge Analytica the data on the friends of those who responded to the survey, without their permission. But Facebook declined to say at the time how many other users may have had their data collected in the process. The whistleblower, Christopher Wylie, a former researcher for the company, said the real number of affected people was at least 50 million.

Wylie tweeted on Wednesday afternoon that Cambridge Analytica could have obtained even more than 87 million profiles. "Could be more tbh," he wrote, using an abbreviation for "to be honest."

Cambridge Analytica on Wednesday responded to Facebook's announcement by saying that it had licensed data on 30 million users. Facebook banned Cambridge Analytica from its platform last month for obtaining the data under false pretenses.


Facebook's announcement, made near the bottom of a Wednesday afternoon blog post about plans to restrict future access to data, underscores the severity of a data mishap that appears to have affected about one out of every four Americans and sparked widespread outrage at the company's careless handling of information on its users. Before 2015, personal data on users and their Facebook friends was easily and widely available to app developers.

With its moves over the past week, Facebook is embarking on a major shift in its relationship with third-party app developers that have used Facebook’s vast network to expand their businesses. What was largely an automated process will now involve developers agreeing to “strict requirements,” the company said in its blog post Wednesday. The 2015 policy change curtailed developers’ abilities to access data about people’s friends networks but left open many loopholes that the company tightened on Wednesday.

The news quickly reverberated on Capitol Hill, where lawmakers are set to grill Zuckerberg at a series of hearings next week.

"The more we learn, the clearer it is that this was an avalanche of privacy violations that strike at the core of one of our most precious American values – the right to privacy," said Sen. Ed Markey (D-Mass.), who serves on the Senate Commerce Committee, which has called on Zuckerberg to testify at a hearing next week.

“This latest revelation is extremely troubling and shows that Facebook still has a lot of work to do to determine how big this breach actually is,” said Rep. Frank Pallone Jr. (D-N.J.), the top Democrat on the House Energy and Commerce Committee, which will hear from Zuckerberg on Wednesday.

“I’m deeply concerned that Facebook only addresses concerns on its platform when it becomes a public crisis, and that is simply not the way you run a company that is used by over 2 billion people,” he said. “We need to know how they are going to fix this problem next week at our hearing.”


Facebook announced plans on Wednesday to add new restrictions to how outsiders can gain access to this data, the latest steps in a years-long process by the company to improve its damaged reputation as a steward of the personal privacy of its users.

Developers who in the past could get access to people's relationship statuses, calendar events, private Facebook posts and much more will now be cut off from that access or required to go through a much stricter process to obtain the information.

Cambridge Analytica, which collected this information with the help of Cambridge University psychologist Aleksandr Kogan, was founded by a multimillion-dollar investment by hedge-fund billionaire Robert Mercer and headed by his daughter, Rebekah Mercer, who was the company's president, according to documents provided by Wylie. Serving as vice president was conservative strategist Stephen K. Bannon, who also was the head of Breitbart News. He has since left both jobs and also his post as top White House adviser to Trump.

Until Wednesday, apps that let people input a Facebook event into their calendar could also automatically import lists of all the people who attended that event, Facebook said. Administrators of private groups, some of which have tens of thousands of members, could also let apps scrape the Facebook posts and profiles of members of that group. App developers who want this access will now have to prove their activities benefit the group. Facebook will now need to approve tools that businesses use to operate Facebook pages. A business that uses an app to help it respond quickly to customer messages, for example, will not be able to do so automatically. Developers’ access to Instagram will also be severely restricted.

Facebook is also banning apps from accessing information about users' religious or political views, relationship status, education, work history, fitness activity, book reading habits, music listening, news reading activity, video watching and games. Data brokers and businesses collect this type of information to build profiles of their customers' tastes.

Facebook last week said it is also shutting down access to data brokers who use their own data to target customers on Facebook.

Facebook’s broad changes to how data is used apply mostly to outsiders and third parties. Facebook is not limiting the data the company itself can collect, nor is it restricting its ability to profile users to enable advertisers to target them with personalized messages. One piece of data Facebook said it would stop collecting was the time of phone calls, a response to outrage from users of Facebook’s messenger service who discovered that allowing Facebook to access their phone contact list was giving the company access to their call logs.

Correction: An earlier version of the story said that malicious actors used Facebook's tools to obtain email addresses and phone numbers. In fact, the malicious actors used email addresses and phone numbers that they had previously gathered to obtain other personal information, such as names, hometown, profile photos and other public information from Facebook profiles.
Posts: 29113
Joined: Wed Apr 27, 2005 11:28 pm
Location: into the black
Blog: View Blog (83)

Re: The creepiness that is Facebook

Postby seemslikeadream » Thu Apr 05, 2018 11:08 am

In the final weeks of the 2016 elections, Google and Facebook worked with a dark money group to target anti-Muslim ads like this at swing voters. Docs obtained by OpensecretsDC show Robert Mercer was the group's largest donor, giving $2 million

EXCLUSIVE: Robert Mercer backed a secretive group that worked with Facebook, Google to target anti-Muslim ads at swing voters

by Robert Maguire on April 5, 2018


As the final weeks of the 2016 elections ticked down, voters in swing states like Nevada and North Carolina began seeing eerie promotional travel ads as they scrolled through their Facebook feeds or clicked through Google sites.

In one, a woman with a French accent cheerfully welcomes visitors to the “Islamic State of France,” where “under Sharia law, you can enjoy everything the Islamic State of France has to offer, as long as you follow the rules.”

The video has a Man in the High Castle feel. Iconic French tourist sites are both familiar and transformed — the Eiffel Tower is capped with a star and crescent and the spires of Notre Dame are replaced with the domed qubba of a mosque.

The Mona Lisa is shown looking, the ad says, “as a woman should,” covered in a burka.

If it wasn’t already clear that the ad was meant to stoke viewers’ fears of imminent Muslim conquest, the video is interspersed with violent imagery. Three missiles are seen flying through the sky as the video opens. Blindfolded men are shown kneeling with guns pointed at their heads, and children are shown training with weapons “to defend the caliphate.”

This is one of three mock travel ads.

Another, for the “Islamic State of Germany,” invited visitors to “celebrate the arranged marriages of future jihadi soldiers” at a pork- and alcohol-free Oktoberfest.

“You can even sell your daughter or sister to be married,” the ad notes enthusiastically while showing the covered face of a woman wearing a black burka.

And just to bring it all home, days before the election, the group made an “Islamic States of America” travel promo, where Syrian refugees have overtaken America. In the ad, the iconic Hollywood sign reads “Allahu Akbar,” and the Statue of Liberty wears a burka and holds a star and crescent. In the video, the 9/11 Memorial in New York City is celebrated as an Islamic victory.

Most Americans have never heard of the far-right neoconservative nonprofit that ran the ads. It has no employees and no volunteers, and it’s run out of the offices of a Washington, D.C. law firm. More importantly, most voters never saw the ads.

And that was by design.

The group, a social welfare organization called Secure America Now, worked hand in hand with Facebook and Google to target their message at voters in swing states who were most likely to be receptive to them.

And new tax documents obtained by OpenSecrets show that the money fueling the group came mostly from just three donors, including the secretive multimillionaire donor Robert Mercer.

As a 501(c)(4) social welfare organization, Secure America Now (SAN) is not required to disclose its donors to the public, but it is required to report them to the IRS. This information is usually redacted when provided for public inspection. However, when OpenSecrets called to request the group's 2016 return, its accounting firm provided an unredacted copy.

The filing shows the largest individual contribution, $2 million, came from Robert Mercer, the reclusive hedge fund investor who spent millions in 2016 helping Donald Trump capture the White House.

Mercer has become a household name not only for his political spending in recent years and his peculiar interests — such as part-timing as a New Mexico police officer or funding stockpiles of urine in the Oregon mountains — but also for bankrolling the alt-right and the data firm Cambridge Analytica, both of which helped Trump clinch victory in 2016.

As OpenSecrets reported last month, SAN received another $2 million from the 45Committee, another pro-Trump dark money group, which is itself partly funded by other dark money groups.

The dark money churn is real.

Ronald S. Lauder, the heir to the Estee Lauder fortune, gave $1.1 million. Lauder has long been a supporter of conservative and pro-Israel causes, and his longtime political adviser Allen Roth serves as president of Secure America Now.

The remaining $60,000 in reported contributions came from three donors: Brad Anderson, the former CEO of Best Buy; Foster Friess, the investor and longtime Republican donor; and Olympus Ventures LLC, which is tied to a foundation created by Dick Schulze, the founder of Best Buy.

Social welfare organizations like Secure America Now are not supposed to have politics as their primary purpose, but a combination of vague rules and ineffective oversight make it easy for such groups to spend heavily in elections, without being required to disclose their donors.

SAN reported more than $1 million in political spending to the Federal Election Commission in 2016 — in the form of “independent expenditures,” which means they appealed directly to voters, asking them to support or oppose specific candidates for office.

While there are some differences between what the FEC and the IRS consider political spending, such direct calls for voters to support or oppose candidates are generally considered to be unambiguously political. Yet, in filing their tax returns, SAN told the IRS it only spent $124,192 on politics.

It’s hard to discern where this discrepancy stems from. Marc Owens, a tax lawyer who used to head the IRS Exempt Organizations Division, pointed out in an email to OpenSecrets that the “IRS ‘facts and circumstances’ standard has always been considered to sweep in more activity/expenditures than the FEC ‘independent expenditure’ standard.”

Yet, the IRS isn’t likely to catch the discrepancy on its own. The agency does not systematically analyze FEC reported political spending to catch discrepancies in what groups report on their tax returns. And only seven out of every 1,000 tax returns are audited by the agency, and even then the IRS is not always scrutinizing a group’s political activity.

Even with $1 million in political spending, SAN would find itself well below the threshold for excessive political activity — which, absent specific bright lines from the IRS, is generally assumed to be less than half of a group’s overall spending.

But the only way for the IRS to truly conclude that a group like SAN engaged in too much campaign intervention is to conduct a “facts and circumstances” analysis of the group’s finances. And that’s why those parody travel promotions are so important.

The Islamic State ads offer an interesting test of areas where the IRS definition of campaign intervention might be more expansive than what election law defines as political, even though they don’t mention a candidate. As a Congressional Research Service report noted in 2012, “the standard for determining whether something is campaign activity under the IRC is whether it exhibits a preference for or against a candidate” — not, that is, whether the candidate is actually mentioned.

“Preference can be subtle,” the report notes, “and the IRS takes the position that it is not always necessary to expressly mention a candidate by name.”

So, the fact that SAN didn’t expressly mention candidates in its ads does not mean that they were not political. And facts that have been reported since the election suggest that there is plenty of evidence supporting the conclusion that SAN’s digital ads were aimed at influencing the election.

For one, as Bloomberg reported last October, internal reports from the ad agency that ran SAN’s digital campaign, and individuals who were involved with the effort, showed that SAN worked with Google and Facebook to target the ads at swing-state voters.

“Facebook advertising salespeople, creative advisers and technical experts competed with sales staff from Alphabet Inc.’s Google for millions in ad dollars from Secure America Now,” the Bloomberg report noted.

The ads weren’t targeted broadly at the American public, but at the Americans most likely to decide the winners of critical Senate races and the campaign for the White House.

Other ads in the campaign highlight this aim. For example, other digital ads run by SAN targeted specific candidates.

“STOP SUPPORT OF TERRORISM. VOTE AGAINST CATHERINE CORTEZ MASTO,” read one ad, referencing the Democratic candidate for Nevada’s open senate seat.

The ads “were viewed millions of times on Facebook and Google,” Bloomberg wrote, citing internal documents. And Facebook went so far as to use Secure America Now as a test case for new technology, sending out 12 different versions of the video to see which was the most popular.

One other reason the IRS might rule that the ads entail campaign intervention, despite the absence of any mention of a candidate, is that the subject of Muslims entering the country was what Owens refers to as “an issue distinguishing the candidates for a given office.”

In December 2015, Donald Trump called for “a total and complete shutdown of Muslims entering the United States.” The month before, he had referred to Syrian refugees as possibly “one of the great Trojan horses.”

One of the most defining differences between Trump and his opponent was his views on Muslims in general, and Muslim refugees in particular. SAN’s ads were likely trying to exploit that difference when they targeted swing voters, with Google and Facebook’s help.

The tax filing itself offers little insight into how much these campaigns actually cost, beyond the “millions” cited by Bloomberg. Large portions of the group’s spending are a black hole. More than $7.4 million of SAN’s spending — 87 percent of its overall outlays — went toward “membership educational devlp” and “direct mail educational prgm,” which aren’t defined or detailed anywhere in the filing.
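Those two figures let you back out the group's total outlays, which the filing never states directly, and also show how far below the assumed half-of-spending threshold SAN's reported $1 million in political spending sits. A quick check (treating the $7.4 million and the 87 percent share as exact):

```python
# Figures from the filing as reported above.
educational = 7.4e6   # "educational" line items
share = 0.87          # their stated fraction of overall outlays

total_outlays = educational / share   # implied total: about $8.5 million
political = 1.0e6                     # independent expenditures reported to the FEC

print(f"implied total outlays: ${total_outlays:,.0f}")
print(f"political share of outlays: {political / total_outlays:.0%}")  # roughly 12%
```

At roughly 12 percent of outlays, SAN's FEC-reported political spending is nowhere near the informal 50 percent ceiling, which is why a "facts and circumstances" review of the ads themselves would matter.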

Secure America Now representatives did not respond to any of the questions OpenSecrets sent, other than to confirm receipt of the questions.

Neither the IRS nor the FEC are likely to look into SAN’s activities any time soon, if at all, so the group will continue to play a role in public life.

A recent New York Times report detailing internal emails between top Trump fundraiser Elliott Broidy and a political adviser to leaders in the United Arab Emirates and Saudi Arabia noted that Broidy referenced SAN as “one of the groups I am working with” to push the Trump administration to fill key positions with individuals favorable to those Persian Gulf leaders.

The Islamic States of America ad above was the last in a series of three ads the group ran in 2016. Here's the Islamic State of France, which shows the Mona Lisa wearing a burka, and the spires of the Notre Dame replaced with the domes of a mosque

In the Islamic State of Germany ad, visitors are invited to “celebrate the arranged marriages of future jihadi soldiers” at a pork- and alcohol-free Oktoberfest. “You can even sell your daughter or sister to be married.”
Posts: 29113
Joined: Wed Apr 27, 2005 11:28 pm
Location: into the black
Blog: View Blog (83)

Re: The creepiness that is Facebook

Postby seemslikeadream » Fri Apr 06, 2018 8:10 am

god I have to hand it to you Nordic, you were right about this one, although I don't think you knew how right you were

Facebook sent a doctor on a secret mission to ask hospitals to share patient data

Facebook was in talks with top hospitals and other medical groups as recently as last month about a proposal to share data about the social networks of their most vulnerable patients.
The idea was to build profiles of people that included their medical conditions, information that health systems have, as well as social and economic factors gleaned from Facebook.
Facebook said the project is on hiatus so it can focus on "other important work, including doing a better job of protecting people's data."
Christina Farr | @chrissyfarr
Facebook has asked several major U.S. hospitals to share anonymized data about their patients, such as illnesses and prescription info, for a proposed research project. Facebook was intending to match it up with user data it had collected, and help the hospitals figure out which patients might need special care or treatment.

The proposal never went past the planning phases and has been put on pause after the Cambridge Analytica data leak scandal raised public concerns over how Facebook and others collect and use detailed information about Facebook users.

"This work has not progressed past the planning phase, and we have not received, shared, or analyzed anyone's data," a Facebook spokesperson told CNBC.

But as recently as last month, the company was talking to several health organizations, including Stanford Medical School and American College of Cardiology, about signing the data-sharing agreement.

While the data shared would obscure personally identifiable information, such as the patient's name, Facebook proposed using a common computer science technique called "hashing" to match individuals who existed in both sets. Facebook says the data would have been used only for research conducted by the medical community.

The project could have raised new concerns about the massive amount of data Facebook collects about its users, and how this data can be used in ways users never expected.

That issue has been in the spotlight after reports that Cambridge Analytica, a political research organization that did work for Donald Trump, improperly got ahold of detailed information about Facebook users without their permission. It then tried to use this data to target political ads to them.

Facebook said on Wednesday that as many as 87 million people's data might have been shared this way. The company has recently announced new privacy policies and controls meant to restrict the type of data it collects and shares, and how that data can be used.

Led out of Building 8

The exploratory effort to share medical-related data was led by an interventional cardiologist named Freddy Abnousi, who describes his role on LinkedIn as "leading top-secret projects." It was under the purview of Regina Dugan, the head of Facebook's "Building 8" experimental projects group, before she left in October 2017.

Facebook's pitch, according to two people who heard it and one who is familiar with the project, was to combine what a health system knows about its patients (such as: person has heart disease, is age 50, takes 2 medications and made 3 trips to the hospital this year) with what Facebook knows (such as: user is age 50, married with 3 kids, English isn't a primary language, actively engages with the community by sending a lot of messages).

The project would then figure out if this combined information could improve patient care, initially with a focus on cardiovascular health. For instance, if Facebook could determine that an elderly patient doesn't have many nearby close friends or much community support, the health system might decide to send over a nurse to check in after a major surgery.

The people declined to be named as they were asked to sign confidentiality agreements.

Facebook provided a quote from Cathleen Gates, the interim CEO of the American College of Cardiology, explaining the possible benefits of the plan:

"For the first time in history, people are sharing information about themselves online in ways that may help determine how to improve their health. As part of its mission to transform cardiovascular care and improve heart health, the American College of Cardiology has been engaged in discussions with Facebook around the use of anonymized Facebook data, coupled with anonymized ACC data, to further scientific research on the ways social media can aid in the prevention and treatment of heart disease—the #1 cause of death in the world. This partnership is in the very early phases as we work on both sides to ensure privacy, transparency and scientific rigor. No data has been shared between any parties."

Health systems are notoriously careful about sharing patient health information, in part because of state and federal patient privacy laws that are designed to ensure that people's sensitive medical information doesn't end up in the wrong hands.

To address these privacy laws and concerns, Facebook proposed to obscure personally identifiable information, such as names, in the data being shared by both sides.

However, the company proposed using a common cryptographic technique called hashing to match individuals who were in both data sets. That way, both parties would be able to tell when a specific set of Facebook data matched up with a specific set of patient data.
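Hashed record linkage of the kind described here is simple to sketch. In this minimal illustration (the identifier field, the salt, and the record contents are all hypothetical; the article doesn't say which identifiers Facebook proposed to hash), both sides hash the same identifier the same way and join on the digests:

```python
import hashlib

def hash_id(value: str, salt: str = "shared-salt") -> str:
    """Hash an identifier so the raw value never has to be exchanged."""
    normalized = value.strip().lower()  # both parties must normalize identically
    return hashlib.sha256((salt + normalized).encode()).hexdigest()

# Hospital side: records keyed by hashed patient identifier, names removed.
patients = {hash_id("alice@example.com"): {"condition": "heart disease", "er_visits": 3}}

# Facebook side: records keyed the same way.
users = {hash_id("alice@example.com"): {"age": 50, "active_community_ties": False}}

# Individuals present in both data sets can be joined on the digest alone.
matches = {h: {**patients[h], **users[h]} for h in patients.keys() & users.keys()}
```

Note that hashing low-entropy identifiers such as emails or phone numbers is only weakly anonymizing: anyone holding a candidate list of identifiers can hash it and test for membership, which is one reason privacy experts were wary of calling such data "anonymized."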

The issue of patient consent did not come up in the early discussions, one of the people said. Critics have attacked Facebook in the past for doing research on users without their permission. Notably, in 2014, Facebook manipulated hundreds of thousands of people's news feeds to study whether certain types of content made people happier or sadder. Facebook later apologized for the study.

Health policy experts say that this health initiative would be problematic if Facebook did not think through the privacy implications.

"Consumers wouldn't have assumed their data would be used in this way," said Aneesh Chopra, president of a health software company specializing in patient data called CareJourney and the former White House chief technology officer.

"If Facebook moves ahead (with its plans), I would be wary of efforts that repurpose user data without explicit consent."

When asked about the plans, Facebook provided the following statement:

"The medical industry has long understood that there are general health benefits to having a close-knit circle of family and friends. But deeper research into this link is needed to help medical professionals develop specific treatment and intervention plans that take social connection into account."

"With this in mind, last year Facebook began discussions with leading medical institutions, including the American College of Cardiology and the Stanford University School of Medicine, to explore whether scientific research using anonymized Facebook data could help the medical community advance our understanding in this area. This work has not progressed past the planning phase, and we have not received, shared, or analyzed anyone's data."

"Last month we decided that we should pause these discussions so we can focus on other important work, including doing a better job of protecting people's data and being clearer with them about how that data is used in our products and services."

Facebook has taken only tentative steps into the health sector thus far, such as its campaign to promote organ donation through the social network. It also has a growing "Facebook health" team based in New York that is pitching pharmaceutical companies to invest their ample ad budgets in Facebook by targeting users who "liked" a health advocacy page or fit a certain demographic profile.

Jeremy Ashkenas

You know, I really hate to keep beating a downed zuckerberg, but to the extent that expensive patents indicate corporate intent and direction —

Come along for a ride, and let’s browse a few of Facebook’s recent U.S.P.T.O. patent applications…
4:20 PM - 4 Apr 2018

In “Systems and methods of eye tracking control”, Facebook describes a system that watches your eye movements to track “the object of interest,” or “point of regard.”

Special infrared LEDs are used to shine into your pupil and cornea to determine gaze.

In “Soft matching user identifiers,“ Facebook describes how sending an innocuous event invite to your uploaded contact can trigger a “bounce-back” message, including cookies, device UUIDs, and other unique information for identity matching purposes.

Next! Facebook explains a “user influence score,” and how

“the user influence score can be decreased when the sender is reported to be associated, within a specified time period, with other users who are reported to be associated with undesired content.”

In another patent, Facebook writes:

“…the log may record information about actions users perform on a third party system, including webpage viewing histories, advertisements that were engaged, purchases made, and other patterns from shopping and buying”

In (now newsy!) “Dynamic enforcement of privacy settings…” FB dreams of:

“a message from the social networking system to the external system requesting the external system to cease using the information obtained in the previously transmitted response.”

In “Sentiment polarity for users,” Facebook reads your comments for positive or negative “affinity scores”, and generates “trust scores” for strong feelings.

“Data sets from trusted users are then used as a training set to train a machine learning model”

In “Identifying and using identities”:

“A list of people known to a user is maintained.”

“the people known to a user may be inferred by monitoring the actions of the user.”

“identifiers also may be inferred based on indicia other than user actions.”

In “Implicit Contacts in an Online Social Network” Facebook says it

“may determine the social-graph affinity of various social-graph entities for each other”

“the overall affinity may change based on continued monitoring of the actions or relationships“

In a new travel rec’s patent, FB writes:

“The system may monitor such actions on the online social network, on a third-party system, on other suitable systems, or any combination thereof. Any suitable type of user actions may be tracked or monitored.”

In our final patent, Facebook discusses advertising based on what you browse:

“The social networking system monitors implicit interactions between the user and objects of the social networking system with which the user has not established a connection”

Whew! That was a lot of patent-ese!

But I think — as one might say at the Times — a portrait emerges of the kind of surveillance machine Facebook aspires to continue constructing.

For a good wine pairing, follow up with @chetfaliszek’s thread on FB+VR.
Posts: 29113
Joined: Wed Apr 27, 2005 11:28 pm
Location: into the black
Blog: View Blog (83)

Re: The creepiness that is Facebook

Postby seemslikeadream » Tue Apr 10, 2018 6:01 pm

Zuck's booster seat at the hearing today


Best question so far: Would you be comfortable sharing the name of the hotel you stayed at last night?

Dick Durbin

Re: The creepiness that is Facebook

Postby Grizzly » Tue Apr 10, 2018 6:03 pm

What happened to Nordic? Is he banned?
If Barthes can forgive me, “What the public wants is the image of passion Justice, not passion Justice itself.”
Posts: 2082
Joined: Wed Oct 26, 2011 4:15 pm
Blog: View Blog (0)

Re: The creepiness that is Facebook

Postby seemslikeadream » Tue Apr 10, 2018 6:07 pm

I don't know where he is and no he was not banned

Facebook, Cambridge Analytica hit with class-action lawsuit

Facebook and Cambridge Analytica were hit with a class-action lawsuit on Tuesday, just hours before Mark Zuckerberg is slated to testify to Congress about how the political consulting firm managed to improperly obtain data on 87 million Facebook users.

The lawsuit was filed by seven people who were swept up in the trove of data that wound up in the hands of Cambridge Analytica, a company that did work for President Trump's campaign ahead of the 2016 election.

“Facebook has made billions of dollars selling advertisements targeted to its customers, and in this instance made millions selling advertisements to political campaigns that developed those very ads on the back of their customers’ own stolen personal information,” Richard Fields, one of the attorneys for the plaintiffs, said in a statement. “That’s unacceptable, and they must be held accountable.”
“We are committed to vigorously enforcing our policies to protect people’s information," Paul Grewal, Facebook's deputy general counsel, said in a statement. "We will take whatever steps are required to see that this happens.”

Cambridge Analytica did not immediately respond when asked to comment on the lawsuit.

The class-action lawsuit was filed on behalf of all American and British users among the 87 million who were unwittingly swept up in the data leak.

The lawsuit alleges that Facebook knowingly built its platform to allow third parties such as Aleksandr Kogan, the academic who obtained the data through an app on the site, to “steal users’ personal information.”

The filing also argues that Facebook failed to protect its users’ information and didn’t disclose the leak until it came out in media reports.

The lawsuit was filed in U.S. District Court in Delaware.

Bobby Rush: What is the difference between Facebook's methodology and J Edgar Hoover's?


The Chutzpah of The Hill Insinuating The Valley is Shady....

Postby Cordelia » Thu Apr 12, 2018 6:00 pm

Didn't mean to submit....
Last edited by Cordelia on Fri Apr 13, 2018 6:47 am, edited 1 time in total.
"We may not choose the parameters of our destiny. But we give it its content." Dag Hammarskjold ~ 'Waymarks'
User avatar
Posts: 2992
Joined: Sun Oct 11, 2009 7:07 pm
Location: USA
Blog: View Blog (0)

The Chutzpah of The Hill Insinuating The Valley is Shady....

Postby Cordelia » Thu Apr 12, 2018 6:10 pm

"We may not choose the parameters of our destiny. But we give it its content." Dag Hammarskjold ~ 'Waymarks'

Re: The creepiness that is Facebook

Postby Iamwhomiam » Thu Apr 12, 2018 10:15 pm

I always rejected any request to join a game or app. Officially, I only rejected three apps, none of which I joined. Still, a friend of a friend got my data stolen by an app they joined.

I remember long ago telling Wombat I had nothing to hide, but I had no idea how evolved information gathering had grown to be so intrusive as to be predictive of personal behaviors.

I've always known that simply connecting to the internet is akin to signing into an FBI app. I believed all that would be collected was my log-in location, aside from the public data I personally provided, and perhaps the sites I visited and, of course, whom I communicated with via email, though I felt the contents of our messages would remain private. (But not from the Feds)

I think we're already slaves, laying the foundation for AI advancement and dominance.
User avatar
Posts: 5786
Joined: Thu Sep 27, 2007 2:47 am
Blog: View Blog (0)

Re: The creepiness that is Facebook

Postby Jerky » Thu Apr 12, 2018 10:31 pm

Same game, different equipment.

User avatar
Posts: 2029
Joined: Fri Apr 22, 2005 6:28 pm
Location: Toronto, ON
Blog: View Blog (0)

Re: The creepiness that is Facebook

Postby seemslikeadream » Fri Apr 13, 2018 8:12 am

How Facebook Blew It

A months-long investigation uncovered concerns that Cambridge Analytica may have used improperly obtained academic data to craft its psychometric profiles.

In early 2014, a couple years before a bizarre election season marred by waves of false stories and cyberattacks and foreign disinformation campaigns, thousands of Americans were asked to take a quiz. On Amazon’s Mechanical Turk platform, where people are paid to perform microtasks, users would be paid $1 or $2 apiece to answer some questions about their personality and turn over their Facebook data and that of all of their friends. A similar request was also distributed on Qualtrics, a survey website. Soon, some people noticed that the task violated Amazon’s own rules.

“Umm . . . log into Facebook so we can take ‘some demographic data, your likes, your friends’ list, whether your friends know one another, and some of your private messages,'” someone wrote on a message board in May 2014. “MESSAGES, even?! I hope Amazon listens to all of our violation flags and bans the requester. That is ridiculous.” Another quiz taker ridiculed the quiz-maker’s promise to protect user data: “But its totally safe dud[e], trust us.”

Collecting the data was Aleksandr Kogan, a Cambridge University psychology lecturer who was being paid by the political consulting firm Cambridge Analytica to gather as much Facebook data on as many Americans as possible in a number of key U.S. states. The firm, backed by right-wing billionaire donor Robert Mercer, would later claim to have helped Trump win, using an arsenal that included, as its then CEO boasted in September 2016, a psychometric “model to predict the personality of every single adult in the United States of America.” With enough good data, the idea was that Cambridge Analytica could slice up the electorate into tiny segments, microtarget just the right voters in the right states with emotionally tailored, under-the-radar online ads, and, in a very tight election, gain an advantage by, in theory, gerrymandering the mind of the electorate.

Four years and 87 million user profiles later, the data harvest has become the center of a growing firestorm on multiple continents, scaring off users and investors, dispatching lawmakers, regulators, and many gaggles of reporters, and leaving executives at the social network scrambling to figure out their biggest crisis to date. Shocking and outrageous as the story may be, what’s most surprising about what happened is that it should have been surprising at all.

Days after exposés by Carole Cadwalladr and other reporters at the Guardian, the New York Times, and Britain’s Channel 4, CEO Mark Zuckerberg took to Facebook to explain how the company was addressing the problem, and to offer his own timeline of events. He said that Facebook first learned of the Cambridge Analytica project in December 2015 from a Guardian article, and that it was subsequently assured that the data had been deleted. But the company offered few details about how exactly it pursued the pilfered data that December. It has also said little about Kogan’s main collaborator, Joseph Chancellor, a former postdoctoral researcher at the university who began working at Facebook that same month. Facebook did not respond to specific requests for comment, but it has said it is reviewing Chancellor’s role.

Concerns about the Cambridge Analytica project—also detailed last year by reporters for Das Magazin and The Intercept—first emerged in 2014 inside Cambridge University’s Psychometrics Center. As the data harvest was under way that summer, the school turned to an external arbitrator in an effort to resolve a dispute between Kogan and his colleagues. According to documents and a person familiar with the issue who spoke to Fast Company, there were concerns about Cambridge Analytica’s interest in licensing the university’s own cache of models and Facebook data.

There were also suspicions that Kogan, in his work for Cambridge Analytica, may have improperly used the school’s own academic research and database, which itself contained millions of Facebook profiles.

Kogan denied he had used academic data for his side project, but the arbitration ended quickly and inconclusively after he withdrew from the process, citing a nondisclosure agreement with Cambridge Analytica. A number of questions were left unanswered. The school considered legal action, according to a person familiar with the incident, but the idea was ultimately dropped over concerns about the time and cost involved in bringing the research center and its students into a potentially lengthy and ugly dispute.

Michal Kosinski, who was then deputy director of the Psychometrics Center, told Fast Company in November that he couldn’t be sure that the center’s data hadn’t been improperly used by Kogan and Chancellor. “Alex and Joe collected their own data,” Kosinski wrote in an email. “It is possible that they stole our data, but they also spent several hundred thousand on [Amazon Mechanical Turk] and data providers—enough to collect much more than what is available in our sample.”

A Cambridge University spokesperson said in a statement to Fast Company that it had no evidence suggesting that Kogan had used the Center’s resources for his work, and that it had sought and received assurances from him to that effect. But University officials have also contacted Facebook requesting “all relevant evidence in their possession.” He emphasized that Cambridge Analytica has no affiliation with the University. Chancellor and Kogan did not respond to requests for comment.

The university’s own database, with over 6 million anonymous Facebook profiles, remains perhaps the largest known public cache of Facebook data for research purposes. For five years, Kosinski and David Stillwell, then a research associate, had used a popular early Facebook app that Stillwell had created, “myPersonality,” to administer personality quizzes and collect Facebook data, with users’ consent. In a 2013 paper in the Proceedings of the National Academy of Sciences, they used the database to demonstrate how people’s social media data can be used to score and predict human personality traits with surprising accuracy.

Cambridge University’s psychometric predictor. [Image: Apply Magic Sauce]
Kogan, who ran his own lab at Cambridge devoted to pro-sociality and well-being, first discussed psychometrics with Cambridge Analytica in London in January 2014. He subsequently approached Stillwell with an offer to license the school’s prediction models on behalf of Cambridge Analytica’s affiliate, SCL Group. Such arrangements aren’t unusual—universities regularly license their research for commercial purposes to obtain additional funding—but the negotiations failed. Kogan then enlisted Chancellor, and the two co-founded a company, Global Science Research, to build their own cache of Facebook data and psychological models.
Facebook said that, under the terms Kogan submitted to it, his permission to harvest large quantities of data was strictly restricted to academic use, and that he broke its rules by sharing the data with a third party. But Kogan contended last week that he had informed Facebook of his intent for the data: prior to the harvest, he had updated his app’s terms of service to tell users that the data would be used not for academic purposes, but rather that GSR would be permitted to “sell, license… and archive” it. Facebook did not respond to an emailed request to clarify that discrepancy.

Apart from questions over how Kogan may have used the university’s own data and models, his colleagues soon grew concerned about the intentions of Cambridge Analytica and SCL, whose past and current clients include the British Ministry of Defense, the U.S. Department of State, NATO, and a slew of political campaigns around the world. At the Psychometrics Center, there were worries that the mere association of Kogan’s work with the university and its database could hurt the department’s reputation.

The external arbitration began that summer, and was proceeding when Kogan withdrew from the process. At the Psychometrics Center’s request, Kogan, Chancellor, and SCL offered certification in writing that none of the university’s intellectual property had been sent to the firm, and the matter was dropped.

Within a few months, Kogan and Chancellor had finished their own data harvest, at a total cost to Cambridge Analytica (for over 34 million psychometric scores and data on some 87 million Facebook profiles) of around $800,000, or roughly a penny per profile. By the summer of 2015, Chancellor boasted on his LinkedIn page that Global Science Research now possessed “a massive data pool of 40-plus million individuals across the United States—for each of whom we have generated detailed characteristic and trait profiles.”

In December 2015, as Facebook began to investigate the data harvest, Chancellor began working at Facebook Research. (His interests, according to his company page, include “happiness, emotions, social influences, and positive character traits.”) The social network would apparently continue looking into the pilfered data over the following months. As late as April 2017, a Facebook spokesperson told the Intercept, “Our investigation to date has not uncovered anything that suggests wrongdoing.”

Amazon would eventually ban Kogan and GSR over a year later, in December 2015, after a reporter for the Guardian described what was happening. “Our terms of service clearly prohibit misuse,” a spokesperson for Amazon told Fast Company. By then, however, it was too late. Thousands of Americans, along with their friends—millions of U.S. voters who never even knew about the quizzes—were unwittingly drawn into a strange new kind of information war, one waged not by Russians but by Britons and Americans.

Related: How Amazon Helped Cambridge Analytica Harvest Americans’ Data

There are still many open questions about what Cambridge Analytica did. David Carroll, an associate professor at Parsons School of Design, filed a claim this month in the U.K. in an effort to obtain all the data the company has on him—not just the prediction scores in his voter profile, but the data from which they were derived—and to resolve a litany of mysteries he’s been pursuing for over a year. “Where did they get the data, what did they do with it, who did they share it with, and do we have a right to opt out?”

Meanwhile, special counsel Robert Mueller, who’s investigating possible links between the Trump campaign and Russia, has his own burning questions about Cambridge Analytica’s work and where its data may have gone. Last year, his team requested emails from the data firm, and it also obtained search warrants to examine the records of Facebook, whose employees worked in the same Trump campaign office as Cambridge Analytica’s. Mueller’s team also interviewed Trump son-in-law Jared Kushner and Trump campaign staffers, and subpoenaed Steve Bannon. The former Trump adviser was a vice president at Cambridge Analytica from 2014 to mid-2016, when he joined the Trump campaign as its chairman. Former Trump adviser Lt. Gen. Michael Flynn, who pleaded guilty in the Mueller probe to lying about his conversations with Russian officials, disclosed last August that he was also a paid adviser to Cambridge affiliate SCL Group.

Cambridge Analytica repeated its claim in a statement last month that it deleted the Facebook data in 2015, that it undertook an internal audit to ensure it had in 2016, and that it “did not use any GSR data in the work we did in the 2016 U.S. presidential election.” Christopher Wylie, the ex-Cambridge Analytica contractor who disclosed the data-gathering operation to reporters, said this was “categorically not true.”

Kosinksi is also skeptical about Cambridge Analytica’s claims. “CA would say anything to reduce the legal heat they are in,” he wrote in an email last November, when asked about the company’s contradictory accounts. “Specifically, Facebook ordered them to delete the data and derived models; admitting that they used the models would put them in trouble not only with investigators around the world, but also Facebook.”

“I am not sure why would anyone even listen to what they are saying,” he added. “As they must either be lying now, or they were lying earlier, and as they have obvious motives to lie.”

Carroll’s legal claim against Cambridge Analytica is based in U.K. law, which requires companies to disclose all the data they have on individuals, no matter their nationality. Carroll hopes that Americans will demand strict data regulations that look more like Europe’s, but when it comes to Cambridge Analytica, given that it appears to store data in the U.K., he believes the legal floodgates are already open. “As it stands, actually, every American voter is eligible to sue this company,” he said.

To Carroll, the question of the actual ability of Cambridge to influence elections is separate from a more essential problem: a company based in the U.K. is collecting massive amounts of data on Americans in order to influence their elections. “In the most basic terms, it’s just an incredible invasion of privacy to reattach our consumer behavior to our political registration,” says Carroll. “That’s basically what [Cambridge Analytica] claims to be able to do, and to do it at scale. And wouldn’t a psychological profile be something we would assume remain confidential between us and our therapist or physician?”

However creepy it may appear, the larger problem with Cambridge Analytica isn’t about what it did or didn’t do for Trump, or how effective its techniques were. Psychologically based messaging, especially as it improves, might make the difference in a tight election, but these things are hard to measure, and many have called the company’s claims hot air. The bigger outrage is what Cambridge Analytica has revealed about the system it exploited, a vast economy built on top of our personal data, one that’s grown to be as unregulated as it is commonplace. By the end of the week, even Zuckerberg was musing aloud that yes, perhaps Facebook should be regulated.

Not A Bug, But A Feature

To be clear, said Zuckerberg—in full-page ads in many newspapers—this was “a breach of trust.” But it was not a “data breach,” not in the traditional sense. Like affiliate marketers and Kremlin-backed trolls, Cambridge Analytica was simply using Facebook as it was designed to be used. For five years, Facebook offered companies, researchers, marketers, and political campaigners the ability, through third-party apps, to extract a wealth of personal information about Facebook users and their so-called social graph, well beyond what they posted on the company’s platforms. Carol Davidsen, who managed data and analytics for Obama for America in 2012, took advantage of this feature with an app that gathered data directly from over 1 million users and their friends in an effort to create a database of every American voter, albeit with disclosures that the data was for the campaign.

The policy was good for Facebook’s business, too. Encouraging developers to build popular apps like Candy Crush or FarmVille could lead to more time on site for users, generating more ad revenue for the company and fortifying its network against competitors. But it was also an invitation to unscrupulous developers to vacuum up and re-sell vast amounts of user data. As the Wall Street Journal reported in 2010, an online tracking company, RapLeaf, was packaging data it had gathered from third-party Facebook apps and selling it to advertisers and political consultants. Facebook responded by cutting off the company’s access and promised it would “dramatically limit” the misuse of its users’ personal information by outside parties.

In its policies, Facebook assures users that it verifies the security and integrity of its developers’ apps, and instructs developers not to, for instance, sell data to data brokers or use the data for surveillance. If you violate these rules, according to Facebook:

Enforcement is both automated and manual, and can include disabling your app, restricting you and your app’s access to platform functionality, requiring that you delete data, terminating our agreements with you or any other action that we deem appropriate.

If you proceeded on the apparently valid assumption that Facebook was lax in monitoring and enforcing compliance with these rules, you could agree to this policy in early 2014 and access a gold mine.

Sandy Parakilas, a former Facebook employee who worked on fixing privacy problems on the company’s developer platform ahead of its 2012 IPO, said in an opinion piece last year that when he proposed doing a deeper audit of third-party uses of Facebook’s data, a company executive told him, “Do you really want to see what you’ll find?”

Facebook would stop allowing developers to access friends’ data in mid-2015, roughly three years after Parakilas left the company, explaining that users wanted more control over their data. “We’ve heard from people that they are often surprised when a friend shares their information with an app,” a manager wrote in a press release in 2014 about the new policy. Parakilas, who now works for Uber, suspected another reason for the shift: Facebook executives knew that some apps were harvesting enormous troves of valuable user data, and were “worried that the large app developers were building their own social graphs, meaning they could see all the connections between these people,” he told the Guardian. “They were worried that they were going to build their own social networks.”

Personal data that could be gathered by third-party apps under Facebook’s social graph API, which ended in 2015. [Source: Symeonidis, Tsormpatzoudi & Preneel (2017)]

Justin Osofsky, a Facebook vice president, rejected Parakilas’s claims in a blog post, saying that after 2014, the company had strengthened its rules and hired hundreds of people “to enforce our policies better and kick bad actors off our platform.” When developers break the rules, Osofsky wrote, “We enforce our policies by banning developers from our platform, pursuing litigation to ensure any improperly collected data is deleted, and working with developers who want to make sure their apps follow the rules.”
In his post about Cambridge Analytica, CEO Mark Zuckerberg said that the company first took action in 2015, after it learned of the data exfiltration; then, in March 2018, Facebook learned from reporters that Cambridge Analytica “may not have deleted the data as they had certified. We immediately banned them from using any of our services.”

However, Facebook’s timeline did not mention that it was still seeking to find and delete the data well into the campaign season of 2016, and eight months after it had first learned of the incident. In August 2016, Wylie, the ex- Cambridge contractor, received a letter from Facebook’s lawyers telling him to delete the data, sign a letter certifying that he had done so, and return it by mail. Wylie says he complied, but by then copies of the data had already spread through email. “They waited two years and did absolutely nothing to check that the data was deleted,” he told the Guardian.

Beyond what Cambridge Analytica did, what Facebook knew and what Facebook said about that—and when—is being scrutinized by lawmakers and regulators. As part of a 2011 settlement Facebook made with the Federal Trade Commission (FTC), the company was put on a 20-year privacy “probation,” with regular audits, over charges that it had told users they could keep their information private “and then repeatedly allowing it to be shared and made public.” The agreement specifically prohibited deceptive statements, required users to affirmatively agree to the sharing of their data with outside parties, and required that Facebook report any “unauthorized access to data” to the FTC. In this case, there’s so far no record of Facebook ever reporting any such unauthorized access to data. The agency is now investigating.

Whatever senators want of Facebook and other data-focused companies, regulations in Europe are already set to change some of their ways. Starting May 25, the General Data Protection Regulation will require all companies collecting data from EU individuals to get “unambiguous” consent to collect that data, allow users easy ways to opt out of giving consent, and give them the right to refuse to have their data used for targeted marketing. Consumers will also have the right to obtain their data from the companies that collect it. And violations carry hefty fines, of up to 20 million euros or 4 percent of a company’s global annual revenue, whichever is higher.

Getting Out The Vote—Or Not

Like Google, the world’s other dominant digital advertiser, Facebook sells marketers the ability to target users with ads across its platforms and the web, but it doesn’t sell its piles of raw user data. Plenty of other companies, however, do. The famous Silicon Valley giants are only the most visible parts of a fast-growing, sprawling, and largely unregulated market for information about us. Companies like Nielsen, Comscore, Xaxis, Rocketfuel, and a range of anonymous data brokers sit on top of an enormous mountain of consumer info, and it fuels all modern political campaigns.

Political campaigns, like advertisers of toothpaste, can buy personal information from data brokers en masse and enrich it with their own data. Using data-matching algorithms, they can also reidentify “anonymous” user data from Facebook or other places by cross-referencing it against other information, like voter files. To microtarget Facebook users according to more than their interests and demographics, campaigns can upload their own pre-selected lists of people to Facebook using a tool called Custom Audiences, and then find others like them with its Lookalike Audiences tool. (Facebook said last week that it would end another feature that allows two large third-party data brokers, Acxiom and Experian, to offer their own ad targeting directly through the social network.)
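The reidentification step described here is classic record linkage: join “anonymous” rows against a voter file on quasi-identifiers until unique matches fall out. A toy sketch of the idea, with all field names and data invented for illustration:

```python
# Toy record-linkage illustration of the reidentification step described
# above: join "anonymous" rows against a voter file on quasi-identifiers
# (here ZIP code and birth year) and keep only unique matches. All field
# names and data are invented.

def reidentify(anonymous_rows, voter_file):
    """Return a mapping of anonymous row id -> matched voter name."""
    index = {}
    for voter in voter_file:
        key = (voter["zip"], voter["birth_year"])
        index.setdefault(key, []).append(voter)
    matches = {}
    for row in anonymous_rows:
        candidates = index.get((row["zip"], row["birth_year"]), [])
        if len(candidates) == 1:  # unique match: the row is deanonymized
            matches[row["id"]] = candidates[0]["name"]
    return matches
```

Real campaigns match on richer keys (name fragments, addresses, hashed emails), which is why “anonymized” consumer data so often fails to stay anonymous.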

Like Cambridge Analytica, campaigns could also use readily available data to target voters along psychological lines too. When he published his key findings on psychometrics and personal data in 2013, Kosinski was well aware of the alarming privacy implications. In a commentary in Science that year, he warned of the detail revealed in one’s online behavior—and what might happen if non-academic entities got their hands on this data, too. “Commercial companies, governmental institutions, or even your Facebook friends could use software to infer attributes such as intelligence, sexual orientation, or political views that an individual may not have intended to share,” Kosinski wrote.

Recent marketing experiments on Facebook by Kosinski and Stillwell have shown that advertisements geared to an individual’s personality—specifically an introverted or extroverted woman—can lead to up to 50% more purchases of beauty products than untailored or badly tailored ads. At the same time, they noted, “Psychological mass persuasion could be abused to manipulate people to behave in ways that are neither in their best interest nor in the best interest of society.”

For instance, certain ads could be targeted at those who are deemed “vulnerable” to believing fraudulent news stories on social media, or who are simply likely to share them with others. A research paper seen by reporters at Cambridge Analytica’s offices in 2016 suggested the company was also interested in research about people with a low “need for cognition”—that is, people who don’t use cognitive processes to make decisions or who lack the knowledge to do so. In late 2016, researchers found evidence indicating that Trump had found disproportionate support among that group—so-called “low information voters.” “It’s basically a gullibility score,” said Carroll.

Facebook’s own experiments in psychological influence date back at least to 2012, when its researchers conducted an “emotional contagion” study on 700,000 users. By putting certain words in people’s feeds, they demonstrated they could influence users’ moods in subtle and predictable ways. When their findings were published in 2014, the experimenters incited a firestorm of criticism for, among other things, failing to obtain the informed consent of their participants. Chief operating officer Sheryl Sandberg apologized, and the company said it would establish an “enhanced” review process for research focused on groups of people or emotions. More recently, in a 2017 document obtained by The Australian, a Facebook manager told advertisers that the platform could detect teenage users’ emotional states in order to better target ads at users who feel “insecure,” “anxious,” or “worthless.” Facebook has said it does not do this, and that the document was provisional.

The company’s influence over voters has also come under scrutiny. In 2012, as the Obama campaign was harnessing the platform in new ways, Facebook researchers found that an “I voted” button increased turnout in a California election by more than 300,000 votes. A similar effort in the summer of 2016 reportedly encouraged 2 million Americans to register to vote. “Facebook clearly moved the needle in a significant way,” Alex Padilla, California’s secretary of state, told the Times that October.

The company has also invested heavily in its own in-house political ad operations. Antonio Garcia-Martinez, a former Facebook product manager, described in Wired the company’s teams of data scientists and campaign veterans, who are “specialized by political party, and charged with convincing deep-pocketed politicians that they do have the kind of influence needed to alter the outcome of elections.” (In recent months, Facebook quietly removed some of its webpages describing its elections work.)

A Facebook page about a successful Senate campaign showcases its work in elections. [Screenshot: Facebook]
Kosinski, now a professor at Stanford, is quick to point to the power of data to influence positive behavior. In 2010, the British government launched a Behavioral Insights Team, or nudge unit, as it’s often called, devoted to encouraging people “to make better choices for themselves and society,” and other governments have followed suit. As the personal data piles up and the algorithms get more sophisticated, Kosinski told a conference last year, personality models and emotional messaging may only get better at, say, helping music fans discover new songs or encouraging people to stop smoking. “Or,” he added, “stop voting, which is not so great.”
Despite this arms race for personal data, there is currently no comprehensive federal law in the U.S. governing how companies gather and use it. Instead, data privacy is policed by a patchwork of overlapping and occasionally contradictory state laws that state authorities and the Federal Trade Commission sometimes enforce. Voter privacy, meanwhile, falls into a legal gray area. While the Federal Election Commission regulates campaigns, it has few privacy rules; and while the FTC regulates commercial privacy issues, it has no jurisdiction over political campaigns. Ira Rubinstein, a professor at New York University School of Law, wrote in a 2014 law review article that “political dossiers may be the largest unregulated assemblage of personal data in contemporary American life.”

The Trump campaign’s own data arsenal included not just Facebook and Cambridge Analytica, but the Republican National Committee Data Trust, which had launched a $100 million effort in the wake of Mitt Romney’s 2012 loss. With the identities of over 200 million people in the U.S., the RNC database would eventually acquire roughly 9.5 billion data points on three out of every five potential U.S. voters, scoring them on their likely political preferences based on vast quantities of external data, including things like voter registration records, gun ownership records, credit card purchases, internet account identities, grocery card data, and magazine subscriptions, as well as other publicly harvested data. (The full scope of the database was only revealed when a security researcher discovered it last year on an unlocked Amazon web server.)

As AdAge reported, a number of Cambridge staffers working inside the Trump digital team’s San Antonio office mixed this database with their own data in order to target ads on Facebook. A Cambridge Analytica representative was also based out of Trump Tower, helping the campaign “compile and evaluate data from multiple sources.” Theresa Hong, the Trump campaign’s digital content director, explained to the BBC last year how, with Cambridge Analytica’s help, the campaign could use Facebook to target, say, a working mother—not with “a war-ridden destructive ad” featuring Trump, but rather one that’s more “warm and fuzzy,” she said.

Facebook, like Twitter and Google, also sent staffers to work inside Trump’s San Antonio office, helping the campaign leverage the social network’s tools and an ad auction system that incentivizes highly shareable ads. (Hillary Clinton’s campaign did not accept such direct help.) By the day of his third debate with Clinton, the Trump team was spreading 175,000 different digital ads a day to find the most persuasive ones for the right kind of voter. After the election, Gary Coby, the campaign’s director of digital ads, praised Facebook’s role in Trump’s victory. “Every ad network and platform wants to serve the ad that’s going to get the most engagement,” he tweeted. “THE best part of campaign & #1 selling point when urging people to come help: ‘It’s the Wild West. Max freedom…. EOD Facebook shld be celebrating success of product. Guessing, if [Clinton] utilized w/ same scale, story would be 180.”

The campaign would use Facebook in uglier ways too. Days before the election, Bloomberg reported, the Trump team was rounding out a massive Facebook and Instagram ad purchase with a “major voter suppression” effort. The effort, composed of short anti-Clinton video ads, targeted the “three groups Clinton needs to win overwhelmingly . . . idealistic white liberals, young women, and African-Americans” with ads meant to keep them from voting. (In its February indictment, the Justice Department found that Russian operatives had spread racially targeted ads and messages on Facebook to do something similar.) And because they were sent as “dark” or “unpublished” ads, Parscale told Bloomberg, “only the people we want to see it, see it.”

Asked about the “suppression” ads, Parscale told NPR after the election, “I think all campaigns run negative and positive ads. We found data, and we ran hundreds of thousands of [Facebook] brand-lift surveys and other types of tests to see how that content was affecting those people so we could see where we were moving them.”

So far, there is no evidence that Cambridge Analytica’s psychological profiles were used by the Trump campaign to target the “voter discouragement” ads. But in the summer of 2016, Cambridge, too, was tasked with “voter disengagement,” and “to persuade Democrat voters to stay at home” in several critical states, according to a memo written about its work for a Republican political action committee that was seen by the Guardian. Prior to the 2007 elections in Nigeria, SCL Group, Cambridge’s affiliate company, also said it had advised its client to “aim to dissuade opposition supporters from voting.” A company spokesperson contends that the company does not wage “voter discouragement” efforts.

While Facebook’s policies prohibit any kind of discriminatory advertising, its ad tools, as ProPublica reported in 2016, allow ad buyers to target people along racist, bigoted, or discriminatory lines. Last month, the company was sued for permitting ads that appear to violate housing discrimination laws.

The company has vowed a crackdown on that kind of microtargeting. It also announced it would bring more transparency to political ads, as federal regulators begin the process of adding disclosure requirements for online campaign messages. But Facebook hasn’t spoken publicly about the Trump campaign’s “voter suppression” ads, or whether it could determine what impact the ads may have had on voters. A company spokesperson told Fast Company in November that he “can’t comment on this type of speculation.”

For David Carroll, tracing the links between our personal data and digital political ads reveals a problem that’s vaster than Facebook, bigger than Russia. The Cambridge Analytica/Facebook scandal could enlarge the frame around the issue of election meddling, and bring the threat closer to home, he said. “I think it could direct the national conversation we’re having about the security of our elections, and in particular voter registration data, as not just thinking in regards to cyberattacks, but simply leaking data in the open market.”

Learning From Mistakes

After election day, some Facebook employees were “sick to their stomachs” at the thought that false stories and propaganda had tipped the scales, the Washington Post reported, but Zuckerberg insisted that “fake news” on Facebook had not been a problem. Eventually, amid a swarm of questions by lawmakers, researchers and reporters, the company began to acknowledge the impact of Russia’s “information operations.” First, Facebook said in October that it had found that only 10 million users had seen Russian ads. Later, it said the number was 126 million, before updating the tally again to 150 million. The company has committed more resources to fighting misinformation, but even now, it says it is uncertain of the full reach of Russia’s propaganda campaign. Meanwhile, critics contend it has also thwarted independent efforts to understand the problem by taking down thousands of posts.

To some, the company’s initial response to the Cambridge Analytica scandal earlier this month echoed its response to Russian interference. Its first statement about the incident on March 16, saying it was temporarily suspending Cambridge Analytica, Kogan and Wylie, and considering legal action, was meant to convey that Facebook was being proactive about the problem, discredit the ex-employee, and get ahead of reporting by the Guardian and the Times, Bloomberg reported. Facebook appeared to use a similar pre-emptive maneuver last November in advance of a Fast Company story on Russian activity on Instagram.

Jonathan Albright, research director at the Tow Center for Digital Journalism, who has conducted extensive studies of misinformation on Facebook, called the company’s initial reaction to the Cambridge Analytica revelations a poor attempt to cover up for past fumbles “that are now spiraling out of control.” He added, “Little lies turn into bigger lies.”

Facebook has announced a raft of privacy measures in the wake of the controversy. The company will conduct audits to try to locate any loose data, notify users whose data had been harvested, and further tighten data access to third parties, it said. It is now reviewing “thousands” of apps that may have amassed mountains of user data, some possibly far larger than Cambridge Analytica’s grab. And it announced changes to its privacy settings last week, saying that users could now adjust their privacy on one page rather than going to over 20 separate pages.

The social network has changed its privacy controls dozens of times before, and it was a flurry of such changes a decade ago that led to the FTC’s 2011 settlement with the company over privacy issues. “Facebook represented that third-party apps that users installed would have access only to user information that they needed to operate,” one of the FTC’s charges read. “In fact, the apps could access nearly all of users’ personal data – data the apps didn’t need.” The FTC is now investigating whether Facebook broke the agreement it made then not to make “misrepresentations about the privacy or security of consumers’ personal information.”

After the most recent revelations, Zuckerberg was asked by the Times if he felt any guilt about how his platform and its users’ data had been abused. “I think what we’re seeing is, there are new challenges that I don’t think anyone had anticipated before,” he said. “If you had asked me, when I got started with Facebook if one of the central things I’d need to work on now is preventing governments from interfering in each other’s elections, there’s no way I thought that’s what I’d be doing, if we talked in 2004 in my dorm room.”

Election meddling may have been a bit far-fetched for the dorm room. But Zuckerberg has had plenty of time to think about risks to user privacy. In 2003, he built a website that allowed Harvard undergraduates to compare and rate the attractiveness of their fellow students and then rank them accordingly. The site was based on data he had harvested without their permission, the ID photos of undergraduates stored on dorms’ websites, which were commonly called student “facebooks.”

There was a small uproar, and Zuckerberg was accused by the school’s administrative board of breaching security, violating copyrights, and violating individual privacy. The “charges were based on a complaint from the computer services department over his unauthorized use of online facebook photographs,” the Harvard Crimson reported. Amid the criticism, Zuckerberg decided to keep the site offline. “Issues about violating people’s privacy don’t seem to be surmountable,” he told the newspaper then. “I’m not willing to risk insulting anyone.”

For his next big website, Zuckerberg invited people to upload their faces themselves. The following year, after launching, Zuckerberg boasted to a friend of the mountain of information he was sitting on: “over 4,000 emails, pictures, addresses, sns,” many belonging to his fellow students.

“Yea so if you ever need info about anyone at Harvard . . . just ask,” Zuckerberg wrote in a leaked private message. His friend asked how he did it. “People just submitted it,” he said. “I don’t know why.”

“They ‘trust me,'” he added. “Dumb fucks.”

Those were the early days of moving fast and breaking things, and nearly 15 years later, Zuckerberg certainly regrets saying that. But even then he had caught on to a lucrative flaw in our relationship with data at the beginning of the 21st century, a delusional trust in distant companies based on agreements people don’t read, which have been virtually impossible to enforce. It’s a flaw that has since been abused by all kinds of hackers, for purposes the public is still largely in the dark about, even today.

With reporting from Cale Weissman.

This story was updated to include a higher tally from April 4 of 87 million harvested Facebook profiles. ... ok-blew-it
Posts: 29113
Joined: Wed Apr 27, 2005 11:28 pm
Location: into the black
Blog: View Blog (83)

Re: The creepiness that is Facebook

Postby seemslikeadream » Mon Apr 16, 2018 9:08 pm

APR 16, 2018 @ 09:05 AM
These Ex-Spies Are Harvesting Facebook Photos For A Massive Facial Recognition Database

Thomas Fox-Brewster , FORBES STAFF

When Mark Zuckerberg appeared before the House Energy and Commerce Committee last week in the aftermath of the Cambridge Analytica revelations, he tried to describe the difference between "surveillance and what we do." "The difference is extremely clear," a nervous-looking Zuckerberg said. "On Facebook, you have control over your information... the information we collect you can choose to have us not collect."

But not a single member of the committee pressed the billionaire CEO on surveillance companies that exploit the data on Facebook for profit. Forbes has uncovered one case that might shock them: over the last five years a secretive surveillance company founded by a former Israeli intelligence officer has been quietly building a massive facial recognition database consisting of faces acquired from the giant social network, YouTube and countless other websites. Privacy activists are suitably alarmed.

That database forms the core of a facial recognition service called Face-Int, now owned by Israeli vendor Verint after it snapped up the product's creator, little-known surveillance company Terrogence, in 2017. Both Verint and Terrogence have long been vendors for the U.S. government, providing bleeding-edge spy tech to the NSA, the U.S. Navy and countless other intelligence and security agencies.

As described on the Terrogence website, the database consists of facial profiles of thousands of suspects "harvested from such online sources as YouTube, Facebook and open and closed forums all over the globe." Those faces were extracted from as many as 35,000 videos and photos of terrorist training camps, motivational clips and terror attacks. That same marketing page was online in 2013, according to the Internet Archive's Wayback Machine, indicating the product is at least five years old. The age of the product also suggests far more than 35,000 videos and photos have been raided by the Face-Int technology by now, though Terrogence co-founder and research lead Shai Arbel declined to comment for this article.

Raising the stakes of facial recognition

Though Terrogence is primarily focused on helping intelligence agencies and law enforcement fight terrorism online, LinkedIn profiles of current and former employees indicate it's also involved in other, more political endeavours. One ex-staffer, in describing her role as a Terrogence analyst, said she'd "conducted public perception management operations on behalf of foreign and domestic governmental clients," and used "open source intelligence practices and social media engineering methods to investigate political and social groups." She was not reachable at the time of publication.

And now concerns have been raised over just how Terrogence has grabbed all those faces from Facebook and other online sources. What's apparent, though, is that Terrogence is yet another company that's been able to clandestinely take advantage of Facebook's openness, on top of Cambridge Analytica, which acquired information on as many as 87 million users in 2014 from U.K.-based researcher Aleksandr Kogan to help target individuals during its work for the Donald Trump and Ted Cruz presidential campaigns.

"It raises the stakes of face recognition - it intensifies the potential negative consequences," warned Jay Stanley, senior policy analyst at the American Civil Liberties Union (ACLU). "When you contemplate face recognition that's everywhere, we have to think about what that’s going to mean for us. If private companies are scraping photos and combining them with personal info in order to make judgements about people - are you a terrorist, or how likely are you to be a shoplifter or anything in between - then it exposes everyone to the risk of being misidentified, or correctly identified and being misjudged."

Jennifer Lynch, senior staff attorney at the Electronic Frontier Foundation, said that if the facial recognition database had been shared with the US government, it would threaten the free speech and privacy rights of social media users.

"Applying face recognition accurately to video is extremely challenging, and we know that face recognition performs poorly with people of color and especially with women and those with darker skin tones," Lynch told Forbes. "Combining these two known problems with face recognition, there is a high chance this technology would regularly misidentify people as terrorists or criminals.

"This could impact the travel and civil rights of tens of thousands of law-abiding travelers who would then have to prove they are not the terrorist or criminal the system has identified them to be."
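Lynch’s warning about regular misidentification is partly a base-rate effect: when genuine matches are rare in the scanned population, even a fairly accurate system produces mostly false alarms. A minimal sketch of the arithmetic, using entirely hypothetical numbers (nothing here reflects Face-Int’s actual performance):

```python
# Illustrative base-rate calculation, with made-up figures:
# even a system that catches 95% of real suspects and wrongly flags
# only 1% of everyone else yields mostly false alarms at scale.

population = 10_000_000      # faces scanned against the watch-list
actual_suspects = 1_000      # genuine matches hidden in that population
true_positive_rate = 0.95    # share of real suspects correctly flagged
false_positive_rate = 0.01   # share of innocent people wrongly flagged

true_hits = actual_suspects * true_positive_rate                    # 950
false_hits = (population - actual_suspects) * false_positive_rate   # 99,990

precision = true_hits / (true_hits + false_hits)
print(f"Flagged people who are actually suspects: {precision:.1%}")  # roughly 0.9%
```

Under these assumed numbers, more than 99 percent of the people flagged would be innocent, which is the scenario Lynch describes: law-abiding travelers left to prove they are not the person the system says they are.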

It's unclear just how the Face-Int product acquires faces, though it appears similar to a project run by the NSA, as revealed by whistleblower Edward Snowden in 2014, in which the intelligence agency had gathered 55,000 "facial recognition quality images" from the web by 2011. Co-founder Arbel, a former intelligence officer with the Israeli military, declined to respond to questions about how the tech works, though he described Face-Int as "amazing" in a text message and confirmed it continues to operate under Verint.

A spokesperson for Facebook, which employs its own facial recognition tech to help identify users' visages in photos across the platform, said it appeared Terrogence's product would violate its policies, including one that prohibits the use of data grabbed from the social network to provide tools for surveillance. Facebook also doesn't allow accessing or collecting information via automated methods, such as harvesting bots or scrapers. The spokesperson noted that it hadn't found any Facebook apps operated by the company.

A social media monitor

There's no evidence the U.S. government has purchased Face-Int. But it has benefited from other intelligence services built by Terrogence. The vendor has scored at least two contracts with the U.S. government, both with the U.S. Navy and worth a total of $148,000, according to public records. The contracts, one from 2014 and the other signed off in 2015, were for subscriptions to the company's Mobius and TGAlertS products.

Mobius consists of reports on the latest trends in terrorists' improvised explosive devices (IEDs) and their tactics. The reports are based on intel gathered from various social media platforms "where global terrorists seek to recruit, radicalize and plot their next attack," according to a company brochure. TGAlertS, meanwhile, provides "near real-time" information on urgent issues uncovered by Terrogence staff trawling the web.

Those employees gather information in part through fake profiles. As another brochure put it, they "elicit information by carefully guiding online discussion, often drumming up interest and facilitating communication by employing multiple virtual entities in a single operation."

This is far from Arbel's first rodeo in the surveillance industrial complex: he co-founded SenseCy, which was acquired by Verint in 2017. It too sets up "virtual entities" to gather intelligence. "Perfected over many years of practice, SenseCy operates dozens of virtual entities combine strong, believable cover stories with well-perfected web interaction methodologies, and are sourcing invaluable intelligence from all relevant web platforms," a blurb on its site currently reads. The company appears to be more focused on cybersecurity protection than government surveillance, however.

The privatization of blacklists

If Terrogence isn't solely focused on terrorism, but has a political side to its business too, its facial recognition work could sweep up a vast number of people. That brings up another particularly worrying aspect of the business in which Terrogence operates: the dawn of "the privatisation of blacklisting," warned Stanley. "We've been fighting with the government for years over due process on those lists... people being put on them without being told why and not being sure how those lists are being used," he told Forbes.

"A lot of those problems could intensify if you have a bunch of private quasi-vigilantes making their own blacklists of all kinds." Just earlier this month, Verint launched what appeared to be an entirely separate facial recognition product, FaceDetect. It promises to identify individuals "regardless of face obstructions, suspect ageing, disguises and ethnicity" and "allows operators to instantaneously add suspects to watch-lists."

But Stanley also questioned Facebook's policies on user control of profile photos. The social network has the largest collection of faces in the world, and yet profile pictures, to an extent, can't be entirely locked down, he said. A Facebook spokesperson said profile photos are always public but it's possible to adjust the privacy settings of previous profile snaps to limit who can see them.

Privacy advocacy groups like the ACLU now want to see users given more control over those images. Given the recent furore surrounding Cambridge Analytica, such changes might come sooner rather than later. ... ent=safari
Posts: 29113
Joined: Wed Apr 27, 2005 11:28 pm
Location: into the black
Blog: View Blog (83)

Re: The creepiness that is Facebook

Postby Cordelia » Thu Apr 19, 2018 11:38 am

Mark Zuckerberg Quietly Moves 1.5 Billion Users’ Rights Out of Europe’s Reach

Facebook is putting up a covert fight against new privacy rules.

Maya Kosoff
April 19, 2018 10:13 am

In 2008, Facebook followed the lead of countless companies before it, establishing an international headquarters in Ireland in order to skirt relatively higher U.S. corporate tax rates. In doing so, the company also ensured that international Facebook users—those outside the U.S. and Canada—would be subject to European Union rules. Before the E.U. got serious about protecting user privacy, this wasn’t such a bad deal; regulations abroad were no more severe than regulations in the States. But now that strict General Data Protection Regulation (G.D.P.R.) rules are set to take effect late next month, Reuters reports Facebook is planning to argue that G.D.P.R. rules should apply solely to its European users, meaning they would not affect its 1.5 billion members in Africa, Asia, Australia, and Latin America.

Essentially relocating 1.5 billion users’ rights from Dublin to Delaware would drastically reduce Facebook’s risk of exposure under G.D.P.R., which will let European regulators fine companies that collect personal data without their users’ consent; ultimately, the change would affect the majority—more than 70 percent—of Facebook’s 2-billion-person user network. The 1.5 billion people impacted by the move would be governed by the U.S.’s more lenient privacy laws, and would no longer be able to file complaints with Ireland’s Data Protection Commissioner. Facebook confirmed the move but downplayed its importance, saying in a statement, “We apply the same privacy protections everywhere, regardless of whether your agreement is with Facebook Inc. or Facebook Ireland.”

More ... opes-reach

"When you give me everyone a voice and give me people power, the system usually ends up in a really good place. So, what we view our role as, is giving me people that power." - Mark Zuckerberg
"We may not choose the parameters of our destiny. But we give it its content." Dag Hammarskjold ~ 'Waymarks'
User avatar
Posts: 2992
Joined: Sun Oct 11, 2009 7:07 pm
Location: USA
Blog: View Blog (0)

Re: The creepiness that is Facebook

Postby DrEvil » Thu Apr 19, 2018 5:58 pm

"They trust me, the dumb fucks". - Mark Zuckerberg
"I only read American. I want my fantasy pure." - Dave
User avatar
Posts: 2404
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: The creepiness that is Facebook

Postby Grizzly » Fri Apr 27, 2018 9:15 am

Reps say ‘we have yet to receive any responses’ to questions from Zuckerberg testimony
House Democrats unveil a new set of questions for Facebook

When Mark Zuckerberg testified before Congress earlier this month, he left a lot of questions in his wake. More than 20 times, he responded to Congressional inquiries by saying that he didn’t have the information on hand but that his team would follow up with more information after the hearing had closed.

But 13 days after the hearings, congressional Democrats on the House Energy and Commerce Committee say they still haven’t heard anything from Facebook.
“We have yet to receive any responses”

“It’s been two weeks since our hearing and we have yet to receive responses to questions that Mr. Zuckerberg could not answer on that day,” Rep. Frank Pallone, Jr. (D-NJ) said. “Furthermore, our Committee staff met with Facebook staffers two weeks prior to the hearing, and there are still a lot of unanswered questions from that meeting.” Facebook declined to comment.

A formal deadline for Facebook’s responses has yet to be set by the committee and the window is still open for representatives to add new questions. As a result, it’s not entirely surprising that Facebook hasn’t yet replied. Still, Pallone said the lack of further information was hampering Congressional efforts to develop new privacy regulations. “It simply should not take this long to respond to this Committee’s questions about critical privacy and data security issues,” Pallone said. “This information is critical as the Committee looks to develop comprehensive privacy and data security legislation that would include any company that collects and uses consumers’ data.”

Today, the committee sent a new list of questions to Facebook in an effort to nail down information that was not available during the hearings. There are 113 separate questions on the list, many of which include multiple sub-questions, largely dealing with information collected and held by Facebook that is not explicitly shared by users. That information, which includes both shadow profiles and broader ad-tracking, was largely skimmed over during Zuckerberg’s testimony.

Congress is already considering multiple bills that would place further regulatory restrictions on Facebook, including the Honest Ads Act, which would place stronger disclosure requirements on online political ads, and the CONSENT Act, which would require explicit opt-in consent for data collection. Earlier this week, Senators introduced a new bill called the Social Media Privacy Protection and Consumer Rights Act, which would give US users the right to see all the data a given site holds on them, and delete any or all of it on request.
If Barthes can forgive me, “What the public wants is the image of passion Justice, not passion Justice itself.”
Posts: 2082
Joined: Wed Oct 26, 2011 4:15 pm
Blog: View Blog (0)

