
Re: Surveillance

Postby identity » Thu Aug 29, 2019 6:25 pm


(Amazon-owned) Doorbell-camera firm Ring has partnered with 400 police forces, extending surveillance concerns

By Drew Harwell August 28 at 6:53 PM

The doorbell-camera company Ring has forged video-sharing partnerships with more than 400 police forces across the United States, granting them potential access to homeowners’ camera footage and a powerful role in what the company calls the nation’s “new neighborhood watch.”

The partnerships let police request the video recorded by homeowners’ cameras within a specific time and area, helping officers see footage from the company’s millions of Internet-connected cameras installed nationwide, the company said. Officers don’t receive ongoing or live-video access, and homeowners can decline the requests, which Ring sends via email thanking them for “making your neighborhood a safer place.”

The number of police deals, which has not previously been reported, is likely to fuel broader questions about privacy, surveillance and the expanding reach of tech giants and local police. The rapid growth of the program, which began in spring 2018, surprised some civil liberties advocates, who thought that fewer than 300 agencies had signed on.

Ring is owned by Amazon, which bought the firm last year for more than $800 million, financial filings show. Amazon founder Jeff Bezos owns The Washington Post.

Ring officials and law enforcement partners portray the vast camera network as an irrepressible shield for neighborhoods, saying it can assist police investigators and protect homes from criminals, intruders and thieves.

“The mission has always been making the neighborhood safer,” said Eric Kuhn, the general manager of Neighbors, Ring’s crime-focused companion app. “We’ve had a lot of success in terms of deterring crime and solving crimes that would otherwise not be solved as quickly.”

But legal experts and privacy advocates have voiced alarm about the company’s eyes-everywhere ambitions and increasingly close relationship with police, saying the program could threaten civil liberties, turn residents into informants, and subject innocent people, including those who Ring users have flagged as “suspicious,” to greater surveillance and potential risk.

“If the police demanded every citizen put a camera at their door and give officers access to it, we might all recoil,” said Andrew Guthrie Ferguson, a law professor and author of “The Rise of Big Data Policing.”

By tapping into “a perceived need for more self-surveillance and by playing on consumer fears about crime and security,” he added, Ring has found “a clever workaround for the development of a wholly new surveillance network, without the kind of scrutiny that would happen if it was coming from the police or government.”

Begun in 2013 as a line of Internet-connected “smart doorbells,” Ring has grown into one of the nation’s biggest household names in home security. The company, based in Santa Monica, Calif., sells a line of alarm systems, floodlight cameras and motion-detecting doorbell cameras starting at $99, as well as monthly “Ring Protect” subscriptions that allow homeowners to save the videos or have their systems professionally monitored around the clock.

Ring users are alerted when the doorbell chimes or the camera senses motion, and they can view their camera’s live feed from afar using a mobile app. Users also have the option of sharing footage to Ring’s public social network, Neighbors, which allows people to report local crimes, discuss suspicious events and share videos from their Ring cameras, cellphones and other devices.

The Neighbors feed operates like an endless stream of local suspicion, combining official police reports compiled by Neighbors’ “News Team” with what Ring calls “hyperlocal” posts from nearby homeowners reporting stolen packages, mysterious noises, questionable visitors and missing cats. About a third of Neighbors posts are for “suspicious activity” or “unknown visitors,” the company said. (About a quarter of posts are crime-related; a fifth are for lost pets.)

Users, whom the company calls “neighbors,” are anonymous on the app, but the public video does not obscure faces or voices from anyone caught on camera. Participating police officers can chat directly with users on the Neighbors feed and get alerts when a homeowner posts a message from inside their watched jurisdiction. The Neighbors app also alerts users when a new police force partners up, saying, “Your Ring Neighborhood just got a whole lot stronger.”

To seek out Ring video that has not been publicly shared, officers can use a special “Neighbors Portal” map interface to designate a time range and local area, up to half a square mile wide, and get Ring to send an automated email to all users within that range, alongside a case number and message from police.

The user can click to share their Ring videos, review them before sharing, decline or, at the bottom of the email, unsubscribe from future footage-sharing requests. “If you would like to take direct action to make your neighborhood safer, this is a great opportunity,” an email supplied by Ring states.
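Ring has not published how the portal scopes these requests, but the geographic filter described above (“a specific time and area, up to half a square mile”) can be sketched for illustration. Everything here — the `Device` record, the coordinates, the email addresses — is a hypothetical stand-in, not Ring’s actual system:

```python
from dataclasses import dataclass
import math

@dataclass
class Device:
    owner_email: str
    lat: float
    lon: float

def devices_in_area(devices, center_lat, center_lon, max_area_sq_miles=0.5):
    """Pick out devices inside a square search window of the given area
    (half a square mile by default), centered on a point of interest."""
    half_side = math.sqrt(max_area_sq_miles) / 2           # miles
    lat_margin = half_side / 69.0                          # ~69 miles per degree of latitude
    lon_margin = half_side / (69.0 * math.cos(math.radians(center_lat)))
    return [d for d in devices
            if abs(d.lat - center_lat) <= lat_margin
            and abs(d.lon - center_lon) <= lon_margin]

# Two hypothetical registered doorbells: one at the point of interest,
# one roughly seven miles north of it.
devices = [Device("a@example.com", 36.8500, -76.2900),
           Device("b@example.com", 36.9500, -76.2900)]
nearby = devices_in_area(devices, 36.8500, -76.2900)       # only the first matches
```

A real system would also filter by the requested time window and, as the article notes, send each matching owner an opt-in email rather than exposing any footage directly.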

Ring says police officers don’t have access to live video feeds and aren’t told which homes use Ring cameras or how homeowners responded unless the users consent. Officers could previously access a “heat map” showing the general density of where Ring devices were in use, but the company said it has removed that feature from the video request because it was deemed “no longer useful.”

Ring said it would not provide user video footage in response to a subpoena but would comply if company officials were presented with a search warrant or thought they had a legal obligation to produce the content. “Ring does not disclose customer information in response to government demands unless we’re required to do so to comply with a legally valid and binding order,” the company said in a statement.

Ring users consent to the company giving recorded video to “law enforcement authorities, government officials and/or third parties” if the company thinks it’s necessary to comply with “legal process or reasonable government request,” its terms of service state. The company says it can also store footage deleted by the user to comply with legal obligations.

The high-resolution cameras can provide detailed images of not just a front doorstep but also neighboring homes across the street and down the block. Ring users have further expanded their home monitoring by installing the motion-detecting cameras along driveways, decks and rooftops.

Some officers said they now look for Ring doorbells, notable for their glowing circular buttons, when investigating crimes or canvassing neighborhoods, in case they need to pursue legal maneuvers later to obtain the video.

Ring users have shared videos of package thieves, burglars and vandals in the hope of naming and shaming and apprehending the perpetrators, but they’ve also done so for people — possibly salespeople, petitioners or strangers in need of help — who knock on the door and leave without incident. (Other recorded visitors include lizards, deer, mantises, snakes and snooping raccoons.)

Ring users’ ability to report people as suspicious has been criticized for its potential to contribute to racial profiling and heightened community distrust. Last Halloween in southern Maryland, a Ring user living near a middle school posted a video of two boys ringing their doorbell with the title: “Early trick or treat, or are they up to no good?”

The video, which has been viewed in the Neighbors app more than 5,700 times, inspired a rash of comments: Some questioned the children’s motives, while others said they looked like harmless kids. “Those cuties? You’re joking, right?” one commenter said. After The Post shared this video with Ring, the company removed it, saying it no longer fit the service’s community guidelines because “there is no objective reason stated that would put their behavior in question.”

Since formally beginning its Neighbors police partnerships with officers in Greenfield, Wis., in March 2018, Ring has extended the program to 401 police departments and sheriff’s offices nationwide, from northwest Washington state to Key West, Fla., company data show. Shortly after this story was published, Ring founder Jamie Siminoff released a blog post saying that count had already expanded, to 405 agencies.

The partnerships cover vast expanses of major states — with 31 agencies in California, 57 in Texas and 67 in Florida — and blanket entire regions beneath Ring’s camera network, including about a dozen agencies each in the metropolitan areas surrounding Chicago, Dallas, Detroit, Kansas City, Los Angeles and Phoenix.

Sgt. William Pickering, an officer with the Norfolk Police Department in Virginia, which is working with Ring, compared the system’s expansion to the onset of DNA evidence in criminal cases — a momentous capability, unlocked by new technology, that helps police gain the upper hand.

“We have so many photojournalists out there, and they’re right there when things happen, and they’re able to take photos and videos all the time. As a law enforcement agency, that is of great value to us,” Pickering said.

“When a neighbor posts a suspicious individual who walked across their front lawn, that allows them at that very moment to share that in real time with anyone who’s been watching. Now we have everybody in the community being alerted to a suspicious person.” (A Ring spokeswoman later said this example would be removed from Neighbors because it does not pass the service’s community guidelines, which require “an attempted criminal activity or unusual behavior that is cause for concern.”)

Ring has pushed aggressively to secure new police allies. Some police officials said they first met with Ring at a law-enforcement conference, after which the company flew representatives to police headquarters to walk officers through the technology and help them prepare for real-world deployment.

The company has urged police officials to use social media to encourage homeowners to use Neighbors, and Pickering said the Norfolk department had posted a special code to its Facebook page to encourage residents to sign on.

Ring has offered discounts to cities and community groups that spend public or taxpayer-supported money on the cameras. The firm also has given police departments free cameras that can be distributed to local homeowners. The company said it began phasing out the giveaway program for new partners earlier this year.

Pickering said his agency is working with its city attorney to classify the roughly 40 cameras Ring gave them as a legal donation. But some officers said they were uncomfortable with the gift, because it could be construed as the police extending an official seal of approval to a private company.

“We don’t want to push a particular product,” said Radd Rotello, an officer with the Frisco Police Department in Texas, which has partnered with Ring. “We as the police department are not doing that. That’s not our place.”

Ring has for months sought to keep key details of its police-partnership program confidential, but public records from agencies nationwide have revealed glimpses of the company’s close work with local police. In a June email to a New Jersey police officer first reported by Motherboard, a Ring representative suggested ways officers could improve their “opt-in rate” for video requests, including greater interaction with users on the Neighbors app.

“The more users you have the more useful information you can collect,” the representative wrote. Ring says it offers training and education materials to its police partners so they can accurately represent the service’s work.

Ring officials have stepped up their sharing of video from monitored doorsteps to help portray the devices as theft deterrents and friendly home companions. In one recent example, a father in Massachusetts can be seen using his Ring Video Doorbell’s speakers to talk to his daughter’s date while the father was at work, saying, “I still get to see your face, but you don’t get to see mine.”

The company is also pushing to market itself as a potent defense for community peace of mind, saying its cameras offer “proactive home and neighborhood security in a way no other company has before.” The company is hiring video producers and on-camera hosts to develop user testimonials and videos marketing the Ring brand, with a job listing stating that applicants should deliver ideas with an “approachable yet authoritative tone.”

Rotello, who runs his department’s neighborhood-watch program, said Ring’s local growth has had an interesting side effect: People now believe “crime is rampant in Frisco” because they see it all mapped and detailed in a mobile app. He has had to inform people, he said, that “the crime has always been there; you’re just now starting to figure it out.”

He added, however, that the technology has become a potent awareness tool for crime prevention, and he said he appreciates how the technology has inspired in residents a newfound vigilance.

“Would you rather live in an ‘ignorance is bliss’ type of world?” he said. “Or would you rather know what’s going on?”

That hyper-awareness of murky and sometimes-distant criminal threats has been widely criticized by privacy advocates, who argue that Ring has sought to turn police officers into surveillance-system salespeople and capitalize on neighborhood fears.

“It’s a business model based in paranoia,” said Evan Greer, deputy director of the digital advocacy group Fight for the Future. “They’re doing what Uber did for taxis, but for surveillance cameras, by making them more user-friendly. … It’s a privately run surveillance dragnet built outside the democratic process, but they’re marketing it as just another product, just another app.”

Ring’s expansion also has led some to question its plans. The company applied for a facial-recognition patent last year that could alert when a person designated as “suspicious” was caught on camera. The cameras do not currently use facial-recognition software, and a spokeswoman said the application was designed only to explore future possibilities.

Amazon, Ring’s parent company, has developed facial-recognition software, called Rekognition, that is used by police nationwide. The technology is improving all the time: Earlier this month, Amazon’s Web Services arm announced that it had upgraded the face-scanning system’s accuracy at estimating a person’s emotion and was even perceptive enough to track “a new emotion: ‘Fear.’ ”

For now, the Ring systems’ police expansion is earning early community support. Mike Diaz, a member of the city council in Chula Vista, Calif., where police have partnered with Ring, said the cameras could be an important safeguard for some local neighborhoods where residents are tired of dealing with crime. He’s not bothered, he added, by the concerns he has heard about how the company is partnering with police in hopes of selling more cameras.

“That’s America, right?” Diaz said. “Who doesn’t want to put bad guys away?”

edit: And from The Atlantic in June:

People are far more comfortable with surveillance when they think they’re the only ones watching.
Sidney Fussell, Jun 24, 2019

In most cases, when police want to search your neighborhood, they need a warrant and a reason to believe something’s amiss. Now “reasonable suspicion” is going the way of dial-up. Fifty police departments across the United States are partnering with Amazon to collect footage from people who use Ring, the company’s internet-connected doorbell. Some are offering discounted or free Ring doorbells in exchange for a pledge to register the devices with law enforcement and submit all requested footage. Amazon has also filed patents to expand its Ring line beyond doorbells and into cameras mounted on motor vehicles, inside wearable “smart glasses,” even atop security drones that circle your home and call the police if they detect a disturbance.

Privacy experts are predictably wary of a digital “neighborhood watch”: citizens spying on one another, with Silicon Valley’s help. In a statement to The Atlantic, a spokesperson for Amazon Ring said the company doesn’t endorse the giveaways that require users to hand over footage, and noted that most of the 50 partners allow residents to choose whether they want to hand over footage. (Earlier this month, however, CNET quoted a New Jersey police captain admitting to sending officers to people’s doorsteps when they don’t respond to footage requests. No warrant required.)

Suspicion is currency. Selling consumers a 24/7 surveillance apparatus of their own making shrinks police oversight, expands the network of cameras blanketing American cities, and sends money to Amazon. That’s the trick of high-tech home surveillance: For users, it feels empowering. But it also creates a regulatory gray zone: When private citizens own the cameras, their footage isn’t subject to the same rules as police surveillance.

“People only think one step ahead of themselves,” says Brian Hofer, the chair of the City of Oakland’s Privacy Advisory Commission, which advises the city on surveillance and privacy. “They aren’t thinking down the line. Securing your home is defensive. [Installing] cameras pointing at your neighbors’ houses and license-plate readers tracking their vehicles is a whole different ball game.”

Different for two reasons. First, Ring is part of a surveillance ecosystem far more sophisticated than a single officer reviewing footage. According to CNET, police in Indiana matched Ring footage of nearby cars against a license-plate-reader system to track drivers. According to a BuzzFeed report, Amazon included Ring footage in Facebook ads for the product, potentially showing Facebook’s users anyone caught on the footage—without their consent, and regardless of whether they were convicted of or charged with a crime.

And second, private behavior on apps such as Nextdoor and Facebook isn’t subject to government oversight. As part of a national heel turn on invasive tech, Oakland, San Francisco, and Seattle have passed laws targeted at advanced police technology, such as license-plate readers, body cameras, and facial-recognition software. But the Ring program evades even the vanguard of anti-surveillance regulation.

Just as homeowners have every right to set up cameras on their own home, they have every right to share and comment on footage online and even to privately use surveillance technology such as license-plate readers.

“People have tried to outlaw [private-party] license-plate readers, and they’ve lost every time because it’s actually a First Amendment activity,” Hofer says. “I have the right to go out and collect info and repackage it if I want, and sell it to customers if I want. On the other hand, when you see clearly in front of your face the horror stories coming out of Nextdoor, it’s clear there has to be some sort of oversight. I don’t know what that silver bullet is.”

“My personal preference is to win a ‘hearts and minds’ campaign rather than try to mandate or restrict private behavior through legislation,” Hofer continues. “We start getting into some tricky constitutional areas if we try to regulate private behavior.”

There’s a tension inherent to any fight about Ring, or products like it: How can you regulate police use of camera footage without controlling the private citizens who generate that footage? Every digital interaction, from liking a photo to sending an email to filing taxes online, comes with a privacy concern. Privacy advocates want police oversight, not a nanny state where people are chastised for and restricted from everything they may want to do with their own devices in their own home. But a fully unrestricted digital neighborhood watch may actually end up making companies more powerful.

“I’m concerned about police departments starting to imagine the public-safety infrastructure and hinging it on the whims of a company like Amazon,” says Dave Maass, the Electronic Frontier Foundation’s senior investigative researcher. Maass wonders what happens when police or citizens rely on technology for social stability—and then companies, definitionally driven by profit motive, abruptly change course. Amazon has the right to change its terms of service as it likes, pushing updates and making changes. Earlier this year, Google Nest owners found out their security cameras came equipped with a microphone. The microphones lay dormant until Google pushed an update that allowed them to be activated.

“Are they coming in and just trying to disrupt and get quick market dominance?” Maass asks. “And then 10 years from now there’s all sorts of unforeseen [consequences] because we didn’t think through these issues when we adopted these technologies?”

One way to effect the “hearts and minds” outreach that Hofer mentioned might be to think through those consequences. Sharing a video clip with one person means sharing it with millions. Empowering yourself through surveillance means profit share for tech companies. Agreeing to hand over video footage means sharpening police eyes, not just your own.

Re: Surveillance

Postby elfismiles » Thu Oct 17, 2019 9:25 am

The Government Is Testing Mass Surveillance on the Border Before Turning It on Americans
Almost every technology developed in the border lands in the last two decades now exists in local police departments
Jack Herrera
Oct 17 ... 48e3da784b

Re: Surveillance

Postby elfismiles » Fri Oct 18, 2019 4:42 pm

Lawmaker: TSA Should Halt Facial Recognition Programs Absent Formal Policies
By Aaron Boyd, Senior Editor, Nextgov, October 17, 2019 05:40 PM ET
The agency said it is working on those policies while the technology is tested through pilot programs.

The federal government is ramping up the use of facial recognition technology at airports across the country, though at least one lawmaker wants the Transportation Security Administration to slow down.

During a hearing Thursday held by the Senate Committee on Commerce, Science and Transportation’s Subcommittee on Security, Sen. Ed Markey, D-Mass., brought up the coming ubiquity of facial recognition technologies and warned against moving forward without sufficient protections for the data being collected, as well as the civil liberties of travelers.

“As we work to keep pace with emerging threats to aviation travel, civil liberties cannot be an afterthought,” Markey said. “The public lacks enforceable rights and rules to protect travelers’ privacy and address unique threats that TSA’s biometric data collection poses to our civil liberties.”

Markey leveled a series of questions at Denver International Airport Chief Operations Officer Chris McLaughlin, using him as a sounding board for the senator’s concerns.

“Do you agree that any collection of Americans’ biometric information at airports should always be voluntary?” Markey asked.

“Yes, I do,” McLaughlin replied.

“Do you agree that TSA should enact enforceable rules and take all necessary steps to ensure that biometric data it collects is secure?”


“Do you agree that TSA should enact binding safeguards to ensure that its use of biometric technology does not disproportionately burden or misidentify people of color?”

“Absolutely, yes.”

“I agree with you. I agree with all of your answers,” Markey said. “We’re, however, quickly moving toward a point of no return when it comes to the deployment of facial recognition technology.”

Markey called on TSA to halt deployment of facial recognition tech—such as the ongoing pilot at Las Vegas McCarran International Airport—until official policies are set in place.

“TSA should stop using these invasive tools in the absence of formal rules that reflect our values and protect our privacies,” he said. “I call upon the agency to formalize these rules. It’s absolutely essential. We should not be moving forward until we’ve decided what those protections are going to be.”

A TSA spokesperson told Nextgov work on official policies is still in progress while the technology is currently limited to testing.

“Specific privacy policies have not yet been formalized but will align with [Homeland Security Department] requirements on the development of privacy policies and will implement the Fair Information Practice Principles to the greatest extent practicable,” the spokesperson said in an email, adding that the agency has published privacy impact statements on the pilot program.

“TSA expects to limit its use of facial recognition to identity verification functions at the checkpoint,” the spokesperson said, citing previous congressional testimony from acting Deputy Administrator Patricia Cogswell. ... es/160685/

Re: Surveillance

Postby JackRiddler » Fri Jan 17, 2020 5:59 pm

Should this be in Top Secret America instead?


Trust No One

Lee Fang
December 11 2019, 4:42 p.m. ... echnology/

See that headline, about the Trump campaign using phone location technology to track potential voters? The name of the company doing this work for Trump 2020 is Phunware, sort of a next-gen Cambridge Analytica. Since June 2019, Phunware has employed Brittany Kaiser (see note 1 in the comments below). Currently, Kaiser is being paraded around left-liberal media as the lead whistleblower against her former employer, Cambridge Analytica. In a new documentary, she describes CA's use of data harvesting to selectively target, deceive and mislead millions of people on behalf of the Trump and other campaigns. They tried to sway supporters to vote for him, and to discourage potential detractors from voting at all.*

In public, Kaiser now agrees that she was working for the bad guys. Should we be grateful for this supposed change of heart?

If Kaiser's role with Phunware, at the same time as her turn as a celebrity whistleblower against CA, isn't dubious enough, check out the quoted excerpt from her recent interview with Democracy Now! (notes 2, 3). Amy Goodman asked Kaiser about her involvement with the data harvesting operations of the Obama 2008 campaign, which at the time were celebrated as the state of the art. Kaiser completely avoided the question, producing a confused but nevertheless revealing word salad. In this passage, she seemed to say that harvesting data from total Internet surveillance and deploying it to influence people covertly is actually a good thing, assuming it is managed by the right hands.

For example, as a "human rights activist," Kaiser apparently believes data algorithms can help decision makers, like the ones who just launched the Soleimani attack, to whom she alludes (!), act in time to prevent wars and genocides. Also, she says she was attracted to CA in the first place by its participation in a NATO program that would use harvested social media data to identify young people susceptible to extremist messaging, and intervene to help them before they did something like joining ISIS.

You can't make this shit up, and I didn't. Just read what she says. I'm sorry that the disinformation and manipulation is so tangled and layered and deep, and that the tools of our free media (the Internet) are also the tools of manipulation, surveillance, and censorship. But what did you expect? White hats and black hats?

* PS. I believe Kaiser and the documentary, The Great Hack, greatly exaggerate the impact of the CA operation, odious though it was, certainly in comparison to that of the conventional corporate media, who launched the Trump campaign into the stratosphere -- or to the old-fashioned party machinery on the state level, who suppressed the anti-Trump vote to secure the Electoral College victory.


1. Phunware press release, 19 June 2019. ... sory-board

2. Transcript of interview on Democracy Now!, broadcast 7 Jan 2020.


AMY GOODMAN: So, talk about your trajectory. I mean, Karim and Jehane, you do this very well in the film, but it is a very unlikely path to a firm that may well have been illegal in what it did, in working with Facebook, harvesting all this information, that ultimately helped to get Trump elected. But that’s not really where you came from. In the film, I’m looking at pictures of you and Michelle Obama. You were a key figure in President Obama’s social media team in his election campaign.

BRITTANY KAISER: I have always been a political and human rights activist. That’s where I came from, so it was really easy to snap back into that kind of work. I actually was in the third year of my Ph.D., writing about prevention of genocide, war crimes and crimes against humanity, when I first met the former CEO of Cambridge Analytica, Alexander Nix. My Ph.D. ended up being about how you could get real-time information, so how you could use big data systems, in order to build early-warning systems to give people who make decisions, like the decision that was just made about Iran — give them real-time information so that they can prevent war before it happens. Unfortunately, no one at my law school could teach me anything about predictive algorithms, so I joined this company part-time in order to start to learn how these early-warning systems could possibly be built.

AMY GOODMAN: Well, explain. Explain your meeting with Alexander Nix, who is the head — came from the defense contractor — right? — SCL, and then was the head of Cambridge Analytica, who said, “Let me get you drunk and steal your secrets.”

BRITTANY KAISER: Yes, he did. Not that becoming, but he has always been an incredibly good salesman. In one of my first meetings with him, he showed me a contract that the company had with NATO in order to identify young people in the United Kingdom who were vulnerable to being recruited into ISIS, and running counterpropaganda communications to keep them at home safe with their families instead of sneaking themselves into Syria. So, obviously, that type of work was incredibly attractive to me. And I thought, “Hey, data can really be used for good and for human rights impact. This is something I really want to learn how to do.”

AMY GOODMAN: But soon you were on your way to the United States with Alexander Nix, meeting with Corey Lewandowski, who at the time was the campaign manager for Donald Trump. When did those red flags go up for you?

BRITTANY KAISER: There were red flags here and there, especially when I would call our lawyers, who were actually Giuliani’s firm at the time, in order to ask for advice on what I could and could not do with certain data projects. And I always got told, “Hey, you’re creating too many invoices.”

But what really landed the plane for me was, a month after Donald Trump’s election, everybody at Cambridge Analytica who had worked both on the Trump campaign and on the Trump super PAC, which ran the “Defeat Crooked Hillary” campaign — they gave us a two-day-long debrief, which I write about in detail in my book Targeted, about what they did. They showed us how much data they collected, how they modeled it, how they identified people as individuals that could be convinced not to vote, and the types of disinformation that they sent these people in order to change their minds. It was the most horrific two days of my life.

3. Video of interview. ... _analytica

COMMENT: Oh! That's when she figured it out. Before that meeting, a month after Trump was elected, she had no cause to realize.

"Are we the baddies?"



Re: Surveillance

Postby identity » Sat Jan 18, 2020 7:37 pm

The Secretive Company That Might End Privacy as We Know It

A little-known start-up helps law enforcement match photos of unknown people to their online images — and “might lead to a dystopian future or something,” a backer says.
By Kashmir Hill
Jan. 18, 2020, updated 2:25 p.m. ET

Until recently, Hoan Ton-That’s greatest hits included an obscure iPhone game and an app that let people put Donald Trump’s distinctive yellow hair on their own photos.
Then Mr. Ton-That — an Australian techie and onetime model — did something momentous: He invented a tool that could end your ability to walk down the street anonymously, and provided it to hundreds of law enforcement agencies, ranging from local cops in Florida to the F.B.I. and the Department of Homeland Security.

His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.

Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.
But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.

And it’s not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.
“The weaponization possibilities of this are endless,” said Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University. “Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.”
Clearview has shrouded itself in secrecy, avoiding debate about its boundary-pushing technology. When I began looking into the company in November, its website was a bare page showing a nonexistent Manhattan address as its place of business. The company’s one employee listed on LinkedIn, a sales manager named “John Good,” turned out to be Mr. Ton-That, using a fake name. For a month, people affiliated with the company would not return my emails or phone calls.

While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.
Facial recognition technology has always been controversial. It makes people nervous about Big Brother. It has a tendency to deliver false matches for certain groups, like people of color. And some facial recognition products used by the police — including Clearview’s — haven’t been vetted by independent experts.

Clearview’s app carries extra risks because law enforcement agencies are uploading sensitive photos to the servers of a company whose ability to protect its data is untested.
The company eventually started answering my questions, saying that its earlier silence was typical of an early-stage start-up in stealth mode. Mr. Ton-That acknowledged designing a prototype for use with augmented-reality glasses but said the company had no plans to release it. And he said my photo had rung alarm bells because the app “flags possible anomalous search behavior” in order to prevent users from conducting what it deemed “inappropriate searches.”
In addition to Mr. Ton-That, Clearview was founded by Richard Schwartz — who was an aide to Rudolph W. Giuliani when he was mayor of New York — and backed financially by Peter Thiel, a venture capitalist behind Facebook and Palantir.
Another early investor is a small firm called Kirenaga Partners. Its founder, David Scalzo, dismissed concerns about Clearview making the internet searchable by face, saying it’s a valuable crime-solving tool.

“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” Mr. Scalzo said. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”

Mr. Ton-That, 31, grew up a long way from Silicon Valley. In his native Australia, he was raised on tales of his royal ancestors in Vietnam. In 2007, he dropped out of college and moved to San Francisco. The iPhone had just arrived, and his goal was to get in early on what he expected would be a vibrant market for social media apps. But his early ventures never gained real traction.
In 2009, Mr. Ton-That created a site that let people share links to videos with all the contacts in their instant messengers. Mr. Ton-That shut it down after it was branded a “phishing scam.” In 2015, he spun up Trump Hair, which added Mr. Trump’s distinctive coif to people in a photo, and a photo-sharing program. Both fizzled.
Dispirited, Mr. Ton-That moved to New York in 2016. Tall and slender, with long black hair, he considered a modeling career, he said, but after one shoot he returned to trying to figure out the next big thing in tech. He started reading academic papers on artificial intelligence, image recognition and machine learning.

Mr. Schwartz and Mr. Ton-That met in 2016 at a book event at the Manhattan Institute, a conservative think tank. Mr. Schwartz, now 61, had amassed an impressive Rolodex working for Mr. Giuliani in the 1990s and serving as the editorial page editor of The New York Daily News in the early 2000s. The two soon decided to go into the facial recognition business together: Mr. Ton-That would build the app, and Mr. Schwartz would use his contacts to drum up commercial interest.
Police departments have had access to facial recognition tools for almost 20 years, but they have historically been limited to searching government-provided images, such as mug shots and driver’s license photos. In recent years, facial recognition algorithms have improved in accuracy, and companies like Amazon offer products that can create a facial recognition program for any database of images.
Mr. Ton-That wanted to go way beyond that. He began in 2016 by recruiting a couple of engineers. One helped design a program that can automatically collect images of people’s faces from across the internet, such as employment sites, news sites, educational sites, and social networks including Facebook, YouTube, Twitter, Instagram and even Venmo. Representatives of those companies said their policies prohibit such scraping, and Twitter said it explicitly banned use of its data for facial recognition.

Another engineer was hired to perfect a facial recognition algorithm that was derived from academic papers. The result: a system that uses what Mr. Ton-That described as a “state-of-the-art neural net” to convert all the images into mathematical formulas, or vectors, based on facial geometry — like how far apart a person’s eyes are. Clearview created a vast directory that clustered all the photos with similar vectors into “neighborhoods.” When a user uploads a photo of a face into Clearview’s system, it converts the face into a vector and then shows all the scraped photos stored in that vector’s neighborhood — along with the links to the sites from which those images came.
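The "neighborhoods" the article describes are essentially nearest-neighbor search over face-embedding vectors. As a rough illustration only (the function names, toy 4-dimensional vectors, and 0.9 similarity threshold below are invented for this sketch; Clearview's actual system is not public and real embeddings have hundreds of dimensions), a cosine-similarity lookup might look like:

```python
import numpy as np

def to_unit(v):
    # Normalize an embedding to unit length so cosine similarity
    # reduces to a plain dot product.
    return v / np.linalg.norm(v)

def build_index(embeddings):
    # Stack pre-computed face embeddings into one matrix,
    # a stand-in for the article's "directory" of vectors.
    return np.vstack([to_unit(e) for e in embeddings])

def nearest_neighbors(index, query, k=3, threshold=0.9):
    # Return (row, similarity) for the k most similar stored faces
    # whose cosine similarity to the query clears the threshold.
    sims = index @ to_unit(query)
    order = np.argsort(-sims)[:k]
    return [(int(i), float(sims[i])) for i in order if sims[i] >= threshold]

# Toy "embeddings" standing in for scraped photos.
db = [np.array([1.0, 0.0, 0.0, 0.1]),
      np.array([0.0, 1.0, 0.0, 0.0]),
      np.array([0.9, 0.1, 0.0, 0.1])]
index = build_index(db)
# A query close to the first and third stored faces matches both;
# the second face falls below the threshold and is filtered out.
matches = nearest_neighbors(index, np.array([1.0, 0.05, 0.0, 0.1]))
```

At Clearview's claimed scale (billions of vectors), a brute-force dot product like this would be replaced by an approximate nearest-neighbor index, but the clustering-by-similarity idea is the same.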
Mr. Schwartz paid for server costs and basic expenses, but the operation was bare bones; everyone worked from home. “I was living on credit card debt,” Mr. Ton-That said. “Plus, I was a Bitcoin believer, so I had some of those.”

By the end of 2017, the company had a formidable facial recognition tool, which it called Smartcheckr. But Mr. Schwartz and Mr. Ton-That weren’t sure whom they were going to sell it to.
Maybe it could be used to vet babysitters or as an add-on feature for surveillance cameras. What about a tool for security guards in the lobbies of buildings or to help hotels greet guests by name? “We thought of every idea,” Mr. Ton-That said.
One of the odder pitches, in late 2017, was to Paul Nehlen — an anti-Semite and self-described “pro-white” Republican running for Congress in Wisconsin — to use “unconventional databases” for “extreme opposition research,” according to a document provided to Mr. Nehlen and later posted online. Mr. Ton-That said the company never actually offered such services.
The company soon changed its name to Clearview AI and began marketing to law enforcement. That was when the company got its first round of funding from outside investors: Mr. Thiel and Kirenaga Partners. Among other things, Mr. Thiel was famous for secretly financing Hulk Hogan’s lawsuit that bankrupted the popular website Gawker. Both Mr. Thiel and Mr. Ton-That had been the subject of negative articles by Gawker.

“In 2017, Peter gave a talented young founder $200,000, which two years later converted to equity in Clearview AI,” said Jeremiah Hall, Mr. Thiel’s spokesman. “That was Peter’s only contribution; he is not involved in the company.”
Even after a second funding round in 2019, Clearview remains tiny, having raised $7 million from investors, according to Pitchbook, a website that tracks investments in start-ups. The company declined to confirm the amount.
In February, the Indiana State Police started experimenting with Clearview. They solved a case within 20 minutes of using the app. Two men had gotten into a fight in a park, and it ended when one shot the other in the stomach. A bystander recorded the crime on a phone, so the police had a still of the gunman’s face to run through Clearview’s app.
They immediately got a match: The man appeared in a video that someone had posted on social media, and his name was included in a caption on the video. “He did not have a driver’s license and hadn’t been arrested as an adult, so he wasn’t in government databases,” said Chuck Cohen, an Indiana State Police captain at the time.
The man was arrested and charged; Mr. Cohen said he probably wouldn’t have been identified without the ability to search social media for his face. The Indiana State Police became Clearview’s first paying customer, according to the company. (The police declined to comment beyond saying that they tested Clearview’s app.)
Clearview deployed current and former Republican officials to approach police forces, offering free trials and annual licenses for as little as $2,000. Mr. Schwartz tapped his political connections to help make government officials aware of the tool, according to Mr. Ton-That. (“I’m thrilled to have the opportunity to help Hoan build Clearview into a mission-driven organization that’s helping law enforcement protect children and enhance the safety of communities across the country,” Mr. Schwartz said through a spokeswoman.)
The company’s main contact for customers was Jessica Medeiros Garrison, who managed Luther Strange’s Republican campaign for Alabama attorney general. Brandon Fricke, an N.F.L. agent engaged to the Fox Nation host Tomi Lahren, said in a financial disclosure report during a congressional campaign in California that he was a “growth consultant” for the company. (Clearview said that it was a brief, unpaid role, and that the company had enlisted Democrats to help market its product as well.)

The company’s most effective sales technique was offering 30-day free trials to officers, who then encouraged their acquisition departments to sign up and praised the tool to officers from other police departments at conferences and online, according to the company and documents provided by police departments in response to public-record requests. Mr. Ton-That finally had his viral hit.
In July, a detective in Clifton, N.J., urged his captain in an email to buy the software because it was “able to identify a suspect in a matter of seconds.” During the department’s free trial, Clearview had identified shoplifters, an Apple Store thief and a good Samaritan who had punched out a man threatening people with a knife.
Photos “could be covertly taken with telephoto lens and input into the software, without ‘burning’ the surveillance operation,” the detective wrote in the email, provided to The Times by two researchers, Beryl Lipton of MuckRock and Freddy Martinez of Open the Government. They discovered Clearview late last year while looking into how local police departments are using facial recognition.
According to a Clearview sales presentation reviewed by The Times, the app helped identify a range of individuals: a person who was accused of sexually abusing a child whose face appeared in the mirror of someone else's gym photo; the person behind a string of mailbox thefts in Atlanta; a John Doe found dead on an Alabama sidewalk; and suspects in multiple identity-fraud cases at banks.

In Gainesville, Fla., Detective Sgt. Nick Ferrara heard about Clearview last summer when it advertised on CrimeDex, a list-serv for investigators who specialize in financial crimes. He said he had previously relied solely on a state-provided facial recognition tool, FACES, which draws from more than 30 million Florida mug shots and Department of Motor Vehicle photos.

Sergeant Ferrara found Clearview’s app superior, he said. Its nationwide database of images is much larger, and unlike FACES, Clearview’s algorithm doesn’t require photos of people looking straight at the camera.
“With Clearview, you can use photos that aren’t perfect,” Sergeant Ferrara said. “A person can be wearing a hat or glasses, or it can be a profile shot or partial view of their face.”

He uploaded his own photo to the system, and it brought up his Venmo page. He ran photos from old, dead-end cases and identified more than 30 suspects. In September, the Gainesville Police Department paid $10,000 for an annual Clearview license.
Federal law enforcement, including the F.B.I. and the Department of Homeland Security, are trying it, as are Canadian law enforcement authorities, according to the company and government officials.
Despite its growing popularity, Clearview avoided public mention until the end of 2019, when Florida prosecutors charged a woman with grand theft after two grills and a vacuum were stolen from an Ace Hardware store in Clermont. She was identified when the police ran a still from a surveillance video through Clearview, which led them to her Facebook page. A tattoo visible in the surveillance video and Facebook photos confirmed her identity, according to an affidavit in the case.

Mr. Ton-That said the tool does not always work. Most of the photos in Clearview’s database are taken at eye level. Much of the material that the police upload is from surveillance cameras mounted on ceilings or high on walls.
“They put surveillance cameras too high,” Mr. Ton-That lamented. “The angle is wrong for good face recognition.”

Despite that, the company said, its tool finds matches up to 75 percent of the time. But it is unclear how often the tool delivers false matches, because it has not been tested by an independent party such as the National Institute of Standards and Technology, a federal agency that rates the performance of facial recognition algorithms.
“We have no data to suggest this tool is accurate,” said Clare Garvie, a researcher at Georgetown University’s Center on Privacy and Technology, who has studied the government’s use of facial recognition. “The larger the database, the larger the risk of misidentification because of the doppelgänger effect. They’re talking about a massive database of random people they’ve found on the internet.”
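Garvie's point about database size can be made concrete with a back-of-the-envelope calculation. Assuming a hypothetical per-comparison false-match rate and statistically independent comparisons (both assumptions are mine, not figures from the article), the expected number of wrong hits in a one-to-many search grows linearly with the size of the gallery being searched:

```python
def expected_false_matches(n_database, false_match_rate):
    # Expected number of incorrect hits in a single one-to-many search:
    # each of the n_database comparisons independently has a small
    # chance of a spurious match.
    return n_database * false_match_rate

def p_at_least_one_false_match(n_database, false_match_rate):
    # Probability that a single search returns at least one wrong person,
    # under the (simplifying) independence assumption.
    return 1 - (1 - false_match_rate) ** n_database

# Even an excellent hypothetical rate of one false match per million
# comparisons is overwhelmed by a 3-billion-image gallery.
print(expected_false_matches(3_000_000_000, 1e-6))            # 3000 expected wrong hits
print(round(p_at_least_one_false_match(300_000, 1e-6), 3))    # ~0.259 for a 300k gallery
```

This is the "doppelgänger effect" in arithmetic form: holding accuracy fixed, every additional stored face adds another chance for an innocent look-alike to surface.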
But current and former law enforcement officials say the app is effective. “For us, the testing was whether it worked or not,” said Mr. Cohen, the former Indiana State Police captain.
One reason that Clearview is catching on is that its service is unique. That’s because Facebook and other social media sites prohibit people from scraping users’ images — Clearview is violating the sites’ terms of service.
“A lot of people are doing it,” Mr. Ton-That shrugged. “Facebook knows.”
Jay Nancarrow, a Facebook spokesman, said the company was reviewing the situation with Clearview and “will take appropriate action if we find they are violating our rules.”
Mr. Thiel, the Clearview investor, sits on Facebook’s board. Mr. Nancarrow declined to comment on Mr. Thiel's personal investments.
Some law enforcement officials said they didn’t realize the photos they uploaded were being sent to and stored on Clearview’s servers. Clearview tries to pre-empt concerns with an F.A.Q. document given to would-be clients that says its customer-support employees won’t look at the photos that the police upload.

Clearview also hired Paul D. Clement, a United States solicitor general under President George W. Bush, to assuage concerns about the app’s legality.
In an August memo that Clearview provided to potential customers, including the Atlanta Police Department and the Pinellas County Sheriff’s Office in Florida, Mr. Clement said law enforcement agencies “do not violate the federal Constitution or relevant existing state biometric and privacy laws when using Clearview for its intended purpose.”
Mr. Clement, now a partner at Kirkland & Ellis, wrote that the authorities don’t have to tell defendants that they were identified via Clearview, as long as it isn’t the sole basis for getting a warrant to arrest them. Mr. Clement did not respond to multiple requests for comment.
The memo appeared to be effective; the Atlanta police and Pinellas County Sheriff’s Office soon started using Clearview.
Because the police upload photos of people they’re trying to identify, Clearview possesses a growing database of individuals who have attracted attention from law enforcement. The company also has the ability to manipulate the results that the police see. After the company realized I was asking officers to run my photo through the app, my face was flagged by Clearview’s systems and for a while showed no matches. When asked about this, Mr. Ton-That laughed and called it a “software bug.”
“It’s creepy what they’re doing, but there will be many more of these companies. There is no monopoly on math,” said Al Gidari, a privacy professor at Stanford Law School. “Absent a very strong federal privacy law, we’re all screwed.”

Mr. Ton-That said his company used only publicly available images. If you change a privacy setting in Facebook so that search engines can’t link to your profile, your Facebook photos won’t be included in the database, he said.
But if your profile has already been scraped, it is too late. The company keeps all the images it has scraped even if they are later deleted or taken down, though Mr. Ton-That said the company was working on a tool that would let people request that images be removed if they had been taken down from the website of origin.

Woodrow Hartzog, a professor of law and computer science at Northeastern University in Boston, sees Clearview as the latest proof that facial recognition should be banned in the United States.
“We’ve relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table,” Mr. Hartzog said. “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”

During a recent interview at Clearview’s offices in a WeWork location in Manhattan’s Chelsea neighborhood, Mr. Ton-That demonstrated the app on himself. He took a selfie and uploaded it. The app pulled up 23 photos of him. In one, he is shirtless and lighting a cigarette while covered in what looks like blood.
Mr. Ton-That then took my photo with the app. The “software bug” had been fixed, and now my photo returned numerous results, dating back a decade, including photos of myself that I had never seen before. When I used my hand to cover my nose and the bottom of my face, the app still returned seven correct matches for me.
Police officers and Clearview’s investors predict that its app will eventually be available to the public.

Mr. Ton-That said he was reluctant. “There’s always going to be a community of bad people who will misuse it,” he said.
Even if Clearview doesn’t make its app publicly available, a copycat company might, now that the taboo is broken. Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable — and his or her home address would be only a few clicks away. It would herald the end of public anonymity.
Asked about the implications of bringing such a power into the world, Mr. Ton-That seemed taken aback.
“I have to think about that,” he said. “Our belief is that this is the best use of the technology.”

Mr. Ton-That said the tool does not always work. Most of the photos in Clearview’s database are taken at eye level. Much of the material that the police upload is from surveillance cameras mounted on ceilings or high on walls.
“They put surveillance cameras too high,” Mr. Ton-That lamented. “The angle is wrong for good face recognition.”

Just in the last few months, my local Rapid Transit service (which has its own police dept.) has begun installing PTZ cameras within clear globes on rail platforms at or just above head-height (on platforms already covered by ceiling-mounted CCTV cameras). I wonder why?
Posts: 705
Joined: Fri Mar 20, 2015 5:00 am
Blog: View Blog (0)

Re: Surveillance

Postby JackRiddler » Sun Jan 19, 2020 12:54 am

This shit will be real time in no time. We have no idea how fucked we are. No worries, there will be studies by well-funded mercenary academics to explain why it's all just panic and nostalgia, and actually quite harmless or else very good for us.

Sometimes I think how Star Trek (1967) foresaw this, and made it seem both fun and necessary. They always knew where everyone was. Where's Mr. Spock? Deck 7!
User avatar
Posts: 15473
Joined: Wed Jan 02, 2008 2:59 pm
Location: New York City
Blog: View Blog (0)

Re: Surveillance

Postby identity » Sun Jan 19, 2020 1:09 am

JackRiddler » Sat Jan 18, 2020 8:54 pm wrote: This shit will be real time in no time. We have no idea how fucked we are.

I have been saying since at least 2001 that we are only in the infancy of surveillance. I may soon need to update "infancy" to "childhood."
Posts: 705
Joined: Fri Mar 20, 2015 5:00 am
Blog: View Blog (0)

Re: Surveillance

Postby identity » Mon Jan 20, 2020 8:41 pm

Facial Recognition Is the Perfect Tool for Oppression
With such a grave threat to privacy and civil liberties, measured regulation should be abandoned in favor of an outright ban

Woodrow Hartzog
Aug 2, 2018

Co-authored with Evan Selinger

The Trojans would have loved facial recognition technology.

It’s easy to accept an outwardly compelling but ultimately illusory view about what the future will look like once the full potential of facial recognition technology is unlocked. From this perspective, you’ll never have to meet a stranger, fuss with passwords, or worry about forgetting your wallet. You’ll be able to organize your entire video and picture collection in seconds — even instantly find photos of your kids running around at summer camp. More important, missing people will be located, schools will become safe, and the bad guys won’t get away with hiding in the shadows or under desks.

Total convenience. Absolute justice. Churches completely full on Sundays. At long last, our tech utopia will be realized.

We believe facial recognition technology is the most uniquely dangerous surveillance mechanism ever invented.

Tempted by this vision, people will continue to invite facial recognition technology into their homes and onto their devices, allowing it to play a central role in ever more aspects of their lives. And that’s how the trap gets sprung and the unfortunate truth becomes revealed: Facial recognition technology is a menace disguised as a gift. It’s an irresistible tool for oppression that’s perfectly suited for governments to display unprecedented authoritarian control and an all-out privacy-eviscerating machine.
We should keep this Trojan horse outside of the city.

The Current Debate

The ACLU, along with nearly 70 other civil rights organizations, has asked Amazon to stop selling facial recognition technology to the government and further called on Congress to enact a moratorium on government uses of facial recognition technology. The media weighed in, and important voices expressed anxiety. Over at the Washington Post, the editorial board declared, “Congress should intervene soon.” Even some members of Congress — many of whom were recently misidentified by Amazon’s facial recognition software — are rightly worried.

We’re in the mix, too. Along with a group of other scholars, we asked Amazon to change its ways.

In response, Brad Smith, president of Microsoft, called for the U.S. government to regulate facial recognition tech. “The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself…This, in fact, is what we believe is needed today — a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission,” he wrote on Microsoft’s blog.

Corporate leadership is important, and regulation that imposes limits on facial recognition technology can be helpful. But partial protections and “well-articulated guidelines” will never be enough. Whatever help legislation might provide, the protections likely won’t be passed until face-scanning technology becomes much cheaper and easier to use. Smith actually seems to make this point, albeit unintentionally. He emphasizes that “Microsoft called for national privacy legislation for the United States in 2005.” Well, it’s 2018, and Congress has yet to pass anything.

If facial recognition technology continues to be further developed and deployed, a formidable infrastructure will be built, and we’ll be stuck with it. History suggests that highly publicized successes, the fear of failing to beef up security, and the sheer intoxicant of power will tempt overreach, motivate mission creep, and ultimately lead to systematic abuse.

The future of human flourishing depends upon facial recognition technology being banned before the systems become too entrenched in our lives.

Why a Ban Is Necessary

A call to ban facial recognition systems, full stop, is extreme. Really smart scholars like Judith Donath argue that it’s the wrong approach. She suggests a more technologically neutral tactic, built around the larger questions that identify the specific activities to be prohibited, the harms to be avoided, and the values, rights, and situations we are trying to protect. For almost every other digital technology, we agree with this approach.

But we believe facial recognition technology is the most uniquely dangerous surveillance mechanism ever invented. It’s the missing piece in an already dangerous surveillance infrastructure, built because that infrastructure benefits both the government and private sectors. And when technologies become so dangerous, and the harm-to-benefit ratio becomes so imbalanced, categorical bans are worth considering. The law already prohibits certain kinds of dangerous digital technologies, like spyware. Facial recognition technology is far more dangerous. It’s worth singling out, with a specific prohibition on top of a robust, holistic, value-based, and largely technology-neutral regulatory framework. Such a layered system will help avoid regulatory whack-a-mole where lawmakers are always chasing tech trends.

Surveillance conducted with facial recognition systems is intrinsically oppressive. The mere existence of facial recognition systems, which are often invisible, harms civil liberties, because people will act differently if they suspect they’re being surveilled. Even legislation that holds out the promise of stringent protective procedures won’t prevent chill from impeding crucial opportunities for human flourishing by dampening expressive and religious conduct.

Facial recognition technology also enables a host of other abuses and corrosive activities:
• Disproportionate impact on people of color and other minority and vulnerable populations.
• Due process harms, which might include shifting the ideal from “presumed innocent” to “people who have not been found guilty of a crime, yet.”
• Facilitation of harassment and violence.
• Denial of fundamental rights and opportunities, such as protection against “arbitrary government tracking of one’s movements, habits, relationships, interests, and thoughts.”
• The suffocating restraint of the relentless, perfect enforcement of law.
• The normalized elimination of practical obscurity.
• The amplification of surveillance capitalism.

As facial recognition scholar Clare Garvie rightly observes, mistakes with the technology can have deadly consequences:
What happens if a system like this gets it wrong? A mistake by a video-based surveillance system may mean an innocent person is followed, investigated, and maybe even arrested and charged for a crime he or she didn’t commit. A mistake by a face-scanning surveillance system on a body camera could be lethal. An officer alerted to a potential threat to public safety or to himself must, in an instant, decide whether to draw his weapon. A false alert places an innocent person in those crosshairs.

Two reports, among others, thoroughly detail many of these problems. There’s the invaluable paper written by Jennifer Lynch, senior staff attorney at the Electronic Frontier Foundation, “Face Off: Law Enforcement Use of Face Recognition Technology.” And there’s the indispensable study “The Perpetual Line-Up,” from Georgetown’s Center on Privacy and Technology, co-authored by Clare Garvie, Alvaro Bedoya, and Jonathan Frankle. Our view is deeply informed by this rigorous scholarship, and we would urge anyone interested in the topic to carefully read it.

Despite the problems our colleagues have documented, you might be skeptical that a ban is needed. After all, other technologies pose similar threats: geolocation data, social media data, search history data, and so many other components of our big data trails can be highly revealing in themselves and downright soul-baring in the aggregate. And yet, facial recognition remains uniquely dangerous. Even among biometrics, such as fingerprints, DNA samples, and iris scans, facial recognition stands apart.

Systems that use face prints have five distinguishing features that justify singling them out for a ban. First, faces are hard to hide or change. They can’t be encrypted, unlike a hard drive, email, or text message. They are remotely capturable from distant cameras and increasingly inexpensive to obtain and store in the cloud — a feature that, itself, drives surveillance creep.

Second, there is an existing legacy of name and face databases, such as for driver’s licenses, mugshots, and social media profiles. This makes further exploitation easy through “plug and play” mechanisms.

Third, unlike traditional surveillance systems, which frequently require new, expensive hardware or new data sources, the data inputs for facial recognition are widespread and in the field right now, namely with CCTV and officer-worn body cams.

Fourth, tipping point creep. Any database of faces created to identify individuals arrested or caught on camera requires creating matching databases that, with a few lines of code, can be applied to analyze body cam or CCTV feeds in real time. New York Governor Andrew Cuomo perfectly expressed the logic of facial recognition creep, insisting that vehicle license-plate scanning is insignificant compared to what cameras can do once enabled with facial recognition tech. “When it reads that license plate, it reads it for scofflaws…[but] the toll is almost the least significant contribution that this electronic equipment can actually perform,” Cuomo said. “We are now moving to facial recognition technology, which takes it to a whole new level, where it can see the face of the person in the car and run that technology against databases.” If you build it, they will surveil.
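The "few lines of code" claim is easy to illustrate. The sketch below is hypothetical and simplified: it assumes a face-recognition model has already reduced each face to a numeric embedding (the vectors, names, and threshold here are invented for illustration), and shows how little additional code it takes to run a frame from a CCTV or body-cam feed against an enrolled database.

```python
# Hypothetical sketch: once a database of face embeddings exists,
# matching a new face against it takes only a few lines of code.
# The embeddings and threshold below are illustrative stand-ins for
# the output of a real face-recognition model.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.9):
    """Return the name of the closest enrolled face, or None if no
    enrolled embedding clears the similarity threshold."""
    name, score = max(
        ((n, cosine_similarity(probe, e)) for n, e in database.items()),
        key=lambda pair: pair[1],
    )
    return name if score >= threshold else None

# Toy "database" of enrolled face prints (illustrative vectors only).
database = {
    "alice": [0.9, 0.1, 0.2],
    "bob": [0.1, 0.8, 0.3],
}
frame_embedding = [0.88, 0.12, 0.21]  # embedding from one video frame
print(best_match(frame_embedding, database))  # prints "alice"
```

In a deployed system the same loop would simply run over every face detected in every frame, which is why a database built for one purpose converts so cheaply into a real-time surveillance tool.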

Finally, it bears noting that faces, unlike fingerprints, gait, or iris patterns, are central to our identity. Faces are conduits between our on- and offline lives, and they can be the thread that connects all of our real-name, anonymous, and pseudonymous activities. It’s easy to think people don’t have a strong privacy interest in faces because many of us routinely show them in public. Indeed, outside of areas where burkas are common, hiding our faces often prompts suspicion.

The thing is we actually do have a privacy interest in our faces, and this is because humans have historically developed the values and institutions associated with privacy protections during periods where it’s been difficult to identify most people we don’t know. Thanks to biological constraints, the human memory is limited; without technological augmentation, we can remember only so many faces. And thanks to population size and distribution, we’ll encounter only so many people over the course of our lifetimes. These limitations create obscurity zones, and because of them, people have had great success hiding in public.

Recent Supreme Court decisions about the Fourth Amendment have shown that fighting for privacy protections in public spaces isn’t antiquated. Just this summer, in Carpenter v. United States, our highest court ruled by a 5–4 vote that the Constitution protects cellphone location data. In the majority opinion, Chief Justice John Roberts wrote, “A person does not surrender all Fourth Amendment protection by venturing into the public sphere. To the contrary, ‘what [one] seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.’”

Why Facial Recognition Technology Can’t Be Procedurally Regulated

Because facial recognition technology poses an extraordinary danger, society can’t afford to have faith in internal processes of reform like self-regulation. Financial rewards will encourage entrepreneurialism that pushes facial recognition technology to its limits, and corporate lobbying will tilt heavily in this direction.

Facial recognition technology is a menace disguised as a gift.

Society also can’t wait for a populist uprising. Facial recognition technology will continue to be marketed as a component of the latest and greatest apps and devices; Apple is already pitching Face ID as the best feature of its new iPhone. Ideologically charged news coverage of events where facial recognition technology appears to save the day will reinforce the same message.

Finally, society shouldn’t place its hopes in conventional approaches to regulation. Since facial recognition technology poses a unique threat, it can’t be contained by measures that define appropriate and inappropriate uses and that hope to balance potential social benefit with a deterrent for bad actors. This is one of the rare situations that requires an absolute prohibition, something like the Ottawa Treaty on landmines.

Right now, there are a few smart proposals to control facial recognition technology and even fewer actual laws limiting it. The biometric laws in Illinois and Texas, for example, are commendable, yet they follow the traditional regulatory strategy of requiring those who would collect and use facial recognition (and other biometric identifiers) to follow a basic set of fair information practices and privacy protocols. These include requirements to get informed consent prior to collection, mandated data protection obligations and retention limits, prohibitions on profiting from biometric data, limited ability to disclose biometric data to others, and, notably, private causes of action for violations of the statutes.

Proposed facial recognition laws follow along similar lines. The Federal Trade Commission recommends a similar “notice, choice, and fair data limits” approach to facial recognition. The Electronic Frontier Foundation’s report, which focuses on law enforcement, contains similar though more robust suggestions. These include placing restrictions on collecting and storing data; recommending limits on combining multiple biometrics in a single database; defining clear rules for use, sharing, and security; and providing notice, audit trails, and independent oversight. In its model face recognition legislation, the Georgetown Law Center on Privacy and Technology’s report proposes significant restrictions on government access to face-print databases as well as meaningful limitations on use of real-time facial recognition.

Tragically, most of these existing and proposed requirements are procedural, and in our opinion they won’t ultimately stop surveillance creep and the spread of face-scanning infrastructure. For starters, some of the basic assumptions about consent, notice, and choice that are built into the existing legal frameworks are faulty. Informed consent as a regulatory mechanism for surveillance and data practices is a spectacular failure. Even if people were given all the control in the world, they wouldn’t be able to meaningfully exercise it at scale.

Yet lawmakers and industry trudge on, oblivious to people’s time and resource limitations. Additionally, these rules, like most privacy rules in the digital age, are riddled with holes. Some of the statutes apply only to how data is collected or stored but largely ignore how it is used. Others apply only to commercial actors or to the government and are so ambiguous as to tolerate all kinds of pernicious activity. And to realize the touted benefits of facial recognition would require more cameras, more infrastructure, and face databases of all-encompassing breadth.

The Future of Human Faces

Because facial recognition technology holds out the promise of translating who we are and everywhere we go into trackable information that can be nearly instantly stored, shared, and analyzed, its future development threatens to leave us constantly compromised. The future of human flourishing depends upon facial recognition technology being banned before the systems become too entrenched in our lives. Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited. In such a world, critics of facial recognition technology will be disempowered, silenced, or cease to exist.
Posts: 705
Joined: Fri Mar 20, 2015 5:00 am
Blog: View Blog (0)

Re: Surveillance

Postby identity » Mon Jan 20, 2020 9:02 pm

Facial recognition is the plutonium of AI

It’s dangerous, racializing, and has few legitimate uses; facial recognition needs regulation and control on par with nuclear waste.

By Luke Stark

When, in 1941, Glenn T. Seaborg and his colleagues at the University of California, Berkeley isolated—and subsequently named—plutonium, the radioactive element 94, Seaborg reportedly suggested the periodic symbol Pu for the discovery. According to Seaborg, it “sounded like the words a child would exclaim, ‘Pee-yoo!’ when smelling something bad”. Plutonium, industrially produced for the American atomic bombs dropped on the Japanese cities of Hiroshima and Nagasaki in August 1945, was ill favored even by its discoverers.

Today, plutonium has very few nonmilitary uses (its application in nuclear weapons being, of course, a moral abomination in itself). Plutonium is produced as a byproduct of uranium-based nuclear power, and is the chief component of nuclear waste; in minuscule amounts, it is also used as a power source in specialized scientific instruments, such as aboard space probes.

Plutonium has only highly specialized and tightly controlled uses, and poses such a high risk of toxicity if allowed to proliferate that it is controlled by international regimes, and not produced at all if possible.

Plutonium, in other words, is an apt material metaphor for digital facial-recognition technologies: something to be recognized as anathema to the health of human society, and heavily restricted as a result.
Readers might object that the analogy between plutonium and facial recognition technologies is not just alarmist, but nonsensical. Yet in forthcoming work, the University of Washington’s Anna Lauren Hoffmann and I argue that the metaphors we use to make sense of digital systems can reveal important similarities between a new technology or practice, and other, older technological problems.


Recognizing facial recognition as plutonium-like in its hazardous effects only underscores the need to build on calls for regulation like Smith’s, paying close attention to how the government regulates a hazardous substance like plutonium. Smith notes one potential limited use case for facial recognition: as an accessibility tool for the visually impaired. Under a strong regulatory scheme, devices enabling this kind of functionality, like other digital accessibility devices and clinical health apps, might be regulated by the Food and Drug Administration. Just as the use of a substance like plutonium for specialized medical or security applications is highly constrained and closely monitored, facial recognition technologies could be subject to similar constraints. Plutonium serves as a useful metaphor for facial recognition because it signals some technologies are so dangerous if broadly accessible that they should be banned for almost all practical purposes.

Facial recognition’s racializing effects are so potentially toxic to our lives as social beings that its widespread use doesn’t outweigh the risks. “The future of human flourishing depends upon facial recognition technology being banned before the systems become too entrenched in our lives,” Hartzog and Selinger write. “Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.” To avoid the social toxicity and racial discrimination it will bring, facial recognition technologies need to be understood for what they are: nuclear-level threats to be handled with extraordinary care.
Posts: 705
Joined: Fri Mar 20, 2015 5:00 am
Blog: View Blog (0)

Re: Surveillance

Postby Grizzly » Tue Jan 21, 2020 2:55 pm

Was the internet made accessible to the public with the intention of creating a surveillance state/world? My guess, OR I HIGHLY SUSPECT, is that the answer would be an emphatic YES.

Renata Ávila: “The Internet of creation disappeared. Now we have the Internet of surveillance and control”

An interview with this specialist in human rights, technology and freedom of expression to discuss how today’s societies are advancing to the drumbeat of “digital colonialism”.
If Barthes can forgive me, “What the public wants is the image of passion Justice, not passion Justice itself.”
User avatar
Posts: 3320
Joined: Wed Oct 26, 2011 4:15 pm
Blog: View Blog (0)

Re: Surveillance

Postby Grizzly » Wed Jan 22, 2020 4:34 am

LE Tactics - “Parallel Construction” - U.S. directs agents to cover up program used to investigate Americans
If Barthes can forgive me, “What the public wants is the image of passion Justice, not passion Justice itself.”
User avatar
Posts: 3320
Joined: Wed Oct 26, 2011 4:15 pm
Blog: View Blog (0)

Re: Surveillance

Postby JackRiddler » Wed Jan 22, 2020 9:12 am

Grizzly » Tue Jan 21, 2020 1:55 pm wrote:Was the internet made accessible to the public with the intention of creating a surveillance state/world? My guess, OR I HIGHLY SUSPECT, is that the answer would be an emphatic YES.

Renata Ávila: “The Internet of creation disappeared. Now we have the Internet of surveillance and control”

An interview with this specialist in human rights, technology and freedom of expression to discuss how today’s societies are advancing to the drumbeat of “digital colonialism”.

I think this question is beside the point, a variation on chicken-and-egg. It's like asking whether a machine gun is designed to make holes in people and things, or to be mounted on top of a wall or vehicle as a visible deterrent. Both, obviously. After its development by the government's industrial-policy program (a.k.a. the MIC), the Internet was headed for mass-commercialization and for-profit privatization, no matter what. And from the beginning its potentials as the basis of a mass surveillance state/world were evident to anyone participating, so that the players reckoned on it as part of their planning and decision-making, whether or not they thought it was the central point. We can all see the combination. The oligarchs and new megacorps produced by the new sector of capital were all MIC contractors from the start, or they were incubated as investment objects by elements of the spook state, or, even if they were just lucky prospectors who hit a big strike, they quickly cozied up and integrated themselves in the surveillance state.

We meet at the borders of our being, we dream something of each others reality. - Harvey of R.I.

To Justice my maker from on high did incline:
I am by virtue of its might divine,
The highest Wisdom and the first Love.

TopSecret WallSt. Iraq & more
User avatar
Posts: 15473
Joined: Wed Jan 02, 2008 2:59 pm
Location: New York City
Blog: View Blog (0)

Re: Surveillance

Postby Grizzly » Wed Jan 22, 2020 12:40 pm

^^^ You're NOT wrong. Just wish you had spoken to my LE Tactics - “Parallel Construction” post above. I would have liked to hear your thoughts on that, JackRiddler, as your above was already redundant before you took the energy to write it.
If Barthes can forgive me, “What the public wants is the image of passion Justice, not passion Justice itself.”
User avatar
Posts: 3320
Joined: Wed Oct 26, 2011 4:15 pm
Blog: View Blog (0)

Re: Surveillance

Postby JackRiddler » Wed Jan 22, 2020 6:24 pm

Grizzly » Wed Jan 22, 2020 11:40 am wrote:^^^ You're NOT wrong. Just wish you had spoken to my LE Tactics - “Parallel Construction” post above. I would have liked to hear your thoughts on that, JackRiddler, as your above was already redundant before you took the energy to write it.

Sorry, answered your question but had not followed that link.

Now that I see it... oof! The inevitable has arrived on that dimension too, wouldn't you say? Like the old-style framing of suspects the copper "knows" to be guilty but cannot prove without enhancement, with the prosecutor helping of course. But turbocharged and not requiring a personal grudge. A whole universe of new people to choose from and fuck up. How can they resist?
We meet at the borders of our being, we dream something of each others reality. - Harvey of R.I.

To Justice my maker from on high did incline:
I am by virtue of its might divine,
The highest Wisdom and the first Love.

TopSecret WallSt. Iraq & more
User avatar
Posts: 15473
Joined: Wed Jan 02, 2008 2:59 pm
Location: New York City
Blog: View Blog (0)

Re: Surveillance

Postby chump » Mon Feb 03, 2020 7:28 pm

User avatar
Posts: 2261
Joined: Thu Aug 06, 2009 10:28 pm
Blog: View Blog (0)

