... and why the fuck should we believe them... ?
James Clapper: ‘I Didn’t Lie’ to Congress About NSA Surveillance, I ‘Simply Didn’t Understand’ the Question
Former Director of National Intelligence James Clapper said he did not lie about mass domestic surveillance programs when he testified to Congress in 2013, but rather that he made a mistake and did not understand which specific program he was being asked about.
CNN’s John Berman asked for Clapper’s reaction to a report by The Intercept’s Glenn Greenwald: “The very first NSA program we revealed from Snowden documents, the mass domestic spying program of Americans’ phone records which James Clapper lied about and Obama insisted was vital to national security has been shut down.”
“Well, the original thought behind this, and this program was put in place as a direct result of 9/11, the point was to be able to track quickly a foreign communicant talking to somebody in this country who may have been plotting a terrorist plot, and was put in place during the Bush Administration for that reason,” Clapper said. “I always regarded it as kind of a safeguard or insurance policy so that if the need came up you would have this to refer to.”
“As far as the comment, the allegation about my lying: I didn’t lie, I made a big mistake and I just simply didn’t understand what I was being asked about. I thought of another surveillance program, Section 702 of the Foreign Intelligence Surveillance Act, when I was being asked about Section 215 of the Patriot Act at the time. I just didn’t understand that,” he continued.
Berman then noted reports that no terrorists have been caught using the surveillance program, and asked Clapper whether that suggests it does not work.
“Well, that’s true, and I think probably at the time contemporaneously back 2013 or so when all this broke that we may have oversold it a bit because, you know, we were hard-pressed to point out to a specific case in point,” Clapper admitted. “What this was was just trying to capitalize on the lesson learned from 9/11. I will say that — and I’ve said this publicly many times before, that what this did prove was the need for the intelligence community to have been more transparent.”
Israeli espionage operations in the United States have "gone too far," senior U.S. intelligence officials have told Congress in recent weeks, Newsweek reported on Tuesday.
Newsweek quotes confidential briefings to Congress and says Israel’s massive spying is behind the U.S. failure to grant Israelis a visa waiver for entering the country.
We Got U.S. Border Officials to Testify Under Oath. Here’s What We Found Out.
Hugh Handeyside, Senior Staff Attorney, ACLU National Security Project
& Nathan Freed Wessler, Staff Attorney, ACLU Speech, Privacy, and Technology Project
& Esha Bhandari, Staff Attorney, ACLU Speech, Privacy, and Technology Project
April 30, 2019 | 1:45 PM
In September 2017, we, along with the Electronic Frontier Foundation, sued the federal government for its warrantless and suspicionless searches of phones and laptops at airports and other U.S. ports of entry.
The government immediately tried to dismiss our case, arguing that the First and Fourth Amendments do not protect against such searches. But the court ruled that our clients — 10 U.S. citizens and one lawful permanent resident whose phones and laptops were searched while returning to the United States — could move forward with their claims.
Since then, U.S. Customs and Border Protection and U.S. Immigration and Customs Enforcement have had to turn over documents and evidence about why and how they conduct warrantless and suspicionless searches of electronic devices at the border. And their officials have had to sit down with us to explain — under oath — their policies and practices governing such warrantless searches.
What we learned is alarming, and we’re now back in court with this new evidence asking the judge to skip trial altogether and rule for our clients.
The information we uncovered through our lawsuit shows that CBP and ICE are asserting near-unfettered authority to search and seize travelers’ devices at the border, for purposes far afield from the enforcement of immigration and customs laws. The agencies’ policies allow officers to search devices for general law enforcement purposes, such as investigating and enforcing bankruptcy, environmental, and consumer protection laws. The agencies also say that they can search and seize devices for the purpose of compiling “risk assessments” or to advance pre-existing investigations. The policies even allow officers to consider requests from other government agencies to search specific travelers’ devices.
CBP and ICE also say they can search a traveler’s electronic devices to find information about someone else. That means they can search a U.S. citizen’s devices to probe whether that person’s family or friends may be undocumented; the devices of a journalist or scholar with foreign sources who may be of interest to the U.S. government; or the devices of a traveler who is the business partner or colleague of someone under investigation.
Both agencies allow officers to retain information from travelers’ electronic devices and share it with other government entities, including state, local, and foreign law enforcement agencies.
Let’s get one thing clear: The government cannot use the pretext of the “border” to make an end run around the Constitution.
The border is not a lawless place. CBP and ICE are not exempt from the Constitution. And the information on our phones and laptops is no less deserving of constitutional protections than, say, international mail or our homes.
Warrantless and suspicionless searches of our electronic devices at the border violate the Fourth Amendment, which protects us against unreasonable searches and seizures – including at the border. Border officers do have authority to search our belongings for contraband or illegal items, but mobile electronic devices are unlike any other item officers encounter at the border. For instance, they contain far more personal and revealing information than could be gleaned from a thorough search of a person’s home, which requires a warrant.
These searches also violate the First Amendment. People will self-censor and avoid expressing dissent if they know that returning to the United States means that border officers can read and retain what they say privately, or see what topics they searched online. Similarly, journalists will avoid reporting on issues that the U.S. government may have an interest in, or that may place them in contact with sensitive sources.
Our clients’ experiences demonstrate the intrusiveness of device searches at the border and the emotional toll they exact. For instance, Zainab Merchant and Nadia Alasaad both wear headscarves in public for religious reasons, and their smartphones contained photos of themselves without headscarves that they did not want border officers to see. Officers searched the phones nonetheless. On another occasion, a border officer searched Ms. Merchant’s phone even though she repeatedly told the officer that it contained attorney-client privileged communications. After repeated searches of his electronic devices, Isma’il Kushkush, a journalist, felt worried that he was being targeted because of his reporting, and he questioned whether to continue covering issues overseas.
Crossing the U.S. border shouldn’t mean facing the prospect of turning over years of emails, photos, location data, medical and financial information, browsing history, or other personal information on our mobile devices. That’s why we’re asking a federal court to rule that border agencies must do what any other law enforcement agency would have to do in order to search electronic devices: get a warrant.
May 1, 2019
China: How Mass Surveillance Works in Xinjiang
‘Reverse Engineering’ Police App Reveals Profiling, Monitoring Strategies
Since late 2016, the Chinese government has subjected the 13 million ethnic Uyghurs and other Turkic Muslims in Xinjiang to mass arbitrary detention, forced political indoctrination, restrictions on movement, and religious oppression. Credible estimates indicate that under this heightened repression, up to one million people are being held in “political education” camps. The government’s “Strike Hard Campaign against Violent Terrorism” (Strike Hard Campaign, 严厉打击暴力恐怖活动专项行动) has turned Xinjiang into one of China’s major centers for using innovative technologies for social control.
“Our research shows, for the first time, that Xinjiang police are using illegally gathered information about people’s completely lawful behavior – and using it against them.”
This report provides a detailed description and analysis of a mobile app that police and other officials use to communicate with the Integrated Joint Operations Platform (IJOP, 一体化联合作战平台), one of the main systems Chinese authorities use for mass surveillance in Xinjiang. Human Rights Watch first reported on the IJOP in February 2018, noting the policing program aggregates data about people and flags to officials those it deems potentially threatening; some of those targeted are detained and sent to political education camps and other facilities. But by “reverse engineering” this mobile app, we now know specifically the kinds of behaviors and people this mass surveillance system targets.
The findings have broader significance, providing an unprecedented window into how mass surveillance actually works in Xinjiang, because the IJOP system is central to a larger ecosystem of social monitoring and control in the region. They also shed light on how mass surveillance functions in China. While Xinjiang’s systems are particularly intrusive, their basic designs are similar to those the police are planning and implementing throughout China.
Many—perhaps all—of the mass surveillance practices described in this report appear to be contrary to Chinese law. They violate the internationally guaranteed rights to privacy, to be presumed innocent until proven guilty, and to freedom of association and movement. Their impact on other rights, such as freedom of expression and religion, is profound.
Human Rights Watch finds that officials use the IJOP app to fulfill three broad functions: collecting personal information, reporting on activities or circumstances deemed suspicious, and prompting investigations of people the system flags as problematic.
Analysis of the IJOP app reveals that authorities are collecting massive amounts of personal information—from the color of a person’s car to their height down to the precise centimeter—and feeding it into the IJOP central system, linking that data to the person’s national identification card number. Our analysis also shows that Xinjiang authorities consider many forms of lawful, everyday, non-violent behavior—such as “not socializing with neighbors, often avoiding using the front door”—as suspicious. The app also labels the use of 51 network tools as suspicious, including many Virtual Private Networks (VPNs) and encrypted communication tools, such as WhatsApp and Viber.
The IJOP app demonstrates that Chinese authorities consider certain peaceful religious activities suspicious, such as donating to mosques or preaching the Quran without authorization. But most of the other behaviors the app considers problematic are ethnicity- and religion-neutral. Our findings suggest the IJOP system surveils and collects data on everyone in Xinjiang. The system tracks people’s movements by monitoring the “trajectory” and location data of their phones, ID cards, and vehicles; it also monitors everybody’s electricity use and visits to gas stations across the region. This is consistent with Xinjiang local government statements emphasizing that officials must collect data for the IJOP system in a “comprehensive manner” from “everyone in every household.”
When the IJOP system detects irregularities or deviations from what it considers normal, such as when people are using a phone that is not registered to them, when they use more electricity than “normal,” or when they leave the area in which they are registered to live without police permission, the system flags these “micro-clues” to the authorities as suspicious and prompts an investigation.
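The detection-and-flag loop described above amounts to simple rule checks against a person’s recorded baseline. A minimal sketch in Python of what such “micro-clue” flagging logically looks like (every field name, threshold, and rule here is a hypothetical illustration built from the report’s examples, not code from the app itself):

```python
# Hypothetical illustration of rule-based "micro-clue" flagging, modeled
# on the report's examples (unregistered phone, abnormal electricity use,
# leaving one's registered area). Field names and thresholds are invented.

def flag_micro_clues(person, observation):
    """Return the list of deviations a rule-based system would flag."""
    flags = []
    if observation["phone_id"] not in person["registered_phones"]:
        flags.append("unregistered phone in use")
    if observation["electricity_kwh"] > 1.5 * person["baseline_kwh"]:
        flags.append("electricity use above normal")
    if (observation["location"] != person["registered_locale"]
            and not person["travel_permit"]):
        flags.append("left registered area without permission")
    return flags

person = {
    "registered_phones": {"phone-A"},
    "baseline_kwh": 200,
    "registered_locale": "district-1",
    "travel_permit": False,
}
observation = {
    "phone_id": "phone-B",
    "electricity_kwh": 350,
    "location": "district-2",
}
flags = flag_micro_clues(person, observation)  # all three rules trip here
```

The point of the sketch is how little “intelligence” is involved: each flag is a mechanical comparison against a baseline, yet the report documents such flags prompting interrogations and detention.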
Another key element of the IJOP system is the monitoring of personal relationships. Authorities seem to consider some of these relationships inherently suspicious. For example, the IJOP app instructs officers to investigate people who are related to people who have obtained a new phone number or who have foreign links.
The authorities have sought to justify mass surveillance in Xinjiang as a means to fight terrorism. While the app instructs officials to check for “terrorism” and “violent audio-visual content” when conducting phone and software checks, these terms are broadly defined under Chinese laws. It also instructs officials to watch out for “adherents of Wahhabism,” a term suggesting an ultra-conservative form of Islamic belief, and “families of those…who detonated [devices] and killed themselves.” But many—if not most—behaviors the IJOP system pays special attention to have no clear relationship to terrorism or extremism. Our analysis of the IJOP system suggests that gathering information to counter genuine terrorism or extremist violence is not a central goal of the system.
The app also scores government officials on their performance in fulfilling tasks and is a tool for higher-level supervisors to assign tasks to, and keep tabs on the performance of, lower-level officials. The IJOP app, in part, aims to control government officials to ensure that they are efficiently carrying out the government’s repressive orders.
In creating the IJOP system, the Chinese government has benefitted from Chinese companies that provide it with technologies. While the Chinese government has primary responsibility for the human rights violations taking place in Xinjiang, these companies also have a responsibility under international law to respect human rights, avoid complicity in abuses, and adequately remedy them when they occur.
As detailed below, the IJOP system and some of the region’s checkpoints work together to form a series of invisible or virtual fences. Authorities describe them as a series of “filters” or “sieves” throughout the region, sifting out undesirable elements. Depending on the level of threat authorities perceive, as determined by factors programmed into the IJOP system, individuals’ freedom of movement is restricted to different degrees. Some are held captive in Xinjiang’s prisons and political education camps; others are subjected to house arrest, not allowed to leave their registered locales, not allowed to enter public places, or not allowed to leave China.
Government control over movement in Xinjiang today bears similarities to the Mao Zedong era (1949-1976), when people were restricted to where they were registered to live and police could detain anyone for venturing outside their locales. After economic liberalization was launched in 1979, most of these controls became largely obsolete. However, Xinjiang’s modern police state—which uses a combination of technological systems and administrative controls—empowers the authorities to reimpose a Mao-era degree of control, but in a graded manner that also meets the economy’s demands for largely free movement of labor.
The intrusive, massive collection of personal information through the IJOP app helps explain reports by Turkic Muslims in Xinjiang that government officials have asked them or their family members a bewildering array of personal questions. When government agents conduct intrusive visits to Muslims’ homes and offices, for example, they typically ask whether the residents own exercise equipment and how they communicate with families who live abroad; it appears that such officials are fulfilling requirements sent to them through apps such as the IJOP app. The IJOP app does not require government officials to inform the people whose daily lives are pored over and logged of the purpose of such intrusive data collection, or of how their information is being used or stored, much less obtain consent for it.
A checkpoint in Turpan, Xinjiang. Some of Xinjiang’s checkpoints are equipped with special machines that, in addition to recognizing people through their ID cards or facial recognition, are also vacuuming up people’s identifying information from their electronic devices. © 2018 Darren Byler
The Strike Hard Campaign has shown complete disregard for the rights of Turkic Muslims to be presumed innocent until proven guilty. In Xinjiang, authorities have created a system that considers individuals suspicious based on broad and dubious criteria, and then generates lists of people to be evaluated by officials for detention. Official documents state that individuals “who ought to be taken, should be taken,” suggesting the goal is to maximize the number of people they find “untrustworthy” in detention. Such people are then subjected to police interrogation without basic procedural protections. They have no right to legal counsel, and some are subjected to torture and mistreatment, for which they have no effective redress, as we have documented in our September 2018 report. The result is Chinese authorities, bolstered by technology, arbitrarily and indefinitely detaining Turkic Muslims in Xinjiang en masse for actions and behavior that are not crimes under Chinese law.
And yet Chinese authorities continue to make wildly inaccurate claims that their “sophisticated” systems are keeping Xinjiang safe by “targeting” terrorists “with precision.” In China, the lack of an independent judiciary and free press, coupled with fierce government hostility to independent civil society organizations, means there is no way to hold the government or participating businesses accountable for their actions, including for the devastating consequences these systems inflict on people’s lives.
The Chinese government should immediately shut down the IJOP and delete all the data it has collected from individuals in Xinjiang. It should cease the Strike Hard Campaign, including all compulsory programs aimed at surveilling and controlling Turkic Muslims. All those held in political education camps should be unconditionally released and the camps shut down. The government should also investigate Party Secretary Chen Quanguo and other senior officials implicated in human rights abuses, including violating privacy rights, and grant access to Xinjiang, as requested by the Office of the United Nations High Commissioner for Human Rights and UN human rights experts.
Concerned foreign governments should impose targeted sanctions, such as the US Global Magnitsky Act, including visa bans and asset freezes, against Party Secretary Chen and other senior officials linked to abuses in the Strike Hard Campaign. They should also impose appropriate export control mechanisms to prevent the Chinese government from obtaining technologies used to violate basic rights.
Why WhatsApp Will Never Be Secure
Pavel Durov – May 15, 2019
The world seems to be shocked by the news that WhatsApp turned any phone into spyware. Everything on your phone, including photos, emails and texts, was accessible by attackers just because you had WhatsApp installed.
This news didn’t surprise me though. Last year WhatsApp had to admit they had a very similar issue – a single video call via WhatsApp was all a hacker needed to get access to your phone’s entire data.
Every time WhatsApp has to fix a critical vulnerability in their app, a new one seems to appear in its place. All of their security issues are conveniently suitable for surveillance, and look and work a lot like backdoors.
Unlike Telegram, WhatsApp is not open source, so there’s no way for a security researcher to easily check whether there are backdoors in its code. Not only does WhatsApp not publish its code, they do the exact opposite: WhatsApp deliberately obfuscates their apps’ binaries to make sure no one is able to study them thoroughly.
WhatsApp and its parent company Facebook may even be required to implement backdoors – via secret processes such as the FBI’s gag orders. It’s not easy to run a secure communication app from the US. A week our team spent in the US in 2016 prompted 3 infiltration attempts by the FBI. Imagine what 10 years in that environment can bring upon a US-based company.
I understand security agencies justify planting backdoors as anti-terror efforts. The problem is such backdoors can also be used by criminals and authoritarian governments. No wonder dictators seem to love WhatsApp. Its lack of security allows them to spy on their own people, so WhatsApp continues being freely available in places like Russia or Iran, where Telegram is banned by the authorities.
As a matter of fact, I started working on Telegram as a direct response to personal pressure from the Russian authorities. Back then, in 2012, WhatsApp was still transferring messages in plain text in transit. That was insane. Not just governments or hackers, but mobile providers and wifi admins had access to all WhatsApp texts.
Later WhatsApp added some encryption, which quickly turned out to be a marketing ploy: the key to decrypt messages was available to at least several governments, including the Russians. Then, as Telegram started to gain popularity, WhatsApp founders sold their company to Facebook and declared that “privacy was in their DNA”. If true, it must have been a dormant or a recessive gene.
3 years ago WhatsApp announced they had implemented end-to-end encryption so that “no third party can access messages”. That coincided with an aggressive push for all of its users to back up their chats in the cloud. When making this push, WhatsApp didn’t tell its users that backed-up messages are no longer protected by end-to-end encryption and can be accessed by hackers and law enforcement. Brilliant marketing, and some naive people are serving their time in jail as a result.
Those resilient enough not to fall for constant popups telling them to back up their chats can still be traced by a number of tricks – from accessing their contacts’ backups to invisible encryption key changes. The metadata generated by WhatsApp users – logs describing who chats with whom and when – is leaked to all kinds of agencies in large volumes by WhatsApp’s mother company. On top of this, you have a mix of critical vulnerabilities succeeding one another.
WhatsApp has a consistent history – from zero encryption at its inception to a succession of security issues strangely suitable for surveillance purposes. Looking back, there hasn’t been a single day in WhatsApp’s 10-year journey when this service was secure. That’s why I don’t think that just updating WhatsApp’s mobile app will make it secure for anyone. For WhatsApp to become a privacy-oriented service, it has to risk losing entire markets and clashing with authorities in its home country. They don’t seem to be ready for that.
Last year, the founders of WhatsApp left the company due to concerns over users’ privacy. They are likely bound by gag orders or NDAs, and so are unable to discuss backdoors publicly without risking their fortunes and freedom. They were able to admit, however, that “they sold their users’ privacy”.
I can understand the reluctance of the WhatsApp founders to provide more detail – it’s not easy to put your comfort at risk. Several years ago I had to leave my country after refusing to comply with government-sanctioned privacy breaches of VK users. It was not pleasant. But would I do something like this again? Gladly. Every one of us is going to die eventually, but we as a species will stick around for a while. That’s why I think accumulating money, fame or power is irrelevant. Serving humanity is the only thing that really matters in the long run.
And yet, despite our intentions, I feel we let humanity down in this whole WhatsApp spyware story. A lot of people can’t stop using WhatsApp, because their friends and family are still on it. It means we at Telegram did a bad job of persuading people to switch over. While we did attract hundreds of millions of users in the last five years, this wasn’t enough. The majority of internet users are still held hostage by the Facebook/WhatsApp/Instagram empire. Many of those who use Telegram are also on WhatsApp, meaning their phones are still vulnerable. Even those who ditched WhatsApp completely are probably using Facebook or Instagram, both of which think it’s OK to store your passwords in plaintext (I still can’t believe a tech company could do something like this and get away with it).
In almost 6 years of its existence, Telegram hasn’t had any major data leak or security flaw of the kind WhatsApp demonstrates every few months. In the same 6 years, we disclosed exactly zero bytes of data to third parties, while Facebook/WhatsApp has been sharing pretty much everything with everybody who claimed they worked for a government.
Few people outside the Telegram fan community realize that most of the new features in messaging appear on Telegram first, and are then carbon-copied by WhatsApp down to the tiniest details. More recently we have been witnessing Facebook’s attempt to borrow Telegram’s entire philosophy, with Zuckerberg suddenly declaring the importance of privacy and speed, practically citing Telegram’s app description word for word in his F8 speech.
But whining about FB’s hypocrisy and lack of creativity won’t help. We have to admit Facebook is executing an efficient strategy. Look what they did to Snapchat.
We at Telegram have to acknowledge our responsibility in shaping the future. It’s either us or the Facebook monopoly. It’s either freedom and privacy or greed and hypocrisy. Our team has been competing with Facebook for the last 13 years. We already beat them once, in the Eastern European social networking market. We will beat them again in the global messaging market. We have to.
It won't be easy. The Facebook marketing department is huge. We at Telegram, however, do zero marketing. We don’t want to pay journalists and researchers to tell the world about Telegram. For that, we rely on you – the millions of our users. If you like Telegram enough, you will tell your friends about it. And if every Telegram user persuades 3 of their friends to delete WhatsApp and permanently move to Telegram, Telegram will already be more popular than WhatsApp.
The age of greed and hypocrisy will end. An era of freedom and privacy will begin. It is much closer than it seems.
Business Insider: WhatsApp was hacked and attackers installed spyware on people’s phones – May 15, 2019
Security Today: WhatsApp Bug Allowed Hackers to Hijack Accounts – October 12, 2018
Wikipedia: Gag order – United States
Neowin: FBI asked Durov and developer for Telegram backdoor – September 19, 2017
The Baffler: The Crypto-Keepers – September 17, 2017
New York Times: What Is Telegram, and Why Are Iran and Russia Trying to Ban It? – May 2, 2018
YourDailyMac: Whatsapp leaks usernames, telephone numbers and messages – May 19, 2011
The H Security: Sniffer tool displays other people's WhatsApp messages – May 13, 2012
FilePerms: WhatsApp is broken, really broken – September 12, 2012
International Business Times: Respect for Privacy Is Coded Into WhatsApp's DNA: Founder Jan Koum – March 18, 2014
Slate: How Did the FBI Access Paul Manafort’s Encrypted Messages? – June 5, 2018
AppleInsider: WhatsApp backdoor defeats end-to-end encryption, potentially allows Facebook to read messages – January 13, 2017
Forbes: Forget About Backdoors, This Is The Data WhatsApp Actually Hands To Cops – January 22, 2017
New York Times: Facebook Said to Create Censorship Tool to Get Back Into China – November 22, 2016
The Verge: WhatsApp co-founder Jan Koum is leaving Facebook after clashing over data privacy – April 30, 2018
CNET: WhatsApp co-founder: 'I sold my users' privacy' with Facebook acquisition – September 25, 2018
New York Times: Once celebrated in Russia, programmer Pavel Durov chooses exile – December 2, 2014
TechCrunch: Facebook admits it stored ‘hundreds of millions’ of account passwords in plaintext – March 21, 2019
Engadget: Facebook stored millions of Instagram passwords in plain text – April 18, 2019
Vanity Fair: Snapchat is doing so badly, the feds are getting involved – November 14, 2018
HuffPost: Vkontakte, Facebook Competitor In Russia, Dominates – October 26, 2012
Subscribe to Hidden Forces and gain access to the episode overtime, transcript, and show rundown here: http://hiddenforces.io/subscribe
In Episode 79 of Hidden Forces, Demetri Kofinas speaks with Shoshana Zuboff about the rise of “Surveillance Capitalism,” a pernicious new economic logic that robs us of our experiences, dispossesses us of our sanctuaries, and makes our lives increasingly unlivable.
In 1609, while searching for a rumored northeast passage to Asia on behalf of the Dutch East India Company, the English explorer and navigator Henry Hudson landed at what is modern-day New York City.
Grizzly » Thu Jun 06, 2019 7:15 pm wrote: Schools Are Deploying Massive Digital Surveillance Systems. The Results Are Alarming
Then there are the alerts generated by vague messages between friends. How is a school district supposed to respond when one student writes to another, “Tomorrow it will all be over”?
https://www.bnnbloomberg.ca/the-future- ... -1.1270598
The future will be recorded, on your smart speaker
Here’s Amazon’s solution: Alexa already stores what it hears in a buffer. Under the new configuration, according to the application, once Alexa detects a wake word, “the device will go backwards through the audio in the buffer to determine the start of the utterance that includes the wakeword.” After finding what it scores as the most likely start of the command, Alexa will perform a similar calculation to find the end. The command will then be processed exactly like one that was preceded by the wake word.
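Translated into code, the mechanism in the application is a rolling audio buffer plus a backwards scan from the wake word. The following is a toy sketch of that idea, not Amazon’s implementation; the frame format, the silence-based boundary heuristic, and the buffer length are all assumptions:

```python
from collections import deque

BUFFER_SECONDS = 10  # assumed length of the rolling buffer
FRAME_MS = 20        # audio arrives in short fixed-size frames

class BufferedListener:
    """Toy model of a wake-word device: keep a rolling audio buffer and,
    on detecting the wake word, scan backwards to find the start of the
    utterance that contains it, then forwards to find its end."""

    def __init__(self):
        max_frames = BUFFER_SECONDS * 1000 // FRAME_MS
        self.buffer = deque(maxlen=max_frames)  # oldest frames drop off

    def feed(self, frame):
        self.buffer.append(frame)

    def extract_command(self, wake_index):
        """Walk backwards from the wake word to the most likely start of
        the utterance (here crudely: the last silent frame), then
        forwards to its end, and return that whole span for processing."""
        frames = list(self.buffer)
        start = wake_index
        while start > 0 and not frames[start - 1]["silence"]:
            start -= 1
        end = wake_index
        while end < len(frames) - 1 and not frames[end + 1]["silence"]:
            end += 1
        return frames[start:end + 1]

# Usage: frames here are dicts {"audio": ..., "silence": bool}
listener = BufferedListener()
for i, silent in enumerate([True, False, False, False, True]):
    listener.feed({"audio": i, "silence": silent})
# Suppose the wake word was detected in frame 2: the extracted command
# spans the surrounding non-silent frames 1 through 3.
command = listener.extract_command(2)
```

In a real device the “most likely start” would be scored by an acoustic model rather than a simple silence test, but the buffer-then-look-backwards structure is the one the patent application describes.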
It all makes a great deal of sense. Why then the concern? It seems to me that there are two potential issues.
One is a worry about what happens to the information from the audio buffer. Alexa currently retains recordings for a period of time, helping it model the user’s needs and wants. This feature, which can be partially disabled, has already caused privacy problems. Courts have issued subpoenas for Alexa recordings. And as Bloomberg News has reported, human beings at Amazon already listen to much of what Alexa hears, in an effort to improve the algorithm. But no always-on feature was necessary for those recordings to survive.
The second concern might be that an Alexa which listens more closely, responding to natural language commands, will soon become an Alexa that fades into the background. The relative formality with which the device must be addressed serves as a reminder that we are addressing just that — a device. The more casually we can speak, the more casual we will likely be about using it. We might, quite literally, forget that Alexa is there. Only the consumer can decide whether that is a feature or a bug.
Amazon says that it has no current plan to change the way Alexa listens, but bear in mind that the always-on feature can be implemented whether or not it is ever patented. In other words, if the notion of a device that is always awake worries you, the fact that a patent application has been filed shouldn’t cause you to worry more. Any device that listens to you can already be made always-on. (Including, by the way, your smartphone.)
We accept that our laptops and smart televisions are recording the choices we make and sending them we know not where. The only reason we imagine that our spoken words are safe is that speech is an older, more instinctive technology. We still think of speech as special, a distinctively human function, and when we are in spaces we consider private, we consider our voice as something heard only by our most intimate and trusted acquaintances.
But to the computers that now surround us, speech is just another form of data. The various voice-commanded devices of today, whether in our homes, smartphones or cars, work just like keyboards or touchscreens. The only difference is that the human input is a voice. And the only way they can get that input is to listen for it.
So let’s calm down. Yes, it can be fun to imagine a future in which our homes are entirely connected and yet we’re able to keep private everything we want to keep private. But that ship sailed long before Amazon decided to seek a patent on a minor and welcome change to Alexa.
Sharon Weinberger commends a book on how a film inspired the United States to develop technology to capture everyone’s every move.
Eyes in the Sky: The Secret Rise of Gorgon Stare and How It Will Watch Us All. Arthur Holland Michel. Houghton Mifflin Harcourt (2019).
In the 1998 Hollywood thriller Enemy of the State, an innocent man (played by Will Smith) is pursued by a rogue spy agency that uses the advanced satellite “Big Daddy” to monitor his every move. The film — released 15 years before Edward Snowden blew the whistle on a global surveillance complex — has achieved a cult following.
It was, however, much more than just prescient: it was also an inspiration, even a blueprint, for one of the most powerful surveillance technologies ever created. So contends technology writer and researcher Arthur Holland Michel in his compelling book Eyes in the Sky. He notes that a researcher (unnamed) at the Lawrence Livermore National Laboratory in California who saw the movie at its debut decided to “explore — theoretically, at first — how emerging digital-imaging technology could be affixed to a satellite” to craft something like Big Daddy, despite the “nightmare scenario” it unleashes in the film. Holland Michel repeatedly notes this contradiction between military scientists’ good intentions and a technology based on a dystopian Hollywood plot.
He traces the development of that technology, called wide-area motion imagery (WAMI, pronounced ‘whammy’), by the US military from 2001. A camera on steroids, WAMI can capture images of large areas, in some cases an entire city. The technology got its big break after 2003, in the chaotic period following the US-led invasion of Iraq, where home-made bombs — improvised explosive devices (IEDs) — became the leading killer of US and coalition troops. Defence officials began to call for a Manhattan Project to spot and tackle the devices.
In 2006, the cinematically inspired research was picked up by DARPA, the Defense Advanced Research Projects Agency, which is tasked with US military innovation (D. Kaiser Nature 543, 176–177; 2017). DARPA funded the building of an aircraft-mounted camera with a capacity of almost two billion pixels. The Air Force had dubbed the project Gorgon Stare, after the monsters of penetrating gaze from classical Greek mythology, whose horrifying appearance turned observers to stone. (DARPA called its programme Argus, after another mythical creature: a giant with 100 eyes.)
Some books use blockbuster action films to demonstrate — or exaggerate — a technology’s terrifying potential. Here, Enemy of the State shows up repeatedly because it is integral to the development of Gorgon Stare. Researchers play clips from it in their briefings; they compare their technology to Big Daddy (although their camera is so far only on aircraft, not a satellite). At one point, incredibly, they consult the company responsible for the movie’s aerial filming. (It set me wondering — which government lab out there is currently building the Death Star from Star Wars?)
[Image: A camera on an MQ-9 Reaper remotely piloted aircraft with surveillance system attached. Credit: A1c Aaron Montoya/Planet Pix via ZUMA]
Holland Michel’s book is not the first to look at technologies intended to achieve omniscience, but it is among the best. Writers examining the intersection of technology and privacy often repeat well-worn tropes, claiming that every novelty is the new Big Brother. But Eyes in the Sky is that rare creature: a deeply reported and deftly written investigation that seeks to understand both the implications of a technology and the motivations of its creators. Holland Michel notes tensions between security and privacy without hyping them.
Masters of war
And he gets those responsible for building WAMI to speak to him candidly — sometimes shockingly so. Take, for example, the former US military officer who touts the ‘benefits’ of the colonial subjugation of India (which he bizarrely claims created order among the country’s ethnic groups) to justify mass surveillance in the United States.
This potential for domestic mass surveillance becomes a key point. As the story proceeds, WAMI’s creators start looking for ways to use the battlefield technology at home: having built a new hammer, they search for more nails. Here, the story takes an even more dystopian turn. John Arnold, “a media-shy billionaire”, uses his own money to help secretly deploy a WAMI system to assist the police in tracking suspects in crime-ridden Baltimore, Maryland. Arnold, who has funded other “new crime-fighting technologies”, first learnt about WAMI’s use overseas from a podcast, and decided to debut it stateside. “Even the mayor was kept in the dark,” Holland Michel writes.
Is this our future? A world in which billionaires fund the police to record entire cities from above? That plot twist is less Enemy of the State than Batman, although it’s hard to know who the hero is. (At least the fictional Big Daddy was funded by Congress, even if its supporters had to kill one stubborn lawmaker to get the job done.) It’s enough to make us all reach for tinfoil hats, which could come in handy to block what Holland Michel warns is coming next: infrared imaging that can detect people inside their homes. WAMI, if deployed above your city, already has the capacity to track your daily commute and errands, and allow those watching to retrace your steps for days or weeks.
To his credit, Holland Michel’s interviews with surveillance technologists are reported with context but without commentary, allowing readers to draw their own conclusions. In one understated episode, he reveals that — after the Baltimore project was exposed — the owner of the company that built and deployed the WAMI system there had “personally” provided gifts to a community organizer. The organizer was working to convince Baltimore residents that a sky-borne Big Brother might be in their interests.
One unanswered, and perhaps unanswerable, question is how successful WAMI was at its original purpose: preventing insurgent bomb attacks in Iraq and Afghanistan. Holland Michel isn’t sure, because the answer is classified. Although investment in WAMI is “furious and ongoing”, he notes, “the Air Force declined repeated requests for even an approximate indication of WAMI’s impact on the battlefield”.
What we do know is that Afghanistan, one of the most surveilled countries on Earth, is slipping further into chaos. That can’t be blamed on WAMI, but it does indicate that the tech is not today’s Manhattan Project.
There are other questions. By focusing on a specific technology, does Holland Michel miss a bigger picture? Is the more serious threat the access of governments and corporations to our electronic devices? The answer to both is no, because he also traces how meshing WAMI with other sensors, including those on smartphones, will eventually create “a fully fused city” where “there may be nowhere to hide”. In the end, Eyes in the Sky transcends its title by using Gorgon Stare as a window into our future. And that is bleak.
When Gorgon Stare was completed, Michael Meermans, an executive at Sierra Nevada (the company in Sparks, Nevada, that built it), asked himself rhetorically whether the task was over. Of course not. “When it comes to the world of actually collecting information and creating knowledge,” Meermans says, “you can never stop.”
The Corbett Report
Episode 358 - The 5G Dragnet
Telecom companies are currently scrambling to implement fifth-generation cellular network technology. But the world of 5G is a world where all objects are wired and constantly communicating data to one another. The dark truth is that the development of 5G networks and of the various networked products that they will give rise to in the global smart-city infrastructure represents the greatest threat to freedom in the history of humanity.
It is heartening to see that the health effects of the Extremely High Frequency radiation emitted by 5G transmitters are finally starting to break through to the public consciousness. But if we concentrate solely on the health effects of 5G, we risk falling into a trap. If the only danger of 5G were the danger to our health, then resistance would end the moment the technology’s safety could be demonstrated to the public, or an equivalent, less harmful technology deployed.
To concentrate solely on the health effects of 5G is to miss the broader picture of total surveillance in the technocratic dystopia that this technology enables. In this picture, the 5G network is a platform for a system in which every action, every transaction, every interaction that we have in our daily life is monitored, data-based and analyzed in real time.
The tech giants use our data not only to predict our behaviour but to change it. But we can resist this attack on democracy
In a BBC interview last week, Facebook’s vice-president, Nick Clegg, surprised viewers by calling for new “rules of the road” on privacy, data collection and other company practices that have attracted heavy criticism during the past year. “It’s not for private companies … to come up with those rules,” he insisted. “It is for democratic politicians in the democratic world to do so.”
Facebook’s response would be to adopt a “mature role”, not “shunning” but “advocating” the new rules. For a company that has fiercely resisted new laws, Clegg’s message aimed to persuade us that the page had turned. Yet his remarks sounded like Newspeak, as if to obscure ugly facts.
A few weeks earlier Facebook’s chiefs, Mark Zuckerberg and Sheryl Sandberg, snubbed a subpoena from the Canadian parliament to appear for questioning. Clegg then showcased Silicon Valley’s standard defence against the rule of law – warning that any restrictions resulting from “tech-lash” risked making it “almost impossible for tech to innovate properly”, and summoning the spectre of Chinese ascendance. “I can predict that … we will have tech domination from a country with wholly different sets of values.”
Both Facebook and Google have long relied on this misguided formula to shield them from law. In 2011, the former Google CEO Eric Schmidt warned that government overreach would foolishly constrain innovation: “We’ll move much faster than any government.” Then, in 2013, Google co-founder Larry Page complained that “old institutions like the law” impede the company’s freedom to “build really great things”. This rhetoric is a hand-me-down from another era, when “Gilded Age” barons in the late-19th-century United States insisted that there was no need for law when one had the “law of evolution”, the “laws of capital” and the “laws of industrial society”. As the historian David Nasaw put it, the millionaires preached that “democracy had its limits, beyond which voters and their elected representatives dared not trespass lest economic calamity befall the nation”.
Surveillance capitalism is an economic logic that has hijacked the digital for its own purposes
The tech companies’ innovation rhetoric effectively blinded users and lawmakers for many years. Facebook and Google were regarded as innovative companies that sometimes made dreadful mistakes at the expense of our privacy. Since then the picture has sharpened. It’s easier to see that what we thought of as mistakes actually were the innovations – Google Glass, Facebook giving private information to developers, and more. Each of these was an expression of a larger breakthrough: the invention of what I call surveillance capitalism.
Surveillance capitalism is not the same as digital technology. It is an economic logic that has hijacked the digital for its own purposes. The logic of surveillance capitalism begins with unilaterally claiming private human experience as free raw material for production and sales. It wants your walk in the park, online browsing and communications, hunt for a parking space, voice at the breakfast table …
These experiences are translated into behavioural data. Some of this data may be applied to product or service improvements, and the rest is valued for its predictive power. These flows of predictive data are fed into computational products that predict human behaviour. A leaked Facebook document in 2018 describes its machine-learning system that “ingests trillions of data points every day” and produces “more than 6m predictions per second”. Finally, these prediction products are sold to business customers in markets that trade in human futures.
This economic logic was first invented at Google in the context of online targeted ads where the “clickthrough rate” was the first globally successful prediction product, and targeted ad markets were the first markets to specialise in human futures. During the first years of discovery and invention from 2000 to 2004, Google’s revenues increased by 3,590%. Right from the start it was understood that the only way to protect these revenues was to hide the operations that produce them, keeping “users” in the dark with practices designed to be undetectable and indecipherable.
Surveillance capitalism migrated to Facebook, Microsoft and Amazon – and became the default option in most of the tech sector. It now advances across the economy from insurance, to retail, finance, health, education and more, including every “smart” product and “personalised” service.
Markets in human futures compete on the quality of predictions. This competition to sell certainty produces the economic imperatives that drive business practices. Ultimately, it has become clear that the most predictive data comes from intervening in our lives to tune and herd our behaviour towards the most profitable outcomes. Data scientists describe this as a shift from monitoring to actuation. The idea is not only to know our behaviour but also to shape it in ways that can turn predictions into guarantees. It is no longer enough to automate information flows about us; the goal now is to automate us. As one data scientist explained to me: “We can engineer the context around a particular behaviour and force change that way … We are learning how to write the music, and then we let the music make them dance.”
These economic imperatives erode democracy from below and from above. At the grassroots, systems are designed to evade individual awareness, undermining human agency, eliminating decision rights, diminishing autonomy and depriving us of the right to combat. The big picture reveals extreme concentrations of knowledge and power. Surveillance capitalists know everything about us, but we know little about them. Their knowledge is used for others’ interests, not our own.

Surveillance capitalism thrives in the absence of law. In a way, this is good news. We have not failed to rein in this rogue capitalism; we’ve not yet tried. More good news: our societies successfully confronted destructive forms of capitalism in the past, asserting new laws that tethered capitalism to the real needs of people. Democracy ended the Gilded Age. We have every reason to believe that we can be successful again.
The next great regulatory vision is likely to be framed by warriors for a democracy under threat: lawmakers, citizens and specialists, allied in the knowledge that only democracy can impose the people’s interests through law and regulation. The question is, what kind of regulation? Are existing approaches to privacy and antitrust law the answer? Both are critical but neither is adequate.
One example is privacy law’s call for “data ownership”. It’s a misleading notion because it legitimates the unilateral taking of human experience – your face, your phone, your refrigerator, your emotions – for translation into data in the first place. Even if we achieve “ownership” of the data we have provided to a company like Facebook, we will not achieve “ownership” of the predictions gleaned from it, or the fate of those products in its prediction markets. Data ownership is an individual solution when collective solutions are required. We will never own those 6m predictions produced each second. Surveillance capitalists know this. Clegg knows this. That is why they can tolerate discussions of “data ownership” and publicly invite privacy regulation.
What should lawmakers do? First, interrupt and outlaw surveillance capitalism’s data supplies and revenue flows. This means, at the front end, outlawing the secret theft of private experience. At the back end, we can disrupt revenues by outlawing markets that trade in human futures knowing that their imperatives are fundamentally anti-democratic. We already outlaw markets that traffic in slavery or human organs.
Second, research over the past decade suggests that when “users” are informed of surveillance capitalism’s backstage operations, they want protection, and they want alternatives. We need laws and regulation designed to advantage companies that want to break with surveillance capitalism. Competitors that align themselves with the actual needs of people and the norms of a market democracy are likely to attract just about every person on Earth as their customer.
Third, lawmakers will need to support new forms of collective action, just as nearly a century ago workers won legal protection for their rights to organise, to bargain collectively and to strike. Lawmakers need citizen support, and citizens need the leadership of their elected officials.
Surveillance capitalists are rich and powerful, but they are not invulnerable. They fear law. They fear lawmakers. They fear citizens who insist on a different path. Both groups are bound together in the work of rescuing the digital future for democracy. Mr Clegg, be careful what you wish for.
• Shoshana Zuboff is an academic and the author of The Age of Surveillance Capitalism