... and why the fuck should we believe them... ?
James Clapper: ‘I Didn’t Lie’ to Congress About NSA Surveillance, I ‘Simply Didn’t Understand’ the Question
Former Director of National Intelligence James Clapper said he did not lie about mass domestic surveillance programs when he testified to Congress in 2013, but rather that he made a mistake and did not understand which specific program was being asked about.
CNN’s John Berman asked for Clapper’s reaction to a report by The Intercept’s Glenn Greenwald: “The very first NSA program we revealed from Snowden documents, the mass domestic spying program of Americans’ phone records which James Clapper lied about and Obama insisted was vital to national security has been shut down.”
“Well, the original thought behind this, and this program was put in place as a direct result of 9/11, the point was to be able to track quickly a foreign communicant talking to somebody in this country who may have been plotting a terrorist plot, and was put in place during the Bush Administration for that reason,” Clapper said. “I always regarded it as kind of a safeguard or insurance policy so that if the need came up you would have this to refer to.”
“As far as the comment, the allegation about my lying: I didn’t lie, I made a big mistake and I just simply didn’t understand what I was being asked about. I thought of another surveillance program, Section 702 of the Foreign Intelligence Surveillance Act, when I was being asked about Section 215 of the Patriot Act at the time. I just didn’t understand that,” he continued.
Berman then noted that, according to reporting, no terrorists have been caught using the surveillance program, and asked Clapper whether that suggests it does not work.
“Well, that’s true, and I think probably at the time contemporaneously back 2013 or so when all this broke that we may have oversold it a bit because, you know, we were hard-pressed to point out to a specific case in point,” Clapper admitted. “What this was was just trying to capitalize on the lesson learned from 9/11. I will say that — and I’ve said this publicly many times before, that what this did prove was the need for the intelligence community to have been more transparent.”
Watch above, via CNN.
Israeli espionage operations in the United States have "gone too far," senior U.S. intelligence officials have told Congress in recent weeks, Newsweek reported on Tuesday.
Citing confidential briefings to Congress, Newsweek says Israel’s massive spying is behind the U.S. failure to grant Israelis a visa waiver for entering the country.
We Got U.S. Border Officials to Testify Under Oath. Here’s What We Found Out.
Hugh Handeyside, Senior Staff Attorney, ACLU National Security Project
& Nathan Freed Wessler, Staff Attorney, ACLU Speech, Privacy, and Technology Project
& Esha Bhandari, Staff Attorney, ACLU Speech, Privacy, and Technology Project
April 30, 2019 | 1:45 PM
CBP Officer processes a passenger into the United States at an airport
In September 2017, we, along with the Electronic Frontier Foundation, sued the federal government for its warrantless and suspicionless searches of phones and laptops at airports and other U.S. ports of entry.
The government immediately tried to dismiss our case, arguing that the First and Fourth Amendments do not protect against such searches. But the court ruled that our clients — 10 U.S. citizens and one lawful permanent resident whose phones and laptops were searched while returning to the United States — could move forward with their claims.
Since then, U.S. Customs and Border Protection and U.S. Immigration and Customs Enforcement have had to turn over documents and evidence about why and how they conduct warrantless and suspicionless searches of electronic devices at the border. And their officials have had to sit down with us to explain — under oath — their policies and practices governing such warrantless searches.
What we learned is alarming, and we’re now back in court with this new evidence asking the judge to skip trial altogether and rule for our clients.
The information we uncovered through our lawsuit shows that CBP and ICE are asserting near-unfettered authority to search and seize travelers’ devices at the border, for purposes far afield from the enforcement of immigration and customs laws. The agencies’ policies allow officers to search devices for general law enforcement purposes, such as investigating and enforcing bankruptcy, environmental, and consumer protection laws. The agencies also say that they can search and seize devices for the purpose of compiling “risk assessments” or to advance pre-existing investigations. The policies even allow officers to consider requests from other government agencies to search specific travelers’ devices.
CBP and ICE also say they can search a traveler’s electronic devices to find information about someone else. That means they can search a U.S. citizen’s devices to probe whether that person’s family or friends may be undocumented; the devices of a journalist or scholar with foreign sources who may be of interest to the U.S. government; or the devices of a traveler who is the business partner or colleague of someone under investigation.
Both agencies allow officers to retain information from travelers’ electronic devices and share it with other government entities, including state, local, and foreign law enforcement agencies.
Let’s get one thing clear: The government cannot use the pretext of the “border” to make an end run around the Constitution.
The border is not a lawless place. CBP and ICE are not exempt from the Constitution. And the information on our phones and laptops is no less deserving of constitutional protections than, say, international mail or our homes.
Warrantless and suspicionless searches of our electronic devices at the border violate the Fourth Amendment, which protects us against unreasonable searches and seizures – including at the border. Border officers do have authority to search our belongings for contraband or illegal items, but mobile electronic devices are unlike any other item officers encounter at the border. For instance, they contain far more personal and revealing information than could be gleaned from a thorough search of a person’s home, which requires a warrant.
These searches also violate the First Amendment. People will self-censor and avoid expressing dissent if they know that returning to the United States means that border officers can read and retain what they say privately, or see what topics they searched online. Similarly, journalists will avoid reporting on issues that the U.S. government may have an interest in, or that may place them in contact with sensitive sources.
Our clients’ experiences demonstrate the intrusiveness of device searches at the border and the emotional toll they exact. For instance, Zainab Merchant and Nadia Alasaad both wear headscarves in public for religious reasons, and their smartphones contained photos of themselves without headscarves that they did not want border officers to see. Officers searched the phones nonetheless. On another occasion, a border officer searched Ms. Merchant’s phone even though she repeatedly told the officer that it contained attorney-client privileged communications. After repeated searches of his electronic devices, Isma’il Kushkush, a journalist, felt worried that he was being targeted because of his reporting, and he questioned whether to continue covering issues overseas.
Crossing the U.S. border shouldn’t mean facing the prospect of turning over years of emails, photos, location data, medical and financial information, browsing history, or other personal information on our mobile devices. That’s why we’re asking a federal court to rule that border agencies must do what any other law enforcement agency would have to do in order to search electronic devices: get a warrant.
May 1, 2019
China: How Mass Surveillance Works in Xinjiang
‘Reverse Engineering’ Police App Reveals Profiling, Monitoring Strategies
Since late 2016, the Chinese government has subjected the 13 million ethnic Uyghurs and other Turkic Muslims in Xinjiang to mass arbitrary detention, forced political indoctrination, restrictions on movement, and religious oppression. Credible estimates indicate that under this heightened repression, up to one million people are being held in “political education” camps. The government’s “Strike Hard Campaign against Violent Terrorism” (Strike Hard Campaign, 严厉打击暴力恐怖活动专项行动) has turned Xinjiang into one of China’s major centers for using innovative technologies for social control.
“Our research shows, for the first time, that Xinjiang police are using illegally gathered information about people’s completely lawful behavior – and using it against them.”
This report provides a detailed description and analysis of a mobile app that police and other officials use to communicate with the Integrated Joint Operations Platform (IJOP, 一体化联合作战平台), one of the main systems Chinese authorities use for mass surveillance in Xinjiang. Human Rights Watch first reported on the IJOP in February 2018, noting the policing program aggregates data about people and flags to officials those it deems potentially threatening; some of those targeted are detained and sent to political education camps and other facilities. But by “reverse engineering” this mobile app, we now know specifically the kinds of behaviors and people this mass surveillance system targets.
The findings have broader significance, providing an unprecedented window into how mass surveillance actually works in Xinjiang, because the IJOP system is central to a larger ecosystem of social monitoring and control in the region. They also shed light on how mass surveillance functions in China. While Xinjiang’s systems are particularly intrusive, their basic designs are similar to those the police are planning and implementing throughout China.
Many—perhaps all—of the mass surveillance practices described in this report appear to be contrary to Chinese law. They violate the internationally guaranteed rights to privacy, to be presumed innocent until proven guilty, and to freedom of association and movement. Their impact on other rights, such as freedom of expression and religion, is profound.
Human Rights Watch finds that officials use the IJOP app to fulfill three broad functions: collecting personal information, reporting on activities or circumstances deemed suspicious, and prompting investigations of people the system flags as problematic.
Analysis of the IJOP app reveals that authorities are collecting massive amounts of personal information—from the color of a person’s car to their height down to the precise centimeter—and feeding it into the IJOP central system, linking that data to the person’s national identification card number. Our analysis also shows that Xinjiang authorities consider many forms of lawful, everyday, non-violent behavior—such as “not socializing with neighbors, often avoiding using the front door”—as suspicious. The app also labels the use of 51 network tools as suspicious, including many Virtual Private Networks (VPNs) and encrypted communication tools, such as WhatsApp and Viber.
The IJOP app demonstrates that Chinese authorities consider certain peaceful religious activities as suspicious, such as donating to mosques or preaching the Quran without authorization. But most of the other behavior the app considers problematic is ethnic- and religion-neutral. Our findings suggest the IJOP system surveils and collects data on everyone in Xinjiang. The system is tracking the movement of people by monitoring the “trajectory” and location data of their phones, ID cards, and vehicles; it is also monitoring the electricity use and gas station visits of everybody in the region. This is consistent with Xinjiang local government statements that emphasize officials must collect data for the IJOP system in a “comprehensive manner” from “everyone in every household.”
When the IJOP system detects irregularities or deviations from what it considers normal, such as when people are using a phone that is not registered to them, when they use more electricity than “normal,” or when they leave the area in which they are registered to live without police permission, the system flags these “micro-clues” to the authorities as suspicious and prompts an investigation.
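The report describes this flagging logic only in prose, but its shape is that of a simple rule engine over mundane records. The toy Python sketch below illustrates the idea; every field name and threshold is invented for illustration, and nothing here reflects actual IJOP code:

```python
# Hypothetical illustration of rule-based "micro-clue" flagging of the kind
# the report describes. All field names and thresholds are invented.

def flag_micro_clues(record):
    """Return the 'micro-clues' a rule-based system might raise for one person."""
    clues = []
    # Phone registered to a different ID than the user's own
    if record.get("phone_registered_to") != record.get("id_number"):
        clues.append("unregistered phone")
    # Electricity use exceeding a fixed notion of "normal"
    if record.get("electricity_kwh", 0) > record.get("baseline_kwh", 0) * 1.5:
        clues.append("abnormal electricity use")
    # Leaving the registered locale without recorded permission
    if (record.get("current_locale") != record.get("registered_locale")
            and not record.get("travel_permit", False)):
        clues.append("outside registered locale without permission")
    return clues

example = {
    "id_number": "A123",
    "phone_registered_to": "B999",
    "electricity_kwh": 400,
    "baseline_kwh": 200,
    "current_locale": "Turpan",
    "registered_locale": "Kashgar",
    "travel_permit": False,
}
print(flag_micro_clues(example))
# → ['unregistered phone', 'abnormal electricity use',
#    'outside registered locale without permission']
```

The point of the sketch is that entirely lawful behavior trips the rules mechanically: nothing in the record involves wrongdoing, yet every check fires.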
Another key element of the IJOP system is the monitoring of personal relationships. Authorities seem to consider some of these relationships inherently suspicious. For example, the IJOP app instructs officers to investigate people who are related to people who have obtained a new phone number or who have foreign links.
The authorities have sought to justify mass surveillance in Xinjiang as a means to fight terrorism. While the app instructs officials to check for “terrorism” and “violent audio-visual content” when conducting phone and software checks, these terms are broadly defined under Chinese laws. It also instructs officials to watch out for “adherents of Wahhabism,” a term suggesting an ultra-conservative form of Islamic belief, and “families of those…who detonated [devices] and killed themselves.” But many—if not most—behaviors the IJOP system pays special attention to have no clear relationship to terrorism or extremism. Our analysis of the IJOP system suggests that gathering information to counter genuine terrorism or extremist violence is not a central goal of the system.
The app also scores government officials on their performance in fulfilling tasks and is a tool for higher-level supervisors to assign tasks to, and keep tabs on the performance of, lower-level officials. The IJOP app, in part, aims to control government officials to ensure that they are efficiently carrying out the government’s repressive orders.
In creating the IJOP system, the Chinese government has benefited from Chinese companies that provide it with technologies. While the Chinese government has primary responsibility for the human rights violations taking place in Xinjiang, these companies also have a responsibility under international law to respect human rights, avoid complicity in abuses, and adequately remedy them when they occur.
As detailed below, the IJOP system and some of the region’s checkpoints work together to form a series of invisible or virtual fences. Authorities describe them as a series of “filters” or “sieves” throughout the region, sifting out undesirable elements. Depending on the level of threat authorities perceive—determined by factors programmed into the IJOP system—individuals’ freedom of movement is restricted to different degrees. Some are held captive in Xinjiang’s prisons and political education camps; others are subjected to house arrest, not allowed to leave their registered locales, not allowed to enter public places, or not allowed to leave China.
Government control over movement in Xinjiang today bears similarities to the Mao Zedong era (1949-1976), when people were restricted to where they were registered to live and police could detain anyone for venturing outside their locales. After economic liberalization was launched in 1979, most of these controls gradually became obsolete. However, Xinjiang’s modern police state—which uses a combination of technological systems and administrative controls—empowers the authorities to reimpose a Mao-era degree of control, but in a graded manner that also meets the economy’s demands for largely free movement of labor.
The intrusive, massive collection of personal information through the IJOP app helps explain reports by Turkic Muslims in Xinjiang that government officials have asked them or their family members a bewildering array of personal questions. When government agents conduct intrusive visits to Muslims’ homes and offices, for example, they typically ask whether the residents own exercise equipment and how they communicate with families who live abroad; it appears that such officials are fulfilling requirements sent to them through apps such as the IJOP app. The IJOP app does not require government officials to inform the people whose daily lives are pored over and logged the purpose of such intrusive data collection or how their information is being used or stored, much less obtain consent for such data collection.
A checkpoint in Turpan, Xinjiang. Some of Xinjiang’s checkpoints are equipped with special machines that, in addition to recognizing people through their ID cards or facial recognition, are also vacuuming up people’s identifying information from their electronic devices. © 2018 Darren Byler
The Strike Hard Campaign has shown complete disregard for the rights of Turkic Muslims to be presumed innocent until proven guilty. In Xinjiang, authorities have created a system that considers individuals suspicious based on broad and dubious criteria, and then generates lists of people to be evaluated by officials for detention. Official documents state that individuals “who ought to be taken, should be taken,” suggesting the goal is to maximize the number of people they find “untrustworthy” in detention. Such people are then subjected to police interrogation without basic procedural protections. They have no right to legal counsel, and some are subjected to torture and mistreatment, for which they have no effective redress, as we have documented in our September 2018 report. The result is Chinese authorities, bolstered by technology, arbitrarily and indefinitely detaining Turkic Muslims in Xinjiang en masse for actions and behavior that are not crimes under Chinese law.
And yet Chinese authorities continue to make wildly inaccurate claims that their “sophisticated” systems are keeping Xinjiang safe by “targeting” terrorists “with precision.” In China, the lack of an independent judiciary and free press, coupled with fierce government hostility to independent civil society organizations, means there is no way to hold the government or participating businesses accountable for their actions, including for the devastating consequences these systems inflict on people’s lives.
The Chinese government should immediately shut down the IJOP and delete all the data it has collected from individuals in Xinjiang. It should cease the Strike Hard Campaign, including all compulsory programs aimed at surveilling and controlling Turkic Muslims. All those held in political education camps should be unconditionally released and the camps shut down. The government should also investigate Party Secretary Chen Quanguo and other senior officials implicated in human rights abuses, including violating privacy rights, and grant access to Xinjiang, as requested by the Office of the United Nations High Commissioner for Human Rights and UN human rights experts.
Concerned foreign governments should impose targeted sanctions, such as the US Global Magnitsky Act, including visa bans and asset freezes, against Party Secretary Chen and other senior officials linked to abuses in the Strike Hard Campaign. They should also impose appropriate export control mechanisms to prevent the Chinese government from obtaining technologies used to violate basic rights.
Why WhatsApp Will Never Be Secure
Pavel Durov, May 15, 2019
The world seems to be shocked by the news that WhatsApp turned any phone into spyware. Everything on your phone, including photos, emails and texts, was accessible to attackers just because you had WhatsApp installed.
This news didn’t surprise me though. Last year WhatsApp had to admit they had a very similar issue – a single video call via WhatsApp was all a hacker needed to get access to your phone’s entire data.
Every time WhatsApp has to fix a critical vulnerability in their app, a new one seems to appear in its place. All of their security issues are conveniently suitable for surveillance, and look and work a lot like backdoors.
Unlike Telegram, WhatsApp is not open source, so there’s no way for a security researcher to easily check whether there are backdoors in its code. Not only does WhatsApp not publish its code, they do the exact opposite: WhatsApp deliberately obfuscates their apps’ binaries to make sure no one is able to study them thoroughly.
WhatsApp and its parent company Facebook may even be required to implement backdoors – via secret processes such as the FBI’s gag orders. It’s not easy to run a secure communication app from the US. A week our team spent in the US in 2016 prompted 3 infiltration attempts by the FBI. Imagine what 10 years in that environment can bring upon a US-based company.
I understand security agencies justify planting backdoors as anti-terror efforts. The problem is such backdoors can also be used by criminals and authoritarian governments. No wonder dictators seem to love WhatsApp. Its lack of security allows them to spy on their own people, so WhatsApp continues being freely available in places like Russia or Iran, where Telegram is banned by the authorities.
As a matter of fact, I started working on Telegram as a direct response to personal pressure from the Russian authorities. Back then, in 2012, WhatsApp was still transferring messages in plain text in transit. That was insane. Not just governments or hackers, but mobile providers and wifi admins had access to all WhatsApp texts.
Later WhatsApp added some encryption, which quickly turned out to be a marketing ploy: the key to decrypt messages was available to at least several governments, including the Russians. Then, as Telegram started to gain popularity, WhatsApp’s founders sold their company to Facebook and declared that “privacy was in their DNA”. If true, it must have been a dormant or a recessive gene.
3 years ago WhatsApp announced they implemented end-to-end encryption so that “no third party can access messages.” It coincided with an aggressive push for all of its users to back up their chats in the cloud. When making this push, WhatsApp didn’t tell its users that, when backed up, messages are no longer protected by end-to-end encryption and can be accessed by hackers and law enforcement. Brilliant marketing, and some naive people are serving their time in jail as a result.
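To see why a cloud backup defeats end-to-end encryption, consider this toy Python sketch. It is deliberately simplified: the XOR "cipher" is a throwaway stand-in for real cryptography, and all names are invented. The point is structural: the relay only ever sees ciphertext, but a backup made outside the encrypted channel hands over readable plaintext.

```python
import hashlib

# Conceptual toy only -- NOT real cryptography. It illustrates why a cloud
# backup made outside the end-to-end channel exposes plaintext even though
# messages in transit are encrypted. All names here are invented.

def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR against a hash-derived keystream; a stand-in for real E2E encryption.
    # (XOR is its own inverse, so the same call also decrypts.)
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(plaintext))

device_only_key = b"known only to the two endpoints"
message = b"meet at noon"

in_transit = toy_encrypt(message, device_only_key)  # the relay sees only this
cloud_backup = message                              # backup uploads plaintext

assert in_transit != message      # the relay cannot read the message...
assert cloud_backup == message    # ...but the backup copy is fully readable
```

Real services vary in detail (some encrypt backups under a key the provider or a subpoenaed cloud host can reach), but the asymmetry is the same: whatever leaves the end-to-end channel inherits none of its protection.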
Those resilient enough not to fall for constant popups telling them to back up their chats can still be traced by a number of tricks – from accessing their contacts’ backups to invisible encryption key changes. The metadata generated by WhatsApp users – logs describing who chats with whom and when – is leaked to all kinds of agencies in large volumes by WhatsApp’s parent company. On top of this, you have a mix of critical vulnerabilities succeeding one another.
WhatsApp has a consistent history – from zero encryption at its inception to a succession of security issues strangely suitable for surveillance purposes. Looking back, there hasn’t been a single day in WhatsApp’s 10-year journey when the service was secure. That’s why I don’t think that just updating WhatsApp’s mobile app will make it secure for anyone. For WhatsApp to become a privacy-oriented service, it has to risk losing entire markets and clashing with authorities in its home country. They don’t seem to be ready for that.
Last year, the founders of WhatsApp left the company due to concerns over users’ privacy. They are definitely bound by either gag orders or NDAs, and so are unable to discuss backdoors publicly without risking their fortunes and freedom. They were able to admit, however, that “they sold their users’ privacy.”
I can understand the reluctance of WhatsApp’s founders to provide more detail – it’s not easy to put your comfort at risk. Several years ago I had to leave my country after refusing to comply with government-sanctioned privacy breaches of VK users. It was not pleasant. But would I do something like this again? Gladly. Every one of us is going to die eventually, but we as a species will stick around for a while. That’s why I think accumulating money, fame or power is irrelevant. Serving humanity is the only thing that really matters in the long run.
And yet, despite our intentions, I feel we let humanity down in this whole WhatsApp spyware story. A lot of people can’t stop using WhatsApp, because their friends and family are still on it. It means we at Telegram did a bad job of persuading people to switch over. While we did attract hundreds of millions of users in the last five years, this wasn’t enough. The majority of internet users are still held hostage by the Facebook/WhatsApp/Instagram empire. Many of those who use Telegram are also on WhatsApp, meaning their phones are still vulnerable. Even those who ditched WhatsApp completely are probably using Facebook or Instagram, both of which think it’s OK to store your passwords in plaintext (I still can’t believe a tech company could do something like this and get away with it).
In almost 6 years of its existence, Telegram hasn’t had any major data leak or security flaw of the kind WhatsApp demonstrates every few months. In the same 6 years, we disclosed exactly zero bytes of data to third parties, while Facebook/WhatsApp has been sharing pretty much everything with everybody who claimed they worked for a government.
Few people outside the Telegram fan community realize that most of the new features in messaging appear on Telegram first, and are then carbon-copied by WhatsApp down to the tiniest details. More recently we are witnessing the attempt by Facebook to borrow Telegram’s entire philosophy, with Zuckerberg suddenly declaring the importance of privacy and speed, practically citing Telegram’s app description word for word in his F8 speech.
But whining about FB’s hypocrisy and lack of creativity won’t help. We have to admit Facebook is executing an efficient strategy. Look what they did to Snapchat.
We at Telegram have to acknowledge our responsibility in forming the future. It’s either us or the Facebook monopoly. It’s either freedom and privacy or greed and hypocrisy. Our team has been competing with Facebook for the last 13 years. We already beat them once, in the Eastern European social networking market. We will beat them again in the global messaging market. We have to.
It won't be easy. The Facebook marketing department is huge. We at Telegram, however, do zero marketing. We don’t want to pay journalists and researchers to tell the world about Telegram. For that, we rely on you – the millions of our users. If you like Telegram enough, you will tell your friends about it. And if every Telegram user persuades 3 of their friends to delete WhatsApp and permanently move to Telegram, Telegram will already be more popular than WhatsApp.
The age of greed and hypocrisy will end. An era of freedom and privacy will begin. It is much closer than it seems.
 Business Insider WhatsApp was hacked and attackers installed spyware on people’s phones – May 15, 2019
 Security Today WhatsApp Bug Allowed Hackers to Hijack Accounts – October 12, 2018
 Wikipedia Gag order – United States
 Neowin FBI asked Durov and developer for Telegram backdoor – September 19, 2017
 The Baffler The Crypto-Keepers – September 17, 2017
 New York Times What Is Telegram, and Why Are Iran and Russia Trying to Ban It? – May 2, 2018
 YourDailyMac Whatsapp leaks usernames, telephone numbers and messages – May 19, 2011
 The H Security Sniffer tool displays other people's WhatsApp messages – May 13, 2012
 FilePerms WhatsApp is broken, really broken – September 12, 2012
 International Business Times Respect for Privacy Is Coded Into WhatsApp's DNA: Founder Jan Koum – March 18, 2014
 Slate How Did the FBI Access Paul Manafort’s Encrypted Messages? – June 5, 2018
 AppleInsider WhatsApp backdoor defeats end-to-end encryption, potentially allows Facebook to read messages – January 13, 2017
 Forbes Forget About Backdoors, This Is The Data WhatsApp Actually Hands To Cops – January 22, 2017
 New York Times Facebook Said to Create Censorship Tool to Get Back Into China – November 22, 2016
 The Verge WhatsApp co-founder Jan Koum is leaving Facebook after clashing over data privacy – April 30, 2018
 CNET WhatsApp co-founder: 'I sold my users' privacy' with Facebook acquisition – September 25, 2018
 New York Times Once celebrated in Russia, programmer Pavel Durov chooses exile – December 2, 2014
 TechCrunch Facebook admits it stored ‘hundreds of millions’ of account passwords in plaintext – March 21, 2019
 Engadget Facebook stored millions of Instagram passwords in plain text – April 18, 2019
 Vanity Fair Snapchat is doing so badly, the feds are getting involved – November 14, 2018
 HuffPost Vkontakte, Facebook Competitor In Russia, Dominates – October 26, 2012
Subscribe to Hidden Forces and gain access to the episode overtime, transcript, and show rundown here: http://hiddenforces.io/subscribe
In Episode 79 of Hidden Forces, Demetri Kofinas speaks with Shoshana Zuboff about the rise of “Surveillance Capitalism,” a pernicious new economic logic that robs us of our experiences, dispossesses us of our sanctuaries, and makes our lives increasingly unlivable. In 1609, while searching for a rumored northeast passage to Asia on behalf of the Dutch East India Company, the English explorer and navigator Henry Hudson landed on what is now New York City.
Grizzly » Thu Jun 06, 2019 7:15 pm wrote: Schools Are Deploying Massive Digital Surveillance Systems. The Results Are Alarming
Then there are the alerts generated by vague messages between friends. How is a school district supposed to respond when one student writes to another, “Tomorrow it will all be over?”
https://www.bnnbloomberg.ca/the-future- ... -1.1270598
The future will be recorded, on your smart speaker
Here’s Amazon’s solution: Alexa already stores what it hears in a buffer. Under the new configuration, according to the application, once Alexa detects a wake word, “the device will go backwards through the audio in the buffer to determine the start of the utterance that includes the wakeword.” After finding what it scores as the most likely start of the command, Alexa will perform a similar calculation to find the end. The command will then be processed exactly like one that was preceded by the wake word.
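The buffering scheme the application describes can be sketched roughly as follows. This is a toy Python illustration only: it models audio as a stream of word tokens with None marking silence, whereas a real device would segment raw audio with acoustic models; the buffer size and all function names are invented.

```python
from collections import deque

# Toy sketch of the buffering scheme described in the patent application:
# input (here, word tokens; None marks silence) is kept in a fixed-size
# buffer. On detecting the wake word, we walk backwards through the buffer
# to the most recent silence gap to find the start of the utterance, then
# treat everything from that point (minus the wake word) as the command.

BUFFER_TOKENS = 8  # assumed buffer capacity; a real buffer holds audio samples

def find_command(buffer, wake_word="alexa"):
    tokens = list(buffer)
    if wake_word not in tokens:
        return None  # no wake word heard; nothing is processed
    idx = tokens.index(wake_word)
    # Walk backwards from the wake word to the previous silence gap,
    # which we take as the start of the utterance.
    start = idx
    while start > 0 and tokens[start - 1] is not None:
        start -= 1
    # The command is the utterance with silence and the wake word removed.
    return [t for t in tokens[start:] if t is not None and t != wake_word]

buf = deque(maxlen=BUFFER_TOKENS)
for token in [None, "turn", "on", "the", "lights", "alexa", None]:
    buf.append(token)

print(find_command(buf))  # → ['turn', 'on', 'the', 'lights']
```

Note what makes this different from today's behavior: the command ("turn on the lights") was spoken *before* the wake word, and is recovered only because it was already sitting in the buffer when "alexa" arrived.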
It all makes a great deal of sense. Why then the concern? It seems to me that there are two potential issues.
One is a worry about what happens to the information from the audio buffer. Alexa currently retains recordings for a period of time, helping it model the user’s needs and wants. This feature, which can be partially disabled, has already caused privacy problems. Courts have issued subpoenas for Alexa recordings. And as Bloomberg News has reported, human beings at Amazon already listen to much of what Alexa hears, in an effort to improve the algorithm. But no always-on feature was necessary for those recordings to survive.
The second concern might be that an Alexa which listens more closely, responding to natural language commands, will soon become an Alexa that fades into the background. The relative formality with which the device must be addressed serves as a reminder that we are addressing just that — a device. The more casually we can speak, the more casual we will likely be about using it. We might, quite literally, forget that Alexa is there. Only the consumer can decide whether that is a feature or a bug.
Amazon says that it has no current plan to change the way Alexa listens, but bear in mind that the always-on feature can be implemented whether or not it is ever patented. In other words, if the notion of a device that is always awake worries you, the fact that a patent application has been filed shouldn’t cause you to worry more. Any device that listens to you can already be made always-on. (Including, by the way, your smartphone.)
We accept that our laptops and smart televisions are recording the choices we make and sending them we know not where. The only reason we imagine that our spoken words are safe is that speech is an older, more instinctive technology. We still think of speech as special, a distinctively human function, and when we are in spaces we consider private, we consider our voice as something heard only by our most intimate and trusted acquaintances.
But to the computers that now surround us, speech is just another form of data. The various voice-commanded devices of today, whether in our homes, smartphones or cars, work just like keyboards or touchscreens. The only difference is that the human input is a voice. And the only way they can get that input is to listen for it.
So let’s calm down. Yes, it can be fun to imagine a future in which our homes are entirely connected and yet we’re able to keep private everything we want to keep private. But that ship sailed long before Amazon decided to seek a patent on a minor and welcome change to Alexa.
ACLU wants lawmakers to start dealing with an increase in AI-powered surveillance.