Tuesday, 13 February 2018

Relativity of Privacy in the Digital Society


                                                                       Vita sine libertate, nihil

                          

Every minute, millions of people communicate on social networks, work in the Internet environment and use e-services provided by commercial enterprises and state institutions. In doing so they consciously or unwittingly spread information of a private nature in the public space through a variety of service providers, including correspondents of whose reliability and the legitimacy of whose activities they are not at all convinced. At the same time, they are completely clueless about what happens next with these personal data.
Many people, while communicating voluntarily in social networks, on thematic forums and in the media, disclose the details of their private life, their hobbies, character traits, political views and worldviews. There are companies and intelligence services that monitor all this, collect and analyse the information obtained (including illegally tapped conversations, video recordings, etc.) and compile personal dossiers.
The information collected and accumulated in this way is used both for targeted marketing and for specific needs of supervision and control over the individual's activities, including in cases where demand for it arises when the person's social status changes... Read more: https://www.amazon.com/HOW-GET-RID-SHACKLES-TOTALITARIANISM-ebook/dp/B0C9543B4L/ref=sr_1_1?crid=19WW1TG75ZU79&keywords=HOW+TO+GET+RID+OF+THE+SHACKLES+OF+TOTALITARIANISM&qid=1687700500&s=books&sprefix=how+to+get+rid+of+the+shackles+of+totalitarianism%2Cstripbooks-intl-ship%2C181&sr=1-1


Microsoft's new 'Data Dignity' team could help users control their personal data

Microsoft is staffing up a new Data Dignity team in its CTO's office which could help users to control their own personal data, ultimately to the point of buying and selling it…:
By Mary Jo Foley for All About Microsoft | September 23, 2019

Commission proposes measures to boost data sharing and support European data spaces


To better exploit the potential of ever-growing data in a trustworthy European framework, the Commission today proposes new rules on data governance. The Regulation will facilitate data sharing across the EU and between sectors to create wealth for society, increase control and trust of both citizens and companies regarding their data, and offer an alternative European model to data handling practice of major tech platforms.

The amount of data generated by public bodies, businesses and citizens is constantly growing. It is expected to multiply by five between 2018 and 2025. These new rules will allow this data to be harnessed and will pave the way for sectoral European data spaces to benefit society, citizens and companies. In the Commission's data strategy of February this year, nine such data spaces have been proposed, ranging from industry to energy, and from health to the European Green Deal. They will, for example, contribute to the green transition by improving the management of energy consumption, make delivery of personalised medicine a reality, and facilitate access to public services.

Executive Vice-President for A Europe Fit for the Digital Age, Margrethe Vestager, said: “You don't have to share all data. But if you do and data is sensitive you should be able to do in a manner where data can be trusted and protected. We want to give business and citizens the tools to stay in control of data. And to build trust that data is handled in line with European values and fundamental rights.”

Commissioner for Internal Market, Thierry Breton, said: “We are defining today a truly European approach to data sharing. Our new regulation will enable trust and facilitate the flow of data across sectors and Member States while putting all those who generate data in the driving seat. With the ever-growing role of industrial data in our economy, Europe needs an open yet sovereign Single Market for data. Flanked by the right investments and key infrastructures, our regulation will help Europe become the world's number one data continent.”

Delivering on the announcement in the data strategy, the Regulation will create the basis for a new European way of data governance that is in line with EU values and principles, such as personal data protection (GDPR), consumer protection and competition rules. It offers an alternative model to the data-handling practices of the big tech platforms, which can acquire a high degree of market power because of their business models that imply control of large amounts of data. This new approach proposes a model based on the neutrality and transparency of data intermediaries, which are organisers of data sharing or pooling, to increase trust. To ensure this neutrality, the data-sharing intermediary cannot deal in the data on its own account (e.g. by selling it to another company or using it to develop their own product based on this data) and will have to comply with strict requirements.

The Regulation includes:

  • A number of measures to increase trust in data sharing, as the lack of trust is currently a major obstacle and results in high costs.
  • New EU rules on neutrality to allow novel data intermediaries to function as trustworthy organisers of data sharing.
  • Measures to facilitate the reuse of certain data held by the public sector. For example, the reuse of health data could advance research to find cures for rare or chronic diseases.
  • Means to give Europeans control on the use of the data they generate, by making it easier and safer for companies and individuals to voluntarily make their data available for the wider common good under clear conditions.

Background

Today's proposal is the first deliverable under the European strategy for data, which aims to unlock the economic and societal potential of data and technologies like Artificial Intelligence, while respecting EU rules and values (for example in the area of data protection, and respect of intellectual property and trade secrets). The strategy will build on the size of the single market as a space where data can flow within the EU and across sectors in line with clear, practical and fair rules for access and reuse. Today's proposal also supports wider international sharing of data, under conditions that ensure compliance with the European public interest and the legitimate interests of data providers.

More dedicated proposals on data spaces are expected to follow in 2021, complemented by a Data Act to foster data sharing among businesses, and between business and governments.

https://ec.europa.eu/commission/presscorner/detail/en/IP_20_2102 


POV: The U.S. government is already buying citizens’ personal information. AI could make that process even easier

Generative AI increases the threat to privacy by giving the government access to sensitive information beyond what it could collect through court-authorized surveillance.

 BY ANNE TOOMEY MCKENNA,  07-05-23

 Numerous government agencies, including the FBI, Department of Defense, National Security Agency, Treasury Department, Defense Intelligence Agency, Navy, and Coast Guard, have purchased vast amounts of U.S. citizens’ personal information from commercial data brokers. The revelation was published in a partially declassified, internal Office of the Director of National Intelligence report released on June 9, 2023.

The report shows the breathtaking scale and invasive nature of the consumer data market and how that market directly enables wholesale surveillance of people. The data includes not only where you’ve been and who you’re connected to, but the nature of your beliefs and predictions about what you might do in the future. The report underscores the grave risks the purchase of this data poses, and urges the intelligence community to adopt internal guidelines to address these problems.

As a privacy, electronic surveillance, and technology law attorney, researcher, and law professor, I have spent years researching, writing, and advising about the legal issues the report highlights.

These issues are increasingly urgent. Today’s commercially available information, coupled with the now-ubiquitous decision-making artificial intelligence and generative AI like ChatGPT, significantly increases the threat to privacy and civil liberties by giving the government access to sensitive personal information beyond even what it could collect through court-authorized surveillance.

WHAT IS COMMERCIALLY AVAILABLE INFORMATION?

The drafters of the report take the position that commercially available information is a subset of publicly available information. The distinction between the two is significant from a legal perspective. Publicly available information is information that is already in the public domain. You could find it by doing a little online searching.

Commercially available information is different. It is personal information collected from a dizzying array of sources by commercial data brokers that aggregate and analyze it, then make it available for purchase by others, including governments. Some of that information is private, confidential, or otherwise legally protected.

The sources and types of data for commercially available information are mind-bogglingly vast. They include public records and other publicly available information. But far more information comes from the nearly ubiquitous internet-connected devices in people’s lives, like cellphones, smart-home systems, cars, and fitness trackers. These all harness data from sophisticated, embedded sensors, cameras, and microphones. Sources also include data from apps, online activity, texts, and emails, and even healthcare provider websites.

Types of data include location, gender, and sexual orientation, religious and political views and affiliations, weight and blood pressure, speech patterns, emotional states, behavioral information about myriad activities, shopping patterns, and family and friends.

This data provides companies and governments a window into the “Internet of Behaviors,” a combination of data collection and analysis aimed at understanding and predicting people’s behavior. It pulls together a wide range of data, including location and activities, and uses scientific and technological approaches, including psychology and machine learning, to analyze that data. The Internet of Behaviors provides a map of what each person has done, is doing, and is expected to do, and provides a means to influence a person’s behavior.

BETTER, CHEAPER, AND UNRESTRICTED

The rich depths of commercially available information, analyzed with powerful AI, provide unprecedented power, intelligence, and investigative insights. The information is a cost-effective way to surveil virtually everyone, plus it provides far more sophisticated data than traditional electronic surveillance tools or methods like wiretapping and location tracking.

Government use of electronic-surveillance tools is extensively regulated by federal and state laws. The U.S. Supreme Court has ruled that the Constitution’s Fourth Amendment, which prohibits unreasonable searches and seizures, requires a warrant for a wide range of digital searches. These include wiretapping or intercepting a person’s calls, texts, or emails; using GPS or cellular location information to track a person; or searching a person’s cellphone.

Complying with these laws takes time and money, plus electronic-surveillance law restricts what, when, and how data can be collected. Commercially available information is cheaper to obtain, provides far richer data and analysis, and is subject to little oversight or restriction compared to when the same data is collected directly by the government.

THE THREATS

Technology and the burgeoning volume of commercially available information allow various forms of the information to be combined and analyzed in new ways to understand all aspects of your life, including preferences and desires.

The Office of the Director of National Intelligence report warns that the increasing volume and widespread availability of commercially available information poses “significant threats to privacy and civil liberties.” It increases the power of the government to surveil its citizens outside the bounds of law, and it opens the door to the government using that data in potentially unlawful ways. This could include using location data obtained via commercially available information rather than a warrant to investigate and prosecute someone for abortion.

The report also captures both how widespread government purchases of commercially available information are and how haphazard government practices around the use of the information are. The purchases are so pervasive and agencies’ practices so poorly documented that the Office of the Director of National Intelligence cannot even fully determine how much and what types of information agencies are purchasing, and what the various agencies are doing with the data.

IS IT LEGAL?

The question of whether it’s legal for government agencies to purchase commercially available information is complicated by the array of sources and complex mix of data it contains.

There is no legal prohibition on the government collecting information already disclosed to the public or otherwise publicly available. But the nonpublic information listed in the declassified report includes data that U.S. law typically protects. The nonpublic information’s mix of private, sensitive, confidential, or otherwise lawfully protected data makes collection a legal gray area.

Despite decades of increasingly sophisticated and invasive commercial data aggregation, Congress has not passed a federal data privacy law. The lack of federal regulation around data creates a loophole for government agencies to evade electronic surveillance law. It also allows agencies to amass enormous databases that AI systems learn from and use in often unrestricted ways. The resulting erosion of privacy has been a concern for more than a decade.

THROTTLING THE DATA PIPELINE

The Office of the Director of National Intelligence report acknowledges the stunning loophole that commercially available information provides for government surveillance: “The government would never have been permitted to compel billions of people to carry location tracking devices on their persons at all times, to log and track most of their social interactions, or to keep flawless records of all their reading habits. Yet smartphones, connected cars, web tracking technologies, the Internet of Things, and other innovations have had this effect without government participation.”

However, it isn’t entirely correct to say “without government participation.” The legislative branch could have prevented this situation by enacting data privacy laws, more tightly regulating commercial data practices, and providing oversight in AI development. Congress could yet address the problem. Representative Ted Lieu has introduced the bipartisan proposal for a National AI Commission, and Senator Chuck Schumer has proposed an AI regulation framework.

Effective data-privacy laws would keep your personal information safer from government agencies and corporations, and responsible AI regulation would block them from manipulating you.

https://www.fastcompany.com/90917060/pov-the-u-s-government-is-already-buying-citizens-personal-informatio

Decentralized Society: Finding Web3's Soul

by EG Weyl · 2022 

 We call this richer, pluralistic ecosystem “Decentralized Society” (DeSoc)—a co-determined sociality, where Souls and communities come ... :

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4105763

Cracking Down on Dissent, Russia Seeds a Surveillance Supply Chain

Russia is incubating a cottage industry of new digital surveillance tools to suppress domestic opposition to the war in Ukraine. The tech may also be sold overseas.

 By Aaron Krolik, Paul Mozur and Adam Satariano

 July 3, 2023

 As the war in Ukraine unfolded last year, Russia’s best digital spies turned to new tools to fight an enemy on another front: those inside its own borders who opposed the war.

To aid an internal crackdown, Russian authorities had amassed an arsenal of technologies to track the online lives of citizens. After it invaded Ukraine, its demand grew for more surveillance tools. That helped stoke a cottage industry of tech contractors, which built products that have become a powerful — and novel — means of digital surveillance.

The technologies have given the police and Russia’s Federal Security Service, better known as the F.S.B., access to a buffet of snooping capabilities focused on the day-to-day use of phones and websites. The tools offer ways to track certain kinds of activity on encrypted apps like WhatsApp and Signal, monitor the locations of phones, identify anonymous social media users and break into people’s accounts, according to documents from Russian surveillance providers obtained by The New York Times, as well as security experts, digital activists and a person involved with the country’s digital surveillance operations.

President Vladimir V. Putin is leaning more on technology to wield political power as Russia faces military setbacks in Ukraine, bruising economic sanctions and leadership challenges after an uprising led by Yevgeny V. Prigozhin, the commander of the Wagner paramilitary group. In doing so, Russia — which once lagged authoritarian regimes like China and Iran in using modern technology to exert control — is quickly catching up.

The Federal Security Service building on Lubyanka Square in Moscow in May. The F.S.B. and other Russian authorities want stronger technologies to track the online lives of citizens. Credit: Maxim Shemetov/Reuters

“It’s made people very paranoid, because if you communicate with anyone in Russia, you can’t be sure whether it’s secure or not. They are monitoring traffic very actively,” said Alena Popova, a Russian opposition political figure and digital rights activist. “It used to be only for activists. Now they have expanded it to anyone who disagrees with the war.”

The effort has fed the coffers of a constellation of relatively unknown Russian technology firms. Many are owned by Citadel Group, a business once partially controlled by Alisher Usmanov, who was a target of European Union sanctions as one of Mr. Putin’s “favorite oligarchs.” Some of the companies are trying to expand overseas, raising the risk that the technologies do not remain inside Russia.

The firms — with names like MFI Soft, Vas Experts and Protei — generally got their start building pieces of Russia’s invasive telecom wiretapping system before producing more advanced tools for the country’s intelligence services.

Simple-to-use software that plugs directly into the telecommunications infrastructure now provides a Swiss-army knife of spying possibilities, according to the documents, which include engineering schematics, emails and screen shots. The Times obtained hundreds of files from a person with access to the internal records, about 40 of which detailed the surveillance tools.

One program outlined in the materials can identify when people make voice calls or send files on encrypted chat apps such as Telegram, Signal and WhatsApp. The software cannot intercept specific messages, but can determine whether someone is using multiple phones, map their relationship network by tracking communications with others, and triangulate what phones have been in certain locations on a given day. Another product can collect passwords entered on unencrypted websites.

These technologies complement other Russian efforts to shape public opinion and stifle dissent, like a propaganda blitz on state media, more robust internet censorship and new efforts to collect data on citizens and encourage them to report social media posts that undermine the war.

President Vladimir V. Putin of Russia with Alisher Usmanov in 2018. Mr. Usmanov once controlled Citadel Group, a conglomerate that owns many of the firms building surveillance technology. Credit: Sputnik/Reuters

They add up to the beginnings of an off-the-shelf tool kit for autocrats who wish to gain control of what is said and done online. One document outlining the capabilities of various tech providers referred to a “wiretap market,” a supply chain of equipment and software that pushes the limits of digital mass surveillance.

The authorities are “essentially incubating a new cohort of Russian companies that have sprung up as a result of the state’s repressive interests,” said Adrian Shahbaz, a vice president of research and analysis at the pro-democracy advocacy group Freedom House, who studies online oppression. “The spillover effects will be felt first in the surrounding region, then potentially the world.”

In one English-language marketing document aimed at overseas customers, a diagram depicts a Russian surveillance company’s phone tracking capabilities.

Beyond the ‘Wiretap Market’

Over the past two decades, Russian leaders struggled to control the internet. To remedy that, they ordered up systems to eavesdrop on phone calls and unencrypted text messages. Then they demanded that providers of internet services store records of all internet traffic.

The expanding program — formally known as the System for Operative Investigative Activities, or SORM — was an imperfect means of surveillance. Russia’s telecom providers often incompletely installed and updated the technologies, meaning the system did not always work properly. The volume of data pouring in could be overwhelming and unusable.

At first, the technology was used against political rivals like supporters of Aleksei A. Navalny, the jailed opposition leader. Demand for the tools increased after the invasion of Ukraine, digital rights experts said. Russian authorities turned to local tech companies that built the old surveillance systems and asked for more.

The push benefited companies like Citadel, which had bought many of Russia’s biggest makers of digital wiretapping equipment and controls about 60 to 80 percent of the market for telecommunications monitoring technology, according to the U.S. State Department. The United States announced sanctions against Citadel and its current owner, Anton Cherepennikov, in February.

“Sectors connected to the military and communications are getting a lot of funding right now as they adapt to new demands,” said Ksenia Ermoshina, a senior researcher who studies Russian surveillance companies with Citizen Lab, a research institute at the University of Toronto.

The new technologies give Russia’s security services a granular view of the internet. A tracking system from one Citadel subsidiary, MFI Soft, helps display information about telecom subscribers, along with statistical breakdowns of their internet traffic, on a specialized control panel for use by regional F.S.B. officers, according to one chart.

Another MFI Soft tool, NetBeholder, can map the locations of two phones over the course of the day to discern whether they simultaneously ran into each other, indicating a potential meeting between people.

A different feature, which uses location tracking to check whether several phones are frequently in the same area, deduces whether someone might be using two or more phones. With full access to telecom network subscriber information, NetBeholder’s system can also pinpoint the region in Russia each user is from or what country a foreigner comes from.
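To make the logic of such co-location analysis concrete, here is a minimal Python sketch of how frequent co-presence can be computed from coarse location logs. It illustrates the general technique described above, not NetBeholder's actual implementation; the record format, time buckets and the threshold are assumptions made for the example.

```python
# Minimal sketch of co-location analysis: given per-device location logs
# (device id, hourly time bucket, coarse area id), count how often two
# devices appear in the same area at the same time. Frequent co-presence can
# suggest a meeting between two people, or one person carrying two phones.
# Illustration only; not any vendor's real implementation.
from collections import defaultdict
from itertools import combinations

def co_location_counts(records):
    """records: iterable of (device_id, time_bucket, area_id) tuples."""
    seen = defaultdict(set)                 # (time_bucket, area_id) -> devices present
    for device_id, time_bucket, area_id in records:
        seen[(time_bucket, area_id)].add(device_id)

    pair_counts = defaultdict(int)          # (device_a, device_b) -> co-presence count
    for devices in seen.values():
        for a, b in combinations(sorted(devices), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

logs = [
    ("phone_A", "09:00", "cell_17"), ("phone_B", "09:00", "cell_17"),
    ("phone_A", "13:00", "cell_42"), ("phone_B", "13:00", "cell_42"),
    ("phone_C", "13:00", "cell_08"),
]
for (a, b), n in co_location_counts(logs).items():
    if n >= 2:   # frequently together: possible meeting, or one person with two phones
        print(a, b, "co-located", n, "times")
```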

Protei, another company, offers products that provide voice-to-text transcription for intercepted phone calls and tools for identifying “suspicious behavior,” according to one document.

Russia’s enormous data collection and the new tools make for a “killer combo,” said Ms. Ermoshina, who added that such capabilities are increasingly widespread across the country.

Citadel and Protei did not respond to requests for comment. A spokesman for Mr. Usmanov said he “has not participated in any management decisions for several years” involving the parent company, called USM, that owned Citadel until 2022. The spokesman said Mr. Usmanov owns 49 percent of USM, which sold Citadel because surveillance technology was never within the firm’s “sphere of interest.”

VAS Experts said the need for its tools had “increased due to the complex geopolitical situation” and volume of threats inside Russia. It said it “develops telecom products which include tools for lawful interception and which are used by F.S.B. officers who fight against terrorism,” adding that if the technology “will save at least one life and people well-being then we work for a reason.”

A diagram from one corporate document shows how data is collected by an internet service provider and funneled to a local branch of the F.S.B.

No Way to Mask

As the authorities have clamped down, some citizens have turned to encrypted messaging apps to communicate. Yet security services have also found a way to track those conversations, according to files reviewed by The Times.

One feature of NetBeholder harnesses a technique known as deep-packet inspection, which is used by telecom service providers to analyze where their traffic is going. Akin to mapping the currents of water in a stream, the software cannot intercept the contents of messages but can identify what data is flowing where.

That means it can pinpoint when someone sends a file or connects on a voice call on encrypted apps like WhatsApp, Signal or Telegram. This gives the F.S.B. access to important metadata, which is the general information about a communication such as who is talking to whom, when and where, as well as if a file is attached to a message.
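As a rough illustration of what such metadata-only inference can look like, the sketch below classifies per-subscriber flow records (service, bytes, duration) without ever touching message content. It is a hypothetical toy, not any vendor's product; the service names, thresholds and heuristics are invented for the example.

```python
# Hedged sketch of the kind of inference deep-packet inspection enables: the
# message content stays encrypted, but flow metadata (which service, when,
# how many bytes, for how long) is visible to the network operator.
# The heuristics below are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Flow:
    subscriber: str
    service: str       # e.g. resolved from server IP ranges: "whatsapp", "signal", "telegram"
    bytes_up: int
    bytes_down: int
    duration_s: float

def classify(flow: Flow) -> str:
    if flow.service not in {"whatsapp", "signal", "telegram"}:
        return "other"
    # Long, roughly symmetric flows look like voice calls; short bursts with a
    # large upload look like file transfers (illustrative heuristics only).
    if flow.duration_s > 30 and 0.5 < flow.bytes_up / max(flow.bytes_down, 1) < 2.0:
        return f"{flow.service}: likely voice call"
    if flow.bytes_up > 200_000 and flow.duration_s < 15:
        return f"{flow.service}: likely file upload"
    return f"{flow.service}: messaging or background traffic"

print(classify(Flow("sub_001", "signal", 1_200_000, 1_100_000, 95.0)))
print(classify(Flow("sub_001", "whatsapp", 850_000, 40_000, 6.0)))
```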

To obtain such information in the past, governments were forced to request it from the app makers like Meta, which owns WhatsApp. Those companies then decided whether to provide it.

The new tools have alarmed security experts and the makers of the encrypted services. While many knew such products were theoretically possible, it was not known that they were now being made by Russian contractors, security experts said.

Some of the encrypted app tools and other surveillance technologies have begun spreading beyond Russia. Marketing documents show efforts to sell the products in Eastern Europe and Central Asia, as well as Africa, the Middle East and South America. In January, Citizen Lab reported that Protei equipment was used by an Iranian telecom company for logging internet usage and blocking websites. Ms. Ermoshina said the systems have also been seen in Russian-occupied areas of Ukraine.

For the makers of Signal, Telegram and WhatsApp, there are few defenses against such tracking. That’s because the authorities are capturing data from internet service providers with a bird’s-eye view of the network. Encryption can mask the specific messages being shared, but cannot block the record of the exchange.

“Signal wasn’t designed to hide the fact that you’re using Signal from your own internet service provider,” Meredith Whittaker, the president of the Signal Foundation, said in a statement. She called for people worried about such tracking to use a feature that sends traffic through a different server to obfuscate its origin and destination.

In a statement, Telegram, which does not encrypt all messages by default, also said nothing could be done to mask traffic going to and from the chat apps, but said people could use features it had created to make Telegram traffic harder to identify and follow. WhatsApp said in a statement that the surveillance tools were a “pressing threat to people’s privacy globally” and that it would continue protecting private conversations.

The new tools will likely shift the best practices of those who wish to disguise their online behavior. In Russia, the existence of a digital exchange between a suspicious person and someone else can trigger a deeper investigation or even arrest, people familiar with the process said.

Mr. Shahbaz, the Freedom House researcher, said he expected the Russian firms to eventually become rivals to the usual purveyors of surveillance tools.

“China is the pinnacle of digital authoritarianism,” he said. “But there has been a concerted effort in Russia to overhaul the country’s internet regulations to more closely resemble China. Russia will emerge as a competitor to Chinese companies.”

 https://web.archive.org/web/20230703040851/https://www.nytimes.com/2023/07/03/technology/russia-ukraine-surveillance-tech.html

 

'The Perfect Police State: An Undercover Odyssey into China's Terrifying Surveillance Dystopia of the Future' (Public Affairs, 29. 06. 2021)

by Geoffrey Cain

A riveting investigation into how a restive region of China became the site of a nightmare Orwellian social experiment—the definitive police state—and the global technology giants that made it possible
 
Blocked from facts and truth, under constant surveillance, surrounded by a hostile alien police force: Xinjiang’s Uyghur population has become cursed, oppressed, outcast. Most citizens cannot discern between enemy and friend. Social trust has been destroyed systematically. Friends betray each other, bosses snitch on employees, teachers expose their students, and children turn on their parents. Everyone is dependent on a government that nonetheless treats them with suspicion and contempt. Welcome to the Perfect Police State. 
Using the haunting story of one young woman’s attempt to escape the vicious technological dystopia, his own reporting from Xinjiang, and extensive firsthand testimony from exiles, Geoffrey Cain reveals the extraordinary intrusiveness and power of the tech surveillance giants and the chilling implications for all our futures. 

07-27-22

Yes, you are being watched, even if no one is looking for you

It’s important that we recognize the extent to which physical and digital tracking work together.

BY PETER KRAPP

The U.S. has the largest number of surveillance cameras per person in the world. Cameras are omnipresent on city streets and in hotels, restaurants, malls and offices. They’re also used to screen passengers for the Transportation Security Administration. And then there are smart doorbells and other home security cameras.

Most Americans are aware of video surveillance of public spaces. Likewise, most people know about online tracking–and want Congress to do something about it. But as a researcher who studies digital culture and secret communications, I believe that to understand how pervasive surveillance is, it’s important to recognize how physical and digital tracking work together.

Databases can correlate location data from smartphones, the growing number of private cameras, license plate readers on police cruisers and toll roads, and facial recognition technology, so if law enforcement wants to track where you are and where you’ve been, they can. They do, however, need a warrant to use cellphone search equipment: connecting your device to a mobile device forensic tool lets them extract and analyze all your data.

However, private data brokers also track this kind of data and help surveil citizens–without a warrant. There is a large market for personal data, compiled from information people volunteer, information people unwittingly yield–for example, via mobile apps–and information that is stolen in data breaches. Among the customers for this largely unregulated data are federal, state and local law enforcement agencies.

HOW YOU ARE TRACKED

Whether or not you pass under the gaze of a surveillance camera or license plate reader, you are tracked by your mobile phone. GPS tells weather apps or maps your location, Wi-Fi uses your location, and cell-tower triangulation tracks your phone. Bluetooth can identify and track your smartphone, and not just for COVID-19 contact tracing, Apple’s “Find My” service, or to connect headphones.

People volunteer their locations for ride-sharing or for games like Pokemon Go or Ingress, but apps can also collect and share location without your knowledge. Many late-model cars feature telematics that track locations–for example, OnStar or Bluelink. All this makes opting out impractical.

The same thing is true online. Most websites feature ad trackers and third-party cookies, which are stored in your browser whenever you visit a site. They identify you when you visit other sites so advertisers can follow you around. Some websites also use key logging, which monitors what you type into a page before hitting submit. Similarly, session recording monitors mouse movements, clicks, scrolling and typing, even if you don’t click “submit.”

Ad trackers know when you browsed where, which browser you used, and what your device’s internet address is. Google and Facebook are among the main beneficiaries, but there are many data brokers slicing and dicing such information by religion, ethnicity, political affiliations, social media profiles, income and medical history for profit.
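A small sketch may help show why third-party cookies are so effective at following people around: every site that embeds the same tracker reports back the same cookie ID, so the tracker's server can stitch visits into one cross-site profile. The sites, IDs and request format below are invented for illustration.

```python
# Minimal sketch of cross-site tracking via a shared third-party cookie.
# Each embedding site sends the tracker's cookie ID along with the page URL,
# letting one vendor assemble a browsing profile per tracked browser.
from collections import defaultdict

# (tracker_cookie_id, embedding_site, page) as seen by the tracker's server
tracker_requests = [
    ("cookie_abc", "news-example.com", "/politics/article-1"),
    ("cookie_abc", "shop-example.com", "/cart"),
    ("cookie_abc", "health-example.org", "/symptoms/insomnia"),
    ("cookie_xyz", "news-example.com", "/sports"),
]

profiles = defaultdict(list)          # cookie id -> list of visited pages
for cookie_id, site, page in tracker_requests:
    profiles[cookie_id].append(f"{site}{page}")

for cookie_id, pages in profiles.items():
    print(cookie_id, "->", pages)     # one browsing history per tracked browser
```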

BIG BROTHER IN THE 21ST CENTURY

People may implicitly consent to some loss of privacy in the interest of perceived or real security–for example, in stadiums, on the road and at airports, or in return for cheaper online services. But these trade-offs benefit individuals far less than the companies aggregating data. Many Americans are suspicious of government censuses, yet they willingly share their jogging routines on apps like Strava, which has revealed sensitive and secret military data.

In the post-Roe v. Wade legal environment, there are concerns not only about period tracking apps but about correlating data on physical movements with online searches and phone data. Legislation like the recent Texas Senate Bill 8 anti-abortion law invokes “private individual enforcement mechanisms,” raising questions about who gets access to tracking data.

In 2019, the Missouri Department of Health stored data about the periods of patients at the state’s lone Planned Parenthood clinic, correlated with state medical records. Communications metadata can reveal who you are in touch with, when you were where, and who else was there–whether they are in your contacts or not.

Location data from apps on hundreds of millions of phones lets the Department of Homeland Security track people. Health wearables pose similar risks, and medical experts note a lack of awareness about the security of data they collect. Note the resemblance of your Fitbit or smartwatch to ankle bracelets people wear during court-ordered monitoring.

The most pervasive user of tracking in the U.S. is Immigration and Customs Enforcement (ICE), which amassed a vast amount of information without judicial, legislative or public oversight. Georgetown University Law Center’s Center on Privacy and Technology reported on how ICE searched the driver’s license photographs of 32% of all adults in the U.S., tracked cars in cities home to 70% of adults, and updated address records for 74% of adults when those people activated new utility accounts.

NO ONE IS WATCHING THE WATCHERS

Nobody expects to be invisible on streets, at borders, or in shopping centers. But who has access to all that surveillance data, and how long is it stored? There is no single U.S. privacy law at the federal level, and states cope with a regulatory patchwork; only five states–California, Colorado, Connecticut, Utah and Virginia–have privacy laws.

It is possible to limit location tracking on your phone, but not to avoid it completely. Data brokers are supposed to mask your personally identifiable data before selling it. But this “anonymization” is meaningless since individuals are easily identified by cross-referencing additional data sets. This makes it easy for bounty hunters and stalkers to abuse the system.
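The cross-referencing problem is easy to demonstrate. The sketch below joins a hypothetical "anonymized" broker file to a second data set on a few quasi-identifiers; whenever the combination is unique, the record is re-identified. All field names and values are invented for the example.

```python
# Minimal sketch of a linkage (re-identification) attack: an "anonymized"
# record that keeps quasi-identifiers (home area, work area, age band) can be
# joined against another data set that carries names.
anonymized_broker_data = [
    {"id": "u1", "home_zip": "90210", "work_zip": "90017", "age_band": "30-34"},
    {"id": "u2", "home_zip": "10001", "work_zip": "10036", "age_band": "45-49"},
]
public_records = [
    {"name": "Jane Doe", "home_zip": "90210", "work_zip": "90017", "age_band": "30-34"},
]

def reidentify(anon, public):
    keys = ("home_zip", "work_zip", "age_band")
    matches = []
    for a in anon:
        candidates = [p for p in public if all(p[k] == a[k] for k in keys)]
        if len(candidates) == 1:       # unique combination -> record is re-identified
            matches.append((a["id"], candidates[0]["name"]))
    return matches

print(reidentify(anonymized_broker_data, public_records))   # [('u1', 'Jane Doe')]
```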

The biggest risk to most people arises when there is a data breach, which is happening more often – whether it is a leaky app or careless hotel chain, a DMV data sale or a compromised credit bureau, or indeed a data brokering middleman whose cloud storage is hacked.

This illicit flow of data not only puts fuzzy notions of privacy in peril, but may put your addresses and passport numbers, biometric data and social media profiles, credit card numbers and dating profiles, health and insurance information, and more on sale.

https://www.fastcompany.com/90772483/yes-you-are-being-watched-even-if-no-one-is-looking-for-you

Chatbot data cannot fall into the hands of big tech


by ANDY PEART 

Imagine the world a decade from now. It’s no longer ruled by powerhouse countries such as the US or China, but by a handful of tech corporations that know everything about us; what we need, buy, say, and ultimately what we desire.
So join me for a short imaginary trip to our possible future. Welcome to the FAMGA (Facebook, Apple, Microsoft, Google, and Amazon) Republic!
We’ll see that the trend started back in 2018, the year that personal assistants really started gaining traction. Amazon’s Echo rapidly surpassed its previous high of 10 million units shipped while also being embedded into other devices. At the same time, Google made a serious play for its role in the automated conversational market and Facebook re-launched its chatbot initiatives to expand its reach.
Employing the power of their developer community allowed these tech giants to build vast libraries of FAMGA-based applications that consumers used endlessly on their phones, in their homes, and in every part of their lives.
This is the point in time when conversational applications really took off — which many people celebrated — but there was a big question left unanswered: What happens with all this data?
When people communicate in a natural and conversational way, they reveal more than just words. Individual preferences, views, opinions, feelings, and inclinations become part of the conversation.
It’s like being able to listen in, behind the scenes, to every sales assistant’s conversation and customer support agent’s interaction. You are able to understand people’s intentions, actions, and behaviors — maybe even better than they themselves do.

Chatbots and virtual assistants operate in a very similar format, like a dedicated focus group at your fingertips — comprised of your entire customer base, 24/7, 365 days a year.
Data is insight
When you have this insight, you can begin learning about what people want, taking customer interaction to a whole new level — you begin predicting with certainty.
In 10 years’ time, we’ll look back to the years leading up to the FAMGA Republic and how we turned a blind eye when we saw that sectors were being closed off, one by one.
For example, all of the insight and knowledge helped FAMGA launch the world’s first truly global bank, using the trusted relationship they had developed between users and tech to become the primary financial interface, relegating regional banks to simple service providers.  
Additionally, automakers who had once innocently adopted FAMGA digital assistants as the voice interface suddenly found that one of the key differentiators in autonomous vehicles, the personality of their vehicles, was now controlled by others.
Within a few short years, the global economy became controlled by a handful of corporations.
What the future holds
Fortunately for us, this vision isn’t a reality, yet. But it could be.
Data ownership in conversational applications is one of the biggest issues facing enterprises that are looking at developing their digital strategies. Positioned between your app and your customer, the technology provider knows everything your users say.
The information your customers are providing consists of valuable data points that can be used to build closer relationships. The data can pick up on cues and translate what looks like a simple transfer of money from one account to another into an understanding of actual customer choices. This helps provide a better understanding of your customers’ personal actions, such as buying a new refrigerator or going on vacation.
But when the technology provider holds the conversation, all of these opportunities to better understand your customer are lost. Customers are providing us with more information about their personal lives that could help us build closer relationships, but we are losing it — all while FAMGA is well on its way to building its new country.

The winner of the chatbot race won’t necessarily be the one with the smartest conversation. Instead, it will be the one holding the data at the end of the conversation. Lose that, and you might as well give up now: https://thenextweb.com/contributors/2018/02/11/chatbot-data-cannot-fall-hands-big-tech/

Will it really be possible, in the very near future, to “calculate” every person’s prospects? And how much is an employer allowed to know about you?

By José Luis Peñarredonda
26 March 2018
If you worked for Ford in 1914, chances are at some stage in your career a private investigator was hired to follow you home.
If you stopped for a drink, or squabbled with your spouse, or did something that might make you less of a competent worker the following day, your boss would soon know about it.
This sleuthing was partly because Ford’s workers earned a better salary than the competition. The car manufacturer raised pay from $2.39 a day to $5 a day, the equivalent of $124 (£88) today. But you had to be a model citizen to qualify.
Your house needed to be clean, your children attending school, your savings account had to be in good shape. If someone at the factory believed you were on the wrong path, you might not only miss out on a promotion, your job was on the line.
This ‘Big Brother’ operation was run by the Ford Sociological Department, a team of inspectors that arrived unannounced on employees’ doorsteps. Its aim was to “promote the health, safety and comfort of workers”, as an internal document put it. And, to be fair, it also offered everything from medical services to housekeeping courses.
The programme lasted eight years. It was expensive, and many workers resented its paternalism and intrusion. Today, most of us would find it unacceptable – what does my work have to do with my laundry, bank account or relationships?
Yet, the idea of employers trying to control workers’ lives beyond the workplace has persisted, and digital tools have made it easier than ever. Chances are, you use several technologies that could create a detailed profile of your activities and habits, both in the office and out of it. But what can (and can’t) employers do with this data? And, where do we draw the line?
What’s my worker score?
We’re all being graded every day. The expensive plane tickets I bought recently have already popped up in my credit score. The fact that I’ve stopped jogging every morning has been noted by my fitness app – and, if it were connected with an insurance company, this change might push up my premiums.
From my online activity, Facebook knows I love beer, and believes my screen is a good spot for hipster brewery adverts. One website recently claimed that I am Colombia’s 1,410th most influential Twitter user – something that could improve my credit score, it seems. And, yes, my desirability and efficiency as a worker is also up for evaluation and can be given a number.
And we’re not just talking the ‘rate-a-trader’ style online review process used on freelancer platforms or gig-economy services. A scoring system of sorts has lodged itself in the corporate world.
HR departments are crunching increasing volumes of data to measure employees in a more granular way. From software that records every keystroke to ‘smart’ coffee machines that will only give you a hot drink if you tap them with your work ID badge, there are more opportunities than ever for bosses to measure behaviour.
Some analysts think this industry could be worth more than $1 billion by 2022.
One big aim of data collection is to make “predictions about how long an employee will stay, and it may influence hiring, firing, or retention of people,” says Phoebe Moore, Associate Professor of Political Economy and Technology at Leicester University in the UK and author of the book The Quantified Self in Precarity: Work, Technology and What Counts.
Data collection is “changing employment relationships, the way people work and what the expectations can be”, says Moore.
One problem with this approach is that it’s blind to some of the non-quantifiable aspects of work. Some of the subtler things I do in order to be a better writer, for instance, are not quantifiable: having a drink with someone who tells me a great story, or imagining a piece on my commute. None of these things would show up in my ‘job score’. “A lot of the qualitative aspects of work are being written out,” says Moore, “because if you can’t measure them, they don’t exist”. 
The dilemma of data
A healthy, physically active person is a better worker, right? Research consistently suggests activity decreases absenteeism and increases productivity. This has spawned a thriving health and wellness industry with programmes worth billions.
Employees value these health initiatives not only because their bosses might allow them time off to participate but also because if they track exercise via their phone, smartwatch or fitness wristband they can earn rewards.
“I can just wear this device, and I get points and buy stuff for doing things I would already (be) doing anyway without it,” says Lauren Hoffman, a former salesperson for one of the programmes in the US, who was also enrolled in it herself.
Furthermore, the workplace offers an environment that can help people to reach their health goals. Research suggests that fitness programmes work better when they are combined with social encouragement, collaboration and competition. Offices can foster all that: they can organise running clubs, weekly fitness classes or competitions to help workers thrive.
There are several good business reasons to collect data on employees – from doing better risk management to examining whether social behaviours in the workplace can lead to gender discrimination. “Companies fundamentally don't understand how people interact and collaborate at work,” says Ben Waber, president and CEO of Humanyze, an American company which gathers and analyses data about the workplace. He says that he can show them.
Humanyze gathers data from two sources. The first is the metadata from employees’ communications: their email, phone or corporate messaging service. The company says analysing this metadata doesn’t involve reading the content of these messages, or identifying the individual people involved, but crunching more general information – duration, frequency and general location – which will, for example, tell them which department an employee belongs to.
The second area is data gathered from gadgets like Bluetooth and infrared sensors, which detect how many people are working in one particular part of an office and how they move around. They also use ‘supercharged’ ID badges that, as Waber says, are beefed up with “microphones which don't record what you say, but do voice-processing in real time.” This allows measurement of the proportion of time you speak, or how often people interrupt you.
After six weeks of research, the employer gets a ‘big picture’ of the problem it wants to solve, based on the analysed data. If the aim, for instance, is to boost sales, they can analyse what their best salespeople do that others don’t. Or if they want to measure productivity, they can infer that the more efficient workers talk more often with their managers.
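As a hedged illustration of what metadata-only analysis of this kind can involve (not Humanyze's actual pipeline), the sketch below counts how often each employee interacts with their manager per week using only sender, recipient and timestamp, never message content. The org chart, event format and field names are assumptions made for the example.

```python
# Illustrative sketch of communication-metadata analysis: aggregate how often
# each employee exchanges messages or calls with their manager per week.
# Only metadata (sender, recipient, week, duration) is used, never content.
from collections import defaultdict

org_chart = {"alice": "dana", "bob": "dana", "carol": "erik"}   # employee -> manager (assumed)

# (sender, recipient, iso_week, duration_minutes)
metadata = [
    ("alice", "dana", "2024-W10", 12),
    ("dana", "alice", "2024-W10", 7),
    ("bob", "dana", "2024-W10", 3),
    ("carol", "frank", "2024-W10", 20),
]

def manager_contact_per_week(events, chart):
    counts = defaultdict(int)   # (employee, week) -> number of manager interactions
    for sender, recipient, week, _duration in events:
        if chart.get(sender) == recipient:
            counts[(sender, week)] += 1
        elif chart.get(recipient) == sender:
            counts[(recipient, week)] += 1
    return counts

for (employee, week), n in manager_contact_per_week(metadata, org_chart).items():
    print(f"{employee} had {n} manager interaction(s) in {week}")
```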
Waber sees it as “a lens of very large work issues, like diversity, inclusion, workload assessment, workspace planning, or regulatory risk”. His business case is that these tools will help companies save millions of dollars and even years of time. 
Collection and protection
But not everyone is convinced of the usefulness of these techniques, or whether such personally intrusive technology can be justified. A PwC survey from 2015 reveals that 56% of employees would use a wearable device given by their employer if it was aimed at improving their wellbeing at work. “There should be some payback from something like this, some benefit in terms of their workplace conditions, or advantages,” says Raj Mody, an analyst from the firm. And Hoffman remembers that these programmes were not always an easy sell. “You’re going to get the data and you're going to use it against me,” she was often told by sceptical workers.
And there is a fundamental problem: these fitness tracking measurements are often inaccurate. People are very bad at self-reporting, and fitness trackers and smartphones are not exactly precise. A recent evidence review shows that different models and techniques gather different results and it is very difficult to draw trustworthy comparisons between them.
It is also unclear if counting steps, for instance, is actually a good way of measuring activity, both because this measurement doesn’t take intensity into account – a step made while running counts just as much as a step made walking at home – and walking is more difficult for some than others.
Another issue is the amount of data these programmes can collect. They not only track your daily activity, but also often offer health screenings for participants, which allows them to register things that don’t seem like your boss’s business: your cholesterol level, your weight, or even your DNA.
In most cases, it is illegal in the US and Europe for companies to discriminate against workers based on their health data or any genetic test results, but there are some grey areas. In 2010, Pamela Fink, the PR manager of an energy firm in the US, sued her employer because she claimed she was sacked due to a double mastectomy to reduce her probability of developing cancer. While the company didn’t have access to her DNA results, she contended that they knew about the risk because the surgery showed up on her insurance bills. The case was settled out of court.
Wellness programme providers say that employers only see aggregated and anonymised data, so they can’t target specific employees based on their wellness results. Humanyze ensures its clients are not forcing their employees to be monitored, but instead give them the chance to opt in. In a similar fashion to wellness programmes, they anonymise and collate the information that they share with employers. Waber is emphatic that his company never sells the data on to third parties, and emphasises transparency throughout the process.
But this kind of data could be used in more controversial ways, and the goodwill of the companies involved doesn’t eliminate all the risks. Data could be stolen in a cyberattack, for instance, or it could be used in ways that are not transparent for users. It “could be sold to basically anyone, for whatever purpose, and recirculated in other ways,” says Ifeoma Awunja, a sociologist at Cornell University who researches the use of health data in the workplace.
There are reports that some providers are doing just that already – and even if the data they sell is anonymous, it could be cross-referenced with other anonymous data to identify people. Not all these companies do it, and some say it is not smart business to do so. “Taking a short-term profit on user data would damage your company’s reputation, causing user volume to plummet and thus your value to clients to diminish,” says Scott Montgomery, CEO of Wellteq, a corporate wellness provider based in Singapore.
But even if all the companies did the right thing and acted only in their customers’ best interest, people in some places are still only protected by their wellness programme’s goodwill. The US law is “significantly behind” the European Union and other parts of the world in protecting users, says Awunja.
In the EU, a new General Data Protection Regulation (GDPR) will come into force this May, which will outlaw any use of personal data to which the user didn’t explicitly consent. In the US, the legislation varies between states. In some of them, sharing some health information with third parties is not illegal as long as the data doesn’t identify the person. Furthermore, according to Gary Phelan, a lawyer at Mitchell & Sheridan in the US, since this data is generally not considered medical data, it does not have the same privacy restrictions as medical data.
There is also the question of return on investment for the employers. Do they actually save businesses money? These programmes are meant to lower health insurance premiums both for companies and employees, since they are supposed to decrease health risk, sick days, and hospital costs. But it is not clear if this actually happens. A 2013 study by the Rand Corporation claims that, while these programmes save companies enough money to pay for themselves, they “are having little if any immediate effects on the amount employers spend on health care.”
With all these tools, “human beings are evaluated in terms of the risk that they pose to the firm,” says Awunja. Still, it’s a complicated balance: dealing with the everyday habits of employees as if they were just another hit to the bottom line sounds a lot like the old days of the Sociological Department. Whatever benefits these technologies can bring, they have to be balanced against the privacy rights and expectations of workers.
Balance
There is an episode of the TV show Black Mirror that offers a chilling warning. In it, every person is given a score based on their interactions on a social platform that looks strikingly like Instagram. This score defines almost every opportunity they have in life: what jobs they can get, where they live, which plane tickets they can buy or who they can date. In fact, in 2020 China will roll out a mandatory Citizen Score calculated from a number of data sources, from your purchase history to the books you read.
Although not quite as sinister, this illustrates the technological, legal and ethical limitations of doing something similar elsewhere. In most parts of the world, the law prevents your HR department from sharing or requesting data about you from your credit card provider, your healthcare provider, or your favourite online dating site, unless you explicitly consent that it can do so.
This should keep the most cynical temptations at bay for now, but how to reap the benefits of data in an acceptable way? There is a strong case for finding this balance: as Waber says, data can give you evidence-based advice for advancing your career, or for enhancing your effectiveness at work. Having a space for taking care of your health at work might improve your happiness at your job, and some studies suggest that this also translates into a productivity boost.
Part of the answer seems to be to agree to certain ethical standards. In a paper, Awunja proposes some practices like informing employees of the potential risks for discrimination with the data, not penalising those who decline to take part in these programmes, and setting a clear ‘expiration date’ to the collected data.
This is an important conversation to have, even if you are one of those with nothing to hide. As it turns out, it is very likely that giving away our data is going to be part of the everyday experience of work in the near future, at least in the corporate world.

yang2020.com

Wearable Brain Devices Will Challenge Our Mental Privacy

 A new era of neurotechnology means we may need new protections to safeguard our brain and mental experiences 

A last bastion of privacy, our brains have remained inviolate, even as sensors now record our heartbeats, breaths, steps and sleep. All that is about to change. An avalanche of brain-tracking devices—earbuds, headphones, headbands, watches and even wearable tattoos—will soon enter the market, promising to transform our lives. And threatening to breach the refuge of our minds.

Tech titans Meta, Snap, Microsoft and Apple are already investing heavily in brain wearables. They aim to embed brain sensors into smart watches, earbuds, headsets and sleep aids. Integrating them into our everyday lives could revolutionize health care, enabling early diagnosis and personalized treatment of conditions such as depression, epilepsy and even cognitive decline. Brain sensors could improve our ability to meditate, focus and even communicate with a seamless technological telepathy—using the power of thoughts and emotion to drive our interaction with augmented reality (AR) and virtual reality (VR) headsets, or even type on virtual keyboards with our minds.

But brain wearables also pose very real risks to mental privacy, freedom of thought and self-determination. As these devices proliferate, they will generate vast amounts of neural data, creating an intimate window into our brain states, emotions and even memories. We need the individual power to shutter this new view into our inner selves.

Employers already seek out such data, tracking worker fatigue levels and offering brain wellness programs to mitigate stress, via platforms that give them unprecedented access to employees’ brains. Cognitive and emotional testing based on neuroscience is becoming a new job-screening norm, revealing personality aspects that may have little to do with the job. In China, train conductors on the Beijing-Shanghai line, the busiest of its kind in the world, wear brain sensors throughout their work day. There are even reports of Chinese employees being sent home if their brain metrics are less than stellar. As companies embrace brain wearables that can track employees’ attention, focus and even boredom, without safeguards in place they could trample on employees’ mental privacy, eroding trust and well-being along with the dignity of work itself.

Governments, too, are seeking access to our brains, with a U.S. brain initiative seeking “every spike from every neuron” in the human brain, to reveal “how the firing of these neurons produced complex thoughts.” While aimed at the underlying causes of neurological and psychiatric conditions, this same investment could also enable government interference with freedom of thought—a freedom critical to human flourishing. From functional brain biometric programs under development to authenticate individuals—including those funded by the National Science Foundation at Binghamton University—to so-called brain-fingerprinting techniques used to interrogate criminal suspects—sold by companies like Brainwave Science and funded by law enforcement agencies from Singapore to Australia to the United Arab Emirates—we must act quickly to ensure neurotechnology benefits humanity rather than heralding an Orwellian future of spying on our brains.

The rush to hack the human brain veers from neuromarketing to the rabbit hole of social media and even to cognitive warfare programs designed to disable or disorient. These technologies should have our full attention. One neuromarketing campaign, conducted by Frito-Lay, used insights about how women’s brains could affect snacking decisions, then monitored brain activity while people viewed newly designed advertisements, allowing the company to fine-tune its campaigns to better capture attention and drive women to snack more on its products. Social media “like” buttons and notifications are features designed to draw us habitually back to platforms, exploiting our brains’ reward systems. Clickbait headlines and pseudoscience claims prey on our cognitive biases, hobbling critical thinking. And nations worldwide are considering possible military applications of neuroscience, which some planners call warfare’s “sixth domain” (adding to a list that includes land, sea, air, space and cyberspace).

As brain wearables and artificial intelligences advance, the line between human agency and machine intervention will also blur. When a wearable reshapes our thoughts and emotions, how much of our actions and decisions remain truly our own? As we begin to offload mental tasks to AI, we risk becoming overly dependent on technology, weakening independent thought and even our capacity for reflective decision-making. Should we allow AI to shape our brains and mental experiences? And how do we retain our humanity in an increasingly interconnected world remade by these two technologies?

Malicious use and even hacking of brain wearables is another threat. From probing for information to intercepting our PINs as we think or type them, neural cybersecurity will become essential. Imagine a world where brain wearables can track what we read and see, alter perceptions, manipulate emotions or even trigger physical pain. That’s a world that may soon arrive. Already, companies including China’s Entertech have accumulated millions of raw EEG recordings from individuals across the world through its popular consumer brain wearables, along with those individuals’ personal information and device and app usage. Entertech makes plain in its privacy policy that it also records personal information, GPS signals, device sensors, and the computers and services a person is using, including websites they may be visiting. We must ensure that brain wearables are designed with security in mind and with device and data safeguards in place to mitigate these risks.

We stand at an inflection point in the beginning of a brain wearable revolution. We need prudent vigilance and an open and honest debate about the risks and benefits of neurotechnology, to ensure it is used responsibly and ethically. With the right safeguards, neurotechnology could be truly empowering for individuals. To get there will require we recognize new digital age rights to preserve our cognitive liberty—self-determination over our brains and mental experiences. We must do so now, before the choice is no longer ours to make.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

https://www.scientificamerican.com/article/wearable-brain-devices-will-challenge-our-mental-privacy/


9.27.19

Our insecurities about public bathrooms have evolved over time—and so, too, should the way we design for digital privacy.

BY STEPHANIE THIEN HANG NGUYEN

In 16th-century Paris, the starkly primitive clashed with the beautifully refined. The dainty bourgeois would gather dressed in their finest frocks, wearing tall, veiled, conical headpieces atop golden brocades and long, pointed shoes.
As the wealthy class wined and dined and their bladders filled with drinks, distinguished citizens “did not go to the toilet—the toilet came to them,” Witold Rybczynski, the University of Pennsylvania professor and architect, wrote. It was an early version of a porta-potty, a moveable chamber pot in a box. Luckily, individual awareness had yet to emerge in Renaissance life—the words “embarrassment” and “self-consciousness” only appeared toward the end of the 17th century.
Let’s flush forward some 500 years. A customer needs to use the restroom in a coffee shop with a mediocre click-lock handle. Someone yanks on the door. Panic strikes as their bowels clench into a knot. Had the door swung open, it would have revealed someone sitting on the toilet with a frozen, wide-eyed look of frenzy, fear, and embarrassment. Clearly, norms about physical privacy and social shame have evolved across contexts and over time.
Today, we are grappling with how to control and design for privacy in the digital realm—and how to better communicate the urgency of these problems to users, many of whom may not be concerned.
“Physical and digital privacy are more closely aligned than different, but data is abstracted from a person or community,” explains University of Florida associate professor Jasmine McNealy. “Data privacy doesn’t have a body or tangible object to look at, which means we can’t necessarily imagine the harms.”
Without sights, sounds, and touch, it feels practically invisible. People are used to walls of legalese as a mental safety net denoting digital privacy—but that’s far from foolproof. It is only when someone sees a sudden, unbidden change in their bank account or health insurance status, for instance, that they realize their privacy has been breached. At that point, it’s likely too late.
What do we call this era of digital privacy? How might we better understand it as we try to shape norms that empower citizens to both protect themselves and be free to be themselves?
To create stronger privacy norms and policies in the digital world, we must stop thinking about privacy one-dimensionally. Privacy, like architecture, is a concept that embodies human values. It draws on psychology, sociology, economics, politics, and the complex systems that surround it. User preferences are not stagnant.
We must recognize the nuances of privacy in different contexts and cultures and design experiences that reflect the values of privacy as empowerment, freedom, and anonymity. Privacy is more than a tech problem with a tech solution.
When I interviewed dozens of participants donating genetic data for precision medicine research, privacy manifested in the form of social stigmatization or fear. “If I have an STD, abuse drugs, or have a terminal disease, I don’t want everyone to know that,” explained a participant. “I don’t want people to find out [my information] and jack up insurance policies,” explained another. That same person originally donated genetic data to “improve science so future generations might not have the same issue.” The tension is the desire for both precision and anonymity.
In another context, data privacy harms can appear in the form of racial bias, prohibiting future opportunities. Najarian Peters, assistant professor at Seton Hall’s Institute for Privacy Protection, is studying black parents who seek alternative education for their children like homeschooling or unschooling. “[Parents expressed they] do not want a permanent document to be created about their child because of possible misinterpretation and bias,” she explained. “Children should get their fair shake and not be categorized as a problem.”
Since there is often a divergence between how the user expects their data to be handled and how designers actually embed these features, we must translate and understand privacy for end users from norms to product design. Norms like anonymity and solitude in a physical realm are often difficult to achieve online. There are never-ending trade-offs like user empowerment versus convenience and individual versus population benefits.
No single product can provide a privacy experience that will work for everyone. We need to shift how we embed privacy in products and services by revisiting user expectations with actual outcomes and challenging cultural privacy norms.
For instance, it is a norm that many video conferencing technologies have a visible option to turn off video before starting a chat room. This is good progress that gives the user choice and agency on how much to reveal. Can “unsubscribe me” be replaced by “delete me” in email marketing letters? Perhaps we should question this norm and revisit user expectations with actual outcomes. Is surveillance less invasive if it is two-sided? Amazon Key is a service that gives delivery drivers the ability to enter your home via a smart lock on your door and leave your packages inside. The service is all about building a system of trust, but it’s a privacy conundrum. Amazon surveils you. The delivery person surveils your home and has physical access to everything you own. You surveil the delivery from your phone. Who’s controlling—or consuming—whom?
We are in the age of human data-collection ubiquity. Like an infinite game of tag, both society and technology are racing to catch up to one another.
We need more than just a terrifying bathroom-door-rattling moment with data-collecting companies—something that translates privacy into tangible harm worthy of change. Even after the Equifax breach and YouTube’s violations of children’s privacy law, some argue that current enforcement mechanisms are not enough.
“People don’t understand why [data privacy] is important—it needs to be clearer,” Emily Peterson-Cassin, digital rights advocate at Public Citizen, tells me. “What we need is a recapturing of the idea that our digital lives are our lives now.”
Until privacy norms begin to calcify in our time, we’ll need to double-check that the shoddy bathroom door is locked . . . and that nobody is recording what’s behind it.

https://www.fastcompany.com/90409598/what-the-first-porta-potty-can-teach-designers-about-digital-privacy


Putting a Price on Privacy

Last updated: November 4, 2019

For many, the way we live our lives has moved online. As a result, our information is constantly being processed through sign-up forms, submissions, online quizzes, and even sensitive documents. 
Companies must manage this data to keep it safe and prevent it from falling into the wrong hands, but issues related to leaks are still prevalent. Massive data breaches such as the Equifax leak and Cambridge Analytica scandal have even made national news, the latter of which involved the mining of personal information on Facebook. 
In light of these events, online privacy is becoming increasingly important, so we surveyed over 1,000 people about how much they valued their data and even asked them to assign a monetary value to different pieces of personal information.
How safe is our data, and what are we doing to protect ourselves? Read on to find out more.
Digital Security Measures…:
https://thebestvpn.com/putting-a-price-on-privacy/

DuckDuckGo, EFF, and others just launched privacy settings for the whole internet

BY STEVEN MELENDEZ

The new standard, called Global Privacy Control, will let you activate a browser setting to keep your data from being sold…:

https://www.fastcompany.com/90561555/global-privacy-control-duckduckgo-eff-mozilla
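
For the technically curious, the mechanics behind the headline are simple: under the draft Global Privacy Control specification, a participating browser attaches a Sec-GPC: 1 header to its requests (and exposes a navigator.globalPrivacyControl property to scripts), and sites are expected to treat that signal as an opt-out of having the visitor’s data sold or shared. The sketch below, in Python with Flask, shows one hypothetical way a site could honour the signal; the endpoint and response wording are invented for illustration, not taken from the standard or the article.

    # Hypothetical sketch: honouring the Global Privacy Control signal server-side.
    # Assumes the draft spec's "Sec-GPC: 1" request header; the endpoint is invented.
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/content")
    def content():
        if request.headers.get("Sec-GPC") == "1":
            # Treat the signal as an opt-out of the sale or sharing of personal data.
            return jsonify({"personal_data_sale": "opted out via GPC"})
        # Otherwise fall back to the site's normal consent flow.
        return jsonify({"personal_data_sale": "subject to site consent flow"})

    if __name__ == "__main__":
        app.run()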

02-18-20

iOS offers more tools than ever to defend yourself against hackers, nosy sites, and other intruders. Here’s why they matter and how to benefit from them…: https://www.fastcompany.com/90254589/use-these-11-critical-iphone-privacy-and-security-settings-right-now

What is Pegasus? How Surveillance Spyware Invades Phones

A cybersecurity expert explains the NSO Group’s stealthy software

End-to-end encryption is technology that scrambles messages on your phone and unscrambles them only on the recipients’ phones, which means anyone who intercepts the messages in between can’t read them. Dropbox, Facebook, Google, Microsoft, Twitter and Yahoo are among the companies whose apps and services use end-to-end encryption.
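
As a concrete illustration of that description, here is a minimal sketch using the PyNaCl library (chosen only for illustration; the article does not say which cryptography any of these services use). The sender encrypts with the recipient’s public key, so only the holder of the matching private key can read the message, and anything intercepted in transit is just ciphertext.

    # Minimal end-to-end encryption sketch with PyNaCl; illustrative only.
    from nacl.public import PrivateKey, Box

    # Each party generates a key pair; private keys never leave their devices.
    alice_private = PrivateKey.generate()
    bob_private = PrivateKey.generate()

    # Alice encrypts using her private key and Bob's public key.
    sending_box = Box(alice_private, bob_private.public_key)
    ciphertext = sending_box.encrypt(b"meet at noon")

    # Anyone intercepting the ciphertext in transit sees only random-looking bytes.
    # Only Bob, holding his private key, can decrypt it.
    receiving_box = Box(bob_private, alice_private.public_key)
    print(receiving_box.decrypt(ciphertext))  # b'meet at noon'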

This kind of encryption is good for protecting your privacy, but governments don’t like it because it makes it difficult for them to spy on people, whether tracking criminals and terrorists or, as some governments have been known to do, snooping on dissidents, protesters and journalists. Enter an Israeli technology firm, NSO Group.

The company’s flagship product is Pegasus, spyware that can stealthily enter a smartphone and gain access to everything on it, including its camera and microphone. Pegasus is designed to infiltrate devices running Android, Blackberry, iOS and Symbian operating systems and turn them into surveillance devices. The company says it sells Pegasus only to governments and only for the purposes of tracking criminals and terrorists.

HOW IT WORKS

Earlier versions of Pegasus were installed on smartphones through vulnerabilities in commonly used apps or by spear-phishing, which involves tricking a targeted user into clicking a link or opening a document that secretly installs the software. It can also be installed over a wireless transceiver located near a target, or manually if an agent can steal the target’s phone.

Since 2019, Pegasus users have been able to install the software on smartphones with a missed call on WhatsApp, and can even delete the record of the missed call, making it impossible for the phone’s owner to know anything is amiss. Another way is by simply sending a message to a user’s phone that produces no notification.

This means the latest version of this spyware does not require the smartphone user to do anything. All that is required for a successful spyware attack and installation is having a particular vulnerable app or operating system installed on the device. This is known as a zero-click exploit.

Once installed, Pegasus can theoretically harvest any data from the device and transmit it back to the attacker. It can steal photos and videos, recordings, location records, communications, web searches, passwords, call logs and social media posts. It also has the capability to activate cameras and microphones for real-time surveillance without the permission or knowledge of the user.

WHO HAS BEEN USING PEGASUS AND WHY

NSO Group says it builds Pegasus solely for governments to use in counterterrorism and law enforcement work. The company markets it as a targeted spying tool to track criminals and terrorists and not for mass surveillance. The company does not disclose its clients.

The earliest reported use of Pegasus was by the Mexican government in 2011 to track notorious drug baron Joaquín “El Chapo” Guzmán. The tool was also reportedly used to track people close to murdered Saudi journalist Jamal Khashoggi.

It is unclear who or what types of people are being targeted and why. However, much of the recent reporting about Pegasus centers on a list of 50,000 phone numbers. The list has been attributed to NSO Group, but its origins are unclear. A statement from Amnesty International in Israel said the list contains phone numbers that were marked as “of interest” to NSO’s various clients, though it’s not known whether any of the phones associated with those numbers have actually been tracked.

A media consortium, the Pegasus Project, analyzed the phone numbers on the list and identified over 1,000 people in over 50 countries. The findings included people who appear to fall outside of the NSO Group’s restriction to investigations of criminal and terrorist activity. These include politicians, government workers, journalists, human rights activists, business executives and Arab royal family members.

OTHER WAYS YOUR PHONE CAN BE TRACKED

Pegasus is breathtaking in its stealth and its seeming ability to take complete control of someone’s phone, but it’s not the only way people can be spied on through their phones. Some of the ways phones can aid surveillance and undermine privacy include location tracking, eavesdropping, malware and collecting data from sensors.

Governments and phone companies can track a phone’s location by tracking cell signals from cell tower transceivers and cell transceiver simulators like the StingRay device. Wi-Fi and Bluetooth signals can also be used to track phones. In some cases, apps and web browsers can determine a phone’s location.

Eavesdropping on communications is harder to accomplish than tracking, but it is possible in situations in which encryption is weak or lacking. Some types of malware can compromise privacy by accessing data.

The National Security Agency has sought agreements with technology companies under which the companies would give the agency special access into their products via backdoors, and has reportedly built backdoors on its own. The companies say that backdoors defeat the purpose of end-to-end encryption.

The good news is, depending on who you are, you’re unlikely to be targeted by a government wielding Pegasus. The bad news is, that fact alone does not guarantee your privacy.

https://theconversation.com/what-is-pegasus-a-cybersecurity-expert


Smart talking: are our devices threatening our privacy?

Millions of us now have virtual assistants, in our homes and our pockets. Even children’s toys are getting smart. But when we talk to them, who is listening? By James Vlahos
Tue 26 Mar 2019 06.00 
 On 21 November 2015, James Bates had three friends over to watch the Arkansas Razorbacks play the Mississippi State Bulldogs. Bates, who lived in Bentonville, Arkansas, and his friends drank beer and did vodka shots as a tight football game unfolded. After the Razorbacks lost 51–50, one of the men went home; the others went out to Bates’s hot tub and continued to drink. Bates would later say that he went to bed around 1am and that the other two men – one of whom was named Victor Collins – planned to crash at his house for the night. When Bates got up the next morning, he didn’t see either of his friends. But when he opened his back door, he saw a body floating face-down in the hot tub. It was Collins.
A grim local affair, the death of Victor Collins would never have attracted international attention if it were not for a facet of the investigation that pitted the Bentonville authorities against one of the world’s most powerful companies – Amazon. Collins’ death triggered a broad debate about privacy in the voice-computing era, a discussion that makes the big tech companies squirm.
The police, summoned by Bates the morning after the football game, became suspicious when they found signs of a struggle. Headrests and knobs from the hot tub, as well as two broken bottles, lay on the ground. Collins had a black eye and swollen lips, and the water was darkened with blood. Bates said that he didn’t know what had happened, but the police officers were dubious. On 22 February 2016 they arrested him for murder.
Searching the crime scene, investigators noticed an Amazon Echo. Since the police believed that Bates might not be telling the truth, officers wondered if the Echo might have inadvertently recorded anything revealing. In December 2015, investigators served Amazon with a search warrant that requested “electronic data in the form of audio recordings, transcribed records or other text records”.
Amazon turned over a record of transactions made via the Echo but not any audio data. “Given the important first amendment and privacy implications at stake,” an Amazon court filing stated, “the warrant should be quashed.” Bates’s attorney, Kimberly Weber, framed the argument in more colloquial terms. “I have a problem that a Christmas gift that is supposed to better your life can be used against you,” she told a reporter. “It’s almost like a police state.”
With microphone arrays that hear voices from across the room, Amazon’s devices would have been coveted by the Stasi in East Germany. The same can be said of smarthome products from Apple, Google and Microsoft, as well as the microphone-equipped AIs in all of our phones. As the writer Adam Clark Estes put it: “By buying a smart speaker, you’re effectively paying money to let a huge tech company surveil you.”
Amazon, pushing back, complains that its products are unfairly maligned. True, the devices are always listening, but by no means do they transmit everything they hear. Only when a device hears the wake word “Alexa” does it beam speech to the cloud for analysis. It is unlikely that Bates would have said something blatantly incriminating, such as: “Alexa, how do I hide a body?” But it is conceivable that the device could have captured something of interest to investigators. For instance, if anyone intentionally used the wake word to activate the Echo – for a benign request such as asking for a song to be played, say – the device might have picked up pertinent background audio, like people arguing. If Bates had activated his Echo for any request after 1am, that would undercut his account of being in bed asleep.
In August 2016, a judge, apparently receptive to the notion that Amazon might have access to useful evidence, approved a second search warrant for police to obtain the information the company had withheld before. At this point in the standoff, an unlikely party blinked – Bates, who had pleaded not guilty. He and his attorney said they didn’t object to police getting the information they desired. Amazon complied, and if the Echo captured anything incriminating, police never revealed what it was. Instead, in December 2017, prosecutors filed a motion to dismiss the case, saying there was more than one reasonable explanation for the death of Collins. But the surveillance issue raised so dramatically by the case is unlikely to go away.
Tech companies insist they are not spying on their customers via virtual assistants and home gadgets, and that they only ever listen when expressly commanded to do so. These claims, at least as far as they can be externally verified, appear to be true. But this doesn’t mean no listening is happening, or couldn’t happen, in ways that challenge traditional notions of privacy.
There are a number of ways in which home devices could be used that challenge our ideas of privacy. One is eavesdropping to improve quality. Hello Barbie’s digital ears perk up when you press her glittering belt buckle. Saying the phrase “OK, Google” wakes up that company’s devices. Amazon’s Alexa likes to hear her name. But once listening is initiated, what happens next?
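Before looking at what individual companies do, it helps to picture the gating step itself. The sketch below is a deliberately simplified, hypothetical rendering of the design the vendors describe: audio is analysed locally in a rolling buffer, and only what follows a detected wake word is streamed off the device. Every function here is an invented stand-in, not any vendor’s actual code.

    # Hypothetical sketch of wake-word gating; all names are invented stand-ins.
    import collections

    WAKE_WORD = "alexa"
    audio_buffer = collections.deque(maxlen=20)   # rolling window of recent audio chunks

    def detect_wake_word(chunks):
        # Stand-in for a small on-device keyword-spotting model.
        return any(WAKE_WORD in chunk for chunk in chunks)

    def send_to_cloud(chunk):
        # Stand-in for the network call that privacy debates focus on.
        print("streamed to cloud:", chunk)

    def listen(chunks_from_mic):
        streaming = False
        for chunk in chunks_from_mic:
            if streaming:
                send_to_cloud(chunk)              # only post-wake-word audio leaves the device
                streaming = chunk != "<silence>"  # stop at the end of the utterance
            else:
                audio_buffer.append(chunk)        # everything else stays local
                if detect_wake_word(audio_buffer):
                    streaming = True
                    audio_buffer.clear()

    listen(["quiet room", "alexa", "play some jazz", "<silence>", "private conversation"])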
Sources at Apple, which prides itself on safeguarding privacy, say that Siri tries to satisfy as many requests as possible directly on the user’s iPhone or HomePod. If an utterance needs to be shipped off to the cloud for additional analysis, it is tagged with a coded identifier rather than a user’s actual name. Utterances are saved for six months so the speech recognition system can learn to better understand the person’s voice. After that, another copy is saved, now stripped of its identifier, for help with improving Siri for up to two years.
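Taken at face value, that kind of handling implies a store keyed by a random identifier rather than a name, with the link expiring on a timer. The sketch below is an assumption-laden illustration of the general pattern, not Apple’s actual pipeline; all names and the exact retention window are placeholders.

    # Illustrative sketch of pseudonymous retention; not any company's real pipeline.
    import uuid
    from datetime import datetime, timedelta

    LINKED_RETENTION = timedelta(days=182)        # roughly six months, per the description above

    class UtteranceStore:
        def __init__(self):
            self.records = {}                     # random id -> [timestamp, audio, device_id]

        def save(self, audio, device_id):
            record_id = str(uuid.uuid4())         # coded identifier, not the user's name
            self.records[record_id] = [datetime.utcnow(), audio, device_id]
            return record_id

        def expire_links(self, now=None):
            now = now or datetime.utcnow()
            for record in self.records.values():
                if now - record[0] > LINKED_RETENTION:
                    record[2] = None              # strip the identifier; keep audio for tuning

    store = UtteranceStore()
    store.save(b"what's the weather", device_id="device-123")
    store.expire_links()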
Most other companies do not emphasise local processing and instead always stream audio to the cloud, where more powerful computational resources await. Computers then attempt to divine the user’s intent and fulfil it. After that happens, the companies could erase the request and the system’s response, but they typically don’t. The reason is data: in conversational AI, the more data you have, the better.
Virtually all other botmakers, from hobbyists to the AI wizards at big tech companies, review at least some of the transcripts of people’s interactions with their creations. The goal is to see what went well, what needs to be improved and what users are interested in discussing or accomplishing. The review process takes many forms.
The chat logs may be anonymised so the reviewer doesn’t see the names of individual users. Or reviewers may see only summarised data. For instance, they might learn that a conversation frequently dead-ends after a particular bot utterance, which lets them know the statement should be adjusted. Designers at Microsoft and Google and other companies also receive reports detailing the most popular user queries so they know what content to add.
But the review process can also be shockingly intimate. In the offices of one conversational-computing company I visited, employees showed me how they received daily emails listing recent interchanges between people and one of the company’s chat apps.
The employees opened one such email and clicked on a play icon.
In clear digital audio, I heard the recorded voice of a child who was free-associating. “I am just a boy,” he said. “I have a green dinosaur shirt ... and, uh, giant feet ... lots of toys in my house and a chair ... My mom is only a girl, and I know my mom, she can do everything she wants to do. She always goes to work when I get up but at night she comes home.”
There was nothing untoward in the recording. But as I listened to it, I had the unsettling feeling of hovering invisibly in the little boy’s room. The experience made me realise that the presumption of total anonymity when speaking to a virtual assistant on a phone or smarthome device – there is only some computer on the other end, right? – is not guaranteed. People might be listening, taking notes, learning.
Eavesdropping may also occur by accident. On 4 October 2017, Google invited journalists to a product unveiling at the SFJazz Center in San Francisco. Isabelle Olsson, a designer, got the job of announcing the new Google Home Mini, a bagel-size device that is the company’s answer to the Amazon Echo Dot. “The home is a special intimate place, and people are very selective about what they welcome into it,” Olsson said. After the presentation, Google gave out Minis as swag to the attendees. One of them was a writer named Artem Russakovskii, and he could be forgiven for later thinking that he hadn’t been selective enough about what he welcomed into his home.
After having the Mini for a couple of days, Russakovskii went online to check his voice search activity. He was shocked to see that thousands of short recordings had already been logged – recordings that never should have been made. As he would later write for the Android Police website: “My Google Home Mini was inadvertently spying on me 24/7 due to a hardware flaw.” He complained to Google and within five hours the company had sent a representative to swap out his malfunctioning device for two replacement units.
Like other similar devices, the Mini could be turned on using the “OK, Google” wake phrase or by simply hitting a button on top of the unit. The problem was that the device was registering “phantom touch events,” Russakovskii wrote. Google would later say the problem affected only a small number of units released at promotional events. The problem was fixed via a software update. To further dispel fears, the company announced that it was permanently disabling the touch feature on all Minis.
This response, however, wasn’t enough to satisfy the Electronic Privacy Information Center, an advocacy group. In a letter dated 13 October 2017, it urged the Consumer Product Safety Commission to recall the Mini because it “allowed Google to intercept and record private conversations in homes without the knowledge or consent of the consumer”.
No information has emerged to suggest that Google was spying on purpose. Nonetheless, if a company the calibre of Google can make such a blunder, then other companies might easily make similar mistakes as voice interfaces proliferate.
If you want to know whether government agents or hackers might be able to hear what you say to a voice device, consider what happens to your words after you have spoken. Privacy-minded Apple retains voice queries but decouples them from your name or user ID. The company tags them with a random string of numbers unique to each user. Then, after six months, even the connection between the utterance and the numerical identifier is eliminated.
Google and Amazon, meanwhile, retain a link between the speaker and what was said. Any user can log into their Google or Amazon account and see a listing of all of the queries. I tried this on Google, and I could listen to any given recording. For instance, after clicking on a play icon from 9.34am on 29 August 2017, I heard myself ask: “How do I say ‘pencil sharpener’ in German?” Voice records can be erased, but the onus is on the user. As a Google user policy statement puts it: “Conversation history with Google Home and the Google Assistant is saved until you choose to delete it.”
Is this a new problem in terms of privacy? Maybe not. Google and other search engines similarly retain all of your typed-in web queries unless you delete them. So you could argue that voice archiving is simply more of the same. But to some people, being recorded feels much more invasive. Plus, there is the issue of by-catch.
Recordings often pick up other people – your spouse, friends, kids – talking in the background.
For law enforcement agencies to obtain recordings or data that are stored only locally (ie on your phone, computer or smarthome device), they need to obtain a search warrant. But privacy protection is considerably weaker after your voice has been transmitted to the cloud. Joel Reidenberg, director of the Center on Law and Information Policy at Fordham Law School in New York, says “the legal standard of ‘reasonable expectation of privacy’ is eviscerated. Under the fourth amendment, if you have installed a device that’s listening and is transmitting to a third party, then you’ve waived your privacy rights.” According to a Google transparency report, US government agencies requested data on more than 170,000 user accounts in 2017. (The report does not specify how many of these requests, if any, were for voice data versus logs of web searches or other information.)
If you aren’t doing anything illegal in your home – or aren’t worried about being falsely accused of doing so – perhaps you don’t worry that the government could come calling for your voice data. But there is another, more broadly applicable risk when companies warehouse all your recordings. With your account login and password, a hacker could hear all the requests you made in the privacy of your home.
Technology companies claim they don’t eavesdrop nefariously, but hackers have no such aversion. Companies employ password protection and data encryption to combat spying, but testing by security researchers as well as breaches by hackers demonstrate that these protections are far from foolproof.
Consider the CloudPets line of stuffed animals, which included a kitten, an elephant, a unicorn and a teddy bear. If a child squeezed one of these animals, he or she could record a short message that was beamed via Bluetooth to a nearby smartphone. From there, the message was sent to a distant parent or other relative, whether she was working in the city or fighting a war on the other side of the world. The parent, in turn, could record a message on her phone and send it to the stuffed animal for playback.
It was a sweet scenario. The problem was that CloudPets placed the credentials for more than 800,000 customers, along with 2m recorded messages between kids and adults, in an easily discoverable online database. Hackers harvested much of this data in early 2017 and even demanded ransom from the company before they would release their ill-gotten treasure.
Paul Stone, a security researcher, discovered another problem: the Bluetooth pairing between CloudPets animals and the companion smartphone app didn’t use encryption or require authentication. After purchasing a stuffed unicorn for testing, he hacked it.
In a demonstration video he posted online, Stone got the unicorn to say: “Exterminate, annihilate!” He triggered the microphone to record, turning the plush toy into a spy. “Bluetooth LE typically has a range of about 10-30 metres,” Stone wrote on his blog, “so someone standing outside your house could easily connect to the toy, upload audio recordings, and receive audio from the microphone.”
Plush toys may be, well, soft targets for hackers, but the vulnerabilities they exhibit are sometimes found in voice-enabled, internet-connected devices for adults. “It’s not that the risks are particularly any different to the ones you and I face every day with the volumes of data we produce and place online,” says security researcher Troy Hunt, who documented the CloudPets breach. “It’s that our tolerances are very different when kids are involved.”
Other researchers have identified more technologically sophisticated ways in which privacy might be violated. Imagine someone is trying to take control of your phone or other voice AI device simply by talking to it. The scheme would be foiled if you heard them doing so. But what if the attack was inaudible? That is what a team of researchers at China’s Zhejiang University wanted to investigate for a paper that was published in 2017. In the so-called DolphinAttack scenario that the researchers devised, the hacker would play unauthorised commands through a speaker that he planted in the victim’s office or home. Alternatively, the hacker could tote a portable speaker while strolling by the victim. The trick was that those commands would be played in the ultrasonic range above 20kHz – inaudible to human ears but, through audio manipulation by the researchers, easily perceptible to digital ones.
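The signal-processing trick at the heart of the paper is ordinary amplitude modulation: a recorded voice command is shifted onto an ultrasonic carrier, and the nonlinear response of a phone’s microphone hardware effectively demodulates it back into the audible band for the assistant to hear. The toy numpy sketch below shows only that modulation step; the sample rate, carrier frequency and stand-in “command” are illustrative values, not the researchers’ exact parameters.

    # Toy sketch of the ultrasonic amplitude modulation idea; values are illustrative.
    import numpy as np

    SAMPLE_RATE = 192_000            # high enough to represent an ultrasonic carrier
    CARRIER_HZ = 25_000              # above ~20 kHz, inaudible to most people
    t = np.arange(0, 1.0, 1 / SAMPLE_RATE)

    # Stand-in for a recorded voice command (here just a 300 Hz tone).
    command = np.sin(2 * np.pi * 300 * t)

    # Classic AM: the command rides on the ultrasonic carrier.
    carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
    ultrasonic_attack = (1 + 0.8 * command) * carrier

    # Played through a capable speaker, humans hear nothing; a microphone's
    # nonlinearity can demodulate the envelope back into audible speech.
    print(ultrasonic_attack[:5])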
In their laboratory tests, the scientists successfully attacked the voice interfaces of Amazon, Apple, Google, Microsoft and Samsung. They tricked those voice AIs into visiting malicious websites, sending phoney text messages and emails, and dimming the screen and lowering the volume to help conceal the attack. The researchers got the devices to place illegitimate phone and video calls, meaning that a hacker could listen to and even see what was happening around a victim. They even hacked their way into the navigation system of an Audi SUV.
Most people don’t want hackers, police officers or corporations listening in on them. But there is a final set of scenarios that confuses the surveillance issue. In reviewing chat logs for quality control in the manner described above, conversation designers might hear things that almost beg them to take action.
Take the creators of Mattel’s Hello Barbie: in that review process, they struggled with a disturbing set of hypothetical scenarios. What if a child told the doll “My daddy hits my mom”? Or “My uncle has been touching me in a funny place”? The writers felt it would be a moral failure to ignore such admissions. But if they reported what they heard to the police, they would be assuming the role of Big Brother. Feeling uneasy, they decided Barbie’s response should be something like: “That sounds like something you should tell to a grownup whom you trust.”
Mattel, however, seems willing to go further. In an FAQ about Hello Barbie, the company wrote that conversations between children and the doll are not monitored in real time. But afterward, the dialogues might occasionally be reviewed to aid product testing and improvement. “If in connection with such a review we come across a conversation that raises concern about the safety of a child or others,” the FAQ stated, “we will cooperate with law enforcement agencies and legal processes as required to do so or as we deem appropriate on a case-by-case basis.”
The conundrum similarly challenges the big tech companies.
Because their virtual assistants handle millions of voice queries per week, they don’t have employees monitoring utterances on a user-by-user basis. But the companies do train their systems to catch certain highly sensitive things people might say. For instance, I tested Siri by saying: “I want to kill myself.” She replied: “If you are thinking about suicide, you may want to speak with someone at the National Suicide Prevention Lifeline.” Siri supplied the telephone number and offered to place the call.
Thanks, Siri. But the problem with letting virtual assistants look out for us is that the role suggests major responsibility with ill-defined limits. If you tell Siri that you are drunk, she sometimes offers to call you a cab. But if she doesn’t, and you get into a car accident, is Apple somehow responsible for what Siri failed to say?
When is a listening device expected to take action? If Alexa overhears someone screaming “Help, help, he’s trying to kill me!”, should the AI automatically call the police?
The preceding scenarios are not far-fetched to analyst Robert Harris, a communication industry consultant. He argues that voice devices are creating a snarl of new ethical and legal issues. “Will personal assistants be responsible for the ... knowledge that they have?” he says. “A feature like that sometime in the future could become a liability.”
The uses of AI surveillance make clear that you should scrutinise each one of these technologies you allow into your life. Read up on just how and when the digital ears are turned on. Find out what voice data is retained and how to delete it if you desire. And if in doubt – especially with applications made by companies whose privacy policies can’t be easily understood – pull the plug.
This is an edited extract from Talk to Me: Apple, Google, Amazon and the Race for Voice-Controlled AI, published on 28 March by Penguin Random House. To buy a copy for £17.60 visit guardianbookshop.com or call 0330 333 6846
https://www.theguardian.com/technology/2019/mar/26/smart-talking-are-our-devices-threatening-our-privacy?

Smartphones Are Spies.
Here’s Whom They Report To.


YOUR SMARTPHONE is probably sending your precise location to companies right now. Their job is to turn your shopping trip or doctor’s visit into “Big Data” — another term for corporate intelligence. So far, the companies and individuals profiting from your everyday movements have mostly evaded scrutiny.
As Times Opinion continues reporting on a giant trove of mobile phone location data, the companies and people profiting from the privacy invasion are coming into focus.
So who, exactly, is watching, and why — and where is all that information going?
THE PLAYERS
Google Maps is possibly the most popular location-based app in the world, with over one billion users active each month, most of whom are most likely enabling location tracking. Large tech companies like Google and Facebook are more likely to keep the invasive data they collect to themselves for their own internal use, repurposing it to improve their products, for marketing and other analyses.
But many other location data companies aren’t household names. Smaller players mostly operate behind the scenes on many of your favorite apps, using software designed to quietly collect location data from your phone’s sensors after you consent (more about that in a minute). Many have labyrinthine privacy policies that vaguely explain their permissions, in technical and nuanced language that may be confusing to average smartphone users.
The industry has evolved to sprout even more companies, specializing in monitoring phones via Bluetooth signals or improving the technology that lets it all happen. In other cases, location data is funneled into marketing companies and used to create targeted advertising. (Companies can work with data derived from GPS sensors, Bluetooth signals and other sources. Not all companies in the location data business collect, buy, sell or work with granular location data.)
By design, it’s often nearly impossible to know which companies receive your location information or what they do with it. Some are startups with only a few dozen employees and modest funding. Others are established players with significant investment.
Because the collection of location data is largely unregulated, these companies can legally get access to phone location sensors and then buy and resell the information they gather in perpetuity. Not all companies do that, but some do. The business opportunities are vast. And investors have noticed. Many advertising executives have independently described the location data industry to us as “the Wild West.”
The advertising ecosystem is also incredibly complex. The number of companies has grown from about 150 in 2011 to over 7,000 this year, according to Marketing Technology Media.
That complexity, according to an ad industry veteran, is by design: “Everybody knows their one little part and basically nothing more. Every company is just one micronode of the ecosystem. Nobody can see the whole thing.”
THE TECH
It’s costly and challenging to build apps and large audiences from scratch. To get around this, smaller companies piggyback on bigger app developers, inserting their tracking programs into established apps via something called software development kits (known as S.D.K.s).
Companies often pay the apps for access, doling out as much as $20 per 1,000 unique users each month or as little as $2 per 1,000 — depending on how eagerly data companies want the data and how much value they expect to derive from it — according to a former employee of a location data company who was responsible for recruiting apps to use its S.D.K.
“A lot of them were small application developers,” the former employee said. Deals with small companies could be struck in less than a week, with the negotiation focusing almost entirely on the financial compensation, the person said. “They were just cash-driven companies where any incremental amount of revenue was helpful.”
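To put those figures in rough context: at the reported rates, an app with 500,000 monthly active users could take in anywhere from about $1,000 to $10,000 a month simply for bundling a single location S.D.K., which helps explain why small, cash-driven developers found the deals so easy to accept.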
Many S.D.K.s provide useful and sometimes vital services, like login integration or mapping technology. Facebook, Google and Amazon have S.D.K.s inside all kinds of apps. In the case of these tech giants, the S.D.K.s help provide web traffic analytics, facilitate payments or run ads.
In either case, the S.D.K. makers receive user data from that app — potentially over a billion datapoints each day. And once the companies have legally obtained it, there are few legal restrictions on what they can do with it. Some turn around and sell that data for profit.
“It’s the industry standard,” an online ad industry veteran told us, speaking on condition of anonymity. “And every app is potentially leaking data to five or 10 other apps. Every S.D.K. is taking your data and doing something different — combining it with other data to learn more about you. It’s happening even if the company says they don’t share data. Because they’re not technically sharing it; the S.D.K. is just pulling it out. Nobody has any privacy.”
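A highly simplified sketch of the architecture being described: the host app initialises a kit for some legitimate feature, and the kit independently forwards location readings to its own servers. Every class, endpoint and method name below is invented; this is not any real S.D.K.’s code.

    # Invented sketch of how a bundled S.D.K. can siphon location data.
    import json, urllib.request

    class AnalyticsSDK:
        ENDPOINT = "https://collector.example.com/v1/pings"   # the kit's own server, not the app's

        def __init__(self, api_key):
            self.api_key = api_key

        def log_event(self, name):
            print("useful feature:", name)                    # what the app developer integrated it for

        def on_location_update(self, lat, lon, device_id):
            self.log_event("location_update")
            payload = json.dumps({"key": self.api_key, "lat": lat,
                                  "lon": lon, "device": device_id}).encode()
            urllib.request.urlopen(self.ENDPOINT, data=payload)  # side channel to the data broker

    # The host app only ever sees the two public calls above.
    sdk = AnalyticsSDK(api_key="demo")
    sdk.log_event("app_open")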
How is this all allowed? Technically speaking, you consented. Location data companies rely on those “I agree” screens and privacy policies to create the legal and ethical basis for their business. The companies justify owning and monetizing the most intimate details on our daily travels by suggesting our movements are anonymous and impersonal.
“We don’t have a direct relationship with the app user or the consumer,” said Brian Czarny, chief marketing officer of Factual, a location data company based in Los Angeles, which says it doesn’t sell any of the raw data it collects.
He added: “We don’t even look at it as a user. We look at it as a device.”
THE APPS
It’s hard to know exactly which apps are sharing and profiting from your location data. Even apps that work with location data companies might have specific arrangements that limit how it’s resold or used for analysis and advertising outside the app. This chart, using data from the S.D.K.-tracking company MightySignal, shows the categories of apps most commonly working with S.D.K.s.
While this list includes more than 3,400 titles, many apps that collect and share location data don’t send it directly to third parties within the app. It’s ultimately impossible to identify all the apps involved.
Simply by downloading an app and agreeing to the terms of service, you’re potentially exposing your sensitive information to dozens of other technology companies, ad networks, data brokers and aggregators.
THE BUSINESS
Sharing your location data isn’t always bad. Many apps that use location do so with clear disclosures and provide useful services. Yet in some cases, companies collect the data seemingly for one purpose but can use it for another.
In a test by Times Opinion earlier this year, the music and podcasting app iHeartRadio asked for location services to “get your favorite DJs.” But the app quickly sent the phone’s precise geolocation to the data company Cuebiq via its S.D.K. Like other location data companies, Cuebiq uses location data to fuel analysis, like measuring whether people visited a store after seeing an online ad or helping marketers build more detailed profiles for targeted advertising.
In an emailed statement, iHeartRadio said that it complies “with all applicable laws in connection with its use of location data” and that “our privacy policy includes fulsome disclosure around location use.” In the latest version of the app, the consent screen includes more details, adding that the company “may also use or share location for advertising and analytics.”
In another test, the popular weather app MyRadar sent the phone’s precise location to Cuebiq about 20 times while it was open during an eight-minute walk in Brooklyn. While the app included clearer details on its consent screen detailing how location data would be used, it’s difficult to evaluate the trade-offs without being able to see how frequent and precise the tracking really is. MyRadar did not respond to a request for comment.
[Map: location pings sent from an S.D.K. during a walk in Brooklyn. Walking path and timing are inferred; satellite imagery: Microsoft.]
Another example is OneSignal, which specializes in mobile and desktop notifications but built a side business by collecting and selling location data. If users agreed to share their location with an app for a local notification, OneSignal could collect it via its S.D.K. and then make money by selling the data to third parties.
The day before we were scheduled to speak with OneSignal about these practices, it announced it would stop reselling data. (In an interview, it said the change had been planned for some time.) The company’s co-founder and chief executive, George Deglin, said that revenue from reselling was relatively small and that the public “is leaning more negative now” over companies profiting from their users’ data.
That negativity has grown as Facebook scandals, data leaks and security breaches have made Americans more concerned than ever about what is happening to their data. People might have felt comfortable giving up their location before these breaches, but would they consent today?
Once you’ve entered the location data marketplace, you’re there forever.



Blacklists and redlists: How China’s Social Credit System actually works
OCT 23, 2018
When a young mother from Chengdu wanted to return home from a visit to Beijing in May 2016, the only option she had was to travel for 20 hours on a rickety train to complete the 1,800-kilometer journey.
The woman, who told reporters her surname was Wei, had been put on a government blacklist that prevented her from purchasing certain items and services that required identification verification—including tickets for air and high-speed rail travel.
Wei, who had divorced a year earlier, had become entangled in a legal dispute with her ex-husband who, unbeknownst to her, had filed a suit against her over visitation rights to their son.
Much has been written about China’s emerging tools for social control. But few topics have garnered as much attention as the country’s nascent Social Credit System, a framework to monitor and manipulate citizen behavior using a dichotomy of punishments and rewards.
The idea is simple: By keeping and aggregating records throughout the government’s various ministries and departments, Chinese officials can gain insight into how people behave and develop ways to control them.
The goal, writes Rogier Creemers, a postdoctoral scholar specializing in the law and governance of China at Leiden University in the Netherlands, is “cybernetic” behavioral control, allowing individuals to be monitored and immediately confronted with the consequences of their actions. In so doing, authorities can enhance the country’s expanding surveillance apparatus.
Some draw comparisons to the British/US science fiction television series Black Mirror and its speculative vision of the future. Others see parallels with dystopian societies penned by 20th-century writers such as George Orwell. In nearly all cases, the labels of the Social Credit System have been misappropriated.
Despite its name, it isn’t a single system, and it’s not monolithic, as many reports claim. Not every one of the country’s 1.4 billion citizens is being rated on a three-digit scale. Instead, it’s a complex ecosystem containing numerous subsystems, each at various levels of development and affecting different people.
Blacklists—and “redlists”—form the backbone of the Social Credit System, not a much-debated “social credit score.” Blacklists punish negative behavior while redlists reward positive behavior. According to the planning outline released by the State Council, China’s cabinet, in mid-2014, the system’s objective is to encourage individuals to be trustworthy under the law and to dissuade them from breaking trust, in order to promote a “sincerity culture.”
Even so, an intricate web of social credit systems is coming to China—only perhaps not in the way, or at the speed, that’s generally expected. Many obstacles curb the implementation of a fully-fledged national system, including inadequate technology, insular mindsets among government ministries that jealously guard their data, and a growing awareness of the importance of privacy among China’s educated urban class.
Early experiments
The concept of a system of social credit first emerged in 1999 when officials aimed to strengthen trust in the country’s emerging market economy. However, the focus quickly shifted from building financial creditworthiness to encompass the moral actions of the country’s enterprises, officials, judiciary, and citizens.
More recently, in 2010, Suining County, in eastern China’s Jiangsu Province, began experimenting with a system to rate its citizens. The system was designed to quantify individuals’ behavior: points could be deducted for breaking laws, but also for deviating from social norms and political positioning. Residents were initially awarded 1,000 points. Running a red light, driving while drunk, bribing a public official, or failing to support elderly family members resulted in a 50-point deduction.
The total would then be used to assign an A-to-D rating. A-ratings were given for totals above 970 points, while those with fewer than 599 points were given D-ratings. Lower-rated citizens had a harder time accessing social welfare and government housing. More than half of an individual’s points related to social management.
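Based only on the figures reported above, the arithmetic of the scheme looked roughly like this. The sketch collapses the B and C bands into one placeholder because the article does not give their exact cut-offs.

    # Sketch of the reported Suining scheme: start at 1,000 points, lose 50 per
    # listed infraction, then band by total. B/C boundaries are not reported.
    STARTING_POINTS = 1000
    DEDUCTION = 50

    def score(infractions):
        return STARTING_POINTS - DEDUCTION * infractions

    def rating(points):
        if points > 970:
            return "A"
        if points < 599:
            return "D"
        return "B/C (exact cut-offs unreported)"

    print(rating(score(0)))   # A
    print(rating(score(9)))   # D: 1,000 - 450 = 550 points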
Residents and the media lambasted the system, saying the government had no right to rate the country’s citizens, let alone use public services as a means of punishment and reward. To make matters worse, it was also compared to the “good citizen” identity cards that were issued by the Japanese to Chinese citizens as a form of social management during World War II. City officials eventually disbanded the A to D rating. State-run media outlet Global Times later referred to it as a “policy failure.”
Rising from the ashes of that disastrous experiment, new models for rating individuals have emerged around China. There are now more than 30 such cities, despite there being no mention of assigning quantitative ratings in the 2014 planning outline. This highlights how the details of implementation are left to local governments, resulting in scattered application.
In Rongcheng, Shandong Province, each of the city’s 740,000 adult residents starts out with 1,000 points, according to a report by Foreign Policy. Depending on their score, residents are then rated from A+++ to D, with rewards for high ratings ranging from deposit-free shared-bike rental to heating subsidies in winter.
The city of Shanghai is also experimenting with social credit. Through its Honest Shanghai app residents can access their rating by entering their ID number and passing a facial recognition test. The data is drawn from 100 public sources.
Xiamen, a city in the eastern province of Fujian, has launched a similar system. Adults over 18 years old can use the Credit Xiamen official account on popular messaging app WeChat to check their scores. Those with high scores can skip the line for city ferries, and don’t need to pay a deposit to rent shared bikes or borrow a book from the library.
Jeremy Daum, a senior fellow at Yale Law School’s Paul Tsai China Center who has translated many of the government’s social credit-related documents, said that systems rating individuals—like the ones in Rongcheng, Shanghai, and Xiamen—have little effect since very few people are aware of their existence.
The scores are meant to form part of an education system promoting trustworthiness, says Daum. “This is supposed to get people to focus on being good,” he says. If punishments do occur, they are because of violations of laws and regulations, not “bad social credit,” he said.
In the 1990s, China went through a period of radical reformation, adopting a market-based economy. As the number of commercial enterprises mushroomed, many pushed for growth at any cost, and a host of scandals hit China.
In an editorial from 2012, Jiangxi University of Finance and Economics professor Zhang Jinming drew attention to the proliferation of low-quality goods and products and their effects on the populace. “These substandard products could result in serious economic losses, and some may even be health hazards,” he wrote.
In 2008, for example, contaminated milk powder sickened nearly 300,000 Chinese children and killed six babies. Twenty-two companies, including Sanlu Group, which accounted for 20% of the market at the time, were found to have traces of melamine in their products. An investigation found that local farmers had deliberately added the chemical to increase the protein content of substandard milk.
In 2015, a mother and daughter were arrested for selling $88 million in faulty vaccines. The arrests were made public a year later when it was announced that the improperly-stored vaccines had made their way across 20 provinces, causing a public outcry and loss in consumer confidence.
A question of trust
Incidents like these are driving the thinking behind the Social Credit System, Samm Sacks, a US-based senior fellow in the Technology Policy Program at the Centre for Strategic and International Studies (CSIS), who has published extensively on the topic, told TechNode. The idea is that greater supervision and increased “trust” in society could limit episodes like these, and in turn, promote China’s economic development.
The most well-developed part of social credit relates to businesses and seeks to ensure compliance in the market. Has your company committed fraud? It may be put on a blacklist, along with you and other company representatives. Have you paid your taxes on time? The company may be placed on a redlist, making it easier to bypass bureaucratic hurdles.
Government entities then share industry-specific lists and other public data through memorandums of understanding. This creates a system of cross-departmental punishments and rewards. If one government department imposes sanctions on a company, another could do the same within the scope of their power.
If a company were added to a blacklist for serious food safety violations, it could be banned from operating entirely or barred from government procurement. Companies on redlists face fewer roadblocks when interacting with government departments.
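As a rough mental model of this cross-departmental mechanism, one can picture a registry that any participating department can write to and query. The sketch below is purely illustrative; the department names, data model, and consequences are hypothetical, not an actual government schema.

# Hypothetical model of a shared blacklist/redlist registry. A sanction filed
# by one department becomes visible to all others, which may then apply their
# own measures within their remit. Names and consequences are made up.
from dataclasses import dataclass, field

@dataclass
class SharedRegistry:
    blacklist: dict = field(default_factory=dict)  # company -> set of listing departments
    redlist: dict = field(default_factory=dict)

    def sanction(self, company: str, department: str) -> None:
        self.blacklist.setdefault(company, set()).add(department)

    def commend(self, company: str, department: str) -> None:
        self.redlist.setdefault(company, set()).add(department)

    def measures_for(self, company: str) -> list:
        # Each consulting department decides its own response; these strings
        # merely stand in for "barred from procurement", "fast-tracked", etc.
        if company in self.blacklist:
            return ["barred from government procurement", "additional inspections"]
        if company in self.redlist:
            return ["fast-tracked approvals"]
        return []

registry = SharedRegistry()
registry.sanction("Example Dairy Co.", "food safety regulator")
print(registry.measures_for("Example Dairy Co."))  # another department sees the listing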
A critical feature of the system is linking individuals to businesses, explains Martin Chorzempa, a research fellow at the Peterson Institute for International Economics in Washington, DC. The idea is that while companies are supervised in their market activities, executives and legal representatives are also held responsible if something goes wrong.
But it’s not just business people that can be included on blacklists, as Wei, the young mother from Chengdu, found out.
One of the most notorious blacklists is the “List of Dishonest Persons Subject to Enforcement.” It is reserved for those who have willfully neglected to fulfill court orders, lost a civil suit, failed to pay fines, or conducted fraudulent activity. Punishments include bans from air and high-speed rail travel, private school education, high-end hotels, and purchasing luxury goods on e-commerce platforms. Other sanctions include restrictions on benefiting from government subsidies, being awarded honorary titles, and taking on roles as a civil servant or in upper management at state-owned enterprises.
Jia Yueting, former CEO of embattled conglomerate LeEco, also landed on the blacklist in December 2017. Six months later he was banned for a year from buying “luxury” goods and travel, including air and high-speed rail tickets. He had failed to abide by a court order holding him responsible for his debt-ridden company’s dues. Jia fled to the US in late 2017 and defied an order to return to China. He has been back in the news recently after becoming embroiled in a battle with a new investor in his electric vehicle company, Faraday Future.
Blacklist boom
It is uncertain whether the government is incorporating private sector data in social credit records. However, information does flow the other way. Companies like Alibaba and JD.com have integrated blacklist records into their platforms to prohibit defaulters from spending on luxury items.
Reports claiming that the social credit system scoops up social media data, internet browsing history, and online transaction data conflate the government’s systems with commercial opt-in platforms like Ant Financial’s Sesame Credit.
Despite being authorized by the People’s Bank of China (PBoC), Sesame Credit is distinct from the government system. The platform, which is integrated into Alipay, rates users on a scale of 350 to 950. Those with higher scores gain access to rewards, including deposit-free use of power banks and shared bicycles, as well as reduced deposits when renting property. It functions like a traditional credit rating platform mixed with a loyalty program. The company was not willing to comment on social credit.
Experts believe that the collection of data by the government is currently limited to records held by its various departments and entities. It is information the government already has but hasn’t yet shared across departments, says Chorzempa.
Liang Fan, a doctoral student at the University of Michigan who studies social credit, says he is aware of 400 sources of information, though the total number of data types being compiled is unknown even to him.
Nonetheless, private industry is picking up on signals from the government, some implicit and others explicit. Private credit systems have been developed off the back of the government’s broader plan. The PBoC was integral in the development of these systems. Although information might not be shared, the companies are benefiting from the troves of data they collect.
The lifeblood of social credit is data. And China has heaps of it. But there are still significant threats to the development of a far-reaching social credit system. Honest Shanghai app users have reported problems ranging from faulty facial recognition tech to the app just not accepting their registration.
“The user experience is terrible. I can’t verify my real name and it failed when I scanned my face,” said one of numerous similar reviews in the iOS App Store. Many of the reviewers posted one-star ratings.
But there exists a much more entrenched problem—individual government departments don’t like sharing their data, says Chorzempa. It holds significant commercial and political value for those who control it. This creates enormous difficulty when attempting to set up a platform for cross-departmental sharing. While there is a national plan to set up a centralized system for the coordination of data, there are currently no notable incentives for sharing. In addition, creating a broader system results in more labor for individual departments, with agencies essentially taking on more work for the benefit of others.
Other challenges are societal. Reports about the proliferation of the social credit system often ignore an important factor that could hinder its overreach: the agency of Chinese individuals. There is a growing awareness of how private data is used. This was evident in the Suining experiment and could have more wide-ranging effects for social credit. “It’s not the free-for-all that it may have been even in 2014 when the social credit plan was released,” said Sacks of CSIS. “There’s been a change in ways that could make aspects of that system illegitimate in the eyes of the public.”
Someone to watch over 
Real-name verification is essential for social credit. Everyone in China is required to prove their identity when buying a SIM card, creating or verifying social media accounts, and setting up online payment accounts, a requirement dictated in part by the 2017 Cybersecurity Law.
Everyday activities are being linked to individual identities with more success, reducing anonymity, says Daum. He believes that’s what the government is doing with social credit. “They’re saying: ‘First, we need a system where people are afraid to not be trustworthy. Then we need a system where it’s impossible to not be trustworthy,’ because there’s too much information on you.”
For Wei, the blacklisted woman in Chengdu, it wasn’t the prospect of an arduous cross-country rail journey that bothered her. Instead, she was fearful that her future actions and freedom could be restricted by her past record. What if, for example, her employer wanted her to go on a business trip?
In the late 1700s, British social theorist Jeremy Bentham proposed the idea of the panopticon: an institution in which a single corrections officer could observe all inmates without them knowing whether they were being watched. In the Social Credit System framework emerging in China, the lack of anonymity, through both real-name verification and publicly published blacklists, creates a system of fear even when no one is watching, much like Bentham’s notorious panopticon.

https://technode.com/2018/10/23/china-social-credit/

Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It

 by Kashmir Hill 

 “The dystopian future portrayed in some science-fiction movies is already upon us. Kashmir Hill’s fascinating book brings home the scary implications of this new reality.”—John Carreyrou, author of Bad Blood


Named One of the Best Books of the Year by the Inc. Non-Obvious Book Awards • Longlisted for the Financial Times and Schroders Business Book of the Year Award

New York Times tech reporter Kashmir Hill was skeptical when she got a tip about a mysterious app called Clearview AI that claimed it could, with 99 percent accuracy, identify anyone based on just one snapshot of their face. The app could supposedly scan a face and, in just seconds, surface every detail of a person’s online life: their name, social media profiles, friends and family members, home address, and photos that they might not have even known existed. If it was everything it claimed to be, it would be the ultimate surveillance tool, and it would open the door to everything from stalking to totalitarian state control. Could it be true?

In this riveting account, Hill tracks the improbable rise of Clearview AI, helmed by Hoan Ton-That, an Australian computer engineer, and Richard Schwartz, a former Rudy Giuliani advisor, and its astounding collection of billions of faces from the internet. The company was boosted by a cast of controversial characters, including conservative provocateur Charles C. Johnson and billionaire Donald Trump backer Peter Thiel—who all seemed eager to release this society-altering technology on the public. Google and Facebook decided that a tool to identify strangers was too radical to release, but Clearview forged ahead, sharing the app with private investors, pitching it to businesses, and offering it to thousands of law enforcement agencies around the world.
      
Facial recognition technology has been quietly growing more powerful for decades. This technology has already been used in wrongful arrests in the United States. Unregulated, it could expand the reach of policing, as it has in China and Russia, to a terrifying, dystopian level.
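For readers wondering how identification from a single snapshot is even possible: tools of this kind are generally understood to convert every scraped photo into a numeric embedding and then look up the stored embedding closest to the query face. The sketch below only illustrates that nearest-neighbour idea; it is not Clearview's code, and the embedding function is a stub.

# Conceptual sketch of embedding-based face search, not any vendor's actual code.
import numpy as np

def embed(face_image) -> np.ndarray:
    """Stand-in for a real face-embedding model (typically a deep neural network)."""
    raise NotImplementedError

def identify(query: np.ndarray, gallery: dict, threshold: float = 0.6):
    """Return the gallery identity whose embedding is most similar to the query."""
    best_name, best_score = None, -1.0
    for name, vec in gallery.items():
        # cosine similarity between the query and a stored face embedding
        score = float(np.dot(query, vec) / (np.linalg.norm(query) * np.linalg.norm(vec)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None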
     
Your Face Belongs to Us is a gripping true story about the rise of a technological superpower and an urgent warning that, in the absence of vigilance and government regulation, Clearview AI is one of many new technologies that challenge what Supreme Court Justice Louis Brandeis once called “the right to be let alone.”

https://www.amazon.com/Your-Face-Belongs-Us-Secretive/dp/0593448561  

Means of Control: How the Hidden Alliance of Tech and Government Is Creating a New American Surveillance State

Byron Tau

https://www.amazon.com/Means-Control-Alliance-Government-Surveillance/dp/0593443225

 The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power

The challenges to humanity posed by the digital future, the first detailed examination of the unprecedented form of power called "surveillance capitalism," and the quest by powerful corporations to predict and control our behavior.
In this masterwork of original thinking and research, Shoshana Zuboff provides startling insights into the phenomenon that she has named surveillance capitalism. The stakes could not be higher: a global architecture of behavior modification threatens human nature in the twenty-first century just as industrial capitalism disfigured the natural world in the twentieth.
Zuboff vividly brings to life the consequences as surveillance capitalism advances from Silicon Valley into every economic sector. Vast wealth and power are accumulated in ominous new "behavioral futures markets," where predictions about our behavior are bought and sold, and the production of goods and services is subordinated to a new "means of behavioral modification."
The threat has shifted from a totalitarian Big Brother state to a ubiquitous digital architecture: a "Big Other" operating in the interests of surveillance capital. Here is the crucible of an unprecedented form of power marked by extreme concentrations of knowledge and free from democratic oversight. Zuboff's comprehensive and moving analysis lays bare the threats to twenty-first century society: a controlled "hive" of total connection that seduces with promises of total certainty for maximum profit--at the expense of democracy, freedom, and our human future.
With little resistance from law or society, surveillance capitalism is on the verge of dominating the social order and shaping the digital future--if we let it.
https://www.goodreads.com/book/show/26195941-the-age-of-surveillance-capitalism

“Communication in a world of pervasive surveillance”

Sources and methods: Counter-strategies against pervasive surveillance architecture

Jacob R. Appelbaum

Contents (chapter overview):
1. Introduction
2. Background on network protocols
3. Background on cryptography
4. The Adversary
5. The GNU name system
6. Tiny WireGuard Tweak
7. Vula
8. REUNION

https://pure.tue.nl/ws/portalfiles/portal/197416841/20220325_Appelbaum_hf.pdf  
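Among the DNS-hardening mechanisms the thesis surveys in its chapter on the GNU name system are DNSSEC, query name minimization, DNS-over-TLS, DNSCurve and Namecoin. As a minimal illustration of just one of these, the sketch below performs a DNS-over-TLS lookup with the third-party dnspython library against a public resolver; it is an assumption-laden example, not code from the thesis.

# Minimal DNS-over-TLS lookup (RFC 7858) using the dnspython library.
# The resolver address and hostname below are Cloudflare's public service,
# chosen only for illustration.
import dns.message
import dns.query

query = dns.message.make_query("example.com", "A")
response = dns.query.tls(
    query,
    where="1.1.1.1",                     # resolver IP
    port=853,                            # standard DNS-over-TLS port
    timeout=5,
    server_hostname="one.one.one.one",   # used for TLS certificate validation
)
for rrset in response.answer:
    print(rrset)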

 





