Commission
proposes measures to boost data sharing and support European data spaces
To better exploit the
potential of ever-growing data in a trustworthy European framework, the
Commission today proposes
new rules on data governance. The Regulation will facilitate data sharing
across the EU and between sectors to create wealth for society, increase
control and trust of both citizens and companies regarding their data, and
offer an alternative European model to the data-handling practices of major tech
platforms.
The amount of data generated
by public bodies, businesses and citizens is constantly growing. It is expected
to grow fivefold between 2018 and 2025. These new rules will allow this data
to be harnessed and will pave the way for sectoral European data
spaces to benefit society, citizens and companies. In the
Commission's data
strategy of February 2020, nine such data spaces were proposed, ranging from
industry to energy, and from health to the European
Green Deal. They will, for example, contribute to the green
transition by improving the management of energy consumption, make delivery of
personalised medicine a reality, and facilitate access to public services.
Executive Vice-President for
A Europe Fit for the Digital Age, Margrethe Vestager, said: “You
don't have to share all data. But if you do and data is sensitive you should be
able to do it in a manner where data can be trusted and protected. We want to give
business and citizens the tools to stay in control of data. And to build trust
that data is handled in line with European values and fundamental rights.”
Commissioner for Internal
Market, Thierry Breton, said: “We are defining today a truly
European approach to data sharing. Our new regulation will enable trust and
facilitate the flow of data across sectors and Member States while putting all
those who generate data in the driving seat. With the ever-growing role of
industrial data in our economy, Europe needs an open yet sovereign Single
Market for data. Flanked by the right investments and key infrastructures, our
regulation will help Europe become the world's number one data continent.”
Delivering on the
announcement in the data
strategy, the Regulation will create the basis for a new European way of
data governance that is in line with EU values and principles, such as personal
data protection (GDPR), consumer protection and competition rules. It offers an
alternative model to the data-handling practices of the big tech platforms,
which can acquire a high degree of market power because their business models
rely on controlling large amounts of data. This new approach proposes
a model based on the neutrality and transparency of data intermediaries, which
are organisers of data sharing or pooling, to increase trust. To ensure this
neutrality, the data-sharing intermediary cannot deal in the data on its own
account (e.g. by selling it to another company or using it to develop its own
products) and will have to comply with strict requirements.
The Regulation includes:
- A number of measures to increase trust
in data sharing, as the lack of trust is currently a major obstacle
and results in high costs.
- New EU rules on neutrality to
allow novel data intermediaries to function as trustworthy organisers of
data sharing.
- Measures to facilitate the reuse of
certain data held by the public sector. For example, the reuse of
health data could advance research to find cures for rare or chronic
diseases.
- Means to give Europeans
control over the use of the data they generate, by making it easier
and safer for companies and individuals to voluntarily make their data
available for the wider common good under clear conditions.
Background
Today's proposal is the
first deliverable under the European
strategy for data, which aims to unlock the economic and societal potential
of data and technologies like Artificial Intelligence, while respecting EU
rules and values (for example in the area of data protection, and respect of
intellectual property and trade secrets). The strategy will build on the size
of the single market as a space where data can flow within the EU and across
sectors in line with clear, practical and fair rules for access and reuse.
Today's proposal also supports wider international sharing of data, under
conditions that ensure compliance with the European public interest and the
legitimate interests of data providers.
More dedicated proposals on
data spaces are expected to follow in 2021, complemented by a Data Act to
foster data sharing among businesses, and between business and governments.
https://ec.europa.eu/commission/presscorner/detail/en/IP_20_2102
Generative AI increases the threat to privacy by
giving the government access to sensitive information beyond what it could
collect through court-authorized surveillance.
The declassified report from the Office of the Director of National Intelligence shows the
breathtaking scale and invasive nature of the consumer data market and how that
market directly enables wholesale surveillance of people. The data includes not
only where you’ve been and who you’re connected to, but the nature of your
beliefs and predictions about what you might do in the future. The report
underscores the grave risks the purchase of this data poses, and urges the
intelligence community to adopt internal guidelines to address these problems.
As a privacy, electronic
surveillance, and technology law attorney, researcher, and law
professor, I have spent years researching, writing, and
advising about the legal issues the report highlights.
These issues are increasingly
urgent. Today’s commercially available information, coupled with the
now-ubiquitous decision-making artificial intelligence and generative AI like
ChatGPT, significantly increases the threat to privacy and civil liberties by
giving the government access to sensitive personal information beyond even what
it could collect through court-authorized surveillance.
WHAT IS COMMERCIALLY
AVAILABLE INFORMATION?
The drafters of the report
take the position that commercially available information is a subset of
publicly available information. The distinction between the two is significant
from a legal perspective. Publicly available information is information that is
already in the public domain. You could find it by doing a little online
searching.
Commercially available
information is different. It is personal information collected from a dizzying
array of sources by commercial data brokers that aggregate and analyze it, then
make it available for purchase by others, including governments. Some of that
information is private, confidential, or otherwise legally protected.
The sources and types of data
for commercially available information are mind-bogglingly vast. They include
public records and other publicly available information. But far more
information comes from the nearly ubiquitous internet-connected devices in
people’s lives, like cellphones, smart-home
systems, cars, and fitness trackers. These all harness data from
sophisticated, embedded sensors,
cameras, and microphones. Sources also include data from apps, online
activity, texts, and emails, and even healthcare
provider websites.
Types of data
include location, gender, and sexual orientation, religious and
political views and affiliations, weight
and blood pressure, speech patterns, emotional states, behavioral information
about myriad activities, shopping patterns, and family and friends.
This data provides companies
and governments a window into the “Internet of Behaviors,”
a combination of data collection and analysis aimed at understanding and
predicting people’s behavior. It pulls together a wide range of data, including
location and activities, and uses scientific and technological approaches,
including psychology and machine learning, to analyze that data. The Internet
of Behaviors provides a map of what each person has done, is doing, and is
expected to do, and provides a means to influence a
person’s behavior.
BETTER, CHEAPER, AND
UNRESTRICTED
The rich depths of
commercially available information, analyzed with powerful AI, provide
unprecedented power, intelligence, and investigative insights. The information
is a cost-effective way to surveil virtually everyone, plus it provides far
more sophisticated data than traditional electronic surveillance tools or
methods like wiretapping and location tracking.
Government use of electronic-surveillance
tools is extensively regulated
by federal and state laws. The U.S. Supreme Court has ruled that the
Constitution’s Fourth
Amendment, which prohibits unreasonable searches and seizures, requires a
warrant for a wide range of digital searches. These include wiretapping
or intercepting
a person’s calls, texts, or emails; using GPS or cellular
location information to track a person; or searching a
person’s cellphone.
Complying with these laws
takes time and money, plus electronic-surveillance law restricts what, when,
and how data can be collected. Commercially available information is cheaper to
obtain, provides far richer data and analysis, and is subject to little
oversight or restriction compared to when the same data is collected directly
by the government.
THE THREATS
Technology and the burgeoning
volume of commercially available information allow various forms of the
information to be combined and analyzed in new ways to understand all aspects
of your life, including preferences and desires.
The Office of the Director of
National Intelligence report warns that the increasing volume and widespread
availability of commercially available information poses “significant threats
to privacy and civil liberties.” It increases the power of the government to
surveil its citizens outside the bounds of law, and it opens the door to the
government using that data in potentially unlawful ways. This could
include using
location data obtained via commercially available information rather than a
warrant to investigate and prosecute someone for abortion.
The report also captures both
how widespread government purchases of commercially available information are
and how haphazard government practices around the use of the information are.
The purchases are so pervasive and agencies’ practices so poorly documented
that the Office of the Director of National Intelligence cannot even fully
determine how much and what types of information agencies are purchasing, and
what the various agencies are doing with the data.
IS IT LEGAL?
The question of whether it’s
legal for government agencies to purchase commercially available information is
complicated by the array of sources and complex mix of data it contains.
There is no legal prohibition
on the government collecting information already disclosed to the public or
otherwise publicly available. But the nonpublic information listed in the
declassified report includes data that U.S. law typically protects. The
nonpublic information’s mix of private, sensitive, confidential, or otherwise
lawfully protected data makes collection a legal gray area.
Despite decades of
increasingly sophisticated and invasive commercial data aggregation, Congress
has not passed a federal data privacy law. The lack of federal regulation
around data creates
a loophole for government agencies to evade electronic surveillance
law. It also allows agencies to amass enormous databases that AI systems learn
from and use in often unrestricted ways. The resulting erosion of privacy has
been a concern for more than a
decade.
THROTTLING THE DATA
PIPELINE
The Office of the Director of
National Intelligence report acknowledges the stunning loophole that
commercially available information provides for government surveillance: “The
government would never have been permitted to compel billions of people to
carry location tracking devices on their persons at all times, to log and track
most of their social interactions, or to keep flawless records of all their
reading habits. Yet smartphones, connected cars, web tracking technologies, the
Internet of Things, and other innovations have had this effect without
government participation.”
However, it isn’t entirely
correct to say “without government participation.” The legislative branch could
have prevented this situation by enacting data privacy laws, more tightly
regulating commercial data practices, and providing oversight in AI
development. Congress could yet address the problem. Representative Ted Lieu
has introduced the bipartisan
proposal for a National AI Commission, and Senator Chuck Schumer has
proposed an
AI regulation framework.
Effective data-privacy laws
would keep your personal information safer from government agencies and
corporations, and responsible AI regulation would block them from manipulating
you.
Decentralized Society: Finding Web3's Soul
by E. G. Weyl et al., 2022
We call this richer, pluralistic ecosystem “Decentralized Society” (DeSoc)—a co-determined sociality, where Souls and communities come ...
Cracking
Down on Dissent, Russia Seeds a Surveillance Supply Chain
Russia is incubating a
cottage industry of new digital surveillance tools to suppress domestic
opposition to the war in Ukraine. The tech may also be sold overseas.
By Aaron Krolik, Paul Mozur and Adam Satariano
July 3, 2023
As the war in Ukraine unfolded last year, Russia’s best digital spies turned to new tools to fight an enemy on another front: those inside its own borders who opposed the war.
To
aid an internal crackdown, Russian authorities had amassed an arsenal of
technologies to track the online lives of citizens. After it invaded Ukraine,
its demand grew for more surveillance tools. That helped stoke a cottage
industry of tech contractors, which built products that have become a powerful
— and novel — means of digital surveillance.
The
technologies have given the police and Russia’s Federal Security Service,
better known as the F.S.B., access to a buffet of snooping capabilities focused
on the day-to-day use of phones and websites. The tools offer ways to track
certain kinds of activity on encrypted apps like WhatsApp and Signal, monitor
the locations of phones, identify anonymous social media users and break into
people’s accounts, according to documents from Russian surveillance providers
obtained by The New York Times, as well as security experts, digital activists
and a person involved with the country’s digital surveillance operations.
President Vladimir V. Putin is leaning more on technology to
wield political power as Russia faces military setbacks in Ukraine, bruising
economic sanctions and leadership challenges after an uprising led by Yevgeny V. Prigozhin, the commander of the Wagner paramilitary
group. In doing so, Russia — which once lagged authoritarian regimes like China
and Iran in using modern technology to exert control — is quickly catching up.
The Federal Security Service building on Lubyanka Square in Moscow in May. The F.S.B. and other Russian authorities want stronger technologies to track the online lives of citizens. Credit: Maxim Shemetov/Reuters
“It’s
made people very paranoid, because if you communicate with anyone in Russia,
you can’t be sure whether it’s secure or not. They are monitoring traffic very
actively,” said Alena Popova, a Russian opposition political figure and digital
rights activist. “It used to be only for activists. Now they have expanded it
to anyone who disagrees with the war.”
The
effort has fed the coffers of a constellation of relatively unknown Russian
technology firms. Many are owned by Citadel Group, a business once partially
controlled by Alisher Usmanov, who was a target of European Union sanctions as one of Mr. Putin’s “favorite
oligarchs.” Some of the companies are trying to expand overseas, raising the
risk that the technologies do not remain inside Russia.
The
firms — with names like MFI Soft, Vas Experts and Protei — generally got their
start building pieces of Russia’s invasive telecom wiretapping system before producing more advanced
tools for the country’s intelligence services.
Simple-to-use
software that plugs directly into the telecommunications infrastructure now
provides a Swiss-army knife of spying possibilities, according to the
documents, which include engineering schematics, emails and screen shots. The
Times obtained hundreds of files from a person with access to the internal
records, about 40 of which detailed the surveillance tools.
One
program outlined in the materials can identify when people make voice calls or
send files on encrypted chat apps such as Telegram, Signal and WhatsApp. The
software cannot intercept specific messages, but can determine whether someone
is using multiple phones, map their relationship network by tracking
communications with others, and triangulate what phones have been in certain
locations on a given day. Another product can collect passwords entered on
unencrypted websites.
These
technologies complement other Russian efforts to shape public opinion and
stifle dissent, like a propaganda blitz on state media, more robust internet censorship and new efforts to collect data on citizens and encourage them to report
social media posts that undermine the war.
President Vladimir V. Putin of Russia with Alisher Usmanov in 2018. Mr. Usmanov once controlled Citadel Group, a conglomerate that owns many of the firms building surveillance technology. Credit: Sputnik/Reuters
They
add up to the beginnings of an off-the-shelf tool kit for autocrats who wish to
gain control of what is said and done online. One document outlining the
capabilities of various tech providers referred to a “wiretap market,” a supply
chain of equipment and software that pushes the limits of digital mass
surveillance.
The
authorities are “essentially incubating a new cohort of Russian companies that
have sprung up as a result of the state’s repressive interests,” said Adrian
Shahbaz, a vice president of research and analysis at the pro-democracy
advocacy group Freedom House, who studies online oppression. “The spillover
effects will be felt first in the surrounding region, then potentially the world.”
In one English-language marketing document aimed at overseas customers, a diagram depicts a Russian surveillance company’s phone tracking capabilities.
Beyond the ‘Wiretap
Market’
Over
the past two decades, Russian leaders struggled to control the internet. To
remedy that, they ordered up systems to eavesdrop on phone calls and
unencrypted text messages. Then they demanded that providers of internet
services store records of all internet traffic.
The
expanding program — formally known as the System for Operative Investigative Activities, or SORM — was
an imperfect means of surveillance. Russia’s telecom providers often
incompletely installed and updated the technologies, meaning the system did not
always work properly. The volume of data pouring in could be overwhelming and
unusable.
At
first, the technology was used against political rivals like supporters
of Aleksei A. Navalny, the jailed opposition leader. Demand for the
tools increased after the invasion of Ukraine, digital rights experts said.
Russian authorities turned to local tech companies that built the old
surveillance systems and asked for more.
The
push benefited companies like Citadel, which had bought many of Russia’s
biggest makers of digital wiretapping equipment and controls about 60 to 80
percent of the market for telecommunications monitoring technology, according
to the U.S. State Department. The United States announced sanctions against Citadel and its current owner, Anton
Cherepennikov, in February.
“Sectors
connected to the military and communications are getting a lot of funding right
now as they adapt to new demands,” said Ksenia Ermoshina, a senior researcher
who studies Russian surveillance companies with Citizen Lab, a research
institute at the University of Toronto.
The
new technologies give Russia’s security services a granular view of the
internet. A tracking system from one Citadel subsidiary, MFI Soft, helps
display information about telecom subscribers, along with statistical
breakdowns of their internet traffic, on a specialized control panel for use by
regional F.S.B. officers, according to one chart.
Another
MFI Soft tool, NetBeholder, can map the locations of two phones over the course
of the day to discern whether they simultaneously ran into each other,
indicating a potential meeting between people.
A
different feature, which uses location tracking to check whether several phones
are frequently in the same area, deduces whether someone might be using two or
more phones. With full access to telecom network subscriber information,
NetBeholder’s system can also pinpoint the region in Russia each user is from or
what country a foreigner comes from.
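To make the co-location analysis described above concrete, here is a toy sketch. It is not NetBeholder's implementation; the thresholds, record layout, timestamps and coordinates are all invented. It simply compares two phones' location logs for moments when they were near each other at roughly the same time.

```python
# Toy co-location check: find moments when two phones' location logs place them
# close together in space and time. All data and thresholds below are hypothetical.
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    R = 6_371_000  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def co_locations(log_a, log_b, max_dist_m=100, max_gap=timedelta(minutes=10)):
    """Return (time, distance) pairs where the two phones were close together.

    Each log is a list of (timestamp, lat, lon) tuples."""
    hits = []
    for t_a, lat_a, lon_a in log_a:
        for t_b, lat_b, lon_b in log_b:
            if abs(t_a - t_b) <= max_gap:
                d = haversine_m(lat_a, lon_a, lat_b, lon_b)
                if d <= max_dist_m:
                    hits.append((min(t_a, t_b), round(d)))
    return hits

# Hypothetical example data for two phones.
phone_1 = [(datetime(2023, 7, 3, 12, 0), 55.7601, 37.6250)]
phone_2 = [(datetime(2023, 7, 3, 12, 4), 55.7604, 37.6256)]
print(co_locations(phone_1, phone_2))  # one hit, roughly 50 metres apart
```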
Protei,
another company, offers products that provide voice-to-text transcription for
intercepted phone calls and tools for identifying “suspicious behavior,”
according to one document.
Russia’s
enormous data collection and the new tools make for a “killer combo,” said Ms.
Ermoshina, who added that such capabilities are increasingly widespread across
the country.
Citadel
and Protei did not respond to requests for comment. A spokesman for Mr. Usmanov
said he “has not participated in any management decisions for several years”
involving the parent company, called USM, that owned Citadel until 2022. The
spokesman said Mr. Usmanov owns 49 percent of USM, which sold Citadel because
surveillance technology was never within the firm’s “sphere of interest.”
VAS
Experts said the need for its tools had “increased due to the complex
geopolitical situation” and volume of threats inside Russia. It said it
“develops telecom products which include tools for lawful interception and which
are used by F.S.B. officers who fight against terrorism,” adding that if the
technology “will save at least one life and people well-being then we work for
a reason.”
A diagram from one corporate document shows how data is collected by an internet service provider and funneled to a local branch of the F.S.B.
No Way to Mask
As
the authorities have clamped down, some citizens have turned to encrypted
messaging apps to communicate. Yet security services have also found a way to
track those conversations, according to files reviewed by The Times.
One
feature of NetBeholder harnesses a technique known as deep-packet inspection,
which is used by telecom service providers to analyze where their traffic is
going. Akin to mapping the currents of water in a stream, the software cannot
intercept the contents of messages but can identify what data is flowing where.
That
means it can pinpoint when someone sends a file or connects on a voice call on
encrypted apps like WhatsApp, Signal or Telegram. This gives the F.S.B. access
to important metadata, which is the general information about a communication
such as who is talking to whom, when and where, as well as if a file is
attached to a message.
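As a rough illustration of how much that metadata alone reveals, the sketch below groups encrypted-app sessions per subscriber without ever touching message content. It is not any vendor's product; the flow records and the IP-to-service mapping are hypothetical.

```python
# Minimal sketch: flow metadata (time, subscriber, destination, byte count) already
# shows who connects to which service and when, even when every payload is encrypted.
# All records and IP-prefix-to-service mappings below are hypothetical.
from collections import defaultdict
from datetime import datetime

KNOWN_SERVICES = {"157.240.": "WhatsApp", "13.248.": "Signal", "149.154.": "Telegram"}

# Hypothetical flow records exported by a network tap: (timestamp, subscriber, dst_ip, bytes).
flows = [
    (datetime(2023, 7, 3, 9, 15), "subscriber-17", "149.154.167.51", 48_200),
    (datetime(2023, 7, 3, 9, 16), "subscriber-17", "13.248.212.111", 1_900),
    (datetime(2023, 7, 3, 21, 2), "subscriber-42", "157.240.22.54", 310_000),
]

def summarize(flow_records):
    """Group encrypted-app usage per subscriber purely from metadata."""
    usage = defaultdict(list)
    for ts, subscriber, dst_ip, nbytes in flow_records:
        for prefix, service in KNOWN_SERVICES.items():
            if dst_ip.startswith(prefix):
                usage[subscriber].append((ts.isoformat(), service, nbytes))
    return dict(usage)

print(summarize(flows))
# The content of each session stays opaque; the pattern of use does not.
```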
To
obtain such information in the past, governments were forced to request it from
app makers like Meta, which owns WhatsApp. Those companies then decided whether to provide it.
The
new tools have alarmed security experts and the makers of the encrypted
services. While many knew such products were theoretically possible, it was not
known that they were now being made by Russian contractors, security experts
said.
Some
of the encrypted app tools and other surveillance technologies have begun
spreading beyond Russia. Marketing documents show efforts to sell the products
in Eastern Europe and Central Asia, as well as Africa, the Middle East and
South America. In January, Citizen Lab reported that Protei equipment
was used by an Iranian telecom company for logging internet usage and blocking
websites. Ms. Ermoshina said the systems have also been seen in
Russian-occupied areas of Ukraine.
For
the makers of Signal, Telegram and WhatsApp, there are few defenses against
such tracking. That’s because the authorities are capturing data from internet
service providers with a bird’s-eye view of the network. Encryption can mask
the specific messages being shared, but cannot block the record of the
exchange.
“Signal
wasn’t designed to hide the fact that you’re using Signal from your own internet
service provider,” Meredith Whittaker, the president of the Signal Foundation, said
in a statement. She called for people worried about such tracking to use a
feature that sends traffic through a different server to obfuscate its origin
and destination.
In
a statement, Telegram, which does not encrypt all messages by default, also said
nothing could be done to mask traffic going to and from the chat apps, but said
people could use features it had created to make Telegram traffic harder to
identify and follow. WhatsApp said in a statement that the surveillance tools
were a “pressing threat to people’s privacy globally” and that it would
continue protecting private conversations.
The
new tools will likely shift the best practices of those who wish to disguise
their online behavior. In Russia, the existence of a digital exchange between a
suspicious person and someone else can trigger a deeper investigation or even
arrest, people familiar with the process said.
Mr.
Shahbaz, the Freedom House researcher, said he expected the Russian firms to
eventually become rivals to the usual purveyors of surveillance tools.
“China
is the pinnacle of digital authoritarianism,” he said. “But there has been a
concerted effort in Russia to overhaul the country’s internet regulations to
more closely resemble China. Russia will emerge as a competitor to Chinese
companies.”
'The Perfect
Police State: An Undercover Odyssey into China's Terrifying Surveillance
Dystopia of the Future' (PublicAffairs, 29 June 2021)
by Geoffrey
Cain
A riveting
investigation into how a restive region of China became the site of a nightmare
Orwellian social experiment—the definitive police state—and the global technology
giants that made it possible
Blocked from facts and truth, under constant surveillance, surrounded by a
hostile alien police force: Xinjiang’s Uyghur population has become cursed,
oppressed, outcast. Most citizens cannot discern between enemy and friend.
Social trust has been destroyed systematically. Friends betray each other,
bosses snitch on employees, teachers expose their students, and children turn
on their parents. Everyone is dependent on a government that nonetheless treats
them with suspicion and contempt. Welcome to the Perfect Police State.
Using the haunting story of one young woman’s attempt to escape the vicious
technological dystopia, his own reporting from Xinjiang, and extensive
firsthand testimony from exiles, Geoffrey Cain reveals the extraordinary
intrusiveness and power of the tech surveillance giants and the chilling
implications for all our futures.
07-27-22
Yes, you are being
watched, even if no one is looking for you
It’s important that we
recognize the extent to which physical and digital tracking work together.
BY PETER KRAPP
The U.S. has the largest number of surveillance cameras per person in
the world. Cameras are omnipresent on city streets and in hotels, restaurants,
malls and offices. They’re also used to screen passengers for the Transportation
Security Administration. And then there are smart doorbells and other home security
cameras.
Most Americans are
aware of video surveillance of public spaces. Likewise, most people know about
online tracking–and want Congress to do something about it. But as a
researcher who studies digital culture and secret communications, I
believe that to understand how pervasive surveillance is, it’s important to
recognize how physical and digital tracking work together.
Databases can
correlate location data from smartphones, the growing number
of private cameras, license plate readers on police cruisers and
toll roads, and facial recognition technology, so if law enforcement
wants to track where you are and where you’ve been, they can. They do need a
warrant to use cellphone search equipment: connecting your device to a mobile
device forensic tool lets them extract and analyze all your data.
However, private data brokers also track this kind of data
and help
surveil citizens–without a warrant. There is a large market for
personal data, compiled from information people volunteer, information people
unwittingly yield–for example, via mobile apps–and information that is stolen in
data breaches. Among the customers for this largely unregulated data are federal, state and local law enforcement agencies.
HOW YOU ARE TRACKED
Whether or not you
pass under the gaze of a surveillance camera or license plate reader, you are
tracked by your mobile phone. GPS reports your location to weather and map
apps, Wi-Fi positioning reveals where you are, and cell-tower triangulation
tracks your phone. Bluetooth can identify and track your smartphone, and not
just for COVID-19 contact tracing, Apple's "Find My" service, or connecting
headphones.
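For readers curious how cell-tower triangulation works in principle, here is a minimal flat-plane trilateration sketch. The tower coordinates and range estimates are invented; real systems work on geodetic coordinates with noisy timing-advance or signal-strength measurements.

```python
# Minimal flat-plane trilateration: three range estimates from towers at known
# positions pin down a handset. All coordinates and ranges below are made up.

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) given three tower positions and distances (same units)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise gives two linear equations in x, y.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        raise ValueError("towers are collinear; position is ambiguous")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical towers (metres on a local grid) and measured ranges to a phone at (400, 300).
towers = [((0, 0), 500.0), ((1000, 0), 670.8), ((0, 1000), 806.2)]
print(trilaterate(*towers[0], *towers[1], *towers[2]))  # approximately (400.0, 300.0)
```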
People volunteer their
locations for ride-sharing or for games like Pokemon Go or Ingress, but apps
can also collect and share location without your
knowledge. Many late-model cars feature telematics that track locations–for
example, OnStar or Bluelink. All this makes opting out
impractical.
The same thing is true
online. Most websites feature ad trackers and
third-party cookies, which are stored in your browser whenever you
visit a site. They identify you when you visit other sites so advertisers can
follow you around. Some websites also use key logging,
which monitors what you type into a page before hitting submit. Similarly,
session recording monitors mouse movements, clicks, scrolling and typing, even
if you don’t click “submit.”
Ad trackers know when
and where you browsed, which browser you used, and what your device's internet
address is. Google and Facebook are among the main
beneficiaries, but there are many data brokers slicing and dicing such information by
religion, ethnicity, political affiliations, social media profiles, income and
medical history for profit.
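A minimal sketch of the underlying mechanism: hashing a handful of browser and network attributes yields an identifier stable enough to recognize the same device across sites. The attribute set below is hypothetical and far smaller than what real trackers combine.

```python
# Toy browser-fingerprint sketch: individually unremarkable attributes combine
# into a stable identifier. The attributes below are hypothetical examples.
import hashlib
import json

def fingerprint(attrs: dict) -> str:
    """Hash a canonical form of the attributes into a short identifier."""
    canonical = json.dumps(attrs, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "ip": "203.0.113.7",              # documentation-range address
    "screen": "2560x1440",
    "timezone": "America/Los_Angeles",
    "languages": ["en-US", "en"],
}
print(fingerprint(visitor))  # same attributes seen on another site -> same identifier
```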
BIG BROTHER IN THE 21ST CENTURY
People may implicitly
consent to some loss of privacy in the interest of perceived or real
security–for example, in stadiums, on the road and at airports, or in return
for cheaper online services. But these trade-offs benefit individuals far less
than the companies aggregating data. Many Americans are suspicious of
government censuses, yet they willingly share their jogging
routines on apps like Strava, which has revealed sensitive and secret military data.
In the post-Roe
v. Wade legal environment, there are concerns not only about period tracking apps but about correlating data on physical movements with
online searches and phone data. Legislation like the recent Texas Senate Bill 8 anti-abortion law invokes
“private individual enforcement mechanisms,” raising questions about who
gets access to tracking data.
In 2019, the Missouri Department of Health stored data about
the periods of patients at the state’s lone Planned Parenthood clinic,
correlated with state medical records. Communications metadata can reveal who you are in touch with,
when you were where, and who else was there–whether they are in your contacts
or not.
Location data from
apps on hundreds of millions of phones lets the Department of Homeland Security track people.
Health wearables pose similar risks, and medical
experts note a lack of
awareness about the security of data they collect. Note the
resemblance of your Fitbit or smartwatch to ankle bracelets people wear during
court-ordered monitoring.
The most pervasive
user of tracking in the U.S. is Immigration and Customs Enforcement (ICE),
which amassed a vast amount of information without
judicial, legislative or public oversight. Georgetown University Law Center’s
Center on Privacy and Technology reported on how ICE
searched the driver’s license photographs of 32% of all adults
in the U.S., tracked cars in cities home to 70% of adults, and updated address
records for 74% of adults when those people activated new utility accounts.
NO ONE IS WATCHING THE WATCHERS
Nobody expects to be
invisible on streets, at borders, or in shopping centers. But who has access to
all that surveillance data, and how long is it stored? There is no single U.S. privacy law at the federal
level, and states cope with a regulatory patchwork; only five
states–California, Colorado, Connecticut, Utah and Virginia–have privacy laws.
It is possible
to limit location tracking on your phone, but not
to avoid it completely. Data brokers are supposed to mask your personally identifiable data before selling it.
But this “anonymization” is meaningless since individuals are
easily identified by cross-referencing additional data sets. This makes it easy
for bounty hunters and stalkers to abuse the system.
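A toy example of why such "anonymization" fails: the quasi-identifiers left in a dataset (ZIP code, birth date, sex) can be joined against any public list that carries the same fields. Every record below is invented.

```python
# Toy re-identification sketch: join an "anonymized" dataset to a public list
# on quasi-identifiers to restore names. All records are invented.
anonymized_health = [
    {"zip": "02139", "dob": "1985-04-12", "sex": "F", "diagnosis": "asthma"},
    {"zip": "02139", "dob": "1990-11-02", "sex": "M", "diagnosis": "diabetes"},
]
public_voter_roll = [
    {"name": "Jane Roe", "zip": "02139", "dob": "1985-04-12", "sex": "F"},
    {"name": "John Doe", "zip": "02140", "dob": "1990-11-02", "sex": "M"},
]

def reidentify(anon_rows, public_rows, keys=("zip", "dob", "sex")):
    """Join the two datasets on the quasi-identifier columns."""
    index = {tuple(r[k] for k in keys): r["name"] for r in public_rows}
    return [
        {**row, "name": index[tuple(row[k] for k in keys)]}
        for row in anon_rows
        if tuple(row[k] for k in keys) in index
    ]

print(reidentify(anonymized_health, public_voter_roll))
# -> the asthma record is re-linked to "Jane Roe"
```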
The biggest risk to
most people arises when there is a data breach, which is happening more often – whether
it is a leaky app or careless hotel chain, a DMV data sale or a compromised credit bureau, or indeed a data brokering middleman whose cloud storage is hacked.
This illicit flow of
data not only puts fuzzy notions of privacy in peril, but may put
your addresses and passport numbers, biometric data and social media profiles,
credit card numbers and dating profiles, health and insurance information, and
more on
sale.
https://www.fastcompany.com/90772483/yes-you-are-being-watched-even-if-no-one-is-looking-for-you
Chatbot data cannot fall into the hands of big tech
Wearable Brain Devices Will Challenge Our Mental
Privacy
- By Nita A. Farahany on March 27, 2023
A last bastion of privacy, our brains have remained
inviolate, even as sensors now record our heartbeats, breaths, steps and sleep.
All that is about to change. An avalanche of brain-tracking devices—earbuds,
headphones, headbands, watches and even wearable tattoos—will soon enter the
market, promising to transform our lives. And threatening to breach the refuge
of our minds.
Tech titans Meta, Snap, Microsoft and Apple are
already investing heavily in brain wearables. They aim to embed brain sensors
into smart watches, earbuds, headsets and sleep aids. Integrating them into our
everyday lives could revolutionize
health care, enabling early diagnosis and personalized treatment of
conditions such as depression, epilepsy and
even cognitive
decline. Brain sensors could improve our ability to meditate, focus and
even communicate with a seamless technological telepathy—using the power of
thoughts and emotion to drive our interaction with augmented reality (AR) and
virtual reality (VR) headsets, or even type
on virtual keyboards with our minds.
But brain wearables also pose very real risks to
mental privacy, freedom of thought and self-determination. As these
devices proliferate, they will generate vast amounts of neural data, creating
an intimate window into our brain states, emotions and even memories. We need
the individual power to shutter this new view into our inner selves.
Employers already seek out such data,
tracking worker fatigue
levels and offering brain
wellness programs to mitigate stress, via platforms that give them
unprecedented access to employees’ brains. Cognitive and emotional testing
based on neuroscience is becoming
a new job screening norm, revealing personality aspects that may have little
to do with a job. In China, train
conductors of the Beijing-Shanghai line, the busiest of its kind in the
world, wear brain sensors throughout their work day. There are even reports of
Chinese employees being sent
home if their brain activity shows less than stellar brain metrics. As
companies embrace brain wearables that can track employees’ attention, focus
and even boredom, without safeguards in place, they could trample on employees'
mental privacy, eroding trust and well-being along with the dignity of work
itself.
Governments, too, are seeking access to our brains,
with a U.S. brain initiative seeking “‘every spike from every
neuron’ in the human brain,” to reveal “how the firing of these neurons
produced complex thoughts.” While aimed at the underlying causes of
neurological and psychiatric conditions, this same investment could also
enable government interference with freedom of thought—a freedom critical to
human flourishing. From functional
brain biometric programs under development to authenticate
individuals—including
those funded by the National Science Foundation at Binghamton University—to
so-called brain-fingerprinting techniques used to interrogate criminal
suspects—sold by companies like Brainwave
Science and funded by law enforcement agencies from Singapore to Australia to
the United
Arab Emirates—we must act quickly to ensure neurotechnology benefits
humanity rather than heralding an Orwellian future of spying on our brains.
The rush to hack the human brain veers from
neuromarketing to the rabbit hole of social media and even to cognitive warfare
programs designed to disable or disorient. These technologies should have our
full attention. Neuromarketing campaigns such as one conducted
by Frito-Lay used insights about how women’s brains could affect
snacking decisions, then monitored brain activity while people viewed their
newly designed advertisements, allowing them to fine-tune their campaigns to
better capture attention and drive women to snack more on their products.
Social media “like” buttons and notifications are features designed to draw us
habitually back to platforms, exploiting our brains’ reward systems. Clickbait
headlines and pseudoscience claims prey on our cognitive
biases, hobbling critical thinking. And nations worldwide are considering
possible military applications of neuroscience, which some planners call
warfare’s “sixth
domain” (adding to a list that includes land, sea, air, space and
cyberspace).
As brain wearables and artificial intelligences
advance, the line between human agency and machine intervention will also blur.
When a wearable reshapes our thoughts and emotions, how much of our actions and
decisions remain truly our own? As we begin to offload mental tasks to AI, we
risk becoming overly dependent on technology, weakening independent thought and
even our capacity for reflective decision-making. Should we allow AI to shape
our brains and mental experiences? And how do we retain our humanity in an
increasingly interconnected world remade by these two technologies?
Malicious use and even hacking of brain wearables is
another threat. From probing for information to intercepting the PINs we think
or type, neural cybersecurity will become essential. Imagine a world where
brain wearables can track what we read and see,
alter perceptions, manipulate emotions or even trigger physical pain.
That’s a world that may soon arrive. Already, companies including China’s Entertech have accumulated
millions of raw EEG data recordings from individuals across the world
using its popular consumer-based brain wearables, along with personal
information and device and app usage by those individuals. Entertech makes
plain in its privacy
policy that it also records personal information, GPS signals, device
sensors, and the computers and services a person is using, including websites they may
be visiting. We must ensure that brain wearables are designed with security in
mind and with device and data safeguards in place to mitigate these risks.
We stand at an inflection point in the beginning of
a brain wearable revolution. We need prudent vigilance and an open and honest
debate about the risks and benefits of neurotechnology, to ensure it is used
responsibly and ethically. With the right safeguards, neurotechnology could be
truly empowering for individuals. To get there will require that we recognize
new digital-age rights to preserve our cognitive liberty—self-determination over
our brains and mental experiences. We must do so now, before the choice is no
longer ours to make.
This is an opinion and analysis article, and the
views expressed by the author or authors are not necessarily those of Scientific
American.
9.27.19
DuckDuckGo,
EFF, and others just launched privacy settings for the whole internet
The new standard, called Global
Privacy Control, will let you
activate a browser setting to keep your data from being sold…:
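Under the Global Privacy Control proposal, a participating browser or extension sends a "Sec-GPC: 1" request header and exposes navigator.globalPrivacyControl to page scripts. A minimal server-side sketch of honoring that signal, assuming Flask, might look like the following; the endpoint and response fields are made up for illustration.

```python
# Minimal sketch (assuming Flask) of honoring the Global Privacy Control header.
# The route and response fields are hypothetical; only the "Sec-GPC" header name
# comes from the GPC proposal.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/article")
def article():
    # A value of "1" signals the visitor's opt-out of sale/sharing of their data.
    opted_out = request.headers.get("Sec-GPC") == "1"
    return jsonify(
        content="...",
        # Only load third-party ad/tracking tags when no GPC opt-out was signalled.
        load_third_party_trackers=not opted_out,
    )

if __name__ == "__main__":
    app.run(port=8000)
```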
What is
Pegasus? How Surveillance Spyware Invades Phones
A
cybersecurity expert explains the NSO Group’s stealthy software
- By Bhanukiran
Gurijala, The
Conversation US on August 9, 2021
End-to-end
encryption is technology that scrambles messages on your phone and unscrambles
them only on the recipients’ phones, which means anyone who intercepts the
messages in between can’t read them. Dropbox, Facebook, Google, Microsoft,
Twitter and Yahoo are among the companies whose apps and services use end-to-end encryption.
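A minimal sketch of the end-to-end idea using PyNaCl (pip install pynacl). This is not the protocol of any of the services named above, which layer ratcheting and authentication on top; it only shows that a relay carrying the ciphertext cannot read it without the recipient's private key.

```python
# Minimal end-to-end encryption sketch with PyNaCl's public-key Box.
from nacl.public import PrivateKey, Box

# Each party generates a keypair; only public keys are ever exchanged.
alice_sk, bob_sk = PrivateKey.generate(), PrivateKey.generate()

# Alice encrypts to Bob's public key.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at noon")

# Any server relaying `ciphertext` sees only opaque bytes, plus metadata:
# who sent it, to whom, when, and how large it is.

# Bob decrypts with his private key and Alice's public key.
plaintext = Box(bob_sk, alice_sk.public_key).decrypt(ciphertext)
print(plaintext)  # b'meet at noon'
```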
This
kind of encryption is good for protecting your privacy, but governments don’t like it because it makes it
difficult for them to spy on people, whether tracking criminals and terrorists
or, as some governments have been known to do, snooping on dissidents,
protesters and journalists. Enter an Israeli technology firm, NSO
Group.
The
company’s flagship product is Pegasus, spyware that
can stealthily enter a smartphone and gain access to everything on it,
including its camera and microphone. Pegasus is designed to infiltrate devices
running Android, Blackberry, iOS and Symbian operating
systems and turn them into surveillance devices. The company
says it sells Pegasus only to governments and
only for the purposes of tracking criminals and terrorists.
HOW IT WORKS
Earlier versions of Pegasus were installed on
smartphones through vulnerabilities in
commonly used apps or by spear-phishing, which involves tricking a targeted
user into clicking a link or opening a document that secretly installs the
software. It can also be installed over a wireless transceiver located near a target, or manually
if an agent can steal the target’s phone.
Since
2019, Pegasus users have been able to install the software on smartphones with
a missed call on WhatsApp, and can even delete the
record of the missed call, making it impossible for the phone’s owner to
know anything is amiss. Another way is by simply sending a message to a user’s
phone that produces no notification.
This
means the latest version of this spyware does not require the smartphone user
to do anything. All that is required for a successful spyware attack and
installation is having a particular vulnerable app or operating system
installed on the device. This is known as a zero-click exploit.
Once
installed, Pegasus can theoretically harvest any data from the device and transmit
it back to the attacker. It can steal photos and videos, recordings, location
records, communications, web searches, passwords, call logs and social media
posts. It also has the capability to activate cameras and microphones for
real-time surveillance without the permission or knowledge of the user.
WHO HAS BEEN
USING PEGASUS AND WHY
NSO
Group says it builds Pegasus solely for governments to use in counterterrorism
and law enforcement work. The company markets it as a targeted spying tool to
track criminals and terrorists and not for mass surveillance. The company does
not disclose its clients.
The earliest reported use of Pegasus was by the
Mexican government in 2011 to track notorious drug baron Joaquín “El Chapo”
Guzmán. The tool was also reportedly used to track people close to murdered Saudi journalist
Jamal Khashoggi.
It
is unclear who or what types of people are being targeted and why.
However, much of the
recent reporting about Pegasus centers around a list of 50,000
phone numbers. The list has been attributed to NSO Group, but the list’s
origins are unclear. Amnesty International in Israel said that the
list contains phone numbers that were marked as “of interest”
to NSO’s various clients, though it’s not known if any of the phones associated
with the numbers have actually been tracked.
A
media consortium, the
Pegasus Project, analyzed the phone numbers on the list and
identified over 1,000 people in over 50 countries. The findings included people
who appear to fall outside of the NSO Group’s restriction to investigations of
criminal and terrorist activity. These include politicians, government workers,
journalists, human rights activists, business executives and Arab royal family
members.
OTHER WAYS YOUR
PHONE CAN BE TRACKED
Pegasus
is breathtaking in its stealth and its seeming ability to take complete control
of someone’s phone, but it’s not the only way people can be spied on through
their phones. Some of the ways phones can aid surveillance and undermine privacy include
location tracking, eavesdropping, malware and
collecting data from sensors.
Governments
and phone companies can track a phone’s location by tracking cell signals from
cell tower transceivers and cell transceiver simulators like the StingRay device. Wi-Fi and Bluetooth signals
can also be used to track phones. In some cases, apps and web
browsers can determine a phone’s location.
Eavesdropping
on communications is harder to accomplish than tracking, but it is possible in
situations in which encryption is weak or lacking. Some types of malware can
compromise privacy by accessing data.
The
National Security Agency has sought agreements with technology companies under
which the companies would give the agency special access into their products
via backdoors,
and has reportedly built backdoors on its own. The companies
say that backdoors defeat the purpose of end-to-end encryption.
The
good news is, depending on who you are, you’re unlikely to be targeted by a
government wielding Pegasus. The bad news is, that fact alone does not
guarantee your privacy.
https://theconversation.com/what-is-pegasus-a-cybersecurity-expert
Your Face Belongs to Us: A
Secretive Startup's Quest to End Privacy as We Know It
by Kashmir Hill
“The dystopian future portrayed in some science-fiction movies is already upon us. Kashmir Hill’s fascinating book brings home the scary implications of this new reality.”—John Carreyrou, author of Bad Blood
Named One of the Best Books of the Year by the Inc. Non-Obvious
Book Awards • Longlisted for the Financial Times and
Schroders Business Book of the Year Award
New York Times tech reporter Kashmir Hill was skeptical when she
got a tip about a mysterious app called Clearview AI that claimed it could, with
99 percent accuracy, identify anyone based on just one snapshot of their face.
The app could supposedly scan a face and, in just seconds, surface
every detail of a person’s online life: their name, social media profiles,
friends and family members, home address, and photos that they might not have
even known existed. If it was everything it claimed to be, it would be the
ultimate surveillance tool, and it would open the door to everything from
stalking to totalitarian state control. Could it be true?
In this riveting account, Hill tracks the improbable rise of Clearview AI,
helmed by Hoan Ton-That, an Australian computer engineer, and Richard Schwartz,
a former Rudy Giuliani advisor, and its astounding collection of billions of
faces from the internet. The company was boosted by a cast of controversial
characters, including conservative provocateur Charles C. Johnson and
billionaire Donald Trump backer Peter Thiel—who all seemed eager to release
this society-altering technology on the public. Google and Facebook
decided that a tool to identify strangers was too radical to release, but
Clearview forged ahead, sharing the app with private investors, pitching it to
businesses, and offering it to thousands of law enforcement agencies around
the world.
Facial recognition technology has been quietly growing more powerful for
decades. This technology has already been used in wrongful arrests in the
United States. Unregulated, it could expand the reach of policing, as it has in
China and Russia, to a terrifying, dystopian level.
Your Face Belongs to Us is a gripping true story about the rise of a
technological superpower and an urgent warning that, in the absence of
vigilance and government regulation, Clearview AI is one of many new
technologies that challenge what Supreme Court Justice Louis Brandeis once
called “the right to be let alone.”
https://www.amazon.com/Your-Face-Belongs-Us-Secretive/dp/0593448561
Means of Control: How the Hidden Alliance of Tech and Government Is Creating a New American Surveillance State
by Byron Tau
https://www.amazon.com/Means-Control-Alliance-Government-Surveillance/dp/0593443225
The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power
by Shoshana Zuboff
In this masterwork of original thinking and research, Shoshana Zuboff provides startling insights into the phenomenon that she has named surveillance capitalism. The stakes could not be higher: a global architecture of behavior modification threatens human nature in the twenty-first century just as industrial capitalism disfigured the natural world in the twentieth.
Zuboff vividly brings to life the consequences as surveillance capitalism advances from Silicon Valley into every economic sector. Vast wealth and power are accumulated in ominous new "behavioral futures markets," where predictions about our behavior are bought and sold, and the production of goods and services is subordinated to a new "means of behavioral modification."
The threat has shifted from a totalitarian Big Brother state to a ubiquitous digital architecture: a "Big Other" operating in the interests of surveillance capital. Here is the crucible of an unprecedented form of power marked by extreme concentrations of knowledge and free from democratic oversight. Zuboff's comprehensive and moving analysis lays bare the threats to twenty-first century society: a controlled "hive" of total connection that seduces with promises of total certainty for maximum profit--at the expense of democracy, freedom, and our human future.
With little resistance from law or society, surveillance capitalism is on the verge of dominating the social order and shaping the digital future--if we let it. https://www.goodreads.com/book/show/26195941-the-age-of-surveillance-capitalism
“Communication in a world of pervasive surveillance”
Sources and methods:
Counter-strategies against pervasive surveillance architecture
Jacob R. Appelbaum
Contents:
- Front matter: List of Figures; List of Tables; List of Algorithms
- 1 Introduction: A fifth level of ethics in mathematics; Thinking about the future; Organization of this thesis
- 2 Background on network protocols: Free Software, Open Hardware, Operational Security; Layers of the Internet; Ethernet networks and the Internet Protocols; The Domain Name System; Multicast Domain Name System (mDNS); Hypertext Transport Protocol (HTTP); Transport Layer Security (TLS); Virtual Private Networks (VPN)
- 3 Background on cryptography: Mathematics as informational self-defense; Notation; Hashing; Symmetric Encryption: block cipher; Symmetric Encryption: stream cipher; Message Authentication Code; Authenticated-Encryption with Associated-Data (AEAD); Non-interactive Key Exchange (NIKE); Verification of public keys; Signatures; Protocols from building blocks
- 4 The Adversary: Zersetzung or Dirty Tricks?; Foundational events and disclosures in surveillance; Summer of Snowden and the post-Snowden Era; Standardization of cryptographic sabotage; XKeyscore; ANT catalog; Conclusions
- 5 The GNU name system: Introduction; Background: Domain Name System (DNS); Security goals; Exemplary Attacker: The NSA's MORECOWBELL and QUANTUMDNS programs; Adversary Model; Domain Name System Security Extensions (DNSSEC); Query name minimization; DNS-over-TLS; DNSCurve; Confidential DNS; Namecoin; The GNU name system; Assessment; Conclusions
- 6 Tiny WireGuard Tweak: Introduction; Realistic adversary concerns; WireGuard overview; Traffic analysis; Security and privacy issues; Blinding flows against mass surveillance; Conclusions
- 7 Vula: Introduction; Background and related work; Threat Model and design considerations; Detailed Protocol Description; Performance; Security Evaluation; Conclusions
- 8 REUNION: Introduction; Background and related work; Introducing REUNION; Threat Model; Security Evaluation; Implementations; Future Work; Conclusions
- Bibliography
https://pure.tue.nl/ws/portalfiles/portal/197416841/20220325_Appelbaum_hf.pdf
Top Cybersecurity Trend Predictions for 2025+: BeyondTrust Edition
For
this edition of our annual cybersecurity trend predictions, we’re sharing our
top prognostications for 2025, as well as a glimpse into the key emergent
trends we foresee taking hold in the remainder of the decade.
Oct
15, 2024
Morey J. Haber, James Maude, …
Predicting the Future Cybersecurity Threats with the Greatest Potential
for Disruption
We’re
nearing the end of 2024 and the midpoint of the roaring twenty-twenties. So far
this decade, we’ve had everything from high-stakes cyberattacks and
world-stopping technological malfunctions to a global pandemic. As we look
ahead to 2025, we need to contemplate the cybersecurity trends coming into
focus and start planning for those yet to take shape.
But
first, let’s take a brief moment to consider where we stand. The cybersecurity
landscape is clearly in another phase of rapid evolution. Last year, AI
(artificial intelligence) achieved significant technological breakthroughs,
drastically altering the course of the threat landscape and pushing
organizations to rethink security strategies. This caused a surge of defense
tools that leverage AI and ML (machine learning) to advance threat detection
and response.
We
are already seeing another technological innovation making its way into
mainstream adoption: quantum computing. For years, this has been on
the distant horizon, but now it finally seems closer to reality. Quantum
computing has the potential to wreak unprecedented levels of disruption, posing
a massive challenge to the traditional cryptographic methods widely deployed
today.
In
recent years, we’ve also witnessed a shift in how threat actors penetrate
environments. More emphasis on identity-based tactics has encouraged
cybersecurity practitioners to reconsider their definitions of “privilege” and
“identity security” and focus their defensive strategies on reducing the blast
radius of compromised accounts. In the midst of all this, political tensions
have risen, making the potential ripples of nation-state cyberattacks more
global-reaching than ever.
With
so much on the horizon, it’s critical for organizations to stay vigilant and
keep their security strategies tuned in to the latest trends. Please join us as
we explore what will redefine the cybersecurity landscape in 2025 and beyond.
Recapping BeyondTrust’s 2024 Cybersecurity Prediction…
Cybersecurity Trends for 2025…
Cybersecurity Trends for The Rest of the Decade…
Conclusion: Don’t Delay Your Security Preparation…: https://www.beyondtrust.com/blog/entry/beyondtrust-cybersecurity-trend-predictions