The popular apps that are SPYING on you:
Cybersecurity experts issue urgent warning over 'data hungry' apps that can
access your location, microphone and data
By JONATHAN CHADWICK
FOR MAILONLINE
Published: 22 July
2025
They're some of
the biggest apps in the world, used by hundreds of millions of people every
day.
But according to a
new investigation, 'data hungry' smartphone apps like Facebook and Instagram ask
for 'shocking' levels of access to your personal data.
Experts at
consumer champion Which? investigated 20 popular apps across social media,
online shopping, fitness and smart home categories.
They found all of
them ask for 'risky' permissions such as access to your location, microphone,
and files on your device – even when they don't need to.
The experts urge
people to be more careful about what exactly they agree to when downloading an
app, rather than mindlessly accepting permissions.
We could be
compromising our privacy when we hastily tap 'agree'.
'Millions of us
rely on apps each day to help with everything from keeping on top of our health
and fitness to doing online shopping,' said Harry Rose, editor of Which?
'While many of
these apps appear to be free to use, our research has shown how users are in
fact paying with their data – often in scarily vast quantities.'
Among social media
apps, Facebook, owned by Meta, was arguably the most keen for user data: it
wanted the highest number of permissions (69 in total, of which six are
considered 'risky'). WhatsApp, also owned by Meta, wanted 66 permissions in
total, six of which are considered risky.
Which? researchers
worked with experts at cybersecurity firm Hexiosec to assess the privacy and
security features of 20 popular apps on an Android handset.
The list included
some of the biggest names in social media (including WhatsApp, Facebook,
Instagram and TikTok), online shopping (Amazon, AliExpress), the smart home
(Samsung SmartThings, Ring Doorbell) and fitness (Strava).
Combined, the 20
apps have been downloaded over 28 billion times worldwide – meaning the average
UK adult is likely to have several of them on their phone at any given
time.
If someone were to
have all 20 downloaded on their device, collectively they would grant a
staggering 882 permissions – potentially giving access to huge amounts of an
individual’s personal data.
Overall, the team
found Chinese app Xiaomi Home asked for a total of 91 permissions – more than
any other app in the study – five of which are described as 'risky'.
Risky permissions
include those that access your microphone, can read files on your device,
or see your precise location (usually referred to as 'fine location').
Such data is a
valuable commodity and may allow firms to target users with 'uncannily accurate
adverts'.
Samsung's SmartThings
app asked for 82 permissions (of which eight are risky), followed by
Facebook (69 permissions, six risky) and WhatsApp (66 permissions, six
risky).
Meta's
photo-sharing app Instagram asked for a total of 56 permissions, of which
four are considered 'risky'. Overall, Xiaomi asked for a total of 91
permissions - more than any other app in the study - five of which are
described as 'risky'.
How to improve
your app privacy
- Check privacy information:
Review any data collection information on the app store listing, including
the permissions an app will request
- Read the privacy policy: You
can find it either on the app store listing or company’s website. If you
don’t want to read the whole thing then focus on the sections on data
collection and sharing
- Limit or revoke permissions: In
Apple iOS and Google Android, you can control what apps can access your
data (tap Settings, and then Apps and Permissions to see what each app can
access)
- Delete: If you aren’t sure about an app, delete it -
and make sure all your account data is deleted, too
Xiaomi Home was
also one of two apps (alongside AliExpress) to send data to China, including to
suspected advertising networks – although this was flagged in the privacy
policy by both.
AliExpress
requested six risky permissions, such as precise location, access to the
microphone and reading files on the device.
AliExpress also
bombarded users with a deluge of marketing emails after download (30 over the
course of a month) but the researchers did not see any specific permission
request from AliExpress to do so.
Temu, another
Chinese-owned online marketplace, also gave a heavy push to sign up to email
marketing – which many users could easily agree to without realising, the
experts reasoned.
Among social media
apps, Facebook was 'the most keen for user data' as it wanted the highest
number of permissions (69 in total, six of which risky), followed by WhatsApp
(66 altogether, six of which risky).
TikTok, meanwhile,
asked for 41 permissions, three of them risky, including the ability to
record audio and view files on the device, while YouTube asked for 47
permissions, four of which were 'risky'.
Overall, 16 of the
20 apps requested a permission that allows apps to create windows on top of
other apps – effectively creating pop-ups on your phone, even if you opted out
of the app sending notifications.
Seven also wanted
a permission that allows an app to start operating when you open your phone
even if you haven't yet interacted with it.
In some cases
there are clear uses for risky permissions – for example the likes of WhatsApp
or Ring Doorbell may need microphone access in order to carry out certain
functions.
But in other cases
the need for risky permissions was less clear cut, according to Which?
For example, four
apps - AliExpress, Facebook, WhatsApp and Strava - requested permission to see
which other apps you have recently used or are currently running.
The researchers
stress that the investigation was conducted on an Android phone and that
permissions may vary on Apple iOS devices.
But we should all
be more careful of tapping "yes" to permissions while mentally on
'autopilot' without really being aware of what we're agreeing to, Mr Rose
said.
'Our research
underscores why it’s so important to check what you’re agreeing to when you
download a new app,' he added.
The full findings
can be read on the Which? website.
In response to the
findings, Meta (which owns WhatsApp, Facebook and Instagram) said none of
its apps 'run the microphone in the background or have any access to it without
user involvement'.
Meta also said
that users must ‘explicitly approve’ in their operating system for the app to
access the microphone for the first time.
A Samsung
spokesperson said: 'All our apps, including SmartThings, are designed to comply
with UK data protection laws and relevant guidance from the Information
Commissioner's Office (ICO).'
Meanwhile, TikTok
said that privacy and security are 'built into every product' it makes. It
added that TikTok 'collects information that users choose to provide, along
with data that supports things like app functionality, security, and overall
user experience'.
Strava said that
the risky permissions it takes, such as precise location, allow it to 'provide
the very service that our users are requesting'. It said that it has
'implemented appropriate guardrails' around how data is 'collected, shared,
processed, and used'.
Amazon said that
device permissions are to provide 'helpful features', such as 'the ability to
visualise products in their home with their device’s camera or search for
products using text-to-speech'. It added: 'We also give customers clear control
over personalised advertising by requesting consent when they visit our UK
store and providing options to opt out or adjust preferences at any
time.'
AliExpress claimed
that the precise location permission is not used in the UK, and the microphone
permission requires user consent. It added: 'We strive to create a platform
where consumers can shop with confidence, knowing that their data is
safeguarded in accordance with the law and our strict privacy policy. We
welcome the findings from Which? as an opportunity to redouble our efforts in
this area.'
Ring said that it
doesn't 'use cookies or trackers on the Ring app for advertising' and all
permissions are used to 'provide user-facing features'. It added: 'We design our
products and services to protect our customers' privacy and security, and to
put our customers in control of their experience. We never sell their personal
data, and we never stop working to keep their information safe.'
A Temu
spokesperson said precise location permission is ‘used to support completing an
address based on GPS location’ but it is not used in the UK market, adding that
it 'handles user data in accordance with local and international regulations
and in line with leading industry practices'.
Google
(representing YouTube), Xiaomi, Impulse and MyFitnessPal did not respond to
requests for comment.
Total permissions by app: see the full table on the Which? website.
How your data is
collected and what you can do about it
07-03-2025
Mobile apps and
social media platforms now let companies gather much more fine-grained
information about people at a lower cost.
You wake up in the
morning and, first thing, you open your weather app. You close that pesky ad
that opens first and check the forecast. You like your weather app, which shows
hourly weather forecasts for your location. And the app is free!
But do you know
why it’s free? Look at the app’s privacy settings. You help keep it free by
allowing it to collect your information, including:
- What devices you use and their IP and media access control (MAC) addresses.
- Information you provide when signing up, such as your name, email address, and home address.
- App settings, such as whether you choose Celsius or Fahrenheit.
- Your interactions with the app, including what content you view and what ads you click.
- Inferences based on your interactions with the app.
- Your location at a given time, including, depending on your settings, continuous tracking.
- What websites or apps you interact with after you use the weather app.
- Information you give to ad vendors.
- Information gleaned by analytics vendors that analyze and optimize the app.
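The categories above can be pictured as a single analytics record. The sketch below is purely illustrative: the field names, values, and structure are assumptions for this example, not any real app's schema.

```python
# Illustrative sketch of the kinds of fields a "free" weather app
# might bundle into one analytics event. All names are hypothetical.

def build_telemetry_event(user, device, interaction):
    """Combine the data categories described above into one record."""
    return {
        # Device identifiers
        "ip_address": device["ip"],
        "mac_address": device["mac"],
        # Sign-up information
        "name": user["name"],
        "email": user["email"],
        # App settings
        "units": user["units"],  # e.g. Celsius vs Fahrenheit
        # Interactions and inferences
        "content_viewed": interaction["content"],
        "ads_clicked": interaction["ads_clicked"],
        # Location at the time of the event
        "location": interaction["location"],
    }

event = build_telemetry_event(
    user={"name": "A. User", "email": "a@example.com", "units": "celsius"},
    device={"ip": "203.0.113.7", "mac": "00:1B:44:11:3A:B7"},
    interaction={"content": ["hourly_forecast"], "ads_clicked": 1,
                 "location": (51.5, -0.12)},
)
print(sorted(event))
```

Each tap in the app can emit a record like this, and the records accumulate into the behavioral profile described below.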
This type of data
collection is standard fare. The app company can use this to customize ads and
content. The more customized and personalized an ad is, the more money it
generates for the app owner. The owner might also sell your data to other
companies.
You might also
check a social media account like Instagram. The subtle price that you pay is,
again, your data. Many “free” mobile apps gather information about you as you
interact with them.
As an associate
professor of electrical and computer engineering and a doctoral student in
computer science, we follow the ways software collects information about
people. Your data allows companies to learn about your habits and exploit them.
It’s no secret
that social media and mobile applications collect information about you. Meta’s
business model depends on it. The company, which operates Facebook, Instagram,
and WhatsApp, is worth $1.48 trillion. Just under 98% of its profits come from
advertising, which leverages user data from more than 7 billion monthly users.
What your data is
worth
Before mobile
phones gained apps and social media became ubiquitous, companies conducted
large-scale demographic surveys to assess how well a product performed and to
get information about the best places to sell it. They used the information to
create coarsely targeted ads that they placed on billboards, print ads, and TV
spots.
Mobile apps and
social media platforms now let companies gather much more fine-grained
information about people at a lower cost. Through apps and social media, people
willingly trade personal information for convenience. In 2007—a year after the
introduction of targeted ads—Facebook made over $153 million, triple the
previous year’s revenue. In the past 17 years, that number has increased by
more than 1,000 times.
Five ways to leave
your data
App and social
media companies collect your data in many ways. Meta is a representative case.
The company’s privacy policy highlights five ways it gathers your data:
First, it collects
the profile information you fill in. Second, it collects the actions you take
on its social media platforms. Third, it collects the people you follow and
friend. Fourth, it keeps track of each phone, tablet, and computer you use to
access its platforms. And fifth, it collects information about how you interact
with apps that corporate partners connect to its platforms. Many apps and
social media platforms follow similar privacy practices.
Your data and
activity
When you create an
account on an app or social media platform, you provide the company that owns
it with information like your age, birth date, identified sex, location, and
workplace. In the early years of Facebook, selling profile information to
advertisers was that company’s main source of revenue. This information is
valuable because it allows advertisers to target specific demographics like
age, identified gender, and location.
And once you start
using an app or social media platform, the company behind it can collect data
about how you use the app or social media. Social media keeps you engaged as
you interact with other people’s posts by liking, commenting or sharing them.
Meanwhile, the social media company gains information about what content you
view and how you communicate with other people.
Advertisers can
find out how much time you spent reading a Facebook post or that you spent a
few more seconds on a particular TikTok video. This activity information tells
advertisers about your interests. Modern algorithms can quickly pick up
subtleties and automatically change the content to engage you in a sponsored
post, a targeted advertisement or general content.
Companies can also
note what devices, including mobile phones, tablets, and computers, you use to
access their apps and social media platforms. This shows advertisers your brand
loyalty, how old your devices are, and how much they’re worth.
Because mobile
devices travel with you, they have access to information about where you’re
going, what you’re doing, and who you’re near. In a lawsuit against Kochava
Inc., the Federal Trade Commission called out the company for selling customer
geolocation data in August 2022, shortly after Roe v. Wade was overturned by
the Supreme Court in Dobbs v. Jackson Women’s Health Organization. Kochava’s
customers, including people who had abortions after the ruling was overturned,
often didn’t know that data tracking their movements was being collected,
according to the commission. The FTC alleged that the data could be used to
identify households.
Kochava has denied
the FTC’s allegations.
Information that
apps can gain from your mobile devices includes anything you have given an app
permission to have, such as your location, who you have in your contact list,
or photos in your gallery.
If you give an app
permission to see where you are while the app is running, for instance, the
platform can access your location anytime the app is running. Providing access
to contacts may provide an app with the phone numbers, names, and emails of all
the people you know.
Cross-application
data collection
Companies can also
gain information about what you do across different apps by acquiring
information collected by other apps and platforms.
This is common
with social media companies, and it allows them to, for example, show you
ads based on what you like or recently looked at on other apps. If you've
searched for something on Amazon and then noticed an ad for it on Instagram,
it's probably because Amazon shared that information with Instagram.
This combined data
collection has made targeted advertising so accurate that people have reported
that they feel like their devices are listening to them.
Companies,
including Google, Meta, X, TikTok, and Snapchat, can build detailed user
profiles based on collected information from all the apps and social media
platforms you use. They use the profiles to show you ads and posts that match
your interests to keep you engaged. They also sell the profile information to
advertisers.
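The cross-app profile building described above can be sketched as merging per-app event logs on a shared advertising identifier. This is an illustrative toy with assumed data shapes, not any company's actual pipeline.

```python
from collections import defaultdict

def build_profiles(app_events):
    """Merge events from many apps into one profile per advertising ID.

    app_events: list of (app_name, ad_id, interest) tuples -- the kind of
    signal (a search, a like, a view) each app reports about a user.
    """
    profiles = defaultdict(lambda: {"apps": set(), "interests": set()})
    for app, ad_id, interest in app_events:
        profiles[ad_id]["apps"].add(app)
        profiles[ad_id]["interests"].add(interest)
    return dict(profiles)

# A search on one app becomes an ad signal everywhere else
# the same advertising ID appears.
events = [
    ("shopping_app", "ad-123", "running shoes"),
    ("social_app",   "ad-123", "fitness videos"),
    ("social_app",   "ad-456", "cooking"),
]
profiles = build_profiles(events)
print(sorted(profiles["ad-123"]["interests"]))
```

The key design point is the join key: as long as two apps report the same identifier, their separate observations collapse into one profile.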
Meanwhile,
researchers have found that Meta and Yandex, a Russian search engine, have
overcome controls in mobile operating system software that ordinarily keep
people's web-browsing data anonymous. Each company puts code on its web pages
that uses local IP connections to pass a person's browsing history, which is
supposed to remain private, to mobile apps installed on that person's phone,
de-anonymizing the data. Yandex has been conducting this tracking since 2017,
while Meta began in September 2024, according to the researchers.
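The localhost technique the researchers describe can be sketched in miniature: a native app listens on a local port, and script on a web page sends the supposedly anonymous browsing session to it over the loopback interface. The port number and message format below are assumptions for illustration only.

```python
import socket
import threading

received = []
ready = threading.Event()

def native_app_listener(port):
    """Simulates a logged-in mobile app listening on the loopback interface."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    ready.set()  # signal that the listener is accepting connections
    conn, _ = srv.accept()
    received.append(conn.recv(1024).decode())
    conn.close()
    srv.close()

def web_page_script(port, browsing_id):
    """Simulates page script sending a browsing-session ID over loopback."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(browsing_id.encode())

PORT = 18765  # hypothetical port, chosen only for this sketch
t = threading.Thread(target=native_app_listener, args=(PORT,))
t.start()
ready.wait()  # don't connect before the "app" is listening
web_page_script(PORT, "web-session-abc123")
t.join()
print(received)  # the app can now tie this web session to a real account
```

Because the browser and the native app sit on the same device, this loopback hop never leaves the phone, which is why operating-system controls aimed at network-level tracking did not catch it.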
What you can do
about it
If you use apps
that collect your data in some way, including those that give you directions,
track your workouts, or help you contact someone, or if you use social media
platforms, your privacy is at risk.
Aside from
entirely abandoning modern technology, there are several steps you can take to
limit access, at least in part, to your private information.
Read the privacy
policy of each app or social media platform you use. Although privacy policy
documents can be long, tedious, and sometimes hard to read, they explain how
social media platforms collect, process, store, and share your data.
Remove unnecessary
permissions from mobile apps to limit the amount of information that
applications can gather from you.
Be aware of the
privacy settings that might be offered by the apps or social media platforms
you use, including any setting that allows your personal data to affect your
experience or shares information about you with other users or applications.
These privacy
settings can give you some control. We recommend that you disable “off-app
activity” and “personalization” settings. “Off-app activity” allows an app to
record which other apps are installed on your phone and what you do on them.
Personalization settings allow an app to use your data to tailor what it shows
you, including advertisements.
Review and update
these settings regularly because permissions sometimes change when apps or your
phone update. App updates may also add new features that can collect your data.
Phone updates may also give apps new ways to collect your data or add new ways
to preserve your privacy.
Use private
browser windows or reputable virtual private network (VPN) software when using
apps that connect to the internet and social media platforms. Private browsers
don't store any account information, which limits the information that can be
collected. VPNs route your traffic through another server, masking your
device's IP address so that apps and platforms can't easily determine your
location.
Finally, ask
yourself whether you really need every app that’s on your phone. And when using
social media, consider how much information you want to reveal about yourself
in liking and commenting on posts, sharing updates about your life, revealing
locations you visited, and following celebrities you like.
https://www.fastcompany.com/91361508/social-media-apps-data-collection-privacy
Data Collection
Basics and Available Resources
Commission
proposes measures to boost data sharing and support European data spaces
To better exploit the
potential of ever-growing data in a trustworthy European framework, the
Commission today proposes
new rules on data governance. The Regulation will facilitate data sharing
across the EU and between sectors to create wealth for society, increase
control and trust of both citizens and companies regarding their data, and
offer an alternative European model to data handling practice of major tech
platforms.
The amount of data generated
by public bodies, businesses and citizens is constantly growing. It is expected
to multiply by five between 2018 and 2025. These new rules will allow this data
to be harnessed and will pave the way for sectoral European data
spaces to benefit society, citizens and companies. In the
Commission's data
strategy of February this year, nine such data spaces have been
proposed, ranging from industry to energy, and from health to the European
Green Deal. They will, for example, contribute to the green
transition by improving the management of energy consumption, make delivery of
personalised medicine a reality, and facilitate access to public services.
Executive Vice-President for
A Europe Fit for the Digital Age, Margrethe Vestager, said: “You
don't have to share all data. But if you do and data is sensitive you should be
able to do in a manner where data can be trusted and protected. We want to give
business and citizens the tools to stay in control of data. And to build trust
that data is handled in line with European values and fundamental rights.”
Commissioner for Internal
Market, Thierry Breton, said: “We are defining today a truly
European approach to data sharing. Our new regulation will enable trust and
facilitate the flow of data across sectors and Member States while putting all
those who generate data in the driving seat. With the ever-growing role of
industrial data in our economy, Europe needs an open yet sovereign Single
Market for data. Flanked by the right investments and key infrastructures, our
regulation will help Europe become the world's number one data continent.”
Delivering on the
announcement in the data
strategy, the Regulation will create the basis for a new European way of
data governance that is in line with EU values and principles, such as personal
data protection (GDPR), consumer protection and competition rules. It offers an
alternative model to the data-handling practices of the big tech platforms,
which can acquire a high degree of market power because of their business
models that imply control of large amounts of data. This new approach proposes
a model based on the neutrality and transparency of data intermediaries, which
are organisers of data sharing or pooling, to increase trust. To ensure this
neutrality, the data-sharing intermediary cannot deal in the data on its own
account (e.g. by selling it to another company or using it to develop its own
product based on this data) and will have to comply with strict requirements.
The Regulation includes:
- A number of measures to increase trust
in data sharing, as the lack of trust is currently a major obstacle
and results in high costs.
- New EU rules on neutrality to
allow novel data intermediaries to function as trustworthy organisers of
data sharing.
- Measures to facilitate the reuse of
certain data held by the public sector. For example, the reuse of
health data could advance research to find cures for rare or chronic
diseases.
- Means to give Europeans
control on the use of the data they generate, by making it easier
and safer for companies and individuals to voluntarily make their data
available for the wider common good under clear conditions.
Background
Today's proposal is the
first deliverable under the European
strategy for data, which aims to unlock the economic and societal potential
of data and technologies like Artificial Intelligence, while respecting EU
rules and values (for example in the area of data protection, and respect of
intellectual property and trade secrets). The strategy will build on the size
of the single market as a space where data can flow within the EU and across
sectors in line with clear, practical and fair rules for access and reuse.
Today's proposal also supports wider international sharing of data, under
conditions that ensure compliance with the European public interest and the
legitimate interests of data providers.
More dedicated proposals on
data spaces are expected to follow in 2021, complemented by a Data Act to
foster data sharing among businesses, and between business and governments.
https://ec.europa.eu/commission/presscorner/detail/en/IP_20_2102
Generative AI increases the threat to privacy by
giving the government access to sensitive information beyond what it could
collect through court-authorized surveillance.
The Office of the Director of National Intelligence report shows the
breathtaking scale and invasive nature of the consumer data market and how that
market directly enables wholesale surveillance of people. The data includes not
only where you’ve been and who you’re connected to, but the nature of your
beliefs and predictions about what you might do in the future. The report
underscores the grave risks the purchase of this data poses, and urges the
intelligence community to adopt internal guidelines to address these problems.
As a privacy, electronic
surveillance, and technology law attorney, researcher, and law
professor, I have spent years researching, writing, and
advising about the legal issues the report highlights.
These issues are increasingly
urgent. Today's commercially available information, coupled with
now-ubiquitous decision-making artificial intelligence and generative AI like
ChatGPT, significantly increases the threat to privacy and civil liberties by
giving the government access to sensitive personal information beyond even what
it could collect through court-authorized surveillance.
WHAT IS COMMERCIALLY
AVAILABLE INFORMATION?
The drafters of the report
take the position that commercially available information is a subset of
publicly available information. The distinction between the two is significant
from a legal perspective. Publicly available information is information that is
already in the public domain. You could find it by doing a little online
searching.
Commercially available
information is different. It is personal information collected from a dizzying
array of sources by commercial data brokers that aggregate and analyze it, then
make it available for purchase by others, including governments. Some of that
information is private, confidential, or otherwise legally protected.
The sources and types of data
for commercially available information are mind-bogglingly vast. They include
public records and other publicly available information. But far more
information comes from the nearly ubiquitous internet-connected devices in
people’s lives, like cellphones, smart-home
systems, cars, and fitness trackers. These all harness data from
sophisticated, embedded sensors,
cameras, and microphones. Sources also include data from apps, online
activity, texts, and emails, and even healthcare
provider websites.
Types of data
include location, gender, and sexual orientation, religious and
political views and affiliations, weight
and blood pressure, speech patterns, emotional states, behavioral information
about myriad activities, shopping patterns, and family and friends.
This data provides companies
and governments a window into the “Internet of Behaviors,”
a combination of data collection and analysis aimed at understanding and
predicting people’s behavior. It pulls together a wide range of data, including
location and activities, and uses scientific and technological approaches,
including psychology and machine learning, to analyze that data. The Internet
of Behaviors provides a map of what each person has done, is doing, and is
expected to do, and provides a means to influence a
person’s behavior.
BETTER, CHEAPER, AND
UNRESTRICTED
The rich depths of
commercially available information, analyzed with powerful AI, provide
unprecedented power, intelligence, and investigative insights. The information
is a cost-effective way to surveil virtually everyone, plus it provides far
more sophisticated data than traditional electronic surveillance tools or
methods like wiretapping and location tracking.
Government use of electronic-surveillance
tools is extensively regulated
by federal and state laws. The U.S. Supreme Court has ruled that the
Constitution’s Fourth
Amendment, which prohibits unreasonable searches and seizures, requires a
warrant for a wide range of digital searches. These include wiretapping
or intercepting
a person’s calls, texts, or emails; using GPS or cellular
location information to track a person; or searching a
person’s cellphone.
Complying with these laws
takes time and money, plus electronic-surveillance law restricts what, when,
and how data can be collected. Commercially available information is cheaper to
obtain, provides far richer data and analysis, and is subject to little
oversight or restriction compared to when the same data is collected directly
by the government.
THE THREATS
Technology and the burgeoning
volume of commercially available information allow various forms of the
information to be combined and analyzed in new ways to understand all aspects
of your life, including preferences and desires.
The Office of the Director of
National Intelligence report warns that the increasing volume and widespread
availability of commercially available information poses “significant threats
to privacy and civil liberties.” It increases the power of the government to
surveil its citizens outside the bounds of law, and it opens the door to the
government using that data in potentially unlawful ways. This could
include using
location data obtained via commercially available information rather than a
warrant to investigate and prosecute someone for abortion.
The report also captures both
how widespread government purchases of commercially available information are
and how haphazard government practices around the use of the information are.
The purchases are so pervasive and agencies’ practices so poorly documented
that the Office of the Director of National Intelligence cannot even fully
determine how much and what types of information agencies are purchasing, and
what the various agencies are doing with the data.
IS IT LEGAL?
The question of whether it’s
legal for government agencies to purchase commercially available information is
complicated by the array of sources and complex mix of data it contains.
There is no legal prohibition
on the government collecting information already disclosed to the public or
otherwise publicly available. But the nonpublic information listed in the
declassified report includes data that U.S. law typically protects. The
nonpublic information’s mix of private, sensitive, confidential, or otherwise
lawfully protected data makes collection a legal gray area.
Despite decades of
increasingly sophisticated and invasive commercial data aggregation, Congress
has not passed a federal data privacy law. The lack of federal regulation
around data creates
a loophole for government agencies to evade electronic surveillance
law. It also allows agencies to amass enormous databases that AI systems learn
from and use in often unrestricted ways. The resulting erosion of privacy has
been a concern for more than a
decade.
THROTTLING THE DATA
PIPELINE
The Office of the Director of
National Intelligence report acknowledges the stunning loophole that
commercially available information provides for government surveillance: “The
government would never have been permitted to compel billions of people to
carry location tracking devices on their persons at all times, to log and track
most of their social interactions, or to keep flawless records of all their
reading habits. Yet smartphones, connected cars, web tracking technologies, the
Internet of Things, and other innovations have had this effect without
government participation.”
However, it isn’t entirely
correct to say “without government participation.” The legislative branch could
have prevented this situation by enacting data privacy laws, more tightly
regulating commercial data practices, and providing oversight in AI
development. Congress could yet address the problem. Representative Ted Lieu
has introduced the bipartisan
proposal for a National AI Commission, and Senator Chuck Schumer has
proposed an
AI regulation framework.
Effective data-privacy laws
would keep your personal information safer from government agencies and
corporations, and responsible AI regulation would block them from manipulating
you.
Decentralized Society: Finding Web3's Soul
by EG Weyl · 2022
“We call this richer, pluralistic ecosystem ‘Decentralized Society’ (DeSoc)—a co-determined sociality, where Souls and communities come ...”
Cracking
Down on Dissent, Russia Seeds a Surveillance Supply Chain
Russia is incubating a
cottage industry of new digital surveillance tools to suppress domestic
opposition to the war in Ukraine. The tech may also be sold overseas.
By Aaron Krolik, Paul Mozur and Adam Satariano
July 3, 2023
As the war in Ukraine unfolded last year, Russia’s best digital spies turned to new tools to fight an enemy on another front: those inside its own borders who opposed the war.
To
aid an internal crackdown, Russian authorities had amassed an arsenal of
technologies to track the online lives of citizens. After it invaded Ukraine,
its demand grew for more surveillance tools. That helped stoke a cottage
industry of tech contractors, which built products that have become a powerful
— and novel — means of digital surveillance.
The
technologies have given the police and Russia’s Federal Security Service,
better known as the F.S.B., access to a buffet of snooping capabilities focused
on the day-to-day use of phones and websites. The tools offer ways to track
certain kinds of activity on encrypted apps like WhatsApp and Signal, monitor
the locations of phones, identify anonymous social media users and break into
people’s accounts, according to documents from Russian surveillance providers
obtained by The New York Times, as well as security experts, digital activists
and a person involved with the country’s digital surveillance operations.
President Vladimir V. Putin is leaning more on technology to
wield political power as Russia faces military setbacks in Ukraine, bruising
economic sanctions and leadership challenges after an uprising led by Yevgeny V. Prigozhin, the commander of the Wagner paramilitary
group. In doing so, Russia — which once lagged authoritarian regimes like China
and Iran in using modern technology to exert control — is quickly catching up.
The Federal Security Service building on Lubyanka Square in Moscow in May. The F.S.B. and other Russian authorities want stronger technologies to track the online lives of citizens. (Credit: Maxim Shemetov/Reuters)
“It’s
made people very paranoid, because if you communicate with anyone in Russia,
you can’t be sure whether it’s secure or not. They are monitoring traffic very
actively,” said Alena Popova, a Russian opposition political figure and digital
rights activist. “It used to be only for activists. Now they have expanded it
to anyone who disagrees with the war.”
The
effort has fed the coffers of a constellation of relatively unknown Russian
technology firms. Many are owned by Citadel Group, a business once partially
controlled by Alisher Usmanov, who was a target of European Union sanctions as one of Mr. Putin’s “favorite
oligarchs.” Some of the companies are trying to expand overseas, raising the
risk that the technologies do not remain inside Russia.
The
firms — with names like MFI Soft, Vas Experts and Protei — generally got their
start building pieces of Russia’s invasive telecom wiretapping system before producing more advanced
tools for the country’s intelligence services.
Simple-to-use
software that plugs directly into the telecommunications infrastructure now
provides a Swiss Army knife of spying possibilities, according to the
documents, which include engineering schematics, emails and screenshots. The
Times obtained hundreds of files from a person with access to the internal
records, about 40 of which detailed the surveillance tools.
One
program outlined in the materials can identify when people make voice calls or
send files on encrypted chat apps such as Telegram, Signal and WhatsApp. The
software cannot intercept specific messages, but can determine whether someone
is using multiple phones, map their relationship network by tracking
communications with others, and triangulate what phones have been in certain
locations on a given day. Another product can collect passwords entered on
unencrypted websites.
These
technologies complement other Russian efforts to shape public opinion and
stifle dissent, like a propaganda blitz on state media, more robust internet censorship and new efforts to collect data on citizens and encourage them to report
social media posts that undermine the war.
President Vladimir V. Putin of Russia with Alisher Usmanov in 2018. Mr. Usmanov once controlled Citadel Group, a conglomerate that owns many of the firms building surveillance technology. (Credit: Sputnik/Reuters)
They
add up to the beginnings of an off-the-shelf tool kit for autocrats who wish to
gain control of what is said and done online. One document outlining the
capabilities of various tech providers referred to a “wiretap market,” a supply
chain of equipment and software that pushes the limits of digital mass
surveillance.
The
authorities are “essentially incubating a new cohort of Russian companies that
have sprung up as a result of the state’s repressive interests,” said Adrian
Shahbaz, a vice president of research and analysis at the pro-democracy
advocacy group Freedom House, who studies online oppression. “The spillover
effects will be felt first in the surrounding region, then potentially the world.”
In one English-language marketing document aimed at overseas customers, a diagram depicts a Russian surveillance company’s phone tracking capabilities.
Beyond the ‘Wiretap
Market’
Over
the past two decades, Russian leaders struggled to control the internet. To
remedy that, they ordered up systems to eavesdrop on phone calls and
unencrypted text messages. Then they demanded that providers of internet
services store records of all internet traffic.
The
expanding program — formally known as the System for Operative Investigative Activities, or SORM — was
an imperfect means of surveillance. Russia’s telecom providers often
incompletely installed and updated the technologies, meaning the system did not
always work properly. The volume of data pouring in could be overwhelming and
unusable.
At
first, the technology was used against political rivals like supporters
of Aleksei A. Navalny, the jailed opposition leader. Demand for the
tools increased after the invasion of Ukraine, digital rights experts said.
Russian authorities turned to local tech companies that built the old
surveillance systems and asked for more.
The
push benefited companies like Citadel, which had bought many of Russia’s
biggest makers of digital wiretapping equipment and controls about 60 to 80
percent of the market for telecommunications monitoring technology, according
to the U.S. State Department. The United States announced sanctions against Citadel and its current owner, Anton
Cherepennikov, in February.
“Sectors
connected to the military and communications are getting a lot of funding right
now as they adapt to new demands,” said Ksenia Ermoshina, a senior researcher
who studies Russian surveillance companies with Citizen Lab, a research
institute at the University of Toronto.
The
new technologies give Russia’s security services a granular view of the
internet. A tracking system from one Citadel subsidiary, MFI Soft, helps
display information about telecom subscribers, along with statistical
breakdowns of their internet traffic, on a specialized control panel for use by
regional F.S.B. officers, according to one chart.
Another
MFI Soft tool, NetBeholder, can map the locations of two phones over the course
of a day to discern whether they crossed paths, indicating a potential meeting
between their owners.
A
different feature, which uses location tracking to check whether several phones
are frequently in the same area, deduces whether someone might be using two or
more phones. With full access to telecom network subscriber information,
NetBeholder’s system can also pinpoint the region in Russia each user is from or
what country a foreigner comes from.
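The location matching described above boils down to a join on place and time. A minimal sketch of the general technique, with an invented data model; this is not the actual product's logic:

```python
from collections import defaultdict
from itertools import combinations

# Toy co-location detection: each ping is (phone_id, hour_bucket, tower_id).
# Pairs of phones repeatedly seen at the same tower in the same hour suggest
# a meeting; near-constant co-location suggests one person with two phones.
pings = [
    ("A", 9, "tower1"), ("B", 9, "tower1"),    # A and B together at 9:00
    ("A", 14, "tower7"), ("B", 14, "tower7"),  # ...and again at 14:00
    ("C", 9, "tower3"),
]

def colocations(pings):
    """Count how often each pair of phones shares a tower in the same hour."""
    present = defaultdict(set)            # (hour, tower) -> phones seen there
    for phone, hour, tower in pings:
        present[(hour, tower)].add(phone)
    counts = defaultdict(int)
    for phones in present.values():
        for a, b in combinations(sorted(phones), 2):
            counts[(a, b)] += 1
    return dict(counts)

print(colocations(pings))  # {('A', 'B'): 2}
```

The same counting logic, run over telecom-scale logs instead of five rows, is what makes bulk metadata so revealing.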
Protei,
another company, offers products that provide voice-to-text transcription for
intercepted phone calls and tools for identifying “suspicious behavior,”
according to one document.
Russia’s
enormous data collection and the new tools make for a “killer combo,” said Ms.
Ermoshina, who added that such capabilities are increasingly widespread across
the country.
Citadel
and Protei did not respond to requests for comment. A spokesman for Mr. Usmanov
said he “has not participated in any management decisions for several years”
involving the parent company, called USM, that owned Citadel until 2022. The
spokesman said Mr. Usmanov owns 49 percent of USM, which sold Citadel because
surveillance technology was never within the firm’s “sphere of interest.”
VAS
Experts said the need for its tools had “increased due to the complex
geopolitical situation” and volume of threats inside Russia. It said it
“develops telecom products which include tools for lawful interception and which
are used by F.S.B. officers who fight against terrorism,” adding that if the
technology “will save at least one life and people well-being then we work for
a reason.”
A diagram from one corporate document shows how data is collected by an internet service provider and funneled to a local branch of the F.S.B.
No Way to Mask
As
the authorities have clamped down, some citizens have turned to encrypted
messaging apps to communicate. Yet security services have also found a way to
track those conversations, according to files reviewed by The Times.
One
feature of NetBeholder harnesses a technique known as deep-packet inspection,
which is used by telecom service providers to analyze where their traffic is
going. Akin to mapping the currents of water in a stream, the software cannot
intercept the contents of messages but can identify what data is flowing where.
That
means it can pinpoint when someone sends a file or connects on a voice call on
encrypted apps like WhatsApp, Signal or Telegram. This gives the F.S.B. access
to important metadata, which is the general information about a communication
such as who is talking to whom, when and where, as well as if a file is
attached to a message.
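The flow labeling described above can be illustrated with a toy classifier that looks only at network metadata. The prefix-to-app table below is a simplified stand-in for real traffic signatures, not verified addresses:

```python
# Toy illustration of deep-packet-inspection-style flow labeling: a box at
# the ISP never decrypts payloads; it matches network-level fingerprints
# such as destination address ranges. Prefixes here are placeholders.

APP_SIGNATURES = {
    "157.240.": "WhatsApp/Meta",
    "149.154.": "Telegram",
}

def classify_flow(dst_ip: str) -> str:
    """Guess the app behind an encrypted flow from its destination alone."""
    for prefix, app in APP_SIGNATURES.items():
        if dst_ip.startswith(prefix):
            return app
    return "unknown"

print(classify_flow("157.240.1.53"))  # WhatsApp/Meta
print(classify_flow("203.0.113.9"))   # unknown
```

Because the match happens on addressing metadata, encryption of the message contents is irrelevant to this kind of identification.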
To
obtain such information in the past, governments had to request it from
app makers like Meta, which owns WhatsApp. Those companies then decided whether to provide it.
The
new tools have alarmed security experts and the makers of the encrypted
services. While many knew such products were theoretically possible, it was not
known that they were now being made by Russian contractors, security experts
said.
Some
of the encrypted app tools and other surveillance technologies have begun
spreading beyond Russia. Marketing documents show efforts to sell the products
in Eastern Europe and Central Asia, as well as Africa, the Middle East and
South America. In January, Citizen Lab reported that Protei equipment
was used by an Iranian telecom company for logging internet usage and blocking
websites. Ms. Ermoshina said the systems have also been seen in
Russian-occupied areas of Ukraine.
For
the makers of Signal, Telegram and WhatsApp, there are few defenses against
such tracking. That’s because the authorities are capturing data from internet
service providers with a bird’s-eye view of the network. Encryption can mask
the specific messages being shared, but cannot block the record of the
exchange.
“Signal
wasn’t designed to hide the fact that you’re using Signal from your own internet
service provider,” Meredith Whittaker, the president of the Signal Foundation, said
in a statement. She called for people worried about such tracking to use a
feature that sends traffic through a different server to obfuscate its origin
and destination.
In
a statement, Telegram, which does not encrypt all messages by default, also said
nothing could be done to mask traffic going to and from the chat apps, but said
people could use features it had created to make Telegram traffic harder to
identify and follow. WhatsApp said in a statement that the surveillance tools
were a “pressing threat to people’s privacy globally” and that it would
continue protecting private conversations.
The
new tools will likely shift the best practices of those who wish to disguise
their online behavior. In Russia, the existence of a digital exchange between a
suspicious person and someone else can trigger a deeper investigation or even
arrest, people familiar with the process said.
Mr.
Shahbaz, the Freedom House researcher, said he expected the Russian firms to
eventually become rivals to the usual purveyors of surveillance tools.
“China
is the pinnacle of digital authoritarianism,” he said. “But there has been a
concerted effort in Russia to overhaul the country’s internet regulations to
more closely resemble China. Russia will emerge as a competitor to Chinese
companies.”
'The Perfect
Police State: An Undercover Odyssey into China's Terrifying Surveillance
Dystopia of the Future' (Public Affairs, June 29, 2021)
by Geoffrey
Cain
A riveting
investigation into how a restive region of China became the site of a nightmare
Orwellian social experiment—the definitive police state—and the global technology
giants that made it possible
Blocked from facts and truth, under constant surveillance, surrounded by a
hostile alien police force: Xinjiang’s Uyghur population has become cursed,
oppressed, outcast. Most citizens cannot discern between enemy and friend.
Social trust has been destroyed systematically. Friends betray each other,
bosses snitch on employees, teachers expose their students, and children turn
on their parents. Everyone is dependent on a government that nonetheless treats
them with suspicion and contempt. Welcome to the Perfect Police State.
Using the haunting story of one young woman’s attempt to escape the vicious
technological dystopia, his own reporting from Xinjiang, and extensive
firsthand testimony from exiles, Geoffrey Cain reveals the extraordinary
intrusiveness and power of the tech surveillance giants and the chilling
implications for all our futures.
07-27-22
Yes, you are being
watched, even if no one is looking for you
It’s important that we
recognize the extent to which physical and digital tracking work together.
BY PETER KRAPP
The U.S. has the largest number of surveillance cameras per person in
the world. Cameras are omnipresent on city streets and in hotels, restaurants,
malls and offices. They’re also used to screen passengers for the Transportation
Security Administration. And then there are smart doorbells and other home security
cameras.
Most Americans are
aware of video surveillance of public spaces. Likewise, most people know about
online tracking–and want Congress to do something about it. But as a
researcher who studies digital culture and secret communications, I
believe that to understand how pervasive surveillance is, it’s important to
recognize how physical and digital tracking work together.
Databases can
correlate location data from smartphones, the growing number
of private cameras, license plate readers on police cruisers and
toll roads, and facial recognition technology, so if law enforcement
wants to track where you are and where you’ve been, they can. They do need a
warrant to use cellphone search equipment: with one in hand, connecting your
device to a mobile device forensic tool lets them extract and analyze all your
data.
However, private data brokers also track this kind of data
and help
surveil citizens–without a warrant. There is a large market for
personal data, compiled from information people volunteer, information people
unwittingly yield–for example, via mobile apps–and information that is stolen in
data breaches. Among the customers for this largely unregulated data are federal, state and local law enforcement agencies.
HOW YOU ARE TRACKED
Whether or not you
pass under the gaze of a surveillance camera or license plate reader, you are
tracked by your mobile phone. GPS reports your location to weather and mapping
apps, Wi-Fi positioning logs where you are, and cell-tower triangulation tracks your
phone. Bluetooth can
identify and track your smartphone, and not just for COVID-19 contact tracing,
Apple’s “Find My” service, or connecting headphones.
People volunteer their
locations for ride-sharing or for games like Pokemon Go or Ingress, but apps
can also collect and share location without your
knowledge. Many late-model cars feature telematics that track locations–for
example, OnStar or Bluelink. All this makes opting out
impractical.
The same thing is true
online. Most websites feature ad trackers and
third-party cookies, which are stored in your browser whenever you
visit a site. They identify you when you visit other sites so advertisers can
follow you around. Some websites also use key logging,
which monitors what you type into a page before hitting submit. Similarly,
session recording monitors mouse movements, clicks, scrolling and typing, even
if you don’t click “submit.”
Ad trackers know when
you browsed where, which browser you used, and what your device’s internet
address is. Google and Facebook are among the main
beneficiaries, but there are many data brokers slicing and dicing such information by
religion, ethnicity, political affiliations, social media profiles, income and
medical history for profit.
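How a single third-party cookie lets one tracker correlate visits across unrelated sites can be sketched in a few lines. The domains are invented and a plain dict stands in for a real browser cookie jar:

```python
# Toy third-party tracker: one cookie ID, set on the tracker's own domain,
# is sent back from every site that embeds its pixel or script, so a single
# server-side log links visits across publishers.

tracker_log = []  # the tracker's view: (user_id, site) pairs

def serve_pixel(site, cookie_jar):
    """Simulate the tracker's pixel request from an embedding site."""
    uid = cookie_jar.setdefault("tracker_uid", f"uid-{len(tracker_log) + 1:04d}")
    tracker_log.append((uid, site))
    return uid

browser_cookies = {}  # persists across sites, like a real cookie jar
for site in ["news.example", "shop.example", "health.example"]:
    serve_pixel(site, browser_cookies)

print(tracker_log)
# [('uid-0001', 'news.example'), ('uid-0001', 'shop.example'), ('uid-0001', 'health.example')]
```

One stable identifier across many first-party contexts is the whole trick; blocking third-party cookies breaks exactly this linkage.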
BIG BROTHER IN THE 21ST CENTURY
People may implicitly
consent to some loss of privacy in the interest of perceived or real
security–for example, in stadiums, on the road and at airports, or in return
for cheaper online services. But these trade-offs benefit individuals far less
than the companies aggregating data. Many Americans are suspicious of
government censuses, yet they willingly share their jogging
routines on apps like Strava, which has revealed sensitive and secret military data.
In the post-Roe
v. Wade legal environment, there are concerns not only about period tracking apps but about correlating data on physical movements with
online searches and phone data. Legislation like the recent Texas Senate Bill 8 anti-abortion law invokes
“private individual enforcement mechanisms,” raising questions about who
gets access to tracking data.
In 2019, the Missouri Department of Health stored data about
the periods of patients at the state’s lone Planned Parenthood clinic,
correlated with state medical records. Communications metadata can reveal who you are in touch with,
when you were where, and who else was there–whether they are in your contacts
or not.
Location data from
apps on hundreds of millions of phones lets the Department of Homeland Security track people.
Health wearables pose similar risks, and medical
experts note a lack of
awareness about the security of data they collect. Note the
resemblance of your Fitbit or smartwatch to ankle bracelets people wear during
court-ordered monitoring.
The most pervasive
user of tracking in the U.S. is Immigration and Customs Enforcement (ICE),
which amassed a vast amount of information without
judicial, legislative or public oversight. Georgetown University Law Center’s
Center on Privacy and Technology reported on how ICE
searched the driver’s license photographs of 32% of all adults
in the U.S., tracked cars in cities home to 70% of adults, and updated address
records for 74% of adults when those people activated new utility accounts.
NO ONE IS WATCHING THE WATCHERS
Nobody expects to be
invisible on streets, at borders, or in shopping centers. But who has access to
all that surveillance data, and how long is it stored? There is no single U.S. privacy law at the federal
level, and states cope with a regulatory patchwork; only five
states–California, Colorado, Connecticut, Utah and Virginia–have privacy laws.
It is possible
to limit location tracking on your phone, but not
to avoid it completely. Data brokers are supposed to mask your personally identifiable data before selling it.
But this “anonymization” is meaningless since individuals are
easily identified by cross-referencing additional data sets. This makes it easy
for bounty hunters and stalkers to abuse the system.
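The cross-referencing attack described above is easy to demonstrate. A minimal sketch with invented records, joining an "anonymized" dataset to a public roll on quasi-identifiers:

```python
# Linkage attack sketch: an "anonymized" dataset keeps quasi-identifiers
# (ZIP, birth year, sex) that, joined against a public record with names,
# re-identify individuals. All data below is invented for illustration.

anonymized = [  # names removed, nominally safe to sell
    {"zip": "90210", "birth_year": 1984, "sex": "F", "diagnosis": "asthma"},
    {"zip": "10001", "birth_year": 1975, "sex": "M", "diagnosis": "diabetes"},
]
public_roll = [  # e.g. a voter roll, with names attached
    {"name": "Alice Smith", "zip": "90210", "birth_year": 1984, "sex": "F"},
    {"name": "Bob Jones", "zip": "10001", "birth_year": 1975, "sex": "M"},
]

def reidentify(anon, roll, keys=("zip", "birth_year", "sex")):
    """Attach names to 'anonymous' rows whose quasi-identifiers are unique."""
    matches = []
    for row in anon:
        hits = [p for p in roll if all(p[k] == row[k] for k in keys)]
        if len(hits) == 1:  # the combination singles out exactly one person
            matches.append((hits[0]["name"], row["diagnosis"]))
    return matches

print(reidentify(anonymized, public_roll))
# [('Alice Smith', 'asthma'), ('Bob Jones', 'diabetes')]
```

Even a handful of coarse attributes is often unique in combination, which is why stripping names alone does not anonymize a dataset.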
The biggest risk to
most people arises when there is a data breach, which is happening more often – whether
it is a leaky app or careless hotel chain, a DMV data sale or a compromised credit bureau, or indeed a data brokering middleman whose cloud storage is hacked.
This illicit flow of
data not only puts fuzzy notions of privacy in peril, but may put
your addresses and passport numbers, biometric data and social media profiles,
credit card numbers and dating profiles, health and insurance information, and
more on
sale.
https://www.fastcompany.com/90772483/yes-you-are-being-watched-even-if-no-one-is-looking-for-you
Chatbot data cannot fall into the hands of big tech
Wearable Brain Devices Will Challenge Our Mental
Privacy
- By Nita A. Farahany on March 27, 2023
A last bastion of privacy, our brains have remained
inviolate, even as sensors now record our heartbeats, breaths, steps and sleep.
All that is about to change. An avalanche of brain-tracking devices—earbuds,
headphones, headbands, watches and even wearable tattoos—will soon enter the
market, promising to transform our lives. And threatening to breach the refuge
of our minds.
Tech titans Meta, Snap, Microsoft and Apple are
already investing heavily in brain wearables. They aim to embed brain sensors
into smart watches, earbuds, headsets and sleep aids. Integrating them into our
everyday lives could revolutionize
health care, enabling early diagnosis and personalized treatment of
conditions such as depression, epilepsy and
even cognitive
decline. Brain sensors could improve our ability to meditate, focus and
even communicate with a seamless technological telepathy—using the power of
thoughts and emotion to drive our interaction with augmented reality (AR) and
virtual reality (VR) headsets, or even type
on virtual keyboards with our minds.
But brain wearables also pose very real risks to
mental privacy, freedom of thought and self-determination. As these
devices proliferate, they will generate vast amounts of neural data, creating
an intimate window into our brain states, emotions and even memories. We need
the individual power to shutter this new view into our inner selves.
Employers already seek out such data,
tracking worker fatigue
levels and offering brain
wellness programs to mitigate stress, via platforms that give them
unprecedented access to employees’ brains. Cognitive and emotional testing
based on neuroscience is becoming
a new job screening norm, revealing personality aspects that may have little
to do with a job. In China, train
conductors of the Beijing-Shanghai line, the busiest of its kind in the
world, wear brain sensors throughout their work day. There are even reports of
Chinese employees being sent
home if their brain activity shows less than stellar brain metrics. As
companies embrace brain wearables that can track employees’ attention, focus
and even boredom, without safeguards in place, they could trample on employee’s
mental privacy, eroding trust and well-being along with the dignity of work
itself.
Governments, too, are seeking access to our brains,
with a U.S. brain initiative seeking “‘every spike from every
neuron’ in the human brain,” to reveal “how the firing of these neurons
produced complex thoughts.” While aimed at the underlying causes of
neurological and psychiatric conditions, this same investment could also
enable government interference with freedom of thought—a freedom critical to
human flourishing. From functional
brain biometric programs under development to authenticate
individuals—including
those funded by the National Science Foundation at Binghamton University—to
so-called brain-fingerprinting techniques used to interrogate criminal
suspects—sold by companies like Brainwave
Science and funded by law enforcement agencies from Singapore to Australia to
the United
Arab Emirates—we must act quickly to ensure neurotechnology benefits
humanity rather than heralding an Orwellian future of spying on our brains.
The rush to hack the human brain veers from
neuromarketing to the rabbit hole of social media and even to cognitive warfare
programs designed to disable or disorient. These technologies should have our
full attention. Neuromarketing campaigns such as one conducted
by Frito-Lay used insights about how women’s brains could affect
snacking decisions, then monitored brain activity while people viewed
newly designed advertisements, allowing marketers to fine-tune their campaigns to
better capture attention and drive women to snack more on their products.
Social media “like” buttons and notifications are features designed to draw us
habitually back to platforms, exploiting our brains’ reward systems. Clickbait
headlines and pseudoscience claims prey on our cognitive
biases, hobbling critical thinking. And nations worldwide are considering
possible military applications of neuroscience, which some planners call
warfare’s “sixth
domain” (adding to a list that includes land, sea, air, space and
cyberspace).
As brain wearables and artificial intelligences
advance, the line between human agency and machine intervention will also blur.
When a wearable reshapes our thoughts and emotions, how much of our actions and
decisions remain truly our own? As we begin to offload mental tasks to AI, we
risk becoming overly dependent on technology, weakening independent thought and
even our capacity for reflective decision-making. Should we allow AI to shape
our brains and mental experiences? And how do we retain our humanity in an
increasingly interconnected world remade by these two technologies?
Malicious use and even hacking of brain wearables are
another threat. From probing for information to intercepting
our PINs as we think or type them, neural cybersecurity will become
critical. Imagine a world where brain wearables can track what we read and see,
alter perceptions, manipulate emotions or even trigger physical pain.
That’s a world that may soon arrive. Already, companies including China’s Entertech have accumulated
millions of raw EEG data recordings from individuals across the world
using its popular consumer-based brain wearables, along with personal
information and device and app usage by those individuals. Entertech makes
plain in their privacy
policy they also record personal information, GPS signals, device
sensors, computers and services a person is using, including websites they may
be visiting. We must ensure that brain wearables are designed with security in
mind and with device and data safeguards in place to mitigate these risks.
We stand at an inflection point in the beginning of
a brain wearable revolution. We need prudent vigilance and an open and honest
debate about the risks and benefits of neurotechnology, to ensure it is used
responsibly and ethically. With the right safeguards, neurotechnology could be
truly empowering for individuals. To get there will require we recognize new
digital age rights to preserve our cognitive liberty—self-determination over
our brains and mental experiences. We must do so now, before the choice is no
longer ours to make.
This is an opinion and analysis article, and the
views expressed by the author or authors are not necessarily those of Scientific
American.
9.27.19
DuckDuckGo,
EFF, and others just launched privacy settings for the whole internet
The new standard, called Global
Privacy Control, will let you
activate a browser setting to keep your data from being sold…:
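On the wire, Global Privacy Control is just an HTTP request header, `Sec-GPC: 1`, sent by the browser. A sketch of a server honoring the signal; the dict stands in for a real request object:

```python
# Minimal sketch of honoring Global Privacy Control server-side: the GPC
# spec defines the "Sec-GPC: 1" request header as a do-not-sell/share
# signal. A compliant backend checks it before any data sale.

def should_sell_data(request_headers: dict) -> bool:
    """Treat Sec-GPC: 1 as an opt-out of sale/sharing of personal data."""
    return request_headers.get("Sec-GPC") != "1"

print(should_sell_data({"Sec-GPC": "1"}))  # False: user opted out
print(should_sell_data({}))                # True: no signal sent
```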
What is
Pegasus? How Surveillance Spyware Invades Phones
A
cybersecurity expert explains the NSO Group’s stealthy software
- By Bhanukiran
Gurijala, The
Conversation US on August 9, 2021
End-to-end
encryption is technology that scrambles messages on your phone and unscrambles
them only on the recipients’ phones, which means anyone who intercepts the
messages in between can’t read them. Dropbox, Facebook, Google, Microsoft,
Twitter and Yahoo are among the companies whose apps and services use end-to-end encryption.
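The scramble-at-one-end, unscramble-at-the-other property can be illustrated with a toy symmetric cipher using only the standard library. Real messengers use vetted protocols (for example, X25519 key agreement plus an AEAD cipher in the Signal protocol), not this sketch:

```python
import hashlib

# Toy model of end-to-end encryption: a keystream derived from a shared
# secret XORs the message. The point is only that any intermediary sees
# ciphertext, while endpoints holding the key recover the plaintext.

def keystream(shared_key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the shared key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(shared_key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(shared_key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(shared_key, len(plaintext))))

decrypt = encrypt  # XOR with the same keystream is its own inverse

key = b"shared-secret-established-at-setup"
msg = b"meet at noon"
wire = encrypt(key, msg)           # what a server or ISP would see
assert wire != msg                 # unreadable in transit
assert decrypt(key, wire) == msg   # only the other endpoint recovers it
```

Note that even here, an observer still sees that two parties exchanged a message of a given size at a given time, which is exactly the metadata the surveillance tools in the preceding article exploit.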
This
kind of encryption is good for protecting your privacy, but governments don’t like it because it makes it
difficult for them to spy on people, whether tracking criminals and terrorists
or, as some governments have been known to do, snooping on dissidents,
protesters and journalists. Enter an Israeli technology firm, NSO
Group.
The
company’s flagship product is Pegasus, spyware that
can stealthily enter a smartphone and gain access to everything on it,
including its camera and microphone. Pegasus is designed to infiltrate devices
running Android, Blackberry, iOS and Symbian operating
systems and turn them into surveillance devices. The company
says it sells Pegasus only to governments and
only for the purposes of tracking criminals and terrorists.
HOW IT WORKS
Earlier versions of Pegasus were installed on
smartphones through vulnerabilities in
commonly used apps or by spear-phishing, which involves tricking a targeted
user into clicking a link or opening a document that secretly installs the
software. It can also be installed over a wireless transceiver located near a target, or manually
if an agent can steal the target’s phone.
Since
2019, Pegasus users have been able to install the software on smartphones with
a missed call on WhatsApp, and can even delete the
record of the missed call, making it impossible for the phone’s owner to
know anything is amiss. Another way is by simply sending a message to a user’s
phone that produces no notification.
This
means the latest version of this spyware does not require the smartphone user
to do anything. All that is required for a successful spyware attack and
installation is having a particular vulnerable app or operating system
installed on the device. This is known as a zero-click exploit.
Once
installed, Pegasus can theoretically harvest any data from the device and transmit
it back to the attacker. It can steal photos and videos, recordings, location
records, communications, web searches, passwords, call logs and social media
posts. It also has the capability to activate cameras and microphones for
real-time surveillance without the permission or knowledge of the user.
WHO HAS BEEN
USING PEGASUS AND WHY
NSO
Group says it builds Pegasus solely for governments to use in counterterrorism
and law enforcement work. The company markets it as a targeted spying tool to
track criminals and terrorists and not for mass surveillance. The company does
not disclose its clients.
The earliest reported use of Pegasus was by the
Mexican government in 2011 to track notorious drug baron Joaquín “El Chapo”
Guzmán. The tool was also reportedly used to track people close to murdered Saudi journalist
Jamal Khashoggi.
It
is unclear who or what types of people are being targeted and why.
However, much of the
recent reporting about Pegasus centers around a list of 50,000
phone numbers. The list has been attributed to NSO Group, but the list’s
origins are unclear. A statement from Amnesty International in Israel stated
that the
list contains phone numbers that were marked as “of interest”
to NSO’s various clients, though it’s not known if any of the phones associated
with numbers have actually been tracked.
A media consortium, the Pegasus Project, analyzed the phone numbers on the list and identified over 1,000 people in over 50 countries. The findings included people who appear to fall outside of the NSO Group’s restriction to investigations of criminal and terrorist activity. These include politicians, government workers, journalists, human rights activists, business executives and Arab royal family members.
OTHER WAYS YOUR PHONE CAN BE TRACKED
Pegasus is breathtaking in its stealth and its seeming ability to take complete control of someone’s phone, but it’s not the only way people can be spied on through their phones. Some of the ways phones can aid surveillance and undermine privacy include location tracking, eavesdropping, malware and collecting data from sensors.
Governments and phone companies can track a phone’s location by tracking cell signals from cell tower transceivers and cell transceiver simulators like the StingRay device. Wi-Fi and Bluetooth signals can also be used to track phones. In some cases, apps and web browsers can determine a phone’s location.
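Tower-based positioning comes down to geometry: if the network can estimate a phone's distance from three towers with known positions, the intersection of the three circles pins down the location. The sketch below is a minimal flat-plane illustration of that idea, not how carrier systems are actually implemented; real deployments estimate distance from signal timing and strength, use geodetic coordinates, and fit over many towers.

```python
# Flat-plane trilateration sketch: three towers at known (x, y) positions,
# with estimated distances r1..r3 to the phone. Subtracting the circle
# equations pairwise yields two linear equations, solved by Cramer's rule.
def trilaterate(t1, r1, t2, r2, t3, r3):
    (x1, y1), (x2, y2), (x3, y3) = t1, t2, t3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1  # zero if the towers are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Phone at (3, 4); towers at the origin, (10, 0) and (0, 10):
x, y = trilaterate((0, 0), 5.0, (10, 0), 65**0.5, (0, 10), 45**0.5)
```

Because real distance estimates are noisy, production systems solve an over-determined least-squares version of this with more than three towers rather than the exact solve shown here.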
Eavesdropping on communications is harder to accomplish than tracking, but it is possible in situations in which encryption is weak or lacking. Some types of malware can compromise privacy by accessing data.
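A few lines make concrete why weak or missing encryption enables eavesdropping. The sketch below is illustrative, not from the article: it compares what a passive observer captures from an unencrypted request with the same bytes after encryption (here a one-time-pad XOR for simplicity; real protocols use TLS, never hand-rolled ciphers).

```python
import os

# A hypothetical login request as it would travel over plain HTTP --
# any observer on the network path captures these bytes verbatim.
plaintext = (b"POST /login HTTP/1.1\r\nHost: example.com\r\n\r\n"
             b"username=alice&password=hunter2")
assert b"password=hunter2" in plaintext  # credentials visible on the wire

# Encrypting with a one-time pad (XOR against a random key of equal
# length) makes the captured bytes useless without the key.
key = os.urandom(len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))

# Decryption is the same XOR; only the key holder recovers the request.
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
```

The point generalizes: whenever the cipher is absent or breakable, the observer is in the position of holding `plaintext` rather than `ciphertext`.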
The National Security Agency has sought agreements with technology companies under which the companies would give the agency special access into their products via backdoors, and has reportedly built backdoors on its own. The companies say that backdoors defeat the purpose of end-to-end encryption.
The good news is, depending on who you are, you’re unlikely to be targeted by a government wielding Pegasus. The bad news is, that fact alone does not guarantee your privacy.
https://theconversation.com/what-is-pegasus-a-cybersecurity-expert
Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It
by Kashmir Hill
“The dystopian future portrayed in some science-fiction movies is already upon us. Kashmir Hill’s fascinating book brings home the scary implications of this new reality.”—John Carreyrou, author of Bad Blood
Named One of the Best Books of the Year by the Inc. Non-Obvious Book Awards • Longlisted for the Financial Times and Schroders Business Book of the Year Award
New York Times tech reporter Kashmir Hill was skeptical when she got a tip about a mysterious app called Clearview AI that claimed it could, with 99 percent accuracy, identify anyone based on just one snapshot of their face. The app could supposedly scan a face and, in just seconds, surface every detail of a person’s online life: their name, social media profiles, friends and family members, home address, and photos that they might not have even known existed. If it was everything it claimed to be, it would be the ultimate surveillance tool, and it would open the door to everything from stalking to totalitarian state control. Could it be true?
In this riveting account, Hill tracks the improbable rise of Clearview AI, helmed by Hoan Ton-That, an Australian computer engineer, and Richard Schwartz, a former Rudy Giuliani advisor, and its astounding collection of billions of faces from the internet. The company was boosted by a cast of controversial characters, including conservative provocateur Charles C. Johnson and billionaire Donald Trump backer Peter Thiel—who all seemed eager to release this society-altering technology on the public. Google and Facebook decided that a tool to identify strangers was too radical to release, but Clearview forged ahead, sharing the app with private investors, pitching it to businesses, and offering it to thousands of law enforcement agencies around the world.
Facial recognition technology has been quietly growing more powerful for decades. This technology has already been used in wrongful arrests in the United States. Unregulated, it could expand the reach of policing, as it has in China and Russia, to a terrifying, dystopian level.
Your Face Belongs to Us is a gripping true story about the rise of a technological superpower and an urgent warning that, in the absence of vigilance and government regulation, Clearview AI is one of many new technologies that challenge what Supreme Court Justice Louis Brandeis once called “the right to be let alone.”
https://www.amazon.com/Your-Face-Belongs-Us-Secretive/dp/0593448561
Means of Control: How the Hidden Alliance of Tech and Government Is Creating a New American Surveillance State
Byron Tau
https://www.amazon.com/Means-Control-Alliance-Government-Surveillance/dp/0593443225
The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power
In this masterwork of original thinking and research, Shoshana Zuboff provides startling insights into the phenomenon that she has named surveillance capitalism. The stakes could not be higher: a global architecture of behavior modification threatens human nature in the twenty-first century just as industrial capitalism disfigured the natural world in the twentieth.
Zuboff vividly brings to life the consequences as surveillance capitalism advances from Silicon Valley into every economic sector. Vast wealth and power are accumulated in ominous new "behavioral futures markets," where predictions about our behavior are bought and sold, and the production of goods and services is subordinated to a new "means of behavioral modification."
The threat has shifted from a totalitarian Big Brother state to a ubiquitous digital architecture: a "Big Other" operating in the interests of surveillance capital. Here is the crucible of an unprecedented form of power marked by extreme concentrations of knowledge and free from democratic oversight. Zuboff's comprehensive and moving analysis lays bare the threats to twenty-first century society: a controlled "hive" of total connection that seduces with promises of total certainty for maximum profit--at the expense of democracy, freedom, and our human future.
With little resistance from law or society, surveillance capitalism is on the verge of dominating the social order and shaping the digital future--if we let it.
https://www.goodreads.com/book/show/26195941-the-age-of-surveillance-capitalism
How to protect human rights in an AI-filled workplace
Here’s how labor leaders are asserting rights and protections for human employees in the face of increasing automation
The biggest concern for most people when it comes to AI and work is: Are robots going to take our jobs?
Honestly, we’re right to be concerned. According to McKinsey & Company, 45 million jobs, or a quarter of the workforce, could be lost to automation by 2030. Of course, the promise is that AI will create jobs, too, and we’ve already started to see emerging roles like prompt engineers and AI ethicists crop up.
But many of us also have concerns about how AI is being incorporated into our fields. Should a bot host a podcast, write an article, or replace an actor? Can AI be a therapist or a tutor, or build a car?
According to a Workday global survey, three out of four employees say their organization is not collaborating on AI regulation, and the same share says their company has yet to provide guidelines on responsible AI use.
On the final episode in The New Way We Work’s mini-series on how AI is changing our jobs, I spoke to Lorena Gonzalez. She’s the president of the California Federation of Labor Unions, a former assemblywoman, and has written AI transparency legislation, including a law designed to prevent algorithms from denying workers break time.
While there are many industry-specific concerns about AI in workplaces, she says that some of the most effective and impactful AI regulations address common issues that touch on many different types of workplaces.
Robot bosses and algorithmic management
Gonzalez’s first bill on algorithmic management applied specifically to warehouses. “We wanted to give workers the power to question the algorithm that was speeding up their quota,” she said. Gonzalez explained that there was no human interaction and it was leading to an increase in warehouse injuries.
“What we started with in the warehouse bill, we’re really seeing expand throughout different types of work. When you’re dealing with an algorithm, even the basic experience of having to leave your desk or leave your station . . . to use the restroom, becomes problematic,” she says. “Taking away the human element obviously has a structural problem for workers, but it has a humanity problem, as well.”
Privacy
Gonzalez is also working on bills regarding worker privacy. She says some companies are going beyond the basics of watching or listening to employees, such as by using AI tools for heat mapping. Gonzalez also says she’s seen companies require employees to wear devices that track who they are talking with (in previously protected places like break rooms or bathrooms), and monitor how fast workers drive when not on the clock.
Data collection and storage
A third area of focus for Gonzalez is data that’s being taken from workers without their knowledge, including through facial recognition tools. As an employee, you have a “right to understand what is being taken by a computer or by AI as you’re doing the work, sometimes to replace you, sometimes to evaluate you,” she says.
These are issues that came up in the SAG-AFTRA strike last year, but she says these issues come up in different forms in different industries. “We’ve heard it from Longshoremen who say the computer works side-by-side to try to mimic the responses that the worker is giving,” she says. “The workers should have the right to know that they’re being monitored, that their data is being taken, and there should be some liability involved.”
Beyond these broader cases of AI regulation, Gonzalez says that business leaders should talk to their employees about how new technology will impact their jobs, before it’s implemented, not after. “Those at the very top get sold on new technology as being cool and being innovative and being able to do things faster and quicker and not really going through the entirety of what these jobs are and not really imagining what on a day-to-day basis that [a] worker has to deal with,” she says.
Listen to the full episode for more on how workers are fighting for AI regulation in industries like healthcare and retail, and the crucial missing step in AI development Gonzalez sees coming out of Silicon Valley.
https://www.fastcompany.com/91294759/how-to-protect-human-rights-in-an-ai-filled-workplace
“Communication in a world of pervasive surveillance”
Sources and methods:
Counter-strategies against pervasive surveillance architecture
Jacob R. Appelbaum
Contents
List of Figures
List of Tables
List of Algorithms
1 Introduction
  1.1 A fifth level of ethics in mathematics
  1.2 Thinking about the future
  1.3 Organization of this thesis
2 Background on network protocols
  2.1 Free Software, Open Hardware, Operational Security
  2.2 Layers of the Internet
  2.3 Ethernet networks and the Internet Protocols
  2.4 The Domain Name System
  2.5 Multicast Domain Name System (mDNS)
  2.6 Hypertext Transport Protocol (HTTP)
  2.7 Transport Layer Security (TLS)
  2.8 Virtual Private Networks (VPN)
3 Background on cryptography
  3.1 Mathematics as informational self-defense
  3.2 Notation
  3.3 Hashing
  3.4 Symmetric Encryption: block cipher
  3.5 Symmetric Encryption: stream cipher
  3.6 Message Authentication Code
  3.7 Authenticated-Encryption with Associated-Data (AEAD)
  3.8 Non-interactive Key Exchange (NIKE)
  3.9 Verification of public keys
  3.10 Signatures
  3.11 Protocols from building blocks
4 The Adversary
  4.1 Zersetzung or Dirty Tricks?
  4.2 Foundational events and disclosures in surveillance
  4.3 Summer of Snowden and the post-Snowden Era
  4.4 Standardization of cryptographic sabotage
  4.5 XKeyscore
  4.6 ANT catalog
  4.7 Conclusions
5 The GNU name system
  5.1 Introduction
  5.2 Background: Domain Name System (DNS)
  5.3 Security goals
  5.4 Exemplary Attacker: The NSA’s MORECOWBELL and QUANTUMDNS programs
  5.5 Adversary Model
  5.6 Domain Name System Security Extensions (DNSSEC)
  5.7 Query name minimization
  5.8 DNS-over-TLS
  5.9 DNSCurve
  5.10 Confidential DNS
  5.11 Namecoin
  5.12 The GNU name system
  5.13 Assessment
  5.14 Conclusions
6 Tiny WireGuard Tweak
  6.1 Introduction
  6.2 Realistic adversary concerns
  6.3 WireGuard overview
  6.4 Traffic analysis
  6.5 Security and privacy issues
  6.6 Blinding flows against mass surveillance
  6.7 Conclusions
7 Vula
  7.1 Introduction
  7.2 Background and related work
  7.3 Threat Model and design considerations
  7.4 Detailed Protocol Description
  7.5 Performance
  7.6 Security Evaluation
  7.7 Conclusions
8 REUNION
  8.1 Introduction
  8.2 Background and related work
  8.3 Introducing REUNION
  8.4 Threat Model
  8.5 Security Evaluation
  8.6 Implementations
  8.7 Future Work
  8.8 Conclusions
Bibliography
https://pure.tue.nl/ws/portalfiles/portal/197416841/20220325_Appelbaum_hf.pdf
Top Cybersecurity Trend Predictions for 2025+: BeyondTrust Edition
For this edition of our annual cybersecurity trend predictions, we’re sharing our top prognostications for 2025, as well as a glimpse into the key emergent trends we foresee taking hold in the remainder of the decade.
Oct 15, 2024
Morey J. Haber, James Maude, …
Predicting the Future Cybersecurity Threats with the Greatest Potential for Disruption
We’re nearing the end of 2024 and the midpoint of the roaring twenty-twenties. So far this decade, we’ve had everything from high-stakes cyberattacks and world-stopping technological malfunctions to a global pandemic. As we look ahead to 2025, we need to contemplate the cybersecurity trends coming into focus and start planning for those yet to take shape.
But first, let’s take a brief moment to consider where we stand. The cybersecurity landscape is clearly in another phase of rapid evolution. Last year, AI (artificial intelligence) achieved significant technological breakthroughs, drastically altering the course of the threat landscape and pushing organizations to rethink security strategies. This caused a surge of defense tools that leverage AI and ML (machine learning) to advance threat detection and response.
We are already seeing another technological innovation making its way into mainstream adoption: quantum computing. For years, this has been on the distant horizon, but now it’s finally seeming closer to reality. Quantum computing has the potential to wreak unprecedented levels of disruption, posing a massive challenge to the traditional cryptographic methods widely deployed today.
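The cryptographic stakes here are concrete. RSA, still ubiquitous in TLS and code signing, rests on the hardness of factoring a large modulus n = p*q. A toy sketch (illustration only, using a classic textbook modulus): trial division breaks a tiny modulus instantly, but its cost grows exponentially with bit length, which is exactly the gap a large quantum computer running Shor's algorithm would close, and why post-quantum algorithms are now being standardized.

```python
# Trial division: cracks a toy RSA modulus in microseconds, but is
# infeasible at real key sizes (2048 bits and up). Shor's algorithm on a
# sufficiently large quantum computer would factor in polynomial time.
def factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n is prime

p, q = factor(3233)  # 3233 = 53 * 61, a common textbook RSA modulus
```

The same asymmetry, easy to verify and hard to reverse classically, underlies the other public-key schemes (Diffie-Hellman, elliptic curves) that quantum algorithms also threaten.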
In recent years, we’ve also witnessed a shift in how threat actors penetrate environments. More emphasis on identity-based tactics has encouraged cybersecurity practitioners to reconsider their definitions of “privilege” and “identity security” and focus their defensive strategies on reducing the blast radius of compromised accounts. In the midst of all this, political tensions have risen, making the potential ripples of nation-state cyberattacks more global-reaching than ever.
With so much on the horizon, it’s critical for organizations to stay vigilant and keep their security strategies tuned in to the latest trends. Please join us as we explore what will redefine the cybersecurity landscape in 2025 and beyond.
Recapping BeyondTrust’s 2024 Cybersecurity Prediction…
Cybersecurity Trends for 2025…
Cybersecurity Trends for The Rest of the Decade…
Conclusion: Don’t Delay Your Security Preparation…: https://www.beyondtrust.com/blog/entry/beyondtrust-cybersecurity-trend-predictions