Extracting personalised information: legal methods and illegal opportunities
Accustomed to a familiar environment and a conservative way of life, most of us have not yet realised that people in the emerging digital society increasingly make information of a private nature publicly available and share it. This happens through communication on social networks, participation in electronic commerce, and the accumulation of information requested by government institutions and private companies in the relevant databases.
Such sensitive information is obtained both legally and through the activity of hackers. Once processed by various algorithms, it is used for targeted advertising and various kinds of scoring, and also for criminal purposes. What deserves particular emphasis, however, is its use for surveillance and control over the actions of individuals and for establishing the identity of each member of society.
Where democracy is limited, information collected in this way and ending up in the hands of security services may become (and to a large extent has already become!) a powerful means of influencing, monitoring, controlling and manipulating people. If we fail to take this into account, we all risk a general degradation of democratic institutions and the rise to power of authoritarianism and totalitarianism.
In
not-too-distant future, brain hackers could steal your deepest secrets
Religious beliefs, political leanings, and medical
conditions are up for grabs.
OAKLAND, Calif.—In the beginning, people hacked
phones. In the decades to follow, hackers turned to computers, smartphones,
Internet-connected security cameras, and other so-called Internet of things
devices. The next frontier may be your brain, which is a lot easier to hack
than most people think.
At the Enigma security conference here on Tuesday,
University of Washington researcher Tamara Bonaci described an experiment that
demonstrated how a simple video game could be used to covertly harvest neural
responses to periodically displayed subliminal images. While her game,
dubbed Flappy Whale, measured subjects' reactions to relatively
innocuous things, such as logos of fast food restaurants and cars, she said the
same setup could be used to extract much more sensitive information, including
a person's religious beliefs, political leanings, medical conditions, and
prejudices.
"Electrical signals produced by our body might
contain sensitive information about us that we might not be willing to share
with the world," Bonaci told Ars immediately following her presentation.
"On top of that, we may be giving that information away without even being
aware of it."
Flappy Whale had
what Bonaci calls a BCI, short for "brain-computer interface." It
came in the form of seven electrodes that connected to the player's head and
measured electroencephalography
signals in real time. The logos were repeatedly displayed, but only
for milliseconds at a time, a span so short that subjects weren't consciously
aware of them. By measuring the brain signals at the precise time the images
were displayed, Bonaci's team was able to glean clues about the player's
thoughts and feelings about the things that were depicted.
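To make that time-locking step concrete, here is a minimal sketch (not the researchers' actual pipeline) of how EEG could be cut into epochs around the moments the images appear. The array shapes, sampling rate, and window lengths below are assumptions for illustration only.

```python
# Minimal illustrative sketch: time-lock EEG samples to stimulus onsets.
# Assumes `eeg` is a (channels x samples) array and `stim_samples` holds the sample
# index at which each subliminal image was shown; both are synthetic here.
import numpy as np

FS = 256  # assumed sampling rate in Hz

def extract_epochs(eeg, stim_samples, pre=0.1, post=0.6, fs=FS):
    """Cut a window around each stimulus onset: `pre` seconds before, `post` seconds after."""
    pre_n, post_n = int(pre * fs), int(post * fs)
    epochs = []
    for onset in stim_samples:
        if onset - pre_n < 0 or onset + post_n > eeg.shape[1]:
            continue  # skip stimuli too close to the edges of the recording
        window = eeg[:, onset - pre_n : onset + post_n]
        # Baseline-correct each channel using the pre-stimulus interval.
        epochs.append(window - window[:, :pre_n].mean(axis=1, keepdims=True))
    return np.stack(epochs)  # shape: (n_stimuli, n_channels, n_samples)

# Synthetic demo: 7 electrodes, 60 s of recording, 20 stimulus onsets.
rng = np.random.default_rng(0)
eeg = rng.normal(size=(7, 60 * FS))
stim_samples = rng.integers(FS, 59 * FS, size=20)
print(extract_epochs(eeg, stim_samples).shape)
```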
There's no evidence that such brain hacking has ever
been carried out in the real world. But the researcher said it wouldn't be hard
for the makers of virtual reality headgear, body-connected fitness apps, or
other types of software and hardware to covertly mine a host of physiological
responses. By repeatedly displaying emotionally charged images for a few milliseconds at a time, such products could pilfer data that reveals all kinds of insights about a person's most intimate beliefs. Bonaci has also theorized that sensitive electric signals could be obtained by modifying legitimate BCI equipment, such as the devices used by doctors.
Bonaci said that electrical signals produced by the
brain are so sensitive that they should be classified as personally
identifiable information and subject to the same protections as names,
addresses, ages, and other types of PII. She also suggested that researchers
and game developers who want to measure the responses for legitimate reasons
should develop measures to limit what's collected instead of harvesting raw
data. She said researchers and developers should be aware of the potential for
"spillage" of potentially sensitive data inside responses that might
appear to contain only mundane or innocuous information.
How
Hackers Could Get Inside Your Head With ‘Brain Malware’
Brain-computer interfaces offer new applications for
our brain signals—and a new vector for security and privacy violations.
Hackers have spyware in your mind. You're minding
your business, playing a game or scrolling through social media, and all the
while they're gathering your most private information direct from your brain
signals. Your likes and dislikes. Your political preferences. Your sexuality.
Your PIN.
It's a futuristic scenario, but not that futuristic.
The idea of securing our thoughts is
a real concern with the introduction of brain-computer
interfaces—devices that are controlled by brain signals such as EEG
(electroencephalography), and which are already used in medical scenarios and,
increasingly, in non-medical
applications such as gaming.
Researchers at the University of Washington in
Seattle say that we need to act fast to implement a privacy and security
framework to prevent our brain signals from being used against us before the
technology really takes off.
"There's actually very little time," said
electrical engineer Howard Chizeck over Skype. "If we don't address this
quickly, it'll be too late."
I first met Chizeck and fellow engineer Tamara
Bonaci when I visited the University of Washington Biorobotics Lab to check out
their work
on hacking teleoperated surgical robots. While I was there, they showed me
some other hacking research they were working on, including how they could use
a brain-computer interface (BCI), coupled with subliminal messaging in a
videogame, to extract private information about an individual.
Bonaci showed me how it would work. She placed a BCI
on my head—which looked like a shower cap covered in electrodes—and sat me in
front of a computer to play Flappy Whale, a simple platform game
based on the addictive Flappy Bird. All I had to do was guide a
flopping blue whale through the on-screen course using the keyboard arrow keys.
But as I happily played, trying to increase my dismal top score, something
unusual happened. The logos for American banks started appearing: Chase,
Citibank, Wells Fargo—each flickering in the top-right of the screen for just
milliseconds before disappearing again. Blink and you'd miss them.
The idea is simple: Hackers could insert images like
these into a dodgy game or app and record your brain's unintentional response
to them through the BCI, perhaps gaining insight into which brands you're
familiar with—in this case, say, which bank you bank with—or which images you
have a strong reaction to.
Bonaci's team have several different Flappy
Whale demos, also using logos from local coffee houses and fast food
chains, for instance. You might not care who knows your weak spot for Kentucky
Fried Chicken, but you can see where it's going: Imagine if these
"subliminal" images showed politicians, or religious icons, or sexual
images of men and women. Personal information gleaned this way could
potentially be used for embarrassment, coercion, or manipulation.
"Broadly speaking, the problem with
brain-computer interfaces is that, with most of the devices these days, when
you're picking up electric signals to control an application… the application
is not only getting access to the useful piece of EEG needed to control that
app; it's also getting access to the whole EEG," explained Bonaci. "And
that whole EEG signal contains rich information about us as persons."
And it's not just stereotypical black hat hackers
who could take advantage. "You could see police misusing it, or
governments—if you show clear evidence of supporting the opposition or being
involved in something deemed illegal," suggested Chizeck. "This is
kind of like a remote lie detector; a thought detector."
Of course, it's not as simple as "mind
reading." We don't understand the brain well enough to match signals like
this with straightforward meaning. But with careful engineering, Bonaci said
that preliminary findings showed it was possible to pick up on people's
preferences this way (their experiments are still ongoing).
"It's been known in neuroscience for a while
now that if a person has a strong emotional response to one of the presented
stimuli, then on
average 300 milliseconds after they saw a stimulus there is going to
be a positive peak hidden within their EEG signal," she said.
The catch: You can't tell what the emotional
response was, such as whether it was positive or negative. "But with
smartly placed stimuli, you could show people different combinations and play
the '20 Questions' game, in a way," said Bonaci.
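As a rough, purely illustrative companion to that observation (this is not Bonaci's analysis), one simple way to quantify such a peak is to average the epochs recorded for one stimulus and take the largest positive deflection in a window around 300 ms after onset. The window boundaries and function below are assumptions.

```python
# Illustrative only: average trials for one stimulus and measure the positive peak
# roughly 300 ms after onset, in the spirit of the response described above.
import numpy as np

def p300_like_amplitude(epochs, fs=256, pre=0.1, window=(0.25, 0.40)):
    """Mean over trials, then peak amplitude per channel in the 250-400 ms post-stimulus window."""
    erp = epochs.mean(axis=0)            # (channels, samples) event-related potential
    start = int((pre + window[0]) * fs)  # convert post-stimulus seconds to sample indices
    stop = int((pre + window[1]) * fs)
    return erp[:, start:stop].max(axis=1)

# Synthetic demo: 40 trials, 7 channels, ~0.7 s of samples at 256 Hz.
rng = np.random.default_rng(1)
fake_epochs = rng.normal(size=(40, 7, 178))
print(p300_like_amplitude(fake_epochs))
```

Comparing these amplitudes across different stimuli is the kind of "20 Questions" comparison described above; a real analysis would also need artifact rejection and many more trials.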
When I played the Flappy Whale game,
the same logos appeared over and over again, which would provide more data
about a subject's response to each image and allow the researchers to better
discern any patterns.
"One of the cool things is that when you see
something you expect, or you see something you don't expect, there's a
response—a slightly different response," said Chizeck. "So if you
have a fast enough computer connection and you can track those things, then
over time you learn a lot about a person."
How likely is it that someone would use a BCI as an
attack vector? Chizeck and Bonaci think that the BCI tech itself could easily
take off very quickly, especially based on the recent sudden adoption of other
technologies when incorporated into popular applications—think augmented
reality being flung into the mainstream by Pokémon Go.
BCIs have already been touted in gaming, either as a novel controller or to add new functionality
such as monitoring stress levels. It's clear that the ability to
"read" someone's brain signals could also be used for other consumer
applications: Chizeck painted a future where you could watch a horror film and
see it change in response to your brain signals, like a thought-activated
choose-your-own-adventure story. Or imagine porn that changes according to what
gets your mind racing.
"The problem is, even if someone puts out an
application with the best of intentions and there's nothing nefarious about it,
someone else can then come and modify it," said Chizeck.
In the Flappy Whale scenario, the
researchers imagine that a BCI user might download a game from an app store
without realising it has these kind of subliminal messages in it; it'd be like
"brain malware." Chizeck pointed out that many fake,
malware-laden Pokémon-themed apps appeared in the app store around the real game's
release.
But hacking aside, Bonaci and Chizeck argued that
the biggest misuse of BCI tech could in fact be advertising, which could pose a
threat to users' privacy as opposed to their security.
"Once you put electrodes on people's heads, it's
feasible"
You could see BCIs as the ultimate in targeting ads:
a direct line to consumers' brains. If you wore a BCI while browsing the web or
playing a game, advertisers could potentially serve ads based on your response
to items you see. Respond well to that picture of a burger? Here's a McDonald's
promotion.
The researchers think there needs to be some kind of
privacy policy in apps that use BCIs to ensure people know how their EEG data
could be used.
"We usually know when we're giving up our
privacy, although that's certainly become less true with online
behaviour," said Chizeck. "But this provides an opportunity for
someone to gather information from you without you knowing about it at all.
When you're entering something on a web form, you can at least think for a
second, 'Do I want to type this?'"
Brain signals, on the other hand, are involuntary;
they're part of our "wetware."
The reason the University of Washington team is
looking into potential privacy and security issues now is to catch any problems
before the tech becomes mainstream (if indeed it ever does). In a 2014 paper, they argue that such issues "may be
viewed as an attack on human rights to privacy and dignity." They point
out that, unlike medical data, there are few legal protections for data
generated by BCIs.
One obvious way to help control how BCI data is used
would rely on policy rather than technology. Chizeck and Bonaci argue that
lawyers, ethicists, and engineers need to work together to decide what it's
acceptable to do with this kind of data. Something like an app store
certification could then inform consumers as to which apps abide by these
standards.
"There has to be an incentive for all app
developers, programmers, manufacturers to do it," said Bonaci.
"Otherwise why would they change anything about what they're doing right
now?"
The Washington team has also suggested a more
technical solution, which would effectively "filter" signals so that
apps could only access the specific data they require. In their paper, they
call this a "BCI Anonymizer" and compare it to smartphone apps having
limited access to personal information stored on your phone. "Unintended
information leakage is prevented by never transmitting and never storing raw
neural signals and any signal components that are not explicitly needed for the
purpose of BCI communication and control," they write.
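The paper describes the idea rather than a public implementation, so the sketch below is only an assumption-laden illustration of the general approach: expose a narrow, control-relevant feature and never hand the raw samples to the application. The band limits, sampling rate, and power feature are invented; this is not the authors' BCI Anonymizer.

```python
# Illustrative sketch only (not the BCI Anonymizer itself): pass an application a single
# band-limited power value per channel instead of the raw EEG stream.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # assumed sampling rate in Hz

def anonymized_feature(raw_eeg, low=8.0, high=12.0, fs=FS):
    """Return only band-limited power per channel; the raw samples never leave this function."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    band = filtfilt(b, a, raw_eeg, axis=1)
    return (band ** 2).mean(axis=1)  # one power value per channel

rng = np.random.default_rng(2)
raw = rng.normal(size=(7, 10 * FS))  # 10 s of synthetic 7-channel EEG
print(anonymized_feature(raw))       # the app receives 7 numbers, not the raw signal
```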
Chizeck said a student in the lab was currently
running more tests to characterise further the type and detail of information
that can be gleaned through BCIs, and to try a method of filtering this to see
if it's possible to block more sensitive data from leaking out.
By doing this work now, they hope to nip future
privacy and security concerns in the bud before most people have ever come into
contact with a BCI.
"It's technically becoming feasible; once you
put electrodes on people's heads, it's feasible," said Chizeck. "The
question is, do we want to regulate it, can we regulate it, and how?"
https://motherboard.vice.com/en_us/article/ezp54e/how-hackers-could-get-inside-your-head-with-brain-malware
Chatbot data cannot fall into the hands of
big tech
Imagine the world a decade
from now. It’s no longer ruled by powerhouse countries such as the US or China,
but by a handful of tech corporations that know everything about us; what we
need, buy, say, and ultimately what we desire.
So join me for a short imaginary
trip to our possible future. Welcome to the FAMGA (Facebook, Apple, Microsoft,
Google, and Amazon) Republic!
We’ll see that the trend
started back in 2018, the year that personal assistants really started gaining
traction. Amazon’s Echo rapidly surpassed its previous high of 10 million units shipped, while Amazon embedded its assistant into other devices. At the same time, Google
made a serious play for its role in the automated conversational market and
Facebook re-launched its chatbot initiatives to expand its reach.
Employing the power of their
developer community allowed these tech giants to build vast libraries of FAMGA-based applications that consumers used endlessly on their phones, in their
homes, and in every part of their lives.
This is the point in time when
conversational applications really took off — which many people celebrated —
but there was a big question left unanswered: What happens with all this data?
When people communicate in a
natural and conversational way, they reveal more than just words. Individual
preferences, views, opinions, feelings, and inclinations become part of the
conversation.
It’s like being able to
listen in, behind the scenes, to every sales assistant’s conversation and
customer support agent’s interaction. You are able to understand people’s
intentions, actions, and behaviors — maybe even better than they themselves do.
Chatbots and virtual
assistants operate in a very similar format, like a dedicated focus group at
your fingertips — comprised of your entire customer base, 24/7, 365 days a
year.
Data is insight
When you have this insight,
you can begin learning about what people want, taking customer interaction to a
whole new level — you begin predicting with certainty.
In 10 years’ time, we’ll look back at the years leading up to the FAMGA Republic and at how we turned a blind eye as sectors were closed off, one by one.
For example, all of the
insight and knowledge helped FAMGA launch the world’s first truly global bank,
using the trusted relationship they had developed between users and tech to
become the primary financial interface, relegating regional banks to simple
service providers.
Additionally, automakers who had once innocently adopted FAMGA digital assistants as their voice interface suddenly found that one of the key differentiators in autonomous vehicles, the personality of the vehicle itself, was now controlled by others.
Within a few short years,
the global economy became controlled by a handful of corporations.
What the future holds
Fortunately for us, this
vision isn’t a reality, yet. But it could be.
Data ownership in
conversational applications is one of the biggest issues facing enterprises
that are looking at developing their digital strategies. Positioned between your app and your customer, the technology provider knows everything your users say.
The information your customers provide consists of valuable data points that can be used to build closer relationships. The data can pick up on cues and turn what looks like a simple transfer of money from one account to another into an understanding of actual customer choices. This provides a better picture of your customers’ personal actions, such as buying a new refrigerator or going on vacation.
At the same time, all of these opportunities to better understand your customers can be lost. Customers are providing more and more information about their personal lives, which could be valuable for building closer relationships, but if that data sits with the technology provider rather than with us, we lose it — all while FAMGA is well on its way to building its new country.
AI software tool disables automated
facial tracking
As privacy +
security concerns increase, artificial intelligence finds solutions.
Engineering researchers at the University of Toronto, in Canada, have used AI software to design a privacy filter for your photos that disables automatic facial recognition systems.
Each time
you upload a photo or video to a social media platform, its facial
recognition systems learn a little more about you. These algorithms ingest
data about who you are, your location, and people you know — and they’re
constantly improving.
As concerns
over privacy + data security on social networks grow, Univ. of Toronto
researchers have created a computer software algorithm to dynamically
disrupt facial recognition systems.
Their solution leverages a deep learning technique called adversarial training,
which pits 2 AI algorithms against each other. In computing, deep learning
is a math technique that uses complex sets of data — trained to find
solutions to problems — to process information…
https://www.kurzweilai.net/digest-ai-software-tool-disables-automated-facial-tracking
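The Toronto filter itself is trained adversarially against a face-detection network and is not reproduced here. As a loose illustration of the underlying idea of adversarial perturbation, the sketch below nudges an image against the gradient of a stand-in classifier so its prediction degrades while the change stays small; the model, image size, and step size are all invented.

```python
# Generic gradient-sign sketch (not the University of Toronto method): perturb an image
# in the direction that increases a stand-in face classifier's loss.
import torch
import torch.nn as nn

class FaceNetStub(nn.Module):
    """Tiny stand-in for a face-recognition classifier, used only for illustration."""
    def __init__(self, n_ids=10):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, n_ids))
    def forward(self, x):
        return self.net(x)

def adversarial_perturb(model, image, true_id, eps=0.01):
    """One gradient-sign step: add eps * sign(d loss / d image)."""
    image = image.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), true_id)
    loss.backward()
    return (image + eps * image.grad.sign()).clamp(0.0, 1.0).detach()

model = FaceNetStub()
img = torch.rand(1, 3, 64, 64)        # a stand-in "photo"
label = torch.tensor([3])             # the identity the recognizer would assign
protected = adversarial_perturb(model, img, label)
print((protected - img).abs().max())  # the change stays within eps
```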
02-10-21
These
two technologies could supercharge our privacy on the internet
To bring back privacy, we need both encryption and
decentralization.
BY STEVEN WATERHOUSE
In recent years, the negative impacts of
social media and other global technology platforms have become a top concern
for people around the world. The danger seems to grow by the day: Recently, we
learned that the popular encrypted messaging service WhatsApp has been sharing user data with Facebook since 2016. The news comes as social media giants such as Twitter
and Facebook, once lauded for facilitating the free dissemination of ideas, are
increasingly seen as having failed to effectively moderate misinformation and
conspiracy theories on their platforms. The internet, as currently constituted,
has a dark side.
Government action to legislate privacy
protection, from Europe to California, is a reflection of the public’s
increasing concern for their online safety. But bewilderingly, in the midst of
the growing demand for strong privacy protections, governments around the world
are also working to kneecap the very encryption that makes true privacy
possible in the name of public safety.
When it comes to strengthening digital
privacy, the most important technologies are end-to-end encryption and
decentralization. They’re even more powerful when they’re combined. Here’s how
encryption and decentralization could work together to protect your safety and
privacy online.
NOT ALL ENCRYPTION IS CREATED EQUAL
“Encryption” is a widely used and poorly understood
term. When we say something is encrypted, what we mean is that its contents have been scrambled—encoded—while it is in transit between two
places. Only the sender and the receiver have the “key” to unscramble or
“decrypt” its contents, meaning hackers or other third parties can’t intercept
it and steal the information in transit.
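As a toy illustration of that shared-key idea (not any particular service's implementation), the snippet below encrypts and decrypts a short message with the widely used `cryptography` library; the message itself is invented.

```python
# Toy illustration of symmetric encryption: only holders of `key` can read the message.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # shared secret held by sender and receiver
cipher = Fernet(key)

token = cipher.encrypt(b"transfer $50 to Alice")  # what actually travels over the network
print(token)                                      # scrambled bytes, useless to an eavesdropper
print(cipher.decrypt(token))                      # readable again only with the key
```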
Digital services, particularly those focused
on financial transactions and messaging, often use a form of encryption. These
include Facebook, WhatsApp, WeChat, PayPal, and Venmo, as well as most major
financial institutions. But just because something is encrypted doesn’t make it
private.
Many services that advertise themselves as
encrypted do not protect their users’ privacy in a meaningful way. Most
financial institutions, for example, encrypt the contents of transactions while
they are in transit, but then decrypt them when they reach the
institution’s online server. As a result, even though information is private
while en route from place to place, the records of the activity are ultimately
stored in cloud-based databases and are just as susceptible to online threats
as any other information.
The claims of many messaging services are
similarly misleading. Like big banks, they encrypt messages while they’re in
transit but decode them when they reach the companies’ web servers. This means
that, encryption or no, these tech giants can see the contents of every message
sent over their networks. These messaging services, if they desired or were
compelled, could share every message their users have exchanged.
For true privacy, we need to look for services
that employ end-to-end encryption. With end-to-end, the
contents of communications are not known to anyone except the sender and the
receiver. When you send a message over a service that is end-to-end encrypted,
it is only decrypted on the device that sends the message and the device that
receives it. It doesn’t live anywhere in the online ether; neither the
operators of the service nor any other third party can see what the message
says. The rapid growth of Signal, an end-to-end encrypted messaging app,
demonstrates the surging consumer demand for digital privacy.
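Here is a minimal sketch of that end-to-end property using PyNaCl's public-key Box (an illustration, not Signal's actual protocol): the message is encrypted on the sender's device with the recipient's public key, so any server relaying it only ever sees ciphertext. The key names and message are invented.

```python
# Illustrative end-to-end encryption with PyNaCl: only the two endpoints can decrypt.
from nacl.public import PrivateKey, Box

alice_sk, bob_sk = PrivateKey.generate(), PrivateKey.generate()

# Alice encrypts on her own device, using her private key and Bob's public key.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at 6pm")

# Whatever relays `ciphertext` (a messaging server, for example) cannot read it.

# Bob decrypts on his own device with his private key and Alice's public key.
print(Box(bob_sk, alice_sk.public_key).decrypt(ciphertext))
```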
The future of end-to-end encryption is
uncertain. Over the past year, governments around the world, from the U.S. and
E.U. to Russia and China, have introduced or tightened rules that, if fully
implemented, would make true end-to-end encryption impossible. Most of these
proposed laws would require tech companies to design “backdoors” that would
allow authorities to demand that the contents of messages and other activities
be decrypted and turned over. Once such backdoor keys exist, it’s a matter of
when, not if, they fall into the wrong hands.
THE MOST RESILIENT SYSTEMS ARE ENCRYPTED AND
DECENTRALIZED
In order to have meaningful digital privacy,
the world needs end-to-end encryption. And for encrypted systems to be
resilient, they should also be decentralized, adopting a key technical
foundation of the many blockchain projects promising to reshape finance and
other sectors in the coming years.
To visualize the way decentralization works in
practice, think of a hard drive. If all your files are stored in a single
place, and it fails or is damaged, your files are lost. To mitigate this risk,
then, you might back up your files to a cloud server. That way, even if one or
all of your physical hard drives fail, your files are safe, stored digitally in
the cloud.
But the problem is not solved. What happens
if, for instance, Amazon or Google or Apple decides to suddenly deny you
access? In that scenario, you are out of luck again: The storage provider, not
you, ultimately has control over those files. For true security, you need
multiple backups over which you have the ultimate control.
Decentralization applies this approach to all
digital activity. Decentralized networks such as blockchains are designed to
operate by consensus across many different nodes, removing the risk of a single
point of failure. They offer new possibilities around network governance as
well, with incentives and standards of behavior that can be implemented by
consensus rather than by corporate executives. Given these benefits, mainstream
platforms are increasingly looking to decentralized alternatives. Even Twitter
has revealed plans for a decentralized open standard for social media.
Blockchains facilitate their users’ privacy
through end-to-end encryption. And they are resilient to third-party
manipulation because of decentralization. Unlike centralized servers, a
blockchain cannot be compromised by a single breach.
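As a toy illustration of why tampering with such a ledger is hard to hide (a hash-linked chain only, with no consensus layer, so nothing close to a production blockchain), each block below commits to the hash of the previous one, and a silent edit breaks every later link that other nodes can check.

```python
# Toy hash-linked chain: editing one record invalidates the links every other copy holds.
import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def verify(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

chain = []
for record in ["post A", "post B", "post C"]:
    append_block(chain, record)

print(verify(chain))          # True
chain[1]["data"] = "edited"   # a single tampered copy...
print(verify(chain))          # ...no longer matches the links held elsewhere
```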
This principle can be applied across many
different technical areas, including social media, to strengthen and advance
digital privacy. There can be no safeguarding privacy without end-to-end
encryption; government efforts to undermine this important tool should have us
all worried. But as long as there are end-to-end encrypted services, the most
resilient will be those that are decentralized. If we work to advance privacy
with these two technical concepts in mind, 2021 can be the year people around
the world start to truly reclaim their online privacy.
https://www.fastcompany.com/90602972/encryption-and-decentralization-privacy
By José Luis Peñarredonda
26 March 2018
If you worked for Ford in
1914, chances are at some stage in your career a private investigator was hired
to follow you home.
If you stopped for a drink,
or squabbled with your spouse, or did something that might make you less of a
competent worker the following day, your boss would soon know about it.
This sleuthing was partly
because Ford’s workers earned a better salary than the competition. The car
manufacturer raised pay from $2.39 a day to $5 a day, the equivalent of $124
(£88) today. But you had to be a model citizen to qualify.
Your house needed to be
clean, your children attending school, your savings account had to be in good
shape. If someone at the factory believed you were on the wrong path, you might
not only miss out on a promotion, your job was on the line.
This ‘Big Brother’ operation
was run by the Ford Sociological Department, a team of inspectors that arrived
unannounced on employees’ doorsteps. Its aim was to “promote the health, safety
and comfort of workers”, as an
internal document put it.
And, to be fair, it also offered everything from medical services to
housekeeping courses.
The programme lasted eight
years. It was expensive, and many workers resented its paternalism and intrusion.
Today, most of us would find it unacceptable – what does my work have to do
with my laundry, bank account or relationships?
Yet, the idea of employers
trying to control workers’ lives beyond the workplace has persisted, and
digital tools have made it easier than ever. Chances are, you use several
technologies that could create a detailed profile of your activities and
habits, both in the office and out of it. But what can (and can’t) employers do
with this data? And, where do we draw the line?
What’s my worker score?
We’re all being graded every
day. The expensive plane tickets I bought recently have already popped up in my
credit score. The fact that I‘ve stopped jogging every morning has been noted
by my fitness app – and, if it were connected with an insurance company, this
change might push up my premiums.
And we’re not just talking
the ‘rate-a-trader’ style online review process used on freelancer platforms or
gig-economy services. A scoring system of sorts has lodged itself in the
corporate world.
HR departments are crunching
increasing volumes of data to measure employees in a more granular way. From
software that records every keystroke to ‘smart’ coffee machines that will only give you a hot drink if you tap them with your work ID badge, there are more opportunities than ever for bosses to measure behaviour.
One big aim of data collection
is to make “predictions about how long an employee will stay, and it may
influence hiring, firing, or retention of people,” says Phoebe Moore, Associate
Professor of Political Economy and Technology at Leicester University in the UK
and author of the book The Quantified Self in Precarity: Work, Technology and
What Counts.
Data collection is “changing
employment relationships, the way people work and what the expectations can
be”, says Moore.
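To make concrete what such a retention prediction might look like mechanically, here is a deliberately toy sketch on synthetic data; the features, model, and labels are all invented, and real HR-analytics systems are proprietary and not described in the article.

```python
# Toy retention-prediction sketch on made-up data (illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))  # invented features: tenure, badge swipes/week, messages/day
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)  # 1 = stays

model = LogisticRegression().fit(X, y)
print(model.predict_proba(X[:1]))  # a "how likely is this employee to stay?" score
```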
One problem with this
approach is that it’s blind to some of the non-quantifiable aspects of work.
Some of the subtler things I do in order to be a better writer, for instance,
are not quantifiable: having a drink with someone who tells me a great story, or
imagining a piece on my commute. None of these things would show up in my ‘job
score’. “A lot of the qualitative aspects of work are being written out,” says
Moore, “because if you can’t measure them, they don’t exist”.
The dilemma of data
Employees value workplace wellness initiatives not only because their bosses might allow them time off to participate, but also because, if they track exercise via their phone, smartwatch or fitness wristband, they can earn rewards.
“I can just wear this
device, and I get points and buy stuff for doing things I would already (be)
doing anyway without it,” says Lauren Hoffman, a former salesperson for one of
the programmes in the US, who was also enrolled in it herself.
Furthermore, the workplace
offers an environment that can help people to reach their health goals. Research suggests that fitness programmes
work better when they are combined with social encouragement, collaboration and
competition. Offices can foster all that: they can organise running clubs,
weekly fitness classes or competitions to help workers thrive.
There are several good business
reasons to collect data on employees – from doing better risk management to
examining if social behaviours in the workplace can
lead to gender discrimination.
“Companies fundamentally don't understand how people interact and
collaborate at work,” says Ben Waber, president and CEO of Humanyze, an
American company which gathers and analyses data about the workplace. He says
that he can show them.
Humanyze gathers data from
two sources. The first is the metadata from employees’ communications: their
email, phone or corporate messaging service. The company says analysing this metadata doesn’t involve reading the content of these messages, or the individual identities of the people involved, but rather crunching more general information such as duration, frequency and general location, which will, for instance, tell them which department an employee belongs to.
The second area is data
gathered from gadgets like Bluetooth and infrared sensors, which detect how many
people are working in one particular part of an office and how they move
around. They also use ‘supercharged’ ID badges that, as Waber says, are beefed
up with “microphones which don't record what you say, but do voice-processing
in real time.” This allows measurement of the proportion of time you speak, or
how often people interrupt you.
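As an illustration of what metadata-only analysis of the first kind can look like (this is not Humanyze's system), the sketch below aggregates call counts and durations between departments without ever touching message contents; the table and column names are invented.

```python
# Illustrative metadata aggregation: who talks to whom, how often, and for how long.
import pandas as pd

log = pd.DataFrame({
    "sender_dept":   ["sales", "sales", "eng", "eng", "sales"],
    "receiver_dept": ["eng",   "sales", "eng", "sales", "eng"],
    "duration_min":  [12, 3, 45, 7, 20],
})

summary = (log.groupby(["sender_dept", "receiver_dept"])
              .agg(calls=("duration_min", "size"), total_minutes=("duration_min", "sum"))
              .reset_index())
print(summary)  # frequency and duration per department pair; no message content involved
```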
After six weeks of research,
the employer gets a ‘big picture’ of the problem it wants to solve, based on
the analysed data. If the aim, for instance, is to boost sales, they can
analyse what their best salespeople do that others don’t. Or if they want to
measure productivity, they can infer that the more efficient workers talk more
often with their managers.
Waber sees it as “a lens of
very large work issues, like diversity, inclusion, workload assessment,
workspace planning, or regulatory risk”. His business case is that these tools
will help companies save millions of dollars and even years of time.
Collection and protection
But not everyone is
convinced of the usefulness of these techniques, or whether such personally
intrusive technology can be justified. A
PwC survey from 2015 reveals
that 56% of employees would use a wearable device given by their employer if it
was aimed at improving their wellbeing at work. “There should be some payback
from something like this, some benefit in terms of their workplace conditions,
or advantages,” says Raj Mody, an analyst from the firm. And Hoffman remembers
that these programmes were not always an easy sell. “You’re going to get the
data and you're going to use it against me,” she was often told by sceptical
workers.
And there is a fundamental
problem: these fitness tracking measurements are often inaccurate. People are
very bad at self-reporting and fitness trackers and smartphones are not exactly
precise. A
recent evidence review shows that
different models and techniques gather different results and it is very
difficult to draw trustworthy comparisons between them.
It is also unclear if
counting steps, for instance, is actually a good way of measuring activity,
both because this measurement doesn’t take intensity into account – a step made
while running counts just as much as a step made walking at home – and walking
is more difficult for some than others.
Another issue is the amount
of data these programmes can collect. They not only track your daily activity,
but also often offer health screenings for participants, which allows them to
register things that don’t seem like your boss’s business: your cholesterol
level, your weight, or even your
DNA.
In most cases, it is illegal
in the US and Europe for companies to discriminate against workers based on
their health data or any genetic test results, but there are some grey areas.
In 2010, Pamela Fink, the PR manager of an energy firm in the US, sued
her employer because
she claimed she was sacked due to a double mastectomy to reduce her probability
of developing cancer. While the company didn’t have access to her DNA results,
she contended that they knew about the risk because the surgery showed up on
her insurance bills. The case was settled out of court.
Wellness programme providers
say that employers
only see aggregated and anonymised data, so they can’t target specific employees based on their wellness results.
Humanyze ensures its clients are not forcing their employees to be monitored,
but instead give them the chance to opt in. In a similar fashion to wellness
programmes, they anonymise and collate the information that they share with
employers. Waber is emphatic that his company never sells the data on to third
parties, and emphasises transparency throughout the process.
But this kind of data could
be used in more controversial ways, and the goodwill of the companies involved
doesn’t eliminate all the risks. Data could be stolen in a cyberattack, for
instance, or it could be used in ways that are not transparent for users. It
“could be sold to basically anyone, for whatever purpose, and recirculated in
other ways,” says Ifeoma Awunja, a sociologist at Cornell University who
researches the use of health data in the workplace.
There are reports that some providers are doing just that already – even if the data they sell is anonymous, it could be cross-referenced with other anonymous data to identify people. Not all these companies do it, and some say it is not smart business to do so. “Taking a short-term profit on user data would damage your company’s reputation, causing user volume to plummet and thus your value to clients to diminish,” says Scott Montgomery,
CEO of Wellteq, a corporate wellness provider based in Singapore.
But even if all the
companies did the right thing and acted only in their customers’ best interest,
people in some places are still only protected by their wellness programme’s goodwill.
The US law is “significantly behind” the European Union and other parts of the
world in protecting users, says Awunja.
In the EU, the new General Data Protection Regulation (GDPR) will come into force this May, which will outlaw any use of personal data to which the user didn’t explicitly consent. In
the US, the legislation varies between states. In some of them, sharing some
health information with third parties is not illegal as long as the data
doesn’t identify the person. Furthermore, according to Gary Phelan, a lawyer at
Mitchell & Sheridan in the US, since this data is generally not considered
medical data, it does not have the privacy restrictions as medical data.
There is also the question
of return on investment for the employers. Do they actually save businesses
money? These programmes are
meant to lower health insurance premiums both for companies and employees, since they are supposed to
decrease health risk, sick days, and hospital costs. But it
is not clear if this
actually happens. A
2013 study by the Rand Corporation claims
that, while these programmes save companies enough money to pay for themselves,
they “are having little if any immediate effects on the amount employers spend
on health care.”
With all these tools, “human
beings are evaluated in terms of the risk that they pose to the firm,” says
Awunja. Still, it’s a complicated balance: dealing with the everyday habits of
employees as if they were just another hit to the bottom line sounds a lot like
the old days of the Sociological Department. Whatever benefits these technologies can bring, they have to be balanced against the privacy rights and expectations of workers.
Balance
There is an episode of the TV show Black Mirror that
offers a chilling warning. In it, every person is given a score based on their
interactions on a social platform that looks strikingly like Instagram. This
score defines almost every opportunity they have in life: what jobs they can
get, where they live, which plane tickets they can buy or who they can date. In fact, in 2020 China plans to roll out a mandatory Citizen Score, calculated from a number of data sources, from your purchase history to the books you read.
Although not quite as
sinister, this illustrates the technological, legal and ethical limitations of
doing something similar elsewhere. In most parts of the world, the law prevents
your HR department from sharing or requesting data about you from your credit
card provider, your healthcare provider, or your favourite online dating site,
unless you explicitly consent that it can do so.
This should keep the most
cynical temptations at bay for now, but how to reap the benefits of data in an
acceptable way? There is a strong case for finding this balance: as Waber says,
data can give you evidence-based advice for advancing your career, or for
enhancing your effectiveness at work. Having space to take care of your health at work might improve your happiness in your job, and some studies suggest that this also translates into a productivity boost.
Part of the answer seems to
be to agree to certain ethical standards. In
a paper, Awunja proposes practices such as informing employees of the potential risks of discrimination posed by the data, not penalising those who decline to take part in these programmes, and setting a clear ‘expiration date’ for the collected data.
This is an important conversation to have, even if you are one of those with nothing to hide. As it
turns out, it is very likely that giving away our data is going to be part of
the everyday experience of work in the near future, at least in the corporate
world.