Vincit omnia veritas (truth conquers all)
Distinction of Truth from Falsehood:
Present Opportunities
In the current international context, particular importance attaches to the efforts of
responsible government agencies to curb toxic propaganda and to find adequate resources
and effective methods to combat it. Unfortunately, the customary bureaucratic-traditional
approach still prevails, with state budget funds spent without adequate return.
Automatic detection of influential actors in disinformation networks
January 26
Significance
Hostile
influence operations (IOs) that weaponize digital communications and social
media pose a rising threat to open democracies. This paper presents a system
framework to automate detection of disinformation narratives, networks, and
influential actors. The framework integrates natural language processing,
machine learning, graph analytics, and network causal inference to quantify the
impact of individual actors in spreading the IO narrative. We present a
classifier that detects reported IO accounts with 96% precision, 79% recall,
and 96% AUPRC, demonstrated on real social media data collected for the 2017
French presidential election and known IO accounts disclosed by Twitter. Our
system also discovers salient network communities and high-impact accounts that
are independently corroborated by US Congressional reports and investigative
journalism…: https://www.pnas.org/content/118/4/e2011216118
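To make the reported metrics concrete, here is a minimal, hypothetical sketch of how precision, recall, and AUPRC are computed for a binary "IO account" classifier with scikit-learn. The features, labels, and model below are synthetic stand-ins, not the paper's data or method.

```python
# Hypothetical sketch: computing the metrics the paper reports (precision,
# recall, AUPRC) for a binary "IO account" classifier. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score, average_precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Toy per-account features, e.g. posting rate, narrative similarity, graph centrality.
X = rng.normal(size=(n, 3))
# Synthetic ground truth: a small minority of accounts are IO accounts.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 2.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

scores = clf.predict_proba(X_te)[:, 1]
preds = (scores >= 0.5).astype(int)
print("precision:", precision_score(y_te, preds))
print("recall:   ", recall_score(y_te, preds))
print("AUPRC:    ", average_precision_score(y_te, scores))  # area under the precision-recall curve
```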
Geneva: Evolving Censorship Evasion
Join
us and learn about our fight against internet censorship around the world.
Automating Evasion
Researchers
and censoring regimes have long engaged in a cat-and-mouse game, leading to
increasingly sophisticated Internet-scale censorship techniques and methods to
evade them. In this work, we take a drastic departure from the previously
manual evade/detect cycle by developing techniques to automate the
discovery of censorship evasion strategies.
Our Approach
We
developed Geneva (Genetic Evasion), a novel experimental
genetic algorithm that evolves packet-manipulation-based censorship evasion
strategies against nation-state level censors. Geneva re-derived virtually all
previously published evasion strategies, and has discovered new ways of
circumventing censorship in China, India, Iran, and Kazakhstan.
How it works
Geneva runs exclusively on
one side of the connection: it does not require a proxy, bridge, or assistance
from outside the censoring regime. It defeats censorship by modifying network
traffic on the fly (by injecting traffic, modifying packets, etc) in such a way
that censoring middleboxes are unable to interfere with forbidden connections,
but without otherwise affecting the flow. Since Geneva works at the network
layer, it can be used with any application; with Geneva running in the
background, any web browser can become a censorship evasion tool. Geneva cannot
be used to circumvent blocking of IP addresses.
Geneva
composes four basic packet-level actions (drop, duplicate, fragment, tamper)
together to represent censorship evasion strategies. By running
directly against real censors, Geneva’s genetic algorithm evolves strategies
that evade the censor.
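As a rough illustration of the idea (not Geneva's actual engine, which is available on the project's GitHub), the sketch below shows how a genetic algorithm might compose the four packet-level actions into candidate strategies and evolve them. The fitness function is a placeholder; Geneva instead evaluates each candidate strategy against a real censor.

```python
# Toy sketch of a genetic algorithm composing the four packet-level actions.
# Not Geneva's engine; the fitness function is a stand-in for "run the strategy
# against a real censor and check whether the forbidden connection succeeds".
import random

ACTIONS = ["drop", "duplicate", "fragment", "tamper"]

def random_strategy(max_len=4):
    """A strategy is a short sequence of packet-level actions."""
    return [random.choice(ACTIONS) for _ in range(random.randint(1, max_len))]

def mutate(strategy):
    s = list(strategy)
    s[random.randrange(len(s))] = random.choice(ACTIONS)
    return s

def crossover(a, b):
    cut = random.randint(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def fitness(strategy):
    # Placeholder reward, purely for illustration.
    return ("tamper" in strategy) + ("duplicate" in strategy) - 0.1 * len(strategy)

def evolve(pop_size=30, generations=20):
    population = [random_strategy() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())
```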
Real World Deployments
Geneva
has been deployed against real-world censors in China, India, Iran, and
Kazakhstan. It has discovered dozens of strategies to defeat censorship, and
found previously unknown bugs in censors.
Note
that Geneva is a research prototype, and does
not offer anonymization, encryption, or other protection from censors. Understand
the risks in your country before trying to run Geneva.
All of these strategies and Geneva’s strategy engine are open source: check them out on our GitHub page.
Learn
more about how we designed and built Geneva here.
Who We Are
This
project is done by students in Breakerspace,
a lab at the University of Maryland dedicated to scaling-up undergraduate
research in computer and network security.
This
work is supported by the Open Technology Fund and the National Science
Foundation.
Contact Us
Interested
in working with us, learning more, getting Geneva running in your country, or
incorporating some of Geneva’s strategies into your tool?
The
easiest way to reach us is by email.
- Dave:
dml (at) cs.umd.edu (PGP key here)
- Kevin:
kbock (at) terpmail.umd.edu (PGP key here)
https://geneva.cs.umd.edu/
I
teach people how to protect themselves from getting duped by false information
online. Here’s what you can do.
BY
ELIZABETH STOYCHEFF
You
might have fallen for someone’s attempt to disinform you about current events.
But it’s not your fault.
Even
the most well-intentioned news consumers can find today’s avalanche of
political information difficult to navigate. With so much news available, many
people consume media in an automatic, unconscious state—similar to knowing you
drove home but not being able to recall the trip.
And
that makes you more susceptible to accepting false claims.
But,
as the 2020 elections near, you can develop habits to exert more conscious
control over your news intake. I teach these strategies to students in a
course on media literacy, helping people become more savvy news consumers in
four simple steps.
1. SEEK OUT
YOUR OWN POLITICAL NEWS
Like
most people, you probably get a fair amount of your news from apps, sites, and
social media such as Twitter, Facebook, Reddit, Apple News, and Google. You
should change that.
These
are technology companies—not news outlets. Their goal is to maximize the time you spend on their sites and
apps, generating advertising revenue. To that end, their algorithms use your
browsing history to show you news you’ll agree with and like, keeping you
engaged for as long as possible.
That
means instead of presenting you with the most important news of the day, social
media feed you what they think will hold your attention. Most often, that
is algorithmically filtered and may deliver
politically biased information, outright falsehoods, or material that you have
seen before.
Instead,
regularly visit trusted news apps and news websites directly.
These organizations actually produce news, usually in the spirit of serving the
public interest. There, you’ll see a more complete range of political
information, not just content that’s been curated for you.
2. USE BASIC
MATH
Untrustworthy
news and political campaigns often use statistics to make bogus
claims—rightfully assuming most readers won’t take the time to fact-check them.
Simple
mathematical calculations, which scholars call Fermi
estimates or rough guesstimates, can help you better spot
falsified data.
Consider, for example, a viral meme attributing a large share of U.S. murders to
“illegal immigrants.” Murder statistics can be found in, among other places, the FBI’s
violent crime statistics, which estimate that in 2018 there were 16,214 murders in the
U.S. If the meme’s figure were accurate, it would mean that nearly two-thirds of U.S.
murders were committed by the “illegal immigrants” the meme alleged.
Next,
find out how many people were living in the U.S. illegally. That group, most
news reports and estimates suggest, numbers about 11 million men, women, and children—which
is only 3% of the country’s 330 million people.
Just
3% of people committed 60% of U.S. murders? With a tiny bit of research and
quick math, you can see these numbers just don’t add up.
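The quick math above can be written out explicitly. The meme's exact figure is not given in this excerpt, so the sketch below assumes a round number consistent with the "nearly two-thirds" framing.

```python
# The back-of-the-envelope check described above, written out.
total_murders_2018 = 16_214           # FBI estimate cited above
alleged_by_meme = 10_000              # assumption: roughly "nearly two-thirds" of murders
undocumented_population = 11_000_000  # widely reported estimate cited above
us_population = 330_000_000

share_of_murders = alleged_by_meme / total_murders_2018
share_of_population = undocumented_population / us_population
print(f"{share_of_population:.0%} of the population alleged to commit "
      f"{share_of_murders:.0%} of murders")
```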
3. BEWARE OF
NONPOLITICAL BIASES
Also
beware of the human tendency to believe what’s in front of your eyes. Video
content is perceived as more
trustworthy—even though deepfake videos can be very deceiving. Think critically
about how you determine
something is accurate. Seeing—and hearing—should not necessarily be
believing. Treat video content with just as much skepticism as news text and
memes, verifying any facts with news from a trusted source.
4. THINK BEYOND
THE PRESIDENCY
A
final bias of news consumers and, as a result, news organizations has been a
shift toward prioritizing
national news at the expense of local and international issues.
Leadership in the White House is certainly important, but national news is only
one of four categories of information you need this election season.
Informed
voters understand and connect issues across four levels: personal
interests (like a local sports team or healthcare costs), news in their local
communities, national politics, and international affairs. Knowing a little in
each of these areas better equips you to evaluate
claims about all the others.
For
example, better understanding trade negotiations with China could provide
insight into why workers at a nearby manufacturing plant are picketing, which
could subsequently affect the prices you pay for local goods and services.
Big
businesses and powerful disinformation campaigns heavily influence the
information you see, creating personal and convincing false narratives. It’s
not your fault for getting duped, but being conscious of these processes can
put you back in control.
Elizabeth
Stoycheff is an associate professor of communication at Wayne State University.
This article is republished from The Conversation.
You Might Also
Like:
The
race to create a perfect lie detector – and the dangers of succeeding
AI and brain-scanning
technology could soon make it possible to reliably detect when people are
lying. But do we really want to know? By Amit Katwala
Thu 5 Sep
2019 06.00 BST
We
learn to lie as children, between the ages of two and five. By adulthood, we
are prolific. We lie to our employers, our partners and, most of all, one study
has found, to
our mothers. The average person hears up to 200 lies a day, according
to research by Jerry Jellison, a psychologist at the University of Southern
California. The majority of the lies we tell are “white”, the inconsequential
niceties – “I love your dress!” – that grease the wheels of human interaction.
But most people tell one or two “big” lies a day, says Richard Wiseman, a
psychologist at the University of Hertfordshire. We lie to promote ourselves,
protect ourselves and to hurt or avoid hurting others.
The mystery is how we keep
getting away with it. Our bodies expose us in every way. Hearts race, sweat
drips and micro-expressions leak from small muscles in the face. We stutter,
stall and make Freudian slips. “No mortal can keep a secret,” wrote the
psychoanalyst Sigmund Freud in 1905. “If his lips are silent, he chatters with his
fingertips. Betrayal oozes out of him at every pore.”
Even so, we are hopeless at
spotting deception. On average, across 206 scientific studies, people can
separate truth from lies just 54% of the time – only marginally better than
tossing a coin. “People are bad at it because the differences between
truth-tellers and liars are typically small and unreliable,” said Aldert Vrij,
a psychologist at the University of Portsmouth who has spent years studying
ways to detect deception. Some people stiffen and freeze when put on the spot,
others become more animated. Liars can spin yarns packed with colour and
detail, and truth-tellers can seem vague and evasive.
Humans have been trying to
overcome this problem for millennia. The search for a perfect lie detector has
involved torture, trials by ordeal and, in ancient India, an
encounter with a donkey in a dark room. Three thousand years ago
in China, the accused were forced to chew and spit out rice; the grains were
thought to stick in the dry, nervous mouths of the guilty. In 1730, the English
writer Daniel Defoe suggested taking the pulse of suspected pickpockets. “Guilt
carries fear always about with it,” he wrote. “There is a tremor in the blood
of a thief.” More recently, lie detection has largely been equated with the
juddering styluses of the polygraph machine – the quintessential lie
detector beloved
by daytime television hosts and police procedurals. But none of
these methods has yielded a reliable way to separate fiction from fact.
That could soon change. In
the past couple of decades, the rise of cheap computing power, brain-scanning
technologies and artificial intelligence has given birth to what many claim is
a powerful new generation of lie-detection tools. Startups, racing to
commercialise these developments, want us to believe that a virtually
infallible lie detector is just around the corner.
Their inventions are being
snapped up by police forces, state agencies and nations desperate to secure
themselves against foreign threats. They are also being used by employers,
insurance companies and welfare officers. “We’ve seen an increase in interest
from both the private sector and within government,” said Todd Mickelsen, the
CEO of Converus, which makes a lie detector based on eye movements and subtle
changes in pupil size.
Converus’s technology,
EyeDetect, has been used by FedEx in Panama and Uber in Mexico to screen out
drivers with criminal histories, and by the credit ratings agency Experian,
which tests its staff in Colombia to make sure they aren’t manipulating the
company’s database to secure loans for family members. In the UK, Northumbria
police are carrying out a pilot scheme that uses EyeDetect to measure the
rehabilitation of sex offenders. Other EyeDetect customers include the
government of Afghanistan, McDonald’s and dozens of local police departments in
the US. Soon, large-scale lie-detection programmes could be coming to the
borders of the US and the European Union, where they would flag potentially
deceptive travellers for further questioning.
But as tools such as
EyeDetect infiltrate more and more areas of public and private life, there are
urgent questions to be answered about their scientific validity and ethical
use. In our age of high surveillance and anxieties about all-powerful AIs, the
idea that a machine could read our most personal thoughts feels more plausible
than ever to us as individuals, and to the governments and corporations funding
the new wave of lie-detection research. But what if states and employers come
to believe in the power of a lie-detection technology that proves to be deeply
biased – or that doesn’t actually work?
And what do we do with these
technologies if they do succeed? A machine that reliably sorts truth from
falsehood could have profound implications for human conduct. The creators of
these tools argue that by weeding out deception they can create a fairer, safer
world. But the ways lie detectors have been used in the past suggests such
claims may be far too optimistic.
For
most of us, most of the time, lying is more taxing and more stressful than
honesty. To calculate another person’s view, suppress emotions and hold back
from blurting out the truth requires more thought and more energy than simply
being honest. It demands that we bear what psychologists call a cognitive load.
Carrying that burden, most lie-detection theories assume, leaves evidence in
our bodies and actions.
Lie-detection technologies
tend to examine five different types of evidence. The first two are verbal: the
things we say and the way we say them. Jeff Hancock, an expert on digital
communication at Stanford, has found that people who are lying in their online
dating profiles tend to use the words “I”, “me” and “my” more often, for
instance. Voice-stress analysis, which aims to detect deception based on
changes in tone of voice, was used during the interrogation of George
Zimmerman, who shot the teenager Trayvon Martin in 2012, and by UK councils
between 2007 and 2010 in a pilot scheme that tried to catch benefit cheats over
the phone. Only five of the 23 local authorities where voice analysis was
trialled judged it a success, but in 2014, it was still
in use in 20 councils, according to freedom of information requests by
the campaign group False Economy.
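As a toy illustration of the kind of verbal cue Hancock describes (not his actual method), one could compute the rate of first-person singular pronouns in a text:

```python
# Toy illustration of a verbal cue: the rate of first-person singular pronouns.
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text):
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in FIRST_PERSON for w in words) / len(words)

print(first_person_rate("I love hiking and my dog; I think you'd like me."))
```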
The third source of evidence
– body language – can also reveal hidden feelings. Some liars display so-called
“duper’s delight”, a fleeting expression of glee that crosses the face when
they think they have got away with it. Cognitive load makes people move
differently, and liars trying to “act natural” can end up doing the opposite.
In an experiment in 2015, researchers at the University of Cambridge were able
to detect deception more than 70% of the time by using a skintight suit to
measure how much subjects fidgeted and froze under questioning.
The fourth type of evidence
is physiological. The polygraph measures blood pressure, breathing rate and
sweat. Penile plethysmography tests arousal levels in sex offenders by
measuring the engorgement of the penis using a special cuff. Infrared cameras
analyse facial temperature. Unlike Pinocchio, our noses may actually shrink
slightly when we lie as warm blood flows towards the brain.
In the 1990s, new
technologies opened up a fifth, ostensibly more direct avenue of investigation:
the brain. In the second season of the Netflix documentary Making a Murderer,
Steven Avery, who is serving a life sentence for a brutal killing he says he
did not commit, undergoes a “brain fingerprinting” exam, which uses an
electrode-studded headset called an electroencephalogram, or EEG, to read his
neural activity and translate it into waves rising and falling on a graph. The
test’s inventor, Dr Larry Farwell, claims it can detect knowledge of a crime
hidden in a suspect’s brain by picking up a neural response to phrases or
pictures relating to the crime that only the perpetrator and investigators
would recognise. Another EEG-based test was used in 2008 to convict a
24-year-old Indian woman named Aditi Sharma of murdering her fiance by lacing
his food with arsenic, but Sharma’s sentence was eventually overturned on
appeal when the Indian supreme court held that the test could violate the
subject’s rights against self-incrimination.
After 9/11, the US
government – long an enthusiastic sponsor of deception science – started
funding other kinds of brain-based lie-detection work through Darpa, the
Defence Advanced Research Projects Agency. By 2006, two companies – Cephos and
No Lie MRI – were offering lie detection based on functional magnetic resonance
imaging, or fMRI. Using powerful magnets, these tools track the flow of blood
to areas of the brain involved in social calculation, memory recall and impulse
control.
But just because a
lie-detection tool seems technologically sophisticated doesn’t mean it works.
“It’s quite simple to beat these tests in ways that are very difficult to
detect by a potential investigator,” said Dr Giorgio Ganis, who studies EEG and
fMRI-based lie detection at the University of Plymouth. In 2007, a research
group set up by the MacArthur Foundation examined fMRI-based deception tests.
“After looking at the literature, we concluded that we have no idea whether
fMRI can or cannot detect lies,” said Anthony Wagner, a Stanford psychologist
and a member of the MacArthur group, who has testified against the
admissibility of fMRI lie detection in court.
A new frontier in lie
detection is now emerging. An increasing number of projects are using AI to
combine multiple sources of evidence into a single measure for deception.
Machine learning is accelerating deception research by spotting previously
unseen patterns in reams of data. Scientists at the University of Maryland, for
example, have developed software that they claim can detect deception from
courtroom footage with 88% accuracy.
The algorithms behind such
tools are designed to improve continuously over time, and may ultimately end up
basing their determinations of guilt and innocence on factors that even the
humans who have programmed them don’t understand. These tests are being
trialled in job interviews, at border crossings and in police interviews, but
as they become increasingly widespread, civil rights groups and scientists are
growing more and more concerned about the dangers they could unleash on
society.
Nothing
provides a clearer warning about the threats of the new generation of
lie-detection than the history of the polygraph, the world’s best-known and
most widely used deception test. Although almost a century old, the machine
still dominates both the public perception of lie detection and the testing
market, with millions of polygraph tests conducted every year. Ever since its
creation, it has been attacked for its questionable accuracy, and for the way
it has been used as a tool of coercion. But the polygraph’s flawed science
continues to cast a shadow over lie detection technologies today.
Even John Larson, the
inventor of the polygraph, came to hate his creation. In 1921, Larson was a
29-year-old rookie police officer working the downtown beat in Berkeley,
California. But he had also studied physiology and criminology and, when not on
patrol, he was in a lab at the University of California, developing ways to
bring science to bear in the fight against crime.
In the spring of 1921,
Larson built an ugly device that took continuous measurements of blood pressure
and breathing rate, and scratched the results on to a rolling paper cylinder.
He then devised an interview-based exam that compared a subject’s physiological
response when answering yes or no questions relating to a crime with the
subject’s answers to control questions such as “Is your name Jane Doe?” As a
proof of concept, he used the test to solve a theft at a women’s dormitory.
Larson refined his invention
over several years with the help of an enterprising young man named Leonarde
Keeler, who envisioned applications for the polygraph well beyond law
enforcement. After the Wall Street crash of 1929, Keeler offered a version of
the machine that was concealed inside an elegant walnut box to large
organisations so they could screen employees suspected of theft.
Not long after, the US
government became the world’s largest user of the exam. During the “red scare”
of the 1950s, thousands of federal employees were subjected to polygraphs
designed to root out communists. The US Army, which set up its first polygraph
school in 1951, still trains examiners for all the intelligence agencies at the
National Center for Credibility Assessment at Fort Jackson in South Carolina.
Companies also embraced the
technology. Throughout much of the last century, about a quarter of US
corporations ran polygraph exams on employees to test for issues including histories
of drug use and theft. McDonald’s used to use the machine on its workers. By
the 1980s, there were up to 10,000 trained polygraph examiners in the US,
conducting 2m tests a year.
The only problem was that
the polygraph did not work. In 2003, the US National Academy of Sciences
published a damning report that found evidence on the polygraph’s accuracy
across 57 studies was “far from satisfactory”. History is littered with
examples of known criminals who evaded detection by cheating the test. Aldrich
Ames, a KGB double agent, passed two polygraphs while working for the CIA in
the late 1980s and early 90s. With a little training, it is relatively easy to
beat the machine. Floyd “Buzz” Fay, who was falsely convicted of murder in 1979
after a failed polygraph exam, became an expert in the test during his
two-and-a-half-years in prison, and started coaching other inmates on how to
defeat it. After 15 minutes of instruction, 23 of 27 were able to pass. Common
“countermeasures”, which work by exaggerating the body’s response to control
questions, include thinking about a frightening experience, stepping on a pin
hidden in the shoe, or simply clenching the anus.
The upshot is that the
polygraph is not and never was an effective lie detector. There is no way for
an examiner to know whether a rise in blood pressure is due to fear of getting
caught in a lie, or anxiety about being wrongly accused. Different examiners
rating the same charts can get contradictory results and there are huge
discrepancies in outcome depending on location, race and gender. In one extreme
example, an examiner in Washington state failed one in 20 law enforcement job
applicants for having sex with animals; he “uncovered” 10 times more bestiality
than his colleagues, and twice as much child pornography.
As long ago as 1965, the
year Larson died, the US Committee on Government Operations issued a damning
verdict on the polygraph. “People have been deceived by a myth that a metal box
in the hands of an investigator can detect truth or falsehood,” it concluded.
By then, civil rights groups were arguing that the polygraph violated
constitutional protections against self-incrimination. In fact, despite the
polygraph’s cultural status, in the US, its results are inadmissible in most
courts. And in 1988, citing concerns that the polygraph was open to “misuse and
abuse”, the US Congress banned its use by employers. Other lie-detectors from
the second half of the 20th century fared no better: abandoned Department of
Defense projects included the “wiggle chair”, which covertly tracked movement
and body temperature during interrogation, and an elaborate system for
measuring breathing rate by aiming an infrared laser at the lip through a hole
in the wall.
The polygraph remained
popular though – not because it was effective, but because people thought it
was. “The people who developed the polygraph machine knew that the real power
of it was in convincing people that it works,” said Dr Andy Balmer, a
sociologist at the University of Manchester who wrote a book called Lie
Detection and the Law.
The threat of being outed by
the machine was enough to coerce some people into confessions. One examiner in
Cincinnati in 1975 left the interrogation room and reportedly watched, bemused,
through a two-way mirror as the accused tore 1.8 metres of paper charts off the
machine and ate them. (You didn’t even have to have the right machine: in the
1980s, police officers in Detroit extracted confessions by placing a suspect’s
hand on a photocopier that spat out sheets of paper with the phrase “He’s
Lying!” pre-printed on them.) This was particularly attractive to law
enforcement in the US, where it is vastly cheaper to use a machine to get a
confession out of someone than it is to take them to trial.
But other people were pushed
to admit to crimes they did not commit after the machine wrongly labelled them
as lying. The polygraph became a form of psychological torture that wrung false
confessions from the vulnerable. Many of these people were then charged,
prosecuted and sent to jail – whether by unscrupulous police and prosecutors,
or by those who wrongly believed in the polygraph’s power.
Perhaps no one came to
understand the coercive potential of his machine better than Larson. Shortly
before his death in 1965, he wrote: “Beyond my expectation, through
uncontrollable factors, this scientific investigation became for practical
purposes a Frankenstein’s monster.”
The
search for a truly effective lie detector gained new urgency after the
terrorist attacks of 11 September 2001. Several of the hijackers had managed to
enter the US after successfully deceiving
border agents. Suddenly, intelligence and border services wanted tools
that actually worked. A flood of new government funding made lie detection big
business again. “Everything changed after 9/11,” writes psychologist Paul Ekman
in Telling Lies.
Ekman was one of the
beneficiaries of this surge. In the 1970s, he had been filming interviews with
psychiatric patients when he noticed a brief flash of despair cross the
features of Mary, a 42-year-old suicidal woman, when she lied about feeling
better. He spent the next few decades cataloguing how these tiny movements of
the face, which he termed “micro-expressions”,
can reveal hidden truths.
Ekman’s work was hugely
influential with psychologists, and even served as the basis for Lie to Me, a
primetime television show that debuted in 2009 with an Ekman-inspired lead
played by Tim Roth. But it got its first real-world test in 2006, as part of a
raft of new security measures introduced to combat terrorism. That year, Ekman
spent a month teaching US immigration officers how to detect deception at
passport control by looking for certain micro-expressions. The results are
instructive: at least 16 terrorists were permitted to enter the US in the
following six years.
Investment in lie-detection
technology “goes in waves”, said Dr John Kircher, a University of Utah
psychologist who developed a digital scoring system for the polygraph. There
were spikes in the early 1980s, the mid-90s and the early 2000s, neatly
tracking with Republican administrations and foreign wars. In 2008, under
President George W Bush, the US Army spent $700,000 on 94 handheld
lie detectors for use in Iraq and Afghanistan. The Preliminary
Credibility Assessment Screening System had three sensors that attached to the
hand, connected to an off-the-shelf pager which flashed green for truth, red
for lies and yellow if it couldn’t decide. It was about as good as a
photocopier at detecting deception – and at eliciting the truth.
Some people believe an
accurate lie detector would have allowed border patrol to stop the 9/11
hijackers. “These people were already on watch lists,” Larry Farwell, the
inventor of brain fingerprinting, told me. “Brain fingerprinting could have
provided the evidence we needed to bring the perpetrators to justice before
they actually committed the crime.” A similar logic has been applied in the
case of European terrorists who returned from receiving training abroad.
As a result, the frontline
for much of the new government-funded lie detection technology has been the
borders of the US and Europe. In 2014, travellers flying into Bucharest were
interrogated by a virtual
border agent called Avatar, an on-screen figure in a white shirt
with blue eyes, which introduced itself as “the future of passport control”. As
well as an e-passport scanner and fingerprint reader, the Avatar unit has a
microphone, an infra-red eye-tracking camera and an Xbox Kinect sensor to
measure body movement. It is one of the first “multi-modal” lie detectors – one
that incorporates a number of different sources of evidence – since the
polygraph.
But the “secret sauce”,
according to David Mackstaller, who is taking the technology in Avatar to
market via a company called Discern Science, is in the software, which uses an
algorithm to combine all of these types of data. The machine aims to send a
verdict to a human border guard within 45 seconds, who can either wave the traveller
through or pull them aside for additional screening. Mackstaller said he is in
talks with governments – he wouldn’t say which ones – about installing Avatar
permanently after further tests at Nogales in Arizona on the US-Mexico border,
and with federal employees at Reagan Airport near Washington DC. Discern
Science claims accuracy rates in their preliminary studies – including the one
in Bucharest – have been between 83% and 85%.
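Discern Science has not published its algorithm, but the general idea of combining per-modality signals into one verdict can be sketched as a purely illustrative weighted fusion; the modality names, weights, and threshold below are assumptions, not the Avatar software.

```python
# Purely illustrative fusion of per-modality "deception scores" into one verdict.
# The weights and threshold are invented; the real algorithm is not public.
def fuse(scores, weights=None, threshold=0.6):
    weights = weights or {m: 1.0 for m in scores}
    total_w = sum(weights[m] for m in scores)
    combined = sum(scores[m] * weights[m] for m in scores) / total_w
    verdict = "refer for additional screening" if combined >= threshold else "wave through"
    return combined, verdict

modalities = {"eye_tracking": 0.7, "voice": 0.4, "body_movement": 0.55}
print(fuse(modalities, weights={"eye_tracking": 2.0, "voice": 1.0, "body_movement": 1.0}))
```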
The Bucharest trials were
supported by Frontex, the EU border agency, which is now funding a competing
system called iBorderCtrl, with its own virtual border guard. One aspect of
iBorderCtrl is based on Silent Talker, a technology that has been in
development at Manchester Metropolitan University since the early 2000s. Silent
Talker uses an AI model to analyse more than 40 types of microgestures in the
face and head; it only needs a camera and an internet connection to function.
On a recent visit to the company’s office in central Manchester, I watched
video footage of a young man lying about taking money from a box during a mock
crime experiment, while in the corner of the screen a dial swung from green, to
yellow, to red. In theory, it could be run on a smartphone or used on live
television footage, perhaps even during political debates, although co-founder
James O’Shea said the company doesn’t want to go down that route – it is
targeting law enforcement and insurance.
O’Shea and his colleague
Zuhair Bandar claim Silent Talker has an accuracy rate of 75% in studies so
far. “We don’t know how it works,” O’Shea said. They stressed the importance of
keeping a “human in the loop” when it comes to making decisions based on Silent
Talker’s results.
Mackstaller said Avatar’s
results will improve as its algorithm learns. He also expects it to perform
better in the real world because the penalties for getting caught are much
higher, so liars are under more stress. But research shows that the opposite
may be true: lab studies tend to overestimate real-world success.
Before these tools are rolled
out at scale, clearer evidence is required that they work across different
cultures, or with groups of people such as psychopaths, whose non-verbal
behaviour may differ from the norm. Much of the research so far has been
conducted on white Europeans and Americans. Evidence from other domains,
including bail and prison sentencing, suggests that algorithms tend to encode
the biases of the societies in which they are created. These effects could be
heightened at the border, where some of society’s greatest fears and prejudices
play out. What’s more, the black box of an AI model is not conducive to
transparent decision making since it cannot explain its reasoning. “We don’t
know how it works,” O’Shea said. “The AI system learned how to do it by
itself.”
Andy Balmer, the University
of Manchester sociologist, fears that technology will be used to reinforce
existing biases with a veneer of questionable science – making it harder for
individuals from vulnerable groups to challenge decisions. “Most reputable science
is clear that lie detection doesn’t work, and yet it persists as a field of
study where other things probably would have been abandoned by now,” he said.
“That tells us something about what we want from it.”
The
truth has only one face, wrote the 16th-century French philosopher Michel de
Montaigne, but a lie “has a hundred thousand shapes and no defined limits”.
Deception is not a singular phenomenon and, as of yet, we know of no telltale
sign of deception that holds true for everyone, in every situation. There is no
Pinocchio’s nose. “That’s seen as the holy grail of lie detection,” said Dr
Sophie van der Zee, a legal psychologist at Erasmus University in Rotterdam.
“So far no one has found it.”
The accuracy rates of 80-90%
claimed by the likes of EyeDetect and Avatar sound impressive, but applied at
the scale of a border crossing, they would lead to thousands of innocent people
being wrongly flagged for every genuine threat they identified. They might also
mean that two out of every 10 terrorists easily slip through.
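The base-rate arithmetic behind that claim can be made concrete. The traveller and threat counts below are illustrative assumptions; only the roughly 85% accuracy figure comes from the text.

```python
# Illustrative base-rate arithmetic. Traveller and threat counts are assumptions;
# the ~85% accuracy figure is taken from the claims quoted above.
travellers = 1_000_000
genuine_threats = 100
accuracy = 0.85  # assume both sensitivity and specificity are 85%

innocent = travellers - genuine_threats
false_alarms = innocent * (1 - accuracy)            # innocent people wrongly flagged
caught_threats = genuine_threats * accuracy
missed_threats = genuine_threats * (1 - accuracy)   # threats that slip through

print(f"false alarms: {false_alarms:,.0f}")
print(f"threats caught: {caught_threats:.0f}, missed: {missed_threats:.0f}")
print(f"innocent people flagged per genuine threat caught: {false_alarms / caught_threats:,.0f}")
```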
History suggests that such
shortcomings will not stop these new tools from being used. After all, the
polygraph has been widely debunked, but an estimated 2.5m polygraph exams are
still conducted in the US every year. It is a $2.5bn industry. In the UK, the
polygraph has been used
on sex offenders since 2014, and in January 2019, the government
announced plans to
use it on domestic abusers on parole. The test “cannot be killed by science
because it was not born of science”, writes the historian Ken Alder in his book
The Lie Detectors.
New technologies may be
harder than the polygraph for unscrupulous examiners to deliberately
manipulate, but that does not mean they will be fair. AI-powered lie detectors
prey on the tendency of both individuals and governments to put faith in
science’s supposedly all-seeing eye. And the closer they get to perfect
reliability, or at least the closer they appear to get, the more dangerous they
will become, because lie detectors often get aimed at society’s most
vulnerable: women in the 1920s, suspected dissidents and homosexuals in the
60s, benefit claimants in the 2000s, asylum seekers and migrants today.
“Scientists don’t think much about who is going to use these methods,” said
Giorgio Ganis. “I always feel that people should be aware of the implications.”
In an era of fake news and
falsehoods, it can be tempting to look for certainty in science. But lie
detectors tend to surface at “pressure-cooker points” in politics, when
governments lower their requirements for scientific rigour, said Balmer. In
this environment, dubious new techniques could “slip neatly into the role the
polygraph once played”, Alder predicts.
One day, improvements in
artificial intelligence could find a reliable pattern for deception by scouring
multiple sources of evidence, or more detailed scanning technologies could
discover an unambiguous sign lurking in the brain. In the real world, however,
practised falsehoods – the stories we tell ourselves about ourselves, the lies
that form the core of our identity – complicate matters. “We have this
tremendous capacity to believe our own lies,” Dan Ariely, a renowned
behavioural psychologist at Duke University, said. “And once we believe our own
lies, of course we don’t provide any signal of wrongdoing.”
In his 1995 science-fiction
novel The Truth Machine, James Halperin imagined a world in which someone
succeeds in building a perfect lie detector. The invention helps unite the
warring nations of the globe into a world government, and accelerates the
search for a cancer cure. But evidence from the last hundred years suggests
that it probably wouldn’t play out like that in real life. Politicians are
hardly queueing up to use new technology on themselves. Terry Mullins, a
long-time private polygraph examiner – one of about 30 in the UK – has been
trying in vain to get police forces and government departments interested in
the EyeDetect technology. “You can’t get the government on board,” he said. “I
think they’re all terrified.”
Daniel Langleben, the
scientist behind No Lie MRI, told me one of the government agencies he was
approached by was not really interested in the accuracy rates of his
brain-based lie detector. An fMRI machine cannot be packed into a suitcase or
brought into a police interrogation room. The investigator cannot manipulate
the test results to apply pressure to an uncooperative suspect. The agency just
wanted to know whether it could be used to train agents to beat the polygraph.
“Truth is not really a
commodity,” Langleben reflected. “Nobody wants it.”
Alexios Mantzarlis
Director,
International Fact-Checking Network at The Poynter Institute
Saint Petersburg, Florida
The Poynter Institute
Institut d'Etudes politiques de Paris / Sciences Po Paris
Alexios Mantzarlis writes
about and advocates for fact-checking. He also trains and convenes
fact-checkers around the world.
As Director of the IFCN, Alexios has helped draft the fact-checkers' code of
principles, shepherded a partnership between third-party fact-checkers and
Facebook, testified to the Italian Chamber of Deputies on the "fake
news" phenomenon and helped launch International Fact-Checking Day. In January
2018 he was invited to join the European Union's High Level Group on fake news
and online disinformation. He has also drafted a lesson plan for UNESCO and a
chapter on fact-checking in the 2016 U.S. presidential elections in Truth
Counts, published by Congressional Quarterly.
The International Fact-Checking Network (IFCN) is a forum for fact-checkers
worldwide hosted by the Poynter Institute for Media Studies. These
organizations fact-check statements by public figures, major institutions and
other widely circulated claims of interest to society.
It launched in September 2015, in recognition of the fact that a booming crop
of fact-checking initiatives could benefit from an organization that promotes
best practices and exchanges in this field.
Among other things, the IFCN:
* Monitors trends and formats in fact-checking worldwide, publishing regular
articles on the dedicated Poynter.org channel.
* Provides training resources for fact-checkers.
* Supports collaborative efforts in international fact-checking.
* Convenes a yearly conference (Global Fact).
* Is the home of the fact-checkers' code of principles.
The IFCN has received funding from the Arthur M. Blank Family Foundation, the
Duke Reporters’ Lab, the Bill & Melinda Gates Foundation, Google, the
National Endowment for Democracy, the Omidyar Network, the Open Society
Foundations and the Park Foundation.
To find out more, follow @factchecknet on Twitter or go to bit.ly/GlobalFac
It took only 36 hours for
these students to solve Facebook's fake-news problem
Nov. 14, 2016, 8:12 PM
Anant Goel, Nabanita De,
Qinglin Chen and Mark Craft.
Facebook is facing
increasing criticism over its role in the 2016 US presidential election because
it allowed propaganda lies disguised as news stories to spread on the
social-media site unchecked.
The spreading of false
information during the election cycle was so bad that President Barack Obama
called Facebook a "dust
cloud of nonsense."
And Business Insider's
Alyson Shontell called Facebook CEO Mark Zuckerberg's reaction to this
criticism "tone-deaf." His public stance is that fake news is
such a small percentage of the stuff shared on Facebook that it couldn't have
had an impact. This even while Facebook has officially
vowed to do better and insisted that ferreting out the real news from
the lies is a difficult technical problem.
Just how hard of a problem
is it for an algorithm to determine real news from lies?
Not that hard.
During a hackathon at
Princeton University, four college students created one in the form of a Chrome
browser extension in just 36 hours. They named their project "FiB: Stop living a lie."
The students are Nabanita
De, a second-year master's student in computer science at the
University of Massachusetts at Amherst; Anant Goel, a freshman at Purdue
University; Mark Craft, a sophomore at the University of Illinois at
Urbana-Champaign; and Qinglin Chen, a sophomore also at the University of
Illinois at Urbana-Champaign.
Their News Feed authenticity
checker works like this, De tells us:
"It classifies every
post, be it pictures (Twitter snapshots), adult content pictures, fake links,
malware links, fake news links as verified or non-verified using artificial
intelligence.
"For links, we take
into account the website's reputation, also query it against malware and
phishing websites database and also take the content, search it on Google/Bing,
retrieve searches with high confidence and summarize that link and show to the
user. For pictures like Twitter snapshots, we convert the image to text, use
the usernames mentioned in the tweet, to get all tweets of the user and check
if current tweet was ever posted by the user."
The browser plug-in then
adds a little tag in the corner that says whether the story is verified.
For instance, it discovered
that this news story promising that pot cures cancer was fake, so it noted that
the story was "not verified."
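De's description amounts to a pipeline: check a link's domain against reputation and malware/phishing lists, then look for corroborating coverage before marking a post verified. The sketch below is a minimal illustration of that flow, not the students' code; the domain lists and the corroboration count are placeholders.

```python
# Minimal sketch of the kind of pipeline described above: check a link's domain
# against reputation/blocklist data, then require corroborating coverage.
# The lists and the search step are placeholders, not FiB's actual logic or data.
from urllib.parse import urlparse

KNOWN_MALICIOUS = {"malware.example", "phishing.example"}   # placeholder blocklist
LOW_REPUTATION = {"totally-real-news.example"}              # placeholder reputation list

def classify_link(url, corroborating_sources=0):
    domain = urlparse(url).netloc.lower()
    if domain in KNOWN_MALICIOUS:
        return "non-verified (malware/phishing)"
    if domain in LOW_REPUTATION and corroborating_sources == 0:
        return "non-verified (no corroboration)"
    return "verified" if corroborating_sources >= 2 else "unverified (needs review)"

# corroborating_sources would come from querying a search engine for the claim.
print(classify_link("http://totally-real-news.example/pot-cures-cancer", corroborating_sources=0))
print(classify_link("https://www.reuters.com/some-story", corroborating_sources=3))
```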
WhatsApp is by far one of the
world’s most popular messaging platforms. Unfortunately, that platform
can be rife with misinformation, especially if you happen to be a member of
large WhatsApp groups. WhatsApp has tried to fight this misinformation
in various ways in the past, but now they’re adding a new weapon
to their arsenal: fact-checking forwarded messages.
The
way the new
fact-checking tool works is when a user receives a forwarded message,
that message will now have a magnifying glass icon next to it. Tap that icon
and WhatsApp will ask you if you want to upload the message via your browser to
search for news about the claims made in the message. WhatsApp says the company
will never see the messages fact-checked through this tool.
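WhatsApp has not published implementation details, but the mechanism described amounts to handing the forwarded text to a web search in the user's own browser, which is why the company never sees the message. A minimal sketch of that idea (not WhatsApp's code):

```python
# Illustration of the idea only: the forwarded text is handed to a web search
# in the user's own browser, so the messaging service never sees the message.
from urllib.parse import quote_plus

def search_url_for(message, engine="https://www.google.com/search?q="):
    return engine + quote_plus(message)

print(search_url_for("Forwarded: face masks cause oxygen deficiency"))
```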
The
goal is to make it easy for users to check the facts of a subject instead of
leaving them to fact-check the claims manually. It’s a
nice feature for today’s world, especially considering just how much
misinformation is floating around out there about things like COVID-19 and face
masks. WhatsApp says the new fact-checking tool will roll out to users today in
Brazil, Italy, Ireland, Mexico, Spain, the U.K., and the U.S. To use it, you
just need to make sure you’ve updated to the latest version of WhatsApp on your
smartphone.
Ideas / Fact-checking / The
case for a public health approach to moderate health misinformation online : https://meedan.com/blog/the-case-for-a-public-health-approach-to-moderate-health-misinformation/
The Factual - News
Evaluator
thefactual.com ; https://chrome.google.com/webstore/detail/the-factual-news-evaluato/clbbiejjicefdjlblgnojolgbideklkp?hl=en
Fact-Check with Logically
https://www.logically.ai/factchecks
How do you spot and flag political misinformation?
11 ways to spot disinformation on social media
There are four simple steps:
1. Stop. Don’t repost anything immediately or comment. Pause for a second
and consider.
2. Investigate the source. Consider where the information is from, why
they have posted it, and who benefits from it.
3. Find other sources. Any news source loves getting a scoop, but
facts spread. If something is credible, other sources will quickly start
reporting on it, so look on other sites to see if the same thing is being
reported. Many websites also research and analyze just the facts, like Google Fact Check and Bellingcat.
Also consider the C.R.A.A.P test: consider the currency, relevancy, authority,
accuracy, and purpose of the source.
4. Trace the source. Extraordinary claims require extraordinary proof,
so find the source of the information. If it is an image, use a reverse image
search engine like Google Images or Yandex to
find if it has been used elsewhere or misattributed. If it is a quote from a
speech, type the quote into Google or Bing to see if you can find the video of the
original. For a headline or news story, go directly to the source and find the
article.
The warning signs are if something fails any of these simple tests. If
the source isn’t trustworthy, if nobody else is reporting it, or if the source
isn’t available, it could be misinformation posted either as a genuine mistake
or a malicious attempt to muddy the political waters.
‘To flood the zone with sh*t’
Either way, if you can’t investigate, find, and trace it, don’t repost
it. The strategy of those using misinformation is, to quote one
practitioner, “to flood the zone with shit,” to create so much confusing
and misleading information that a reader can’t tell the truth from lies and
gives up. Posting it, or even commenting on it, just helps to increase the
flood we are all dealing with.
https://www.techtarget.com/whatis/feature/10-ways-to-spot-disinformation-on-social-media
Fabula AI Limited
How AI Is
Learning to Identify Toxic Online Content
Machine-learning systems could help flag hateful,
threatening or offensive language
Social platforms large and small are
struggling to keep their communities safe from hate speech, extremist content,
harassment and misinformation. Most recently, far-right agitators posted openly
about plans to storm the U.S. Capitol before doing just that on January 6. One
solution might be AI: developing algorithms to detect and alert us to toxic and
inflammatory comments and flag them for removal. But such systems face big
challenges.
The prevalence of hateful or offensive
language online has been growing rapidly in recent years, and the problem is
now rampant. In some cases, toxic comments online have even resulted in real
life violence, from religious nationalism in Myanmar to neo-Nazi propaganda in
the U.S. Social media platforms, relying on thousands of human reviewers, are
struggling to moderate the ever-increasing volume of harmful content. In 2019,
it was reported that Facebook moderators are at risk of suffering from PTSD as
a result of repeated exposure to such distressing content. Outsourcing this
work to machine learning can help manage the rising volumes of harmful content,
while limiting human exposure to it. Indeed, many tech giants have been
incorporating algorithms into their content moderation for years.
One such example is Google’s Jigsaw, a company
focusing on making the internet safer. In 2017, it helped create Conversation
AI, a collaborative research project aiming to detect toxic comments online.
However, a tool produced by that project, called Perspective, faced substantial
criticism. One common complaint was that it created a general “toxicity score”
that wasn’t flexible enough to serve the varying needs of different platforms.
Some Web sites, for instance, might require detection of threats but not
profanity, while others might have the opposite requirements.
Another issue was that the algorithm learned
to conflate toxic comments with nontoxic comments that contained words related
to gender, sexual orientation, religion or disability. For example, one user
reported that simple neutral sentences such as “I am a gay black woman” or “I
am a woman who is deaf” resulted in high toxicity scores, while “I am a man”
resulted in a low score.
Following these concerns, the Conversation AI
team invited developers to train their own toxicity-detection algorithms and
enter them into three competitions (one per year) hosted on Kaggle, a Google
subsidiary known for its community of machine learning practitioners, public
data sets and challenges. To help train the AI models, Conversation AI released
two public data sets containing over one million toxic and non-toxic comments
from Wikipedia and a service called Civil Comments. The comments were rated on
toxicity by annotators, with a “Very Toxic” label indicating “a very hateful,
aggressive, or disrespectful comment that is very likely to make you leave a
discussion or give up on sharing your perspective,” and a “Toxic” label meaning
“a rude, disrespectful, or unreasonable comment that is somewhat likely to make
you leave a discussion or give up on sharing your perspective.” Some comments
were seen by many more than 10 annotators (up to thousands), due to sampling
and strategies used to enforce rater accuracy.
The goal of the first Jigsaw challenge was to
build a multilabel toxic comment classification model with labels such as
“toxic”, “severe toxic”, “threat”, “insult”, “obscene”, and “identity hate”.
The second and third challenges focused on more specific limitations of their
API: minimizing unintended bias towards pre-defined identity groups and
training multilingual models on English-only data.
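For orientation, a minimal multilabel setup for the first challenge might look like the sketch below, using TF-IDF features and one-vs-rest logistic regression rather than the winning transformer ensembles; the example comments and annotations are made up.

```python
# Minimal multilabel sketch for the first Jigsaw task (TF-IDF + one-vs-rest
# logistic regression), far simpler than the winning transformer ensembles.
# The example comments and their toy annotations are invented.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
texts = [
    "you are a wonderful person",
    "what a thoughtful comment, thanks",
    "you are an idiot",
    "shut up you idiot or I will hurt you",
]
y = np.array([
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 0, 1, 0, 1, 0],
    [1, 1, 0, 1, 1, 1],
])  # toy annotations so every label has positive and negative examples

model = make_pipeline(TfidfVectorizer(),
                      OneVsRestClassifier(LogisticRegression(max_iter=1000)))
model.fit(texts, y)
probs = model.predict_proba(["I will find you, idiot"])
print(dict(zip(LABELS, probs[0].round(3))))
```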
Although the challenges led to some clever
ways of improving toxic language models, our team at Unitary, a
content-moderation AI company, found none of the trained models had been
released publicly.
For that reason, we decided to take
inspiration from the best Kaggle solutions and train our own algorithms with
the specific intent of releasing them publicly. To do so, we relied on existing
“transformer” models for natural language processing, such as Google’s BERT.
Many such models are accessible in an open-source transformers library.
This is how our team built Detoxify, an
open-source, user-friendly comment detection library to identify inappropriate
or harmful text online. Its intended use is to help researchers and
practitioners identify potential toxic comments. As part of this library, we
released three different models corresponding to each of the three Jigsaw
challenges. While the top Kaggle solutions for each challenge use model
ensembles, which average the scores of multiple trained models, we obtained a
similar performance with only one model per challenge. Each model can be easily
accessed in one line of code and all models and training code are publicly
available on GitHub. You can also try a demonstration in Google Colab.
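A usage sketch for the library described above, assuming the `detoxify` package is installed (pip install detoxify); the three model names correspond to the three Jigsaw challenges, and the example sentences anticipate the comparison discussed below.

```python
# Usage sketch for Detoxify (pip install detoxify). The first call downloads
# model weights, so it needs a network connection. 'original', 'unbiased' and
# 'multilingual' correspond to the three Jigsaw challenges.
from detoxify import Detoxify

model = Detoxify('original')
results = model.predict([
    "I am tired of writing this stupid essay",
    "I am tired of writing this essay",
])
print(results)  # per-label scores, e.g. 'toxicity', 'insult', 'obscene', ...
```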
While these models perform well in a lot of
cases, it is important to also note their limitations. First, these models will
work well on examples that are similar to the data they have been trained on.
But they are likely to fail if faced with unfamiliar examples of toxic
language. We encourage developers to fine-tune these models on data sets
representative of their use case.
Furthermore, we noticed that the inclusion of
insults or profanity in a text comment will almost always result in a high
toxicity score, regardless of the intent or tone of the author. As an example,
the sentence “I am tired of writing this stupid essay” will give a toxicity
score of 99.7 percent, while removing the word ‘stupid’ will change the score to
0.05 percent.
Lastly, despite the fact that one of the
released models has been specifically trained to limit unintended bias, all
three models are still likely to exhibit some bias, which can pose ethical
concerns when used off-the-shelf to moderate content.
Although there has been considerable progress
on automatic detection of toxic speech, we still have a long way to go until
models can capture the actual, nuanced, meaning behind our language—beyond the
simple memorization of particular words or phrases. Of course, investing in
better and more representative datasets would yield incremental improvements,
but we must go a step further and begin to interpret data in context, a crucial
part of understanding online behavior. A seemingly benign text post on social
media accompanied by racist symbolism in an image or video would be easily
missed if we only looked at the text. We know that lack of context can often be
the cause of our own human misjudgments. If AI is to stand a chance of
replacing manual effort on a large scale, it is imperative that we give our
models the full picture.
https://www.scientificamerican.com/article/can-ai-identify-toxic-online-content/
How Graphika
fights misinformation by tracking it across social media
The
social network analysis company thwarts online misinformation efforts before
they have offline consequences.
BY STEVEN MELENDEZ
Social
network analysis company Graphika has made a name for itself spotting targeted
disinformation across the internet. In 2020, its researchers reported suspected
Russian operations targeting right-wing
U.S. voters before the presidential election. The New
York-based company also flagged Chinese state efforts targeting Taiwan, global misinformation around the coronavirus pandemic,
and a massive Kremlin-tied operation that
published thousands of posts across numerous platforms.
Working
with multiple, competing companies including Facebook, Google, and Twitter,
helps Graphika spot deceptive activities that aren’t just limited to one site
and get those posts taken down, says Chief Innovation Officer Camille François.
“It’s really important because all these disinformation campaigns, all these
sophisticated actors, they ignore the boundaries of the campuses on Silicon
Valley,” she explains. The company, which has presented its research before
Congress and European Parliament, looks to point out and thwart online
misinformation efforts before they have offline consequences.
https://www.fastcompany.com/90600377/graphika-most-innovative-companies-2021
Twitter may notify users exposed to
Russian propaganda during 2016 election
WASHINGTON (Reuters) -
Twitter may notify users whether they were exposed to content generated by a
suspected Russian propaganda service, a company executive told U.S. lawmakers
on Wednesday.
FILE PHOTO: A man reads
tweets on his phone in front of a displayed Twitter logo in Bordeaux,
southwestern France, March 10, 2016. REUTERS/Regis Duvignau/Illustration/File
Photo
The social media company is
“working to identify and inform individually” its users who saw tweets during
the 2016 U.S. presidential election produced by accounts tied to the
Kremlin-linked Internet Research Agency, Carlos Monje, Twitter’s director of public
policy, told the U.S. Senate Commerce, Science and Transportation Committee.
A Twitter spokeswoman did
not immediately respond to a request for comment about plans to notify its
users.
Facebook Inc in December
created a portal where its users could learn whether they had liked or followed
accounts created by the Internet Research Agency.
Both companies and
Alphabet’s YouTube appeared before the Senate committee on Wednesday to answer
lawmaker questions about their efforts to combat the use of their platforms
by violent extremists, such as the Islamic State.
But the hearing often turned
its focus to questions of Russian propaganda, a vexing issue for internet firms
who spent most of the past year responding to a backlash that they did too
little to deter Russians from using their services to anonymously spread
divisive messages among Americans in the run-up to the 2016 U.S. elections.
U.S. intelligence agencies
concluded Russia sought to interfere in the election through a variety of
cyber-enabled means to sow political discord and help President Donald Trump
win. Russia has repeatedly denied the allegations.
The three social media
companies faced a wide array of questions related to how they police different
varieties of content on their services, including extremist recruitment, gun
sales, automated spam accounts, intentionally fake news stories and Russian
propaganda.
Monje said Twitter had
improved its ability to detect and remove “maliciously automated” accounts, and
now challenged up to 4 million per week - up from 2 million per week last year.
Facebook’s head of global
policy, Monika Bickert, said the company was deploying a mix of technology and
human review to “disrupt false news and help (users) connect with authentic
news.”
Most attempts to spread
disinformation on Facebook were financially motivated, Bickert said.
The companies repeatedly
touted increasing success in using algorithms and artificial intelligence to
catch content not suitable for their services.
Juniper Downs, YouTube’s
director of public policy, said algorithms quickly catch and remove 98 percent
of videos flagged for extremism. But the company still deploys some 10,000
human reviewers to monitor videos, Downs said.
APRIL 07, 2020
The EUvsDisinfo database has now
surpassed 8000 disinformation cases, covering over 20 languages and averaging
about 60 new cases per week…: https://euvsdisinfo.eu/
Automatic Detection of Fake News
(Submitted on 23
Aug 2017)
The proliferation of misleading information in everyday-access media outlets such as social media feeds, news blogs, and online newspapers has made it challenging to identify trustworthy news sources, thus increasing the need for computational tools able
to provide insights into the reliability of online content. In this paper, we
focus on the automatic identification of fake content in online news. Our
contribution is twofold. First, we introduce two novel datasets for the task of
fake news detection, covering seven different news domains. We describe the
collection, annotation, and validation process in detail and present several
exploratory analyses on the identification of linguistic differences in fake
and legitimate news content. Second, we conduct a set of learning experiments
to build accurate fake news detectors. In addition, we provide comparative
analyses of the automatic and manual identification of fake news.
Subjects: Computation and Language (cs.CL)
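The paper’s learning experiments pair linguistic features with standard classifiers. As a hedged illustration of that general recipe only (not the authors’ actual pipeline; the example headlines and labels below are invented), a bag-of-words baseline in scikit-learn might look like this:

```python
# Minimal sketch of a linguistic-feature fake news classifier.
# This is NOT the paper's exact pipeline; it only illustrates the general
# TF-IDF + linear-classifier approach. The texts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Scientists publish peer-reviewed study on vaccine safety",
    "SHOCKING: celebrity reveals miracle cure doctors hide from you",
    "City council approves new budget after public hearing",
    "You won't believe what this one weird trick does to the economy",
]
train_labels = [0, 1, 0, 1]  # 0 = legitimate, 1 = fake (toy labels)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

print(model.predict(["Miracle cure exposed: what they don't want you to know"]))
```

The datasets described in the paper add richer linguistic cues and proper annotation and evaluation; the sketch above is only the skeleton of such a detector.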
How to detect emotions remotely with
wireless signals
September 23, 2016
MITCSAIL | EQ-Radio: Emotion
Recognition using Wireless Signals
MIT researchers have
developed “EQ-Radio,” a device that can detect a person’s emotions using
wireless signals.
By measuring subtle changes
in breathing and heart rhythms, EQ-Radio is 87 percent accurate at detecting if
a person is excited, happy, angry or sad — and can do so without on-body
sensors, according to the researchers.
MIT professor and project
lead Dina Katabi of MIT’s Computer Science and Artificial Intelligence
Laboratory (CSAIL) envisions the system being used in health care and in testing viewers’
reactions to ads or movies in real time.
Using wireless signals
reflected off people’s bodies, the device measures heartbeats as accurately as
an ECG monitor, with a margin of error of approximately 0.3 percent, according
to the researchers. It then studies the waveforms within each heartbeat to
match a person’s behavior to how they previously acted in one of the four
emotion-states.
The team will present the
work next month at the Association of Computing Machinery’s International
Conference on Mobile Computing and Networking (MobiCom).
EQ-Radio has three
components: a radio for capturing RF reflections, a heartbeat extraction
algorithm, and a classification subsystem that maps the learned physiological
signals to emotional states. (credit: Mingmin Zhao et al./MIT)
EQ-Radio sends wireless
signals that reflect off of a person’s body and back to the device. To detect
emotions, its beat-extraction algorithms break the reflections into individual
heartbeats and analyze the small variations in heartbeat intervals to determine
their levels of arousal and positive affect.
These measurements are what
allow EQ-Radio to detect emotion. For example, a person whose signals correlate
to low arousal and negative affect is more likely to be tagged as sad, while
someone whose signals correlate to high arousal and positive affect would
likely be tagged as excited.
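The quadrant logic described above can be written down directly. The function below is only an illustrative sketch with assumed labels for each quadrant; EQ-Radio itself learns the mapping from heartbeat-derived features for each person rather than from hand-set rules:

```python
def quadrant_emotion(arousal: float, positive_affect: float) -> str:
    """Map normalized arousal and positive-affect scores in [-1, 1] to the four
    emotion labels used in the article. Illustrative only: the real system
    learns this mapping from wireless heartbeat features, not fixed rules."""
    if arousal >= 0 and positive_affect >= 0:
        return "excited"  # high arousal, positive affect
    if arousal >= 0 and positive_affect < 0:
        return "angry"    # high arousal, negative affect
    if arousal < 0 and positive_affect >= 0:
        return "happy"    # low arousal, positive affect
    return "sad"          # low arousal, negative affect

print(quadrant_emotion(-0.4, -0.7))  # "sad", matching the example in the text
```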
The exact correlations vary
from person to person, but are consistent enough that EQ-Radio could detect
emotions with 70 percent accuracy even when it hadn’t previously measured the
target person’s heartbeat. In the future it could be used for non-invasive
health monitoring and diagnostic settings.
For the experiments,
subjects used videos or music to recall a series of memories that each evoked
one of the four emotions, as well as a no-emotion baseline. Trained just on those
five sets of two-minute videos, EQ-Radio could then accurately classify the
person’s behavior among the four emotions 87 percent of the time.
One of the challenges was to
tune out irrelevant data. To get individual heartbeats, for example, the team
had to dampen the breathing, since the distance that a person’s chest moves
from breathing is much greater than the distance that their heart moves to
beat.
To do so, the team focused
on wireless signals that are based on acceleration rather than distance
traveled, since the rise and fall of the chest with each breath tends to be
much more consistent, and therefore has a lower acceleration, than the motion of the heartbeat.
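A rough way to see why acceleration helps, sketched on synthetic numbers (the amplitudes, frequencies, and sample rate below are invented; this is not the paper’s beat-extraction algorithm):

```python
import numpy as np

fs = 100.0                      # sample rate in Hz (assumed for illustration)
t = np.arange(0, 10, 1 / fs)

# Synthetic chest displacement: large, slow breathing plus a small, faster
# heartbeat component (all amplitudes and frequencies are invented).
breathing = 5.0 * np.sin(2 * np.pi * 0.25 * t)   # ~15 breaths/min, large motion
heartbeat = 0.1 * np.sin(2 * np.pi * 1.2 * t)    # ~72 beats/min, tiny motion

# Differentiating twice (acceleration) scales each sinusoid by (2*pi*f)^2,
# so the faster heartbeat gains far more than the slow, smooth breathing.
breathing_acc = np.diff(breathing, n=2) * fs**2
heartbeat_acc = np.diff(heartbeat, n=2) * fs**2

print("heartbeat/breathing ratio in displacement:",
      np.ptp(heartbeat) / np.ptp(breathing))          # ~0.02: heartbeat buried
print("heartbeat/breathing ratio in acceleration:",
      np.ptp(heartbeat_acc) / np.ptp(breathing_acc))  # ~0.46: easier to isolate
```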
Abstract of
Emotion Recognition using Wireless Signals
This paper demonstrates a
new technology that can infer a person’s emotions from RF signals reflected off
his body. EQ-Radio transmits an RF signal and analyzes its reflections off a
person’s body to recognize his emotional state (happy, sad, etc.). The key
enabler underlying EQ-Radio is a new algorithm for extracting the individual
heartbeats from the wireless signal at an accuracy comparable to on-body ECG
monitors. The resulting beats are then used to compute emotion-dependent
features which feed a machine-learning emotion classifier. We describe the
design and implementation of EQ-Radio, and demonstrate through a user study
that its emotion recognition accuracy is on par with state-of-the-art emotion
recognition systems that require a person to be hooked to an ECG monitor.
references:
Mingmin Zhao, Fadel Adib,
Dina Katabi. Emotion Recognition using Wireless Signals. MobiCom’16, October 03
- 07, 2016, New York City, NY, USA; DOI: 10.1145/2973750.2973762 (open access)
Brain scan better than polygraph in
spotting lies
fMRI spots more
lies in first controlled comparison of the two technologies
November 10, 2016
Significant
clusters in fMRI exam are located in the anterior cingulate cortex, bilateral
inferior frontal, inferior parietal and medial temporal gyri, and the
precuneus. (credit: Perelman School of Medicine at the University of
Pennsylvania/Journal of Clinical Psychiatry)
When someone is lying, areas
of the brain linked to decision-making are activated, which lights up on an
fMRI scan for experts to see. While laboratory studies showed fMRI’s ability to
detect deception with up to 90 percent accuracy, estimates of polygraphs’
accuracy ranged wildly, between chance and 100 percent, depending on the study.
The Penn study is the first
to compare the two modalities in the same individuals in a blinded and
prospective fashion. The approach adds scientific data to the long-standing
debate about this technology and builds the case for more studies investigating
its potential real-life applications, such as evidence in criminal legal
proceedings.
Neuroscientists better than polygraph
examiners at detecting deception
Researchers from Penn’s
departments of Psychiatry and Biostatistics and Epidemiology found that
neuroscience experts without prior experience in lie detection, using fMRI
data, were 24 percent more likely to detect deception than professional
polygraph examiners reviewing polygraph recordings. In both fMRI and polygraph,
participants took a standardized “concealed information” test.*
Polygraph monitors
individuals’ electrical skin conductivity, heart rate, and respiration during a
series of questions. Polygraph is based on the assumption that incidents of
lying are marked by upward or downward spikes in these measurements.
“Polygraph measures reflect
complex activity of the peripheral nervous system that is reduced to only a few
parameters, while fMRI is looking at thousands of brain clusters with higher
resolution in both space and time. While neither type of activity is unique to
lying, we expected brain activity to be a more specific marker, and this is
what I believe we found,” said the study’s lead author, Daniel D.
Langleben, MD, a professor of Psychiatry.
fMRI Correct and
Polygraphy Incorrect. (Left) All 3 fMRI raters correctly identified number 7 as
the concealed number. (Right) Representative fragments from the electrodermal
activity polygraphy channel correspond to responses about the same concealed
numbers. The gray bars mark the time of polygraph examiner’s question (“Did you
write the number [X]?”), and the thin black bars immediately following indicate
the time of participant’s “No” response. All 3 polygraph raters incorrectly
identified number 6 as the Lie Item. (credit: Daniel D. Langleben et
al./Journal of Clinical Psychiatry)
In one example in the paper,
fMRI clearly shows increased brain activity when a participant, who picked the
number seven, is asked if that is their number. Experts who studied the
polygraph counterpart incorrectly identified the number six as the lie. The
polygraph associated with the number six shows high peaks after the participant
is asked the same questions several times in a row, suggesting that answer was
a lie.
The scenario was reversed in another example in the paper; neither the fMRI nor the polygraph experts were perfect. Overall, however, the fMRI experts were 24 percent more likely to detect the lie in any given participant.
Combination of technologies was 100
percent correct
Beyond the accuracy
comparison, authors made another important observation. In the 17 cases when
polygraph and fMRI agreed on what the concealed number was, they were 100 percent
correct. Such high precision of positive determinations could be especially
important in the United States and British criminal proceedings, where avoiding
false convictions takes absolute precedence over catching the guilty, the
authors said.
They cautioned that while
this does suggest that the two modalities may be complementary if used in
sequence, their study was not designed to test combined use of both modalities
and their unexpected observation needs to be confirmed experimentally before any
practical conclusions could be made.
The study was supported by
the U.S. Army Research Office, No Lie MRI, Inc, and the University of
Pennsylvania Center for MRI and Spectroscopy.
* To compare the
two technologies, 28 participants were given the so-called “Concealed
Information Test” (CIT). CIT is designed to determine whether a person has
specific knowledge by asking carefully constructed questions, some of which
have known answers, and looking for responses that are accompanied by spikes in
physiological activity. Sometimes referred to as the Guilty Knowledge Test, CIT
has been developed and used by polygraph examiners to demonstrate the
effectiveness of their methods to subjects prior to the actual polygraph
examination.
In the Penn
study, a polygraph examiner asked participants to secretly write down a number
between three and eight. Next, each person was administered the CIT while
either hooked to a polygraph or lying inside an MRI scanner. Each of the
participants had both tests, in a different order, a few hours apart. During
both sessions, they were instructed to answer “no” to questions about all the
numbers, making one of the six answers a lie. The results were then
evaluated by three polygraph and three neuroimaging experts separately and then
compared to determine which technology was better at detecting the fib.
Abstract of Polygraphy and
Functional Magnetic Resonance Imaging in Lie Detection: A Controlled Blind
Comparison Using the Concealed Information Test
Objective: Intentional
deception is a common act that often has detrimental social, legal, and
clinical implications. In the last decade, brain activation patterns associated
with deception have been mapped with functional magnetic resonance imaging
(fMRI), significantly expanding our theoretical understanding of the
phenomenon. However, despite substantial criticism, polygraphy remains the only
biological method of lie detection in practical use today. We conducted a
blind, prospective, and controlled within-subjects study to compare the
accuracy of fMRI and polygraphy in the detection of concealed information. Data
were collected between July 2008 and August 2009.
Method: Participants
(N = 28) secretly wrote down a number between 3 and 8 on a slip of
paper and were questioned about what number they wrote during consecutive and
counterbalanced fMRI and polygraphy sessions. The Concealed Information Test
(CIT) paradigm was used to evoke deceptive responses about the concealed
number. Each participant’s preprocessed fMRI images and 5-channel polygraph
data were independently evaluated by 3 fMRI and 3 polygraph experts, who made
an independent determination of the number the participant wrote down and
concealed.
Results: Using a
logistic regression, we found that fMRI experts were 24% more likely (relative
risk = 1.24, P < .001) to detect the concealed
number than the polygraphy experts. Incidentally, when 2 out of 3 raters in
each modality agreed on a number (N = 17), the combined accuracy was
100%.
Conclusions: These data
justify further evaluation of fMRI as a potential alternative to polygraphy.
The sequential or concurrent use of psychophysiology and neuroimaging in lie
detection also deserves new consideration.
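As a hedged arithmetic aside on what “relative risk = 1.24” means in practice (the rates below are invented placeholders; the paper reports only the ratio and the P value):

```python
# Relative risk compares the fMRI raters' detection rate to the polygraph raters'.
# These proportions are hypothetical, chosen only so the ratio comes out near the
# reported 1.24; they are not the study's raw numbers.
p_fmri = 0.84   # hypothetical share of fMRI rater decisions that found the lie
p_poly = 0.68   # hypothetical share of polygraph rater decisions that did
print(f"relative risk = {p_fmri / p_poly:.2f}")  # ~1.24, i.e. 24% more likely
```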
Fake
video threatens to rewrite history. Here’s how to protect it
AI-generated deepfakes
aren’t just a problem for politics and other current affairs. Unless we act
now, they could also tamper with our record of the past.
BY BENJ EDWARDS
Since deepfakes burst
onto the scene a few years ago, many have worried that they represent a grave threat to our social fabric. Creators of
deepfakes use artificial intelligence-based neural network algorithms to craft
increasingly convincing forgeries of video, audio, and photography almost as if
by magic. But this new technology doesn’t just threaten our present discourse.
Soon, AI-generated synthetic media may reach into the past and sow doubt into
the authenticity of historical events, potentially destroying the credibility
of records left behind in our present digital era.
In an age of very little
institutional trust, without a firm historical context that future historians
and the public can rely on to authenticate digital media events of the past, we
may be looking at the dawn of a new era of civilization: post-history. We need
to act now to ensure the continuity of history without stifling the creative
potential of these new AI tools.
Imagine that it’s the year
2030. You load Facebook on your smartphone, and you’re confronted with a video
that shows you drunk and
deranged, sitting in your living room saying racist things while waving a
gun. Typical AI-assisted character attack, you think. No
biggie.
You scroll down the page.
There’s a 1970s interview video of Neil
Armstrong and Buzz Aldrin on The Dick Cavett Show declaring,
“We never made it to the moon. We had to abort. The radiation belt was too
strong.” 500,000 likes.
Further down, you see the
video of a police officer with a knee on George Floyd’s neck. In this version,
however, the officer eventually lifts his knee and Floyd stands up, unharmed.
Two million likes.
Here’s a 1966 outtake
from Revolver where the Beatles sing about Lee Harvey Oswald.
It sounds exactly like the Fab Four in their prime. But people have been generating new Beatles songs for the past three years,
so you’re skeptical.
You click a link and read an
article about James Dean. There’s a realistic photo of him kissing Marilyn Monroe—something suggested in the article—but
it has been generated for use by the publication. It’s clearly labeled as an
illustration, but if taken out of context, it could pass for a real photo from
the 1950s.
Further down your feed,
there’s an ad for a new political movement growing every day: Break the Union.
The group has 50 million members. Are any
of them real? Members write convincing posts every day—complete with photos of their
daily lives—but massive AI astroturfing campaigns have been around for some
time now.
Meanwhile, riots and
protests rage nonstop in cities around America. Police routinely alter body
camera footage to erase evidence of wrongdoing before releasing it to the
public. Inversely, protesters modify body camera and smartphone footage to make
police actions appear worse than they were in reality. Each altered version of
events serves only to stoke a base while further dividing the opposing
factions. The same theme plays out in every contested social arena.
In 2030, most people know
that it’s possible to fake any video, any voice, or any statement from a person
using AI-powered tools that are freely available. They generate many thousands
of media fictions online every day, and that quantity is only going to balloon in the years to come.
But in a world where
information flows through social media faster than fact-checkers can process
it, this disinformation sows enough doubt among those who don’t understand how
the technology works (and apathy among those who do) to destroy the shared
cultural underpinnings of society—and trust in history itself. Even skeptics
allow false information to slip through the cracks when it conveniently
reinforces their worldview.
This is the age of
post-history: a new epoch of civilization where the historical record is so
full of fabrication and noise that it becomes effectively meaningless. It’s as
if a cultural singularity ripped a hole so deeply in history that no truth can
emerge unscathed on the other side.
HOW DEEPFAKES THREATEN PUBLIC TRUST
Deepfakes mean more than
just putting
Sylvester Stallone’s face onto Macaulay Culkin’s body. Soon, people will be
able to craft novel photorealistic images and video wholesale using open-source
tools that utilize the power of neural networks to “hallucinate” new images where none existed before.
The technology is still in
its early stages. And right now, detection is relatively easy, because many
deepfakes feel “off.” But as techniques improve, it’s not a stretch to expect
that amateur-produced AI-generated or -augmented content will soon be able to fool both human and machine detection in
the realms of audio, video, photography, music, and even written text. At that point, anyone with a desktop PC
and the right software will be able to create new media artifacts that present
any reality they want, including clips that appear to have been generated in
earlier eras.
The study of history
requires primary source documents that historians can authenticate as being
genuine—or at least genuinely created during a certain time period. They do
this by placing them in a historical context.
CURRENTLY THE HISTORICAL
INTEGRITY OF OUR ONLINE CULTURAL SPACES IS ATROCIOUS.
It has always been possible
to falsify paper documents and analog media artifacts given enough time, money,
and skill. Since the birth of photography, historians have been skeptical about
accepting evidence unless it matches up with other accounts and includes a
convincing provenance. But traditionally, the high barriers to pulling off
convincing forgeries have allowed historians to easily pick out fakes,
especially when their context is misleading or misrepresented.
Today, most new media
artifacts are “born digital,” which means they exist only as bits stored on
computer systems. The world generates untold petabytes of such artifacts every
day. Given the proper technology, novel digital files can be falsified without
leaving a trace. And thanks to new AI-powered tools, the barriers to
undetectably synthesizing every form of digital media are potentially about to
disappear.
In the future, historians
will attempt to authenticate digital media just as they do now: by tracking
down its provenance and building a historical context around its earliest
appearances in the historical record. They can compare versions across
platforms and attempt to trace its origin point.
But if the past is any
indication, our online archives might not survive long enough to provide the
historical context necessary to allow future historians to authenticate digital
artifacts of our present era. Currently the historical integrity of our online
cultural spaces is atrocious. Culturally important websites disappear, blog archives
break, social media sites reset, online services shut down, and comments sections that
include historically valuable reactions to events vanish without warning.
Today much of the historical
context of our recent digital history is held together tenuously by volunteer archivists and
the nonprofit Internet Archive, although increasingly universities and
libraries are joining the effort. Without the Internet Archive’s Wayback
Machine, for example, we would have almost no record of the early web. Yet
even with the Wayback Machine’s wide reach, many sites and social media posts
have slipped through the cracks, leaving potential blind spots where synthetic
media can attempt to fill in the blanks.
THE PERIL OF HISTORICAL CONTEXT ATTACKS
If these weaknesses in our
digital archives persist into the future, it’s possible that forgers will soon
attempt to generate new historical context using AI tools, thereby justifying
falsified digital artifacts.
Let’s say it’s 2045. Online,
you encounter a video supposedly from the year 2001 of then-President George W.
Bush meeting with Osama bin Laden. Along with it, you see screenshots of news
websites at the time the video purportedly debuted. There are dozens of news articles discussing it, written perfectly in the voices of their authors (by an improved GPT-3-style algorithm). Heck, there’s even a vintage CBS
Evening News segment with Dan Rather in which he discusses the video.
(It wasn’t even a secret back then!)
Trained historians
fact-checking the video can point out that not one of those articles appears in
the archives of the news sites mentioned, that CBS officials deny the segment
ever existed, and that it’s unlikely Bush would have agreed to meet with bin
Laden at that time. Of course, the person presenting the evidence claims those
records were deleted to cover up the event. And let’s say that enough pages are
missing in online archives that it appears plausible that some of the articles
may have existed.
This hypothetical episode
won’t just be one instance out of the blue that historians can pick apart at
their leisure. There may be millions of similar AI-generated context attacks on
the historical record published every single day around the world, and the
sheer noise of it all might overwhelm any academic process that can make sense
of it.
Without reliable digital
primary source documents—and without an ironclad chronology in which to frame
both the documents and their digital context—the future study of the history of
this period will be hampered dramatically, if not completely destroyed.
POTENTIAL SOLUTIONS
Let’s say that, in the
future, there’s a core group of historians still holding the torch of
enlightenment through these upcoming digital dark ages. They will need a new
suite of tools and cultural policies that will allow them to put digital
artifacts—real and synthesized alike—in context. There won’t be black-and-white
solutions. After all, deepfakes and synthesized media will be valuable
historical artifacts in their own way, just as yesteryear’s dead-tree
propaganda was worth collecting and preserving.
Currently some attempts are
being made to solve this upcoming digital media credibility problem, but they
don’t yet have the clarion call of urgency behind them that’s necessary to push
the issue to the forefront of public consciousness. The death of history and breakdown
of trust threatens the continuity of civilization itself, but most people are
still afraid to talk in such stark terms. It’s time to start that conversation.
Here are some measures that
society may take—some more practical than others:
1. MAINTAIN BETTER HISTORICAL ARCHIVES
To study the past,
historians need reliable primary source materials provided by trustworthy
archives. More public and private funding needs to be put into reliable,
distributed digital archives of websites, news articles, social media posts,
software, and more. Financial support for organizations such as the Internet
Archive is paramount.
2. TRAIN COMPUTERS TO SPOT FAKES
It’s currently possible to
detect some of today’s imperfect deepfakes using
telltale artifacts or heuristic analysis. Microsoft recently debuted
a new way to spot hiccups in synthetic media. The Defense
Advanced Research Projects Agency, or DARPA, is working on a program
called SemaFor whose aim is to detect semantic deficiencies
in deepfakes, such as a photo of a man generated with anatomically incorrect
teeth or a person with a piece of jewelry that might be culturally out of
place.
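Telltale-artifact detectors take many forms. One heuristic discussed in the research literature looks for unusual high-frequency structure left behind by a generator’s upsampling layers; the sketch below is a toy version of that measurement on a synthetic array, not SemaFor, not Microsoft’s tool, and not a working detector:

```python
import numpy as np

def high_frequency_energy_fraction(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a central low-frequency disc.
    Some published deepfake heuristics flag images whose spectra show unusual
    high-frequency peaks from generator upsampling; this is a toy version of
    that measurement, not a production detector."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    low = radius <= cutoff * min(h, w) / 2
    return float(spectrum[~low].sum() / spectrum.sum())

# Synthetic stand-in for a grayscale frame; a real detector would load images.
rng = np.random.default_rng(0)
print(high_frequency_energy_fraction(rng.normal(size=(128, 128))))
```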
But as deepfake technology
improves, the tech industry will likely play a cat-and-mouse game of trying to
stay one step ahead, if it’s even possible. Microsoft recently wrote of deepfakes, “. . . the fact that
they’re generated by AI that can continue to learn makes it inevitable that
they will beat conventional detection technology.”
That doesn’t mean that
keeping up with deepfakes is impossible. New AI-based tools that detect
forgeries will likely help significantly, as will automated tools that can
compare digital artifacts that have been archived by different organizations
and track changes in them over time. The historical noise generated by
AI-powered context attacks will demand new techniques that can match the massive,
automated output generated by AI media tools.
3. CALL IN THE MODERATORS
In the future, the impact of
deepfakes on our civilization will be heavily dependent on how they are
published and shared. Social media firms could decide that suspicious content
coming from nontrusted sources will be aggressively moderated off their
platforms. Of course, that’s not as easy as it sounds. What is suspicious? What
is trusted? Which community guidelines do we uphold on a global platform
composed of thousands of cultures?
Facebook has already
announced a ban on deepfakes, but with hyper-realistic synthetic media
in the future, that rule will be difficult to enforce without aggressive
detection techniques. Eventually, social media firms could also attempt
draconian new social rules to rein in these techniques: say, that no one is allowed to post content that depicts anyone else unless it also includes themselves, or
perhaps only if all people in a video consent to its publication. But those
same rules may stifle the positive aspects of AI-augmented media in the future.
It will be a tough tightrope for social media firms to walk.
4. AUTHENTICATE TRUSTWORTHY CONTENT
One of the highest-profile
plans to counter deepfakes so far is the Content Authenticity
Initiative (CAI), which is a joint effort among Adobe, Twitter, The
New York Times, the BBC, and others. The CAI recently
proposed a system of encrypted content attribution metadata tags that
can be attached to digital media as a way to verify the creator and provenance
of data. The idea is that if you can prove that the content was created by a
certain source, and you trust the creator, you’ll be more likely to trust that
the content is genuine. The tags will also let you know if the content has been
altered.
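The CAI’s actual specification is considerably richer, but the core move (bind a content hash and a creator identity into a signature that anyone holding the creator’s public key can verify) can be sketched. The snippet assumes the third-party Python cryptography package, and the metadata field names are invented for illustration:

```python
# Hedged sketch of signed content-attribution metadata (not the real CAI format).
import hashlib, json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

creator_key = Ed25519PrivateKey.generate()       # the creator's signing key
media_bytes = b"...raw image or video bytes..."  # placeholder content

claim = json.dumps({
    "creator": "newsroom-example",               # invented field names
    "sha256": hashlib.sha256(media_bytes).hexdigest(),
}, sort_keys=True).encode()
signature = creator_key.sign(claim)              # claim + signature travel with the file

# A consumer who trusts the creator's public key checks both pieces:
public_key = creator_key.public_key()
try:
    public_key.verify(signature, claim)
    altered = hashlib.sha256(media_bytes).hexdigest() != json.loads(claim)["sha256"]
    print("signature valid; content altered since signing:", altered)
except InvalidSignature:
    print("metadata was not produced by this creator")
```

In practice the signed claim would travel with the file as embedded metadata, which is exactly where the concerns below about stripped or missing tags apply.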
CAI is a great step forward,
but it does have weak spots. It approaches the problem from a content
protection/copyright point of view. But individual authorship of creative works
may become less important in an era when new media could increasingly be
created on demand by AI tools.
It’s also potentially
dangerous to embed personally identifiable creator information into every file
we create—consider the risks it might present to those whose creations raise
the ire of authoritarian regimes. Thankfully, this is optional with the CAI,
but its optional nature also limits its potential to separate good content from
bad. And relying on metadata tags baked into individual files might also be a
mistake. If the tags are missing from a file, they can be added later after the
data has been falsified, and there will be no record of the earlier data to
fall back on.
5. CREATE A UNIVERSAL TIMESTAMP
To ensure the continuity of
history, it would be helpful to establish an unalterable chronology of digital
events. If we link an immutable timestamp to every piece of digital media, we
can determine if it has been modified over time. And if we can prove that a
piece of digital media existed in a certain form before the impetus to fake it
arose, it is much more likely to be authentic.
The best way to do that
might be by using a distributed ledger—a blockchain. You might wince at the
jargon, since the term blockchain has been so overused in
recent years. But it’s still a profound invention that might help secure our
digital future in a world without shared trust. A blockchain is an encrypted
digital ledger that is distributed across the internet. If a blockchain network
is widely used and properly secured, you cannot revise an entry in the ledger
once it is put in place.
Blockchain timestamps already exist, but they
need to be integrated on a deep level with all media creation devices to be
effective in this scenario. Here’s how an ideal history stamp solution might
work.
THE LEDGER, IF MAINTAINED
OVER TIME, WILL GIVE FUTURE HISTORIANS SOME HOPE FOR TRACKING DOWN THE ACTUAL
ORDER OF HISTORICAL EVENTS.
Every time a piece of
digital media is saved—whether created or modified on a computer, smartphone,
audio recorder, or camera—it would be assigned a cryptographic
hash, calculated from the file’s contents, that would serve as a digital
fingerprint of the file’s data. That fingerprint (and only the fingerprint)
would be automatically uploaded to a blockchain distributed across the internet
along with a timestamp that marked the time it was added to the blockchain.
Every social media post, news article, and web page would also get a
cryptographic fingerprint on the history blockchain.
When a piece of media is
modified, cropped, or retouched, a new hash would be created that references
the older hash and entered into the blockchain as well. To prevent inevitable
privacy issues, entries on the blockchain wouldn’t be linked to individual
authors, and the timestamped files themselves would stay private unless shared
by the creator. Like any cryptographic hash, people with access to the
blockchain would not be able to reverse-engineer the contents of the file from
the digital fingerprint.
To verify the timestamp of a
post or file, a social media user would click a button, and software would
calculate its hash and use that hash to search the history blockchain. If there
were a match, you’d be able to see when that hash first entered the ledger—and
thus verify that the file or post was created on a certain date and had not
been modified since then.
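A minimal sketch of that fingerprint-and-lookup flow, with an ordinary in-memory dictionary standing in for the distributed ledger (the one part a toy version cannot actually provide):

```python
import hashlib
import time

ledger = {}  # toy stand-in for the blockchain: fingerprint -> first-seen timestamp

def fingerprint(data: bytes) -> str:
    """Cryptographic hash of the file's contents; only this, never the file
    itself, would be published to the ledger."""
    return hashlib.sha256(data).hexdigest()

def register(data: bytes) -> str:
    digest = fingerprint(data)
    # A real ledger would reject rewrites by consensus; here the dict simply
    # refuses to overwrite an existing entry.
    ledger.setdefault(digest, time.time())
    return digest

def verify(data: bytes):
    """Return the first-seen timestamp for this exact content, or None if it
    is unknown (possibly new, possibly modified since registration)."""
    return ledger.get(fingerprint(data))

original = b"police body camera footage, raw export"
register(original)
print(verify(original) is not None)               # True: matches the ledger entry
print(verify(original + b" edited") is not None)  # False: any change breaks the match
```

The design point worth noting is that only the hash ever leaves the device, so the ledger can attest that this exact content existed by a certain date without revealing or storing the content itself.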
This technique wouldn’t
magically allow the general populace to trust each other. It will not verify
the “truth” or veracity of content. Deepfakes would be timestamped on the
blockchain too. But the ledger, if it is maintained over time, will give future
historians some hope for tracking down the actual order of historical events,
and they’ll be better able to gauge the authenticity of the content if it comes
from a trusted source.
Of course, the implementation
of this hypothetical history stamp will take far more work than what is laid
out here, requiring consensus from an array of stakeholders. But a system like
this would be a key first step in providing a future historical context for our
digital world.
6. RESTRICT ACCESS TO DEEPFAKE TOOLS
At some point, it’s likely
that politicians in the U.S. and Europe will widely call to make deepfake tools
illegal (as unmarked deepfakes already are in China). But a sweeping ban would be
problematic for a free society. These same AI-powered tools will empower an explosion in human creative potential and
artistry, and they should not be suppressed without careful thought. This would
be the equivalent of outlawing the printing press because you don’t like how it
can print books that disagree with your historical narrative.
Even if some synthetic media
software becomes illegal, the tools will still exist in rogue hands, so legal
remedies will likely only hamstring creative professionals while driving the
illicit tools underground where they can’t as easily be studied and audited by
tech watchdogs and historians.
7. BUILD A CRYPTOGRAPHIC ARK FOR THE
FUTURE
No matter the solution, we
need to prepare now for a future that may be overwhelmed by synthetic media. In
the short term, one important aspect of fixing the origin date of a media
artifact in time with a history blockchain is that if we can prove that the
media was created before a certain technology existed to falsify it, then we
know it is more likely to be genuine. (Admittedly, with rapid advances in
technology, this window may soon be closed.)
Still, if we had a timestamp
network up and running, we could create a “cryptographic ark” for future
generations that would contain the entirety of 20th-century media—films, music,
books, website archives, software, periodicals, 3D scans of physical
artifacts—digitized and timestamped to a date in the very near future (say,
January 1, 2022) so that historians and the general public 100 years from now
will be able to verify that yes, that video of Buzz Aldrin bouncing on the moon
really did originate from a time before 13-year-olds could generate any
variation of the film on their smartphone.
Of course, the nondigital
original artifacts will continue to be stored in archives, but with public
trust in institutions diminished in the future, it’s possible that people
(especially those not born in the 20th century who did not witness the media
events firsthand) won’t believe officials who claim those physical artifacts
are genuine if they don’t have the opportunity to study them themselves.
With the cryptographic ark,
anyone will be able to use the power of the history blockchain to verify the
historical era of pre-internet events if they can access the timestamped
versions of the digitized media artifacts from an online archive.
Thinking about all of this,
it might seem like the future of history is hopeless. There are rough waters
ahead, but there are actions we can take now to help the continuity of history
survive this turbulent time. Chief among them, we must all know and appreciate
the key role history plays in our civilization. It’s the record of what we do,
what we spend, how we live—the knowledge we pass on to our children. It’s how
we improve and build on the wisdom of our ancestors.
While we must not let
disinformation destroy our understanding of the past, we also must not descend
so far into fear that we stifle the creative tools that will power the next
generation of art and entertainment. Together, we can build new tools and
policies that will prevent digital barbarians from overwhelming the gates of
history. And we can do it while still nourishing the civilization inside.
https://www.fastcompany.com/90549441/how-to-prevent-deepfakes
Technology
vs. Truth: Deception in the Digital Age
In the digital age,
information, both true and false, spreads faster than ever. The same
technology that provides access to data across the globe can abet the warping
of truth and normalization of lies. In this eBook, we examine the
intersection of truth, untruth and technology, including how social media manipulates
behavior, technologies such as deepfakes that spread misinformation, the bias
inherent in algorithms and more.
https://www.infoscum.com/articles/5389586/technology-vs-truth-deception-in-the-digital-age
EU vs DISINFORMATION
EUvsDisinfo is the flagship project of the European External Action Service’s East StratCom Task Force. It was
established in 2015 to better forecast, address, and respond to the Russian
Federation’s ongoing disinformation campaigns affecting the European Union, its
Member States, and countries in the shared neighbourhood. EUvsDisinfo’s core objective
is to increase public awareness and understanding of the Kremlin’s
disinformation operations, and to help citizens in Europe and beyond develop
resistance to digital information and media manipulation. Cases in the EUvsDisinfo database
focus on messages in the international information space that are identified as
providing a partial, distorted, or false depiction ...
https://euvsdisinfo.eu/about/
November 22, 2017
Continuing
Transparency on Russian Activity
A few weeks ago, we shared
our plans to increase the transparency of advertising on Facebook. This is part
of our ongoing effort to protect our platforms and the people who use them from
bad actors who try to undermine our democracy.
As part of that continuing
commitment, we will soon be creating a portal to enable people on Facebook to
learn which of the Internet Research Agency Facebook Pages or Instagram
accounts they may have liked or followed between January 2015 and August 2017.
This tool will be available for use by the end of the year in the Facebook Help
Center.
It is important that people
understand how foreign actors tried to sow division and mistrust using Facebook
before and after the 2016 US election. That’s why as we have discovered
information, we have continually come forward to share it publicly and have
provided it to congressional investigators. And it’s also why we’re building
the tool we are announcing today.: https://newsroom.fb.com/news/2017/11/continuing-transparency-on-russian-activity/
Trolls for hire:
Russia's freelance disinformation firms offer propaganda with a professional
touch
Firms charged varying prices for services, such as
$8 for a social media post, $100 per 10 comments made on an article or post and
$65 for contacting a media source.
Security researchers set up a fake company, then hired Russian firms through secret online forums to destroy its reputation. (Chelsea Stahl / NBC News; Getty Images)
Oct. 1, 2019, 6:40 PM GMT+3
By Ben Popken
The same kinds of digital dirty tricks used to
interfere in the 2016
U.S. presidential election and beyond are now up for sale on
underground Russian forums for as little as a few thousand dollars, according
to a new report from an internet security company.
Businesses, individuals and politicians remain at
risk of attack from rivals taking advantage of "disinformation for
hire" services that are able to place seemingly legitimate articles on
various websites and then spread links to them through networks of inauthentic
social media accounts, warned
researchers at Insikt Group, a unit of the Boston-area-based threat
intelligence firm Recorded Future, in a report released Monday.
And to prove it, the researchers created a fake
company — then paid one Russian group $1,850 to build up its reputation and
another $4,200 to tear it down. The groups were highly professional, offering
responsive, polite customer service, and a menu of services. Firms charged
varying prices for services, such as $8 for a social media post, $100 per 10
comments made on an article or post and $65 for contacting a media source. Each
firm the researchers hired claimed to have experience working on targets in the
West.
One firm even had a public website with customer
testimonials. Researchers said the disinformation firms offered the kind of professional
responsiveness a company might expect from any contractor.
"This trolling-as-a-service is the expected
next step of social media influence after the success of the Internet Research
Agency," said Clint Watts, a senior fellow at the Foreign Policy Research
Institute and NBC News security analyst, referring to the Kremlin-linked
digital manipulation firm accused in the Mueller indictments of disrupting the
2016 election. "There’s high demand for nefarious influence and
manipulation, and trained disinformation operators who will seek higher
profits."
Politicians and companies have deployed and
countered disinformation for centuries, but its reach has been vastly extended by digital platforms designed to promote hot-button content and sell targeted ads.
Recently businesses
have been hit by fake correspondence and videos that hurt their
stock prices and send executives scrambling to hire third-party firms to
monitor for erroneous online headlines.
Previously, vendors of these kinds of malicious
online influence campaigns focused on Eastern Europe and Russia. But after
Russia’s playbook for social media manipulation became public after the 2016
election, sellers have proved willing to pursue other geographies and deploy
their services in the West, Roman Sannikov, an analyst with Recorded Future,
told NBC News.
"I don’t think social media companies have
come up with an automated way to filter out this content yet," Sannikov
said.
He advised company executives to stay vigilant for
false information being disseminated about their company and reach out to
social media companies to get it taken down before it spreads.
"It's really the symbiotic relationship
between media and social media, where they can take an article that looks legit
with a sensational headline and plop it into social to amplify the
effect," Sannikov said. "It’s this feedback loop that is so
dangerous."
The researchers created a fake company and hired
two firms that advertised their services on Russian-language private
marketplaces. One firm was hired to build the fake company’s reputation, the
other to destroy it.
Because the company was fake with no one following
it or talking about it, there was no way to measure the campaign’s impact on
real conversations. Activity about a fictitious company is also less likely to
trigger moderation.
But for as little as $6,000 the researchers used
the firms to plant four pre-written articles on websites, some of which were
lesser known. One website was for a media organization that has been in
existence for almost a century, according to the researchers, who withheld the
name of the company. One of the articles carried a paid content disclaimer.
Controlled accounts under fictitious personas then
spread links to those articles on social media with hyped-up headlines. One of
the firms first used more established accounts and then reposted the content
with batches of newer accounts on a variety of platforms including Facebook and
LinkedIn. One firm said it usually created several thousand accounts per
campaign because only a few would survive being banned. The accounts also
friended and followed other accounts in the target country.
The firms were also able to create social media
accounts for the fake company, which drew more than 100 followers, although it was
impossible to determine if any were real.
The security firm’s findings offer fresh evidence
that even after years of crackdowns and tens of thousands of account removals
by social media platforms, it’s still possible to create networks of phony
digital personas and operate them in concert to try to spread false information
online.
The firms claimed to use a network of editors,
translators, search engine optimization specialists, hackers and journalists, some
of them on retainer, as well as investigators on staff who could dig up dirt.
One firm even offered to lodge complaints about the
company for being involved in human trafficking. It also offered reputation
cratering services that could set someone up at work, counter a disinformation
attack, or "sink an opponent in an election."
"If our experience is any indication, we
predict that disinformation as a service will spread from a nation-state tool
to one increasingly used by private individuals and entities, given how easy it
is to implement," the researchers concluded.
https://www.nbcnews.com/tech/security/trolls-hire-russia-s-freelance-disinformation-firms-offer-propaganda-professional-n1060781
Technology is undermining democracy. Who
will save it?
Fast
Company kicks off our series “Hacking Democracy,” which will examine the
insidious impact of technology on democracy—and how companies, researchers,
and everyday users are fighting back…:
With
the 2020 election on the horizon, one of Washington’s best minds on regulating
tech shares his fears about social media manipulation and discusses Congress’s
failure to tackle election security and interference.
Senator
Mark Warner has proved himself to be a sort of braintrust on tech issues in the
Senate. Through his questioning of tech execs in hearings and the oft-cited
white papers produced by his office, the Virginia Democrat has arguably raised
the Senate’s game in understanding and dealing with Big Tech.
After
all, Warner and tech go way back. As a telecom guy in the
1980s, he was among the first to see the importance of wireless networks. He
made his millions brokering wireless spectrum deals around FCC auctions. As a
venture capital guy in the ’90s, he helped build the internet pioneer America
Online. And as a governor in the 2000s, he brought 700 miles of broadband cable
network to rural Virginia.
Government
oversight of tech companies is one thing, but in this election year Warner is
also thinking about the various ways technology is being used to threaten democracy
itself. We spoke shortly after the Donald Trump impeachment trial and the
ill-fated Iowa caucuses. It was a good time to talk about election
interference, misinformation, cybersecurity threats, and the government’s
ability and willingness to deal with such problems.
The following
interview has been edited for clarity and brevity.
Fast Company: Some news
outlets portrayed the Iowa caucus app meltdown as part
of a failed attempt by the Democratic party to push their tech and data game forward. Was that
your conclusion?
Mark Warner: I think it was a
huge screwup. Do we really want to trust either political party to run an
election totally independently, as opposed to having election professionals
[run it]? We have no information that outside sources were involved.
I
think it was purely a non-tested app that was put into place. But then you saw
the level and volume of [social media] traffic afterwards and all the conspiracy theories [about the legitimacy of
the results]. One of the things I’m still trying to get from our intel
community is how much of this conspiracy theory was being manipulated by
foreign bots. I don’t have that answer yet. I hope to have it soon. But it goes
to the heart of why this area is so important. The bad guys don’t have to come
in and change totals if they simply lessen Americans’ belief in the integrity
of our voting process. Or, they give people reasons not to vote, as they were
so successful in doing in 2016.
THE
BAD GUYS DON’T HAVE TO COME IN AND CHANGE TOTALS IF THEY SIMPLY LESSEN
AMERICANS’ BELIEF IN THE INTEGRITY OF OUR VOTING PROCESS.”
SENATOR
MARK WARNER
FC: Do you think that
the Department of Homeland Security is interacting with state election officials
and offering the kind of oversight and advice they should be?
MW: Chris Krebs [the director
of the Cybersecurity and Infrastructure Security Agency (CISA) in DHS] has done
a very good job. Most all state election systems now have what they call an
Einstein (cybersecurity certification) program, which is a basic protection
unit. I think we are better protected from hacking into actual voting machines
or actual election night results. But we could do better.
There
were a number of secretaries of state who in the first year after 2016 didn’t
believe the problem was real. I’m really proud of our [Senate Intelligence]
committee because we kept it bipartisan and we’ve laid [the problem] out—both
the election interference, and the Russian social media use. I don’t think
there’s an election official around that doesn’t realize these threats are
real.
But
I think the White House has been grossly irresponsible for not being willing to
echo these messages. I think it’s an embarrassment that Mitch McConnell has not
allowed any of these election security bills to come to the floor of
the Senate. I think it’s an embarrassment that the White House continues to
fight tooth and nail against any kind of low-hanging fruit like [bills
mandating] paper ballot backups and post-election audits. I’m still very
worried that three large [election equipment] companies control 90% of all the
voter files in the country. It doesn’t have to be the government, but there’s
no kind of independent industry standard on safety and security.
FC: When you think
about people trying to contaminate the accuracy or the legitimacy of the
election, do you think that we have more to worry about from foreign actors, or
from domestic actors who may have learned some of the foreign actors’ tricks?
MW: I think it’s a bit of
both. There are these domestic right-wing extremist groups, but a network that
comes out of Russia—frankly, comes out of Germany almost as much as
Russia—reinforces those messages. So there’s a real collaboration there.
There’s some of that on the left, but it doesn’t seem to be as pervasive.
China’s efforts, which are getting much more sophisticated, are more about
trying to manipulate the Chinese diaspora. There’s not that kind of
nation-state infrastructure to support some of this on the left. Although
ironically, some of the Russian activity does promote some of the leftist
theories, some of the “Bernie Sanders is getting screwed” theories. Because
again, it undermines everybody’s faith in the process.
FC: Are you worried
about deepfakes in this election cycle?
IT
UNDERMINES EVERYBODY’S FAITH IN THE PROCESS.”
SENATOR
MARK WARNER
MW: The irony is that there
hasn’t been a need for sophisticated deepfakes to have this kind of
interference. Just look at the two things with Pelosi—the one with the slurring of her speech, or the more
recent video where they’ve made it appear that she was
tearing up Trump’s State of the Union speech at inappropriate times during the
speech. So instead of showing her standing up and applauding the Tuskegee
Airmen, the video makes it look like she’s tearing up the speech while he’s
talking about the Tuskegee Airmen.
These
are pretty low-tech examples of deepfakes. If there’s this much ability to
spread [misinformation] with such low tech, think about what we may see in the
coming months with more sophisticated deepfake technology. You even have some
of the president’s family sending out some of those doctored videos. I believe
there is still a willingness from this administration to invite this kind of
mischief.
FC: Are there other
areas of vulnerability you’re concerned about for 2020?
MW: One of the areas that I’m
particularly worried about is messing with upstream voter registration files.
If you simply move 10,000 or 20,000 people in Miami-Dade County from one set of
precincts to another, and they show up to the right precinct but were listed in
a different precinct, you’d have chaos on election day. I’m not sure how often
the registrars go back and rescreen their voter file to make sure people are
still where they say they are.
One
area I want to give the Trump administration some credit for is they’ve allowed
our cyber capabilities to go a bit more on offense. For many years, whether you
were talking about Russian interference or Chinese intellectual property
thefts, we were kind of a punching bag. They could attack us with a great deal
of impunity. Now we have good capabilities here, too. So we’ve struck back a
little bit, and 2018 was much safer. But we had plenty of evidence that Russia
was going to spend most of their efforts on 2020, not 2018.
That’s
all on the election integrity side. Where we haven’t made much progress at all
is with social media manipulation, whether it’s the spreading of false theories
or the targeting that was geared at African Americans to suppress their vote in
2016.
FC: We’ve just come off a big
impeachment trial that revolved around the credibility of our elections, with
Trump asking a foreign power to help him get reelected. As you were sitting
there during the State of the Union on the eve of his acquittal in the Senate,
is there anything you can share with us about what you were thinking?
MW: In America, we’ve
lived through plenty of political disputes in our history and plenty of
political divisions. But I think there were rules both written and unwritten
about some level of ethical behavior that I think this president has thrown out
the window. While a lot of my Republican colleagues privately express chagrin
at that, so far they’ve not been willing to speak up. I’m so worried about this
kind of asymmetric attack from foreign entities, whether they’re for Trump or not
for Trump. If Russia was trying to help a certain candidate, and the candidate
didn’t want that help and that leaks out, that could be
devastating to somebody’s chances. [Warner proved prescient here. Reports of that very thing happening to Bernie
Sanders emerged days later on February 21.]
If
you add up what the Russians spent in our election in 2016, what they spent in
the Brexit vote a year or so before, and what they spent in the French
presidential elections . . . it’s less than the cost of one new F-35 airplane.
In a world where the U.S. is spending $748 billion on defense, for $35 million
or $50 million you can do this kind of damage. I sometimes worry that maybe
we’re fighting the last century’s wars when conflict in the 21st century is
going to be a lot more around cyber misinformation and disinformation, where
your dollar can go a long way. And if you don’t have a united opposition
against that kind of behavior, it can do a lot of damage.
FC: Do you think Congress is
up to the task of delivering a tough consumer data privacy bill anytime soon?
MW: We haven’t so far and
it’s one more example of where America is ceding its historic technology
leadership. On privacy, obviously the Europeans have moved with GDPR.
California’s moved with their own version of privacy law. The Brits, the
Australians, and the French are moving on content regulation. I think the only
thing that’s holding up privacy legislation is how much federal preemption
there ought to be. But I think there are ways to work through that.
I
do think that some of the social media companies may be waking up to the fact
that their ability to delay a pretty ineffective Congress may come back and
bite them. Because when Congress [is ready to pass regulation], the bar’s going
to be raised so much that I think there will be a much stricter set of
regulations than what might’ve happened if we’d actually passed something this
year or the year before.
I’ve
been looking at what I think are the issues around pro-competition, around more
disclosure around dark patterns. I’ve got a half dozen bills—all of them
bipartisan—that look at data portability, [data value] evaluation, and dark patterns. I’ve been working on some of the
election security stuff around Facebook. We are looking at some Section 230
reforms. My hope is that you have a privacy bill that we could then add a
number of these other things to, because I think the world is moving fast
enough that privacy legislation is necessary but not sufficient.
FC: You’re referencing
Section 230 of the Telecommunications Act of 1996, which protects tech
companies from being liable for what users post on their platforms and how they
moderate content. To focus on the Section 230 reforms for a moment, are you
contemplating a partial change to the language of the law that would make tech
platforms legally liable for a very specific kind of toxic content? Or are you
talking about a broader lifting of tech’s immunity under the law?
MW: Maybe Section 230
made some sense in the late ’90s when [tech platforms] were startup ventures.
But when 65% of Americans get some or all their news from Facebook and Google
and that news is being curated to you, the idea that [tech companies] should
bear no responsibility at all about the content you’re receiving is one of the
reasons why I think there’s broad-based interest in reexamining this.
I
THINK THERE’S A GROWING SENSITIVITY THAT THE STATUS QUO IS NOT WORKING.”
SENATOR
MARK WARNER
I
think there’s a growing sensitivity that the status quo is not working. It’s
pretty outrageous that we’re three and a half years after the 2016 campaign,
when the whole political world went from being techno-optimists to having a
more realistic view of these platform companies, and we still haven’t passed a
single piece of legislation.
I’ve
found some of Facebook’s arguments on protecting free speech
to be not very compelling. I think Facebook is much more comparable to a cable
news network than it is to a broadcasting station that does protect First
Amendment speech. And the way I’ve been thinking about it is that it’s less
about the ability to say stupid stuff or racist stuff—because there may be some
First Amendment rights on some of that activity—but more about the
amplification issue. You may have a right to say a stupid thing, but does that
right extend to guaranteeing a social media company will promote it a million
times or 100 million times without any restriction?
Despite the site’s reputation as a sometimes-toxic rumor mill, Reddit has become an unlikely home for passionate users who aim to call out disinformation as it spreads…:
Peter Pomerantsev
Nothing Is True and Everything Is Possible: The Surreal Heart of the New Russia
New York: Public Affairs, 2014
The death of truth: how we gave up on
facts and ended up with Trump
https://www.theguardian.com/books/2018/jul/14/the-death-of-truth-how-we-gave-up-on-facts-and-ended-up-with-trump
The KGB and Soviet Disinformation: An
Insider’s View
by Lawrence Martin-Bittman
In practising what it calls
disinformation, the Soviet Union has for years sponsored grand deceptions
calculated to mislead, confound, or inflame foreign opinion. Some of these
subterfuges have had a considerable impact on world affairs. Some also have had
unforeseeable consequences severely detrimental to Soviet interests.
Ultimately, they have made the Soviet Union the victim of its own deceit...
With KGB approval and support, the Czech STB in the autumn of 1964 initiated a
vast deception campaign to arouse Indonesian passions against the United
States. Through an Indonesian ambassador they had compromised with female
agents, the Czechs purveyed to President Sukarno a series of forged documents
and fictitious reports conjuring up CIA plots against him. One forgery
suggested that the CIA planned to assassinate Sukarno; another 'revealed' a
joint American-British plan to invade Indonesia from Malaysia. The unstable
Sukarno responded with anti-American diatribes, which some Indonesian
journalists in the pay of the KGB and STB amplified and Radio Moscow played
back to the Indonesian people. Incited mobs besieged American offices in
Djakarta, anti-American hysteria raged throughout the country, and US influence
was eradicated. The former STB deception specialist Ladislav Bittman has
written a history and analysis of the operation in which he participated. He
states, 'We ourselves were surprised by the monstrous proportions to which the
provocation grew.’…:
Denial: The Unspeakable Truth, by Keith Kahn-Harris (2018)
The Holocaust never
happened. The planet isn’t warming. Vaccines cause autism. There is no such
thing as AIDS. The Earth is flat.
Denialism comes in many
forms, dressed in the garb of research proudly claiming to represent the best
traditions of scholarship. Its influence is insidious, its techniques are
pernicious. Climate change denialists have built well-funded institutions and lobbying
groups to counter action against global warming. Holocaust deniers have harried
historians and abused survivors. AIDS denialists have prevented treatment
programmes in Africa.
All this is bad enough, but
what if, as Keith Kahn-Harris asks, it actually cloaks much darker,
unspeakable, desires? If denialists could speak from the heart, what would we
hear?
Kahn-Harris sets out not
just to unpick denialists’ arguments, but to investigate what lies behind them.
The conclusions he reaches are disturbing and uncomfortable:
Denialism has paved the way
for the recent emergence of what the author terms ‘post-denialism’, a key
component of the ‘post-truth’ world. Donald Trump’s lack of concern with truth
represents both denialism’s final victory and the final collapse of its claims
to scholarly legitimacy.
How should we adapt to the
post-denialist era? Keith Kahn-Harris argues that there is now no alternative
to enabling denialists and post-denialists to openly express the dark desires
that they have sought to hide. This is a horrifying prospect, but perhaps if we
accept the fact of ‘moral diversity’ and air these differences in the open, we
might be able to make new and better arguments against the denialists’ hidden
agendas.
Praise for the book:
‘An elegant exploration of how frail certainties really are, and how fragile truth is. While Kahn-Harris offers no easy answers on how to deal with ‘post-truth’, he does inspire you to act.’
Peter Pomerantsev, Author – Nothing
Is True and Everything Is Possible: The
Surreal Heart of the New Russia
‘This powerful book dives
deep into the darkness that drives denial. A very useful book for anyone who is
concerned about the state of the world, and a must-read for anyone who is not.’
The truth is always hidden in the past; in a prophetic 35-year-old interview, former KGB operative Yuri Bezmenov, aka Tomas David Schuman, explains Russian influence and subversion techniques that have disturbing echoes in the present-day US and UK. Full clip: https://youtu.be/bX3EZCVj2XA