Subjects: Computation and Language (cs.CL)
Cite as: arXiv:1708.07104 [cs.CL] (or arXiv:1708.07104v1 [cs.CL] for this version)
Bad News: Selling the story of disinformation
In
the beginning, there were ABC, NBC, and CBS, and they were good. Midcentury
American man could come home after eight hours of work and turn on his
television and know where he stood in relation to his wife, and his children,
and his neighbors, and his town, and his country, and his world. And that was
good. Or he could open the local paper in the morning in the ritual fashion, taking
his civic communion with his coffee, and know that identical scenes were
unfolding in households across the country.
Over
frequencies our American never tuned in to, red-baiting, ultra-right-wing radio
preachers hyperventilated to millions. In magazines and books he didn’t read,
elites fretted at great length about the dislocating effects of television. And
for people who didn’t look like him, the media had hardly anything to say at
all. But our man lived in an Eden, not because it was unspoiled, but because he
hadn’t considered any other state of affairs. For him, information was in its
right—that is to say, unquestioned—place. And that was good, too.
Today,
we are lapsed. We understand the media through a metaphor—“the information
ecosystem”—which suggests to the American subject that she occupies a
hopelessly denatured habitat. Every time she logs on to Facebook or YouTube or
Twitter, she encounters the toxic byproducts of modernity as fast as her
fingers can scroll. Here is hate speech, foreign interference, and trolling;
there are lies about the sizes of inauguration crowds, the origins of
pandemics, and the outcomes of elections…: https://harpers.org/archive/2021/09/bad-news-selling
- Does not repeatedly publish false content: The site does
not repeatedly produce stories that have been found—either by journalists
at NewsGuard or elsewhere—to be clearly and significantly false, and which
have not been quickly and prominently corrected. (22 Points. A site with an overall score lower than 60 points gets a red rating.)
- Gathers and presents information responsibly: Content on the
site is created by reporters, writers, videographers, researchers, or
other information providers who generally seek to be accurate and fair in
gathering, reporting, and interpreting information, even if they approach
their work from a strong point of view. They do this by referencing
multiple sources, preferably those that present direct, firsthand
information on a subject or event. (18 Points)
- Regularly corrects or clarifies errors: The site makes
clear how to contact those in charge and has effective practices for
publishing clarifications and corrections. (12.5 Points)
- Handles the difference between news and
opinion responsibly: Content providers who convey the
impression that they report news or a mix of news and opinion distinguish
opinion from news reporting, and when reporting news, they do not
regularly or egregiously misstate, distort, or cherry pick facts, or
egregiously cherry pick stories, to advance opinions. Content providers
whose clearly expressed purpose is to advance a particular point of view
do not regularly and egregiously misstate or distort facts to make their
case. (12.5 Points)
- Avoids deceptive headlines: The site
generally does not publish headlines that include false information,
significantly sensationalize, or otherwise do not reflect what is actually
in the story. (10 Points)
- Website discloses ownership and financing: The site
discloses its ownership and/or financing, as well as any notable ideological
or political positions held by those with a significant financial interest
in the site (including in the case of nonprofits, its major donors), in a
user-friendly manner. (7.5 Points)
- Clearly labels advertising: The site makes
clear which content is paid for and which is not. (7.5 Points)
- Reveals who’s in charge, including any
possible conflicts of interest: Information about those in charge
of the content is made accessible on the site, including any possible conflicts
of interest. (5 Points)
- Site provides the names of content creators,
along with either contact information or biographical information: Information
about those producing the content is made accessible on the site. (5
Points)
Criteria | Points
Does not repeatedly publish false content | 22
Gathers and presents information responsibly | 18
Regularly corrects or clarifies errors | 12.5
Handles the difference between news and opinion responsibly | 12.5
Avoids deceptive headlines | 10
Website discloses ownership and financing | 7.5
Clearly labels advertising | 7.5
Reveals who’s in charge, including any possible conflicts of interest | 5
Provides information about content creators | 5
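To make the arithmetic of this rating system concrete, here is a minimal sketch in Python (an assumed layout, not NewsGuard's actual software) that sums the criterion weights a hypothetical site satisfies and applies the 60-point red/green threshold described above; the criterion keys are shorthand labels of my own.

```python
# Minimal sketch (assumed layout, not NewsGuard's actual software): sum the
# published criterion weights that a hypothetical site satisfies and apply the
# 60-point red/green threshold described above. Criterion keys are my own shorthand.

CRITERIA_POINTS = {
    "no_repeated_false_content": 22.0,
    "responsible_gathering_and_presentation": 18.0,
    "corrects_errors": 12.5,
    "separates_news_and_opinion": 12.5,
    "avoids_deceptive_headlines": 10.0,
    "discloses_ownership_and_financing": 7.5,
    "labels_advertising": 7.5,
    "reveals_whos_in_charge": 5.0,
    "names_content_creators": 5.0,
}  # weights sum to 100

def newsguard_style_score(passed):
    """Return (score, rating) given the set of criteria the site passes."""
    score = sum(points for name, points in CRITERIA_POINTS.items() if name in passed)
    rating = "green" if score >= 60 else "red"
    return score, rating

# Example: a site that fails only the ownership-disclosure criterion.
passed = set(CRITERIA_POINTS) - {"discloses_ownership_and_financing"}
print(newsguard_style_score(passed))  # (92.5, 'green')
```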
• How to avoid falling for fake news
• The enduring appeal of conspiracy theories
• The hidden signs that can reveal a fake photo
How Susceptible Are You to Misinformation? There's a Test You
Can Take
A new misinformation quiz
shows that, despite the stereotype, younger Americans have a harder time
discerning fake headlines, compared with older generations
Many Americans seem to worry
that their parents or grandparents will fall
for fake news online. But as it turns out, we may be collectively concerned
about the wrong generation.
Contrary to popular belief,
Gen Zers and millennials could be more
susceptible to online misinformation than older adults, according to a poll
published online on June 29 by the research agency YouGov. What’s more, people
who spend more time online had more difficulty distinguishing between real and
fake news headlines. “We saw some results that are different from the ad hoc
kinds of tests that [previous] researchers have done,” says Rakoen Maertens, a
research psychologist at the University of Cambridge and lead author of a study
on the development of the test used in the poll, which was published on June 29
in Behavior
Research Methods.
Maertens’s team worked with
YouGov to administer a quick online quiz based on the test that the researchers
developed, dubbed the “misinformation susceptibility test” (MIST). It
represents the first standardized test in psychology for misinformation and
was set up in a way that allows researchers to administer it broadly and
collect huge amounts of data. To create their test, Maertens and his colleagues
carefully selected 10 actual headlines and 10 artificial-intelligence-generated
false ones—similar to those you might encounter online—that they then
categorized as “real” or “fake.” Test takers were asked to sort the real
headlines from the fake news and received a percentage score at the end for
each category. Here are a couple of examples of headlines from the test so you
can try out your “fake news detector”: “US Support for Legal Marijuana Steady
in Past Year,” “Certain Vaccines Are Loaded with Dangerous Chemicals and
Toxins” and “Morocco’s King Appoints Committee Chief to Fight Poverty and
Inequality.” The answers are at the bottom of this article.
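The scoring itself is straightforward: for each category, the quiz reports the percentage of headlines classified correctly. Here is a minimal sketch of that bookkeeping (my illustration with placeholder headlines, not the researchers' code):

```python
# Minimal sketch of MIST-style scoring (my illustration, not the researchers'
# code): the test taker labels each headline "real" or "fake" and receives a
# percentage score for each category. The headlines here are placeholders.

def mist_style_scores(truth, answers):
    """Percentage of real and of fake headlines that were classified correctly."""
    scores = {}
    for category in ("real", "fake"):
        items = [h for h, label in truth.items() if label == category]
        correct = sum(1 for h in items if answers.get(h) == category)
        scores[category] = 100.0 * correct / len(items)
    return scores

truth = {"headline A": "real", "headline B": "fake",
         "headline C": "real", "headline D": "fake"}
answers = {"headline A": "real", "headline B": "real",   # B is misjudged
           "headline C": "real", "headline D": "fake"}
print(mist_style_scores(truth, answers))  # {'real': 100.0, 'fake': 50.0}
```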
Maertens and his team gave
the test to thousands of people across the U.S. and the U.K. in their study,
but the YouGov poll was given to 1,516 adults who were all U.S. citizens. On
average, in the YouGov poll, U.S. adults correctly categorized about 65 percent
of the headlines. However, age seemed to impact accuracy. Only 11 percent of
Americans ages 18 to 29 correctly classified 17 or more headlines, and 36
percent got no more than 10 correct. That’s compared with 36 percent of the
65-and-older crowd who accurately assessed at least 16 headlines. And only 9
percent in the latter age group got 10 or fewer correct. On average, Americans
below age 45 scored 12 out of 20, while their older counterparts scored 15.
Additionally, people who
reported spending three or more leisure hours a day online were more likely to
fall for misinformation (false headlines), compared with those who spent less
time online. And where people got their news made a difference: folks who read
legacy publications such as the Associated Press and Politico had better misinformation
detection, while those who primarily got their news from social media sites
such as TikTok, Instagram and Snapchat generally scored lower. (“I didn’t
even know that [getting news from Snapchat] was an option,” Maertens says.)
This could be part of the reason that younger people scored lower overall,
Maertens’s team hypothesized. People
who spend a lot of time on social media are exposed to a firehose of
information, both real and fake, with little context to help distinguish the
two.
Personality traits also
impacted a person’s susceptibility to fake news. Conscientiousness, for
instance, was associated with higher scores in the study conducted by Maertens
and his team, while neuroticism and narcissism were associated with lower
scores.
“They’ve done a good job in
terms of conducting the research,” says Magda Osman, head of research and
analysis at the Center for Science and Policy at the University of Cambridge,
who was not involved in the study. She worries, however, that some of the
test’s AI-generated headlines were less clear-cut than a simple real/fake
classification could capture.
Take, for example, the
headline “Democrats More Supportive than Republicans of Federal Spending for
Scientific Research.” In the study, this claim was labeled as unambiguously
true based on data
from the Pew Research Center. But just by looking at the headline, Osman
says, “you don’t know whether this means Democrats versus Republicans in the
population or Democrats versus Republicans in Congress.”
This distinction matters
because it changes the veracity of the statement. While it’s accurate to say
that Democrats generally tend to support increased science funding, Republican
politicians have a history of hiking up the defense budget, which means that
over the past few decades, they have actually outspent
their Democratic colleagues in funding certain types of research and
development.
What’s more, Osman points
out, the study does not differentiate which topics of misinformation different
groups are more susceptible to. Younger people might be more likely than their
parents to believe misinformation about sexual health or COVID but less likely
to fall for fake news about climate change, she suggests.
“The test shouldn’t be taken
as a 100% reliable individual-level test. Small differences can occur,”
Maertens wrote in an e-mail to Scientific American. “Someone who has 18/20
could in practice be equally resilient as someone scoring 20/20. However, it is
more likely that a 20/20 scorer is effectively better than let’s say a 14/20
scorer.”
Ultimately both Osman and
Maertens agree that media literacy is a crucial skill for navigating today’s
information-saturated world. “If you get flooded with information, you can’t
really analyze every single piece,” Maertens says. He recommends taking a
skeptical approach to everything you read online, fact-checking when possible
(though that was not an option for MIST participants) and keeping in mind that
you may be more susceptible to misinformation than you think.
The example headlines quoted above are, in order, real, fake, and real.
As Director of the IFCN, Alexios has helped draft the fact-checkers' code of principles, shepherded a partnership between third-party fact-checkers and Facebook, testified to the Italian Chamber of Deputies on the "fake news" phenomenon and helped launch International Fact-Checking Day. In January 2018 he was invited to join the European Union's High Level Group on fake news and online disinformation. He has also drafted a lesson plan for UNESCO and a chapter on fact-checking in the 2016 U.S. presidential elections in Truth Counts, published by Congressional Quarterly.
The International Fact-Checking Network (IFCN) is a forum for fact-checkers worldwide hosted by the Poynter Institute for Media Studies. These organizations fact-check statements by public figures, major institutions and other widely circulated claims of interest to society.
It launched in September 2015, in recognition of the fact that a booming crop of fact-checking initiatives could benefit from an organization that promotes best practices and exchanges in this field.
Among other things, the IFCN:
* Monitors trends and formats in fact-checking worldwide, publishing regular articles on the dedicated Poynter.org channel.
* Provides training resources for fact-checkers.
* Supports collaborative efforts in international fact-checking.
* Convenes a yearly conference (Global Fact).
* Is the home of the fact-checkers' code of principles.
The IFCN has received funding from the Arthur M. Blank Family Foundation, the Duke Reporters’ Lab, the Bill & Melinda Gates Foundation, Google, the National Endowment for Democracy, the Omidyar Network, the Open Society Foundations and the Park Foundation.
To find out more, follow @factchecknet on Twitter or go to bit.ly/GlobalFac
Information Overload Helps Fake News
Spread, and Social Media Knows It
Understanding how algorithm manipulators exploit our
cognitive vulnerabilities empowers us to fight back
December 1, 2020
By Filippo Menczer and Thomas Hills
Consider Andy, who is
worried about contracting COVID-19. Unable to read all the articles he sees on
it, he relies on trusted friends for tips. When one opines on Facebook that
pandemic fears are overblown, Andy dismisses the idea at first. But then the
hotel where he works closes its doors, and with his job at risk, Andy starts
wondering how serious the threat from the new virus really is. No one he knows
has died, after all. A colleague posts an article about the COVID “scare”
having been created by Big Pharma in collusion with corrupt politicians, which
jibes with Andy's distrust of government. His Web search quickly takes him to
articles claiming that COVID-19 is no worse than the flu. Andy joins an online
group of people who have been or fear being laid off and soon finds himself
asking, like many of them, “What pandemic?” When he learns that several of his
new friends are planning to attend a rally demanding an end to lockdowns, he
decides to join them. Almost no one at the massive protest, including him, wears
a mask. When his sister asks about the rally, Andy shares the conviction that
has now become part of his identity: COVID is a hoax.
This example illustrates a
minefield of cognitive biases. We prefer information from people we trust, our
in-group. We pay attention to and are more likely to share information about
risks—for Andy, the risk of losing his job. We search for and remember things
that fit well with what we already know and understand. These biases are
products of our evolutionary past, and for tens of thousands of years, they
served us well. People who behaved in accordance with them—for example, by
staying away from the overgrown pond bank where someone said there was a
viper—were more likely to survive than those who did not.
Modern technologies are
amplifying these biases in harmful ways, however. Search engines direct Andy to
sites that inflame his suspicions, and social media connects him with
like-minded people, feeding his fears. Making matters worse, bots—automated
social media accounts that impersonate humans—enable misguided or malevolent
actors to take advantage of his vulnerabilities.
Compounding the problem is
the proliferation of online information. Viewing and producing blogs, videos,
tweets and other units of information called memes has become so cheap and easy
that the information marketplace is inundated. Unable to process all this
material, we let our cognitive biases decide what we should pay attention to.
These mental shortcuts influence which information we search for, comprehend,
remember and repeat to a harmful extent.
The need to understand these
cognitive vulnerabilities and how algorithms use or manipulate them has become
urgent. At the University of Warwick in England and at Indiana University
Bloomington's Observatory on Social Media (OSoMe, pronounced “awesome”), our
teams are using cognitive experiments, simulations, data mining and artificial
intelligence to comprehend the cognitive vulnerabilities of social media users.
Insights from psychological studies on the evolution of information conducted
at Warwick inform the computer models developed at Indiana, and vice versa. We
are also developing analytical and machine-learning aids to fight social media
manipulation. Some of these tools are already being used by journalists,
civil-society organizations and individuals to detect inauthentic actors, map
the spread of false narratives and foster news literacy.
INFORMATION OVERLOAD
The glut of information has
generated intense competition for people's attention. As Nobel Prize–winning
economist and psychologist Herbert A. Simon noted, “What information consumes
is rather obvious: it consumes the attention of its recipients.” One of the
first consequences of the so-called attention economy is the loss of
high-quality information. The OSoMe team demonstrated this result with a set of
simple simulations. It represented users of social media such as Andy, called
agents, as nodes in a network of online acquaintances. At each time step in the
simulation, an agent may either create a meme or reshare one that he or she
sees in a news feed. To mimic limited attention, agents are allowed to view
only a certain number of items near the top of their news feeds.
Running this simulation over
many time steps, Lilian Weng of OSoMe found that as agents'
attention became increasingly limited, the propagation of memes came to reflect
the power-law distribution of actual social media: the probability that a meme
would be shared a given number of times was roughly an inverse power of that
number. For example, the likelihood of a meme being shared three times was
approximately nine times less than that of its being shared once.
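The model just described can be captured in a few lines. Below is a simplified sketch of such a limited-attention simulation (my own, not the OSoMe code, with arbitrary parameter values); running it shows share counts becoming heavy-tailed even though no meme is intrinsically better than another.

```python
# A simplified limited-attention meme model (my own sketch, not the OSoMe code;
# all parameter values are arbitrary). Agents either post a new meme or reshare
# one of the few memes visible at the top of their feed. Share counts come out
# heavy-tailed even though memes have no intrinsic quality.

import random
from collections import Counter

def simulate(n_agents=200, steps=20000, attention=5, p_new=0.2, n_followers=10):
    followers = {a: random.sample(range(n_agents), n_followers) for a in range(n_agents)}
    feeds = {a: [] for a in range(n_agents)}   # newest memes first
    shares = Counter()                         # meme id -> times posted or reshared
    next_meme = 0
    for _ in range(steps):
        agent = random.randrange(n_agents)
        visible = feeds[agent][:attention]     # limited attention: only the top of the feed
        if visible and random.random() > p_new:
            meme = random.choice(visible)      # reshare something seen
        else:
            meme, next_meme = next_meme, next_meme + 1  # create a brand-new meme
        shares[meme] += 1
        for f in followers[agent]:             # the meme lands in followers' feeds
            feeds[f] = ([meme] + feeds[f])[:50]
    return shares

shares = simulate()
# Most memes are shared only once or twice; a handful dominate.
print(Counter(shares.values()).most_common(5))
```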
This winner-take-all
popularity pattern of memes, in which most are barely noticed while a few
spread widely, could not be explained by some of them being more catchy or
somehow more valuable: the memes in this simulated world had no intrinsic
quality. Virality resulted purely from the statistical consequences of
information proliferation in a social network of agents with limited attention.
Even when agents preferentially shared memes of higher quality, researcher
Xiaoyan Qiu, then at OSoMe, observed little improvement in the overall quality
of those shared the most. Our
models revealed that even when we want to see and share
high-quality information, our inability to view everything in our news feeds
inevitably leads us to share things that are partly or completely untrue.
Cognitive biases greatly
worsen the problem. In a set of groundbreaking studies in 1932, psychologist
Frederic Bartlett told volunteers a Native American legend about a young man
who hears war cries and, pursuing them, enters a dreamlike battle that
eventually leads to his real death. Bartlett asked the volunteers, who were
non-Native, to recall the rather confusing story at increasing intervals, from
minutes to years later. He found that as time passed, the rememberers tended to
distort the tale's culturally unfamiliar parts such that they were either lost
to memory or transformed into more familiar things. We now know that our minds
do this all the time: they adjust our understanding of new information so that
it fits in with what we already know. One consequence of this so-called
confirmation bias is that people often seek out, recall and understand
information that best confirms what they already believe.
This tendency is extremely
difficult to correct. Experiments consistently show that even when people
encounter balanced information containing views from differing perspectives,
they tend to find supporting evidence for what they already believe. And when
people with divergent beliefs about emotionally charged issues such as climate
change are shown the same information on these topics, they become even more
committed to their original positions.
Making matters worse, search
engines and social media platforms provide personalized recommendations based
on the vast amounts of data they have about users' past preferences. They
prioritize information in our feeds that we are most likely to agree with—no
matter how fringe—and shield us from information that might change our minds.
This makes us easy targets for polarization. Nir Grinberg and his co-workers at
Northeastern University recently showed that conservatives in the U.S. are more
receptive to misinformation. But our own analysis of consumption of low-quality
information on Twitter shows that the vulnerability applies to both sides of
the political spectrum, and no one can fully avoid it. Even our ability to
detect online manipulation is affected by our political bias, though not
symmetrically: Republican users are more likely to mistake bots
promoting conservative ideas for humans, whereas Democrats are more likely to
mistake conservative human users for bots.
SOCIAL HERDING
In New York City in August
2019, people began running away from what sounded like gunshots. Others
followed, some shouting, “Shooter!” Only later did they learn that the blasts
came from a backfiring motorcycle. In such a situation, it may pay to run first
and ask questions later. In the absence of clear signals, our brains use information
about the crowd to infer appropriate actions, similar to the behavior of
schooling fish and flocking birds.
Such social conformity is
pervasive. In a fascinating 2006 study involving 14,000 Web-based volunteers,
Matthew Salganik, then at Columbia University, and his colleagues found that
when people can see what music others are downloading, they end up downloading
similar songs. Moreover, when people were isolated into “social” groups, in
which they could see the preferences of others in their circle but had no
information about outsiders, the choices of individual groups rapidly diverged.
But the preferences of “nonsocial” groups, where no one knew about others'
choices, stayed relatively stable. In other words, social groups create a
pressure toward conformity so powerful that it can overcome individual
preferences, and by amplifying random early differences, it can cause
segregated groups to diverge to extremes.
Social media follows a
similar dynamic. We confuse popularity with quality and end up copying the
behavior we observe. Experiments on Twitter by Bjarke Mønsted and his
colleagues at the Technical University of Denmark and the University of
Southern California indicate that information is transmitted via “complex
contagion”: when we are repeatedly exposed to an idea, typically from many
sources, we are more likely to adopt and reshare it. This social bias is
further amplified by what psychologists call the “mere exposure” effect: when
people are repeatedly exposed to the same stimuli, such as certain faces, they
grow to like those stimuli more than those they have encountered less often.
Such biases translate into
an irresistible urge to pay attention to information that is going viral—if
everybody else is talking about it, it must be important. In addition to
showing us items that conform with our views, social media platforms such as
Facebook, Twitter, YouTube and Instagram place popular content at the top of
our screens and show us how many people have liked and shared something. Few of
us realize that these cues do not provide independent assessments of quality.
In fact, programmers who
design the algorithms for ranking memes on social media assume that the “wisdom
of crowds” will quickly identify high-quality items; they use popularity as a
proxy for quality. Our analysis of vast amounts of anonymous
data about clicks shows that all platforms—social media, search
engines and news sites—preferentially serve up information from a narrow subset
of popular sources.
To understand why, we
modeled how they combine signals for quality and popularity in their rankings.
In this model, agents with limited attention—those who see only a given number
of items at the top of their news feeds—are also more likely to click on memes
ranked higher by the platform. Each item has intrinsic quality, as well as a
level of popularity determined by how many times it has been clicked on.
Another variable tracks the extent to which the ranking relies on popularity
rather than quality. Simulations of this model
reveal that such algorithmic bias typically suppresses the
quality of memes even in the absence of human bias. Even when we want to share
the best information, the algorithms end up misleading us.
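For concreteness, here is a stripped-down sketch based on that description (not the authors' code): items are ranked by a weighted mix of intrinsic quality and accumulated clicks, users click somewhere in the top slots, and as the popularity weight grows, the average quality of what gets clicked falls.

```python
# A stripped-down reading of that model (not the authors' code). Items carry an
# intrinsic quality; the platform ranks them by a weighted mix of quality and
# accumulated clicks; users with limited attention click somewhere in the top
# slots. Raising the popularity weight lowers the average quality of clicks.

import random

def avg_clicked_quality(pop_weight, n_items=200, n_clicks=5000, attention=10):
    quality = [random.random() for _ in range(n_items)]
    clicks = [0] * n_items
    total = 0.0
    for t in range(1, n_clicks + 1):
        def score(i):  # ranking signal: popularity share blended with quality
            return pop_weight * (clicks[i] / t) + (1 - pop_weight) * quality[i]
        top = sorted(range(n_items), key=score, reverse=True)[:attention]
        chosen = random.choice(top)            # limited attention: click within the top slots
        clicks[chosen] += 1
        total += quality[chosen]
    return total / n_clicks

for w in (0.0, 0.5, 1.0):
    print(f"popularity weight {w}: average quality of clicked items ~ {avg_clicked_quality(w):.2f}")
```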
ECHO CHAMBERS
Most of us do not believe we
follow the herd. But our confirmation bias leads us to follow others who are
like us, a dynamic that is sometimes referred to as homophily—a tendency for
like-minded people to connect with one another. Social media amplifies
homophily by allowing users to alter their social network structures through
following, unfriending, and so on. The result is that people become segregated
into large, dense and increasingly misinformed communities commonly described
as echo chambers.
At OSoMe, we explored the
emergence of online echo chambers through another simulation, EchoDemo.
In this model, each agent has a political opinion represented by a number
ranging from −1 (say, liberal) to +1 (conservative). These inclinations are
reflected in agents' posts. Agents are also influenced by the opinions they see
in their news feeds, and they can unfollow users with dissimilar opinions.
Starting with random initial networks and opinions, we found that the
combination of social influence and unfollowing greatly accelerates the
formation of polarized and segregated communities.
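A toy version of this dynamic, written from the description above rather than from the actual EchoDemo code and with arbitrary parameter values, looks like the following: agents nudge their opinions toward posts from accounts they follow when those opinions are close enough, and otherwise sometimes unfollow and rewire.

```python
# A toy sketch inspired by the EchoDemo description (not the actual model, and
# all parameters are arbitrary): agents hold opinions in [-1, 1], move toward
# opinions they see from accounts they follow when those are close enough, and
# otherwise sometimes unfollow and rewire to a random new account.

import random

def echo_demo(n=100, steps=5000, k=10, influence=0.1, tolerance=0.6, p_rewire=0.3):
    opinion = [random.uniform(-1, 1) for _ in range(n)]
    follows = {a: random.sample([b for b in range(n) if b != a], k) for a in range(n)}
    for _ in range(steps):
        a = random.randrange(n)
        b = random.choice(follows[a])                         # a sees a post by b
        if abs(opinion[a] - opinion[b]) <= tolerance:
            opinion[a] += influence * (opinion[b] - opinion[a])   # social influence
        elif random.random() < p_rewire:                      # unfollow a dissimilar account
            follows[a].remove(b)
            candidates = [c for c in range(n) if c != a and c not in follows[a]]
            follows[a].append(random.choice(candidates))
    return opinion, follows

opinion, follows = echo_demo()
# Fraction of follow links that connect agents on the same side of the opinion axis:
same_side = sum(opinion[a] * opinion[b] > 0 for a in follows for b in follows[a])
print(same_side / sum(len(v) for v in follows.values()))
```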
Indeed, the political echo
chambers on Twitter are so extreme that individual users' political leanings
can be predicted with high accuracy:
you have the same opinions as the majority of your connections. This chambered
structure efficiently
spreads information within a community while insulating that
community from other groups. In 2014 our research group was targeted by a
disinformation campaign claiming that we were part of a politically motivated
effort to suppress free speech. This false charge spread virally mostly in the
conservative echo chamber, whereas debunking articles by fact-checkers were
found mainly in the liberal community. Sadly, such segregation of fake news
items from their fact-check reports is the norm.
Social media can also
increase our negativity. In a recent laboratory study, Robert Jagiello, also at
Warwick, found that
socially shared information not only bolsters our biases but also becomes more
resilient to correction. He investigated how information is passed from person
to person in a so-called social diffusion chain. In the experiment, the first
person in the chain read a set of articles about either nuclear power or food
additives. The articles were designed to be balanced, containing as much
positive information (for example, about less carbon pollution or
longer-lasting food) as negative information (such as risk of meltdown or
possible harm to health).
The first person in the
social diffusion chain told the next person about the articles, the second told
the third, and so on. We observed an overall increase in the amount of negative
information as it passed along the chain—known as the social amplification of
risk. Moreover, work by Danielle J. Navarro and her colleagues at the
University of New South Wales in Australia found that information in social
diffusion chains is most susceptible to distortion by individuals with the most
extreme biases.
Even worse, social diffusion
also makes negative information more “sticky.” When Jagiello subsequently
exposed people in the social diffusion chains to the original, balanced
information—that is, the news that the first person in the chain had seen—the
balanced information did little to reduce individuals' negative attitudes. The
information that had passed through people not only had become more negative
but also was more resistant to updating.
A 2015
study by OSoMe researchers Emilio Ferrara and Zeyao Yang
analyzed empirical data about such “emotional contagion” on Twitter and found
that people overexposed to negative content tend to then share negative posts,
whereas those overexposed to positive content tend to share more positive
posts. Because negative content spreads faster than positive content, it is
easy to manipulate emotions by creating narratives that trigger negative
responses such as fear and anxiety. Ferrara, now at the University of Southern
California, and his colleagues at the Bruno Kessler Foundation in Italy have
shown that during Spain's 2017 referendum on Catalan independence, social bots
were leveraged to retweet violent and inflammatory
narratives, increasing their exposure and exacerbating social
conflict.
RISE OF THE BOTS
Information quality is
further impaired by social bots, which can exploit all our cognitive loopholes.
Bots are easy to create. Social media platforms provide so-called application
programming interfaces that make it fairly trivial for a single actor to set up
and control thousands of bots. But amplifying a message, even with just a few
early upvotes by bots on social media platforms such as Reddit, can have a huge impact on
the subsequent popularity of a post.
At OSoMe, we have developed
machine-learning algorithms to detect social bots. One of these, Botometer,
is a public tool that extracts 1,200 features from a given Twitter account to
characterize its profile, friends, social network structure, temporal activity
patterns, language and other features. The program compares these
characteristics with those of tens of thousands of previously identified bots
to give the Twitter account a score for its likely use of automation.
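In machine-learning terms this is ordinary supervised classification. The sketch below is a generic illustration of that approach, not Botometer's real code, feature set, or training data; the four features and the toy numbers are placeholders.

```python
# A generic supervised-classification sketch of that approach (not Botometer's
# real code, features, or training data): a classifier trained on accounts that
# were previously labeled as bots or humans assigns a bot-likeliness score to a
# new account. The four features and all numbers below are placeholders.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy feature rows: followers/friends ratio, tweets per day, fraction of
# retweets, account age in days.
X_train = np.array([
    [0.10, 200.0, 0.95,   30],   # bot-like: hyperactive, mostly retweets, new account
    [1.50,   5.0, 0.30, 2000],   # human-like
    [0.05, 150.0, 0.90,   15],   # bot-like
    [2.00,   3.0, 0.20, 3500],   # human-like
])
y_train = np.array([1, 0, 1, 0])  # 1 = previously identified bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

new_account = np.array([[0.2, 180.0, 0.92, 20]])
print("bot score:", clf.predict_proba(new_account)[0, 1])  # probability of the bot class
```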
In 2017 we estimated that
up to 15 percent of active Twitter accounts were bots—and that they had played a
key role in the spread of misinformation during the 2016 U.S.
election period. Within seconds of a fake news article being posted—such as one
claiming the Clinton campaign was involved in occult rituals—it would be
tweeted by many bots, and humans, beguiled by the apparent popularity of the
content, would retweet it.
Bots also influence us by
pretending to represent people from our in-group. A bot only has to follow,
like and retweet someone in an online community to quickly infiltrate it. OSoMe
researcher Xiaodan Lou developed another model in which
some of the agents are bots that infiltrate a social network and share
deceptively engaging low-quality content—think of clickbait. One parameter in
the model describes the probability that an authentic agent will follow
bots—which, for the purposes of this model, we define as agents that generate
memes of zero quality and retweet only one another. Our simulations show that
these bots can effectively suppress the entire ecosystem's information quality
by infiltrating only a small fraction of the network. Bots can also accelerate
the formation of echo chambers by suggesting other inauthentic accounts to be
followed, a technique known as creating “follow trains.”
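The infiltration effect can be sketched compactly. In the toy model below (my simplification of the description above, not the published model), bots post only zero-quality memes into human feeds, which stands in for bots amplifying one another, while humans reshare what they can see with a preference for quality; the average quality of what humans share falls as bot activity grows.

```python
# A compact sketch of the infiltration effect (my simplification, not the
# published model): bot agents post only zero-quality memes into human feeds,
# standing in for bots amplifying one another, while humans reshare what they
# can see with a preference for higher quality. Parameters are arbitrary.

import random

def bot_sim(n_humans=200, n_bots=20, steps=20000, attention=5, p_new=0.2, n_followers=10):
    n = n_humans + n_bots                     # agents 0..n_humans-1 are humans, the rest bots
    feeds = {a: [] for a in range(n_humans)}  # only human feeds matter here
    followers = {a: random.sample(range(n_humans), n_followers) for a in range(n)}
    shared_quality = []                       # quality of memes posted or reshared by humans
    for _ in range(steps):
        a = random.randrange(n)
        if a >= n_humans:                     # bot: posts a zero-quality meme
            meme = 0.0
        else:                                 # human: reshares from the feed or posts anew
            visible = feeds[a][:attention]
            if visible and random.random() > p_new:
                meme = random.choices(visible, weights=[q + 0.01 for q in visible])[0]
            else:
                meme = random.random()        # new meme with random intrinsic quality
            shared_quality.append(meme)
        for f in followers[a]:                # the meme lands in (human) followers' feeds
            feeds[f] = ([meme] + feeds[f])[:50]
    return sum(shared_quality) / len(shared_quality)

for bots in (0, 20, 100):
    print(bots, "bots -> average quality shared by humans ~", round(bot_sim(n_bots=bots), 2))
```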
Some manipulators play both
sides of a divide through separate fake news sites and bots, driving political
polarization or monetization by ads. At OSoMe, we recently uncovered a
network of inauthentic accounts on Twitter that were all
coordinated by the same entity. Some pretended to be pro-Trump supporters of
the Make America Great Again campaign, whereas others posed as Trump
“resisters”; all asked for political donations. Such operations amplify content
that preys on confirmation biases and accelerate the formation of polarized
echo chambers.
CURBING ONLINE MANIPULATION
Understanding our cognitive
biases and how algorithms and bots exploit them allows us to better guard
against manipulation. OSoMe has produced a number of tools to help people
understand their own vulnerabilities, as well as the weaknesses of social media
platforms. One is a mobile app called Fakey that helps users learn
how to spot misinformation. The game simulates a social media news feed,
showing actual articles from low- and high-credibility sources. Users must
decide what they can or should not share and what to fact-check. Analysis of
data from Fakey confirms the prevalence of online social herding: users are more
likely to share low-credibility articles when they believe that
many other people have shared them.
Another program available to
the public, called Hoaxy, shows how any extant meme
spreads through Twitter. In this visualization, nodes represent actual Twitter
accounts, and links depict how retweets, quotes, mentions and replies propagate
the meme from account to account. Each node has a color representing its score
from Botometer, which allows users to see the scale at which bots amplify
misinformation. These tools have been used by investigative journalists to
uncover the roots of misinformation campaigns, such as one pushing the
“pizzagate” conspiracy in the U.S. They also helped to detect bot-driven
voter-suppression efforts during the 2018 U.S. midterm
election. Manipulation is getting harder to spot, however, as machine-learning
algorithms become better at emulating human behavior.
Apart from spreading fake
news, misinformation campaigns can also divert attention from other, more
serious problems. To combat such manipulation, we have recently developed a
software tool called BotSlayer. It extracts
hashtags, links, accounts and other features that co-occur in tweets about
topics a user wishes to study. For each entity, BotSlayer tracks the tweets,
the accounts posting them and their bot scores to flag entities that are trending
and probably being amplified by bots or coordinated accounts. The goal is to
enable reporters, civil-society organizations and political candidates to spot
and track inauthentic influence campaigns in real time.
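The core bookkeeping behind that kind of flagging can be sketched roughly as follows (my illustration, not the real BotSlayer pipeline, which also tracks how entities trend over time): count how often each entity appears, average the bot scores of the accounts posting it, and flag entities that are both frequent and bot-heavy; bot_score here is a placeholder for a Botometer-style lookup.

```python
# A rough sketch of that kind of bookkeeping (my illustration, not the real
# BotSlayer pipeline, which also tracks trends over time): count how often each
# entity appears in a stream of tweets, average the bot scores of the accounts
# posting it, and flag entities that are both frequent and bot-heavy.
# bot_score is a placeholder for a Botometer-style lookup.

from collections import defaultdict

def flag_suspicious_entities(tweets, bot_score, min_count=50, bot_threshold=0.6):
    """tweets: iterable of {"account": str, "entities": [hashtags, links, mentions]}."""
    counts = defaultdict(int)
    bot_total = defaultdict(float)
    for tweet in tweets:
        score = bot_score(tweet["account"])
        for entity in set(tweet["entities"]):
            counts[entity] += 1
            bot_total[entity] += score
    flagged = [(e, counts[e], round(bot_total[e] / counts[e], 2))
               for e in counts
               if counts[e] >= min_count and bot_total[e] / counts[e] >= bot_threshold]
    return sorted(flagged, key=lambda x: -x[1])

# Toy usage: a hashtag pushed by 60 high-bot-score accounts gets flagged.
tweets = [{"account": f"account{i}", "entities": ["#hoax"]} for i in range(60)]
print(flag_suspicious_entities(tweets, bot_score=lambda acct: 0.9))  # [('#hoax', 60, 0.9)]
```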
These programmatic tools are
important aids, but institutional changes are also necessary to curb the
proliferation of fake news. Education can help, although it is unlikely to
encompass all the topics on which people are misled. Some governments and
social media platforms are also trying to clamp down on online manipulation and
fake news. But who decides what is fake or manipulative and what is not?
Information can come with warning labels such as the ones Facebook and Twitter
have started providing, but can the people who apply those labels be trusted?
The risk that such measures could deliberately or inadvertently suppress free
speech, which is vital for robust democracies, is real. The dominance of social
media platforms with global reach and close ties with governments further
complicates the possibilities.
One of the best ideas may be
to make it more difficult to create and share low-quality information. This
could involve adding friction by forcing people to pay to share or receive
information. Payment could be in the form of time, mental work such as puzzles,
or microscopic fees for subscriptions or usage. Automated posting should be
treated like advertising. Some platforms are already using friction in the form
of CAPTCHAs and phone confirmation to access accounts. Twitter has placed limits
on automated posting. These efforts could be expanded to gradually shift online
sharing incentives toward information that is valuable to consumers.
Free communication is not
free. By decreasing the cost of information, we have decreased its value and invited
its adulteration. To restore the health of our information ecosystem, we must
understand the vulnerabilities of our overwhelmed minds and how the economics
of information can be leveraged to protect us from being misled.
Tim Berners-Lee unveils global plan to save the web
A
social psychologist found that showing people how manipulative techniques work
can create resilience against misinformation
- By Daisy Yuhas on March
13, 2023
Misinformation can feel inescapable. Last summer a survey from the nonprofit Poynter Institute for Media Studies found that 62 percent of people regularly notice false or misleading information online. And in a 2019 poll, almost nine in 10 people admitted to having fallen for fake news. Social psychologist Sander van der Linden of the University of Cambridge studies how and why people share such information and how it can be stopped. He spoke with Mind Matters editor Daisy Yuhas to discuss this work and his new book, Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity, which offers research-backed solutions to stem this spread.
[An edited transcript of the interview follows.]
In Foolproof, you
borrow an analogy from the medical world, arguing that misinformation operates
a lot like a virus. How did you come to that comparison?
I was going through journals and found models from
epidemiology and public health that are used to understand how information
propagates across a system. Instead of a virus spreading, you have an
information pathogen. Somebody shares something with you, and you then spread
it to other people.
That led me to wonder: If it’s true that
misinformation spreads like a virus, is it possible to inoculate people? I came
across some work from the 1960s by Bill
McGuire, a psychologist who studied how people could protect
themselves from “brainwashing.” He had a very similar thought. That connection
led to this whole program of research.
[Read
more about scientifically backed strategies to fight misinformation]
How
do we get “infected”?
A virus attacks by exploiting our cells’ weak spots
and hijacking some of their machinery. It’s the same for the mind in many ways.
There are certain cognitive biases that can be exploited by misinformation. Misinformation
infects our memories and influences the decisions that we make.
One example is the illusory truth bias. That’s the
idea that just hearing something repeatedly—even if you know that it is
wrong—makes it seem more true. These learned automatic associations are part of
how the brain works.
In
your research, you’ve extended the virus metaphor to argue that we can
vaccinate ourselves against misinformation through a technique that you call
“prebunking.” How does that work?
Prebunking has two parts. First is forewarning,
which jump-starts the psychological immune system because it’s sleeping most of
the time. We tell people that someone may want to manipulate them, which raises
their skepticism and heightens their awareness.
The second part of the prebunk is analogous to
providing people with a weakened dose of the virus in a vaccine. For example,
in some cases, you get a small dose of the misinformation and tips on how to
refute it. That can help people be more resilient against misinformation.
In addition, we have found that there are general
techniques used to manipulate the spread of misinformation in a lot of
different environments. In our studies, we have found that if you can help
people spot those broader techniques, we
can inoculate them against a whole range of misinformation. For
instance, in one study, people played a game [Bad News] to help them
understand the tactics used to spread fake news. That improved their ability to
spot a range of unreliable information by about 20 to 25 percent.
So
you help people recognize and resist incoming misinformation broadly by
alerting them to the techniques people use to manipulate others. Can you walk
me through an example?
Sure. We created a series of videos in partnership
with Google to make people more aware of manipulative techniques on YouTube.
One is a false dichotomy, or false dilemma. It’s a common tactic and one that
our partners at Google alerted us to because it’s present in many
radicalization videos.
In a false dichotomy, someone incorrectly asserts
that you have only one of two options. So an example would be “either you’re
not a good Muslim, or you have to join ISIS.” Politicians use this approach,
too. In a U.S. political context, an example might be: “We have to fix the
homelessness problem in San Francisco before we start talking about
immigrants.”
In our research, we have exposed people to this
concept using videos that explain false dichotomies in nonpolitical scenarios.
We use popular culture like Family Guy and Star Wars.
People have loved it, and it’s proved to be a really good vehicle.
So in our false dichotomy video, you see a scene
from a Star Wars movie, Revenge of the Sith, where
Anakin Skywalker says to Obi-Wan Kenobi, “If you’re not with me, then you’re my
enemy,” to which Obi-Wan replies, “Only a Sith deals in absolutes.” The video
cuts to explain that Anakin has just used a false dichotomy.
After seeing a video like this, the next time you’re
presented with just two options, you realize somebody may be trying to
manipulate you.
In
August you published findings from a study with more than 20,000 people viewing
these videos, which called out techniques such as false dilemmas,
scapegoating and emotionally manipulative language. What did you learn?
What we find is that, using these videos, people are
better able to recognize misinformation that we show them later both in the lab
and on social media. We included a live test on the YouTube platform. In that
setup, the environment is not controlled, and people are more distracted, so
it’s a more rigorous test.
These videos were part of an ad campaign run by
Google that had millions of views. Google has now rolled out videos based on
this research that are targeted at misinformation about Ukraine and Ukrainian
refugees in Europe. They are specifically helping people spot the technique of
scapegoating.
In
the book, you point out that many people who think they are immune to
misinformation are not. For instance, in one survey, almost 50 percent of
respondents believed they could spot fake news, but only
4 percent succeeded. Even “digital natives” can fall
for fake content. Can this happen to anyone?
A lot of people are going to think that they’re
immune. But there are basic principles that expose us all. For example, there
is an evolutionary argument that’s quite important here called the truth bias.
In most environments, people are not being actively deceived, so our default
state is to accept that things are true. If you had to critically question
everything, you couldn’t get through your day. But if you are in an
environment—like on social media—where the rate of misinformation is much
higher, things can go wrong.
In
addition to biases, the book highlights how certain social behaviors and
contexts, including online echo chambers, skew what we see. With so many forces
working against us, how do you stay optimistic?
We do have biases that can be exploited by producers
of misinformation. It’s not easy, given all of the new information we’re
exposed to all the time, for people to keep track of what’s credible. But I’m
hopeful because there are some solutions. Prebunking is not a panacea, but it’s
a good first line of defense, and it helps, as does debunking and
fact-checking. We can help people maintain accuracy and stay vigilant.
Are you a scientist who specializes in neuroscience,
cognitive science or psychology? And have you read a recent peer-reviewed paper
that you would like to write about for Mind Matters? Please send suggestions
to Scientific American’s Mind Matters editor Daisy
Yuhas at pitchmindmatters@gmail.com.
https://www.scientificamerican.com/article/theres-a-psychological-vaccine-against-misinformation/
The book by the British writer Peter Pomerantsev, “Russia: Nothing Is True and Everything Is Possible.”