- Powerful,
evidence-based analysis by one of the deans of American politics on how
Russian interference likely tilted the 2016 election to Donald Trump
- Marshals unique
polling data and rigorous media framing analysis to explain why in all
probability the interference had an effect on the outcome
- Provides a qualified
yet compelling answer in the affirmative to the biggest question left over
from the election: Did the Russians help elect Donald Trump?
- Carefully lays out
the challenges to the notion that the Russians tilted the election and
methodically dispenses with them
- The right to be informed as to what data will be collected, and
how it will be used
- The right to opt out of data collection or sharing
- The right to be told if a website has data on you, and what that
data is
- The right to be forgotten; to have all data related to you deleted
upon request
- The right to be informed if ownership of your data changes
hands
- The right to be informed of any data breaches including your
information in a timely manner
- The right to download all data in a standardized format to port to
another platform
In the Camps: China's
High-Tech Penal Colony
by Darren Byler
How China used a network of surveillance to
intern over a million people and produce a system of control previously unknown
in human history
Novel forms of state violence and colonization
have been unfolding for years in China’s vast northwestern region, where more
than a million and a half Uyghurs and others have vanished into internment
camps and associated factories. Based on hours of interviews with camp
survivors and workers, thousands of government documents, and over a decade of
research, Darren Byler, one of the leading experts on Uyghur society and
Chinese surveillance systems, uncovers how a vast network of technology
provided by private companies―facial surveillance, voice recognition,
smartphone data―enabled the state and corporations to blacklist millions of
Uyghurs because of their religious and cultural practice starting in 2017.
Charged with “pre-crimes” that sometimes consist only of installing social
media apps, detainees were put in camps to “study”―forced to praise the Chinese
government, renounce Islam, disavow families, and labor in factories. Byler
travels back to Xinjiang to reveal how the convenience of smartphones has
doomed the Uyghurs to catastrophe, and makes the case that the technology is
being used all over the world, sold by tech companies from Beijing to Seattle,
producing new forms of unfreedom for vulnerable people everywhere.
https://www.goodreads.com/en/book/show/58393878-in-the-camps
Living with Digital Surveillance in China: Citizens’ Narratives on
Technology, Privacy, and Governance
- July 2023
Author: Ariane Ollier-Malaterre
Abstract
Digital surveillance is a daily and all-encompassing reality of
life in China. This book explores how Chinese citizens make sense of digital
surveillance and live with it. It investigates their imaginaries about
surveillance and privacy from within the Chinese socio-political system. Based
on in-depth qualitative research interviews, detailed diary notes, and
extensive documentation, Ariane Ollier-Malaterre attempts to ‘de-Westernise’
the internet and surveillance literature. She shows how the research participants
weave a cohesive system of anguishing narratives on China’s moral shortcomings
and redeeming narratives on the government and technology as civilising forces.
Although many participants cast digital surveillance as indispensable in China,
their misgivings, objections, and the mental tactics they employ to dissociate
themselves from surveillance convey the mental and emotional weight associated
with such surveillance exposure. The book is intended for academics and
students in internet, surveillance, and Chinese studies, and those working on
China in disciplines such as sociology, anthropology, social psychology,
psychology, communication, computer sciences, contemporary history, and
political sciences. The lay public interested in the implications of technology
in daily life or in contemporary China will find it accessible as it
synthesises the work of sinologists and offers many interview excerpts…: https://www.researchgate.net/publication/372792850_Living_with_Digital_Surveillance_in_China_Citizens'_Narratives_on_Technology_Privacy_and_Governance
How China’s citizens are coping with digital surveillance
Deep learning framework for subject-independent emotion detection using wireless signals
Emotion-state recognition using wireless signals is
an emerging area of research that has an impact on neuroscientific studies of
human behaviour and well-being monitoring. Currently, standoff emotion
detection is mostly reliant on the analysis of facial expressions and/or eye
movements acquired from optical or video cameras. Meanwhile, although machine
learning approaches have been widely accepted for recognizing human emotions
from multimodal data, they have mostly been restricted to subject-dependent
analyses, which lack generality. In this paper, we report an experimental
study which collects heartbeat and breathing signals of 15 participants from
radio frequency (RF) reflections off the body followed by novel noise filtering
techniques. We propose a novel deep neural network (DNN) architecture based on
the fusion of raw RF data and the processed RF signal for classifying and
visualising various emotion states. The proposed model achieves a high
classification accuracy of 71.67% for independent subjects, with precision,
recall, and F1-score values of 0.71, 0.72, and 0.71, respectively. We compared our
results with those obtained from five different classical ML algorithms, and the
comparison establishes that deep learning offers superior performance even with a
limited amount of raw RF and post-processed time-sequence data. The deep
learning model has also been validated by comparing our results with those from
ECG signals. Our results indicate that using wireless signals for stand-by
emotion-state detection is a better alternative to other technologies, offering
high accuracy and much wider applications in future studies of the behavioural
sciences.
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0242946
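To make the paper's evaluation protocol concrete, here is a minimal Python sketch of subject-independent (leave-one-subject-out) classification scored with macro-averaged precision, recall, and F1. The synthetic features, the four assumed emotion classes, and the plain MLP classifier are illustrative assumptions only, not the authors' noise filtering or fused DNN architecture.

```python
# Minimal sketch (not the paper's code) of subject-independent evaluation:
# leave-one-subject-out cross-validation with macro precision, recall, and F1.
# The features, labels, and the simple MLP are placeholders for illustration.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

rng = np.random.default_rng(0)
n_subjects, windows_per_subject, n_features = 15, 40, 64   # 15 participants, as in the study
X = rng.normal(size=(n_subjects * windows_per_subject, n_features))   # stand-in for processed RF windows
y = rng.integers(0, 4, size=n_subjects * windows_per_subject)         # 4 emotion classes (assumed)
groups = np.repeat(np.arange(n_subjects), windows_per_subject)        # subject ID for each window

y_true, y_pred = [], []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
    clf.fit(X[train_idx], y[train_idx])       # train on 14 subjects
    y_pred.extend(clf.predict(X[test_idx]))   # test on the held-out subject
    y_true.extend(y[test_idx])

p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro", zero_division=0)
print(f"accuracy={accuracy_score(y_true, y_pred):.2f}  precision={p:.2f}  recall={r:.2f}  f1={f1:.2f}")
```

Expect chance-level numbers from this random placeholder data; the point is the leave-one-subject-out protocol and metrics, not the 71.67% reported in the paper.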
Spotify wants to know your "emotional
state, gender, age, or accent"
BY WREN GRAVES
ON JANUARY 28, 2021,
11:44PM
If you listen to Spotify, then soon enough Spotify may listen to you.
Via Music Business Worldwide, the streaming platform has
secured a patent to monitor the background noise and speech of its users.
The big green circle first
filed a patent for its “Identification of taste attributes from an audio
signal” product in February of 2018, and finally received approval on
January 12th, 2021. The goal is to gauge listeners’ “emotional state, gender,
age, or accent,” in order to recommend new music...: https://consequenceofsound.net/2021/01/spotify-patent-monitor-users-speech/
In this masterwork of original thinking and research, Shoshana Zuboff provides startling insights into the phenomenon that she has named surveillance capitalism. The stakes could not be higher: a global architecture of behavior modification threatens human nature in the twenty-first century just as industrial capitalism disfigured the natural world in the twentieth.
OpenAI insiders’ open letter warns of ‘serious risks’ and calls for
whistleblower protections
By Samantha Murphy Kelly, CNN
Tue
June 4, 2024
A
group of OpenAI insiders are demanding that artificial intelligence companies
be far more transparent about AI’s “serious risks” — and that they protect
employees who voice concerns about the technology they’re building.
“AI
companies have strong financial incentives to avoid effective oversight,” reads
the open letter posted Tuesday signed by
current and former employees at AI companies including OpenAI, the creator
behind the viral ChatGPT tool.
They
also called for AI companies to foster “a culture of open criticism” that
welcomes, rather than punishes, people who speak up about their concerns,
especially as the law struggles to catch up to the quickly advancing
technology.
Companies have acknowledged
the “serious risks” posed by AI — from manipulation to a loss of control, known
as “singularity,” that could potentially result in human extinction
— but they should be doing more to educate the public about risks and
protective measures, the group wrote.
As the law currently stands, the
AI employees said, they don’t believe AI companies will share critical
information about the technology voluntarily.
It’s essential, then, for
current and former employees to speak up — and for companies not to enforce
“disparagement” agreements or otherwise retaliate against those who voice risk-related
concerns. “Ordinary whistleblower protections are insufficient because they
focus on illegal activity, whereas many of the risks we are concerned about are
not yet regulated,” the group wrote.
Their letter comes as
companies move quickly to implement generative AI tools into their products,
while government regulators, companies and consumers grapple with responsible
use. Meanwhile, many tech experts, researchers and leaders have called for a temporary pause in the AI race, or for the
government to step in and create a moratorium.
OpenAI’s response
In
response to the letter, an OpenAI spokesperson told CNN it is “proud of our track
record providing the most capable and safest AI systems and believe in our
scientific approach to addressing risk,” adding that the company agrees
“rigorous debate is crucial given the significance of this technology.”
OpenAI
noted it has an anonymous integrity hotline and
a Safety and Security Committee led by members of its board and safety leaders
from the company. The company does not sell personal info, build user profiles,
or use that data to target anyone or sell anything.
But
Daniel Ziegler, one of the organizers behind the letter and an early
machine-learning engineer who worked at OpenAI between 2018 and 2021, told CNN
that it’s important to remain skeptical of the company’s commitment to transparency.
“It’s
really hard to tell from the outside how seriously they’re taking their
commitments for safety evaluations and figuring out societal harms, especially
as there are such strong commercial pressures to move very quickly,” he
said. “It’s really important to have the right culture and processes so that
employees can speak out in targeted ways when they have concerns.”
He
hopes more professionals in the AI industry will go public with their concerns
as a result of the letter.
Meanwhile,
Apple is widely expected to announce a partnership with OpenAI at its annual
Worldwide Developer Conference to bring generative AI to the iPhone.
“We
see generative AI as a key opportunity across our products and believe we have
advantages that set us apart there,” Apple CEO Tim Cook said on the company’s most recent
earnings call in early May. https://edition.cnn.com/2024/06/04/tech/openai-insiders-letter/index.html
AI Act: a step closer to the first rules on
Artificial Intelligence
11-05-2023
Once approved, they will be the world’s first rules on Artificial Intelligence
- MEPs include bans on biometric surveillance,
emotion recognition, predictive policing AI systems
- Tailor-made regimes for general-purpose AI and
foundation models like GPT
- The right to make complaints about AI systems
To ensure a human-centric and ethical development
of Artificial Intelligence (AI) in Europe, MEPs endorsed new transparency and risk-management
rules for AI systems…: https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence
Why
We're Worried about Generative AI
From the technology upsetting jobs and causing intellectual property issues to models making up fake answers to questions, here’s why we’re concerned about generative AI.
Full
Transcript…: https://www.scientificamerican.com/podcast/episode/why-were-worried-about-generative-ai/
An Action Plan to increase the safety and security of advanced AI
In October
2022, a month before ChatGPT was released, the U.S. State Department
commissioned an assessment of proliferation and security risk from weaponized
and misaligned AI.
In February 2024, Gladstone completed that assessment. It includes an analysis
of catastrophic AI risks, and a first-of-its-kind,
government-wide Action Plan for what we can do about them.
https://www.gladstone.ai/action-plan#action-plan-overview
Artificial Intelligence Act: MEPs adopt landmark law
Facial recognition technology can expose
political orientation from naturalistic facial images
Abstract
Ubiquitous
facial recognition technology can expose individuals’ political orientation, as
faces of liberals and conservatives consistently differ. A facial recognition
algorithm was applied to naturalistic images of 1,085,795 individuals to
predict their political orientation by comparing their similarity to faces of
liberal and conservative others. Political orientation was correctly classified
in 72% of liberal–conservative face pairs, remarkably better than chance (50%),
human accuracy (55%), or one afforded by a 100-item personality questionnaire
(66%). Accuracy was similar across countries (the U.S., Canada, and the UK),
environments (Facebook and dating websites), and when comparing faces across
samples. Accuracy remained high (69%) even when controlling for age, gender,
and ethnicity. Given the widespread use of facial recognition, our findings
have critical implications for the protection of privacy and civil liberties….:
https://www.nature.com/articles/s41598-020-79310-1
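As a rough illustration of the similarity-based decision rule sketched in the abstract (and not the authors' actual pipeline), the Python snippet below compares a synthetic face descriptor's average cosine similarity to liberal and conservative reference sets. Every embedding, and the small mean shift used to fake a signal, is invented for illustration.

```python
# Illustrative sketch only: one plausible reading of the decision rule the abstract
# describes (compare a face descriptor's average similarity to reference faces of
# liberals vs. conservatives). All embeddings are synthetic placeholders; the study
# used descriptors from an off-the-shelf facial recognition model, not shown here.
import numpy as np

rng = np.random.default_rng(1)
dim = 128
liberal_ref = rng.normal(0.0, 1.0, size=(500, dim))       # placeholder reference descriptors
conservative_ref = rng.normal(0.2, 1.0, size=(500, dim))   # small mean shift stands in for a real signal

def mean_cosine(face, refs):
    """Average cosine similarity between one descriptor and a reference set."""
    refs_n = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    face_n = face / np.linalg.norm(face)
    return float((refs_n @ face_n).mean())

def classify(face):
    """Label a descriptor by whichever reference set it resembles more."""
    return "liberal" if mean_cosine(face, liberal_ref) >= mean_cosine(face, conservative_ref) else "conservative"

test_face = rng.normal(0.2, 1.0, size=dim)   # hypothetical held-out descriptor
print(classify(test_face))
```

The study's reported 72% accuracy came from pairwise liberal–conservative comparisons on real data; nothing in this toy setup reproduces that result.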
- 08-27-20
As a percentage of GDP, U.S. spending on scientific R&D has sunk to
levels not seen since the pre-Sputnik era.
NEXUS: A Brief History of Information Networks from the Stone Age to AI
Yuval Noah Harari
This non-fiction book looks through the long lens
of human history to consider how the flow of information has made, and unmade,
our world.
We
are living through the most profound information revolution in human history.
To understand it, we need to understand what has come before. We have
named our species Homo sapiens, the wise human – but if humans are
so wise, why are we doing so many self-destructive things? In particular, why
are we on the verge of committing ecological and technological suicide?
Humanity gains power by building large networks of cooperation, but the easiest
way to build and maintain these networks is by spreading fictions, fantasies,
and mass delusions. In the 21st century, AI may form the nexus for a new
network of delusions that could prevent future generations from even attempting
to expose its lies and fictions. However, history is not deterministic, and
neither is technology: by making informed choices, we can still prevent the
worst outcomes. Because if we can’t change the future, then why waste time
discussing it?
https://www.ynharari.com/book/nexus/ ; https://www.goodreads.com/book/show/204927599-nexus
Around the halls: What should
the regulation of generative AI look like?
Nicol Turner Lee, Niam Yaraghi, Mark MacCarthy, and Tom Wheeler Friday, June 2, 2023
We are living in a time of unprecedented advancements in generative artificial intelligence (AI), which are AI systems that can generate a wide range of content, such as text or images. The release of ChatGPT, a chatbot powered by OpenAI’s GPT-3 large language model (LLM), in November 2022 ushered generative AI into the public consciousness, and other companies like Google and Microsoft have been equally busy creating new opportunities to leverage the technology. In the meantime, these continuing advancements and applications of generative AI have raised important questions about how the technology will affect the labor market, how its use of training data implicates intellectual property rights, and what shape government regulation of this industry should take. Last week, a congressional hearing with key industry leaders suggested an openness to AI regulation—something that legislators have already considered to rein in some of the potential negative consequences of generative AI and AI more broadly. Considering these developments, scholars across the Center for Technology Innovation (CTI) weighed in around the halls on what the regulation of generative AI should look like.
NICOL
TURNER LEE (@DrTurnerLee)
Generative AI refers to machine learning algorithms that can create new content
like audio, code, images, text, simulations, or even videos. More recent focus
has been on its enablement of chatbots, including ChatGPT, Bard, Copilot,
and other more sophisticated tools that leverage LLMs to
perform a variety of functions, like gathering research for assignments,
compiling legal case files, automating repetitive clerical tasks, or improving
online search. While debates around regulation are focused on the potential
downsides to generative AI, including the quality of datasets, unethical
applications, racial or gender bias, workforce implications, and greater
erosion of democratic processes due to technological manipulation by bad
actors, the upsides include a dramatic spike in efficiency and productivity as
the technology improves and simplifies certain processes and decisions like
streamlining physician processing of
medical notes, or helping educators teach critical
thinking skills. There will be a lot to discuss around generative AI’s ultimate
value and consequence to society, and if Congress continues to operate at a
very slow pace to regulate emerging technologies and institute a federal
privacy standard, generative AI will become more technically advanced and
deeply embedded in society. But where Congress could garner a very quick win on
the regulatory front is to require consumer disclosures when AI-generated
content is in use and add labeling or some type of multi-stakeholder certification
process to encourage improved transparency and accountability for existing and
future use cases.
Once again, the European
Union is already leading the way on this. In its most recent AI Act,
the EU requires that AI-generated content be disclosed to consumers to prevent
copyright infringement, illegal content, and other malfeasance related to
end-user lack of understanding about these systems. As more chatbots mine,
analyze, and present content in accessible ways for users, findings are often
not attributable to any one or multiple sources, and despite some permissions
of content use granted under the fair use doctrine in
the U.S. that protects copyright-protected work, consumers are often left in
the dark around the generation and explanation of the process and results.
Congress should prioritize
consumer protection in future regulation, and work to create agile policies
that are futureproofed to adapt to emerging consumer and societal
harms—starting with immediate safeguards for users before they are left to,
once again, fend for themselves as subjects of highly digitized products and
services. The EU may honestly be onto something with the disclosure
requirement, and the U.S. could further contextualize its application vis-à-vis
existing models that do the same, including the labeling guidance
of the Food and Drug Administration (FDA) or what I have proposed in prior
research: an adaptation of the Energy
Star Rating system to AI. Bringing more transparency and accountability
to these systems must be central to any regulatory framework, and beginning
with smaller bites of a big apple might be a first stab for policymakers.
NIAM
YARAGHI (@niamyaraghi)
With the emergence of sophisticated artificial intelligence (AI) advancements,
including large language models (LLMs) like GPT-4, and LLM-powered applications
like ChatGPT, there is a pressing need to revisit healthcare privacy
protections. At their core, all AI innovations utilize sophisticated
statistical techniques to discern patterns within extensive datasets using
increasingly powerful yet cost-effective computational technologies. These
three components—big data, advanced statistical methods, and computing
resources—have not only become available recently but are also being
democratized and made readily accessible to everyone at a pace unprecedented in
previous technological innovations. This progression allows us to identify
patterns that were previously indiscernible, which creates opportunities for
important advances but also possible harms to patients.
Privacy regulations, most
notably HIPAA, were established to protect patient confidentiality, operating
under the assumption that de-identified data would remain anonymous. However,
given the advancements in AI technology, the current landscape has become
riskier. Now, it’s easier than ever to integrate various datasets from multiple
sources, increasing the likelihood of accurately identifying individual
patients.
Apart from the amplified risk
to privacy and security, novel AI technologies have also increased the value of
healthcare data due to the enriched potential for knowledge extraction.
Consequently, many data providers may become more hesitant to share medical
information with their competitors, further complicating healthcare data
interoperability.
Considering these heightened
privacy concerns and the increased value of healthcare data, it’s crucial to
introduce modern legislation to ensure that medical providers will continue
sharing their data while being shielded against the consequences of potential
privacy breaches likely to emerge from the widespread use of generative AI.
MARK
MACCARTHY (@Mark_MacCarthy)
In “The
Leopard,” Giuseppe Di Lampedusa’s famous novel of the Sicilian
aristocratic reaction to the unification of Italy in the 1860s, one of his
central characters says, “If we want things to stay as they are, things will
have to change.”
Something like this Sicilian
response might be happening in the tech industry’s embrace of
inevitable AI regulation. Three things are needed, however, if we do not want
things to stay as they are.
The first and most important
step is sufficient resources for agencies to enforce current law. Federal Trade
Commission Chair Lina Khan properly says AI
is not exempt from current consumer protection, discrimination, employment, and
competition law, but if regulatory agencies cannot hire technical staff and
bring AI cases in a time of budget austerity, current law will be a dead
letter.
Second, policymakers should
not be distracted by science fiction fantasies of AI programs developing
consciousness and achieving independent agency over humans, even if these
metaphysical abstractions are endorsed by
industry leaders. Not a dime of public money should be spent on these highly
speculative diversions when scammers and industry edge-riders are seeking to
use AI to break existing law.
Third, Congress should
consider adopting new identification, transparency, risk assessment, and
copyright protection requirements along the lines of the European Union’s
proposed AI
Act. The National Telecommunications and Information
Administration’s request
for comment on a proposed AI accountability framework and Sen.
Chuck Schumer’s (D-NY) recently-announced legislative
initiative to regulate AI might be moving in that direction.
TOM
WHEELER (@tewheels)
Both sides of the political aisle, as well as digital corporate chieftains, are
now talking about the need to regulate AI. A common theme is the need for a new
federal agency. To simply clone the model used for existing regulatory agencies
is not the answer, however. That model, developed for oversight of an
industrial economy, took advantage of slower paced innovation to micromanage
corporate activity. It is unsuitable for the velocity of the free-wheeling AI
era.
All regulations walk a
tightrope between protecting the public interest and promoting innovation and
investment. In the AI era, traversing this path means accepting that different
AI applications pose different risks and identifying a plan that pairs the
regulation with the risk while avoiding innovation-choking regulatory
micromanagement.
Such agility begins with
adopting the formula by which digital companies create technical standards
as the formula for developing behavioral standards: identify
the issue; assemble a standard-setting process involving the companies, civil
society, and the agency; then give final approval and enforcement authority to
the agency.
Industrialization was all
about replacing and/or augmenting the physical power of
humans. Artificial intelligence is about replacing and/or augmenting
humans’ cognitive powers. To confuse how the former was
regulated with what is needed for the latter would be to miss the opportunity
for regulation to be as innovative as the technology it oversees. We need
institutions for the digital era that address problems that already are
apparent to all.
Google and Microsoft are
general, unrestricted donors to the Brookings Institution. The findings,
interpretations, and conclusions posted in this piece are solely those of the
author and are not influenced by any donation.
Eine kurze
Geschichte der Digitalisierung
From electrified monks to artificial intelligence: the intellectual history of the machine
Every day we experience an emotional rollercoaster: enthusiasm for digitalisation
and fear of a cold, alien power. But where does this power come from? The cultural
theorist Martin Burckhardt shows that all of it was conceived by human beings.
After all, the digital age began in 1746. We would not be surfing the internet had
Abbé Nollet not discovered back then that electricity acts instantaneously, had
Joseph-Marie Jacquard not invented the automated loom, and had Charles Babbage not
laid the foundation for today's computer with his Analytical Engine. It is not
mathematics that drives digitalisation forward, but human wishes and longings.
This book is an invitation to think of the computer not as a device but as a
parlour game that will shape our future. A crash course in the intellectual
history of the machine…: https://www.amazon.com/Eine-kurze-Geschichte-Digitalisierung-German-ebook/dp/B07C3QDM4H