Vere scire est per causas scire (To truly know is to know through causes)
Human of the Future & Artificial Intelligence
Artificial intelligence can be not only a valuable assistant but also a dangerous enemy. In the wrong hands, it can become a means of manipulation.
AI 2041: Ten Visions for Our Future
by Kai-Fu Lee and Chen Qiufan
In a groundbreaking blend of science and
imagination, the former president of Google China and a leading writer of
speculative fiction join forces to answer an urgent question: How will
artificial intelligence change our world over the next twenty years?
AI will be the defining issue of the twenty-first century, but many people know
little about it apart from visions of dystopian robots or flying cars. Though
the term has been around for half a century, it is only now, Kai-Fu Lee argues,
that AI is poised to upend our society, just as the arrival of technologies
like electricity and smart phones did before it. In the past five years, AI has
shown it can learn games like chess in mere hours--and beat humans every time.
AI has surpassed humans in speech and object recognition, even outperforming
radiologists in diagnosing lung cancer. AI is at a tipping point. What comes
next?
Within two decades, aspects of daily life may be unrecognizable. Humankind
needs to wake up to AI, both its pathways and perils. In this provocative work
that juxtaposes speculative storytelling and science, Lee, one of the world's
leading AI experts, has teamed up with celebrated novelist Chen Qiufan to
reveal how AI will trickle down into every aspect of our world by 2041. In ten
gripping narratives that crisscross the globe, coupled with incisive analysis,
Lee and Chen explore AI's challenges and its potential:
- Ubiquitous AI that knows you better than you know yourself
- Genetic fortune-telling that predicts risk of disease or even IQ
- AI sensors that create a fully contactless society in a future
pandemic
- Immersive personalized entertainment to challenge our notion of
celebrity
- Quantum computing and other leaps that both eliminate and
increase risk
By gazing toward a not-so-distant horizon, AI 2041 offers
powerful insights and compelling storytelling for everyone interested in our
collective future.
https://www.goodreads.com/en/book/show/56377201-ai-2041
The Bletchley Declaration
by Countries Attending the AI Safety
Summit, 1-2 November 2023
Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. We welcome the international community’s efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential.
AI systems are already
deployed across many domains of daily life including housing, employment,
transport, education, health, accessibility, and justice, and their use is
likely to increase. We recognise that this is therefore a unique moment to act
and affirm the need for the safe development of AI and for the
transformative opportunities of AI to be used for good and for all,
in an inclusive manner in our countries and globally. This includes for public
services such as health and education, food security, in science, clean energy,
biodiversity, and climate, to realise the enjoyment of human rights, and to
strengthen efforts towards the achievement of the United Nations Sustainable
Development Goals.
Alongside these
opportunities, AI also poses significant risks, including in those
domains of daily life. To that end, we welcome relevant international efforts
to examine and address the potential impact of AI systems in existing
fora and other relevant initiatives, and the recognition that the protection of
human rights, transparency and explainability, fairness, accountability,
regulation, safety, appropriate human oversight, ethics, bias mitigation,
privacy and data protection needs to be addressed. We also note the potential
for unforeseen risks stemming from the capability to manipulate content or
generate deceptive content. All of these issues are critically important and we
affirm the necessity and urgency of addressing them.
Particular safety risks arise
at the ‘frontier’ of AI, understood as being those highly capable
general-purpose AI models, including foundation models, that could
perform a wide variety of tasks - as well as relevant specific narrow AI that
could exhibit capabilities that cause harm - which match or exceed the
capabilities present in today’s most advanced models. Substantial risks may
arise from potential intentional misuse or unintended issues of control
relating to alignment with human intent. These issues are in part because those
capabilities are not fully understood and are therefore hard to predict. We are
especially concerned by such risks in domains such as cybersecurity and
biotechnology, as well as where frontier AI systems may amplify risks
such as disinformation. There is potential for serious, even catastrophic,
harm, either deliberate or unintentional, stemming from the most significant
capabilities of these AI models. Given the rapid and uncertain rate
of change of AI, and in the context of the acceleration of investment in
technology, we affirm that deepening our understanding of these potential risks
and of actions to address them is especially urgent.
Many risks arising
from AI are inherently international in nature, and so are best
addressed through international cooperation. We resolve to work together in an
inclusive manner to ensure human-centric, trustworthy and responsible AI that
is safe, and supports the good of all through existing international fora and
other relevant initiatives, to promote cooperation to address the broad range
of risks posed by AI. In doing so, we recognise that countries
should consider the importance of a pro-innovation and proportionate
governance and regulatory approach that maximises the benefits and
takes into account the risks associated with AI. This could include
making, where appropriate, classifications and categorisations of risk based on
national circumstances and applicable legal frameworks. We also note the
relevance of cooperation, where appropriate, on approaches such as common
principles and codes of conduct. With regard to the specific risks
most likely found in relation to frontier AI, we resolve to intensify and
sustain our cooperation, and broaden it with further countries, to identify,
understand and as appropriate act, through existing international fora and
other relevant initiatives, including future international AI Safety
Summits.
All actors have a role to
play in ensuring the safety of AI: nations, international fora and other
initiatives, companies, civil society and academia will need to work together.
Noting the importance of inclusive AI and bridging the digital
divide, we reaffirm that international collaboration should endeavour to engage
and involve a broad range of partners as appropriate, and welcome
development-orientated approaches and policies that could help developing
countries strengthen AI capacity building and leverage the enabling
role of AI to support sustainable growth and address the development
gap.
We affirm that, whilst safety
must be considered across the AI lifecycle, actors developing
frontier AI capabilities, in particular those AI systems
which are unusually powerful and potentially harmful, have a particularly strong
responsibility for ensuring the safety of these AI systems, including
through systems for safety testing, through evaluations, and by other
appropriate measures. We encourage all relevant actors to provide
context-appropriate transparency and accountability on their plans to measure,
monitor and mitigate potentially harmful capabilities and the associated
effects that may emerge, in particular to prevent misuse and issues of control,
and the amplification of other risks.
In the context of our
cooperation, and to inform action at the national and international levels, our
agenda for addressing frontier AI risk will focus on:
- identifying AI safety risks of shared
concern, building a shared scientific and evidence-based understanding of
these risks, and sustaining that understanding as capabilities continue to
increase, in the context of a wider global approach to understanding the
impact of AI in our societies.
- building respective risk-based policies across
our countries to ensure safety in light of such risks, collaborating as
appropriate while recognising our approaches may differ based on national
circumstances and applicable legal frameworks. This includes, alongside
increased transparency by private actors developing
frontier AI capabilities, appropriate evaluation metrics, tools
for safety testing, and developing relevant public sector capability and
scientific research.
In furtherance of this
agenda, we resolve to support an internationally inclusive network of
scientific research on frontier AI safety that encompasses and
complements existing and new multilateral, plurilateral and bilateral
collaboration, including through existing international fora and other relevant
initiatives, to facilitate the provision of the best science available for
policy making and the public good.
In recognition of the
transformative positive potential of AI, and as part of ensuring wider
international cooperation on AI, we resolve to sustain an inclusive global
dialogue that engages existing international fora and other relevant initiatives
and contributes in an open manner to broader international discussions, and to
continue research on frontier AI safety to ensure that the benefits
of the technology can be harnessed responsibly for good and for all. We look
forward to meeting again in 2024.
‘The Loop’
by Jacob Ward
This eye-opening narrative journey into the rapidly changing world of artificial intelligence reveals the alarming ways AI is exploiting the unconscious habits of our brains – and the real threat it poses to humanity: https://www.goodreads.com/en/book/show/59429424-the-loop
‘Trustworthy AI: A Business Guide for Navigating
Trust and Ethics in AI’
by Beena Ammanath
The founders of Humans for AI provide a straightforward and structured way to think about trust and ethics in AI, and offer practical guidelines for organizations developing or using artificial intelligence solutions: https://soundcloud.com/reesecrane/pdfreadonline-trustworthy-ai-a-business-guide-for-navigating-trust-and
‘The New
Fire: War, Peace and Democracy in the Age of AI’
by Ben Buchanan and
Andrew Imbrie
Combining a sharp
grasp of technology with clever geopolitical analysis, two AI policy experts
explain how artificial intelligence can work for democracy. With the right
approach, technology need not favor tyranny: https://www.goodreads.com/en/book/show/58329461-the-new-fire
Adversarial
vulnerabilities of human decision-making
November 17, 2020
Significance
“What I cannot efficiently break, I cannot
understand.” Understanding the vulnerabilities of human choice processes allows
us to detect and potentially avoid adversarial attacks. We develop a general
framework for creating adversaries for human decision-making. The framework is
based on recent developments in deep reinforcement learning models and
recurrent neural networks and can in principle be applied to any
decision-making task and adversarial objective. We show the performance of the
framework in three tasks involving choice, response inhibition, and social
decision-making. In all of the cases the framework was successful in its
adversarial attack. Furthermore, we show various ways to interpret the models
to provide insights into the exploitability of human choice.
Abstract
Adversarial examples are carefully crafted
input patterns that are surprisingly poorly classified by artificial and/or
natural neural networks. Here we examine adversarial vulnerabilities in the
processes responsible for learning and choice in humans. Building upon recent
recurrent neural network models of choice processes, we propose a general
framework for generating adversarial opponents that can shape the choices of
individuals in particular decision-making tasks toward the behavioral patterns
desired by the adversary. We show the efficacy of the framework through three
experiments involving action selection, response inhibition, and social
decision-making. We further investigate the strategy used by the adversary in
order to gain insights into the vulnerabilities of human choice. The framework
may find applications across behavioral sciences in helping detect and avoid
flawed choice. https://www.pnas.org/content/117/46/29221
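A purely illustrative sketch of the general idea (not the authors' framework, which pairs recurrent neural network models of human choice with deep reinforcement learning adversaries): a toy learner updates its action values with a simple delta rule while an adversary with a limited reward budget steers it toward a target action. All names and parameters below are hypothetical.

```python
# Toy "adversary vs. learner" loop, illustrative only; not the authors' implementation.
import math
import random

def softmax_choice(values, beta=3.0):
    """Pick an action with probability proportional to exp(beta * value)."""
    weights = [math.exp(beta * v) for v in values]
    r = random.random() * sum(weights)
    cumulative = 0.0
    for action, w in enumerate(weights):
        cumulative += w
        if r <= cumulative:
            return action
    return len(weights) - 1

def run_episode(n_trials=200, target=0, budget=100, lr=0.2):
    """Simulated learner updates action values; the adversary spends a limited
    reward budget to bias choices toward `target`."""
    values = [0.0, 0.0]          # learner's value estimates for two actions
    rewards_left = budget
    target_picks = 0
    for _ in range(n_trials):
        action = softmax_choice(values)
        # Adversarial allocation rule: reward target choices while budget lasts.
        reward = 1.0 if (action == target and rewards_left > 0) else 0.0
        rewards_left -= reward
        values[action] += lr * (reward - values[action])   # delta-rule update
        target_picks += (action == target)
    return target_picks / n_trials

if __name__ == "__main__":
    random.seed(0)
    print("fraction of target choices:", run_episode())
```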
Human Compatible: Artificial Intelligence and the Problem of Control
by Stuart Russell
In the popular imagination,
superhuman artificial intelligence is an approaching tidal wave that threatens
not just jobs and human relationships, but civilization itself. Conflict
between humans and machines is seen as inevitable and its outcome all too
predictable.
In this groundbreaking book, distinguished AI researcher Stuart Russell argues
that this scenario can be avoided, but only if we rethink AI from the ground
up. Russell begins by exploring the idea of intelligence in humans and in
machines. He describes the near-term benefits we can expect, from intelligent
personal assistants to vastly accelerated scientific research, and outlines the
AI breakthroughs that still have to happen before we reach superhuman AI. He
also spells out the ways humans are already finding to misuse AI, from lethal
autonomous weapons to viral sabotage.
If the predicted breakthroughs occur and superhuman AI emerges, we will have
created entities far more powerful than ourselves. How can we ensure they
never, ever, have power over us? Russell suggests that we can rebuild AI on a
new foundation, according to which machines are designed to be inherently
uncertain about the human preferences they are required to satisfy. Such
machines would be humble, altruistic, and committed to pursue our objectives,
not theirs. This new foundation would allow us to create machines that are
provably deferential and provably beneficial.
In a 2014 editorial co-authored with Stephen Hawking, Russell wrote,
"Success in creating AI would be the biggest event in human history.
Unfortunately, it might also be the last." Solving the problem of control
over AI is not just possible; it is the key that unlocks a future of unlimited
promise: https://www.goodreads.com/en/book/show/44767248-human-compatible
Rise of AI 2020
Welcome to State
of AI Report 2021
Published by Nathan Benaich and Ian Hogarth on 12
October 2021.
This year’s report
looks particularly at the emergence of transformer technology, a technique to
focus machine learning algorithms on important relationships between data
points to extract meaning more comprehensively for better predictions, which
ultimately helped unlock many of the critical breakthroughs we highlight
throughout…:
https://www.stateof.ai/2021-report-launch.html
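The "relationships between data points" the report refers to are computed by the transformer's attention operation. Below is a minimal, self-contained sketch of scaled dot-product attention for illustration; the shapes and values are made up and nothing here is drawn from the report itself.

```python
# Minimal sketch of scaled dot-product attention, the core transformer operation.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to every key; the resulting weights mix the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # pairwise relationships
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V                                 # weighted sum of values

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))                        # 4 tokens, 8-dim embeddings
    out = scaled_dot_product_attention(x, x, x)        # self-attention
    print(out.shape)                                   # (4, 8)
```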
Statement on AI Risk
AI
experts and public figures express their concern about AI risk
AI experts, journalists, policymakers, and the public are increasingly
discussing a broad spectrum of important and urgent risks from AI. Even so, it
can be difficult to voice concerns about some of advanced AI’s most severe
risks. The succinct statement below aims to overcome this obstacle and open up
discussion. It is also meant to create common knowledge of the growing number
of experts and public figures who also take some of advanced AI’s most severe
risks seriously.
Mitigating the risk of extinction from AI should be a global priority
alongside other societal-scale risks such as pandemics and nuclear war.
https://www.safe.ai/statement-on-ai-risk#signatories
The Great AI Reckoning
Deep learning has built a brave new world—but now
the cracks are showing
The
Turbulent Past and Uncertain Future of Artificial Intelligence
Is there a way out of AI's boom-and-bust cycle?...:
https://spectrum.ieee.org/special-reports/the-great-ai-reckoning/
Stanford University:
Gathering Strength, Gathering Storms: The One
Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report
Welcome to the 2021 Report :
https://ai100.stanford.edu/sites/g/files/sbiybj18871/files/media/file/AI100Report_MT_10.pdf
Artificial Intelligence Index Report 2021
https://aiindex.stanford.edu/wp-content/uploads/2021/11/2021-AI-Index-Report_Master.pdf
AI software
with social skills teaches humans how to collaborate
Unlocking
human-computer cooperation.
May 30, 2021
A
team of computer researchers developed an AI software program with social
skills — called S Sharp (written S#) — that out-performed humans in its ability
to cooperate. This was tested through a series of games between humans and the
AI software. The tests paired people with S# in a variety of social scenarios.
One
of the games humans played against the software is called “the prisoner’s
dilemma.” This classic game shows how 2 rational people might not cooperate —
even if it appears that’s in both their best interests to work together. The
other challenge was a sophisticated block-sharing game.
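For readers unfamiliar with the game, a compact sketch of its payoff structure (standard textbook values, not necessarily those used in the study) shows why two self-interested players can end up at mutual defection even though mutual cooperation pays both of them more.

```python
# Standard prisoner's dilemma payoffs (textbook values, not the study's exact setup).
# PAYOFF[(my_move, their_move)] -> (my_points, their_points)
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_move):
    """Whatever the other player does, defecting scores higher for me."""
    return max(("cooperate", "defect"), key=lambda m: PAYOFF[(m, their_move)][0])

# Both players reasoning this way land at mutual defection (1, 1),
# even though mutual cooperation (3, 3) is better for both.
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
print(PAYOFF[("defect", "defect")], "versus", PAYOFF[("cooperate", "cooperate")])
```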
In
most cases, the S# software out-performed humans in finding compromises that
benefit both parties. To see the experiment in action, watch the good
featurette below. This project was helmed by 2 well-known computer scientists:
- Iyad Rahwan PhD ~
Massachusetts Institute of Technology • US
- Jacob Crandall PhD ~
Brigham Young Univ. • US
The researchers tested humans and the AI in 3 types of game interactions:
- computer-to-computer
- human-to-computer
- human-to-human
Researcher
Jacob Crandall PhD said:
Computers
can now beat the best human minds in the most intellectually challenging games
— like chess. They can also perform tasks that are difficult for adult humans
to learn — like driving cars. Yet autonomous machines have difficulty learning
to cooperate, something even young children do.
Human
cooperation appears easy — but it’s very difficult to emulate because it relies
on cultural norms, deeply rooted instincts, and social mechanisms that express
disapproval of non-cooperative behavior.
Such
common sense mechanisms aren’t easily built into machines. In fact, the same AI
software programs that effectively play the board games of chess +
checkers, Atari video games, and the card game of poker — often fail to
consistently cooperate when cooperation is necessary.
Other
AI software often takes 100s of rounds of experience to learn to cooperate with
each other, if they cooperate at all. Can we build computers that
cooperate with humans — the way humans cooperate with each other? Building on
decades of research in AI, we built a new software program that learns to
cooperate with other machines — simply by trying to maximize its own rewards.
We
ran experiments that paired the AI with people in various social scenarios —
including a “prisoner’s dilemma” challenge and a sophisticated block-sharing game.
While the program consistently learns to cooperate with another computer — it
doesn’t cooperate very well with people. But people didn’t cooperate much with
each other either.
As
we all know: humans can cooperate better if they can communicate their intentions
through words + body language. So in hopes of creating a program that
consistently learns to cooperate with people — we gave our AI a way to listen
to people, and to talk to them.
We
did that in a way that lets the AI play in previously unanticipated scenarios.
The resulting algorithm achieved our goal. It consistently learns to cooperate
with people as well as people do. Our results show that 2 computers make a much
better team — better than 2 humans, and better than a human + a computer.
But
the program isn’t a blind cooperator. In fact, the AI can get pretty angry if
people don’t behave well. The historic computer scientist Alan Turing PhD
believed machines could potentially demonstrate human-like intelligence. Since
then, AI has been regularly portrayed as a threat to humanity or human jobs.
To
protect people, programmers have tried to code AI to follow legal + ethical
principles — like the 3 Laws
of Robotics written by Isaac Asimov PhD. Our research
demonstrates that a new path is possible.
Machines
designed to selfishly maximize their pay-offs can — and should — make an
autonomous choice to cooperate with humans across a wide range of situations. 2
humans — if they were honest with each other + loyal — would have done as well
as 2 machines. About half of the humans lied at some point. So the AI is
learning that moral characteristics are better — since it’s programmed to not
lie — and it also learns to maintain cooperation once it emerges.
The goal is to understand the math behind cooperating with people — what attributes AI needs so it can develop social skills. AI must be able to
respond to us — and articulate what it’s doing. It must interact with other
people. This research could help humans with their relationships. In society,
relationships break-down all the time. People that were friends for years
all-of-a-sudden become enemies. Because the AI is often better at reaching
these compromises than we are, it could teach us how to get-along better.
https://www.kurzweilai.net/digest-ai-software-with-social-skills-teaches-humans-how-to-collaborate
Superintelligence Cannot be Contained: Lessons from Computability Theory
Published: Jan 5, 2021
Abstract
Superintelligence is a
hypothetical agent that possesses intelligence far surpassing that of the
brightest and most gifted human minds. In light of recent advances in machine
intelligence, a number of scientists, philosophers and technologists have
revived the discussion about the potentially catastrophic risks entailed by
such an entity. In this article, we trace the origins and development of the
neo-fear of superintelligence, and some of the major proposals for its
containment. We argue that total containment is, in principle, impossible, due
to fundamental limits inherent to computing itself. Assuming that a superintelligence
will contain a program that includes all the programs that can be executed by a
universal Turing machine on input potentially as complex as the state of the
world, strict containment requires simulations of such a program, something
theoretically (and practically) impossible.
https://jair.org/index.php/jair/article/view/12202
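The impossibility claim rests on classical computability results. The following is a compressed paraphrase of the style of argument, with illustrative notation rather than the paper's own: any total, always-correct harm-checking procedure would decide the halting problem.

```latex
% Compressed paraphrase of the containment-to-halting reduction (illustrative notation).
Suppose a containment procedure $\mathrm{Harm}(P, x)$ always halts and correctly decides
whether program $P$, run on input $x$, ever executes a harmful action. For any program $Q$
and input $y$, construct the program
\[
  P_{Q,y}(x):\quad \text{simulate } Q(y); \text{ if the simulation halts, execute a harmful action.}
\]
Then $\mathrm{Harm}(P_{Q,y}, x)$ returns ``harmful'' exactly when $Q$ halts on $y$, so a total,
correct $\mathrm{Harm}$ would decide the halting problem, which Turing showed is impossible.
Hence no general, always-terminating containment check of this kind can exist.
```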
University
of California researchers have developed new computer AI software that enables robots to
learn physical skills --- called motor tasks --- by trial + error. The
robot uses a step-by-step process similar to the way humans learn. The
lab made a demo of their technique --- called reinforcement learning. In the
test: the robot completes a variety of physical tasks --- without any
pre-programmed details about its surroundings.
The lead researcher said: "What we’re showing is a new AI approach
to enable a robot to learn. The key is that when a robot is faced with
something new, we won’t have to re-program it. The exact same AI software
enables the robot to learn all the different tasks we gave it."
https://www.kurzweilai.net/digest-this-self-learning-ai-software-lets-robots-do-tasks-autonomously
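As a toy illustration of learning by trial and error, here is a hypothetical tabular Q-learning example on a tiny "reach the goal" task; it is far simpler than the deep reinforcement learning methods the Berkeley team actually used and only shows the idea.

```python
# Learning by trial and error: tabular Q-learning on a 1-D chain (illustrative only).
import random

N_STATES, GOAL = 6, 5          # states 0..5, reward only at the goal
ACTIONS = (-1, +1)             # step left or right

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if random.random() < epsilon or q[state][0] == q[state][1]:
                a = random.randrange(2)            # explore, or break ties randomly
            else:
                a = 0 if q[state][0] > q[state][1] else 1
            next_state, reward, done = step(state, ACTIONS[a])
            # Trial-and-error update: nudge the estimate toward reward plus discounted future value.
            q[state][a] += alpha * (reward + gamma * max(q[next_state]) - q[state][a])
            state = next_state
    return q

if __name__ == "__main__":
    random.seed(0)
    q = train()
    policy = ["left" if row[0] >= row[1] else "right" for row in q]
    print(policy)   # after training, the interior states should prefer "right"
```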
Researchers at Lund University in Sweden have developed implantable electrodes
that can capture signals from a living human (or) animal brain over a long
period of time — but without causing brain tissue damage.
This bio-medical tech will make it possible to
monitor — and eventually understand — brain function in both healthy + diseased people.
https://www.kurzweilai.net/digest-breakthrough-for-flexible-electrode-implants-in-the-brain
This paper reports unprecedented success on the Grade 8 New York Regents Science Exam, where for the first time a system scores more than 90% on the exam's non-diagram, multiple choice (NDMC) questions. In addition, our Aristo system, building upon the success of recent language models, exceeded 83% on the corresponding Grade 12 Science Exam NDMC questions. The results, on unseen test questions, are robust across different test years and different variations of this kind of test. They demonstrate that modern NLP methods can result in mastery on this task. While not a full solution to general question-answering (the questions are multiple choice, and the domain is restricted to 8th Grade science), it represents a significant milestone for the field.
- Jacob W.
Crandall, Mayada Oudah, Tennom, Fatimah Ishowo-Oloko, Sherief Abdallah,
Jean-François Bonnefon, Manuel Cebrian, Azim Shariff, Michael A. Goodrich,
Iyad Rahwan. Cooperating with machines. Nature Communications, 2018; 9 (1)
DOI: 10.1038/s41467-017-02597-8 (open access)
- Ting-Hao (Kenneth) Huang, Joseph Chee Chang, and Jeffrey P. Bigham. Evorus: A Crowd-powered Conversational Assistant Built to Automate Itself Over Time. Language Technologies Institute and Human-Computer Interaction Institute, Carnegie Mellon University. 2018. (open access)
The technology described in the film already exists, says UC Berkeley AI researcher Stuart Russell
Campaign to Stop Killer Robots | Slaughterbots
- Autonomousweapons.org
- Campaign
to Stop Killer Robots
- Making
the Case The Dangers of Killer Robots and the Need for a Preemptive Ban,
Human Rights Watch
- Meaningful
Human Control or Appropriate Human Judgment? The Necessary Limits on
Autonomous Weapons, Global Security
- ETHICALLY
ALIGNED DESIGN A Vision for Prioritizing Human Wellbeing with Artificial
Intelligence and Autonomous Systems, IEEE Global Initiative for
Ethical Considerations in Artificial Intelligence and Autonomous Systems
- Killing
by machine: Key issues for understanding meaningful human control,
Article 36
OCTOBER 30, 2023
President Biden
Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence
Fast Stencil-Code Computation on a Wafer-Scale Processor
The performance of CPU-based
and GPU-based systems is often low for PDE codes, where large, sparse, and
often structured systems of linear equations must be solved. Iterative solvers
are limited by data movement, both between caches and memory and between nodes.
Here we describe the solution of such systems of equations on the Cerebras
Systems CS-1, a wafer-scale processor that has the memory bandwidth and
communication latency to perform well. We achieve 0.86 PFLOPS on a single
wafer-scale system for the solution by BiCGStab of a linear system arising from
a 7-point finite difference stencil on a 600 × 595 × 1536 mesh, achieving about
one third of the machine's peak performance. We explain the system, its
architecture and programming, and its performance on this problem and related
problems. We discuss issues of memory capacity and floating point precision. We
outline plans to extend this work towards full applications: https://arxiv.org/abs/2010.03660
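To make the problem class concrete, here is a small illustrative version: a 7-point finite-difference Laplacian assembled as a sparse matrix and solved with SciPy's BiCGStab on a tiny mesh. It is only a sketch of the mathematical setup, nothing like the paper's 600 × 595 × 1536 problem or the CS-1 implementation.

```python
# Assemble a 7-point Laplacian stencil on a tiny 3-D mesh and solve with BiCGStab.
# Mesh size and solver settings are illustrative only.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

def laplacian_3d(nx, ny, nz):
    """7-point stencil: -6 on the interior diagonal, +1 for each of the six neighbors."""
    def lap_1d(n):
        return sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
    Ix, Iy, Iz = sp.identity(nx), sp.identity(ny), sp.identity(nz)
    return (sp.kron(sp.kron(lap_1d(nx), Iy), Iz)
            + sp.kron(sp.kron(Ix, lap_1d(ny)), Iz)
            + sp.kron(sp.kron(Ix, Iy), lap_1d(nz))).tocsr()

nx, ny, nz = 12, 11, 16                 # tiny stand-in for the paper's mesh
A = laplacian_3d(nx, ny, nz)
b = np.ones(nx * ny * nz)
x, info = bicgstab(A, b, maxiter=2000)
print("converged" if info == 0 else f"bicgstab info={info}",
      "| residual norm:", np.linalg.norm(A @ x - b))
```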
OpenAI’s GPT-4 is so powerful that experts want to slam the brakes on generative AI
We can keep developing more and more powerful AI models, but should we? Experts aren’t so sure.
fastcompany.com/90873194/chatgpt-4-power-scientists-warn-pause-development-generative-ai-letter
- 09.20.19
- Do We Want Robot Warriors to Decide Who Lives or Dies?
- Why we
really should ban autonomous weapons: a response
- The proposed ban on offensive autonomous weapons is unrealistic and dangerous
Campaign to Stop Killer Robots | Slaughterbots
Artificial Intelligence: An Illustrated History: From Medieval Robots to Neural Networks
by Clifford
A. Pickover
An illustrated
journey through the past, present, and future of artificial intelligence.
From medieval robots and Boolean algebra to facial recognition, artificial
neural networks, and adversarial patches, this fascinating history takes
readers on a vast tour through the world of artificial intelligence.
Award-winning author Clifford A. Pickover (The Math Book, The Physics Book,
Death & the Afterlife) explores the historic and current applications
of AI in such diverse fields as computing, medicine, popular culture,
mythology, and philosophy, and considers the enduring threat to humanity should
AI grow out of control. Across 100 illustrated entries, Pickover provides an
entertaining and informative look into when artificial intelligence began, how
it developed, where it’s going, and what it means for the future of
human-machine interaction.
https://www.goodreads.com/book/show/44443017-artificial-intelligence
Elon Musk wants to enhance us as superhuman cyborgs to deal with superintelligent AI
April 21, 2017
When AI improves human performance instead of taking over
date: April 18, 2017
“Throw their energy into the next set of grand challenges, developing advanced general algorithms that could one day help scientists as they tackle some of our most complex problems, such as finding new cures for diseases, dramatically reducing energy consumption, or inventing revolutionary new materials,” says DeepMind Technologies CEO Demis Hassabis.
November 5, 2020
Summary:
Engineers have developed a
computer chip that combines two functions - logic operations and data storage -
into a single architecture, paving the way to more efficient devices. Their
technology is particularly promising for applications relying on artificial
intelligence…:
https://www.sciencedaily.com/releases/2020/11/201105112954.htm
Read more at: https://phys.org/news/2017-03-tech-world-debate-robots-jobs.html#jCp
Around the halls: What should
the regulation of generative AI look like?
Nicol Turner Lee, Niam Yaraghi, Mark MacCarthy, and Tom Wheeler Friday, June 2, 2023
We are living in a time of unprecedented advancements in generative artificial intelligence (AI), which are AI systems that can generate a wide range of content, such as text or images. The release of ChatGPT, a chatbot powered by OpenAI’s GPT-3 large language model (LLM), in November 2022 ushered generative AI into the public consciousness, and other companies like Google and Microsoft have been equally busy creating new opportunities to leverage the technology. In the meantime, these continuing advancements and applications of generative AI have raised important questions about how the technology will affect the labor market, how its use of training data implicates intellectual property rights, and what shape government regulation of this industry should take. Last week, a congressional hearing with key industry leaders suggested an openness to AI regulation—something that legislators have already considered to rein in some of the potential negative consequences of generative AI and AI more broadly. Considering these developments, scholars across the Center for Technology Innovation (CTI) weighed in around the halls on what the regulation of generative AI should look like.
NICOL
TURNER LEE (@DrTurnerLee)
Generative AI refers to machine learning algorithms that can create new content
like audio, code, images, text, simulations, or even videos. More recent focus
has been on its enablement of chatbots, including ChatGPT, Bard, Copilot,
and other more sophisticated tools that leverage LLMs to
perform a variety of functions, like gathering research for assignments,
compiling legal case files, automating repetitive clerical tasks, or improving
online search. While debates around regulation are focused on the potential
downsides to generative AI, including the quality of datasets, unethical
applications, racial or gender bias, workforce implications, and greater
erosion of democratic processes due to technological manipulation by bad
actors, the upsides include a dramatic spike in efficiency and productivity as
the technology improves and simplifies certain processes and decisions like
streamlining physician processing of
medical notes, or helping educators teach critical
thinking skills. There will be a lot to discuss around generative AI’s ultimate
value and consequence to society, and if Congress continues to operate at a
very slow pace to regulate emerging technologies and institute a federal
privacy standard, generative AI will become more technically advanced and
deeply embedded in society. But where Congress could garner a very quick win on
the regulatory front is to require consumer disclosures when AI-generated
content is in use and add labeling or some type of multi-stakeholder certification
process to encourage improved transparency and accountability for existing and
future use cases.
Once again, the European
Union is already leading the way on this. In its most recent AI Act,
the EU requires that AI-generated content be disclosed to consumers to prevent
copyright infringement, illegal content, and other malfeasance related to
end-user lack of understanding about these systems. As more chatbots mine,
analyze, and present content in accessible ways for users, findings are often
not attributable to any one or multiple sources, and despite some permissions
of content use granted under the fair use doctrine in
the U.S. that protects copyright-protected work, consumers are often left in
the dark around the generation and explanation of the process and results.
Congress should prioritize
consumer protection in future regulation, and work to create agile policies
that are futureproofed to adapt to emerging consumer and societal
harms—starting with immediate safeguards for users before they are left to,
once again, fend for themselves as subjects of highly digitized products and
services. The EU may honestly be onto something with the disclosure
requirement, and the U.S. could further contextualize its application vis-à-vis
existing models that do the same, including the labeling guidance
of the Food and Drug Administration (FDA) or what I have proposed in prior
research: an adaptation of the Energy
Star Rating system to AI. Bringing more transparency and accountability
to these systems must be central to any regulatory framework, and beginning
with smaller bites of a big apple might be a first stab for policymakers.
NIAM
YARAGHI (@niamyaraghi)
With the emergence of sophisticated artificial intelligence (AI) advancements,
including large language models (LLMs) like GPT-4, and LLM-powered applications
like ChatGPT, there is a pressing need to revisit healthcare privacy
protections. At their core, all AI innovations utilize sophisticated
statistical techniques to discern patterns within extensive datasets using
increasingly powerful yet cost-effective computational technologies. These
three components—big data, advanced statistical methods, and computing
resources—have not only become available recently but are also being
democratized and made readily accessible to everyone at a pace unprecedented in
previous technological innovations. This progression allows us to identify
patterns that were previously indiscernible, which creates opportunities for
important advances but also possible harms to patients.
Privacy regulations, most
notably HIPAA, were established to protect patient confidentiality, operating
under the assumption that de-identified data would remain anonymous. However,
given the advancements in AI technology, the current landscape has become
riskier. Now, it’s easier than ever to integrate various datasets from multiple
sources, increasing the likelihood of accurately identifying individual
patients.
Apart from the amplified risk
to privacy and security, novel AI technologies have also increased the value of
healthcare data due to the enriched potential for knowledge extraction.
Consequently, many data providers may become more hesitant to share medical
information with their competitors, further complicating healthcare data
interoperability.
Considering these heightened
privacy concerns and the increased value of healthcare data, it’s crucial to
introduce modern legislation to ensure that medical providers will continue
sharing their data while being shielded against the consequences of potential
privacy breaches likely to emerge from the widespread use of generative AI.
MARK
MACCARTHY (@Mark_MacCarthy)
In “The
Leopard,” Giuseppe Di Lampedusa’s famous novel of the Sicilian
aristocratic reaction to the unification of Italy in the 1860s, one of his
central characters says, “If we want things to stay as they are, things will
have to change.”
Something like this Sicilian
response might be happening in the tech industry’s embrace of
inevitable AI regulation. Three things are needed, however, if we do not want
things to stay as they are.
The first and most important
step is sufficient resources for agencies to enforce current law. Federal Trade
Commission Chair Lina Khan properly says AI
is not exempt from current consumer protection, discrimination, employment, and
competition law, but if regulatory agencies cannot hire technical staff and
bring AI cases in a time of budget austerity, current law will be a dead
letter.
Second, policymakers should
not be distracted by science fiction fantasies of AI programs developing
consciousness and achieving independent agency over humans, even if these
metaphysical abstractions are endorsed by
industry leaders. Not a dime of public money should be spent on these highly
speculative diversions when scammers and industry edge-riders are seeking to
use AI to break existing law.
Third, Congress should
consider adopting new identification, transparency, risk assessment, and
copyright protection requirements along the lines of the European Union’s
proposed AI
Act. The National Telecommunications and Information
Administration’s request
for comment on a proposed AI accountability framework and Sen.
Chuck Schumer’s (D-NY) recently-announced legislative
initiative to regulate AI might be moving in that direction.
TOM
WHEELER (@tewheels)
Both sides of the political aisle, as well as digital corporate chieftains, are
now talking about the need to regulate AI. A common theme is the need for a new
federal agency. To simply clone the model used for existing regulatory agencies
is not the answer, however. That model, developed for oversight of an
industrial economy, took advantage of slower paced innovation to micromanage
corporate activity. It is unsuitable for the velocity of the free-wheeling AI
era.
All regulations walk a
tightrope between protecting the public interest and promoting innovation and
investment. In the AI era, traversing this path means accepting that different
AI applications pose different risks and identifying a plan that pairs the
regulation with the risk while avoiding innovation-choking regulatory
micromanagement.
Such agility begins with
adopting the formula by which digital companies create technical standards
as the formula for developing behavioral standards: identify
the issue; assemble a standard-setting process involving the companies, civil
society, and the agency; then give final approval and enforcement authority to
the agency.
Industrialization was all
about replacing and/or augmenting the physical power of
humans. Artificial intelligence is about replacing and/or augmenting
humans’ cognitive powers. To confuse how the former was
regulated with what is needed for the latter would be to miss the opportunity
for regulation to be as innovative as the technology it oversees. We need
institutions for the digital era that address problems that already are
apparent to all.
Google and Microsoft are
general, unrestricted donors to the Brookings Institution. The findings,
interpretations, and conclusions posted in this piece are solely those of the
author and are not influenced by any donation.
AI will upload and access our memories, predicts Siri co-inventor
April 26, 2017
Case | Man with quadriplegia employs injury bridging technologies to move again – just by thinking