Boston experimented with using generative AI for
governing. It went surprisingly well
BY SANTIAGO GARCES AND
STEPHEN GOLDSMITH
Regulation of generative AI addresses important questions. If it's not implemented in a dynamic and flexible way, however, it runs the risk of impeding the kinds of dramatic improvements in both government and community participation that generative AI stands to offer.
Current bureaucratic
procedures, developed 150 years ago, need reform, and generative AI presents a
unique opportunity to do just that. As two lifelong public servants, we believe
that the risk of delaying reform is just as great as the risk of negative impacts.
Anxiety around generative AI, which has been spilling across sectors from screenwriting to university education, is understandable. Too often, though, the debate is framed only around how the tools will disrupt us, not how they might reform systems that have been calcified for too long in regressive and inefficient patterns.
OpenAI’s ChatGPT and its
competitors are not yet part of the government reform movement, but they should
be. Most recent attempts to reinvent government have centered around elevating
good people within bad systems, with the hope that this will chip away at the
fossilized bad practices.
The level of transformative
change now will depend on visionary political leaders willing to work through
the tangle of outdated procedures, inequitable services, hierarchical
practices, and siloed agency verticals that hold back advances in responsive government.
New AI tools offer the most
hope ever for creating a broadly reformed, citizen-oriented governance. The
reforms we propose do not demand reorganization of municipal departments;
rather, they require examining the fundamental government operating systems and
using generative AI to empower employees to look across agencies for solutions,
analyze problems, calculate risk, and respond in record time.
What makes generative AI’s
potential so great is its ability to fundamentally change the operations of
government.
Bureaucracies rely on paper
and routines. The red tape of bureaucracy has been strangling employees and
constituents alike. Employees, denied the ability to quickly examine underlying
problems or risks, resort to slow-moving approval processes despite knowing,
through frontline experience, how systems could be optimized. And the big
machine of bureaucracy, unable or unwilling to identify the cause of a
prospective problem, resorts to reaction rather than preemption.
Finding patterns of any sort, in everything from crime to waste, fraud to abuse, occurs infrequently and often involves legions of inspectors. Regulators take months to painstakingly look through compliance forms, unable to evaluate each request on its own distinctive characteristics. Field workers equipped with AI could quickly access the information they need to make a judgment about the cause of a problem or offer a solution to help residents seeking assistance. These new technologies allow workers to quickly review massive amounts of data that already exist in city government, and to find patterns, make predictions, and identify norms in response to well-framed inquiries.
Together, we have overseen technology innovation in five cities and worked with chief data officers from 20 other municipalities toward the same goals, and of all those advances, we see generative AI as having the greatest potential. For example, Boston asked OpenAI to “suggest interesting analyses” after we uploaded 311 data. In response, it suggested two things: a time series analysis by case type, and a comparative analysis by neighborhood. This meant that city officials spent less time navigating the mechanics of computing an analysis, and more time diving into the patterns of discrepancy in service. The tools make graphs, maps, and other visualizations with a simple prompt. With lower barriers to analyzing data, our city officials can formulate more hypotheses and challenge assumptions, resulting in better decisions.
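To make this concrete, here is a minimal sketch, in Python with pandas, of the two analyses the tool suggested. The file name and column names (opened_date, closed_date, case_type, neighborhood) are illustrative assumptions, not Boston's actual 311 schema:

```python
import pandas as pd

# Load a hypothetical 311 export; file and column names are illustrative.
cases = pd.read_csv("boston_311.csv", parse_dates=["opened_date"])

# Time series analysis: monthly case volume broken out by case type.
monthly_by_type = (
    cases
    .groupby([pd.Grouper(key="opened_date", freq="M"), "case_type"])
    .size()
    .unstack(fill_value=0)
)

# Comparative analysis: median days-to-close by neighborhood,
# a quick way to surface discrepancies in service.
cases["days_to_close"] = (
    pd.to_datetime(cases["closed_date"]) - cases["opened_date"]
).dt.days
by_neighborhood = (
    cases.groupby("neighborhood")["days_to_close"]
    .median()
    .sort_values(ascending=False)
)

print(monthly_by_type.tail())
print(by_neighborhood.head(10))
```

The point is less the code itself than how little of it there is: a generative tool can produce and explain a sketch like this from a plain-language prompt.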
Not all city officials have the engineering and web-development experience needed to run these kinds of analyses or write the underlying code. But this experiment shows that other city employees, without any STEM background, could, with just a bit of training, use these generative AI tools to supplement their work.
To make this possible, more authority would need to be granted to frontline workers who too often have their hands tied with red tape. Therefore, we encourage government leaders to allow workers more discretion to solve problems, identify risks, and check data. This is not inconsistent with accountability; rather, supervisors can use these same generative AI tools to identify patterns or outliers—say, where race is inappropriately playing a part in decision-making, or where program effectiveness drops off (and why). These new tools will more quickly indicate which interventions are making a difference, or precisely where a historic barrier is continuing to harm an already marginalized community.
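As one hedged illustration of what such oversight could look like, a supervisor could generate a short script that compares outcome rates across groups and flags outliers. Everything below (the file, the "group" and "approved" columns, the 10-point threshold) is hypothetical:

```python
import pandas as pd

# Hypothetical decision log: one row per case, with a 0/1 "approved"
# outcome and a demographic or geographic attribute to audit against.
decisions = pd.read_csv("permit_decisions.csv")

# Approval rate for each group.
rates = decisions.groupby("group")["approved"].mean()

# Flag groups whose approval rate deviates sharply from the citywide rate.
citywide = decisions["approved"].mean()
outliers = rates[(rates - citywide).abs() > 0.10]  # 10-point gap, illustrative

print(f"Citywide approval rate: {citywide:.1%}")
print("Groups needing review:")
print(outliers)
```

A flagged group is not proof of bias, of course; it is a prompt for a human supervisor to look closer, which is exactly the kind of discretion we are arguing for.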
Civic groups will be able to hold government accountable in new ways, too. This is where the linguistic power of large language models really shines: Public employees and community leaders alike can ask the tools to create visual process maps, build checklists based on a description of a project, or monitor progress on compliance. Imagine if people who have a deep understanding of a city—its operations, neighborhoods, history, and hopes for the future—could work toward shared goals, equipped with the most powerful tools of the digital age. Gatekeepers of formerly mysterious processes will lose their stranglehold, and expediters versed in state and local ordinances, codes, and standards will no longer be necessary to maneuver around things like zoning or permitting processes.
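As a sketch of what the checklist idea could look like in practice, the snippet below calls OpenAI's Python client to turn a project description into a compliance checklist. The model name, prompt, and project description are illustrative assumptions, not a deployed city system:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical project description a resident or employee might supply.
project = (
    "Repave Maple Street between 1st and 5th Avenues, including new "
    "curb cuts, a protected bike lane, and updated storm drains."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You help city staff turn project descriptions into "
                    "step-by-step permitting and inspection checklists."},
        {"role": "user",
         "content": f"Create a compliance checklist for this project: {project}"},
    ],
)
print(response.choices[0].message.content)
```

Any output from a sketch like this would still need to be checked against the actual local code, but it gives residents and staff a shared starting point that once required an expediter.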
Numerous challenges would
remain. Public workforces would still need better data analysis skills in order
to verify whether a tool is following the right steps and producing correct
information. City and state officials would need technology partners in the
private sector to develop and refine the necessary tools, and these
relationships raise challenging questions about privacy, security, and
algorithmic bias.
However, unlike previous government reforms that merely made a dent in the problem of sprawling, outdated government processes, generative AI will, if incorporated broadly, correctly, and fairly, produce the comprehensive changes necessary to bring residents back to the center of local decision-making—and restore trust in official conduct.
https://www.fastcompany.com/90983427/chatgpt-generative-ai-government-reform
Artificial intelligence in
government
Uses of AI in government
The potential uses of AI in government are wide and varied,[4] with Deloitte considering that
"Cognitive technologies could eventually revolutionize every facet of
government operations".[5] Mehr suggests that
six types of government problems are appropriate for AI applications:[2]
1. Resource allocation -
such as where administrative support is required to complete tasks more
quickly.
2. Large datasets - where
these are too large for employees to work efficiently and multiple datasets
could be combined to provide greater insights.
3. Expert shortage - including cases where basic questions could be answered automatically and niche issues can be learned.
4. Predictable scenario -
historical data makes the situation predictable.
5. Procedural - repetitive
tasks where inputs or outputs have a binary answer.
6. Diverse data - where data
takes a variety of forms (such as visual and linguistic) and needs to be
summarised regularly.
Mehr states that "While applications of AI in government work have not
kept pace with the rapid expansion of AI in the private sector, the potential
use cases in the public sector mirror common applications in the private
sector."[2]
Potential and actual uses of AI in government can be divided into three
broad categories: those that contribute to public policy objectives; those that
assist public interactions with the government; and other uses.
Contributing to public policy objectives
There are a range of examples of where AI can contribute to public policy
objectives.[4] These include:
- Receiving benefits
at job loss, retirement, bereavement and childbirth almost immediately,
in an automated way (thus without requiring any actions from citizens at
all)[6]
- Social insurance
service provision[3]
- Classifying
emergency calls based on their urgency (like the system used by the Cincinnati Fire Department in the United States[7]; see the triage sketch after this list)
- Detecting and
preventing the spread of diseases[7]
- Assisting public
servants in making welfare payments and immigration decisions[1]
- Adjudicating bail
hearings[1]
- Triaging health care
cases[1]
- Monitoring social
media for public feedback on policies[8]
- Monitoring social
media to identify emergency situations[8]
- Identifying
fraudulent benefits claims[8]
- Predicting a crime
and recommending optimal police presence[8]
- Predicting traffic
congestion and car accidents[8]
- Anticipating road
maintenance requirements[8]
- Identifying breaches
of health regulations[8]
- Providing
personalised education to students[7]
- Marking exam papers[1]
- Assisting with
defence and national security (see Artificial intelligence § Military and Applications of artificial
intelligence § Other fields in which AI methods are implemented, respectively).
- Providing symptom-based diagnosis through health chatbots such as AI Vaid[9]
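To make the call-triage item above concrete (see the Cincinnati example in the list), here is a minimal text-classification sketch in Python with scikit-learn. The transcripts and labels are toy data, not any fire department's actual model:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: call transcripts labeled by dispatch urgency.
calls = [
    "caller reports chest pain and difficulty breathing",
    "smoke visible from a neighbor's roof",
    "cat stuck in a tree, no danger to people",
    "loud music complaint from apartment upstairs",
]
urgency = ["high", "high", "low", "low"]

# TF-IDF features feeding a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(calls, urgency)

# Triage a new call before it reaches a human dispatcher.
print(model.predict(["person unconscious outside the library"])[0])
```

A production triage system would be trained on large volumes of labeled dispatch records and audited continuously, but the underlying pattern is the same.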
Assisting public interactions with government
AI can be used to assist members of the public to interact with government
and access government services,[4] for example by:
- Answering questions using virtual assistants or chatbots (see below)
- Directing requests
to the appropriate area within government[2] (a rough routing sketch follows this list)
- Filling out forms[2]
- Assisting with
searching documents (e.g. IP Australia's trade mark search[10])
- Scheduling
appointments[8]
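As a rough sketch of the "directing requests" item above, routing can start as simply as matching a request against per-department keyword sets. The departments and keywords below are invented for illustration; a production system would more likely use a trained intent classifier:

```python
# Hypothetical mapping from departments to characteristic keywords.
ROUTES = {
    "Public Works": {"pothole", "streetlight", "sidewalk", "trash"},
    "Water & Sewer": {"leak", "hydrant", "drain", "water"},
    "Inspectional Services": {"permit", "inspection", "zoning", "code"},
}

def route_request(text: str) -> str:
    """Return the department whose keywords best match the request."""
    words = set(text.lower().split())
    scores = {dept: len(words & kws) for dept, kws in ROUTES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "General Intake"

print(route_request("There is a water leak near the fire hydrant on Elm St"))
```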
Examples of virtual assistants or chatbots being used by government include
the following:
- Launched in February
2016, the Australian Taxation Office has a virtual
assistant on its website called
"Alex".[11] As at 30 June
2017, Alex could respond to more than 500 questions, had engaged in 1.5
million conversations and resolved over 81% of enquiries at first contact.[11]
- Australia's National Disability
Insurance Scheme (NDIS) is developing a virtual assistant
called "Nadia" which takes the form of an avatar using the voice of actor Cate Blanchett.[12] Nadia is
intended to assist users of the NDIS to navigate the service. Costing some
$4.5 million,[13] the project
has been postponed following a number of issues.[14][15] Nadia was
developed using IBM Watson;[16][12] however,
the Australian Government is considering other
platforms such as Microsoft Cortana for its further
development.[17]
- The Australian
Government's Department of Human Services uses virtual
assistants on parts of its website to answer questions
and encourage users to stay in the digital channel.[18] As at December
2018, a virtual assistant called "Sam" could answer general
questions about family, job seeker and student payments and related
information. The department also introduced an internally-facing virtual
assistant called "MelissHR" to make it easier for departmental
staff to access human resources information.[18]
- Estonia is building
a virtual assistant which will guide citizens through any interactions
they have with the government. Automated and proactive services
"push" services to citizens at key events of their lives
(including births, bereavements, unemployment, ...). One example is the
automated registering of babies when they are born.[19][20]
Other uses
Other uses of AI in government include:
- Translation[2]
- Language interpretation pioneered by the European Commission's Directorate General for
Interpretation and Florika Fink-Hooijer.
- Drafting documents[2]
Potential benefits
AI offers potential efficiencies and cost savings for government. For
example, Deloitte has estimated that
automation could save US government employees between 96.7
million and 1.2 billion hours a year, resulting in potential savings of between
$3.3 billion and $41.1 billion a year.[5] The Harvard Business Review has stated that while this
may lead a government to reduce employee numbers, "Governments could
instead choose to invest in the quality of its services. They can re-employ
workers' time towards more rewarding work that requires lateral thinking,
empathy, and creativity — all things at which humans continue to outperform
even the most sophisticated AI program."[1]
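A quick back-of-the-envelope check shows that both ends of Deloitte's range imply roughly the same labor cost, about $34 per hour, so the estimates are at least internally consistent:

```python
# Deloitte's published estimates: hours saved and dollar savings per year.
low_hours, high_hours = 96.7e6, 1.2e9
low_savings, high_savings = 3.3e9, 41.1e9

# Implied hourly labor cost at each end of the range.
print(f"${low_savings / low_hours:.2f}/hour")    # ~ $34.13
print(f"${high_savings / high_hours:.2f}/hour")  # ~ $34.25
```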
Risks
Risks associated with the use of AI in government include AI becoming
susceptible to bias,[2] a lack of
transparency in how an AI application may make decisions,[7] and the
accountability for any such decisions.[7]
In governance and the economy, AI might make the market more difficult for
smaller companies trying to keep up with the pace of technological change. Large U.S.
companies like Apple and Google are able to dominate the market with their
latest and most advanced technologies. This gives them an advantage over
smaller companies that do not have the means to advance as far in digital
technology and AI.[21]
See also
- Government by algorithm
- AI for Good
- Project Cybersyn
- Civic technology
- e-government
- Applications of artificial
intelligence
- Lawbot
- Regulation of artificial
intelligence
- Existential risk
from artificial general intelligence
- Artificial general intelligence
- Singleton (global governance)
https://en.wikipedia.org/wiki/Artificial_intelligence_in_government
Commonly recommended practices for governing AI responsibly include:
- Manage AI models
- Data governance & security
- Algorithmic bias mitigation
- Implement frameworks
- Explainability & transparency
- Engage stakeholders
- Continuous monitoring