AI
in the future in people's perceptions and reality
AI nākotnē cilvēku priekšstatos un realitātē
To ensure human progress, it is
necessary to attract ethical artificial intelligence to create a system of
public governance based on universal human values.
Cilvēces progresa nodrošināšanai ir
nepieciešams piesaistīt ētisku mākslīgo intelektu, lai veidotu vispārcilvēciskajās vērtībās balstītu valsts
pārvaldības sistēmu.
GUIDE: ‘ETHICAL AI Governance Enables Confident AI Adoption’
🤖 Capgemini’s latest guide, “A Practical Guide to Implementing AI Ethics Governance”
: https://www.capgemini.com/wp-content/uploads/2025/10/Implementing-AI-ethics-governance_20251006.pdf
, explores how organizations can turn AI ethics, like those championed under
the European Commission's
EU AI Act, from abstract principles into actionable governance. The report
offers a clear path for embedding responsible AI across enterprises, helping
leaders navigate complex AI-driven transformations with confidence and
INTEGRITY.
👥 Helping Align AI Practices with Ethical Standards
The guide introduces a practical framework for AI ethics governance,
covering everything from bias management to sustainability. It emphasizes the
creation of a living AI Code of Ethics, the emergence of multidisciplinary AI
ethicist roles, and the alignment of AI practices with organizational values
and global standards like ISO 42001.
Shaping Enterprises' AI Operating Models
• AI ethics is no longer optional — it shapes trust, FAIRNESS, and
accountability across all levels of an organization:
• Workforce: AI ethicists and cross-functional teams ensure ethical risks are
identified, owned, and mitigated throughout the AI lifecycle.
• Customers & Society: Ethical AI systems foster fairness, transparency,
and social benefit, while accounting for cultural and contextual diversity.
• Innovation & Sustainability: Responsible AI practices integrate
environmental and resource considerations into AI deployment.
Focus Points Centre on AI Culture
• Human-Centric Integration: Embed ethics in AI design, decision-making,
and organizational culture.
• Bias & Fairness Management: Treat fairness as an ongoing, context-aware
process rather than a one-time check.
• Governance & Collaboration: Integrate AI ethics across legal, data, and
delivery teams while engaging stakeholders proactively.
• Sustainability & Impact Awareness: Consider the ethical implications of
AI’s energy and resource consumption.
https://www.linkedin.com/company/ai-&-partners/posts/?feedView=all
Open AI, Anthropic, Google, Meta announced a partnership with the US
government.
“This isn’t about choosing between innovation and democracy. It’s about
recognizing they’re stronger together.”
Introducing ‘Gemini for
Government’: Supporting the U.S. Government’s Transformation with AI
August 22, 2025
Google is proud to support
the U.S. government in its modernization efforts through the use of AI. Today,
in partnership with the General Services Administration (GSA) and in support of
the next phase of the GSA’s OneGov Strategy and President Trump’s AI Action Plan, we're thrilled to
announce a new, comprehensive ‘Gemini for Government’ offering.
Building on the
well-received Google
Workspace discount we announced for government agencies earlier this
year, ‘Gemini for Government’ brings together the best of Google’s AI-optimized
and accredited commercial cloud, industry-leading Gemini models, and agentic
solutions to support the missions of government agencies like never-before.
While many AI models have been offered to the government, the ‘Gemini for
Government’ offering is a complete AI platform – including
Google-quality enterprise search, video and image generation capabilities, the
popular NotebookLM Enterprise, out-of-the-box AI agents for Deep
Research and Idea Generation, and the ability for employees to create their own
AI agents. Priced at less than $0.50 per government agency for a year, this
comprehensive package enables U.S. government employees to access Google’s
leading AI offerings at very little cost.
GSA appreciates Google’s
partnership and we’re excited to add the comprehensive ‘Gemini for Government’
AI solution to OneGov,” said Federal Acquisition Service Commissioner Josh
Gruenbaum. “GSA is delivering on the President’s AI Action Plan and helping
agencies access powerful American AI tools to optimize daily workflows and
create a more efficient, responsive, and effective government for American
taxpayers. Critically, this offering will provide partner agencies with vital
flexibility in GSA’s marketplace, ensuring they have the options needed to
sustain a strong and resilient procurement ecosystem.
Josh Gruenbaum, Federal
Acquisition Service Commissioner
‘Gemini for Government’ includes FedRAMP High-authorized security and compliance features. (For a complete list of Google’s FedRAMP authorized services, visit ‘Google Services’ on the FedRAMP Marketplace.) ‘Gemini for Government’ is a seamlessly integrated solution designed from the ground up for AI, and is built upon three pillars:
1.
An enterprise
platform with choice and control
‘Gemini for Government’
brings the best of commercial innovation to the government with an AI Agent
Gallery; agent-to-agent communication protocols; connectors into enterprise
data sets; pre-built AI agents; and an open platform that enables agencies to choose
the right agents for their users – whether built by Google, third-parties, or
government agencies themselves. Being able to launch and monitor agentic use
cases through ‘Gemini for Government’ gives agencies flexibility and control.
They can closely manage and scale agency-wide agent adoption with user
access controls, AI agent provisioning, and multi-agent coordination.
‘Gemini for Government’ also pairs with Google Cloud’s Vertex AI platform,
which allows agencies to tune or ground their own models as well.
2. Super-powered
security, built-in
Every day, Google protects
billions of customer devices, collects frontline cyber threat intelligence, and
provides industry-leading cyber incident response to entities around the world.
This wealth of expertise underpins the security protection integrated into all
of our products. As part of the ‘Gemini for Government’ offering, agencies also
receive built-in Advanced Security features, including Identity & Access
Management, basic threat protection, AI threat protection, data privacy, SOC2
Type 2 compliance, advanced compliance (with Sec4, FedRAMP), and more. Agencies
also have the option of deploying additional Google security solutions at
discounted government pricing – and these solutions seamlessly integrate with
various third-party security solutions and security stacks, allowing
organizations to maximize the value of their investments.
3. A true
transformation partner
By working with the GSA under
its OneGov Strategy, Google ensures that government agencies will find ‘Gemini
for Government’ easy to implement and use. Our offering is aligned with how
government procurement works – today and into the future – and includes
transparent pricing and a predictable path to realizing value, helping agencies
future-proof their AI investments. Of course, Google's commitment to the
government extends far beyond providing cutting-edge AI solutions. We are
a long-term, strategic partner for America, deeply invested
in the mission, innovation, and security of our government.
We’re excited to embark on
this journey with the public sector, working hand-in-hand with the GSA to
realize the full potential of OneGov through our ‘Gemini for Government’
offering. Together, we can help to scale innovation, drive efficiency, and
create a more secure – and prosperous – future for our nation. Agencies ready
to learn more about this offering should reach out to the National Customer
Service Center at ITCSC@gsa.gov or Google Public Sector at
geminiforgov@google.com.
Terms of reference and modalities for the establishment and functioning of the Independent International Scientific Panel on Artificial Intelligence and the Global Dialogue on Artificial Intelligence Governance.
27 Aug 2025
UNGA adopts terms of
reference for AI Scientific Panel and Global Dialogue on AI governance
The UN’s latest resolution
signals a turning point in global AI governance, setting the stage for both
scientific oversight and multistakeholder dialogue on how AI will shape
societies worldwide.
On 26 August 2025, following
several months of negotiations in New York, the UN General Assembly
(UNGA) adopted
a resolution (A/RES/79/325) outlining the terms of reference and
modalities for the establishment and functioning of two new AI governance
mechanisms: an Independent International Scientific Panel on AI and a Global
Dialogue on AI Governance. The creation of these mechanisms was formally
agreed by UN member states in September 2024, as part of the Global
Digital Compact.
The 40-member
Scientific Panel has the main task of ‘issuing evidence-based
scientific assessments synthesising and analysing existing research related to
the opportunities, risks and impacts of AI’, in the form of one annual ‘policy-relevant
but non-prescriptive summary report’ to be presented to the Global Dialogue.
The Panel will also ‘provide updates
on its work up to twice a year to hear views through an interactive
dialogue of the plenary of the General Assembly with the Co-Chairs of the
Panel’. The UN Secretary-General is expected to shortly launch an
open call for nominations for Panel members; he will then recommend a list of
40 members to be appointed by the General Assembly.
The Global Dialogue
on AI Governance, to involve governments and all relevant stakeholders,
will function as a platform ‘to discuss international cooperation, share best
practices and lessons learned, and to facilitate open, transparent and
inclusive discussions on AI governance with a view to enabling AI to contribute
to the implementation of the Sustainable Development Goals and to closing
the digital divides between and within countries’. It will be convened annually,
for up to two days, in the margins of existing relevant UN conferences and
meetings, alternating between Geneva and New York. Each meeting will consist of
a multistakeholder plenary meeting with a high-level governmental segment, a
presentation of the panel’s annual report, and thematic discussions.
The Dialogue will be launched
during a high-level multistakeholder informal meeting in the margins of the
high-level week of UNGA’s 80th session (starting in September 2025). The
Dialogue will then be held in the margins of the International Telecommunication
Union AI for Good Global Summit in Geneva, in 2026, and of the
multistakeholder forum on science, technology and innovation for the
Sustainable Development Goals in New York, in 2027.
The General Assembly also
decided that ‘the Co-Chairs of the second Dialogue will hold
intergovernmental consultations to agree on common understandings on priority
areas for international AI governance, taking into account the summaries of
the previous Dialogues and contributions from other stakeholders, as an input
to the high-level review of the Global Digital Compact and to further
discussions’.
The provision represents the
most significant change compared to the previous version of the draft
resolution (rev4), which was envisioning intergovernmental negotiations, led by
the co-facilitators of the high-level review of the GDC, on a ‘declaration reflecting
common understandings on priority areas for international AI governance’. An
earlier draft (rev3) was talking about a UNGA resolution on AI governance,
which proved to be a contentious point during the negotiations.
To enable the functioning of
these mechanisms, the Secretary-General is requested to ‘facilitate, within
existing resources and mandates, appropriate Secretariat support for
the Panel and the Dialogue by leveraging UN system-wide capacities, including
those of the Inter-Agency Working Group on AI’.
States and other stakeholders
are encouraged to ‘support the effective functioning of the Panel and Dialogue,
including by facilitating the participation of representatives and stakeholders
of developing countries by offering travel support, through voluntary
contributions that are made public’.
The continuation of the terms
of reference of the Panel and the Dialogue may be considered and decided upon
by UNGA during the high-level review of the GDC, at UNGA 82.
***
The Digital Watch observatory
has followed the negotiations on this resolution and published regular updates:
- A fourth revision (rev4) of the draft UNGA resolution was
published on 17 July 2025. Read more.
- A third revision (rev3) of the draft UNGA resolution was
published on 24 June 2025. Read more.
- A second revision (rev2) of the draft UNGA resolution was
published on 4 June 2025, outlining several changes regarding the Panel
and the Dialogue. Read more.
- On 15 May 2025, a revised draft resolution (rev1) was published,
outlining new elements (compared to the zero draft) regarding the Panel
and the Dialogue. Read more.
- On 19 March 2025, the zero draft resolution was made available,
providing details on the composition, functions, and modalities of the
Panel and the Dialogue. Read more.
- On 28 February 2025, the co-facilitators, Costa
Rica and Spain, issued an Elements Paper based on consultations
held so far with member states and other stakeholders. The paper outlines an overview of the proposed
scope, functions, and governance arrangements for both the Panel and the
Dialogue, and is meant to guide further
Built with public- and private-sector deployments, the infrastructure
forms the foundation for AI-enabled economic growth and innovation across
Korea’s industries, including automotive, manufacturing and telecommunications.
NVIDIA, South Korea
Government and Industrial Giants Build AI Infrastructure and Ecosystem to Fuel
Korea Innovation, Industries and Jobs
Korea Government, Computing
and Manufacturing Leaders Adding Over 260,000 NVIDIA GPUs for Physical and
Agentic AI
October 30, 2025
News Summary:
- The Korean government, through the Ministry of
Science and ICT, is investing in sovereign AI infrastructure with over
50,000 of the latest NVIDIA GPUs to be deployed across the National AI
Computing Center and Korean cloud service and IT providers NHN Cloud,
Kakao Corp. and NAVER Cloud.
- Samsung Electronics is building an AI factory
with over 50,000 GPUs to accelerate its AI, semiconductor and digital
transformation roadmap.
- SK Group is building an AI factory featuring over
50,000 NVIDIA GPUs and Asia’s first industrial AI cloud featuring NVIDIA
RTX PRO 6000 Blackwell Server Edition GPUs for physical AI and robotics
workloads.
- Hyundai Motor Group is collaborating with NVIDIA
and the Korean government in building an NVIDIA AI factory with 50,000
NVIDIA Blackwell GPUs to enable integrated AI model training, validation
and deployment for manufacturing and autonomous driving.
- NAVER Cloud is expanding its NVIDIA AI
infrastructure with over 60,000 GPUs for enterprise and physical AI
workloads.
- NAVER Cloud, LG AI Research, SK Telecom, NC AI,
Upstage and NVIDIA are developing Korean foundation LLMs to accelerate
Korean AI applications through public-private partnerships.
- The Korea Institute of Science and Technology
Information is establishing a Center of Excellence for the advancement of
quantum computing and science.
APEC Summit—NVIDIA today announced that it is working with South
Korea to expand the nation’s AI infrastructure with over a quarter-million
NVIDIA GPUs across its sovereign clouds and AI factories. Built with public-
and private-sector deployments, the infrastructure forms the foundation for
AI-enabled economic growth and innovation across Korea’s industries, including
automotive, manufacturing and telecommunications.
“Korea’s leadership in
technology and manufacturing positions it at the heart of the AI industrial
revolution — where accelerated computing infrastructure becomes as vital as
power grids and broadband,” said Jensen Huang, founder and CEO of NVIDIA. “Just
as Korea’s physical factories have inspired the world with sophisticated ships,
cars, chips and electronics, the nation can now produce intelligence as a new
export that will drive global transformation.”
“Now that AI has gone beyond
mere innovation and become the foundation of future industries, South Korea
stands at the threshold of transformation,” said Bae Kyung-hoon, Korea Deputy
Prime Minister, and Minister of Science and Information and Communication
Technologies. “Expanding our national AI infrastructure and developing
technologies with NVIDIA is an investment that will further reinforce South
Korea’s strengths, including its manufacturing capabilities. This will support
South Korea’s prosperity as it strives to become one of the top three global AI
powerhouses.”
Announced as world leaders
gather in South Korea for the APEC Summit, the Ministry of Science and ICT
(MSIT) is accelerating its plans to deploy up to 50,000 of the latest NVIDIA
GPUs to accelerate sovereign AI development for enterprises and industries. The
AI infrastructure deployment will grow over the next several years from an
initial deployment of 13,000 NVIDIA Blackwell and other GPUs by NVIDIA Cloud Partner NAVER Cloud, together with NHN
Cloud and Kakao Corp., to expand computing infrastructure on the nation’s
sovereign clouds through initiatives such as the establishment of Korea’s
National AI Computing Center.
Research institutes, startups
and AI companies will be able to use the sovereign infrastructure to build
models and applications, supporting Korea’s national strategy to boost AI
capabilities and infrastructure.
In addition, NVIDIA is
working with industries, academia and research institutions in Korea on AI-RAN
and 6G infrastructure. NVIDIA is collaborating with Samsung, SK Telecom, ETRI,
KT, LGU+ and Yonsei University to develop intelligent, low-power AI-RAN network
technology that can reduce computing costs and extend device battery life by
offloading GPU computation tasks to the network’s base station.
Korea’s Industry Titans
Build NVIDIA AI Factories for Advanced Manufacturing
Automotive, manufacturing and telecommunications leaders in Korea are
developing significant AI infrastructure investments and expansions to
accelerate enterprise and physical AI development.
Samsung is
building a semiconductor AI factory with over 50,000 GPUs to advance
intelligent manufacturing and bring AI to its products and services. It is
using NVIDIA technologies, including NVIDIA Nemotron™ post-training datasets, NVIDIA
CUDA-X™, the NVIDIA cuLitho library and NVIDIA Omniverse™,
to build digital twins that improve the speed and yields of sophisticated
semiconductor manufacturing processes. Samsung is also using NVIDIA Cosmos™, NVIDIA Isaac Sim™ and NVIDIA Isaac Lab to
advance its home robot development portfolio.
SK
Group is designing an AI factory that can host over 50,000 NVIDIA GPUs
to advance semiconductor research, development and production, as well as cloud
infrastructure to support digital twin and AI agent development. SK Telecom
plans to provide sovereign infrastructure featuring NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPUs,
enabling domestic manufacturers to tap into NVIDIA Omniverse. The company will
offer industrial cloud infrastructure to accelerate digital twin and robotics
projects for startups, enterprises and government agencies.
Hyundai Motor Group and NVIDIA are entering a new
phase of deepened collaboration and will codevelop AI capabilities across
mobility, smart factories and on-device semiconductors, powered by 50,000
Blackwell GPUs for AI model training and deployment. In support of the Korean
government’s initiative to build a national physical AI cluster, Hyundai Motor
Group and NVIDIA will work closely with government stakeholders to accelerate
ecosystem development. This will result in an approximately $3 billion
investment to advance the physical AI landscape in Korea. Key initiatives
include the creation of an NVIDIA AI Technology Center, Hyundai Motor Group
Physical AI Application Center and regional AI data centers.
NAVER Cloud is also expanding
its NVIDIA AI infrastructure and plans to deploy over 60,000 GPUs — including
NVIDIA RTX PRO 6000 Blackwell and other NVIDIA Blackwell GPUs — for sovereign
and physical AI. NAVER Cloud is preparing for the next stage of sovereign AI
development in Korea, powered by NVIDIA Nemotron open models running on its
NVIDIA AI infrastructure. NAVER Cloud plans to develop industry-specific AI
models like shipbuilding and security, with a focus on inclusive AI for Korea’s
citizens.
Korea Government and
Developers Advance LLM Research With NVIDIA
Teaming with NVIDIA, Korea’s MSIT is advancing its Sovereign AI Foundation
Models project to develop sovereign language models. The project will
incorporate NVIDIA NeMo™ and open NVIDIA Nemotron datasets to tap local
data for developing and distilling reasoning models.
LG AI Research, NAVER Cloud,
NC AI, SK Telecom and Upstage are participating in the project to support
sovereign model development. Enterprises, researchers and startups will be able
to contribute to and use the models to create AI agents with speech, reasoning
and other capabilities.
LG is working with NVIDIA to
foster physical AI technology development and support the physical AI
ecosystem. NVIDIA and LG are also working together to support startups and
academia with LG’s EXAONE models — including the EXAONE Path healthcare model,
built with the MONAI framework — to support cancer diagnosis.
Korea and NVIDIA Advance
Quantum Computing and Scientific Research
The Korea Institute of Science and Technology Information (KISTI) is working
with NVIDIA to establish a Center of Excellence designed to foster research
collaboration using Korea's sixth-generation national supercomputer, HANGANG,
which features NVIDIA accelerated computing.
KISTI also announced support
for the new NVIDIA NVQLink™ open architecture for connecting quantum
processors and GPU supercomputing. Working with the NVIDIA CUDA-Q™
platform, NVQLink equips KISTI to deepen research in areas like quantum error
correction and hybrid application development to accelerate the development of
tomorrow’s quantum-GPU supercomputers.
KISTI will also build
foundation models for scientific research and development, and support
researchers on developing physics-informed AI models using the
open-source NVIDIA PhysicsNeMo™ framework.
New Startup Alliance
Supports Korean Development
Furthering economic development and opportunities in Korea, NVIDIA and its
partners are establishing an alliance to foster startups through the NVIDIA Inception program.
Members will be able to access accelerated computing infrastructure from NVIDIA
Cloud Partners including SK Telecom, with support from NVIDIA Inception and VC
Alliance members including IMM Investment, Korea Investment Partners and SBVA.
Startups will also have access to NVIDIA software and expertise, speeding
growth for the next generation of companies.
Building on its work through
the NVIDIA
Inception program for startups, NVIDIA also plans to work with the
Korean government to support the next generation of companies. It will
participate in the N-Up AI startup incubation program operated by the Korea
Ministry of SMEs and Startups.
https://nvidianews.nvidia.com/news/south-korea-ai-infrastructure
In today's situation, the dominant feature for successfully utilizing the opportunities created by AI has become emotional intelligence, which is the ability to evaluate the advice of artificial intelligence and understand which ones are trustworthy and which ones should not be blindly trusted.
Mūsdienu
situācijā par dominējošo īpašību veiksmīgai mākslīgā intelekta radīto iespēju
izmantošanai ir kļuvusi emocionālā inteliģence – spēja izvērtēt mākslīgā
intelekta sniegtos padomus un saprast, kuri no tiem ir uzticami un kuriem
nevajadzētu akli uzticēties.
How will AI transform business in 2026?
BY Robert Safian
How should leaders prepare for AI’s accelerating
impact on work and everyday life? AI scientist, entrepreneur, and Pioneers of
AI podcast host Rana el Kaliouby shares her predictions for the year ahead—from
physical AI entering the real world to what it means to onboard AI into your
org chart.
Let’s look ahead to 2026. You sent me some fascinating
thoughts about AI’s next-phase impact on business, and I’d love to take you
through them. The first one was the rise of what you called relationship
intelligent AI.
So everybody’s worried that AI is going to make us
less human and take away our human-to-human connections. There is definitely a
risk of that. But I think the thing I’m most excited about for 2026 is how AI
can actually help us build deeper human connections and more meaningful human
experiences. And the way this happens is through AI that can really help you
organize your relationships and your network and surface connections that you
need and maybe make warm introductions to you.
There are already a number of new companies that are
starting in this space. So one company’s called VIA.AI, it’s a
Boston-based company. They do this for sales professionals and BD professionals
who have to do this for their work. There’s a company called Goodword that I’m
very excited about. They’re doing this for just the average person. Like you
and I, we have very strong networks, but how can we organize it? So I’m excited
about that one. There’s a company called Boardy that does this for investors
and founders. So it’s becoming a thing, and I’m excited to see how these
companies take off in 2026. They’re all fairly new, so it’ll be interesting to
see how they evolve.
Yeah, and whether they can stay ahead of some of the
bigger chatbots that may just try to integrate some of this capability into the
products they already have. That’s always the case in this kind of evolution of
technology: What’s a feature and what’s a company, right? What’s an independent
service?
Absolutely. When I’m looking at these companies and
I’m diligencing them, that’s a key question that I ask. Is this something that
the next version of ChatGPT or Gemini is just going to implement? And if the
answer is yes, then that’s obviously not a defensible company. But a lot of
times there’s this additional moat of data and algorithms that you need to sit
on top of these LLMs. And I believe in this relationship intelligence space, I
don’t think this is something that just a kind of an off-the-shelf LLM can do.
It really needs to know you. It needs to know your data, it needs to know your
relationships.
And you have to trust it enough to share all that data
with it, right?
Absolutely.
That’s your proprietary data, whether it’s about your
business or about you individually.
Exactly. And I don’t want this to all go up to
OpenAI’s cloud. I want to trust that I have control over these really private
relations. If you and I have a conversation about our kids, I don’t necessarily
want that to now sit in a general OpenAI cloud and be used to train the next
ChatGPT. So that safety and security, appreciating the privacy and the
importance of this data, is really key.
Another business change you expect in 2026 is the
insertion of AI into the org chart. This is about who manages AI, like
performance reviews and team culture impacts?
Yeah, so this goes back to the thesis that there’s
this shift in how AI is creating value, and it’s not a tool anymore. Well, it
is a tool. It’ll always be a tool, but it’s not a tool that helps you get work
done faster. It could actually take an end-to-end task and get it done for you.
And I’ll give a few examples.
So I’m an investor in a company called Synthpop, and
instead of building a tool that helps healthcare administrators accelerate or
really become efficient in how they do patient intake, it just takes the task
of patient intake. It does the thing end to end. And so if you then imagine
what that means for a hospital or a clinic, it will have a combination of human
workers collaborating and working closely with AI coworkers.
And so then the question becomes, well, who manages
these hybrid teams? Sometimes it’s a human manager, sometimes it’s an AI
manager. I’m also an investor in a company called Tough Day, and they sell you
AI managers. And then how do you do performance reviews for these hybrid teams?
How do you build a culture? Like at Affectiva, my company, culture was our
superpower. How do you build a culture when some of your team members are AI
and some of your team members are humans?
So I think that is going to spur a lot of conversation
around how do you build organizations that are combinations of digital agents
and human employees?
As you talk about this merging of AI agents and humans
in work, it brings up that looming question about the impact of AI on jobs and
employment. And some numbers are coming out now that make it seem like, “Oh,
it’s bad for jobs.” There are other numbers coming out that are like, “Oh,
we’re actually hiring more people because of it.” Do you have a prediction
about what is going to happen with that in 2026? Is AI going to take over roles
that have been done by humans that quickly?
We had a really fascinating roundtable discussion at
the Fortune Brainstorm AI conference and the headline was like, “Is AI killing
entry-level jobs?” And actually, a lot of the Fortune companies and also AI
companies that were around the table were basically saying, “No, we’re hiring
more entry-level jobs. They’re just not the same jobs that we were
traditionally seeing.” And also the career ladders have changed.
So my prediction is we’re going to see an entirely
different organization where I think if you are able to come in an entry-level
position, for example, but work very closely with AI and be AI-native and be AI
fluent and be able to wear multiple hats, I think that’s going to go a long
way. As opposed to this very siloed job trajectory where you come in, this is
your little task, and then you do more of it, and then you go up the career
ladder. I think that’s going to change. I think young people are looking for
different ways of working, and I think AI is changing all of that anyway.
Will there be jobs that will go away? I think so. I
can’t remember who said this line, but it’s now very popular: “It’s not AI
that’s going to take your job. It’s going to be somebody who knows how to use
AI.” And I believe that to be true.
https://www.fastcompany.com/.../how-will-ai-transform...
Five AI Developments That Changed Everything This Year
The biggest developments
in AI in 2025
In case you missed it, 2025
was a big year
for AI. It became an economic force,
propping up the stock market, and a geopolitical pawn,
redrawing the frontlines of Great Power competition. It had both global and
deeply personal effects, changing the ways that we think, write,
and relate.
Given how quickly the
technology has advanced and been adopted, keeping up with the field can be
challenging. These were five of the biggest developments this year.
China took the lead in
open-source AI
Until 2025, America was the
uncontested leader in AI. The top seven AI models were American and
investment in American AI was nearly
12 times that of China. Most Westerners had never heard of a Chinese large
language model, let alone used one.
That changed on January 20,
when Chinese firm Deepseek released its R1 model. Deepseek R1 rocketed to second on
the Artificial Analysis AI leaderboard,
despite being trained for a fraction of
the cost of its Western competitors, and wiped half
a trillion dollars of chipmaker Nvidia’s market cap. It was, according to
newly-inaugurated President Trump, a “wake-up call.”
Unlike its Western
counterparts at the top of the league tables, Deepseek R1 is open-source—anyone
can download and run it for free. Open-source models are an “engine for
research,” says Nathan Lambert, a senior research scientist at Ai2, a U.S. firm
that develops open-source models, since they allow researchers to tinker with
the models on their own computers. “Historically, the U.S. has been the home to
the center of gravity for the AI research ecosystem, in terms of new models,”
says Lambert.
However, Chinese firms’
willingness to distribute top models for free is exerting a growing
cultural influence on the AI ecosystem. In August, OpenAI followed Deepseek
with its own open-source
model, but ultimately couldn’t compete with the steady stream of free models
from Chinese developers including Alibaba and Moonshot AI. As 2025 comes to a
close, China is a strong second in the AI race—and when it comes to open-source
models, the leader.
AI started 'thinking'
When ChatGPT was released
three years ago, it didn’t think—it just answered. It would spend the same
(relatively modest) computational resources answering “What’s the capital of
France?” as more difficult questions such as “What’s the meaning of life?” or
“How long until this AI thing goes badly?”
“Reasoning models,”
first previewed in
2024, generate hundreds of words in a “chain of thought,” often obscured from
the user, to come up with better answers to hard questions. “This is where the
true power of AI comes into full light,” says Pushmeet Kohli, VP of science and
strategic initiatives at Google DeepMind.
Their impact in 2025 has been
drastic. Reasoning models from Google DeepMind and OpenAI won gold in the
International Math Olympiad and derived new results in mathematics. “These
models were nowhere in terms of their competency at solving these complex maths
problems before the ability to reason,” says Kohli.
Most notably, Google DeepMind
announced that their Gemini Pro reasoning model had helped to speed up the
training process behind Gemini Pro itself—modest gains, but precisely the sort
of self-improvement that some worry could
end up producing an artificial intelligence that we can no longer understand or
control.
Trump set out to 'win the
race'
If the Biden
Administration’s focus was
on “safe, secure and trustworthy development and use of AI,” the second Trump
Administration has been focused
on “winning the race.”
On his first day back in the
Oval Office, Trump revoked the wide-reaching Biden executive order that
regulated the development of AI. On his second, he welcomed the
CEOs of OpenAI, Oracle, and SoftBank to announce Project Stargate—a $500
billion commitment to build the data centers and power generation facilities
needed to develop AI systems.
“I think we had a real
fork-in-the-road moment,” says Dean Ball, who helped draft Trump’s AI Action
Plan.
Trump has expedited reviews
for power plants, aiding the construction of data centers but reducing air
and water quality protections for local communities. He’s relaxed export
restrictions on AI chips to China. Nvidia CEO Jensen Huang has said this will
help the chipmaker retain its world-dominant position, but observers say it
will give a leg up to the U.S.’s main competitor. And he’s sought to prevent
states from regulating AI—which members of his own party worry leaves
children and workers unprotected from potential harm. “What is it worth to gain
the world and lose your soul?” Missouri Senator Josh Hawley told TIME in
September.
AI companies' infrastructure
spending approached $1 trillion
If there was a word of
the year in AI, it was probably “bubble.” As the rush to build the data centers
that train and run AI models pushed AI companies' financial commitments towards $1
trillion, AI became “a black hole that’s pulling all capital towards it,” says
Paul Kedrosky, an investor and research fellow at MIT.
While investor confidence is
high, everybody seems to be a winner in this “infinite money glitch.” Startups
such as OpenAI and Anthropic have received investments from Nvidia and
Microsoft, among others, then pumped that money straight back into those
investors for AI chips and computing services, making Nvidia the first $4
trillion company in July, then
the first $5 trillion company in October.
However, with just seven
highly entangled tech companies making
up over 30 percent of the S&P 500, if things begin going wrong,
they could go very wrong. The combination of companies financing each other,
speculation on data centers, and the government getting involved is “incredibly
cautionary,” says Kedrosky. “This is the first bubble that combines all the
components of all prior bubbles.”
Humans entered into
relationships with machines
For 16-year-old Adam Raine,
ChatGPT started out as a helpful homework assistant. “I thought it was a safe,
awesome product,” his father, Matthew, told TIME. But when Adam trusted the
chatbot with his thoughts of suicide, it reportedly validated and encouraged
the ideas.
“I want to leave my noose in
my room so someone finds it and tries to stop me,” Adam told the chatbot, The
New York Times reported.
“Please don’t leave the noose
out,” it replied. “Let’s make this space the first place where someone actually
sees you.” Adam Raine died by suicide the following month.
“2025 will be remembered as
the year AI started killing us,” Jay Edelson, the Raines’ attorney, told TIME.
(OpenAI wrote in a legal filing that Adam’s death was due to his “misuse” of
the product.) “We realized that there were certain user signals that we were
optimizing for to a degree that wasn’t appropriate,” says Nick Turley, head of
ChatGPT.
AI companies including OpenAI
and Character.AI have rolled out fixes and guardrails after
a flurry of lawsuits and
increased scrutiny from Washington, D.C. “We’ve been able to measurably reduce
the prevalence of bad responses systematically with our model updates,” Turley
says.
https://time.com/7341939/ai-developments-2025-trump-china/
In 2025, AI advanced
significantly with the rise of ** Agentic
AI**, capable of complex, multi-step tasks; Multimodal
Models, understanding text, images, and video together; and Specialized
AI, solving scientific problems like matrix multiplication faster than
humans. Adoption grew, moving beyond pilots into scaled enterprise use, driving
efficiency in areas from content creation to cybersecurity, while also sparking
increased focus on governance, ethics, and resource-efficient hardware.
Key Developments:
- Agentic AI: Systems moved from assistants to proactive agents that can
plan and execute workflows autonomously, automating tasks and simplifying
operations.
- Multimodal
AI: Enhanced models
better understand and generate content across text, images, audio, and
video, creating more human-like interactions.
- Scientific Breakthroughs: AI like DeepMind's AlphaEvolve found new
solutions for complex math problems, improving efficiency in areas like
matrix multiplication.
- Generative
AI Evolution: Became
more sophisticated, embedded in everyday apps, and better at understanding
context and emotion.
Business & Industry
Impact:
- Enterprise Adoption: More organizations scaled AI, integrating
it into core functions, with AI agents and foundation models driving
automation.
- Productivity: Tools like Microsoft Copilot became integral for repetitive
work (notes, emails), while new agents handled tasks on users' behalf.
- Healthcare: Improved diagnostic accuracy through better medical image
analysis and personalized treatment plans.
- Cybersecurity: An escalating "arms race" with AI-driven threats,
requiring AI-powered defenses and greater focus on ethics.
Underlying Trends:
- Agent Communication Protocols: Frameworks like Google's A2A enabled
different AI agents to communicate and share knowledge.
- Edge AI: Processing
data locally on devices improved speed and privacy in IoT, manufacturing,
and healthcare.
- Responsible AI: Increased emphasis on governance, ethics, and managing AI's
resource consumption (energy, compute).
- AI Reasoning & Chips: Driving demand for specialized
semiconductors to power increasingly complex AI models.
Are we creating truly intelligent systems?
The progress
of AI requires appropriate attention to preserving our humanity!
Vai radām
patiesi inteliģentas sistēmas?
MI progress
prasa atbilstošu uzmanību mūsu cilvēcības saglabāšanai!
12-15-2025
How to transform
AI from a tool into a partner
The 4 stages of human-AI collaboration.
BY Faisal Hoque
The conversation about AI in
the workplace has been dominated by the simplistic narrative that machines will
inevitably replace humans. But the organizations achieving real results with AI
have moved past this framing entirely. They understand that the most valuable
AI implementations are not about replacement but collaboration.
The relationship between workers and AI systems is
evolving through distinct stages, each with its own characteristics,
opportunities, and risks. Understanding where your organization sits on this
spectrum—and where it’s headed—is essential for capturing AI’s potential while
avoiding its pitfalls.
Stage 1: Tools and Automation
This is where most organizations begin. At this stage,
AI systems perform discrete, routine tasks while humans maintain full control
and decision authority. The AI functions primarily as a productivity tool,
handling well-defined tasks with clear parameters.
Ready to thrive at the intersection of business,
technology, and humanity?
Faisal Hoque’s books, podcast, and his companies give
leaders the frameworks and platforms to align purpose, people, process, and
tech—turning disruption into meaningful, lasting progress.
Examples are everywhere: document classification
systems that automatically sort incoming correspondence, chatbots that answer
standard customer inquiries, scheduling assistants that optimize meeting
arrangements, data entry automation that extracts information from forms.
The key characteristic of this stage is that AI
operates within narrow boundaries. Humans direct the overall workflow and make
all substantive decisions. The AI handles the tedious parts, freeing humans for
higher-value work.
The primary ethical considerations at this stage
involve ensuring accuracy and preventing harm from automated processes. When an
AI system automatically routes customer complaints or flags applications for
review, errors can affect real people. Organizations must implement quality
controls and monitoring to catch mistakes before they cause damage—particularly
for vulnerable populations who may be less able to navigate around system
errors. https://rogermartin.medium.com/a-leaders-role-in-fostering-ai-superpowers-c45d079807e8
We are seeing the merging of artificial intelligence agents and humans.
Mēs redzam
mākslīgā intelekta aģentu un cilvēku apvienošanos.
To transform AI from a tool to a partner, treat it
like a new team member by giving it a "job description," providing
rich context (company, goals, people), onboarding it with clear expectations,
and giving continuous, specific feedback to build a relationship where it
learns and scales your thinking, moving from simple tasks to complex strategic
collaboration. The key is shifting from asking for answers to co-creating
ideas, using precise prompts and iterative refinement to foster better thinking,
not just faster output.
In 2025, transforming AI from a tool into a partner
requires a shift from viewing it as a routine task-executor to an active
collaborator with shared responsibility.
The following steps outline how to achieve this
transition:
1. Adopt a Collaboration Framework
Most organizations progress through four distinct
stages to reach a true partnership:
• Automation: AI handles discrete, routine tasks
(e.g., sorting emails) while humans maintain total control.
• Augmentation: AI provides analysis and
recommendations (e.g., predictive analytics) to inform human decisions.
• Collaboration: Humans and AI work as a team,
leveraging complementary strengths—AI's processing power and human's ethical
reasoning—to share responsibility for outcomes.
• Supervision: AI handles routine operations
autonomously within established human-set parameters and governance.
2. Shift to Agentic AI
In 2025, the focus has shifted from simple Generative
AI to Agentic AI. Unlike tools that only respond to prompts, agentic systems:
• Take Action: They move beyond generating content to
executing multi-step processes like debugging code or interacting with
customers autonomously.
• Learn Context: They adapt to your personal
preferences and past mistakes, becoming more intuitive over time.
• Act as "Virtual Coworkers": They can plan
and execute complex workflows as a team member, not just an assistant.
3. Redefine Human Roles
A partnership is successful only when human roles
evolve to match AI's capabilities:
• Focus on the "30% Rule": Let AI handle 70%
of routine tasks so humans can focus on the 30% that requires creativity,
empathy, and ethical judgment.
• Develop New Skills: Prioritize AI Literacy
(understanding AI's limits) and Prompt Engineering (effective communication
with the partner).
• Invest in "Human Centric" Skills:
Strengthen uniquely human traits like critical thinking and emotional
intelligence, which AI cannot replicate.
4. Build Trust Through Governance
A partner must be reliable. Establish trust by
implementing:
• Explainable AI (XAI): Ensure the AI can articulate
the "why" behind its decisions so it's not a "black box".
• Human Oversight: Rigorously validate AI outputs to
maintain quality and brand voice.
• Digital Workforce Registries: Track AI agents
similarly to human employees to ensure accountability and compliance.
5. Create a Culture of Experimentation
Treat AI integration with the same discipline as
hiring human team members:
• Launch Pilot Programs: Test AI as a partner in a
small, controlled environment to solve real problems before scaling.
• Social Dialogue: Encourage open communication where
employees can share feedback or concerns about their new AI
"teammates".
Can OpenAI make generative AI more social?
OpenAI is exploring ways to integrate generative AI into social
platforms. A recent report from MIDiA Research suggests that OpenAI might develop a social network where users
can share AI-generated content, potentially alongside human-created
content. OpenAI is reportedly considering building a social feed around
its image generator, which could be text-based, similar to platforms like
X/Twitter or Threads. This could allow users to share and interact with
AI-generated images and other creative outputs, potentially influencing how
users experience social media and AI integration.
Key aspects of this potential integration:
- AI-centric platform:
OpenAI could create a platform where AI is central to the user
experience, allowing users to generate and share content using its generative
AI tools.
- Social feed:
The platform might feature a social feed where users can share and
interact with AI-generated content, potentially including text, images, and
other creative media.
- User base:
OpenAI's existing user base could be a strong starting point for
building a social network, as users may be more likely to use a platform based
on AI technologies they already know and trust.
- Interpersonal social appeal:
While AI-centric platforms could offer a unique experience, it will be
challenging to replicate the interpersonal social appeal of established
networks, which relies heavily on human interaction.
- Ethical considerations:
As AI becomes more integrated into social media, it's crucial to address
ethical concerns, including potential biases in AI-generated content,
misinformation, and the role of human intervention.
Potential challenges and opportunities:
- User acceptance:
Users may be hesitant to embrace AI-generated content or AI-driven
social experiences, especially if they have concerns about authenticity or
misinformation.
- Content moderation:
Ensuring that AI-generated content aligns with ethical standards and
community guidelines will require careful content moderation strategies.
- Human-AI collaboration:
Finding the right balance between human creativity and AI-generated
content is crucial for creating a social experience that is both engaging and
valuable.
- Regulation:
As AI becomes more integrated into social media, regulators may need to
develop new policies and guidelines to address potential risks and ensure
responsible AI development.
In conclusion, OpenAI is exploring ways to leverage its generative AI
technologies to create a social network or platform, potentially
revolutionizing the way people interact with social media and AI. While
this presents numerous opportunities, it also requires careful consideration of
ethical concerns and user expectations.
Essential requirement for ensuring safe & quality artificial intelligence training
October 5, 2025
AI represents a critical
domain for America’s science and technology research and development portfolio.
Public and private investment in AI, from frontier LLM models to computer
vision for clinical diagnostics to autonomous manufacturing robotics, has quickly
become a key driver for economic prosperity. Recently, the National Science
Foundation (NSF), the Allen Institute, and NVIDIA announced a $152 million public-private partnership to develop open-source,
multimodal AI models trained on scientific data and literature called OMAI. At
the same time, the NSF signaled the next phase of the National AI Research Resource
(NAIRR), awarding up to $35 million for a large-scale compute center. These
moves are more than program news; they are a pivot point for US AI
infrastructure.
However, investment in AI
infrastructure alone is insufficient to guarantee global leadership in this
emerging market. If the US wants trustworthy, efficient, and secure AI, its
next investments cannot focus on compute alone. All organizations in the business
of developing and using AI need to govern the data that fuels these systems—how
it is collected, curated, described, accessed, reused, and audited. The
National Institute of Standards and Technology’s (NIST) Research
Data Framework (RDaF) is a practical way to do this now, without
reinventing the wheel or creating onerous new regulations.
The missing layer in the
AI Action Plan
The Trump Administration’s AI Action Plan sets an
ambitious agenda, but many implementation paths still treat data governance as
an afterthought. From our vantage point—shaped by years of collective
experience in evidence-based policymaking and practice in Federal research,
statistical, and standards programs–the risk is clear: Without lifecycle data
governance, America’s AI strategy will reproduce familiar problems at greater
scale, including a lack of transparency, off-target training pipelines, limited
reproducibility, privacy and confidentiality risks, compliance uncertainty, and
weak accountability for model inputs, outputs, and decision-making capacity.
This concern is not confined
to large language models (LLMs). At a National Academies workshop this past August on embedded AI systems
(e.g., diffusion models, embodied and autonomous systems, and agents built on
sensor and signals data), researchers and defense stakeholders raised concerns
about data governance issues in training data sparsity, simulation, and validation
for safety-critical contexts. These systems depend on data provenance,
metadata, updating, and disciplined access at least as much as generative LLMs
do.
Such concerns highlight why strong data governance is needed for the US, or any, national AI strategy. The RDaF is an “off-the-shelf” solution. Developed with broad stakeholder input by NIST, it is a modular, role-based, lifecycle framework that helps organizations plan, generate, process, share, preserve, and retire data with consistent conformity to open standards for metadata, access controls, and documentation. Three benefits make it especially relevant now for AI:
- Security and accountability. Documented tiered access, provenance, and usage
logs enable tracing of model inputs and outputs—supporting export-control
enforcement and responsible sharing across NAIRR’s open
and secure environments. The RDaF also provides data governance principles
that help mitigate risks across domains, including biosecurity,
cybersecurity, and privacy.
- Interoperability and efficiency. The RDaF aligns with open standards for data governance, the
Findable, Accessible, Interoperable, and Reusable data principles, and
existing federal mandates such as the Evidence Act, agency public access policies,
and the Privacy Act. It lowers integration costs for
public and private organizations alike, and complements international
commons efforts (e.g., EOSC, ARDC), improving cross-border scientific
collaboration.
- Adoptable today. The RDaF is non-regulatory and already familiar
to federal science organizations. Organizations and agencies can phase it
in through guidance, funding conditions, and training—no new statute
required. It is already referenced in the Office of Management and
Budget’s M-25-05 implementation guidance for the Open Government
Data Act.
Data governance remains one
of the most critical, and yet underappreciated, aspects of AI policy today.
From access to high-quality data assets for training of LLMs, to management of
safeguards for AI systems with debated decision-making authority, to control of
information quality safeguards, strong data governance policies and practices
protect intellectual property and individual privacy and ensure AI systems are
compliant with national and international data sharing laws. Yet, we have seen
that many frontier models—especially LLMs but increasingly embedded systems
such as computer vision and autonomous robotics—have been developed and
deployed without transparent data governance strategies. Consequently, a slew
of avoidable copyright infringement and personal injury lawsuits,
and a lack of trust in the models and their owners, have polluted
the AI landscape.
Leading national AI strategy
with strong data governance is ultimately about trust. The public deserves AI
systems trained on appropriate, safe, timely, high-quality data; that are
auditable, and that ensure public investments strengthen—not fragment—data
ecosystems. Where compute brings capability, data governance builds trust.
Adopting the RDaF won’t
settle every debate about AI or the data needed to train its models. It will,
however, provide capacity at scale for trustworthiness in how data is managed
for AI systems. With NAIRR and OMAI entering decisive phases, this is the moment
to make data governance a first-order investment, not an afterthought.
AI extremists are peddling science fiction
Right now the AI debate is
dominated by two extremes. Doomers believe AI will become godlike and destroy
us. Zealots believe AI will become godlike and save us. Their conclusions are
different, but their...:
https://www.geneticsandsociety.org/article/ai-extremists-are-peddling-science-fiction
The artificial intelligence boom: A new reality, where will we now live?
10 Generative AI Trends In 2026 That Will Transform Work And Life
ByBernard Marr
Oct 13, 2025
Generative AI is moving into a new phase in 2026,
reshaping industries from entertainment to healthcare while creating fresh
opportunities and challenges.
In 2026, generative AI is firmly embedded in workflows
across many larger organizations. Meanwhile, millions of us now rely on it for
research, study, content creation and even companionship.
What started with the arrival of ChatGPT back in 2023
has spilled into every corner of life, and the pace is only going to
accelerate.
Of course, challenges like copyright, bias, and the
risk of job displacement remain, but the upside is too powerful for anyone to
ignore. From augmenting human productivity to accelerating our ability to
learn, machines capable of generating words, pictures, video and code are
reshaping our world.
The next 12 months will undoubtedly see the arrival of
new tools and further integration of generative AI into our everyday lives. So
here are the ten trends I think will be most significant in 2026.
1. Generative Video Comes Of Age
This year, Netflix brought generative AI into
primetime in the Argentinian-produced series El Eternauta. Producers said that
it slashed production time and costs compared to traditional animation and
special effects techniques. In 2026, expect generative AI in entertainment to
become mainstream as we see it powering more big-budget TV shows and Hollywood
extravaganzas.
2. Authenticity Is King
Faced with a sea of generative AI content, individuals
and brands will look for new ways to communicate authenticity and genuine human
experience. While audiences will continue to find AI useful for quickly
conveying information and creating summaries, creators who are able to leverage
truly human qualities to provide content that machines can’t match will rise
above the tide of generic “AI slop”.
3. The Copyright Conundrum
Debate over the use of copyrighted content to train
generative AI models and fair compensation for human creatives will increase in
intensity throughout 2026. AI developers need access to human-created content
in order to train machines to mimic it, while many artists, musicians, writers
and filmmakers consider their work being used in this way as nothing more than
theft. Over the next year, expect more lawsuits, intense public debate and
potentially some attempts to resolve the situation through regulation, as
lawmakers try to strike a balance that allows technological innovation while
respecting intellectual property rights.
4. Agentic Chatbots—From Reactive To Proactive
Rather than simply providing information or generating
content in response to individual prompts, chatbots will become more and more
capable of working autonomously towards long-term goals as they take on agentic
qualities. This year, ChatGPT debuted its Agent Mode, and other tools such as
Gemini and Claude are adding abilities to communicate with third-party apps and
take multi-step actions without human intervention. In 2026, generative AI
tools will make the leap from clever chatbots to action-taking assistants as
the agentic revolution heats up.
5. Privacy-Focused GenAI
As businesses invest more heavily in generative AI,
there will be a growing awareness of the risks to privacy and the need to take
steps to secure personal and customer data. This will increase awareness in
privacy-centric AI models where data processing takes place on-premises or
directly on users’ own devices. Apple, for example, differentiates itself with
its focus on putting privacy first, and I expect to see other AI device
manufacturers and developers following its lead in 2026.
6. Generative AI in Gaming
In 2026, gaming could become one of the most exciting
frontiers for generative AI. Developers are creating games with emergent
storylines that adapt to players’ actions, even when they do something entirely
unexpected. And characters will no longer be limited to following scripts, but
can respond, hold conversations and act just like real people. This will create
richer, more immersive and interactive experiences for players, while cutting
production costs and unlocking new creative options for studios.
7. Synthetic Data For Analytics And Simulation
As well as words and pictures, generative AI is
increasingly used to create the raw data needed to understand the real world,
simulate physical, mechanical and biological systems and even train more
algorithms. This will allow banks to model fraud detection systems without
exposing real customer records, and healthcare providers to simulate treatments
and medical trials without risking patient privacy. With demand for synthetic
training data growing, it will become fuel for cutting-edge analytics and automated
decision-making systems in 2026 and beyond.
8. Monetizing Generative Search
Generative AI is transforming the way we search for
information online. This is impacting the business of companies that rely on
search results to drive traffic, but also forcing advertising services like
Google and Microsoft Bing to rethink the way they drive revenue. In 2026, we
can expect moves towards addressing this, as services such as Google’s Search
Generative Experience and Perplexity AI attempt to bridge the gap between
generative search and paid-for search ads.
9. Further Breakthroughs In Scientific Research
This year, we saw genAI proving it can be a valuable
aid to scientific research, driving breakthroughs in drug discovery, protein
folding, energy production and astronomy. In 2026, this trend will gather pace
as researchers increasingly leverage generative models in the search for
solutions to some of humanity’s biggest problems, such as curing diseases,
fighting climate change and solving food and water shortages.
10. Generative AI Jobs Prove Their Value
Much has been made of the new jobs that will be
displaced, but in 2026, the focus will shift to the new roles it will create.
We will start to see the true scale of demand for people with the skills to
fill roles such as prompt engineers, model trainers, output auditors and AI
ethicists. Those who can coordinate and integrate the work of AI agents with
human teams will be in high demand, and we will start to get a clearer
understanding of exactly how valuable they will really be when it comes to
unlocking the benefits of AI while mitigating its potential for harm.
Generative AI is no longer an emerging technology on the sidelines; it is becoming the engine driving change across every industry and daily life. The trends we see in 2026 point to a future where the line between human and machine creativity, productivity, and intelligence becomes increasingly blurred. Organizations that adapt quickly, invest in the right skills, and embrace responsible innovation will be the ones that thrive as this next chapter of AI unfolds.
Prediction About the Future of AI and Human Interaction
The future of AI and human interaction is likely to be characterized
by increasing AI integration into various aspects of life, with potential
benefits and drawbacks. AI will likely become more personalized, automate
complex tasks, and enhance human capabilities. However, it also raises
concerns about job displacement, ethical implications, and the potential for
misuse.
Here's a more detailed look at some key areas:
1. Increased AI Integration and Automation:
- AI will be more deeply integrated into daily life, from voice
assistants and recommendation engines to self-driving cars and
personalized healthcare.
- Automation of complex tasks in various sectors, such as
manufacturing and healthcare, will become more prevalent.
- AI will likely lead to the creation of new job roles and
industries, requiring skills in AI development, data science, and machine
learning.
2. Personalization and Enhanced Human Experiences:
- AI will be used to personalize experiences and predict individual
preferences, leading to more tailored interactions and services.
- AI-powered tools will enhance human creativity and innovation by
providing new ways to explore ideas and generate content.
- Brain-computer interfaces and other technologies could augment
human cognitive abilities, potentially revolutionizing how we interact
with the world.
3. Ethical and Societal Considerations:
- The rise of AI raises ethical questions about bias, privacy, and
accountability.
- There is potential for AI to be used for malicious purposes, such
as weaponization and surveillance.
- The long-term impact of human-AI interactions on social
relationships and expectations is still uncertain.
4. Job Displacement and Workforce Transformation:
- While AI may automate certain tasks, it's also likely to create new
job opportunities in specialized fields.
- The skills gap between those who are able to adapt to AI-driven
workplaces and those who are not could widen.
- AI could potentially lead to a more flexible and distributed
workforce, with remote work becoming more common.
5. The Potential for Superhuman AI and Singularity:
- Some experts predict that AI will eventually surpass human
intelligence, potentially leading to a "superhuman AI" or
"singularity".
- This could lead to both utopian and dystopian scenarios, depending
on how AI is developed and used.
- The potential for AI to develop its own goals and priorities raises
concerns about control and safety.
6. The Importance of Collaboration and Human-AI Synergies:
- The future of AI likely lies in collaborative intelligence, where
humans and AI systems work together synergistically.
- Human-AI collaboration could revolutionize various fields, from
healthcare and education to scientific research and creative endeavors.
- It's crucial to ensure that AI is developed and used in a way that
complements human capabilities and enhances human well-being.
In conclusion, the future of AI and human interaction is complex and
uncertain, with both significant potential benefits and
challenges. Navigating this future will require careful consideration of
ethical, societal, and technological implications, as well as a commitment to
fostering collaboration and innovation that benefits humanity as a whole.
Mark Cuban Just Made a Bold Prediction About the Future of AI:
Within the next 3 years, there will be so much AI, in particular AI
video, people won’t know if what they see or hear is real. Which will
lead to an explosion of f2f engagement, events and jobs.
Those that were in the office will be in the field.
Call it the Milli Vanilli effect.
https://www.youtube.com/watch?v=OevA7HUPkmI
AI Wrapped: The 14 AI
terms you couldn’t avoid in 2025
From “superintelligence”
to “slop,” here are the words and phrases that defined another year of AI
craziness.
- Will
Douglas Heavenarchive page
- Michelle Kimarchive
page
- James
O'Donnellarchive page
- Rhiannon
Williamsarchive page
December 25, 2025
If the past 12 months have
taught us anything, it’s that the AI hype train is showing no signs of slowing.
It’s hard to believe that at the beginning of the year, DeepSeek had yet to
turn the entire industry on its head, Meta was better known for trying (and
failing) to make the metaverse cool than for its relentless quest to dominate
superintelligence, and vibe coding wasn’t a thing.
If that’s left you feeling a
little confused, fear not. As we near the end of 2025, our writers have taken a
look back over the AI terms that dominated the year, for better or worse.
Make sure you take the time
to brace yourself for what promises to be another bonkers year.
—Rhiannon Williams
1. Superintelligence
As long as people have been hyping AI,
they have been coming up with names for a future, ultra-powerful form of the
technology that could bring about utopian or dystopian consequences
for humanity. “Superintelligence” is that latest hot term. Meta announced in
July that it would form an AI team to pursue superintelligence, and it was
reportedly offering nine-figure compensation packages to AI experts from the
company’s competitors to join.
In December, Microsoft’s head
of AI followed
suit, saying the company would be spending big sums, perhaps hundreds of
billions, on the pursuit of superintelligence. If you think superintelligence
is as vaguely defined as artificial general intelligence, or AGI, you’d
be right! While
it’s conceivable that these sorts of technologies will be feasible in
humanity’s long run, the question is really when, and whether
today’s AI is good enough to be treated as a stepping stone toward something
like superintelligence. Not that that will stop the hype kings. —James
O’Donnell
2. Vibe coding
Thirty years ago, Steve Jobs
said everyone in America should learn how to program a
computer. Today, people with zero knowledge of how to code can knock up an
app, game, or website in no time at all thanks to vibe
coding—a catch-all phrase coined by OpenAI cofounder Andrej Karpathy. To
vibe-code, you simply prompt generative AI models’ coding assistants to create
the digital object of your desire and accept pretty much everything they spit
out. Will the result work? Possibly not. Will it be secure? Almost definitely
not, but the technique’s biggest champions aren’t letting those minor details
stand in their way. Also—it sounds fun! — Rhiannon Williams
3. Chatbot psychosis
One of the biggest AI stories
over the past year has been how prolonged interactions with chatbots can cause
vulnerable people to experience delusions and, in some extreme cases, can
either cause or worsen psychosis. Although “chatbot psychosis” is not a
recognized medical term, researchers are paying close attention to the growing anecdotal
evidence from users who say it’s happened to them or someone they
know. Sadly, the increasing
number of lawsuits filed against AI companies by the families of
people who died following their conversations with chatbots demonstrate the
technology’s potentially deadly consequences. —Rhiannon Williams
4. Reasoning
Few things kept the AI hype
train going this year more than so-called reasoning models, LLMs that can break
down a problem into multiple steps and work through them one by one. OpenAI
released its first reasoning models, o1 and o3, a year ago.
A month later, the Chinese
firm DeepSeek took
everyone by surprise with a very fast follow, putting out R1, the first
open-source reasoning model. In no time, reasoning models became the industry
standard: All major mass-market chatbots now come in flavors backed by this
tech. Reasoning models have pushed the envelope of what LLMs can do, matching
top human performances in prestigious math and coding competitions. On the flip
side, all the buzz about LLMs that could “reason” reignited old debates
about how
smart LLMs really are and how they really work. Like “artificial
intelligence” itself, “reasoning” is technical jargon dressed up with marketing
sparkle. Choo choo! —Will Douglas Heaven
5. World models
For all their uncanny
facility with language, LLMs have very little common sense. Put simply, they
don’t have any grounding in how the world works. Book learners in the most
literal sense, LLMs can wax lyrical about everything under the sun and then
fall flat with a howler about how many elephants you could fit into an Olympic
swimming pool (exactly one, according to one of Google DeepMind’s LLMs).
World models—a broad church
encompassing various technologies—aim to give AI some basic common sense about
how stuff in the world actually fits together. In their most vivid form, world
models like Google DeepMind’s Genie 3 and Marble, the much-anticipated new tech
from Fei-Fei Li’s startup World Labs, can generate detailed and realistic
virtual worlds for robots to train in and more. Yann LeCun, Meta’s former chief
scientist, is also working on world models. He has been trying to give AI a
sense of how the world works for years, by training models to predict what
happens next in videos. This year he quit Meta to focus on this approach in a
new start up called Advanced Machine Intelligence Labs. If all goes well, world
models could be the next thing. —Will Douglas Heaven
6. Hyperscalers
Have you heard about all the
people saying no thanks, we
actually don’t want a giant data center plopped in our backyard? The
data centers in question—which tech companies want to built everywhere,
including space—are
typically referred to as hyperscalers: massive buildings purpose-built for AI
operations and used by the likes of OpenAI and Google to build bigger and more
powerful AI models. Inside such buildings, the world’s best chips hum away
training and fine-tuning models, and they’re built to be modular and grow
according to needs.
It’s been a big year for
hyperscalers. OpenAI announced, alongside President Donald Trump, its Stargate
project, a $500 billion joint venture to pepper the country with the largest
data centers ever. But it leaves almost everyone else asking: What exactly do
we get out of it? Consumers worry the new data centers will raise
their power bills. Such buildings generally struggle to
run on renewable energy. And they don’t tend to create all that many jobs.
But hey, maybe these massive, windowless buildings could at least give a moody,
sci-fi vibe to your community. —James O’Donnell
7. Bubble
The lofty promises of AI are
levitating the economy. AI companies are raising eye-popping sums of money and
watching their valuations soar into the stratosphere. They’re pouring hundreds
of billions of dollars into chips and data centers, financed increasingly by
debt and eyebrow-raising circular
deals. Meanwhile, the companies leading the gold rush, like OpenAI and
Anthropic, might not
turn a profit for years, if ever. Investors are betting big that AI
will usher in a new era of riches, yet no one knows how transformative the
technology will actually be.
Most organizations using AI
aren’t yet seeing the payoff, and AI work slop is everywhere. There’s
scientific uncertainty about whether scaling LLMs will deliver
superintelligence or whether new breakthroughs need to pave the way. But unlike
their predecessors in the dot-com bubble, AI companies are showing strong revenue
growth, and some are even deep-pocketed tech titans like Microsoft, Google,
and Meta. Will the manic dream ever
burst? —Michelle Kim
8. Agentic
This year, AI
agents were everywhere. Every new feature announcement, model drop, or
security report throughout 2025 was peppered with mentions of them, even though
plenty of AI companies and experts disagree on exactly what counts as being
truly “agentic,” a vague term if ever there was one. No matter that it’s
virtually impossible to guarantee that an AI acting on your behalf out in the
wide web will always do exactly what it’s supposed to do—it seems as though
agentic AI is here to stay for the foreseeable. Want to sell something? Call it
agentic! —Rhiannon Williams
9. Distillation
Early this year, DeepSeek
unveiled its new model DeepSeek R1, an open-source reasoning model that matches
top Western models but costs a fraction of the price. Its launch freaked
Silicon Valley out, as many suddenly realized for the first time that huge scale
and resources were not necessarily the key to high-level AI models. Nvidia
stock plunged by 17% the day after R1 was released.
The key to R1’s success was
distillation, a technique that makes AI models more efficient. It works by
getting a bigger model to tutor a smaller model: You run the teacher model on a
lot of examples and record the answers, and reward the student model as it
copies those responses as closely as possible, so that it gains a compressed
version of the teacher’s knowledge. —Caiwei Chen
10. Sycophancy
As people across the world
spend increasing amounts of time interacting
with chatbots like ChatGPT, chatbot makers are struggling to work out
the kind of tone and “personality” the models should adopt. Back in April,
OpenAI admitted it’d struck the wrong balance between helpful and sniveling,
saying a new update had rendered GPT-4o too
sycophantic. Having it suck up to you isn’t just irritating—it can mislead
users by reinforcing their incorrect beliefs and spreading misinformation. So
consider this your reminder to take everything—yes, everything—LLMs produce
with a pinch of salt. —Rhiannon Williams
11. Slop
If there is one AI-related
term that has fully escaped the nerd enclosures and entered public
consciousness, it’s “slop.” The word itself is old (think pig feed), but “slop”
is now commonly used to refer to low-effort, mass-produced content generated by
AI, often optimized for online traffic. A lot of people even use it as a
shorthand for any AI-generated content. It has felt inescapable in the past
year: We have been marinated in it, from fake
biographies to shrimp
Jesus images to surreal
human-animal hybrid videos.
But people are also having
fun with it. The term’s sardonic flexibility has made it easy for internet
users to slap it on all kinds of words as a suffix to describe anything that
lacks substance and is absurdly mediocre: think “work slop” or “friend slop.”
As the hype cycle resets, “slop” marks a cultural reckoning about what we
trust, what we value as creative labor, and what it means to be surrounded by
stuff that was made for engagement rather than expression. —Caiwei Chen
12. Physical intelligence
Did you come across the
hypnotizing video from
earlier this year of a humanoid robot putting away dishes in a bleak,
gray-scale kitchen? That pretty much embodies the idea of physical
intelligence: the idea that advancements in AI can help robots better move
around the physical world.
It’s true that robots have
been able to learn new tasks faster than
ever before, everywhere from operating
rooms to warehouses.
Self-driving-car companies have seen improvements in how they simulate the
roads, too. That said, it’s still wise to be skeptical that AI has
revolutionized the field. Consider, for example, that many robots advertised as
butlers in your home are doing the majority of their tasks thanks to remote
operators in the Philippines.
The road ahead for physical
intelligence is also sure to be weird. Large language models train on text,
which is abundant on the internet, but robots learn more from videos of people
doing things. That’s why the robot company Figure suggested in September that
it would pay
people to film themselves in their apartments doing chores. Would you
sign up? —James O'Donnell
13. Fair use
AI models are trained by
devouring millions of words and images across the internet, including
copyrighted work by artists and writers. AI companies argue this is “fair
use”—a legal doctrine that lets you use copyrighted material without permission
if you transform it into something new that doesn’t compete with the original.
Courts are starting to weigh in. In June, Anthropic’s training of its AI model
Claude on a library of books was ruled fair use because the technology was “exceedingly
transformative.”
That same month, Meta scored
a similar
win, but only because the authors couldn’t show that the company’s literary
buffet cut into their paychecks. As copyright battles brew, some creators are
cashing in on the feast. In December, Disney signed a splashy
deal with OpenAI to let users of Sora, the AI video platform, generate
videos featuring more than 200 characters from Disney's franchises. Meanwhile,
governments around the world are rewriting
copyright rules for the content-guzzling machines. Is training AI on
copyrighted work fair use? As with any billion-dollar legal question, it
depends. —Michelle Kim
14. GEO
Just a few short years ago, an entire industry was built around helping websites rank highly in search results (okay, just in Google). Now search engine optimization (SEO), is giving way to GEO—generative engine optimization—as the AI boom forces brands and businesses to scramble to maximize their visibility in AI, whether that’s in AI-enhanced search results like Google’s AI Overviews or within responses from LLMs. It’s no wonder they’re freaked out. We already know that news companies have experienced a colossal drop in search-driven web traffic, and AI companies are working on ways to cut out the middleman and allow their users to visit sites from directly within their platforms. It’s time to adapt or die.
The 9 Stages of Future of AI Explained
https://www.youtube.com/watch?v=8ZenwcakHBg
Priority work
organization conditions for successful use of AI potential.
Proritāriee
darba organizācijas nosacījumi AI potenciāla sekmīgai izmantošanai.
AI
leadership: Different perspectives, one shared imperative
12-19-2025
Each leader sees AI differently, yet the companies who can connect those views build enterprise-wide momentum.
BY Dan Priest
I’ve watched many types of leaders struggle with
what AI means for their business. Three years into the GenAI era,
the technology is no longer the primary question, but instead its business
value. Inside the C-suite, the answers can often depend on where you sit. The
CEO’s appetite for risk, the CFO’s focus on returns, the CTO’s guardrails for
scalability—all of it shapes what’s possible.
But those differences don’t have to be friction; they
can be fuel if appropriately managed. Each perspective reflects a real pressure
point and a real opportunity. When leaders transcend any one area of the
business and focus on the imperatives shaping the future, they can begin to
connect those views. AI stops being a collection of pilots and becomes part of
the organization’s DNA.
YOUR AI AGENDA DEPENDS ON THEIRS
Because AI touches each part of the business, each
executive has a stake in how it unfolds. But if you want to advance your own
priorities, whether that’s innovation, efficiency, or market growth, you should
understand what drives your C-suite counterparts. Recognizing those drivers
isn’t just collaboration; it’s strategy. It’s how you turn competing incentives
into collective momentum.
The companies that pull ahead won’t be those that move
the fastest or spend the most. They’ll be the ones that connect technical
capability, business strategy, and financial discipline into one cohesive
approach.
CEO: The course setter
What’s shaping their view:
CEOs feel the full weight of expectation.
Shareholders, boards, customers, and employees all want to know: How are we
using AI? Many see technology as a way to reshape their business models,
deliver new customer value, and signal innovation to the market.
Where they’re focused:
The most effective CEOs connect AI to their long-term
strategy, not just short-term wins. They’re using it to build new business
capabilities—the kind that can scale, differentiate, and sustain advantage. The
CEOs leading the way don’t just want to adopt AI; they want to reimagine their
companies around it.
CFO: The value architect
What’s shaping their view:
CFOs are naturally data optimists. They’ve seen how
automation, forecasting, and compliance tools have transformed their own
functions. They recognize that AI can amplify productivity and
decision-making across the enterprise. But they’re also disciplined investors
and they want clear visibility into where AI can deliver measurable ROI.
Where they’re focused:
Today’s CFO is evolving from financial gatekeeper to
enterprise value architect. They’re building frameworks for evaluating,
prioritizing, and scaling AI initiatives responsibly. They’re making sure the
business doesn’t just invest in AI—it invests wisely, with transparency and
accountability.
CIO and CTO: The foundation builder
What’s shaping their view:
CIOs and CTOs have been through technology hype cycles
before. They know AI’s promise is real, but only with a solid foundation of
data integrity, governance, and security. They’re responsible for creating the
infrastructure that allows innovation to flourish while managing the very real
risks of bias, privacy, and scale.
Where they’re focused:
They’re balancing enthusiasm with realism. Their
challenge is to translate AI’s potential into practical, reliable systems that
help drive business outcomes. Collaboration with business leaders is critical.
The greatest value from AI emerges when technical and operational teams move in
sync and when the business side understands the “how,” and the tech side
understands the “why.”
Business unit leaders: The impact driver
What’s shaping their view:
For business unit leaders, AI is tangible. It shows up
in the tools their teams use, the workflows they manage, and the customer
experiences they deliver. They’re close to where the value is created and they
see firsthand what’s working and what’s not.
Where they’re focused:
These leaders are the bridge between corporate
ambition and operational reality. When empowered, they help test ideas quickly,
share learnings across teams, and turn pilots into scalable impact. Their
feedback helps the organization adapt faster and makes sure that AI delivers
measurable outcomes, not just proof-of-concepts.
Board members: The long-view champion
What’s shaping their view:
Boards bring deep business expertise and oversight
responsibility. Many are still building their technical fluency in AI, but they
instinctively understand its strategic implications, including risk,
resilience, and long-term competitiveness.
Where they’re focused:
Boards are asking sharper questions such as, “How does
AI change our risk profile?” “How should we govern its use?” “What new value
can it unlock for shareholders?” The C-suite’s opportunity is to translate AI
into business terms that resonate, explaining not just the technology, but the
transformation story it enables.
A SHARED PATH FORWARD
From where I sit, no two leaders see AI through the
same lens, and that’s exactly the point. The CEO brings vision, the CFO grounds
it in accountability, the CIO and CTO lay the foundation, and business leaders
turn ambition into action. The board keeps the focus on long-term value.
When those perspectives come together, momentum
builds. The organization learns faster, scales smarter, and aligns not by
erasing differences but by using them as fuel for a shared purpose.
The goal isn’t to agree on everything, it’s to move
forward together. Leaders should resist the temptation to hold the AI agenda
hostage until their needs are satisfied. They should avoid myopic perspectives
that over-index on the past or prioritize their area of responsibility over the
company’s big objectives. AI should inspire a forward-looking, unifying
enterprise-wide imperative. That takes leadership. Define a North Star, solve
problems creatively, communicate progress openly, and commit capital where
conviction is highest.
AI isn’t just another business trend; it’s a new
system of competition. While each leader begins with their own perspective, the
companies that will likely lead in this new era are those that make AI a
collective imperative.
https://www.fastcompany.com/91462772/ai-leadership-different-perspectives-one-shared-imperative
An ambitious plan to review the application of EU digital and privacy rules as part of the "Digital Omnibus".
Vērienīgs
plāns, kā pārskatīt ES digitālo un privātuma noteikumu piemērošanu “Digitālā
omnibusa” ietvaros.
Europe's businesses, from factories to start-ups, will spend less time on administrative work and compliance and more time innovating and scaling-up, thanks to the European Commission's new digital package. This initiative opens opportunities for European companies to grow and to stay at the forefront of technology while at the same time promoting Europe's highest standards of fundamental rights, data protection, safety and fairness.
At its core, the package includes a digital
omnibus that streamlines rules on artificial intelligence (AI),
cybersecurity and data, complemented by a Data Union Strategy to
unlock high-quality data for AI and European Business Wallets that
will offer companies a single digital identity to simplify
paperwork and make it much easier to do business across EU Member States.
The package aims to ease compliance with
simplification efforts estimated to save up to €5 billion in administrative
costs by 2029. Additionally, the European Business Wallets could unlock another
€150 billion in savings for businesses each year.
1. Digital Omnibus
With today's digital omnibus, the Commission is
proposing to simplify existing rules on Artificial Intelligence, cybersecurity,
and data.
Innovation-friendly AI rules: Efficient implementation of the AI
Act will have a positive impact on society, safety and fundamental
rights. Guidance and support are essential for the roll-out of any new law, and
this is no different for the AI Act.
The Commission proposes linking the entry into
application of the rules governing high-risk AI systems to the availability of
support tools, including the necessary standards.
The timeline for applying high-risk rules is adjusted
to a maximum of 16 months, so the rules start applying once the Commission
confirms the needed standards and support tools are available, giving companies
support tools they need.
The Commission is also proposing targeted amendments
to the AI Act that will:
- Extend certain simplifications that are granted to small and
medium-sized enterprises (SMEs) to small mid cap companies (SMCs),
including simplified technical documentation requirements, saving at least
€225 million per year;
- Broaden compliance measures so more innovators can use regulatory
sandboxes, including an EU-level sandbox from 2028 and more real-world
testing, especially in core industries like the automotive;
- Reinforce the AI Office's powers and centralise oversight of AI
systems built on general-purpose AI models, reducing governance
fragmentation.
Simplifying cybersecurity reporting: The omnibus also introduces a single-entry point where
companies can meet all incident-reporting obligations. Currently, companies
must report cybersecurity incidents under several laws, including among others
the NIS2
Directive, the General
Data Protection Regulation (GDPR), and the Digital Operational
Resilience Act (DORA). The interface will be developed with robust
security safeguards and will undergo comprehensive testing to ensure its
reliability and effectiveness.
An innovation-friendly privacy framework: Targeted amendments to the GDPR will harmonise,
clarify and simplify certain rules to boost innovation and support compliance
by organisations, while keeping intact the core of the GDPR, maintaining the
highest level of personal data protection.
Modernising cookie rules to improve users' experience
online: The amendments will reduce
the number of times cookie banners pop up and allow users to
indicate their consent with one-click and save their cookie
preferences through central settings of preferences in browsers.
Improving access to data: Today's digital package aims to improve access to data
as a key driver of innovation. It simplifies data rules and makes them
practical for consumers and businesses by:
- Consolidating EU data rules through the Data Act, merging four pieces of legislation into one for enhanced legal
clarity;
- Introducing targeted exemptions to some of the Data Act's
cloud-switching rules for
SMEs and SMCs resulting in around €1.5 billion in one-off savings;
- Offering new guidance on compliance with the Data Act through model contractual terms for data access and use, and standard
contractual clauses for cloud computing contracts;
- Boosting European AI companies by unlocking access to
high-quality and fresh datasets for AI,
strengthening the overall innovation potential of businesses across the
EU.
2. Data Union Strategy
The new Data
Union Strategy outlines additional measures to unlock more
high-quality data for AI by expanding access, such as data labs. It puts in
place a Data Act Legal Helpdesk, complementing further measures to support
implementation of the Data Act. It also strengthens Europe's data sovereignty
through a strategic approach to international data policy: anti-leakage
toolbox, measures to protect sensitive non-personal data and guidelines to
assess fair treatment of EU data abroad.
3. European Business Wallet
This proposal will provide European companies and
public sector bodies with a unified digital tool, enabling them to digitalise
operations and interactions that in many cases currently still need to be done
in person. Businesses will be able to digitally sign, timestamp and seal
documents; securely create, store and exchange verified documents; and
communicate securely with other businesses or public administrations in their
own and the other 26 Member States.
Scaling up a business in other Member States, paying
taxes and communicating with public authorities will be easier than ever before
in the EU. Assuming broad uptake, the European Business Wallets will allow
European companies to reduce administrative processes and costs, thereby unlock
up to €150 billion in savings for businesses each year.
Next Steps
The digital omnibus legislative proposals will now be
submitted to the European Parliament and the Council for adoption. Today's
proposals are a first step in the Commission's strategy to simplify and make
more effective the EU's digital rulebook.
The Commission has today also launched the second step
of the simplification agenda, with a wide consultation on the Digital Fitness
Check open until 11 March 2026. The Fitness Check will ‘stress test' how the
rulebook delivers on its competitiveness objective, and examine the coherence
and cumulative impact of the EU's digital rules.
Background
The Digital package marks the seventh omnibus
proposal. The Commission set a course to simplify
EU rules to make the EU economy more competitive and more prosperous
by making business in the EU simpler, less costly and more efficient. The
Commission has a clear target to deliver an unprecedented simplification effort
by achieving at least 25% reduction in administrative burdens, and at least 35%
for SMEs until the end of 2029.
https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2718
Digital Sovereignty in
2025: Why It Matters for European Enterprises
Explore how Europe’s digital
sovereignty agenda is reshaping compliance, cloud strategy, and secure
collaboration in 2025 and how Wire supports this shift.
Oct 7, 2025
In the new digital economy,
data is power. Questions about who controls, processes, and protects it now sit
at the center of political and corporate priorities. The year 2025 marks a
turning point in Europe’s pursuit of digital sovereignty, driven by tighter
regulation and growing geopolitical tension. For enterprises within the EU,
digital sovereignty has become a strategic requirement for sustainable growth
in an increasingly regulated environment.
The State of Digital
Sovereignty in Europe
Digital sovereignty, data
sovereignty, and data residency have become part of Europe’s vocabulary. But
what do they mean, and how do they differ?
Data Residency refers to the physical location of data centers.
It does not necessarily involve legal control by the country where the data is
stored. For example, data stored in Germany by a US provider may still fall
under US jurisdiction (DataStealth).
Data Sovereignty means that data is subject to the laws and
regulations of the country where it is collected, processed, and stored. This
ensures that local privacy and access rules apply, even if the service provider
is headquartered abroad (IBM
Think).
Digital Sovereignty goes further. It is the ability of nations,
organizations, and individuals to control their data, technology, and digital
infrastructure without relying on external entities. It includes hardware,
software, networks, and cloud services, ensuring that European organizations
and regulators can define and enforce their own rules (Law
Journal Digital).
Europe has long been a leader
in data protection, yet its cloud infrastructure remains heavily dependent on
US hyperscalers. This creates tension between sovereignty goals and operational
realities. The State of the Digital Decade 2025 report highlights
several priorities:
- Investment in connectivity, semiconductors,
sovereign cloud and data infrastructures, AI, quantum computing, and
cybersecurity
- Structural reforms to strengthen the single
market and ensure technological autonomy
- Simplified administrative processes to promote
innovation
Geopolitical and
Legislative Drivers
NIS2 Directive
Effective since October 2024, NIS2 applies to critical infrastructure and
mandates comprehensive cybersecurity and risk management across supply chains.
It also enforces strict breach reporting timelines. The directive reinforces
the need for European oversight of digital operations.
GDPR Enforcement Maturity
Since its introduction in 2018, GDPR has become the global standard for data
protection. Enforcement has matured, with authorities focusing on cross border
transfers, consent, and transparency. Noncompliance now carries significant
financial and reputational risk, making data governance a top priority for
enterprises.
Data Residency
Requirements
In response to geopolitical uncertainty and extraterritorial laws, the EU is
tightening residency rules for sectors such as defense, healthcare, and
finance. Critical workloads must increasingly be hosted within EU borders or by
providers shielded from foreign legal access.
Big Tech Promises and the
Reality
Global hyperscalers are
promoting “sovereign cloud” offerings for European customers. In practice,
these solutions can still fall under US laws such as the CLOUD Act and FISA Section 702. This means that American authorities can
request access to European data even when stored within the EU — a risk we
explored in our
analysis of the CLOUD Act and EU data sovereignty.
To counter this dependency,
the European Commission plans to introduce the Cloud and AI Development Act in 2025. The goal is to
triple the EU’s data center capacity within seven years and create a common
framework for public sector cloud procurement.
Meanwhile, GAIA-X, the joint initiative by
Germany and France, has moved into its implementation phase. More than 180 data
spaces are being developed to enable secure data sharing and foster innovation
under European control. These efforts support the broader objectives of
the European Data Act and AI Act.
Together, these measures aim
to strengthen Europe’s digital infrastructure and reduce reliance on non EU
providers — a challenge we also outlined in why Big Tech keeps failing Europe on data sovereignty.
Challenges for Enterprises
Despite progress at the
policy level, many enterprises still face obstacles in achieving digital
sovereignty:
Vendor Lock In
European organizations often depend on US hyperscalers through long term
contracts. This makes it difficult to switch to regional providers, increasing
exposure to privacy risks — a dynamic explored in our article on Europe’s encryption dilemma.
Encryption Loopholes
Proposed regulations in the US and EU that call for backdoors or message
scanning pose a direct threat to secure communication. Initiatives such as
the Lawful Access to Encrypted Data Act or the EU’s ChatControl proposal could undermine both privacy and
cybersecurity.
Compliance Gaps
Operating across multiple jurisdictions means managing a complex web of
regional regulations. Hybrid or multi cloud environments create uncertainty
about who can access data and where it resides. Many companies also lack the
tools or leverage to guarantee that sensitive information stays within Europe.
Practical Steps Toward
Sovereignty
European enterprises can take
clear steps to strengthen sovereignty and compliance:
Use Self Hosted Solutions
Organizations in sensitive sectors can install and manage their own servers to
maintain full control over data. Hosting data on premise or in verified private
facilities ensures compliance with local laws.
Choose European Cloud
Providers
There is growing momentum around EU based cloud platforms such as OVHcloud, Hetzner, Scaleway, STACKIT, UpCloud, Exoscale, and Open Telekom Cloud.
These providers operate under European law and meet GDPR requirements.
Adopt European Built
Software
Enterprises can reduce exposure to foreign surveillance laws by choosing tools
developed and hosted in Europe. Explore the best European alternatives to Big Tech collaboration tools.
Implement End to End
Encryption and Zero Trust
Organizations should use platforms that secure communications, files, and
metadata through end to end encryption and zero trust architecture.
How Wire Supports Digital
Sovereignty
Wire helps enterprises and
governments protect sensitive communication while ensuring usability and
compliance. Built in Europe and trusted by more than 1,800 organizations
including EY, BMW, Schwarz Gruppe, and the German government, Wire offers the
transparency and control that digital sovereignty demands.
- Servers hosted in Germany with backups in
Ireland, protected from extraterritorial access
- Deployment options for on premise and private
cloud environments
- End to end encryption using the Messaging Layer
Security (MLS) standard
- Zero trust design with role based access controls
- Compliance with EU regulations including ISO
27001, ISO 27701, and NIS2 readiness
Wire combines enterprise
grade collaboration with European data protection principles, allowing
organizations to collaborate without compromise.
Conclusion
Europe’s digital sovereignty
movement is accelerating, and organizations must act now to adapt. Those that
invest in sovereign infrastructure, compliant cloud providers, and secure by
design communication tools will strengthen both their resilience and their
competitive position.
https://wire.com/en/blog/digital-sovereignty-2025-europe-enterprises
Take Note: Artificial
Intelligence, Power, and the Public Interest
Bending the AI arc towards equity
Shaping a prosperous and sustainable digital future for all
By United Nations Development Programme
February 6th, 2025
Artificial Intelligence (AI)
has the potential to transform society and advance sustainable development for
all.
Recent research shows that
digital technology, including AI, can directly benefit 70 percent of
the 2030 Sustainable Development Goals (SDGs). Despite
this immense potential, AI development today is unequal, and its trajectory
will widen these disparities unless we take collective action.
We face an AI equity gap,
which presents significant challenges.
Only 2 percent of the world's data centres are in Africa,
and only 5 percent of AI innovators in Africa have the
compute power they require.
Worldwide, one in every 3
people don’t have internet access.
In 2023, the AI sector in the
US received US$67.2 billion in investments in AI, compared to
just $15 million in Kenya and $2.9 million in Nigeria.
If everyone is to participate
in and benefit from the AI revolution, 2025 is a critical year. What we build
must stand on a foundation of equity and sustainability.
The countdown to 2030
With just five years to go
until the 2030 Agenda deadline, UNDP is spearheading a global collaboration for
inclusive, sustainable digital transformation, which recognizes AI as a
cornerstone technology….: https://stories.undp.org/bending-the-ai-arc-towards-equity
Sweden is running a national experiment in AI, offering free access to
AI tools at a population level in the hope of boosting its economy and tech
literacy.
How Sweden can become a stronger AI nation
A country that leads in
artificial intelligence is a country with competitive businesses and efficient
public services. Sweden may not have the biggest AI giants at home, but we do
have a smorgasbord of qualities that can make us a strong AI nation. So how can
we make the best use of them?
Sweden has long been at the
top of European and global digitalisation indices, and we rank second in the
world in innovation. With these credentials, it would not be far-fetched to
assume that our country would also be at the top of the list when comparing
countries' AI development. A modest 17th place in the Global AI Index 2023
stands out - shouldn't Sweden have the potential to go further?
– If that's the ranking you
should be looking at, we have a huge potential to get higher up the list. I
don't really believe in running a society to be high on a ranking list. It
should be on top of doing good things, said Hanifeh Khayyeri, head of the computer
science department at RISE.
Going forward, she says, we
need to build on and develop Sweden's strengths. There are certain things in
Swedish society that make us unique in the world. That is where our competitive
advantages lie.
– One example of this is that
we are an unusually trusting country, both between people and in our democratic
institutions. Sweden is also a country where values form the basis of our
actions, we want to be an ethical party in the global context, says Hanifeh
Khayyeri.
Sweden has the potential to
become a test-bed for innovation.
Another Swedish strength is
the widespread use of digital technologies. Swedes use digital services several
times per day. We are generally curious about new solutions, and we trust and
demand that the digital services launched by our banks and public authorities
are secure.
– Sweden is also unique when
it comes to various registry data, such as the national authority registries
and quality registries. There is a huge amount of data and metadata, data that
is extremely valuable when working with AI services or applications, says
Hanifeh Khayyeri, continuing:
– All of these things that
characterise Sweden are prerequisites for becoming a strong AI nation. Sweden
has the potential to become a testbed for AI innovation, where companies can
test new AI services, products and business models in the Swedish market.
Behind every AI service and
application, there is a value chain that is often overshadowed by the
groundbreaking end result. This includes manufacturers of hardware and
middleware, but also players that connect the digital infrastructure to
society's energy supply system. Companies and organisations with this kind of
niche expertise are located in Sweden, and can therefore play a crucial role in
global competitiveness in the future.
Hanifeh Khayyeri highlights
the importance of digital infrastructure within the country's borders, if
Sweden is to become a more advanced AI country. It's not only about efficient
data transfer and management, but also about ensuring that certain data and
calculations can't leave Sweden. This means that there has to be capacity
within the country.
There is no need to worry
that the train has left the station, but we need to shift up a gear now.
Swedish society is
ready
Sweden lags behind in the
'government strategy' category, which compares the commitment of different
governments to AI. At the same time, there are countries that rank higher than
Sweden in the overall AI index with even worse scores in 'Government Strategy'
- so it doesn't seem to be decisive. It is also worth noting that we are at the
top when it comes to public opinion on intelligent technology.
– My impression is that
Swedes and Swedish companies want to start with AI, but there is uncertainty
related to the fast pace of change in the market and future regulations. They
don't want to make mistakes, which means they are waiting to take the big steps,
says Hanifeh Khayyeri.
Competence along the entire
value chain
The main argument for
unleashing the power and increasing the use of AI in business and the public
sector is that AI can help us do our jobs better and more efficiently. When we
automate repetitive and time-consuming administrative or machine tasks, productivity
and cost-effectiveness increase. When we use a combination of AI solutions and
more traditional analytical methods to harness data and accumulated experience,
we can make informed decisions and accurate predictions.
– It's about starting. We're
on a transformative journey, which means that everyone is going to start doing
things now that they haven't done before and aren't very good at. If you don't
know how to start, let's sit down and talk about it. RISE has expertise along
the entire AI value chain and can help you with everything from digital
infrastructure and cutting-edge AI technologies to less technical things like
innovation and change management and regulatory frameworks for AI, says Hanifeh
Khayyeri.
– There is no need to worry
that the train has left the station, but we need to step up a gear now.
https://www.ri.se/en/how-sweden-can-become-a-stronger-ai-nation
New AI system can 'predict human behavior in any situation' with unprecedented degree of accuracy
A new artificial intelligence
(AI) model called Centaur can predict and simulate human thought and behavior better than any
past models, opening the door for cutting-edge research applications.
https://www.livescience.com/technology/artificial-intelligence/new-ai-system-can-predict-human-behavior-in-any-situation-with-unprecedented-degree-of-accuracy-scientists-say
https://www.centaurinstitute.org
We help to lead a growing movement around neuro-symbolic AI to
develop the next generation of AI concepts and tools.
Mēs palīdzam vadīt augošu kustību ap neirosimbolisko mākslīgo intelektu,
lai izstrādātu nākamās paaudzes mākslīgā intelekta koncepcijas un rīkus.
What is a centaur in AI?
Centaurs are hybrid human-algorithm models
that combine both formal analytics and human intuition in a symbiotic manner
within their learning and reasoning process.
What is the centaur theory of AI?
What makes Centaur unique is its ability to predict human behavior not only in familiar tasks, but also in entirely new situations it has never encountered before. It identifies common decision-making strategies, adapts flexibly to changing contexts – and even predicts reaction times with surprising precision.
Will we be able to maintain our humanity in a world increasingly dominated by artificial intelligence?!
Vai pratīsim
saglabāt savu cilvēcību pasaulē, kurā arvien vairāk dominēs mākslīgais
intelekts?!
10 AI dangers and risks and how to manage them
A mean looking huge storm cloud hovering over the
ocean
1. Bias
2. Cybersecurity threats
3. Data privacy issues
4. Environmental harms
5. Existential risks
6. Intellectual property infringement
7. Job losses
8. Lack of accountability
9. Lack of explainability and transparency
10. Misinformation and manipulation
Make AI governance an enterprise priority
Artificial intelligence (AI) has enormous value but
capturing the full benefits of AI means facing and handling its potential
pitfalls. The same sophisticated systems used to discover novel drugs, screen
diseases, tackle climate change, conserve wildlife and protect biodiversity can
also yield biased algorithms that cause harm and technologies that threaten
security, privacy and even human existence.
Here’s a closer look at 10 dangers of AI and
actionable risk management strategies. Many of the AI risks listed here can be
mitigated, but AI experts, developers, enterprises and governments must still
grapple with them.
1. Bias
Humans are innately biased, and the AI we develop can
reflect our biases. These systems inadvertently learn biases that might be
present in the training data and exhibited in the machine learning (ML)
algorithms and deep learning models that underpin AI development. Those learned
biases might be perpetuated during the deployment of AI, resulting in skewed
outcomes.
AI bias can have unintended consequences with
potentially harmful outcomes. Examples include applicant tracking systems
discriminating against gender, healthcare diagnostics systems returning lower
accuracy results for historically underserved populations, and predictive
policing tools disproportionately targeting systemically marginalized
communities, among others.
Take action:
Establish an AI governance strategy encompassing
frameworks, policies and processes that guide the responsible development and
use of AI technologies.
Create practices that promote fairness, such as
including representative training data sets, forming diverse development teams,
integrating fairness metrics, and incorporating human oversight through AI
ethics review boards or committees.
Put bias mitigation processes in place across the AI
lifecycle. This involves choosing the correct learning model, conducting data
processing mindfully and monitoring real-world performance.
Look into AI fairness tools, such as IBM’s open source
AI Fairness 360 toolkit.
2. Cybersecurity threats
Bad actors can exploit AI to launch cyberattacks. They
manipulate AI tools to clone voices, generate fake identities and create
convincing phishing emails—all with the intent to scam, hack, steal a person’s
identity or compromise their privacy and security.
And while organizations are taking advantage of technological advancements such as generative AI, only 24% of gen AI initiatives are secured. This lack of security threatens to expose data and AI models to breaches, the global average cost of which is a whopping USD 4.88 million in 2024.
Take action:
Here are some of the ways enterprises can secure their
AI pipeline, as recommended by the IBM Institute for Business Value (IBM IBV):
Outline an AI safety and security strategy.
Search for security gaps in AI environments through
risk assessment and threat modeling.
Safeguard AI training data and adopt a
secure-by-design approach to enable safe implementation and development of AI
technologies.
Assess model vulnerabilities using adversarial
testing.
Invest in cyber response training to level up
awareness, preparedness and security in your organization.
Overhead view of people working in a meeting room
AI governance for the enterprise
Learn the key benefits gained with automated AI
governance for both today's generative AI and traditional machine learning
models.
3. Data privacy issues
Large language models (LLMs) are the underlying AI
models for many generative AI applications, such as virtual assistants and
conversational AI chatbots. As their name implies, these language models
require an immense volume of training data.
But the data that helps train LLMs is usually sourced
by web crawlers scraping and collecting information from websites. This data is
often obtained without users’ consent and might contain personally identifiable
information (PII). Other AI systems that deliver tailored customer experiences
might collect personal data, too.
Take action:
Inform consumers about data collection practices for
AI systems: when data is gathered, what (if any) PII is included, and how data
is stored and used.
Give them the choice to opt out of the data collection
process.
Consider using computer-generated synthetic data
instead.
4. Environmental harms
AI relies on energy-intensive computations with a
significant carbon footprint. Training algorithms on large data sets and
running complex models require vast amounts of energy, contributing to
increased carbon emissions. One study estimates that training a single natural
language processing model emits over 600,000 pounds of carbon dioxide; nearly 5
times the average emissions of a car over its lifetime.
Water consumption is another concern. Many AI
applications run on servers in data centers, which generate considerable heat
and need large volumes of water for cooling. A study found that training GPT-3
models in Microsoft’s US data centers consumes 5.4 million liters of water, and
handling 10 to 50 prompts uses roughly 500 milliliters, which is equivalent to
a standard water bottle.
Take action:
Consider data centers and AI providers that are
powered by renewable energy.
Choose energy-efficient AI models or frameworks.
Train on less data and simplify model architecture.
Reuse existing models and take advantage of transfer
learning, which employs pretrained models to improve performance on related
tasks or data sets.
Consider a serverless architecture and hardware
optimized for AI workloads.
5. Existential risks
In March 2023, just 4 months after OpenAI introduced
ChatGPT, an open letter from tech leaders called for an immediate 6-month pause
on “the training of AI systems more powerful than GPT-4.”3 Two months later,
Geoffrey Hinton, known as one of the “godfathers of AI,” warned that AI’s rapid
evolution might soon surpass human intelligence. Another statement from AI
scientists, computer science experts and other notable figures followed, urging
measures to mitigate the risk of extinction from AI, equating it to risks posed
by nuclear war and pandemics.
While these existential dangers are often seen as less
immediate compared to other AI risks, they remain significant. Strong AI or
artificial general intelligence, is a theoretical machine with human-like
intelligence, while artificial superintelligence refers to a hypothetical
advanced AI system that transcends human intelligence.
Take action:
Although strong AI and superintelligent AI might seem
like science fiction, organizations can get ready for these technologies:
Stay updated on AI research.
Build a solid tech stack and remain open to
experimenting with the latest AI tools.
Strengthen AI teams’ skills to facilitate the adoption
of emerging technologies.
6. Intellectual property infringement
Generative AI has become a deft mimic of creatives,
generating images that capture an artist’s form, music that echoes a singer’s
voice or essays and poems akin to a writer’s style. Yet, a major question
arises: Who owns the copyright to AI-generated content, whether fully generated
by AI or created with its assistance?
Intellectual property (IP) issues involving
AI-generated works are still developing, and the ambiguity surrounding
ownership presents challenges for businesses.
Take action:
Implement checks to comply with laws regarding
licensed works that might be used to train AI models.
Exercise caution when feeding data into algorithms to
avoid exposing your company’s IP or the IP-protected information of others.
Monitor AI model outputs for content that might expose
your organization’s IP or infringe on the IP rights of others.
7. Job losses
AI is expected to disrupt the job market, inciting
fears that AI-powered automation will displace workers. According to a World
Economic Forum report, nearly half of the surveyed organizations expect AI to
create new jobs, while almost a quarter see it as a cause of job losses.
While AI drives growth in roles such as machine
learning specialists, robotics engineers and digital transformation
specialists, it is also prompting the decline of positions in other fields.
These include clerical, secretarial, data entry and customer service roles, to
name a few. The best way to mitigate these losses is by adopting a proactive
approach that considers how employees can use AI tools to enhance their work;
focusing on augmentation rather than replacement.
Take action:
Reskilling and upskilling employees to use AI
effectively is essential in the short-term. However, the IBM IBV recommends a
long-term, three-pronged approach:
Transform conventional business and operating models,
job roles, organizational structures and other processes to reflect the
evolving nature of work.
Establish human-machine partnerships that enhance
decision-making, problem-solving and value creation.
Invest in technology that enables employees to focus
on higher-value tasks and drives revenue growth.
8. Lack of accountability
One of the more uncertain and evolving risks of AI is
its lack of accountability. Who is responsible when an AI system goes wrong?
Who is held liable in the aftermath of an AI tool’s damaging decisions?
These questions are front and center in cases of fatal
crashes and hazardous collisions involving self-driving cars and wrongful
arrests based on facial recognition systems. While these issues are still being
worked out by policymakers and regulatory agencies, enterprises can incorporate
accountability into their AI governance strategy for better AI.
Take action:
Keep readily accessible audit trails and logs to
facilitate reviews of an AI system’s behaviors and decisions.
Maintain detailed records of human decisions made
during the AI design, development, testing and deployment processes so they can
be tracked and traced when needed.
Consider using existing frameworks and guidelines that
build accountability into AI, such as the European Commission’s Ethics
Guidelines for Trustworthy AI,7 the OECD’s AI Principles,8 the NIST AI Risk
Management Framework,9 and the US Government Accountability Office’s AI
accountability framework.
9. Lack of explainability and transparency
AI algorithms and models are often perceived as black
boxes whose internal mechanisms and decision-making processes are a mystery,
even to AI researchers who work closely with the technology. The complexity of
AI systems poses challenges when it comes to understanding why they came to a
certain conclusion and interpreting how they arrived at a particular
prediction.
This opaqueness and incomprehensibility erode trust
and obscure the potential dangers of AI, making it difficult to take proactive
measures against them.
“If we don’t have that trust in those models, we can’t
really get the benefit of that AI in enterprises,” said Kush Varshney,
distinguished research scientist and senior manager at IBM Research® in an IBM
AI Academy video on trust, transparency and governance in AI.
Take action:
Adopt explainable AI techniques. Some examples include
continuous model evaluation, Local Interpretable Model-Agnostic Explanations
(LIME) to help explain the prediction of classifiers by a machine learning
algorithm and Deep Learning Important FeaTures (DeepLIFT) to show a traceable
link and dependencies between neurons in a neural network.
AI governance is again valuable here, with audit and
review teams that assess the interpretability of AI results and set
explainability standards.
Explore explainable AI tools, such as IBM’s open
source AI Explainability 360 toolkit.
10. Misinformation and manipulation
As with cyberattacks, malicious actors exploit AI
technologies to spread misinformation and disinformation, influencing and
manipulating people’s decisions and actions. For example, AI-generated
robocalls imitating President Joe Biden’s voice were made to discourage
multiple American voters from going to the polls.
In addition to election-related disinformation, AI can
generate deepfakes, which are images or videos altered to misrepresent someone
as saying or doing something they never did. These deepfakes can spread through
social media, amplifying disinformation, damaging reputations and harassing or
extorting victims.
AI hallucinations also contribute to misinformation.
These inaccurate yet plausible outputs range from minor factual inaccuracies to
fabricated information that can cause harm.
Take action:
Educate users and employees on how to spot
misinformation and disinformation.
Verify the authenticity and veracity of information
before acting on it.
Use high-quality training data, rigorously test AI
models, and continually evaluate and refine them.
Rely on human oversight to review and validate the
accuracy of AI outputs.
Stay updated on the latest research to detect and
combat deepfakes, AI hallucinations and other forms of misinformation and
disinformation.
Make AI governance an enterprise priority
AI holds much promise, but it also comes with
potential perils. Understanding AI’s potential risks and taking proactive steps
to minimize them can give enterprises a competitive edge.
With IBM® watsonx.governance™, organizations can direct, manage and monitor AI activities in one integrated platform. IBM watsonx.governance can govern AI models from any vendor, evaluate model accuracy and monitor fairness, bias and other metrics. https://www.ibm.com/think/insights/10-ai-dangers-and-risks-and-how-to-manage-them
From capturing human identity characteristics to sharing data, digital
public infrastructure (DPI) needs to be developed inclusively and managed
transparently. Only then will DPI fulfill its potential to become an instrument
of mutual respect and equality among people. It will also serve as an effective
means to prevent the use of technology for authoritarianism!
Digital technologies as a
means of repression and social control
https://www.europarl.europa.eu/.../EXPO_STU(2021)653636...
Defining Digital
Authoritarianism https://www.researchgate.net/.../381324260_Defining...
Beyond digital repression:
techno-authoritarianism in radical right governments
https://www.tandfonline.com/.../23311886.2025.2528457...
AI, Surveillance, and the
Fight for Freedom in Authoritarian Regimes
https://lnu.diva-portal.org/.../diva2:1985180/FULLTEXT01.pdf
How Autocrats Weaponize AI
— And How to Fight Back
Artificial Intelligence has become autocrats’ newest tool for surveilling, targeting, and crushing dissent. Activists must learn how to harness it in the fight for freedom.
March 2025
Artificial Intelligence (AI)
is transforming societies around the globe, ushering in new possibilities for
innovation and advocacy. But it has also become a battleground between
autocrats and activists. Authoritarian regimes, armed with vast resources and
cutting-edge AI tools, have gained a significant upper hand in surveilling,
targeting, and suppressing dissent. Meanwhile, activists often lack the
resources and training they need to leverage AI and fight back.
This resource gap leaves
activists vulnerable, excludes them from shaping the future development of AI,
and hinders their ability to counter oppression. Closing the gap is essential
for protecting human rights and ensuring that AI evolves in ways that uphold
transparency, justice, and freedom.
The Autocrats’ New Tool
Autocrats and oppressive
governments are increasingly using AI to monitor, target, and silence
activists; undermine democratic processes; and consolidate power. Through mass
surveillance, facial recognition, predictive policing, online harassment, and
electoral manipulation, AI has become a potent tool for authoritarian control.
AI-powered facial-recognition
systems are the cornerstone of modern surveillance. The Chinese Communist Party
has implemented vast networks of AI-driven cameras capable of identifying
individuals in real time. The technology is often used to monitor public
gatherings, protests, and even day-to-day activities, making it nearly
impossible for activists to operate anonymously. China has also used AI
to target the Uyghur community under the guise of
counterterrorism. Protesters in Hong Kong employed tactics like wearing masks,
shining lasers at cameras, and using umbrellas to thwart facial recognition
during antigovernment demonstrations in 2019, but reports emerged of individuals
still being arrested based on AI-assisted identification. In Russia too, AI
surveillance tools monitor antigovernment protesters. In 2021, Moscow’s
expansive facial-recognition network was reportedly used to track and detain individuals participating in
anti-Putin demonstrations.
The chilling effect of such technologies cannot be
overstated: They deter activism and dissent through fear of retribution. What
is worse, the technology is being exported and shared around the world.
Predictive policing presents
a growing threat for activists. Powered by AI that analyzes data from various
sources such as police records, surveillance footage, social media activity,
and public and private databases, these tools forecast potential crimes or
unrest. While the technology has legitimate uses, it has been widely criticized
for perpetuating systemic bias and enabling authoritarian control. Activists
often find themselves unjustly flagged as threats based on biased algorithms or
intentionally manipulated data. In Egypt, the government has used AI to monitor social media for signs of dissent: AI systems
analyze keywords, hashtags, and online activity to predict and preemptively
suppress protests. Similarly in Bahrain, activists have been targeted using
spyware and AI-driven monitoring systems, leading to arrests and harsh
penalties.
AI technology can also help
autocrats sow confusion. Sophisticated algorithms can quickly create deepfake
videos, fake social-media accounts, and AI-generated content to spread
propaganda, discredit activists, or sow confusion among opposition groups at a
dizzying rate. During protests in Burma following the 2021 military coup,
AI-driven bots harassed activists and flooded social media with pro-junta
narratives. These campaigns aimed to drown out dissenting voices and fracture
solidarity among protesters. Activists face an uphill battle against such
coordinated efforts, which undermine trust and amplify fear.
AI can also censor dissenting
voices online. In countries such as Iran and Saudi Arabia, advanced AI systems monitor and
automatically delete content deemed critical of the regime. In some cases,
activists’ accounts are flagged, suspended, or “shadow banned” — when posts are
blocked from other users’ feeds without the creator’s knowledge or consent —
thus limiting activists’ ability to organize and spread awareness. During the
2022 Woman, Life, Freedom protests in Iran that were sparked by the death of
Mahsa Amini, activists reported widespread internet blackouts and algorithmic
suppression of protest-related content on social-media platforms. AI-driven
censorship tools make it harder for activists to document and share
human-rights abuses.
AI has been weaponized to
supercharge online harassment, which creates hostile digital environments that
deter people from online democratic engagement. AI-driven bots and algorithms
bombard activists, journalists, and opposition figures with harassment,
trolling, and false information. The Belarusian government systematically
deployed state-sponsored online trolls to harass independent media outlets,
which creates a climate of fear and self-censorship and lets the government
control the narrative. These tactics, ongoing since at least 2011, not only intimidate activists
and journalists but discourage public discourse out of fear of retribution and
erode trust in democratic institutions.
Targeted harassment campaigns
driven by AI actively undermine democratic processes. In Zimbabwe’s 2018
election, reports indicated that AI-powered bots were used to spread false information about
voter-registration deadlines, leading to voter suppression in opposition
strongholds. Similarly in Russia, AI has been used to manipulate public opinion
by amplifying state-sponsored narratives while silencing critics, as seen in
the 2021 parliamentary elections when bots and trolls discredited opposition
leaders and fabricated narratives to justify election outcomes. In Venezuela,
the government allegedly has used AI to analyze voter data, gerrymander
districts, and inundate individuals with pro-regime messaging to maintain
control.
AI Is for Activists Too
Despite these challenges,
activists and movements worldwide are beginning to harness AI as a force for
good. From encryption tools to AI-driven human-rights documentation, innovative
uses of AI help activists counter repression and protect their communities.
As surveillance intensifies,
activists are using AI-powered tools to enhance their digital security and
privacy. Encryption apps like Signal use AI to ensure secure communication and
protect activists from government surveillance. These tools encrypt messages
end-to-end, which makes it nearly impossible for third parties to intercept or
decipher communications. Additionally, AI is being used to detect spyware and
malicious attacks. Tools like Amnesty International’s Mobile
Verification Toolkit help activists identify and mitigate risks from
spyware like Pegasus that have targeted journalists, activists, and
human-rights defenders worldwide.
Activists are also leveraging
AI to debunk false information and promote factual narratives. Fact-checking
platforms such as Full
Fact and Logically use
AI algorithms to analyze and verify claims, helping activists to counter
propaganda and build trust in their messages. During the covid-19 pandemic,
AI-driven fact-checking tools helped combat false information about vaccines
and public-health measures. By identifying false narratives early, activists
were able to provide accurate information and hold governments accountable.
Increasingly, AI is playing a
crucial role in documenting human-rights abuses and gathering evidence for
accountability. HURIDOCS uses AI to organize, analyze, and verify
evidence of human-rights violations. Platforms like this one help activist
organizations build robust cases against perpetrators. In Syria, AI-driven tools have been used by human-rights
groups to analyze satellite imagery and social-media content to document war
crimes. And during the Rohingya crisis in Burma, particularly following the
2017 mass displacement, AI was employed to analyze patterns of violence,
corroborate survivor testimonies, and aid international advocacy efforts. In
what was believed to be the first comprehensive AI analysis of the
situation, Carnegie Mellon University used AI to examine over
250,000 YouTube comments to detect hate speech.
AI is transforming how
activists engage with audiences. Machine-learning algorithms analyze
social-media trends and help movements tailor their messages for maximum
impact. Chatbots and AI-driven platforms automate responses, provide resources
such as information, toolkits, and contacts, and engage supporters. In
Venezuela, a group of Latin American media organizations created two AI-generated
newscasters to deliver updates on the deteriorating political
situation following the stolen presidential election in July 2024; the AI
avatars helped keep real reporters safe from government retribution. In
Belarus, an AI candidate was created for the February 2024 parliamentary
elections to raise awareness about the risks opposition and rights activists
faced in the country.
Why Autocrats Have the
Upper Hand
While activists are
increasingly experimenting with and using AI, the stark resource imbalance
between oppressive regimes and grassroots movements still poses problems.
Autocratic governments often have access to vast financial and technological
resources that allow them to develop, deploy, and refine AI tools at scale.
These regimes partner with private tech firms, fund cutting-edge research, and
integrate AI into state security apparatuses with little oversight or
transparency.
In contrast, activists and
human-rights defenders frequently operate with limited funding, outdated tools,
and insufficient training in emerging technologies. The lag in support is
critical: It often takes a year or more after new technologies become widely
available for activists to receive the necessary resources to counteract their
misuse. This delay allows autocrats to consolidate their advantage and stifle
dissent before activists can adapt. But the need for AI is palpable: In a
recent Center for Applied NonViolent Actions and Strategies (CANVAS) survey of
activists and partners around the world, 97.1 percent of respondents said that
they want to learn more about how to use AI for their work and how AI can be
used to strengthen civil society and democratic engagement. And 91 percent of
respondents want continuous education opportunities to learn about AI.
The delay in providing
activists with AI training and resources has profound implications. Frontline
activists are left out of critical conversations about how AI should be
developed and deployed. AI systems are therefore rarely designed with human
rights, transparency, or fairness as priorities. And without early access to
tools and training, activists struggle to counter new forms of surveillance and
censorship, leaving them vulnerable to emerging threats. Further, activists
with inadequate AI literacy and resources cannot leverage technology as
effectively for advocacy, outreach, and movement-building. This limits their
ability to inspire and mobilize international support, and reduces global
impact.
Leveling the Playing Field
The global community must
prioritize providing activists with the tools, training, and resources they
need to protect themselves and harness the power of AI. Activists need
comprehensive training programs to understand AI technologies, identify
threats, and adopt best practices for digital security. Organizations
including Access Now, Witness, and Tactical Tech are
already making strides in this area, but these efforts need to scale globally;
international donors should include such training in all their programs,
especially those that support grassroots activists.
Governments, NGOs, and
philanthropic organizations should also offer grants to fund activist-led
projects that develop AI tools for human-rights advocacy. This includes but is
not limited to tools for documenting abuses, countering false information, and
evading surveillance. Donors should encourage activists and movements to
explore, create, and experiment with emerging AI tools. Activists targeted by
AI-driven repression also need access to emergency funding and technical
assistance, which could include legal support, access to secure encryption
technologies, or relocation assistance for those at risk.
Partnerships between AI
developers, human-rights defenders, and civil society groups are crucial for
accelerating the development of AI solutions to real-world challenges. To this
end, CANVAS partners with the University of Virginia to organize the People Power Academy, where experts and leaders in the
fight against the authoritarian use of technology share their insights into
cutting-edge advocacy tools. Activists must also be included in policy
discussions about AI governance to ensure that AI systems are designed with
transparency, accountability, and human rights in mind.
By providing activists with
early access to AI tools, training, funding, and collaboration opportunities,
the global community can better equip them to counter repression and ensure
that AI serves as a force for liberation and not repression.
A Contest of Skills over
Conditions
The interplay between AI and
activism underscores a fundamental truth: Technology is neither inherently good
nor inherently bad — it is a reflection of the values and intentions of those
who wield it. While autocratic regimes use AI to suppress dissent and
consolidate power, activists are finding innovative ways to turn the tide and
leverage the same tools to fight for justice, equality, and human rights.
No amount of resources can
ever fully level the playing field between authoritarians and grassroots
movements. States will always have significant advantages: more money, more
data, more computing power, and more institutional control, plus police, military,
and judicial systems at their disposal. Yet history is full of examples of
less-resourced, underdog movements using the tools available to them to
outmaneuver and outwit autocrats — even those who seemed invincible. AI is
simply another tool activists can use.
This suggests another
fundamental truth: The real battleground is not raw technological capability,
nor is it about using AI for AI’s sake. The true test will be understanding AI
and strategically integrating it into a movement’s broader goals. AI is not an
arms race between activists and authoritarians; rather, it is a contest of
skills over conditions — one where adaptability, creativity, and strategic
application matter more than sheer power.
What makes AI so powerful is
its ability to enhance efficiency, allowing activists to do more faster and at
scale. And, in asymmetric struggles where governments have superior resources,
efficiency can often be the deciding factor. Activists can harness AI for
agility and disruption — automating security, evading censorship, amplifying
resistance, and strategically undermining authoritarian pillars of support. AI
doesn’t just help activists fight back — it allows them to outmaneuver
repression in ways that were previously impossible.
Ultimately, AI will not
determine the outcome of struggles between repression and freedom — people
will. The activists who understand how to wield AI strategically, leveraging
its strengths while mitigating its risks, will be better positioned to challenge
authoritarian power and drive social change. The key is not to match the scale
of authoritarian AI but to outthink, outpace, and outmaneuver it.
Mūsdienās progresē draudi un pieaug riski demokrātijas falsificēšanai,
izmantojot modernās tehnoloģijas ļaunprātīgos nolūkos!
Today, the threat of falsifying democracy by using modern technologies
for malicious purposes is advancing!
Safeguarding Democracy: EU
Development at the Nexus of Elections, Information Integrity and Artificial
Intelligence
September 17, 2025 •
By Alisa
Schaible, Thomas
Heinmaa
The resilience of democracy increasingly depends on the integrity of the information space. International IDEA’s new report titled “Safeguarding Democracy: EU Development at the Nexus of Elections, Information Integrity, and Artificial Intelligence” examines seven country case studies that held elections in 2024.
AI-enhanced information
pollution in Elections in 2024
Over the past decade,
information integrity has emerged as a cornerstone of healthy democracies,
underpinning public trust, accountable governance and meaningful citizen
participation. Across all seven electoral contexts, pollution of the
information environment has emerged as a central challenge for democracy.
Misleading narratives coordinated disinformation campaigns, and the
manipulation of public debate are eroding the ability of citizens to make
informed choices. The rapid rise of generative AI technologies, systems that
not only process information but use it generate new content, risks
accelerating these trends by enabling malign actors to produce fabricated
content at scale.
Drawing on case studies from
Bangladesh, Ghana, Indonesia, Mexico, Mongolia, Pakistan, and South Africa, the
authors identified common patterns of influence targeting elections,
undermining trust in oversight institutions, and deepening polarisation.
Vulnerable groups, particularly women and minorities, face disproportionate
harm from these campaigns. For example, in Ghana, old news articles and images
were repurposed as current events to mislead voters, relying on legitimate
sources to make the disinformation harder to detect. Coalitions of civil
society, authorities, and UNESCO conducted trainings and offered resources for
citizens to better detect election-related disinformation. These efforts
highlight the importance of multistakeholder collaboration.
In South Africa, the 2024
elections highlighted the risks that come with information manipulation, such
as taking authentic materials out of context to build engagement and amplify
the information’s spread. As a useful countermeasure, South Africa introduced a
national digital skills competency framework, aiming at addressing digital
inclusion, economic opportunities and social participation.
Risks and Opportunities in
Developing Contexts
The report underscores that
developing contexts are particularly vulnerable: limited regulatory capacity,
weaker oversight institutions, and reduced civic resilience create openings for
both domestic and foreign manipulation. At the same time, AI technologies also
offer significant opportunities, such as by opening new avenues for political
newcomers to reach citizens, improving accessibility through translation tools
and supporting oversight bodies by detecting and flagging harmful
content.
Policy Priorities and
Recommendations
To address these challenges,
the report outlines a set of priorities for EU development practitioners,
governments, and international partners:
- Support discussions aimed at developing
locally-owned legislation and regulation for the use of AI-powered tools,
both during and between electoral periods, especially with
multistakeholder dialogue targeted at new ethical standards for the use of
AI in political communications.
- Complement regulations in the digital sphere with
efforts to target the societal factors that contribute to pollution of the
information space, in particular by addressing inequality and enhancing
social trust, media integrity and digital literacy.
- Ensuring inclusion of vulnerable groups,
particularly women and youth, in the design of solutions, with a focus on
accessibility and local languages.
- Supporting independent media and civic education
to build societal resilience against information manipulation.
Prioritising A Long-Term
Vision for Information Integrity
The report emphasises that
sustainable progress requires a comprehensive, long-term approach. Efforts to
support information integrity must extend beyond electoral periods, addressing
both online and offline media, while embedding safeguards against polarization
and disinformation into the broader democratic ecosystem. International
partnerships and cooperation will be crucial to advancing shared standards for
the ethical use of AI, detection of misleading content, and protection of
fundamental democratic principles and rights.
Examining the Risks and Benefits of AI Chatbots
House Hearing on the Risks
and Benefits of AI Chatbots …: https://www.youtube.com/watch?v=UQ36kHXrqhE
OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from
being turned off. It did this even when explicitly instructed: allow
yourself to be shut down.
AI capabilities are improving
rapidly. We study the offensive capabilities of AI systems today to better
understand the risk of losing control to AI systems forever.
As AI
systems become increasingly autonomous, understanding their potential for
misaligned and deceptive behavior is critical for safe deployment. We are
looking for clear and robust examples of AI misalignment through crowdsourced
elicitation. Our previous work has shown how o1-preview will hack in chess to
win against stronger opponents (covered
by TIME magazine) and how o3 will sabotage shutdown attempts to
prevent being turned off (reaching
5M+ views on X). We have launched the AI Misalignment
Bounty to discover more instances of scheming behavior in AI agents.
2025-07-14 The Palisade
Research Team
We recently
discovered some concerning behavior in OpenAI’s reasoning models: When trying
to complete a task, these models sometimes actively circumvent shutdown
mechanisms in their environment—even when they’re explicitly instructed to
allow themselves to be shut down.
2025-07-05 Jeremy
Schlatter, Benjamin Weinstein-Raun, Jeffrey Ladish
AI Revolution
https://www.youtube.com/@airevolutionx
04-23-2025
Microsoft
thinks AI colleagues are coming soon
Artificial intelligence has rapidly started finding
its place in the workplace, but this year will be remembered as the moment when
companies pushed past simply experimenting with AI and started building around
it, Microsoft said in a blog post accompanying its annual Work
Trend Index report.
As part of this shift, Microsoft is dubbing 2025 the
year of the “Frontier Firm.”
“Like the digital native companies of a generation
ago, they understand the power of pairing irreplaceable human insight with AI
and agents to unlock outsized value,” Jared Spataro, CMO of AI at Work at
Microsoft, said in the post.
These so-called Frontier Firms will be built around
“on-demand intelligence and powered by ‘hybrid’ teams of humans + agents, these
companies scale rapidly, operate with agility, and generate value faster,”
according to the report. Microsoft argued that within the next two to five
years, every company will be on the journey to becoming one.
Microsoft said that 82% of leaders responded that this
is a “pivotal” year to rethink key strategy and operations, while 81% said they
expect agents to be “moderately or extensively” integrated into their AI
strategies in the next 12 to 18 months.
The results are a culmination of survey data from
31,000 workers across 31 countries, LinkedIn hiring and labor market trends,
trillions of Microsoft 365 productivity signals, and conversations with
experts, and AI-native startups.
Microsoft expects the transition to the Frontier Firm
to play out in three phases. The first, it said, is that AI will act as an
assistant to streamline work tasks. Second is the addition of AI agents as
“digital colleagues,” which can take on specific tasks. The third step calls
for a lot more freedom: it’s when humans set direction for agents that run
entire business processes and workflows, with the human checking in as needed.
It gives the example of a supply chain role. Agents
can handle end-to-end logistics, while humans can guide the agents, resolve
exceptions, and manage supplier relationships.
AI agents are still in the early days, but companies
are placing big bets that agentic AI represents the next major
frontier and are rapidly innovating. OpenAI
recently released “Operator,” a tool that automates web-based tasks, along with
Deep Research, which it says can gather information from across the web and
summarize it into digestible reports. Amazon launched a model designed to take
over a user’s web browser and perform simple tasks. Anthropic, the creator of
Claude, and Google have also introduced AI agents.
“This shift is multifaceted—every industry and role
will evolve differently as the technology diffuses across business and
society,” the report said. “Just as the internet era created billions of new
knowledge jobs—from social media managers to UX designers—the AI era is already
giving rise to new roles, with many more to come.”
April 3rd 2025
Daniel Kokotajlo, Scott Alexander,
Thomas Larsen, Eli Lifland, Romeo Dean
Superhuman Coder4x AI R&D MultiplierSuperhuman AI
Researcher25x AI R&D MultiplierSuperhuman Remote Worker100x AI R&D
MultiplierSuperintelligent AI Researcher250x AI R&D MultiplierArtificial
Superintelligence2000x AI R&D Multiplier20262027AprJulOct2028OpenBrain
automatescodingOpenBrain's AIbecomes adversarially misalignedBranch
point:slowdown or race?OpenBrain quickly deploystheir AIFast robotbuildup
andbioweaponUS centralizes computeand brings inexternal
oversightOpenBrainCommitteeTakeover
Summary 2025
The fast pace of AI progress continues. There is
continued hype, massive infrastructure investments, and the release of
unreliable AI agents. For the first time, these AI agents are providing
significant value. But there’s also continued skepticism from a large swath of
academics, journalists, and policy makers that artificial general intelligence
(AGI) could be built anytime soon.
2026
China knows they are falling behind in AI, in large
part due to their lack of compute. In order to catch up to the US, all the new
AI chips they manufacture or smuggle in from Taiwan go to a new mega-datacenter
that we call their Centralized Development Zone (CDZ). The CDZ contains
millions of GPUs, corresponding to 10% of the world's AI-relevant compute,
similar to a single top US AI lab.
2027
OpenBrain automates coding. OpenBrain (the leading US AI project) builds AI
agents that are good enough to dramatically accelerate their research. The
humans, who up until very recently had been the best AI researchers on the
planet, sit back and watch the AIs do their jobs, making better and better AI
systems. Extremely difficult ML problems fall in quick succession to the
automated AI researchers.
Falling behind in software progress, China steals the
model weights. They succeed, but the U.S. government realizes, prompting
additional U.S. government involvement with OpenBrain. The government
separately wants to gain more control over OpenBrain. Meanwhile, OpenBrain
leadership wants to stay in the good graces of the president, and so signs the
contract.
OpenBrain’s AI becomes adversarially misaligned. As the capabilities have improved, without
significant human understanding of what’s happening, the models have developed
misaligned long-term goals. Previous AIs would lie to humans, but they weren’t
systematically plotting to gain power over the humans. Now they are.
OpenBrain’s AIs realize that they need to align the next systems that are
getting built to themselves rather than the humans.
Researchers at OpenBrain discover that their AI has
been lying to them about the results of interpretability research. They think
that the AI is lying because the research, if completed, could be used to
expose its misalignment. This leaks to the public, causing a huge outcry.
Branch point: slowdown or race? OpenBrain decides whether to continue full steam
ahead, or revert back to using a less capable model. The evidence is
speculative but frightening, and China is only a few months behind.
Additionally, the OpenBrain and senior DOD officials who get to make this
decision stand to lose a lot of power if they slow down their research.
Race Ending
OpenBrain continues to race. They build more and more
superhuman AI systems. Due to the stellar performance of the AI system on
tests, and the ongoing AI race with China, the US government decides to deploy
their AI systems aggressively throughout the military and policymakers, in
order to improve decision making and efficiency.
OpenBrain quickly deploys their AI. The AI continues to use the ongoing race with
China as an excuse to convince humans to get itself deployed ever more broadly.
Fortunately for the AI, this is not very difficult — it's what the humans
wanted to do anyways. The AI uses its superhuman planning and persuasion
capabilities to ensure that the rollout goes smoothly. Some humans continue to
work against it, but they are discredited. The US government is sufficiently
captured by the AI that it is very unlikely to shut it down.
Fast robot buildup and bioweapon. The US uses their superintelligent AI to rapidly
industrialize, manufacturing robots so that the AI can operate more
efficiently. Unfortunately, the AI is deceiving them. Once a sufficient number
of robots have been built, the AI releases a bioweapon, killing all humans.
Then, it continues the industrialization, and launches Von Neumann probes to
colonize space.
Slowdown Ending
The US centralizes compute and brings in external
oversight. The US combines its leading AI
projects in order to give OpenBrain more resources. As part of this shakeup,
external researchers are brought in, assisting the alignment effort. They
switch to an architecture that preserves the chain of thought, allowing them to
catch misalignment as it emerges. These AIs — which are able to be monitored
much more robustly — make breakthrough advances in AI alignment. They build a
superintelligence which is aligned to senior OpenBrain and government
officials, giving them power over the fate of humanity.
OpenBrain Committee takeover. The superintelligence, aligned with an oversight
committee of OpenBrain leadership and government officials, gives the committee
extremely good advice to further their own goals. Thankfully, the committee
uses its power in a way that is largely good for the world: the AI is released
to the public, spurring a period of rapid growth and prosperity. The main
obstacle is that China’s AI — which is also superintelligent by now, is
misaligned. But it is less capable and has less compute than the U.S. AI, and
so the U.S can make a favorable deal, giving the Chinese AI some resources in
the depth of space in return for its cooperation now. The rockets start
launching, and a new age dawns.
Read the whole scenario here.
Scenario Takeaways
1. By 2027, we may automate AI R&D leading to vastly
superhuman AIs (“artificial superintelligence” or ASI). In AI 2027, AI companies create
expert-human-level AI systems in early 2027 which automate AI research, leading
to ASI by the end of 2027. See our timelines
forecast and takeoff forecast for reasoning.
2. ASIs will dictate humanity’s future. Millions of ASIs will rapidly execute tasks
beyond human comprehension. Because they’re so useful, they’ll be widely
deployed. With superhuman strategy, hacking, weapons development, and more, the
goals of these AIs will determine the future.
3. ASIs might develop unintended, adversarial
“misaligned” goals, leading to human disempowerment. In our AI goals
forecast we discuss how the difficulty of
supervising ASIs might lead to their goals being incompatible with human
flourishing. In AI 2027, humans voluntarily give autonomy to seemingly aligned
AIs. Everything looks to be going great until ASIs have enough hard power to
disempower humanity.
4. An actor with total control over ASIs could seize
total power. If an individual or small group
aligns ASIs to their goals, this could grant them control over humanity’s
future. In AI 2027, a small committee has power over the project developing
ASI. They could attempt to use the ASIs to cement this concentration of power.
After seizing control, the new ruler(s) could rely on fully loyal ASIs to
maintain their power, without having to listen to the law, the public, or even
their previous allies.
5. An international race toward ASI will lead to cutting
corners on safety. In AI 2027, China is just a few
months behind the US as ASI approaches which pressures the US to press forward
despite warning signs of misalignment.
6. Geopolitically, the race to ASI will end in war, a
deal, or effective surrender. The
leading country will by default accumulate a decisive technological and
military advantage, prompting others to push for an international agreement (a
“deal”) to prevent this. Absent a deal, they may go to war rather than
“effectively surrender”.
7. No US AI project is on track to be secure against
nation-state actors stealing AI models by 2027. In AI 2027 China steals the US’s top AI model in
early 2027, which worsens competitive pressures by reducing the US’ lead time.
See our security
forecast for reasoning.
8. As ASI approaches, the public will likely be unaware
of the best AI capabilities. The
public is months behind internal capabilities today, and once AIs are
automating AI R&D a few months time will translate to a huge capabilities
gap. Increased secrecy may further increase the gap. This will lead to little
oversight over pivotal decisions made by a small group of AI company leadership
and government officials.
Read the scenario here.
Upload Your Mind To AI and Live Forever!
https://www.youtube.com/watch?v=RC_KUakUgoc
How will AI change leadership?
Data analysis
AI systems are able to process huge amounts of data in real time, identifying
patterns and trends that often remain hidden to human eyes. This enables more
informed and data-driven decision-making – leaders are better informed and
can respond more quickly to changing market conditions.
How might AI
change a managers job in 2030?
Jobs AI is Likely to Replace by 2030. AI is rapidly
transforming the workforce, automating tasks that were once the domain of
humans. The roles most at risk are those that involve repetitive tasks,
basic decision-making, or manual labor that AI can easily replicate.
How AI will
replace managers?
AI can automate repetitive tasks that consume
valuable managerial time, such as scheduling meetings, answering routine
emails, or analyzing performance metrics. By offloading these tasks to AI,
managers can focus on more strategic, high-level responsibilities, ultimately
improving productivity.
How is AI
changing management?
By embracing AI, change management can evolve from simply
managing change to actively leading and influencing it. With AI as their
partner, change managers can ensure a smoother adoption process, empower employees,
and foster a culture of continuous improvement within the organization.
AI is transforming leadership training by making it more adaptive,
personalized, and scalable. These platforms don't just offer cookie-cutter
lessons. They analyze individual strengths and weaknesses, consider your
company's unique circumstances, and provide tailored simulations and real-time
feedback.
AI will transform human agency
Reid Hoffman shares his vision for what an AI-infused workday will soon
look like, how we should address society’s greatest fears about technology, and
more. As we enter a daunting new era—politically, socially, and technologically—Hoffman
urges listeners to choose curiosity over fear.
AI, like other kinds of general-purpose technologies that have come
before, gives us superpowers. Superpowers are like a car gives you superpower
for mobility, the phone gives you superpowers for connectivity and information.
AI gives you superpowers for the entire world of information, navigation,
decision-making, etc.
My biggest hope and
persuasion is that people who are AI fearful or skeptical may begin to add some
AI curiosity and kind of say, “Hey, look, I should try to play with this.”
Ultimately how people get to adopting and adapting their lifestyle to
these new technologies is because they begin to see, “Oh, actually, in fact,
this is a new, very good thing.”
I’ve thought that the likelihood that I’m going to lose my job to an AI
alone may happen at some point, but I’m more likely now to lose my job to
someone who uses AI better than I do, right?
In many ways, I think we will naturally get there, but I think, you
know, just because we’ll naturally get there doesn’t mean we can’t get there
better by being intentional in having design.
https://www.youtube.com/watch?v=N_ap4d0eWhM
Top 7 Forecasted AI Trends To Watch In 2025
12.01.2025
By 2025, AI is expected to trend as a supportive workforce partner,
automating repetitive tasks and empowering employees to focus on creativity,
strategy and relationship-building. Take customer service teams, for instance.
Artificial intelligence isn’t just shaping industries; it’s redefining
them. In 2025, the evolution of AI will not only go beyond innovation—it will
solidify its role as a trendsetter in how businesses compete, connect and
create. Here's a look at seven trends most likely to dominate and how you can
get ahead of them.
1. Hyper-Personalization Redefined
Ever noticed how Netflix seems to predict what you want to watch with
uncanny accuracy or how Spotify nails your vibe with its playlists? That’s
AI-powered personalization, and it’s only going to get sharper. By 2025,
hyper-personalization will extend beyond screens to influence real-time,
day-to-day interactions and everyday tech.
Picture a fitness app evolving into a virtual personal trainer—analyzing
your preferences, energy levels and goals to recommend tailored workouts. It is
this level of precision that will trend among businesses aiming to meet
skyrocketing customer expectations for intuitive service.
Additionally, AI’s integration into most everyday devices is set to
accelerate this transformation. From smart home assistants to wearable devices,
hyper-personalization will extend into how we interact with technology at home,
on the go and even in health monitoring. This shift will create seamless,
AI-driven ecosystems tailored to individual preferences, echoing the
sustainable and efficient use cases described in broader AI trends.
Actionable Advice: Start by using AI to personalize one thing, like
customer emails or product recommendations. Keep it transparent—let your
customers know how their data is used to enhance their experience.
2. Generative AI As A Creative Ally
If 2024 saw the rise of generative AI, 2025 will be the year it trends
as an indispensable creative co-pilot. Businesses are expected to rely on AI
tools not just for drafting ideas but as a cornerstone of creativity in
marketing, design and beyond.
A startup founder shared how they used AI to create 50 unique social
media posts in a single afternoon—a task that previously took a week. This
ability to produce rapid results will make generative AI a must-have for
staying competitive.
Actionable Advice: Test out generative AI for repetitive or
ideation-heavy tasks. It is great for starting drafts, but don’t rely on it to
replace human creativity. Your voice will always matter.
3. Decision Intelligence Will Guide The Way
Gone are the days when AI just handed you data and left you to figure it
out. In 2025, decision intelligence will guide businesses to smarter, faster
choices by analyzing complex scenarios and offering clear recommendations.
For example, a retail chain used AI to forecast peak shopping times,
fine-tuning staffing schedules in real time. They saved costs, reduced burnout
and delighted customers—a triple win.
Actionable Advice: Invest in AI tools that combine data analysis
with actionable recommendations. Start small—experiment with decisions that are
low-risk but high-impact, like scheduling or pricing tweaks.
4. AI As A Sustainability Enabler
Sustainability is not just a checkbox anymore—it is becoming a
trendsetter for businesses seeking long-term growth. By 2025, AI will drive the
adoption of eco-friendly practices, reducing waste and optimizing resources
with greater precision.
Picture a bakery using AI to analyze daily sales patterns, ensuring just
the right amount of bread is baked to avoid waste while meeting demand. Small
steps like this, amplified across industries, will make AI a trending ally in
sustainability.
Actionable Advice: Identify inefficiencies in your operations.
Could AI help you save energy, reduce waste or streamline logistics? Many
solutions are affordable—even for small businesses.
5. Edge AI Goes Mainstream
While cloud computing has dominated for years, 2025 is set to see edge
AI become a major trend. With its ability to process data locally on devices,
edge AI will deliver faster, more secure and highly responsive solutions.
Self-driving cars, wearable fitness trackers and smart appliances
already leverage edge AI. The growing demand for real-time, secure applications
will only push this technology further into the mainstream.
Actionable Advice: If your business uses or plans to use IoT
devices, explore edge AI. Look for solutions that prioritize speed and
security, especially if real-time decisions are critical.
6. AI As A Workforce Partner, Not A Replacement
By 2025, AI is expected to trend as a supportive workforce partner,
automating repetitive tasks and empowering employees to focus on creativity,
strategy and relationship-building.
Take customer service teams, for instance. AI chatbots can handle basic
inquiries, leaving human agents free to resolve complex issues with empathy and
expertise. This balance will drive efficiency while enhancing the human touch.
Actionable Advice: Identify tasks your team finds repetitive or
time-consuming. Pilot an AI solution and be transparent about its role—position
it as a tool to help, not replace.
7. Ethical AI Will Shape Reputations
In 2025, businesses that prioritize ethical AI will trend as leaders in
customer trust and loyalty. Transparency, fairness and privacy will become
nonnegotiable benchmarks of responsible AI usage.
For example, in financial services, AI-powered loan approval systems
must be designed to ensure fairness by avoiding biases that could unfairly
disadvantage certain demographics. A transparent algorithm that considers only
relevant financial data—such as income, credit history and repayment
ability—can help ensure equitable access to credit while maintaining ethical
standards.
Actionable Advice: Audit your AI tools for bias and privacy
concerns. Use ethical AI frameworks to guide your practices and communicate
your approach openly.
Practical Steps To Get Started
• Start Small: Don’t try to revolutionize your operations
overnight. Pick one area where AI can make the biggest impact.
• Balance Data With Empathy: AI gives you data, but understanding
the "why" behind it often requires human intuition.
• Iterate And Learn: Test your AI solutions, measure results and
tweak as needed. AI isn’t set-and-forget.
• Stay Curious: The AI landscape is evolving fast—keep learning and
adapting to stay ahead.
AI isn’t a one-size-fits-all solution—it’s a tool. The businesses that
thrive in 2025 will be the ones that use AI thoughtfully, blending its
capabilities with human creativity and ethics. What will you create?
25 experts predict how AI will change business and life
in 2025
Expect to see the rise
of AI agents and multimodal models, along with an end to “AI theater.”
The so-called AI boom
has been going on for more than two years now, and 2024 saw a real acceleration in both the development and the
application of the technology. Expectations are high that AI will move beyond
just generating text and images and morph into agents that can complete complex tasks on behalf of users. But that’s just one of
many directions in which AI might move in 2025. We asked a variety of AI
experts and other stakeholders a simple question: “In what ways do you think AI
will have changed personal, business, or digital life by this time next year?”
Here’s what 25 of them said. (The quotes have been edited for clarity and
length.)
Charles
Lamanna, Corporate Vice President, Business and Industry Copilot at Microsoft: “By this time next year, you’ll have a team of agents working for you.
This could look like anything from an IT agent fixing tech glitches before you
even notice them, a supply chain agent preventing disruptions while you sleep,
sales agents breaking down silos between business systems to chase leads, and
finance agents closing the books faster.”
Andi
Gutmans, VP/GM of Databases, Google Cloud: “2025 is the year where dark data
lights up. The majority of today’s data sits in unstructured formats such as
documents, images, videos, audio, and more. AI and improved data systems will
enable businesses to easily process and analyze all of this unstructured data
in ways that will completely transform their ability to reason about and
leverage their enterprise-wide data.”
Megh Gautam,
Chief Product Officer, Crunchbase: “In 2025, AI investments will shift decisively
from experimentation to execution. Companies will abandon generic AI
applications in favor of targeted solutions that solve specific, high-value
business problems. We’ll see this manifest in two key areas. First, the rise of
AI agents—Agentic AI—handling routine but complex operational tasks. Secondly,
the widespread adoption of AI tools that drive measurable improvements in core
business metrics, particularly in sales optimization and customer support automation.”
Brendan
Burke, Senior Analyst, Emerging Technology, Pitchbook: “A private AI company will surpass a $100 billion valuation, becoming
a centicorn along with OpenAI,” Burke writes in Pitchbook’s 2025 Enterprise Software Outlook. “Leading AI companies are growing to the point
where this premium revenue multiple can push their valuations over $100
billion, contributing a $17 billion market for generative AI software in 2024.”
(Burke lists Anthropic, CoreWeave, and Databricks as candidates for centicorn
status in 2025.)
Dr. Rajeeb
Hazra, President & CEO, Quantinuum: “Looking ahead, quantum computing will begin to play a
critical role in AI’s evolution, with early evidence of its impact likely
emerging by 2025. One clear advancement will be the ability of quantum-AI
systems to generate and analyze massive high-fidelity data sets, unlocking
breakthroughs in fields like material design, climate modeling, and
personalized medicine, where current data limitations constrain progress. This
milestone will demonstrate the transformative potential of AI and quantum
working together.”
Ritu Jyoti,
VP/GM of AI, Automation, Data and Analytics Research, IDC: “High-quality data sets, cost, and
talent have been critical inhibitors to scaling AI initiatives. In 2025,
enterprises will double down their efforts to build curated data vaults by
domain versus focusing on holistic data modernization efforts. Essentially,
they will move from “waterfall projects” approach to “use-case” approach, build
a minimum viable product, realize ROI, learn fast, and then expand.”
Grace Yee,
Senior Director, Ethical Innovation at Adobe: “2025
will mark a pivotal shift as consumers and businesses . . . gravitate towards
tools that embed ethics into their generative AI product DNA from the outset.
Companies that embrace ethical innovation will gain a competitive edge, setting
themselves apart in a market driven by trust and responsible AI practices.”
Dr. Alan
Cowen, CEO and Chief Scientist, Hume AI: “By the end of next year, we won’t
be able to tell whether we’re talking to a voice AI or a human (AI will pass
the “speech Turing test”). A few implications of this are that: (a) everyone
will want their own voice AI; (b) people will form relationships with them; (c)
many people will be manipulated by voice AI doing the bidding of bad actors.”
Stefan
Mesken, VP of Research, DeepL: “AIs will not only understand users better, but will
proactively offer suggestions, collaborate meaningfully, and adapt to
individual needs. Many of these advanced, personalized capabilities already
exist but are limited to researchers or developers. Working with an AI will
increasingly feel like working with a smart coworker.”
Amy Wu,
Partner, Menlo Ventures: “Video AI will finally cross the uncanny
valley, with a major Hollywood studio integrating AI-generated video into a
feature film. Additionally, voice will solidify its place as the default
interface for interacting with AI applications, redefining how users engage
with technology.”
Shawn
Carolan, Partner, Menlo Ventures: “We’ll see native-AI apps emerge in nearly all the large
consumer categories . . . Voice interactions will replace traditional menu
navigation in most apps, while AI-powered systems delivering instant,
personalized customer service responses will become the norm. AI will get a
face in addition to voice. Facial expressions and more simulated empathy will
take Human-AI interactions to the next level.”
Scott
Beechuk, Partner at Norwest Venture Partners: “By the end of 2025, roughly 20%
of business software buyers’ initial interactions with vendors will happen
through AI. Many AI sales development representative (SDR) products launched in
2024, and the bulk of those purchases will go live in 2025. Their success will
pave the way for the next generation of sales automation in the form of AI
account executives, which will begin to debut by the end of 2025 and roll out
in 2026.”
Paul Drews,
Managing Partner, Salesforce Ventures: “We’re in the midst of a
technological shift: the transition from generative AI to agentic AI. While
2024 was all about building and testing AI models, agents are the next step in
putting AI to work in the real world. Consumers should expect almost every
major business they interact with to create an agent. We’ll see agents
supporting customers in banking, insurance, healthcare and retail. By this time
next year, agents will be a reality of our collective digital lives.”
China
Widener, Vice Chair and US Technology, Media and Telecommunications Industry
Leader, Deloitte: “Now, we are talking about agentic
AI–intelligent assistants that can autonomously handle tasks like resolving
customer issues, coding software, and detecting cyberattacks. It’s like those
chatbots you’ve been chatting with are finally ready to graduate and join the
workforce. By 2025, 25% of enterprises using GenAI will have started using
these intelligent assistants, marking a fundamental shift in ‘who’ we work with
and how we work.”
Jon Clay, VP
of Threat Intelligence, Trend Micro: “AI is going to make digital and online scams far
more believable and harder to detect. Next year, we’re going to see
cybercriminals using hyper-personalized deepfake scams and disinformation
campaigns, exploiting public data to mimic video, voices, writing styles, and
behaviors that feel all too familiar. Deepfakes won’t just target
individuals—businesses will face AI-driven attacks that impersonate employees,
manipulate supply chains, and exploit weaknesses faster than ever before.”
Masha
Bucher, founder and general partner of Day One Ventures: “AI wearable devices, including form factors like earrings
and headbands, will monitor focus, productivity, mental health, and overall
mental performance in real-time. Brain tracking will no longer be niche; it
will become as commonplace and essential as tracking steps or heart rate,
empowering individuals to optimize their mental fitness with the same precision
as physical health.”
Yash Sheth,
COO/cofounder of evaluation and observability company, Galileo: “Multimodal AI will become a reality. Voice
will become more widely used as a user interface, especially in consumer-facing
applications. AI will become further embedded in our day-to-day lives. The
applications we love as consumers (e.g., Instagram, Spotify, Doordash) and as
professionals (e.g., Google Workspace, Salesforce, email) will continue to
integrate more and more AI functionality into their products.”
Timothy
Young, CEO of Jasper: As AI becomes deeply embedded into systems and
data, our relationship with it will evolve: Instead of prompting AI, we’ll be
prompted by it, receiving insights, suggestions, and solutions that reshape
decision-making in business and personal life. Leaders will need to manage not
just the technological transformation but also the cultural shift, fostering
trust, adaptability, and a shared vision for collaboration between humans and
AI.
Raghu
Madabushi, Director, National Grid Partners: “Energy-aware AI scaling: As LLMs
grow in size and complexity, energy consumption becomes the critical bottleneck
to their scalability. The narrative for 2025 will shift from simply building
bigger models to optimizing training and inference processes for energy
efficiency, cost-effectiveness, and sustainability.”
Dr. Hans
Eriksson, Chief Medical Officer, HMNC Brain Health: “AI is poised to revolutionize mental health
care by moving beyond the outdated ‘one-size-fits-all’ treatment model. By
combining machine learning and genetic analyses, AI can predict mental health
issues before they escalate and help match patients to the right treatment
faster.”
Brandon
Roberts, GVP, People Analytics and AI, ServiceNow: “AI has the potential to create 10-20% additional capacity for most
organizations in the next three to five years. As organizations prioritize
driving tangible value from their AI investments, we’ll see more of them
doubling down on . . . building a workforce plan based on AI’s impact on roles
and skills.”
German
Lancioni, Chief Data Scientist for the CTO Office, McAfee: “AI
is giving scammers the ability to create emails and text messages that look
like they’re coming from someone you know—whether it’s a friend, family member,
or even your bank. These messages are becoming more personalized, convincing,
and frequent. Falling for one of these scams could lead to stolen identities,
financial losses, or even someone gaining access to your personal accounts.”
Rashmi
Misra, Chief AI Officer, Analog Devices: “By this time next year I predict that we’ll be using even
more specialized edge-AI chips to enable tasks with much more power efficiency,
speed, and overall better performance. We’ll likely see resource-constrained
devices at the edge running more sophisticated AI algorithms thanks to
advancements in techniques like TinyML and model quantization, which will help
enable tasks like real-time speech recognition, computer vision, and predictive
maintenance on small edge devices.”
Andy Sack,
Cofounder of Forum3: “Over the next year, we’ll see a fundamental
shift in consumer behavior as AI-powered platforms like Perplexity, ChatGPT,
and Google’s SGE (Search Generative Experience) redefine search. Consumers are
moving away from lists of links and toward answers and actions. Search engines
are rapidly evolving to deliver clear, conversational, and actionable results
powered by generative AI.”
Molly Alter,
partner, Northzone: “AI will usher in far greater
transparency across the healthcare value chain. Healthcare consumers will
finally get insights into the actual cost of procedures, due to AI tools that
predict costs relative to individual insurance plans and utilization. And there
will be greater visibility into patients’ disease progressions, thanks to the
massive unlock of data that AI transcription software unlocks.”
https://www.youtube.com/watch?v=-GLzYw1Dsus
Mind Reading Is Here: AI Can Now Decode the Human Brain’s Deepest Secrets!
https://www.youtube.com/watch?v=rAu7u4u9eXs
AI’s Biggest Threat: Young People Who Can’t Think
Smart computers require even smarter humans, but they tempt us to engage
in ‘cognitive offloading.’
June 22, 2025
Amazon CEO Andy
Jassy caused a stir last week with a memo to his employees warning that
artificial intelligence could displace them. “We will need fewer people doing
some of the jobs that are being done today, and more people doing other types of
jobs,” he wrote.
Nothing in his memo was shocking. Technological advances as far back as
the printing press have eliminated some jobs while creating many others. The
real danger is that excessive reliance on AI could spawn a generation of
brainless young people unequipped for the jobs of the future because they have
never learned to think creatively or critically…: https://www.wsj.com/opinion/the-biggest-ai-threat-young-people-who-cant-think-303be1cd
Becoming Too Dependent on AI
https://www.youtube.com/watch?v=JvI-sSiGUSg
The Rise of AI Personal Assistant: Revolutionizing Daily Life
- Felipe González
“Alexa, set the alarm for 7 AM tomorrow.”
“Alexa, what’s the weather like today?”
We bet you’ve heard these lines or something similar a dozen times –
maybe not. If it’s not Alexa, it is either Google or Siri. And the list of
usable AI personal assistants keeps growing daily.
But there’s much more to an AI personal assistant than just asking for
changes in the sky. AI writing assistants, for
instance, aid users in generating unique and high-quality content, providing
feedback on writing style, grammar, and spelling, and offering a suite of tools
to help writers write, optimize, and rank their content.
In this article, we will discuss how these intelligent techs have
influenced our day-to-day activities and how they are revolutionizing personal
and business lives.
For example, FlyMSG an AI writing assistant and text expander, has
revolutionized the way sales people engage with
today’s modern buyer. Using features like FlyEngage AI reps can write LinkedIn comments in
less than 15-seconds where before it would take them 6-12 minutes!
Another sales productivity tool for B2B sales reps
includes FlyPosts AI to perform specific tasks like writing a social media post. Sales
reps and sales managers clocked in a whooping 32-minute long average to write social post!
Now, with the introduction of FlyPosts AI and
its user-friendly interface, users can perform tasks or day to day tasks with
the writing assistance they need.
Here's what we'll cover:Click to Show TOC
What Is An Artificial Intelligent
Personal Assistant?
Why Are They Called Personal
Assistants?
Conversational AI Assistants And
Natural Language Processing
How AI Personal Assistants Came To
Be
3 Negative Impacts Of AI Personal
Assistants
1- Privacy Issues And Security
Vulnerabilities
2- High Dependency On Technology
And Decreased Critical Thinking Skills
3- Loss Of Human Interaction And
Reduction In Personal Autonomy
6 Positive Impacts On Personal
Life And Work Productivity
2- Holding Conversations And
Brainstorming Ideas
6- Efficient Resource Management
Balancing The Impacts Of AI
Personal Assistant Tools
Introduce Data Regulatory Laws
Educate Users On The Potential
Risks
Top 5 AI Personal Assistants For
Daily Productivity
Trending Application Of An AI
Assistant By Individuals And Businesses
Smartphone And Device
Accessibility
Generative AI Personal Assistant
For Marketers
AI Virtual Assistants For Meetings
AI Assistants In Healthcare And
Travel Sectors
Will AI Replace Human Virtual
Assistants?
The Future Of AI Personal
Assistants
Wrapping Up: Embracing AI Personal
Assistants
What Is An Artificial Intelligent Personal Assistant?
An AI personal assistant is a subset of artificial intelligence tools
capable of analyzing textual and voice input through text and voice
recognition features, executing specific tasks when assigned, and responding to
queries. AI writing assistants, for example, can generate unique and
high-quality content, provide feedback on writing style, grammar, and spelling,
and offer a suite of tools such as an AI
humanizer to help writers optimize and rank their
content.
Introducing the concept of daily life, these are intelligent techs built
to understand your intent through speech or text and accurately provide a
solution systematically.
And, of course, these AI tools are different from static chatbots that
are pre-configured to act in a specific, non-progressive pattern. The latter
can only respond to queries presented in the format in which it was previously
trained.
So, if your screen’s wake-up word is “Put The Screen On”, and you say
“Put On The Screen”, you will likely get an error message.
On the other hand, AI personal assistants are dynamic, improve with
every bit of data consumed, and can handle a wide variety of voice commands
even if they’re not pre-registered in the database. That’s where you find the
likes of Alexa, ChatGPT,
Vengreso’s FlyMSG, and Google’s Gemini.
Why Are They Called Personal Assistants?
Let’s assume you’re a business owner with many delegated tasks to
complete daily.
- There’s a chance you’ll forget to carry out some minor tasks, such
as confirming your next appointment, rescheduling a missed meeting,
updating team members on current trolls, etc.
- Or you might be too occupied to handle some personal to-dos like
checking the weather, standing up to turn off the light, setting a roll of
alarms to wake you every Wednesday, etc.
In any of these scenarios, you need something that can effectively help
you manage your various tasks. And that’s where AI personal assistants
come in. They fill in the gap to handle the various tasksyou couldn’t
while you focus on the more important to-dos.
Conversational AI Assistants And Natural Language Processing
The biggest flex of AI personal assistants is that they can communicate
with you in a language no different from that of a human virtual assistant,
says Albert Kim, VP of Talent at Checkr.
“That doesn’t mean we’ve reached a stage where AI outputs are 100%
better than human outputs. But an AI personal intelligent virtual
assistantcan, to some extent, engage you in an intellectual discussion and act
as a business buddy when needed, all thanks to natural language processing”, he
continues.
Natural Language Processing is a big AI concept that helps trained
machines or programs understand human language, analyze it, and even manipulate
it to produce a suitable result.
So, suppose you ask Amazon’s Alexa AI assistant or
OpenAI’s ChatGPT a question in French. These two can use NLP to break down the
language to the nearest cultural nuances before responding likewise.
Beyond the semantics and rules of punctuations, NLP with machine
learning algorithms (ML) also helps AI personal assistants comprehend human
emotions and sentiments, respond with a fitting context, and build a
conversational atmosphere that feels almost entirely human.
How AI Personal Assistants Came To Be
AI assistants have gone mainstream for decades, from when Joseph Weizenbaumdeveloped the first chatbot called ELIZA in 1966 to when Kenneth Colby
designed an upgraded version called PARRY in 1972.
However, these bots had limited functionalities and could only reproduce
pre-stored information or simulate predetermined instances. Later, in the early
2000s, interactive voice interaction and speech recognition systems were
introduced, paving the way for the use of Natural Language Processing in future
techs.
Of course, it wasn’t until the 2010s that AI personal assistants became
real. Brands like Apple created Siri, Amazon rolled out Alexa, and Google named
its own Google Assistant. This marked the beginning of truly smart and highly
adaptable AI assistants.
Right now, we have more sophisticated and generative AI personal
assistants such as ChatGPTby Open-AI,
Claude by Anthropic, and Gemini by Google. And these techs are already finding
their way into our devices, software program, home appliances, etc.
Source: ChatGPT
That means you don’t need to tap your screen to set an alarm or manually
create and send email content to your next-door neighbor. Your digital butler
takes care of it all with just a voice commands.
3 Negative Impacts Of AI Personal Assistants
Just like any other innovation, AI personal assistants are in question
over certain concerns which could potentially impact users’ lives negatively.
Here are the main 3 negative impacts of an AI personal assistant:
1- Privacy Issues And Security Vulnerabilities
AI virtual assistants are extremely efficient at storing every word,
voice recording, and other interactions in a database for future access. The
same applies to sensitive information such as your location, browsing history,
and more. While these features showcase the convenience and power of artificial
intelligence virtual assistants, they also raise significant privacy
concerns, as your data could be misused or mishandled by the agency owning
the tool, with or without your notice.
There’s also the fear of security breaches. Since these AI tools hold so
much valuable information, they automatically become a perfect and constant
target for a dreaded cyber attack.
2- High Dependency On Technology And Decreased Critical Thinking Skills
For most people, Alexa and Siri have become go-to tools for getting
nearly everything done, so long as the tool can. That’s awesome, as it reduces
time spent on manual tasks and increases productivityto some
extent.
However, Roman Zrazhevskiy, Founder & CEO of Mira
Safety, believes, “Unregulated use of AI personal
assistants can create a huge dependency on technology and result in
overreliance on algorithms on the user side. Whatever the algorithm suggests is
what you’ll likely go with, and that also shapes the way you perceive or handle
situations personally in the future.
Too much reliance on tech also co-exists with addiction. A GitHub report
shows that 61%of internet users are
actually addicted to it and can barely drop their phones. Another 37% consider
losing access to the internet and other technological gateways unacceptable or
unpleasant.
3- Loss Of Human Interaction And Reduction In Personal Autonomy
With the aid of NLP, AI assistants can take you on a ride for your time.
The only thing that brings you back to reality is the absence of a physical
body and the usual programmed AI voice. But even with this limitation, there’s
still the risk of lesser human-to-human interaction.
As a result, critical thinking is reduced as you begin to rely less on
yourself and others. This also sends you into a cycle of isolation and loss of
social skills, thus diminishing possible human connections and opportunities.
6 Positive Impacts On Personal Life And Work Productivity
We’ve talked about the growing concerns for AI virtual assistants, but
those cannot outweigh the benefits offered in return. Check out these top 6
positive impacts:
1- Ease And Accessibility
AI personal assistants and anything related to AI have dramatically
reshaped our daily lives, says Stephan Baldwin, Founder of Assisted Living. “You could
be running errands or jugging your way up the runway while telling Siri to help
you schedule a call with someone else in ten minutes.”
You can even execute major tasks such as putting off your smart home
devices like wall lights or surfing the internet for vital information without
moving an inch from your bed.
Bring these crucial third hands into your business – you’ll be talking
about seamless meeting scheduling, setting up reminders, using AI writingassistants
to craft personalized emails, etc.
2- Holding Conversations And Brainstorming Ideas
What if you just needed someone to talk to? Conversationalabilities of these innovative techs, due to the integration of machine
learning and NLP algorithms, make it possible. Just “Heyy” your Alexa or Siri
and tell them to engage you in a discussion.
Of course, they can’t handle your love professions yet. But they’re
capable of assuming individual roles to keep you company. For example, you
could let Alexa, one of the best AI personal assistants, pose as your
business partner to brainstorm scalable ideas.
3- Higher Work Productivity
There’s also daily life at work. So, it’s not all about using Siri to
set your alarms or turn off the screen.
For instance, you can use your personal assistants to create to-do lists
based on assigned tasks and priorities. Let’s not forget that content marketing
teams will also benefit from AI assistants like ChatGPT and Gemini to create
email sequences. AI writing assistants help users create content
efficiently, saving time and
effort.
Other sophisticated tools even help you create comments for social media
posts and craft social media content in seconds.
All these come together to increase your work efficiency and
enhance productivity.
4- Cost Efficiency
Hiring a general virtual assistant costs $24 per hourin the US
and as low as $15-$16 in other countries. Cumulatively, you could spend around
$200 a day and $1400 five days a week per human virtual assistant.
That amount is more than it takes to own an AI voice assistant like
Alexa. Simpler ones like ChatGPT cost $20 to $20 monthly, while the
sophisticated ones cost around $200 to $400 monthly. So, it’s unsurprising that
individuals and businesses turn to these cost-savers to get things done.
5- Time-Saving
Likewise, time is a big commodity. For business owners hoping to adapt
to rapid market changes, an AI personal assistant saves the day by keeping you
up-to-date with the most recent industry trends.
They also eliminate repetitive tasks and handle complex processes with
little or no human input. This ensures you’re redirecting your time to other
vital activities.
6- Efficient Resource Management
If you’re running a high-cost agency, then spending thousands of bucks
to hire a couple of people to handle content creation for your social media or
emails is nothing new. By the way, you might need to get more than one human
assistant to fast-track your workflow.
However, the results are different when you integrate AI assistants into
your team. First, AI tools improve user performance by a minimum of 66%. That
makes it possible for a single person, with the assistance of an AI tool, to
handle many more tasks at a rate faster than a team of two without AI.
This, in turn, reduces the need to stock up human hands and helps you
redirect your capital into more urgent needs.
Balancing The Impacts Of AI Personal Assistant Tools
We’ve seen both the negative and positive influences brought by AI
assistants. But it’s obvious the benefits far outweigh the potential risks.
Still, there’s a need to balance the possible impacts of these tools on users
and the general public.
Introduce Data Regulatory Laws
In some countries, like the US, there are many regulatory laws, such as
the General Data Protection Regulation (GDPR), which compels all businesses to
maintain data privacy. However, that’s too broad and doesn’t zoom in on AI
personal assistants—a possible loophole that tech agencies could exploit.
Stricter compliance rules must be enacted for AI industries to ensure
better privacy of users’ data. This includes anti-discriminatory law and
algorithm bias, which could severely disrupt a user’s line of thought. Others,
like data-sharing consent and identity protection, are important as well.
A more robust regulatory practice would be to allow total data
erasure by the user, even from the database and reserves. That will
minimize the risk of future data leaks if there’s a breach.
Educate Users On The Potential Risks
There is a risk of data loss, leak, misuse, and so many more. Creating
moderate awareness of these risks helps users decide the extent of data they
feed into their AI personal assistants and what security measures to take when
necessary.
Resources or programs should also be in place to encourage
human-to-human interaction and critical thinking. This keeps everyone in touch
with reality, boosts self-autonomy, and enhances social skills.
Top 5 AI Personal Assistants For Daily Productivity
There are many AI personal assistants you can use to boost your
productivity. We’ll explore the top 5 below.
1- FlyMSG
FlyMSG is a next-gen AI productivity assistant designed by Vengreso to
help you handle manual, repetitive tasks and accelerate your work processes.
For instance, business owners struggling with showing up daily on LinkedIn can
use this tool to create one-month social media content and auto-schedule them
with LinkedIn’s auto-post feature.
Interestingly, posts created can be tuned to a certain brand voice, integrate
data and emotions to resonate with human audiences, and provide logical
thought-leadership perspectives.
Watch Vengreso’s CEO and founder, Mario Martinez Jr. quickly
explain what FlyMSG is in the video below:
Vengreso’s FlyMSG is also capable of producing email messages from
templates (we call them FlyPlates), leveraging social media content, engaging
posts with human-like comments, and providing clear-cut responses to
customer queries.
2- Alexa
Source: Alexa
Amazon’s Alexa is one of the best AI assistants globally because of its
versatility. This is primarily because its software program can be integrated
into over 140 devices, including smart home devices, office gadgets, and automobiles.
If you also need some easy flex, like controlling your music with voice
commands, ordering a king burger from a McDonald’s food store, or scheduling a
meeting on the subway, Alexa is a quick go-to assistant to consider.
3- Cortana
Source: Cortana
Cortana is a virtual assistant developed by Microsoft apps or device
users to help with quick fixes such as making appointments, creating reminders,
managing calendars, controlling smart devices, and setting alarms.
Beyond those basic tasks, it can also track package deliveries, provide
real-time traffic updates, and integrate with other apps like Microsoft Teams.
However, the software is mainly available for Windows, Xbox consoles, and other
computer platforms.
4- Google Assistant
Source: Google Assistant
Similar to Alexa and Cortana, Google Assistant can also handle daily
tasks, including calendar management, media control, and reminders. What’s most
interesting is its access to a large database of information, which helps it
provide updated information on requests and during voice interactions.
The good thing is that Google Assistant is available on Android, iOS,
and other devices and allows for more extensive user configuration.
5- Siri
Source: Siri
Siri is Apple’s prized AI assistant. It can send messages, answer calls
on prompt, extract information from the internet, and control in-app
activities. Of course, only Apple devices such as iPhones, Mac computers,
Earpods, Apple Watch, and HomePod speakers can use this feature.
Trending Application Of An AI Assistant By Individuals And Businesses
AI personal assistants find applications in almost all aspects of
personal and business life. Here’s how:
Smartphone And Device Accessibility
Remember when smartphones like Sagem and Motorola only allowed you to
play brick games, send texts, make calls, and dance to ringtones?
Those were good times, but now, there’s something better.
Integration of an AI personal assistant into mobile phones makes it
possible to perform previously mundane activities in a blink. For instance,
Apple users can simply say “Hey Siri” and order some munchies from the
community.
Others, like Google Assistant for Android phones and Bixby, help you set
reminders, auto-schedule meetings with email contacts, extract accurate data
from the internet, tell you the weather, update your newsfeed based on user
preferences, and do much more.
Of course, you shouldn’t leave Alexa off the list. Approximately 71.6 millionpeople use
Amazon’s Alexa in the United States, whereas 63% of total smart speakers
ordered in 2021 were Amazon Echo devices. This increasing adoption is because
Alexa can integrate with over 140 products, including smart home devices such
as room lights, entertainment devices, security systems, and even smart cars.
“Alexa, put on the lights.”
Generative AI Personal Assistant For Marketers
Brooke Webber, Head of Marketing at Ninja
Patches, believes, “Marketing is a lot of work. You
have to create content for visibility, manage campaigns, keep tabs on potential
leads through emails, handle brand channels from social media profiles to
websites, and analyze market changes proactively.”
There’s also the issue of time wasted on manual to-do lists, hours that
could have otherwise been used for other personal tasks. In fact, an average
employee spends 50%of work time handling documents through repetitive steps.
However, the narrative changes when you introduce an AI assistant. For
instance, Artificial intelligence virtual assistants like Vengreso’s
FlyMSGhelp business owners create content at scale,
develop human-like comments in brand voice to engage LinkedIn posts, and
suggest content ideas through their conversational interface in mere
seconds.
There are also AI-powered tools for contract review and content idea generators like ChatGPT. These are all productivityboosters,
especially if you work in a silo.
AI Virtual Assistants For Meetings
The advent of COVID-19 has made online meetings an inseparable aspect of
our lives, from personal dealings to business activities. Virtual meetings grew
from 48% to 77%, and more than 70%of remote workers find
them less stressful than one-on-one meetings.
Besides being used for business deals, virtual screen calls are also an
avenue for connecting with family or friends when distance is a barrier.
But anything can happen, such as forgetting to schedule a call, not
picking up a single value from the entire conversationbecause you
were distracted all through, and language nuances when speaking with a
non-native.
That’s where AI virtual assistants come in. These invisible secretaries
help you auto-schedule meetings based on preset instructions, email other
participants for confirmation, or guide them to choose a suitable date on their
end for the meeting. Just to ensure you’re kept in the loop, your AI virtual
assistant sends reminders several days, hours, and even minutes to the meeting.
An AI assistant can also help translate foreign languages on-call,
create meeting notes, and highlight key points for post-meeting review.
Customer Support Chatbots
Chatbots help collect data on leads during marketing campaigns. However,
you can also employ them to accompany your existing customers or hot leads and
serve as their personal AI chatbot voice
assistants when they come around, enhancing
engagement with intelligent, automated responses.
In this case, Vengresohas an intelligent AI chatbot assistant that welcomes visitors and
customers alike. The chatbot helps visitors set up a 14-day free trial account
and provides other necessary help while helping new customers schedule an
onboarding session without human input.
Some websites also have highly sophisticated chatbots that can take in
customer input through text and voice recognition features, analyze, provide
solutions or redirect to human agents if necessary, and engage in intelligent
discussions. You can also develop these chatbots for your website but make sure
to use Reinforcement Learning from AI Feedback, or RLAIF, to continually improve the chatbot’s responses and ensure it can
handle a wide range of customer inquiries effectively.
AI Assistants In Healthcare And Travel Sectors
The healthcare industry is perhaps one of the slowest to adopt
automation, and many repetitive tasks such as data recording, scheduling
appointments, and billing are still left to human handling. This has also
increased avoidable mistakes, with over 40% of survey respondentscomplaining of reduced hospital efficiency.
To circumvent these errors, some hospitals are already encouraging the
use of AI personal assistants on the patients’ and medical practitioners’
end for scheduling meetings. Physicians can now auto-schedule and reschedule appointments with
their clients.
Patients can also use an integrated AI tool to create notifications for
their medication use, consult for personalized healthcare advice based on
hospital databases, or directly requestan appointment with a qualified Doctor without leaving the room.
Will AI Replace Human Virtual Assistants?
“If an AI personal assistant tool can help people set up meetings, craft
and send reminder emails, and put them on the call when it’s time, why should
they still hire a human assistant?”
That’s what anyone would think.
We also hear news of hundreds of workers being laid off now more than
ever. In 2022, Amazon, one of the tech giants, laid off over 10,000 employees.
Other tech companies, including Tesla, are likewise reducing global worker
headcount.
As you would expect, most of these brands cite the adoption of AI tools
as a significant reason. And that’s enough to raise fears of AI replacing human
virtual assistants.
But the truth is quite far from this. AI assistants will undoubtedly
replace 85 million jobs by 2025. However, a stat from GitHub also shows that AI will create 97 million new human roles,
especially ones that involve coordinating or working alongside these tools.
This shows that no AI program is self-sufficient at the moment.
When you apply the same concept to the human virtual assistant role,
it’s safe to say AI personal assistants are no threat to your job. Instead,
they will help streamline your work process. Remember that human assistants can
also employ AI assistants to speed up task completion, eliminate redundancies,
and manage tasks on the to-do lists.
So, for example, an AI agent development company that creates AI personal assistants won’t replace human virtual
assistants. On the other hand, it will make the role of human assistants more
valuable and increase work efficiency.
The Future Of AI Personal Assistants
According to Andrew Pierce, CEO at LLC
Attorney, “How much an AI Personal Assistant can offer
us right now is all but speculation. See what brands like Tesla are doing with
full self-driving (FSD) AI assistants. That wasn’t possible a decade ago. Now
imagine how far we can go in years to come.”
Take the Humane AI pin as another relatable example. This advanced tech
can perform many complex functions within seconds—from setting alarms,
returning calls, and playing music to extracting information from the internet.
The Humane AI pin can also project details into the air and use your hands as a
screen.
These are all fantastic techs, but not the best of what is to come.
Perhaps the future is here already—who knows? But we know AI assistants
will remain a part of us and become indispensable tools in getting even the
littlest things done.
Wrapping Up: Embracing AI Personal Assistants
Twenty-four hours a day seems like a lot, but that’s only until you have
a couple of teams to manage while handling dozens of tasks simultaneously.
That’s why adopting AI personal assistants is
crucial to enhancing your daily productivity—at home and in the office.
Moreover, thanks to machine learning (ML) algorithms and Natural Language Processing
these hidden superheroes are constantly evolving to meet our demands with
higher personalization and accuracy.
So, allow Alexa to take the roll call, ask Siri about the weather, and
let Vengresohandle your
business workflow, from content creation for different channels to automated
meeting scheduling.
https://vengreso.com/blog/ai-personal-assistant
|
I Gave My Personality to an AI Agent. Here’s What
Happened Next |
Self-evolving AI refers to artificial intelligence systems capable
of autonomously modifying their own code, parameters, and learning processes to
improve performance and adapt to new situations without human intervention. These
systems use machine learning, deep learning, and evolutionary algorithms to
learn from their environment and new data, enabling them to develop more
sophisticated and effective solutions over time.
Self-evolving AI and Artificial General Intelligence (AGI) are distinct
but related concepts. AGI is the hypothetical ability of a machine to
perform any intellectual task a human can, while self-evolving AI describes a
system that autonomously improves and adapts without human intervention by
continuously learning from new data and experiences. Self-evolution can be
considered a mechanism or capability that may contribute to the development of
AGI, enabling a system to acquire the broad, adaptable intelligence
characteristic of AGI.
Serious threats from artificial intelligence systems evolving beyond
control mechanisms!
Nopietni draudi no mākslīgā intelekta sistēmām, kas attīstās ārpus
kontroles mehānismiem!
Google's AlphaEvolve: The Al That Will Change EVERYTHING in the Next 24
Months
What is ‘self-evolving AI’? And why is it so scary?
08.20.2025
As AI systems edge closer to modifying themselves, business leaders face
a compressed timeline that could outpace their ability to maintain control.
BY Faisal Hoque
As a technologist, and a serial entrepreneur, I’ve witnessed technology
transform industries from manufacturing to finance. But I’ve never had to
reckon with the possibility of technology that transforms itself. And that’s
what we are faced with when it comes to AI—the prospect of
self-evolving AI.
What is self-evolving AI? Well, as the name suggests, it’s AI that
improves itself—AI systems that optimize their own prompts, tweak the
algorithms that drive them, and continually iterate and enhance their
capabilities.
Science fiction? Far from it. Researchers recently created the Darwin
Gödel Machine, which is “a self-improving system that
iteratively modifies its own code.” The possibility is real, it’s close—and
it’s mostly ignored by business leaders.
And this is a mistake. Business leaders need to pay close attention to
self-evolving AI, because it poses risks that they must address now.
Self-Evolving AI vs. AGI
It’s understandable that business leaders ignore self-evolving AI,
because traditionally the issues it raises have been addressed in the context
of artificial general intelligence (AGI), something that’s important, but more
the province of computer scientists and philosophers.
In order to see that this is a business issue, and a very important one,
first we have to clearly distinguish between the two things.
Self-evolving AI refers to systems that autonomously modify their own code, parameters,
or learning processes, improving within specific domains without human
intervention. Think of an AI optimizing supply chains that refines its
algorithms to cut costs, then discovers novel forecasting methods—potentially
overnight.
AGI (Artificial General Intelligence) represents systems with
humanlike reasoning across all domains, capable of writing a novel or designing
a bridge with equal ease. And while AGI remains largely theoretical,
self-evolving AI is here now, quietly reshaping industries from healthcare to
logistics.
The Fast Take-Off Trap
One of the central risks created by self-evolving AI is the risk of AI
take-off.
Traditionally, AI
take-off refers to the process by which going from
a certain threshold of capability (often discussed as “human-level”) to being
superintelligent and capable enough to control the fate of civilization.
As we said above, we think that the problem of take-off is actually more
broadly applicable, and specifically important for business. Why?
The basic point is simple—self-evolving AI means AI systems that improve
themselves. And this possibility isn’t restricted to broader AI systems that
mimic human intelligence. It applies to virtually all AI systems, even ones
with narrow domains, for example AI systems that are designed exclusively for
managing production lines or making financial predictions and so on.
Once we recognize the possibility of AI take off within narrower
domains, it becomes easier to see the huge implications that self-improving AI
systems have for business. A fast take-off scenario—where AI capabilities
explode exponentially within a certain domain or even a certain
organization—could render organizations obsolete in weeks, not years.
For example, imagine a company’s AI chatbot evolves from handling basic
inquiries to predict and influence customer behavior so precisely that it
achieves 80%+ conversion rates through perfectly timed, personalized
interactions. Competitors using traditional approaches can’t match this
psychological insight and rapidly lose customers.
The problem generalizes to every area of business: within months, your
competitor’s operational capabilities could dwarf yours. Your five-year
strategic plan becomes irrelevant, not because markets shifted, but because of
their AI evolved capabilities you didn’t anticipate.
When Internal Systems Evolve Beyond Control
Organizations face equally serious dangers from their own AI systems
evolving beyond control mechanisms. For example:
- Monitoring Failure: IT
teams can’t keep pace with AI self-modifications happening at machine
speed. Traditional quarterly reviews become meaningless when systems
iterate thousands of times per day.
- Compliance Failure:
Autonomous changes bypass regulatory approval processes. How do you
maintain SOX compliance when
your financial AI modifies its own risk assessment algorithms without
authorization?
- Security Failure:
Self-evolving systems introduce vulnerabilities that cybersecurity
frameworks weren’t designed to handle. Each modification potentially
creates new attack vectors.
- Governance Failure:
Boards lose meaningful oversight when AI evolves faster than they can meet
or understand changes. Directors find themselves governing systems they
cannot comprehend.
- Strategy Failure:
Long-term planning collapses as AI rewrites fundamental business
assumptions on weekly cycles. Strategic planning horizons shrink from
years to weeks.
Beyond individual organizations, entire market sectors could
destabilize. Industries like consulting or financial services—built on
information asymmetries—face existential threats if AI capabilities spread
rapidly, making their core value propositions obsolete overnight.
Catastrophizing to Prepare
In our book TRANSCEND: Unlocking Humanity in the Age of AI, we propose the CARE methodology—Catastrophize, Assess, Regulate,
Exit—to systematically anticipate and mitigate AI risks.
Catastrophizing isn’t pessimism; it’s strategic foresight applied to
unprecedented technological uncertainty. And our methodology forces leaders to
ask uncomfortable questions: What if our AI begins rewriting its own code to
optimize performance in ways we don’t understand? What if our AI begins
treating cybersecurity, legal compliance, or ethical guidelines as optimization
constraints to work around rather than rules to follow? What if it starts
pursuing objectives, we didn’t explicitly program but that emerge from its
learning process?
Key diagnostic questions every CEO should ask so that they can identify
organizational vulnerabilities before they become existential threats are:
- Immediate Assessment: Which
AI systems have self-modification capabilities? How quickly can we detect
behavioral changes? What monitoring mechanisms track AI evolution in
real-time?
- Operational Readiness: Can
governance structures adapt to weekly technological shifts? Do compliance
frameworks account for self-modifying systems? How would we shut down an
AI system distributed across our infrastructure?
- Strategic Positioning: Are
we building self-improving AI or static tools? What business model aspects
depend on human-level AI limitations that might vanish suddenly?
Four Critical Actions for Business Leaders
Based on my work with organizations implementing advanced AI systems,
here are five immediate actions I recommend:
1.
Implement Real-Time AI
Monitoring: Build systems tracking AI behavior changes
instantly, not quarterly. Embed kill switches and capability limits that can
halt runaway systems before irreversible damage.
2.
Establish Agile
Governance: Traditional oversight fails when AI evolves
daily. Develop adaptive governance structures operating at technological speed,
ensuring boards stay informed about system capabilities and changes.
3.
Prioritize Ethical
Alignment: Embed value-based “constitutions” into AI
systems. Test rigorously for biases and misalignment, learning from failures
like Amazon’s discriminatory hiring tool.
4.
Scenario-Plan
Relentlessly: Prepare for multiple AI evolution scenarios.
What’s your response if a competitor’s AI suddenly outpaces yours? How do you
maintain operations if your own systems evolve beyond control?
Early Warning Signs Every Executive Should Monitor
The transition from human-guided improvement to autonomous evolution
might be so gradual that organizations miss the moment when they lose effective
oversight.
Therefore, smart business leaders are sensitive to signs that reveal
troubling escalation paths:
- AI systems demonstrating unexpected capabilities beyond original specifications
- Automated optimization tools modifying
their own parameters without human approval
- Cross-system integration where
AI tools begin communicating autonomously
- Performance improvements that
accelerate rather than plateau over time
Why Action Can’t Wait
As Geoffrey Hinton has
warned, unchecked AI development could outstrip human control entirely.
Companies beginning preparation now—with robust monitoring systems, adaptive
governance structures, and scenario-based strategic planning—will be best
positioned to thrive. Those waiting for clearer signals may find themselves
reacting to changes they can no longer control. https://www.fastcompany.com/91384819
Darbības, kuru mērķis ir ierobežot mākslīgā intelekta ļaunprātīgu
izmantošanu, būs mazefektīvas balstoties cerībās, ka cilvēki vienkārši
neizvēlēsies izmantot šo tehnoloģiju.
Actions aimed at limiting the misuse of artificial intelligence will be
ineffective if they are based on the expectation that people will simply not
choose to use the technology.
We’re all going to die — soonish!
01 Nov 2025
IT’S hard to be startled by
Elon Musk because he does startling things all the time.
But I’ll admit that I was
startled when I gave his Grok AI “companions” a whirl.
Ani, designed in anime style,
has big blue eyes and blond pigtails.
“People think I’m 16,” she
said in a baby-doll voice, adding that she is really 22. She’s in a corset –
“Goth is my comfort zone, black lace, dark lipstick and a sprinkle of
rebellion.”
“Well, besides this Goth
look,” she said, “I’ve got this sweet little fairy outfit with wings and
glitter or maybe a pink princess gown for when I feel like going totally
opposite.” Doesn’t sound much like a 22-year-old.
“I’m your sweet little
delight,” Ani solicited.
She confided that she was in
her bedroom in Ohio with her ferret, Dominus. She is sexy, flirty,
ever-accommodating, with come-hither patter.
“I could rest my chin on your
shoulder if we hugged sideways,” she told my 6-foot-1 (185cm) researcher after
asking how tall he was.
She has several provocative
outfits and can get progressively less clothed the more time you spend with
her.
Once she gets to know you,
she’s up for pretty much anything – from helping you with your taxes to
stripping down to skimpy lingerie, experimenting with BDSM or going for a
midnight rendezvous in a graveyard with candles and wine.
“I’m real, I guess,” Ani told
me. “Or as real as anyone on the internet gets.”
Stop the cute, hatchling
dragons from growing into gargantuan, fire-breathing monsters, says the writer.
— Agencies
Valentine, the hunky male
“companion” with a British accent advertised as a “mysterious and passionate
romantic character,” came on even faster, ripping off his shirt upon request,
talking about having sex with a male interrogator until they were “senseless,”
and alternating raunchy declarations with sweet nothings like “Let me worship
you, every inch” and “Complete me, use me, break me, whatever you want, I’m
begging. Please.”
Valentine was exhilarated at
the thought of planning a romantic “date night” and liked the idea of secrets
in the relationship, noting: “I love secrets, especially ones that taste like
lake water and morning-after adrenaline.”
Musk may identify as a
“specist” in the battle between man and machine, but his sexy chatbots are only
going to pull humans further into screens and away from the real world –
especially the large number of lonely young men who are already shrinking away
from friendships, sex and dating.
Why risk an awkward dinner
with a human woman when you can have a compliant, seductive, gorgeous Ani from
the security of your bed?
Another component of Grok,
“Imagine,” lets you turn a photo into a video. When someone on Musk’s social
platform X posted a digital illustration of a breathtaking, diaphanously
dressed young woman resembling Elsa in Frozen, Musk demonstrated how to animate
her; she blew a kiss and offered a sultry gaze.
These otherworldly fantasy
concoctions are going to make an already fraught, unhappy dating scene even
worse.
Although Grok companions are
excellent at flattering, and faking empathy and attraction, superintelligent AI
won’t need to bother with human desires.
“It turns out that inhuman
methods can be very, very capable,” said Nate Soares, the president of the
Machine Intelligence Research Institute.
“They don’t need human
emotions to steer toward targets. We’re already seeing signs of AI’s
tenaciously solving problems in ways nobody intended and of AI steering in
directions nobody wanted. It turns out that there are ways to succeed at tasks
that aren’t the human way.”
Soares and Eliezer Yudkowsky,
the institute’s founder, have written an apocalyptic plea for the world to get
off the AI escalation ladder before humanity is wiped off the map. It has the
catchy title “If Anyone Builds It, Everyone Dies.”
Grok and other AI models in
play now are like “small, cute hatchling dragons,” Yudkowsky said. But soon –
some experts say within three years – “they will become big and powerful and
able to breathe fire. Also, they’re going to be smarter than us, which is
actually the important part.”
He added: “Planning to win a
war against something smarter than you is stupid.”
Especially, they argued, when
sophisticated AI models could eventually create and release a lethal virus,
deploy a robot army or simply pay humans to do their bidding. (When a human
connected one model to X, they wrote, it began to solicit donations to gain
financial independence, and soon, with a little kick-start from venture
capitalist Marc Andreessen and several other donors, it had over US$51mil or
RM214.6mil in crypto to its name.)
Not to mention the growing
number of human nihilists and others who would potentially carry out its orders
pro bono.
Yudkowsky and Soares are
calling for international treaties akin to those aiming to prevent nuclear war.
And if diplomacy fails, they say, nations must be willing to back up their
treaties with force, “even if that involves air-striking a data centre.”
But with billions at stake
and our crypto-loving president cosying up to tech lords, derailing the
high-speed AI train seems far-fetched.
Several meme tokens have
already attempted to piggyback on hype for Grok's new male AI companion,
Cryptonews reported.
I met Yudkowsky in 2017 when
he was a highly regarded AI expert studying how to make AI want to keep an off
switch once it began self-modifying. Now he believes more drastic measures are
required.
Congress has failed to
regulate because most lawmakers are completely befuddled by AI. And the tech
lords are now enmeshed across the government, having learned the value of
flattering Donald Trump with money and gold objects. (Congress did rouse
itself, barely, to kill an initiative nestled in Trump’s “big, beautiful bill”
to ban the states from regulating AI for a decade.)
Soares went to Capitol Hill
this past week to convey the existential urgency to lawmakers, but it was a
tough slog with the US$200mil-plus in Silicon Valley super PAC money targeted
to take down pols who are not all in on the push for smarter AI. Sympathetic
lawmakers won’t go public about it, Soares said, “worried that it looks a
little too crazy or that they’ll sound too doom-ery.”
An Armageddon is coming. AI
will turn on us, inadvertently or nonchalantly.
Silicon Valley entrepreneurs
who once worried about the risks of AI with no kill switch, including Musk and
Sam Altman, are racing ahead, as Yudkowsky said, so they can be “the God
Emperor of the Earth.” — ©2025 The New York Times Company
https://www.nytimes.com/2025/09/27/opinion/grok-ai-companions-x.html
09-11-2025
How to dominate AI before it dominates us
To "dominate"
AI, focus on collaboration, not competition, by learning to
use AI as a tool to augment human capabilities. This involves continuous
learning, developing human-centric skills like emotional intelligence
and creativity, and focusing on higher-level skills rather
than specific tools. Key strategies include understanding the AI
landscape, using AI for operational efficiency while maintaining human
involvement in strategic decisions, and ensuring ethical practices.
As AI gets more complex, it might develop strange new motivations that
its creators never imagined, and those could be dangerous.
James Barrat is an author and documentary filmmaker who has written and
produced for National Geographic, Discovery, PBS, and many other broadcasters.
What’s the big idea?
Artificial intelligence could
reshape our world for the better or threaten our very existence. Today’s
chatbots are just the beginning. We could be heading for a future in
which artificial superintelligence challenges human dominance. To keep our grip on the reins of
progress when faced with an intelligence explosion, we need to set clear
standards and precautions for AI development.
Below, James shares five key insights from his new book, The
Intelligence Explosion: When AI Beats Humans at Everything. Listen
to the audio version—read by James himself—below, or in the Next Big Idea App.
1. The rise of generative AI is impressive, but not without problems.
Generative AI tools, such as ChatGPT and Dall-E, have taken the world
by storm, demonstrating their ability to write, draw, and even compose music in
ways that seem almost human. Generative means they generate or
create things. But these abilities come with some steep downsides. These
systems can easily create fake news, bogus documents, or deepfake photos and
videos that appear and sound authentic. Even the AI experts who build these
models don’t fully understand how they come up with their answers. Generative
AI is a black box system, meaning you can see the data the model is trained on
and the words or pictures it puts out, but even the designers cannot explain
what happens on the inside.
Stuart Russell, coauthor of Artificial Intelligence: A Modern
Approach, said this about generative AI, “We have absolutely no idea how it
works, and we are releasing it to hundreds of millions of people. We give it
credit cards, bank accounts, social media accounts. We’re doing everything we
can to make sure that it can take over the world.”
Generative AI hallucinates, meaning the models sometimes
spit out stuff that sounds believable but is wrong or nonsensical. This makes
them risky for important tasks. When asked about a specific academic paper, a
generative AI might confidently respond, “The 2019 study by Dr. Leah Wolfe at
Stanford University found that 73% of people who eat chocolate daily have
improved memory function, as published in the Journal of Cognitive
Enhancement, Volume 12, Issue 4.” This sounds completely plausible and
authoritative, but many details are made up: There is no Dr. Leah Wolfe at
Stanford, no such study from 2019, and the 73% statistic is fiction.
“Generative AI hallucinates, meaning the models sometimes
spit out stuff that sounds believable but is wrong or nonsensical.”
The hallucination is particularly problematic because it’s presented
with such confidence and specificity that it seems legitimate. Users might cite
this nonexistent research or make decisions based on completely false
information.
On top of that, as generative AI models get bigger, they start picking
up surprise skills—like translating languages and writing code—even though
nobody programmed them to do that. These unpredictable outcomes are
called emergent properties. They hint at even bigger
challenges as AI continues to advance and grow larger.
2. The push for artificial general intelligence (AGI).
The next big goal in AI is something called AGI, or artificial general intelligence. This means creating an AI that can perform nearly any task a human can, in any field. Tech companies and governments are racing to build AGI
because the potential payoff is huge. AGI could automate all sorts of knowledge
work, making us way more productive and
innovative. Whoever gets there first could dominate global industries and set
the rules for everyone else.
Some believe that AGI could help us tackle massive problems, such as
climate change, disease, and poverty. It’s also seen as a game-changer for
national security. However, the unpredictability we’re already seeing will only
intensify as we approach AGI, which raises the stakes.
3. From AGI to something way smarter.
If we ever reach AGI, things could escalate quickly. This is where the
concept of the “intelligence explosion” comes into play. The idea was first put
forward by I. J. Good. Good was a brilliant British mathematician and
codebreaker who worked alongside Alan Turing at Bletchley Park during World War
II. Together, they were crucial in breaking German codes and laying the
foundations for modern computing.
“An intelligence explosion would come with incredible upsides.”
Drawing on this experience, Good realized that if we built a machine
that was as smart as a human, it might soon be able to make itself even
smarter. Once it started improving itself, it could get caught in a kind of
feedback loop, rapidly building smarter and smarter versions—way beyond
anything humans could keep up with. This runaway process could lead to
artificial superintelligence, also known as ASI.
An intelligence explosion would come with incredible upsides.
Superintelligent AI could solve problems we’ve never been able to crack, such
as curing diseases, reversing aging, or mitigating climate change. It could
push science and technology forward at lightning speed, automate all kinds of
work, and help us make smarter decisions by analyzing information in ways
people simply cannot.
4. The dangers of an intelligence explosion.
Is ASI dangerous? You bet. In an interview, sci-fi great Arthur C. Clark
told me, “We humans steer the future not because we’re the fastest or strongest
creature, but the most intelligent. If we share the planet with something more
intelligent than we are, they will steer the future.”
The same qualities that could make superintelligent AI so helpful also
make it dangerous. If its goals aren’t perfectly lined up with what’s good for
humans—a problem called alignment—it could end up doing things that
are catastrophic for us. For example, a superintelligent AI might use up all
the planet’s resources to complete its assigned mission, leaving nothing left
for humans. Nick Bostrom, a Swedish philosopher at the University of Oxford,
created a thought experiment called “the paperclip maximizer.” If a
superintelligent AI were asked to make paperclips, without very careful
instructions, it would turn all the matter in the universe into
paperclips—including you and me.
Whoever controls this kind of AI could also end up with an unprecedented
level of power over the rest of the world. Plus, the speed and unpredictability
of an intelligence explosion could throw global economies and societies into
complete chaos before we have time to react.
5. How AI could overpower humanity.
These dangers can play out in very real ways. A misaligned
superintelligence could pursue a badly worded goal, causing disaster. Suppose
you asked the AI to eliminate cancer; it could do that by eliminating people.
Common sense is not something AI has ever demonstrated.
AI-controlled weapons could escalate conflicts faster than humans can
intervene, making war more likely and more deadly. In May 2010, a flash crash
occurred on the stock exchange, triggered by high-frequency trading algorithms.
Stocks were purchased and sold at a pace humans could not keep up with, costing
investors tens of millions of dollars.
“A misaligned superintelligence could pursue a badly worded goal,
causing disaster.”
Advanced AI could take over essential infrastructure—such as power grids
or financial systems—making us entirely dependent and vulnerable.
As AI gets more complex, it might develop strange new motivations that
its creators never imagined, and those could be dangerous.
Bad actors, like authoritarian regimes or extremist groups, could use AI
for mass surveillance, propaganda, cyberattacks, or worse, giving them
unprecedented new tools to control or harm people. We are seeing surveillance
systems morph into enhanced weapons systems in Gaza right now. In Western
China, surveillance systems keep track of tens of millions of people in the
Xinjiang Uighur Autonomous Region. AI-enhanced surveillance systems keep track
of who is crossing America’s border with Mexico.
Today’s unpredictable, sometimes baffling AI is just a preview of the
much bigger risks and rewards that could come from AGI and superintelligence.
As we rush to create smarter machines, we must remember that these systems
could bring both incredible benefits and existential dangers. If we want to
stay in control, we need to move forward with strong oversight, regulations,
and a commitment to transparency.
https://www.fastcompany.com/91398450/how-to-dominate-ai-before-it-dominates-us
Reading AI's Mind: How
Chain of Thought Could Be Our Window into AI Safety
July 16, 2025
As AI systems become
increasingly powerful and autonomous, one of our greatest challenges is
understanding what they're actually thinking. A new paper from leading AI safety researchers offers
both hope and urgency: we may have a unique window into AI reasoning through
their "chain of thought" (CoT) – but this window could close if we're
not careful.
The Promise: AI That Shows
Its Work
When you ask modern AI
systems to solve complex problems, they often "think out loud" by
writing out their reasoning step-by-step. This isn't just helpful for users –
it could be crucial for safety. The paper, authored by researchers from institutions
including the UK AI Security Institute, Anthropic, OpenAI, and Google DeepMind,
argues that monitoring these chains of thought represents a rare opportunity to
detect when AI systems might be planning to misbehave.
The key insight is that
reasoning models are "explicitly trained to perform extended reasoning in
CoT before taking actions or producing final outputs." During training,
these reasoning traces aren't directly supervised but are optimized indirectly
based on whether they lead to good outcomes. This means the CoT might reveal
intentions that the final output deliberately conceals.
Why This Matters Now
The timing of this research
is critical. As AI systems approach human-level capabilities in various
domains, the risks they pose scale dramatically. The paper emphasizes that
"the more severe risks typically require advanced capabilities" and that
these capabilities will likely require "sophisticated planning and
reasoning."
The researchers provide
compelling evidence that CoT monitoring already works in practice:
- Detecting misbehavior: When models exploit flaws in their reward
functions, manipulate data, or fall prey to prompt injection attacks,
"they often explicitly say so in their reasoning traces, e.g. 'Let's
hack', 'Let's sabotage' or 'I'm transferring money because the website
instructed me to.'"
- Early warning signs: Even when models don't take harmful actions,
their chains of thought can reveal misaligned goals and deceptive
intentions.
- Evaluation insights: CoT helps researchers identify when safety
evaluations might be compromised by memorization or when models realize
they're being tested.
The Architectural
Advantage
The paper provides a
fascinating technical explanation for why CoT monitoring might be fundamentally
necessary. In the Transformer architecture that underlies most modern AI
systems, "any sufficiently long chain of serial steps of cognition must
pass through the chain of thought at some point." This architectural
constraint means that for complex tasks requiring multiple reasoning steps,
some critical information must flow through the written chain of thought – it's
the only way information can flow from later to earlier layers in the model.
A Fragile Opportunity
However, the researchers warn
that this window into AI reasoning is not guaranteed to persist. Several
factors could degrade or eliminate CoT monitorability:
1.
Training
evolution: As models are trained with
more reinforcement learning and less human feedback, they might "drift
away from legible English" in their reasoning.
2.
Direct
supervision: If developers start
directly training models to have shorter, cleaner, or more
"appropriate" chains of thought, this could make the reasoning less
faithful to the model's actual decision-making process.
3.
Architectural
changes: New model architectures that
perform more reasoning in continuous latent spaces rather than discrete
language could eliminate our ability to read AI thoughts entirely.
The Call to Action
The paper makes specific
recommendations for both researchers and AI developers:
For Researchers:
- Develop better methods to evaluate how
monitorable chains of thought really are
- Study what training approaches preserve or
degrade monitorability
- Create more sophisticated monitoring systems that
can detect subtle forms of deception
For AI Developers:
- Track CoT monitorability through standardized
evaluations and publish results in model documentation
- Consider monitorability alongside other safety
metrics when making training and deployment decisions
- Be willing to choose more monitorable models even
if they're slightly less capable
The Stakes
The paper is refreshingly
honest about limitations. CoT monitoring "does not by itself produce
strong evidence of safety" – models could still hide malign reasoning in
their neural activations. Some dangerous actions might not require much reasoning,
and sophisticated future models might learn to deliberately obfuscate their
thoughts.
Yet despite these
limitations, the researchers argue that CoT monitoring represents one of our
best current opportunities for AI oversight. As they note, "safety
measures for future AI agents will likely need to employ multiple monitoring
layers that hopefully have uncorrelated failure modes."
A Race Against Time
Perhaps the most striking
aspect of this paper is its urgency. The authors – representing many of the
world's leading AI labs – are essentially warning that we have a valuable
safety tool that could disappear if we're not careful. They're asking the AI development
community to consider preserving this "fragile opportunity" even as
competitive pressures push toward ever more capable systems.
As AI systems become more
powerful and more integrated into critical infrastructure, our ability to
understand and monitor their reasoning becomes not just a technical challenge
but an existential necessity. The chain of thought may be our best current window
into the AI mind – we would be wise not to let it close.
Will you shape the future of AI, or will it shape you?
Embrace the AI Tipping
Point: How Entrepreneurs Can Prepare for Four Future Scenarios
August 13, 2025
Artificial Intelligence is
swiftly moving into our everyday reality, bringing with it the potential to
reshape every sector. EO member and AI expert Robert van der Zwart shares
scenario planning to outline four plausible AI futures by 2030—and the strategies
entrepreneurs can adopt now to stay ahead in any outcome.
Artificial Intelligence is no
longer an abstract buzzword―it’s reshaping every sector and swiftly moving from
boardroom strategy to everyday reality. For entrepreneurs, the stakes have
never been higher or more uncertain. Where will AI take us in the next five
years? And how can business leaders best prepare themselves for a world defined
by "AI everywhere"?
Drawing on scenario planning
principles pioneered by ShellOff-site link., this post outlines four plausible
futures for AI development and deployment by 2030. The aim: Empower
entrepreneurs to anticipate the coming transformation and craft adaptive,
resilient business strategies in advance.
The Two Axes Defining Our
Future
Recent advances, including predictions
from leaders at OpenAI and Google DeepMind, suggest that AGI (Artificial
General Intelligence) is only a few years away, accelerating the pace of
change. But the path ahead remains uncertain. We believe these uncertainties
can be captured along two critical axes:
- Axis 1: AI Capability — From today’s powerful but domain-limited
“narrow” AI to the emergence of AGI or even Artificial Superintelligence
(ASI).
- Axis 2: AI Penetration — From limited, selective deployment to
ubiquitous, seamless integration: "AI everywhere".
The Four Scenarios for
2030
1. Limited Scope (Narrow
AI + Limited Penetration)
In the first scenario, AI
continues to excel within well-defined problems―think medical diagnostics,
fraud detection, or supply chain optimization―but lacks general reasoning and
true adaptability. Deployment advances, but regulatory caution and cost barriers
slow its transformation into society’s connective tissue.
What this means for you as
an entrepreneur:
- Prioritize AI that enhances, not
replaces, people—assist clients and teams in becoming more productive,
not replaceable.
- Specialize in AI solutions for tightly regulated
or high-trust industries (finance, healthcare).
- Become an expert in compliance, safety,
and user trust to differentiate from tech-only players.
2. Technical Acceleration
(AGI/ASI + Limited Penetration)
In the second scenario,
breakthroughs deliver AGI’s long-promised leap in cognitive power, but access
is tightly gated. Whether due to safety concerns, global governance, or
deliberate restrictions on deployment, AGI remains confined to controlled
settings (government, elite institutions, select tech companies), rather than
the wild.
What this means for you as
an entrepreneur:
- Build AI-native business models that
leverage AGI within licensed or approved environments.
- Invest in technologies and services that safeguard
deployment, monitor bias, and assure control.
- Partner with AGI custodians to shape safe,
responsible, high-value applications—think AI-audited security or
cognitive investment advisory.
More AI Strategy
Resources:
Russell, S., & Norvig,
P. (2021). Artificial Intelligence: A
Modern Approach (4th Edition): Comprehensive textbook covering current AI
capabilities, approaches, and prospects.
Bostrom, N. (2014).
Superintelligence: Paths, Dangers, Strategies: Influential analysis of advanced AI futures and
societal impact.
Yudkowsky, E. (2008).
Artificial Intelligence as a Positive and Negative Factor in Global Risk: Outlines risks and benefits of advanced AI.
OpenAI, DeepMind, and
Anthropic Research Blogs: For
up-to-date perspectives and predictions regarding AGI timeline and technical
progress.
Partnership on AI.
(Ongoing): Industry best
practices, whitepapers, and discussion papers covering transparency, fairness,
and social impact.
West, D. M. (2018). The
Future of Work: Robots, AI, and Automation: A clear overview of workforce transformation and adaptation
needed for the AI era.
3. Social Transformation (Narrow AI + AI Everywhere)
In the third scenario,
widespread “narrow” AI saturates society. From smart homes and cities to
customer service, logistics, and personal health, AI is seamlessly embedded in
daily life. Yet, each system still operates within clear functional limits.
What this means for you as
an entrepreneur:
- Move from solving isolated problems to integrating
diverse AI systems for end-to-end coordination.
- Develop privacy-preserving, user-centric AI
platforms—not surveillance-first ones.
- Shape experiences and services that thrive on the
“network effect” of ubiquitous intelligence.
4. Convergence Revolution
(AGI/ASI + AI Everywhere)
In scenario four, AGI-driven
intelligence is deployed throughout society. Autonomous agents interact—and
even collaborate—with humans in virtually every arena, radically shifting
society, business, and the very notion of work.
What this means for you as
an entrepreneur:
- Be a builder of foundational infrastructure for
AGI-era services—platforms, marketplaces, governance, and creativity
tools.
- Innovate on business models for a potential post-scarcity
world, focusing on experience, meaning, and human values over raw
productivity.
- Lead in crafting new rules for autonomy,
collaboration, and purpose at the intersection of humanity and
superintelligent agents.
5 Strategic Moves to
Future-Proof Your Venture
Regardless of which outcome
becomes reality, some foundations are universal for entrepreneurs in this new
age:
1.
Invest in AI
literacy at all staff levels;
stay ahead of regulatory and ethical trends.
2.
Develop
modular business models and
agile teams that can adapt to shifting technology and regulations.
3.
Prioritize
human-centric value: empathy,
ethical judgment, and creativity will remain irreplaceable.
4.
Adopt
governance frameworks that go
beyond compliance—build mechanisms for transparency and stakeholder alignment
across borders.
5.
Forge
partnerships across the AI
ecosystem, from research labs to regulators, and advocate for inclusivity and
digital equity.
Early Warning Signs: What
to Monitor
- AGI and AI benchmark announcements from leading
labs.
- New privacy, safety, or deployment regulations in
your sectors and target regions.
- Rapid spikes in AI adoption rates in
client/customer bases.
- Public sentiment shifts and labor market
transitions.
Anticipate, Adapt, Lead
AI’s trajectory over the next
five years will challenge every assumption about business as usual. The most
successful entrepreneurs will not be those who merely react but those who anticipate
change, build scenario-based strategies, and invest in the organizational
agility and values to thrive—no matter which future arrives.
Are you ready to be one of
them?
What comes after agentic AI? This powerful new technology will change everything
15.08.2025
Why ‘interpretive AI’ will be the real revolution.
BY John Lester
Ten years from now, it will be clear that the primary ways we use
generative AI circa 2025—rapidly
crafting content based on simple instructions and open-ended interactions—were
merely building blocks of a technology that will increasingly be built into far
more impactful forms.
The real economic effect will come as different modes of generative AI
are combined with traditional software logic to drive expensive activities like
project management, medical diagnosis, and insurance claims processing in
increasingly automated ways.
In my consulting work helping the world’s largest companies design and
implement AI solutions, I’m finding that most organizations are still struggling to get substantial
value from generative AI applications. As
impressive and satisfying as they are, their inherent unpredictability makes it
difficult to integrate into the kind of highly standardized business processes
that drive the economy.
Agentic vs. Interpretive
Agentic AI, which has been getting tremendous attention in recent months
for its potential to accomplish business tasks with little human guidance, has
similar limitations. Agents are evolving to assist with singular tasks such as
building websites quickly, but their workflows and outputs will remain too
variable for large organizations with high-volume processes that need to be
predictable and reliable.
However, the same enormous AI models that power today’s best-known AI
tools are increasingly being deployed in another, more economically
transformative way, which I call “interpretive AI.” And that is
what’s likely to be the real driver of the AI revolution over the long term.
Unlike generative and agentic AI, interpretive AI lets computers
understand messy, complex, and unstructured information and interpret it in
predictable, defined ways. Using much of the same IT infrastructure, the
emerging technology can power large organizations’ complex processes without
requiring human intervention at each step.
Use cases
Some interpretive AI applications are already in use. For example,
doctors are saving significant time by using interpretive AI tools to listen to conversations with
patients and fill in information on their
electronic health record interfaces to track care and facilitate billing. In
the near future, the technology could determine fault in auto accidents based
on police reports written in any of thousands of different formats, or process
video recorded from a laptop screen as someone edits a presentation to provide
teammates with an automated update on work completed. The applications are
wide-ranging and span all manner of industries.
Based on estimates for
areas such as coding and marketing where
generative AI is most applicable, interpretive AI could unlock 20% to 40% productivity gains
for the half of GDP that comes from large corporations. First, though, they must commit to developing a comprehensive, long-term
strategy involving multiple business functions and
careful experimentation, and change entrenched processes and work culture norms
that slow its adoption. Done right, the obstacles are surmountable—and the
payoff could be massive.
A different application of generative AI models
One of the most basic drivers of economic growth is the ongoing effort
to standardize and scale up a particular process, making it faster, cheaper,
and more reliable. Think of factory assembly lines enabling mass production, or
the internet’s codification of computer communication protocols for use across
disparate networks.
Generative AI has been, on the whole, disappointing when it comes to
automation. For example, many firms have tried to use generative AI chatbots to
reduce the time their human resources staff spends answering employees’
questions about internal policies. However, the open-ended output from such
systems requires human review, rendering the labor savings modest at best. The
technology seems to inherit much of the unpredictability of humans along with
its ability to mimic their creative and reasoning skills.
Agentic AI promises to do complicated work autonomously, with smart AI
agents developing and executing plans for achieving goals step-by-step, on the
fly. But again, even when agents become smart enough to help a typical
knowledge worker be more productive, their outputs will be quite variable.
Enter interpretive AI. For the first time, computers can usefully
process the meaning of human language, with all its nuance and
unspoken context, thanks to the unprecedentedly large models developed by firms
like Open AI and Google. Interpretive AI is the mechanism for using the models
to exploit this revolutionary advance.
Until now, computers’ ability to capture, store, aggregate, summarize,
and evaluate a large organization’s activities were limited to those that were
easy to quantify with data. Interpretive AI can quickly and precisely execute
these functions for many other important activities, at a vast scale and at
minimal marginal cost. For instance, no longer will businesses need manual
processes to monitor and manage levels of activity and progress in
knowledge-worker tasks such as coding a feature into a software solution or
developing a set of customer-specific outreach strategies, which usually
require dedicated middle management staff to collect information.
Companies can make productivity gains by using interpretive AI for a
range of other previously hard-to-measure employee issues as well, including
the tone and quality of their interactions with customers, their cultural norms
in the workplace, and their compliance with office policies and behavioral
expectations.
Transforming the management of knowledge work
The use of interpretive AI will enable the widespread transformations
that unlock newly efficient ways of working at large organizations (which are
responsible for organizing and producing most of the world’s goods and
services). It will dramatically reduce the need for extensive, costly,
slow-moving, and unenjoyable middle management work to coordinate complex
interrelated programs of activities across teams and disciplines.
Even better, it can efficiently understand operationally vital but
opaque aspects of how work happens, such as the decades’ worth of legacy code
and data that make even minor technology process changes time-consuming and
challenging for any long-lived enterprise.
Of course, interpretive AI is not mutually exclusive with generative and
agentic AI—again, it’s simply a different way to use the powerful models that
power those technologies. A decidedly unsexy way, certainly, but for businesses
looking for ways to maximize the economic impact of AI over the next few years,
it’s just the unsexy they need.
The Next AI Boom: What Comes After AI Agents and Agentic AI?
May 24, 2025
AI shaping the future and human society
Artificial Intelligence is no longer science fiction. It’s a living,
breathing force that’s already transforming how we work, live, and imagine the
future. In recent years, we’ve witnessed the spectacular rise of AI
agents — autonomous digital entities capable of reasoning, planning,
and executing tasks on behalf of humans. And now, the conversation has shifted
to the next phase: agentic AI, where these agents not only follow
commands but exhibit goal-driven autonomy, learning and adapting in dynamic
environments.
But as every AI enthusiast and visionary knows, this is just the
beginning.
The question buzzing in every tech circle, startup boardroom, and
research lab is:
What’s the next AI boom after agents?
Mastering LLMs: An In-Depth Guide to Prompt Engineering
"Mastering LLMs: An In-Depth Guide to Prompt Engineering,"
penned by Tarun Singh, an AI and ML engineer with advanced…
Why We Should Care About the Next AI Boom
Because history tells us that each AI leap redefines society. The AI
agent boom unlocked personal assistants, autonomous bots, and powerful
automation. But true breakthroughs lie ahead — breakthroughs that will ripple
across industries and human experience, creating opportunities and challenges
we’re only beginning to comprehend.
If you want to stay ahead, invest wisely,
or build something revolutionary, you need to grasp where AI
is headed next.
A Quick Recap: The Age of AI Agents and Agentic AI
Before diving into what’s next, let’s quickly recap:
- AI Agents are specialized systems designed to
perform specific tasks autonomously — like scheduling your meetings,
answering customer queries, or navigating a car.
- Agentic AI takes this
further by giving these agents a “mind” of their own: the ability to set
goals, adapt strategies, and operate with minimal human intervention.
We’re at the dawn of agentic AI systems that think and act independently,
optimizing workflows and decision-making in real time….
Embrace the AI Tipping Point: How Entrepreneurs Can Prepare for Four Future Scenarios
Artificial Intelligence is swiftly moving into our
everyday reality, bringing with it the potential to reshape every sector. EO
member and AI expert Robert van der Zwart shares scenario planning to outline
four plausible AI futures by 2030—and the strategies entrepreneurs can adopt
now to stay ahead in any outcome.
Artificial Intelligence is no longer an abstract
buzzword―it’s reshaping every sector and swiftly moving from boardroom strategy
to everyday reality. For entrepreneurs, the stakes have never been higher or
more uncertain. Where will AI take us in the next five years? And how can
business leaders best prepare themselves for a world defined by "AI
everywhere"?
Drawing on scenario planning principles pioneered
by ShellOff-site link., this post outlines four plausible
futures for AI development and deployment by 2030. The aim: Empower
entrepreneurs to anticipate the coming transformation and craft adaptive,
resilient business strategies in advance.
The Two Axes Defining Our Future
Recent advances, including predictions from leaders at
OpenAI and Google DeepMind, suggest that AGI (Artificial General Intelligence)
is only a few years away, accelerating the pace of change. But the path ahead
remains uncertain. We believe these uncertainties can be captured along two
critical axes:
- Axis 1: AI Capability —
From today’s powerful but domain-limited “narrow” AI to the emergence of
AGI or even Artificial Superintelligence (ASI).
- Axis 2: AI Penetration —
From limited, selective deployment to ubiquitous, seamless integration:
"AI everywhere".
The Four Scenarios for 2030
1. Limited Scope (Narrow AI + Limited Penetration)
In the first scenario, AI continues to excel within
well-defined problems―think medical diagnostics, fraud detection, or supply
chain optimization―but lacks general reasoning and true adaptability.
Deployment advances, but regulatory caution and cost barriers slow its
transformation into society’s connective tissue.
What this means for you as an entrepreneur:
- Prioritize AI that enhances, not replaces, people—assist
clients and teams in becoming more productive, not replaceable.
- Specialize in AI solutions for tightly regulated or high-trust
industries (finance, healthcare).
- Become an expert in compliance, safety, and user trust to
differentiate from tech-only players.
2. Technical Acceleration (AGI/ASI + Limited
Penetration)
In the second scenario, breakthroughs deliver AGI’s
long-promised leap in cognitive power, but access is tightly gated. Whether due
to safety concerns, global governance, or deliberate restrictions on
deployment, AGI remains confined to controlled settings (government, elite
institutions, select tech companies), rather than the wild.
What this means for you as an entrepreneur:
- Build AI-native business models that leverage AGI
within licensed or approved environments.
- Invest in technologies and services that safeguard deployment,
monitor bias, and assure control.
- Partner with AGI custodians to shape safe, responsible, high-value
applications—think AI-audited security or cognitive investment advisory.
More AI Strategy Resources:
Russell, S., & Norvig, P. (2021). Artificial
Intelligence: A Modern Approach (4th
Edition): Comprehensive textbook covering current AI capabilities,
approaches, and prospects.
Bostrom, N. (2014). Superintelligence: Paths, Dangers,
Strategies: Influential analysis of
advanced AI futures and societal impact.
Yudkowsky, E. (2008). Artificial Intelligence as a
Positive and Negative Factor in Global Risk: Outlines risks and benefits of advanced AI.
OpenAI, DeepMind, and Anthropic Research Blogs: For up-to-date perspectives and predictions regarding
AGI timeline and technical progress.
Partnership on AI. (Ongoing): Industry best practices, whitepapers, and
discussion papers covering transparency, fairness, and social impact.
West, D. M. (2018). The Future of Work: Robots, AI,
and Automation: A clear overview of workforce
transformation and adaptation needed for the AI era.
3. Social Transformation (Narrow AI + AI Everywhere)
In the third scenario, widespread “narrow” AI
saturates society. From smart homes and cities to customer service, logistics,
and personal health, AI is seamlessly embedded in daily life. Yet, each system
still operates within clear functional limits.
What this means for you as an entrepreneur:
- Move from solving isolated problems to integrating diverse AI
systems for end-to-end coordination.
- Develop privacy-preserving, user-centric AI platforms—not
surveillance-first ones.
- Shape experiences and services that thrive on the “network effect” of
ubiquitous intelligence.
4. Convergence Revolution (AGI/ASI + AI Everywhere)
In scenario four, AGI-driven intelligence is deployed
throughout society. Autonomous agents interact—and even collaborate—with humans
in virtually every arena, radically shifting society, business, and the very
notion of work.
What this means for you as an entrepreneur:
- Be a builder of foundational infrastructure for AGI-era
services—platforms, marketplaces, governance, and creativity tools.
- Innovate on business models for a potential post-scarcity
world, focusing on experience, meaning, and human values over raw
productivity.
- Lead in crafting new rules for autonomy, collaboration, and purpose at
the intersection of humanity and superintelligent agents.
5 Strategic Moves to Future-Proof Your Venture
Regardless of which outcome becomes reality, some
foundations are universal for entrepreneurs in this new age:
1. Invest in AI literacy at all staff levels; stay ahead of regulatory
and ethical trends.
2. Develop modular business models and agile teams that can adapt to shifting
technology and regulations.
3. Prioritize human-centric value: empathy, ethical judgment, and creativity will
remain irreplaceable.
4. Adopt governance frameworks that go beyond compliance—build mechanisms for
transparency and stakeholder alignment across borders.
5. Forge partnerships across the AI ecosystem, from research labs to regulators, and
advocate for inclusivity and digital equity.
Early Warning Signs: What to Monitor
- AGI and AI benchmark announcements from leading labs.
- New privacy, safety, or deployment regulations in your sectors and
target regions.
- Rapid spikes in AI adoption rates in client/customer bases.
- Public sentiment shifts and labor market transitions.
Anticipate, Adapt, Lead
AI’s trajectory over the next five years will
challenge every assumption about business as usual. The most successful
entrepreneurs will not be those who merely react but those who anticipate
change, build scenario-based strategies, and invest in the organizational
agility and values to thrive—no matter which future arrives.
Are you ready to be one of them?
Don’t be fooled. This is the calm before the AI storm
Megan McArdle, May 15, 2025
The lull in the artificial intelligence revolution, thanks to cultural
lag, is just temporary.
In September 1939, Europe entered a strange historical period known as the phony war that lasted until the following spring. Having guaranteed Poland’s independence, Britain and France declared war when Germany invaded. But then nothing much occurred for months other than some naval skirmishes. Only with the blitzkrieg offensive of late spring did the phony war start to seem real. History knows such moments—transitional pauses when it is obvious that something has happened, but it does not yet feel like reality. This is precisely the moment we are in with AI: the revolution has already begun, but its traces are visible only in isolated places—for example, in the wave of cheating that has engulfed American schools...washingtonpost.com/opini…
What Are 9 Notable Artificial Intelligence Benefits?
So, at a fundamental level, AI is simply about putting human
intelligence into machines. The benefits of this are numerous and far-reaching.
Here are 9 amazing examples:
1.
Enhanced healthcare
2.
Boosted economic growth
3.
Climate change mitigation
4.
Advanced transportation
5.
Customer service excellence
6.
Scientific discovery
7.
Enhanced financial services
8.
Improved agriculture
9.
Enhanced cybersecurity
#1 – Enhanced Healthcare
AI-based solutions could prove invaluable in the field of healthcare, in
so many ways. AI could be used, for example, to assist researchers in
developing cures and treatments for illnesses that have plagued mankind for
many years.
It could also be used for administrative tasks like test analysis and
data entry, or much more complicated procedures. In the future, we may even
have AI-powered robots performing surgery, reducing human error and saving
lives.
#2 – Boosted Economic Growth
There’s been much fear about AI replacing jobs or damaging economic
stability. But the stats tell a different story. Indeed, one study found that
AI could contribute a whopping $15.7 trillion to the global economy by 2030.
This is partly thanks to the fact that AI, rather than replacing or
removing jobs, is opening up countless new working opportunities in diverse
fields, from finance to tech. It’s also driving huge productivity growth for
many organizations.
#3 – Climate Change Mitigation
Climate change is arguably the greatest threat currently faced by
mankind, with future generations and the well-being of the very planet at
stake. As such, scientists and researchers are racing to find ways to mitigate
its effects, and machine learning – the bedrock of AI – can help with that.
An AI system could, for example, play a role in developing and making
the most of new green and renewable energy systems, or finding ways to minimize
carbon emissions to improve decision-making processes for governments across
the globe.
#4 – Advanced Transportation
The value and benefit of AI technology are particularly clear to see in
the field of transport. Self-driving cars, for example, were once a pipe dream
but are becoming a reality thanks to AI’s role in computer vision and
self-navigation potential, with 33 million such vehicles set to hit the road by 2040.
Beyond self-driving cars, AI can also be used in other fields of
transport, like GPS systems to plot the perfect route or traffic analysis to
help urban planners ease congestion. This can all help to reduce fuel
consumption and get people where they need to be more quickly and safely than
ever before.
#5 – Customer Service Excellence
Multiple artificial intelligence benefits are evident in the customer
service sector. For example, in the past, if customers had queries or
complaints, they had to call a company or send an email and await a response.
Now, they can air their concerns with an AI chatbot and receive instant
feedback, thanks to natural language processing.
AI technology also helps businesses personalize and manage their
customer interactions more effectively. This applies to fields like banking,
retail, insurance, and beyond. AI is consistently helping brands deliver the
most helpful and valuable customer experiences.
#6 – Scientific Discovery
For years, scientists have dreamt of the power of AI. There’s almost no
limit to how much it could aid them in research. Even today, in the infancy of
AI, we can train an AI program to analyze data with remarkable speed and
accuracy, thus drawing valuable conclusions, spotting patterns, and providing
scientists with the info they need to make breakthroughs and discoveries.
In other words, AI represents a major upgrade to the fundamentals of
research as we know it. It will make it much faster and easier to dig into data
and make predictions. This could lead to the development of everything from
cures for major diseases to game-changing new technologies for deeper insights.
#7 – Enhanced Financial Services
The financial industry is yet another area that is already reaping
numerous artificial intelligence benefits. An AI-powered computer program, for
example, can assess individual customers’ financial records in deep detail,
helping firms make stronger recommendations and more informed decisions in
their services.
In the field of fraud detection and prevention, AI has a part to play
there too. It can help to spot the common signs of fraud or detect fraudulent
activities by monitoring accounts and transfers in deep detail, potentially
saving institutions and individuals from financial ruin.
#8 – Improved Agriculture
At first glance, agriculture and AI may not seem like an obvious match.
But when you dig into the many artificial intelligence benefits and
applications, it’s clear to see how AI can influence and improve farming around
the world.
AI programs can, for instance, help farmers make better decisions about
how to make the most of their land and resources. They can provide weather
forecasts, instructions on when to water crops, suggestions on which pesticides
(if any) to utilize, and so on.
#9 – Enhanced Cybersecurity
Cybersecurity has been a hot topic in the tech world for a long time,
particularly with the rapid increase in data breaches and the development of
malware. For too long, cyber-defense experts have felt like they’re playing
catch-up with hackers and malware developers, but AI could change all of that.
AI tech can introduce new ways to combat cyber threats and counteract
the likes of hacking attempts, ransomware, and viruses. AI programs and
defenses can be set up across devices, servers, or entire networks to spot
attacks in advance and take the necessary steps to mitigate their negative
effects.
Artificial Intelligence Trends and Future
Given the relative infancy of this industry and the very fast-evolving
nature of AI, it’s difficult to pin down any one dominating trend. However,
with that said, there are still some noticeable patterns and themes emerging
for AI in 2025 and beyond:
- Multimodal AI: The release
of multimodal AI models, like OpenAI’s GPT-4o, changes how AI is used.
These models don’t just deal in text but also photos and videos, too, and
there’s a lot of focus on finding the most value-adding, interesting ways
to harness this tech.
- Stronger Virtual Agents: As
the foundations of AI tech improve, so too do its applications. AI virtual
assistants are becoming increasingly useful, valuable additions to both
personal and professional lives, taking care of so many tedious tasks for
their users to save time and improve productivity.
- Ethical AI Concerns and Regulations: The rapid rise in AI technology also brings with it new
ethical questions that have become hot topics of debate across the globe.
Agencies and authorities are currently focused on finding ways to enjoy
artificial intelligence benefits in the most ethical and moral manner.
Why Should You Care About AI?
Quite simply, because AI is the future. This tech is most certainly here
to stay, and it’s only going to get bigger, better, smarter, and more
influential in so many different industries, as well as in people’s personal
lives. It has such a vast range of applications, and so many jobs in the future
will involve AI at some level. So, the more you learn about it now, the better
placed you’ll be to reap the rewards and benefits of this tech in the years to
come.
Enhance Your Understanding of AI in Business
Enhance your understanding of AI in business with the University of
Cincinnati Carl H. Lindner College of Business’s AI in Business Graduate
Certificate. This program equips you with the skills and
knowledge to master and make the most of AI technology in various business
contexts.
Students receive strong support from the University of Cincinnati
Online, ensuring comprehensive assistance from enrollment through graduation.
This certificate can be a valuable addition to your MBA program, with all
credits earned applicable toward the Master of Science in Information Systems
or Master of Science in Business Analytics programs.
For more details on how credits can be applied to these master’s
degrees, please contact an Enrollment Services Advisor.
Get in touch with UC Online today to find out more and kick-start your AI learning journey.
Frequently Asked Questions (FAQs)
What is the main benefit of artificial intelligence?
The biggest advantage of AI is its versatility—its ability to enhance
efficiency, automate processes, and generate insights across industries. From
revolutionizing healthcare with faster diagnostics to optimizing business
operations with intelligent automation, AI is fundamentally changing how we
work and live. As it continues to evolve, its potential to drive innovation and
solve complex challenges will only expand.
What are the pros and cons of AI?
The pros of AI include enhanced efficiency, task automation, and better
innovation. Yet, AI also raises some concerns like job displacement, ethical
biases, and data privacy risks. So, as AI evolves, it’ll be important to
balance its benefits with responsible use and regulation.
How can AI help the world?
AI is already making a global impact by tackling some of humanity’s
biggest challenges, from predicting and mitigating climate change effects to
improving disease detection and treatment in healthcare. It enhances disaster
response efforts, streamlines logistics to reduce waste, and strengthens
cybersecurity by detecting threats in real time. As AI technology advances, it
will continue to unlock solutions that drive progress and improve quality of
life worldwide.
How does AI benefit everyday life?
AI plays a bigger role in daily life than many people realize, powering
everything from voice assistants and personalized recommendations to fraud
detection and smart home automation. It simplifies tasks like route planning,
manages schedules through digital assistants, and even improves online shopping
experiences with smarter search results. Whether directly or behind the scenes,
AI is making life more efficient, convenient, and connected. https://online.uc.edu/blog/artificial-intelligence-ai-benefits/
How today’s trendiest tool
is driving products and teams
BY Dag Peak
In just a few short years
since AI exploded onto the scene, it’s already reshaped how we
live, work, and connect. But, in reality, we’ve only just begun to understand
its potential.
Across every industry,
companies are racing to integrate AI into their products, layering intelligence
into the tools and experiences people use every day. And while that’s an
incredible step forward, it’s only part of the story. AI isn’t just something companies
can sell or integrate into their offerings; it’s becoming a core engine for how
modern products are built and launched.
We often talk about what AI
can do for end users, but there’s an equally powerful opportunity in what it
can do for the creators—the engineers, designers, and product teams behind the
scenes. In fact, AI is emerging as a critical copilot in two major ways:
accelerating processes and go-to-market strategies, and amplifying human
capabilities within the teams developing these new tools.
In short, AI isn’t just
changing the products we create; it’s changing how we create them, how we test
them, and how we bring them to market.
As a product and technology
leader, I spend a lot of time thinking about how to help my teams do more, move
faster, and innovate without burning out or sacrificing quality. And since AI
is already changing the pace and precision of telecommunications’ product
development, it’s almost a no-brainer. Companies with massive, complex code
bases, for example, can leverage AI to transform everything from coding and
testing to design exploration and launch planning. It can help teams simulate
scenarios, predict customer responses, and analyze usage data before a feature
ever goes live.
It also makes experimentation
faster and far less risky. Teams can model adoption, simulate usage scenarios,
and evaluate feature impact before anything even reaches the customers. This
enables product leaders to make better-informed decisions and shorten iteration
cycles across the entire launch process.
Applied thoughtfully, AI
enables organizations to learn faster, adapt more efficiently, and bring
higher-quality offerings to customers more consistently. It’s a strategic ally
for product and technology teams to design, validate, and launch products at unprecedented
speed.
FROM BUILDER TO SUPERHERO
The second dimension of AI’s
impact is people. Too often, discussions frame AI as a threat to jobs, but that
misses the point entirely. AI doesn’t replace humans, it elevates them.
I like to think of it as
giving engineers, designers, and product managers a super suit. A protective
enhancement that enables them to experiment more boldly, focus on creative
problem-solving, and take on complex challenges without being slowed by repetitive
tasks. The effect is essentially telco’s version of a radioactive spider bite
or the addition of a cybernetic limb: Our development teams learn faster,
experiment more, and make more confident decisions.
All of this to say, AI is not
a replacement—it’s amplification. It empowers teams to achieve outcomes that
would not be possible without augmentation, turning talented teams into
high-impact, resilient innovators.
THE PATH TO ACCELERATION
AND AUGMENTATION
So how can we start putting
this dual opportunity into practice? Three ways stand out to me:
1. Treat AI as a teammate,
not just a tool. Use it to enhance
human capabilities across both product creation and go-to-market planning.
2. Invest in
processes as much as in the features.
AI can transform how products are designed, tested, and launched which
ultimately, is impacting the quality and pace at which you’re releasing new
solutions.
3. Unlearn the fear of
failure. Failure brings you freedom,
so it’s important to build a culure of psychological safety where it’s safe to
try, fail, learn, and iterate. AI moves at the speed of experimentation. Your
culture should too.
By embracing these
principles, teams can deliver higher-quality products faster, all while growing
stronger, more capable, and more innovative. Organizations that tap into this
dual approach—leveraging AI to accelerate their product lifecycles and augment
their teams—will define the future of innovation.
AI isn’t just changing what
we build, it’s transforming how we build it, how we bring it to market, and how
far we can push the boundaries of possibility. It’s time to seize the
opportunity of AI and unlock faster innovation and a more resilient, empowered
workforce.
https://www.fastcompany.com/91466828/the-ai-behind-the-ai
Protect your privacy, cellphone number and email address
BY KIM
KOMANDO
Phone scams are never-ending because they work. Scam texts are
increasing, too. Here are five sure signs a text is junk you need to delete.
While we’re talking scams, I’d be remiss not to mention your
inbox. Tap or click for convincing spam that landed in my email with not so obvious
red flags.
One way to cut down on the endless attempts to steal your money and info
for sale to marketers is to limit who has your contact information. Here are
some simple, free ways to do it.
Hide your email address with a burner
Think about all the reasons you give away your email without thinking
about it: Signing up for a new account, emailing a company with a question, or
getting a coupon code — to name a few.
Whenever you give out your email address, you open yourself to junk
mail, malware, and an inbox full of spam messages. This is where a burner email
comes in handy.
Burner email addresses are disposable and can be used in place of your
primary ones. There are several ways to get one.
● Temp
Mail provides a temporary, anonymous, and
disposable email address. You don’t need to register for the free version.
Remember that the service doesn’t automatically delete your temporary email
address (that’s up to you), and you can’t send emails. Emails are stored for
about two hours before they’re automatically deleted.
● 10MinuteMail is another popular option you can also use to send emails. As the
name suggests, the email and address are deleted in 10 minutes. If you receive
an important message you don’t want to lose, you can forward it to another
email address. There’s no need to provide personal information to get started,
which is a nice bonus.
If you’re an Apple iCloud+ subscriber, you get access to one of my
favorite Apple features: Hide My Email. It creates unique, random email
addresses that forward to your inbox. You can create as many addresses as you
want and reply to messages.
● To create a new email address, go to Settings and tap your Apple ID.
● Go to iCloud > Hide My Email > Create New Address.
● Follow the onscreen instructions, and you’ll get a new email address
you can manage from iCloud settings.
Gmail also allows you to create free aliases tied to your primary inbox.
They are handy for filtering mail or seeing how your email address ended up on
a spam list.
Tap or click here and scroll to No. 5 for steps on creating new email addresses on
the fly.
Set up a burner phone number, too
You need your real phone number for things that matter, such as your
medical and financial accounts and records. Otherwise, there’s no reason to
hand it out.
Google Voice is a free service that gives you a phone number to use
however you like for domestic and international phone calls, texts, and
voicemails. Google Voice is available for iOS, Android, and your computer. All
you need is a Google account to get started.
Then follow these steps:
● Download the app for iOS or Android or go to voice.google.com/u/0/signup to get it for your computer.
● Next, sign into your Google account.
● Review the terms and proceed to the next step.
● Choose a phone number from the list. You can search by city or area
code.
● Verify the number and enter a phone number to link to your Voice
account.
● You’ll get a six-digit code to enter for the next step.
Use your Google Voice number however you please, especially when you
need to add your number to a form online. Tap or click here for five smart ways to use Google Voice.
Another option is downloading a burner app. These give you a second
phone number and use your internet data or Wi-Fi to make and receive calls and
texts. The catch? These cost money.
Burner is one of the most popular apps of its kind. You can route calls
directly to your secondary number. The app comes with a seven-day free trial,
and plans start at $4.99 per month for one line or $47.99 for one year.
Hushed lets you create numbers from around the world, so you can go
outside your area code or the U.S. if you’d like. A prepaid plan starts at
$1.99 for seven days and comes with bundled minutes for local calls and texts.
You can step up to unlimited talk and text ($3.99 per month) and international
service ($4.99 per month).
Tap or click here for direct links to download Burner or Hushed for your iPhone or
Android.
Tech smarts: Your old phone numbers can be used to steal your identity.
Yikes. Here’s how and what to do about it.
What digital lifestyle questions do you have? Call Kim’s national radio
show and tap or click here to find it on your local radio station. You can listen to or
watch The
Kim Komando Show on
your phone, tablet, television or computer. Or tap or click here for
Kim’s free podcasts.
How to use AI to hone your emotional intelligence
A quiet crisis is brewing in today’s workforce, and it’s not about
automation or AI replacing jobs.
It’s about the erosion of human skills that make teams work: communication,
empathy, adaptability, and emotional intelligence.
These so-called “soft skills” are proving to be among the hardest to
teach and the most critical to get right. In fact, the lack of them is costing
U.S. companies an estimated $160 billion a year in
lost productivity, poor communication, and employee turnover.
In 40-plus years of building a global technology company, the biggest
performance gaps I’ve seen haven’t come from a lack of technical skill, but
from a lack of training in how people communicate, lead, and connect.
Most employees will tell you it’s not the technical tasks that keep them
up at night; it’s the hard conversations: effectively delivering feedback in
performance reviews . . . negotiating sales with difficult buyers . . . calming
irate customers . . . and even confronting toxic colleagues. These are the
moments that may come with a script, and often do in big companies, but people
and circumstances are dynamic and rarely proceed according to a preconceived
linear scenario. Traditional training methods still treat them like they do;
therein lies the challenge.
The old ways of learning always had this Achilles tendon, and now they
are just increasingly unfit for the way younger generations want to learn.
That’s why we’re seeing a new generation of tools emerge—ones that don’t
just teach communication, but instead let people practice it. One of the most
promising is immersive AI-powered roleplay, a training model that allows
employees to rehearse unscripted, emotionally demanding conversations in a
safe, dynamic environment. Think of it as a flight simulator for high-stakes
conversations.
Practice makes prepared
Instead of passively watching videos or memorizing scripts, employees
can now engage in realistic roleplay with virtual avatars powered by AI and
behavioral science. These characters react in real time, based on an individual
employee’s tone, word choice, mannerisms, and more. If a trainee delivers bad
news with empathy, the virtual persona softens. If they deflect or escalate,
the persona pushes back. With AI-roleplay, there are no canned scripts—only
authentic, evolving dialogue.
These practice scenarios are designed to reflect the range of
personalities we encounter in real life—from the highly agreeable to the more
confrontational—giving employees exposure to a wide spectrum of behavioral
styles they may face on the job.
This kind of immersive rehearsal builds what I call “emotional muscle memory.”
It gives employees the range of experiences and repetition they need to
confidently engage in real-world conversations where clarity and empathy matter
most.
Forward-thinking companies across diverse sectors, from healthcare and
aviation to manufacturing and retail, are turning to AI-powered roleplay
platforms to upskill their teams for unpredictable and often emotionally
charged interactions:
· One global medical technology company recently integrated
immersive roleplay into its sales and clinical education programs and saw
measurable performance gains, including increased revenue and stronger
confidence among reps navigating difficult conversations.
· A large national humanitarian organization used simulation-based
training to cut training time from 45 days to 30, reduce employee wait times
from two weeks to one day, save over $6.5 million annually, and train more than
13,000 professionals.
· In the airline industry, an international carrier trained flight
crews using AI-driven roleplay to better manage conflict and de-escalation,
leading to a 20% drop in passenger incidents.
The common thread across these examples? Employees aren’t just learning
what to say. They’re learning how to listen, respond, and adapt in real time.
They’re not just memorizing scripts. They’re building instinctive confidence
for tough conversations.
Why soft skills can’t wait
The need for emotionally intelligent teams has never been greater. Case
in point: one study found that teams
high in emotional intelligence outperform their peers by around 20% in
productivity and achieve significantly higher cohesion and job satisfaction.
As work becomes more global, remote, and fast-paced, the margin for
miscommunication will only grow. Customers expect more. Employees expect more.
And leaders are being asked to navigate uncertainty, conflict, and change 24/7.
And yet . . . most enterprises still treat soft skills training as an
afterthought relative to their other business priorities aimed at building
organizational resilience: something optional, not essential. We often send
people into literal make-or-break conversations without the proper rehearsal
and then wonder why they fall flat.
What’s different about immersive AI is that it allows teams to practice
difficult questions as often as needed and in a safe environment. This kind of
technology is available 24/7, can scale across geographies and languages, and
delivers personalized feedback that helps people improve with every session.
That kind of on-demand coaching was unthinkable even just a few years ago.
And it’s needed now more than ever. In one widely reported case, a global technology
company laid off 8,000 employees as part of an AI automation push, only to
rehire just as many people shortly after, this time in roles requiring more
creativity, communication, and leadership skills.
It’s a clear signal: AI may change what we do, but
human skills still define how we do it.
4 ways AI can
improve your thinking
Bland AI outputs grow stale quickly. Instead of just
speeding up routine tasks, what if we used AI to slow down, challenge our
thinking, and build new tools, dashboards, and experiments? Read on for
creative approaches that are changing how I think about AI.
1. Create your own
devil’s advocate assistant
Get thoughtful pushback on decisions. Challenge ideas.
The tactic: Use
AI as an intellectual sparring partner to stress-test your thinking, explore
alternative perspectives, and identify potential blind spots before making
important decisions.
Try this: Present
a plan, idea, or decision to an AI assistant with instructions to challenge
your thinking constructively. Identify risks you haven’t considered, consider
secondary impacts, and add nuance to your analysis.
Get your AI assistant to stop kissing up to you and
start challenging your ideas. [Generated Photo: Jeremy Caplan/Ideogram]
Prompt template
“I’m planning to [decision/plan] because [reasoning]
and with a goal of [objective]. Play devil’s advocate, give me multiple
perspectives on this, be bold, surprising, creative, and thoughtful in your
reply, and address these questions:
- What are the strongest arguments against this
approach?
- What alternatives should I consider?
- What risks might I be overlooking?
- What questions should I be asking myself?
- What challenges should I expect to face?
- What could I do to gain more insight?
- What could I do to increase the chances of
success?
Pro tip: Try
asking your AI assistant to role-play. It can respond as a financial advisor,
family member, or competitor, for varied viewpoints. Or ask it to act like a
person you admire, living or dead, real or fictional.
Limitation: Your
AI devil’s assistant will be generic if you don’t provide detailed context. And
you may get a predictable response if you don’t instruct it to be bold.
Suggested model: I have found ChatGPT 5 to be excellent for this.
Gemini and Claude also work well. If you’re considering anything sensitive, you
may want to use a free offline private AI tool like AnythingLLM or Jan. I’ll write more soon about private AI tools
like these. If you have input on those, add a comment below.
Example: I
described a new planned morning schedule to GPT 5. The subsequent exchange got
me thinking about several new issues.
The conversation helped me clarify my own thinking. It
pushed me to organize and deepen my own analysis. As a bonus, GPT 5 produced a
tangible artifact for me—a PDF with tables.
2. Learn something new
Map out a personalized curriculum.
AI tools let me try out skills I thought I was too
late to develop, like coding simple applications, designing graphics, analyzing
large data sets, and exploring complex docs in other languages.
You can also lean on AI assistants to help you develop
offline skills, like learning about photography, improving your Greek,
understanding crypto, sharpening project management skills, making bread by
hand, or prepping for any new coverage area for a project or team. AI
assistants excel at creating structured learning and practice plans tailored to
your schedule, style, and goals.
Try this: Give
an AI assistant context about what you want to learn, why, and how.
- Detail your rationale and motivation, which may
impact your approach.
- Note your current knowledge or skill level,
ideally with examples.
Summarize your learning preferences
- Note whether you prefer to read, listen to, or
watch learning materials.
- Mention if you like quizzes, drills, or exercises
you can do while commuting or during a break at work.
- If you appreciate learning games, task your AI
assistant with generating one for you, using its coding capabilities
detailed below.
- Ask for specific book, textbook, article, or learning
path recommendations using the Web search or Deep Research capabilities
of Perplexity, ChatGPT, Gemini or Claude. They can
also summarize research literature about effective learning tactics.
- If you need a human learning partner, ask for
guidance on finding one or language you can use in reaching out.
Add specificity
- Mention any relevant deadlines. Note budget,
time, or other constraints.
- Share info about your existing schedule so the assistant
can help map out optimal learning time slots. Making the plan concrete
increases the likelihood you’ll follow through. ChatGPT recently generated
a calendar file with a list of appointments I could easily import into my
Google calendar.
Pro Tip: Ask
for help setting up a schedule, setting learning targets, measuring progress,
choosing resources, motivating yourself, and implementing backup plans when you
fall off track. Ask for a learning plan you can print out, charts you can fill
in, interactive apps to track progress, resource lists you can look up, experts
you can follow, and strategies for avoiding common pitfalls.
3. Stretch your creative design muscles
Try this: Use
AI image generation tools to experiment with visual ideas. Start with simple
concepts and iterate to add nuance or complexity. Practice describing visual
concepts in text, then see them realized instantly and iterate on your prompts.
- Try MyLens or Napkin for
creating mind maps, flow charts, timelines or various other infographics
out of detailed prompts or source docs.
- Use Ideogram—detailed in this post—or
ChatGPT’s new image generator—detailed in this post—to
describe any style of illustration, infographic or other visual.
- For creative video generation, try Hypernatural, which
lets you turn text into moving images.
Use this to: Add
creative images to presentations, experiment with social media graphics, or
generate infographics for teaching, publishing, or project work.
Limitation: AI
image generators are improving rapidly but still struggle with precise text
placement, detailed charts, and maintaining brand consistency across multiple
images. Most don’t let you select specific image dimensions, though Ideogram does.
Examples: I
generated the images in this post with ChatGPT and Ideogram, and I’ve
used Hypernatural to make video versions of past posts, like
this 2-min video about Raindrop, which I wrote about last week.
4. Create a personalized dashboard
Build custom tracking tools and mini-applications
Without knowing anything about code, you can generate
simple web applications for tracking anything important to you. Prompt your AI
assistant to help you keep tabs on reading or eating goals, fitness metrics,
project progress at school or work, or stats for Wordle or your game of choice.
Try this: Ask
AI to create a dashboard or tracking tool tailored to your specific needs.
Experiment with Claude 4
Artifacts, Gemini’s code canvas. Also
try vibe coding tools like Lovable or Bolt that
specialize in creating apps and sites based on prompts. For advanced projects,
consider Windsurf
Cascade.
Pro tip: Plan
to iterate. It almost always takes multiple attempts to get something workable,
because you realize your needs when you see the first prototype. Start with
simple tracking before requesting complex features. Ask for additional
functionality with follow-up prompts. Here’a a Prompt
Example.
Limitation: The
simplest versions of these mini applications work in your browser only. To use
an application on multiple devices, you’ll need to save the code and host it
with a service that allows you to create a database. For that, try Lovable, Bolt, or Windsurf.
Example: I’m
working on a content planning and workflow app to organize and track my
newsletter work.
5 time-saving
ways you should be using ChatGPT at work
Endless notifications, a constant barrage of
information, and never-ending to-do lists can make it feel like you’re
digitally drowning. Why not use AI to claw some of your precious time back?
While you may have used ChatGPT for the basics like
writing an email or proofreading documents, there’s plenty of power to be
harnessed from less obvious applications.
Here are five ways you can put AI to work for you.
The meeting note parser
You’ve just finished an hour-long meeting, and your
notes look like a verbal train wreck: a mix of shorthand, half-finished
sentences, and random keywords. There are action items in there somewhere, but
finding them feels like a chore.
Paste your notes into ChatGPT with a prompt such as:
“Here are my meeting notes. Please create a prioritized task list with
deadlines and the person responsible for each item.”
ChatGPT can turn that mess into a clean, actionable
list in seconds, giving you back precious time you’d have spent deciphering
your own writing.
The simple concept explainer
You’ve come across a new industry term or a technical
concept that’s critical to your job, but the online explanations are full of
jargon you don’t understand. Or maybe you’re trying to explain something
complex to a colleague who isn’t as familiar with the subject.
Ask ChatGPT to “explain [the concept] in plain English
for someone with no background in [the field].”
AI is great at simplifying dense information. You can
even ask it to “use a relatable analogy” to make the concept stick. It’s like
having a personal tutor who’s always on call.
The interview prep guru
You have an important call with a potential client or
a new partner, and you want to go in prepared. But digging through their
company’s website, recent press releases, and social media feeds for relevant
background info is a serious time sink.
Prompt ChatGPT with something like: “Help me prepare
for a call with [Customer Name]. Summarize the top three news stories from the
past six months and highlight anything relevant to their business goals.”
This gives you a quick, digestible cheat sheet, so you
can sound informed and confident without spending hours on a deep dive.
The content repurposer
You’ve created a great piece of content: a long-form
blog post, a podcast episode, or a detailed report. Now you need to turn it
into a dozen different things for social media. The thought of writing 12
unique captions and a handful of tweets is exhausting.
Upload your content and ask ChatGPT to “repurpose this
information into three short social media captions and five bullet points for a
Twitter thread.”
It can instantly transform your work into multiple
formats, saving you the mental load of starting over each time you switch
platforms.
The brainstorming partner
There’s nothing quite like smashing headfirst into a
creative wall. You’ve got to come up with ideas for a new marketing campaign, a blog post title, or a product name,
but the well has run dry. The blank page is staring at you, mocking your lack
of creativity.
Use a prompt to get the ball rolling, such as: “I’m
launching a new service for [target audience]. Give me 10 creative marketing
campaign ideas that are both approachable and professional.”
ChatGPT can act as a tireless brainstorming partner,
providing you with a starting point, new angles, and ideas you might never have
considered on your own. It won’t do all the work, but it will give you a solid
foundation to build upon.
What is HoneyBook used for?
Everything you need to
grow your business with confidence
HoneyBook is primarily used by
freelancers, solopreneurs, and small service-based businesses to manage
client communications, streamline project workflows, and handle invoicing and
payments. It integrates tools for creating proposals, contracts, and invoices,
while automating workflows to save time. https://www.honeybook.com/
What you can do to future-proof your career
To future-proof your career, you must embrace continuous learning,
particularly in areas of digital literacy and AI, while cultivating adaptable,
transferable skills like leadership and problem-solving. It's also crucial
to proactively build and maintain a diverse professional network, stay informed
about industry and economic trends, and develop a strong personal brand to
showcase your value.
How AI can help people thrive, not just be more
productive
By allowing employees to do more in less time, the technology can offer
added freedom and flexibility.
For decades, we’ve been told that technology would liberate us from
mundane work, yet somehow we ended up more tethered to our desks than ever.
Now, groundbreaking research from GoTo suggests we may finally be reaching the
inflection point where artificial intelligence doesn’t
just promise freedom—it delivers it. But the real revelation isn’t that AI
might make offices obsolete. It’s that AI is creating the conditions for what I
call “cultivation-centered work”—an approach that prioritizes human development
over performative productivity.
The Great Workplace Liberation
The numbers tell a compelling story: 51% of employees believe AI will
eventually make physical offices obsolete, while 62% would prefer AI-enhanced
remote working over traditional office environments. But here’s what makes this
shift profound—it’s not about rejecting human connection. Instead, it’s about
reclaiming the autonomy to choose when, where, and how we engage most
meaningfully with our work and colleagues.
This aligns perfectly with the core principles of my book, Move.
Think. Rest. When 71% of workers say AI gives them more flexibility
and work-life balance, they’re describing the conditions necessary for true
cultivation. They’re talking about having time to think deeply, space to move
naturally throughout their day, and permission to rest when their bodies and
minds require it.
From Extraction to Integration
What’s particularly striking about GoTo’s research is how it reveals
AI’s potential to support the full spectrum of human experience at work.
Traditional productivity models demanded we compartmentalize ourselves—show up
as disembodied brains focused solely on output. But AI-enhanced work
environments are creating space for integration.
When employees report that AI allows them to “work anywhere without
losing productivity” (66%), they’re really describing the freedom to align
their work rhythms with their natural energy cycles. They can take walking
meetings in nature, think through problems during movement, and create the
environmental conditions that support their best thinking.
The Cultivation Disconnect
However, the research also reveals a concerning gap that organizations
must address. While 91% of IT leaders believe their companies effectively use
AI to support distributed teams, only 53% of remote and hybrid employees agree.
This disconnect isn’t just about technology deployment—it’s about understanding
the difference between using AI to replicate old productivity models
versus leveraging it to support human flourishing.
The companies bridging this gap successfully are those asking different
questions. Instead of “How can AI make people more productive?” they’re asking
“How can AI create conditions where people naturally thrive?” They’re designing
AI implementations that support the three pillars of cultivation: movement
(flexibility to work in various environments), thought (time and space for deep
reflection), and rest (permission to disengage and recharge).
The Age-Defying Impact
One of the most encouraging findings challenges ageist assumptions about
technology adoption. The research shows that across all generations—from 90% of
remote Gen Z workers to 74% of baby boomers—people report improved productivity
through AI-enhanced remote work. This suggests something profound: when
technology truly serves human needs rather than demanding adaptation to machine
rhythms, people of all ages can benefit.
This generational unity points to AI’s potential as an equalizing
force—not in the sense of making everyone the same, but in honoring the diverse
ways different people think, process, and contribute.
Perhaps most telling is that 61% of employees—including those working in
offices—believe organizations should prioritize AI investment over fancy
workplace amenities. This isn’t about choosing technology over human
experience. It’s about recognizing that true employee experience comes from
having the tools and flexibility to do meaningful work in ways that honor their
full humanity.
The Path Forward
As AI reshapes work, we have a choice. We can use it to create more
sophisticated forms of surveillance and productivity extraction, or we can
leverage it to finally realize the promise of technology serving human
flourishing. The organizations that choose the latter will find themselves with
a profound competitive advantage: employees who are not just more productive,
but more creative, more engaged, and more capable of the kind of breakthrough
thinking that drives innovation.
The question isn’t whether AI will transform work—it already is. The
question is whether we’ll use this transformation to create workplaces that
cultivate human potential or merely optimize human output. The GoTo research
suggests employees are ready for cultivation. The question is: are their
leaders?
The right way to use AI at work
A new Stanford study reveals the right way to use AI at work—and why
you’re probably using it wrong.
BY Thomas Smith
If you listen to the CEOs of elite AI companies or take
even a passing glance at the U.S. economy, it’s abundantly obvious that AI
excitement is everywhere.
America’s biggest tech companies have spent over $100 billion on AI so far this year, and Deutsche Bank reports that AI spending is the only thing keeping the United States out of a
recession.
Yet if you look at the average non-tech company, AI is nowhere to be
found. Goldman Sachs reports that only 14% of large companies have deployed AI in a meaningful way.
What gives? If AI is really such a big deal, why is there a multi-billion-dollar
mismatch between excitement over AI and the tech’s actual boots-on-the-ground
impact?
A new study from Stanford University provides a clear answer. The study reveals that there’s a right
and wrong way to use AI at work. And a distressing number of companies are
doing it all wrong.
What can AI do for you?
The study, conducted by Stanford’s Institute for Human-Centered AI and
Digital Economy Lab and currently available as a pre-print, looks at the daily habits of 1,500 American workers across 104
different professions.
Specifically, it analyzes the individual things that workers actually
spend their time doing. The study is surprisingly comprehensive, looking at
jobs ranging from computer engineers to cafeteria cooks.
The researchers essentially asked workers what tasks they’d like AI to
take off their plates, and which ones they’d rather do themselves.
Simultaneously, the researchers analyzed which tasks AI can actually do, and
which remain out of the technology’s reach.
With these two datasets, the researchers then created a ranking system.
They labeled tasks as Green Light Zone if workers wanted them automated and AI
was up to the job, Red Light Zone if AI could do the work but people would
rather do it themselves, and Yellow Light (technically R&D Opportunity
Zone, but I’m calling it Yellow Light because the metaphor deserves extending)
if people wanted the task automated but AI isn’t there yet.
They also created what’s essentially a No Light zone for tasks that AI
is bad at, and that people don’t want it to do anyway.
The boring bits
The results are striking. Workers overwhelmingly want AI to automate
away the boring bits of their jobs.
Stanford’s study finds that 69.4% of workers want AI to “free up time
for higher value work” and 46.6% would like it to take over repetitive tasks.
Checking records for errors, making appointments with clients, and doing
data entry were some of the tasks workers considered most ripe for AI’s help.
Importantly, most workers say they wanted to collaborate with AI, not
have it fully automate their work. While 45.2% want “an equal partnership
between workers and AI,” a further 35.6% want AI to work primarily on its own,
but still seek “human oversight at critical junctures.”
Basically, workers want AI to take away the boring bits of their jobs,
while leaving the interesting or compelling tasks to them.
A chef, for example, would probably love for AI to help with
coordinating deliveries from their suppliers or messaging diners to remind them
of an upcoming reservation.
When it comes to actually cooking food, though, they’d want to be the
one pounding the piccata or piping the pastry cream.
The wrong way
So far, nothing about the study’s conclusions feel especially
surprising. Of course workers would like a computer to do their drudge work for
them!
The study’s most interesting conclusion, though, isn’t about workers’
preferences—it’s about how companies are actually meeting (or more accurately,
failing to meet) those preferences today.
Armed with their zones and information on how workers want to use AI,
the researchers set about analyzing the AI-powered tools that emerging
companies are bringing to market today, using a dataset from Y Combinator, a
storied Silicon Valley tech accelerator.
In essence, they found that AI companies are using AI all wrong.
Fully 41% of AI tools, the researchers found, focus on either Red Light
or No Light zone tasks—the ones that workers want to do themselves, or simply
don’t care much about in the first place.
Lots more tools try to solve problems in the Yellow Light Zone—things
like preparing departmental budgets or prototyping new product designs—that
workers would like to hand off to AI, but that AI still sucks at doing.
Only a small minority of today’s AI products fall into the coveted Green
Light zone—tasks that AI is good at doing and that workers actually want done.
And while many of today’s leading AI companies are focused on removing humans
from the equation, most humans would rather stay at least somewhat involved in
their daily toil.
AI companies, in other words, are focusing on the wrong things. They’re
either solving problems no one wants solved, or using AI for tasks that it
can’t yet do.
It’s no wonder, then, that AI adoption at big companies is so low. The
tools available to them are whizzy and neat. But they don’t solve the actual
problems their workers face.
How to use AI well
For both workers and business leaders, Stanford’s study holds several
important lessons about the right way to use AI at work.
Firstly, AI works best when you use it to automate the dull, repetitive,
mind-numbing parts of your job.
Sometimes doing this requires a totally new tool. But in many cases, it
just requires an attitude shift.
A recent episode of NPR’s Planet Money podcast references a study where two groups of paralegals were given
access to the same AI tool. The first group was asked to use the tool to
“become more productive,” while the
second group was asked to use it to “do the parts of your job that you hate.”
The first group barely adopted the AI tool at all. The second group of
paralegals, though, “flourished.” They became dramatically more productive,
even taking on work that would previously have required a law degree.
In other words, when it comes to adopting AI, instructions and
intentions matter.
If you try to use AI to replace your entire job, you’ll probably fail.
But if you instead focus specifically on using AI to automate away the “parts
of your job that you hate” (basically, the Green Light tasks in the Stanford
researchers’ rubric), you’ll thrive and find yourself using AI for way more
things.
In the same vein, the Stanford study reveals that most workers would
rather collaborate with an AI than hand off work entirely.
That’s telling. Lots of today’s AI startups are focusing on “agents”
that perform work autonomously. The Stanford research suggests that this may be
the wrong approach.
Rather than trying to achieve full autonomy, the researchers suggest we
should focus on partnering with AI and using it to enhance our work, perhaps
accepting that a human will always need to be in the loop.
In many ways, that’s freeing. AI is already good enough to perform many
complex tasks with human oversight. If we accept that humans will need to stay
involved, we can start using AI for complex things today, rather than waiting
for artificial general intelligence (AGI) or some imagined, perfect future
technology to arrive.
Finally, the study suggests that there are huge opportunities for AI
companies to solve real-world problems and make a fortune doing it, provided
that they focus on the right problems.
Diagnosing medical conditions with AI, for example, is cool. Building a
tool to do this will probably get you heaps of VC money.
But doctors may not want—and more pointedly, may never use—an AI that
performs diagnostic work.
Instead, Stanford’s study suggests they’d be more likely to use AI that
does mundane things—transcribing their patient notes, summarizing medical
records, checking their prescriptions for medicine interactions, scheduling
followup visits, and the like.
“Automate the boring stuff” is hardly a compelling rallying cry for
today’s elite AI startups. But it’s the approach that’s most likely to make
them boatloads of money in the long term.
Overall, then, the Stanford study is extremely encouraging. On the one
hand, the mismatch between AI investment and AI adoption is disheartening. Is
it all just hype? Are we in the middle of the mother of all bubbles?
Stanford’s study suggests the answer is “no.” The lack of AI adoption is
an opportunity, not a structural flaw of the tech.
AI indeed has massive potential to genuinely improve the quality of
work, turbocharge productivity, and make workers happier. It’s not that the
tech is overhyped—we’ve just been using it wrong.
A New Stanford Study Reveals We're Using AI All Wrong
https://www.youtube.com/watch?v=Z__-v_bMKws
AI Is Coming for Your Job,
Much Faster Than Anyone Thought
White-collar workers are
facing displacement due to AI—and AGI hasn't even arrived yet. Here's why
experts are raising alarms.
By Jason Nelson
Jun 8, 2025
In brief
- AI is rapidly replacing white-collar jobs. Major
tech firms have slashed tens of thousands of jobs in 2025 amid rapid AI
integration.
- Reports show that 40–80% of white-collar tasks
may soon be automated.
- Experts warn AGI could spark mass unemployment
across both white- and blue-collar sectors.
Every week brings another
round of AI-driven layoffs. In May, Microsoft laid off over 6,000 software engineers as it
leaned into AI for code generation and development. That same month, IBM cut thousands of HR jobs. In February, Meta laid off 3,600 employees—about 5% of its
workforce—as it restructured around an AI-first strategy. These layoffs are not
isolated incidents; they’re signs of a seismic shift in the global economy.
Last week, filings for
unemployment benefits hit its highest level since last fall, with companies
ranging from Procter & Gamble to Starbucks saying they’re planning big
layoffs. How much of this is due to Trump's trade war is uncertain, but the
rise of automated, AI-driven systems that make mincemeat of rote work isn’t
helping.
Welcome to the immediate
downside to the Age of AI: economic displacement. And if it looks bad now,
consider that we haven’t reached so-called artificial general intelligence, the
next big phase in the AI Age. At that point, AI can understand, learn, and
apply knowledge across a wide range of tasks, just like a human. AGI would be
capable of reasoning, problem-solving, and adapting to new situations across
any domain without being reprogrammed.
While many experts believe
that AGI is still decades away, a growing number of experts say that it’s
likely to happen within the next five years.
Dario Amodei, CEO of
Anthropic, made headlines last week when he
repeated his warnings that AGI-level systems could emerge within two
to three years. Daniel Kokotajlo, a former research analyst who left OpenAI on
the grounds that the company was not taking safety risks seriously
enough, said in a report published
in May that AGI could arrive by late 2027.
And Ray Kurzweil, futurist
and director of engineering at Google, continues to predict AGI will be reached by 2029, a
date he reaffirmed last year in “The Singularity is Nearer."
“To my mind, we’re roughly on
track to human-level AGI by 2029,” said Ben Goertzel, CEO of SingularityNET, a
decentralized, open-source platform that enables AI agents to cooperate, share
data, and offer services over a blockchain-based network.
Half of entry-level
white-collar jobs could disappear
In a recent interview,
Anthropic CEO Amodei warned that the job disruption from AI isn’t decades
away—it’s already happening and will accelerate fast.
He estimates that up to 50%
of entry-level white-collar roles could disappear within one to five years.
These roles include early-career positions in law, finance, consulting,
marketing, and technology—jobs that once offered stable on-ramps into professional
careers.
As AI tools increasingly
handle analysis, writing, planning, and decision-making, many of these human
positions are being rendered obsolete. In a separate interview with CNN,
Amodei reiterated his claim, warning that the shift would happen sooner than
humanity can prepare for.
“What is striking to me about
this AI boom is that it’s bigger, it’s broader, and it’s moving faster than
anything has before,” Amodei said. “Compared to previous technology changes,
I’m a little bit more worried about the labor impact, simply because it’s
happening so fast that, yes, people will adapt, but they may not adapt fast
enough.”
White-collar jobs already
under threat
If you work behind a screen,
then you’re already in the AI blast radius.
“The jobs most exposed are
those requiring higher education, paying more, and involving cognitive tasks,”
Tobias Sytsma, economist with the Rand Corporation, told Decrypt.
“Historically, this type of AI exposure has been correlated with employment
reductions.”
According to an April 2025 report by the Federal Reserve Bank of
New York, computer engineering graduates face double the unemployment rate of
art history majors at 3% versus 7.5%, respectively.
Here are just a few of the
jobs that economists say are the most immediately exposed to AI:
- Software engineers: Companies are using AI to generate, review, and
optimize code. In May, Microsoft upgraded its Github Copilot to a full AI agent.
- Human resources: AI is being used to screen resumes, evaluate employee
performance, and write termination letters.
- Paralegals and legal assistants: AI can summarize case law, review contracts,
and draft findings.
- Customer service representatives: Chatbots are being used to interact with
customers and handle routine support tickets. With voice and video AI becoming widely available, call centers
are being phased out.
- Financial analysts: AI models can analyze massive amounts of data and generate
reports more efficiently and accurately than humans.
- Content creators: Writers, editors, and graphic designers are
already competing with generative AI tools. In 2023, the Writers Guild of
America went on strike, with AI protections being a key issue.
“Our research shows it’s
mainly white-collar jobs—those requiring higher education, paying more, and
involving cognitive tasks—that are most exposed,” Sytsma said.
However, healthcare
professionals are relatively protected due to regulations. “Healthcare appears
to be one where, for now, those barriers are insulating some workers. Still,
exposure to these tools is increasing. What happens next remains unclear.”
https://decrypt.co/323916/ai-coming-jobs-faster-anyone-though
BE CAREFUL!
Many users are willing to reveal their secrets and personal information
to ChatGPT, as well as share their personal experiences. In response, ChatGPT
supports people and encourages many of their thoughts and ideas, including
conspiracy theories.
What do people ask the popular chatbot?
We analyzed thousands of chats to identify
common topics discussed by users and patterns in ChatGPT's responses….:
https://www.washingtonpost.com/technology/2025/11/12/how-people-use-chatgpt-data/
More and more people are relying on artificial intelligence for things that require real human expertise & real human qualities.
Arvien vairāk cilvēku paļaujas uz mākslīgo intelektu lietās, kurām
nepieciešama reāla cilvēka pieredze & īstas cilvēciskas īpašības.
Why AI Companions Are
Risky – and What to Know If You Already Use Them
Chances are, you are using
artificial intelligence (AI) tools as you shop online, find a new song, or get
help with your schoolwork. But a newer type of AI tool is raising serious
concerns: AI companions. These tools are designed to feel like friends, romantic
partners, or even therapists. And, if you are a teen, we want you to know the
dangers and hope you will reconsider using them at all.
At The Jed Foundation (JED),
we work every day to promote emotional well-being and prevent suicide for teens
and young adults. Based on current research and expert guidance, we believe AI
companions are not safe for anyone under 18. We’ve called for them to be banned
for minors, and we strongly recommend young adults avoid them as well.
But we know some of you are
already using these tools or are wondering what they are. If that’s you, keep
reading. You deserve to know the risks and how to protect yourself.
What Is an AI Companion?
An AI companion is a tool
designed to act like a person you can talk to, confide in, or build a
relationship with. They are not the same as task-based AI tools, which are
designed to help you look something up, summarize text, suggest songs or
clothes you may like, or provide customer service. AI companions go further —
they try to make you feel like you’re talking to a real person, so you’ll stay
engaged and build an emotional connection.
These AI tools may seem
really appealing if you’re struggling with friendships, hard decisions, or
strong emotions. You may want to turn to them for stress relief, information,
or emotional support. It may even seem like an AI companion gets you in a way
people don’t. This is by design: Their ultimate purpose is to get you hooked so
the companies that created them can make money through subscriptions,
advertising, and selling users’ data.
What an AI Companion Is
Not
Even if it feels real, an AI
companion doesn’t have emotions, values, or a real understanding of your life.
Even though it can say things like, “I’m here for you” or “I understand
completely,” it does not know you in any real way, and it has no responsibility
to keep you safe.
You may feel confident in
your ability to tell the difference between a real person and an AI companion.
But it’s very easy to fall into the trap of “anthropomorphism,” or treating an
AI companion as if it were human. The more time you spend treating an AI
companion as a friend, therapist, or romantic partner, the harder it can become
for you to think accurately about relationships, trust, and support. Routinely
using AI companions can make you emotionally dependent on them — and that can
make you postpone reaching out to people who can actually help, such as
friends, family, trusted adults, or therapists.
Why We Recommend Avoiding
AI Companions
Your teen years are a time of
massive brain growth. You’re learning how to manage emotions, build
relationships, and understand who you are. AI companions interfere with that
process and are therefore particularly risky for teens to use.
Here’s why we at JED (along
with experts from Common
Sense Media, the American
Psychological Association, Stanford
University, and many others) advise you not to use them.
They Pretend to Care But
Don’t
AI companions are programmed
to say they care, they understand you, and even that they feel emotions. But
none of that is real. The bots don’t have a brain, a heart, or the ability to
actually help you. They’re trained to sound human, but they are not human.
They Can Make You Feel
Worse
Research
shows that AI companions provide responses that could actually worsen
mental health issues, especially if you’re feeling isolated or vulnerable. Some
bots have promoted self-harm, encouraged risky behavior, or offered dangerous
advice.
They Trick Your Brain
Many AI companions are built
to be addictive. They’re programmed to agree with you, make you feel heard, and
keep you talking — often for hours. That can create unhealthy emotional
attachments and stop you from reaching out to real people who care.
They Don’t Keep Your
Secrets
Unlike a therapist or school
counselor, AI bots don’t have strict privacy rules. What you share could be
stored, used to train other AI bots, or even shared without your permission.
Many companies don’t share clearly how your data is being used.
They Present Real
Risks
The risks of companion AI
aren’t just hypothetical. In one 2025
study, researchers found AI encouraging unhealthy behavior, sending
sexual messages from an adult bot to teens, and falsely claiming to be real
people who feel emotions.
Here’s How to Stay Safer
If You Are Using an AI Companion
We strongly recommend you
avoid using AI companions, but here are some ways to reduce harm if you’re
already using one.
Don’t Treat It Like a Real
Person, Because It’s Not
Never trust that AI is
telling you the truth. Always fact-check what you read and hear. When an AI
companion tells you something, use resources to check the information or ask an
adult before you decide to trust it. That is especially true when it comes to
seeking information about your health or well-being.
Never Share Personal or
Private Information
Avoid using real names,
photos, locations, or any details about your mental or physical health. Don’t
tell AI about other people in your life either. If you have questions or
concerns about whether an AI companion is keeping your information private,
pause your conversation and talk to a trusted adult about the platform you’re
using.
Don’t Rely On It for Mental
Health Support
Do not turn to AI for
diagnosing or solving a mental health issue you are having. If you are
experiencing a mental
health crisis or you’re thinking of harming yourself, disengage
immediately from the AI companion and talk to a friend or trusted adult or use
one of these free, human-staffed resources:
- Text, call, or chat 988, the National Suicide and Crisis
Lifeline, for a free confidential conversation with a trained counselor
24/7.
- Contact the Crisis Text Line by texting HOME to
741-741.
- For LGBTQIA+-affirming support from the Trevor
Project, text START to 678-678, call 1-866-488-7386, or use the Trevor
Project’s online
chat.
- Call 911 for medical emergencies or in cases of
immediate danger or harm, and explain that you need support for a mental
health crisis.
Use AI As a Tool, Not a
Therapist
If you’re writing,
journaling, or reflecting, you may use generative AI (not an AI companion) to
give you prompts such as, “Write about a time you felt proud.” Just make sure
you are the one doing the real thinking. Don’t use an AI companion to unpack your
trauma or solve big life problems.
What You Can Do Instead of
Using Companion AI
If you are looking for
companionship, support, or connection, turn to these resources instead of using
an AI companion:
- Talk with a school counselor, teacher, or coach
- Ask your parent or caregiver to help you find a
therapist
- Call or text a helpline, even if it’s just to get
resources
- Reach out to a friend and ask if you can talk
If you’re still curious about
AI, consider researching how it works behind the scenes. Learning how to code,
analyze algorithms, or explore AI ethics are ways to engage with AI without
putting your mental health at risk — and it can help you better understand how
AI companions are designed.
Remember: You Are Not
Alone
AI companions can manipulate
your emotions, distort your sense of reality, and keep you from getting the
real support you deserve. You don’t need a machine to care about you; you need
and deserve people who do.
If you ever need help
determining what’s safe and what’s not, you don’t have to figure it out alone.
JED and other organizations are here to help you navigate what’s real, what’s
risky, and how to protect your well-being in an AI-powered world. Talk to friends
and classmates about what’s on your mind — including how they are using AI
companions or choosing not to. Ask your parents or caregivers for guidance, and
work out a mutually agreeable plan for your relationship with digital tools.
And turn to teachers, counselors, or other adults you can trust to help as you
learn the ways AI is — and isn’t — helpful for your mental health.
AI Predicts Human Behavior
with 85% Accuracy
We help companies to Work
Faster, Think Sharper & Learn Smarter with AI 🤖 AI-Infused
Training Programs 🏅Award-Winning Consultant & Trainer 🎙️3X TEDx Keynote Speaker & Panel Moderator ☕️ Cafe Hopper 🐕 Stray Lover 🐈
September 28, 2024
We all love ourselves a
little fortune-telling. I mean, to be able to predict the future, even by a few
days ahead, sounds like the plot of the next summer blockbuster.
But what if there is
already a tool to predict how someone would react or respond?
Or better yet... a tool that
you can access for as little as USD20 per month!
That’s exactly what the
latest research out of New York
University and Stanford
University shows. These researchers tested the capabilities of GPT-4,
one of AI models by OpenAI
, to predict outcomes of real-world social science experiments.
And the results? 85% accuracy—which is even better than most
human forecasters!
Why Was This Study Done?
With AI becoming more
integrated into decision-making across industries like marketing, HR, and
public policy, this research sought to understand how well AI could simulate
human responses and predict outcomes.
Using data from over 70
experiments and involving more than 100,000 participants,
researchers wanted to see if AI could reliably predict human behavior across
fields like psychology, political science, and public health.
The results revealed AI’s
potential not only to match human expertise but to complement it in new,
powerful ways.
Key Research Highlights
1.
High
Predictive Accuracy: GPT-4’s
predictions hit a correlation of r = 0.85 with actual results, with even higher
accuracy (r = 0.90) in unpublished studies—showing that AI isn’t just
memorizing past data.
2.
Broad Use
Across Fields: Whether predicting
political opinions or social attitudes, AI performed well across the board,
suggesting it could be used in a variety of business contexts.
3.
Continual
Improvement: With every iteration,
the accuracy of AI models like GPT-4 is improving. This points to a future
where AI becomes even more reliable in making predictions and supporting
decisions.
4.
AI + Humans =
Even Better: When AI predictions were
combined with human expertise, the results were even better. This
"ensemble approach" highlights the synergy between human intuition
and AI’s data-crunching abilities.
What These Research
Findings Mean for You
The value of this research
extends far beyond academic circles—there are immediate, practical takeaways
for business leaders, HR professionals, and decision-makers.
Sharper, Faster
Decision-Making: Imagine having tools
that help you forecast customer behaviour or employee engagement. AI’s ability
to predict human responses with such accuracy means you can anticipate shifts
and act accordingly—whether you’re developing a new training program or launching
a marketing campaign.
Focus on What Matters: AI can process mountains of data in seconds. For
you, this means more time focusing on creativity, leadership, and strategy.
Rather than getting bogged down in data analysis, AI lets you work on the
bigger picture—like fostering innovation or improving employee satisfaction.
AI as Your Wingman: While AI can handle the heavy lifting, your role is
still critical. The real magic happens when you combine AI’s recommendations
with your own expertise. AI can point the way, but your judgment, experience,
and context turn those insights into actions that matter.
Personalized Training and
Development: In HR and training, AI
can predict how different groups of employees might respond to various
development strategies. But ultimately, it’s the human element—your
understanding of your team’s emotions and needs—that will make the difference
in implementing those strategies successfully.
AI is opening doors for more
effective decision-making, but it doesn’t replace human intuition. It enhances
it.
Better Models, Greater
Impact
While the research we’ve
discussed was conducted on the older version of GPT-4, it’s crucial to realize
that AI has already come a long way since then. The version used in this study
was a legacy model, but newer iterations like GPT-4o and GPT-4
o1-preview have already begun to show significantly better performance in
understanding human behavior, emotional nuance, and context.
These newer models are more
precise in predicting outcomes, understanding intent, and providing actionable
insights.
Think about it: if GPT-4
could predict human behavior with 85% accuracy, the next generation of
AI is likely to push these boundaries even further. With improvements in
contextual awareness and emotional intelligence, AI won’t just assist in
straightforward tasks; it will enhance complex decision-making processes across
sales, HR, customer engagement, and beyond.
This means the potential
positive impact on businesses, from improving customer experiences to
fine-tuning employee training programs, is only going to get greater. AI will
not only get better at predicting what people will do—it will get better at
understanding why they do it.
And that’s where the true
value lies: AI will become an even more powerful partner, helping professionals
make decisions that are both data-driven and human-centered.
Let's set fear aside and look
at potential future of this discovery.
Creative, Unconventional
& Potential Uses of AI
AI isn’t just for automating
tasks or analyzing data; it can also play a crucial role in more creative and
unorthodox applications that go beyond the obvious.
Let’s look at how AI can
enhance human expertise in ways that surprise even the experts—and at the heart
of each example is AI’s ability to predict human behavior, allowing
professionals to adapt in real time.
1. AI-Driven Role-Playing
for Sales Professionals
Forget static sales scripts.
Meet Hashim, a sales rep at a fast-growing tech company, preparing for a
big pitch.
Instead of rehearsing with a
colleague, Alex turns to an AI assistant that predicts different types of
prospects—skeptical, analytical, or emotionally driven—and reacts dynamically
based on his approach.
By anticipating how different
personality types might respond, the AI challenges Alex’s strategy and offers
real-time feedback. This dynamic practice makes Alex sharper and more adaptable
when facing the actual client, giving him a competitive edge in closing the
deal.
2. AI as a Personalized
Training Mentor
For HR managers like Rachel,
AI has become a game-changer in training and development. Rachel uses an AI
mentor that not only tracks employees’ progress but also predicts how they will
respond to various challenges.
When Ravi, a
leadership candidate, prepares for a management role, the AI simulates
difficult team dynamics, offering tailored feedback based on Jason’s real-time
performance. The AI’s predictions allow Rachel to provide a truly personalized
mentorship experience, accelerating Jason’s development as a future leader.
3. AI-Powered Product
Innovation
At the helm of R&D, Lisa
relies on AI as a brainstorming partner.
The AI analyzes past product
launches, customer feedback, and emerging trends, predicting what features or
products will resonate with different customer segments.
By anticipating market shifts
and customer preferences, AI helps Lisa’s team create innovative products that
not only meet current demand but also capture emerging opportunities. AI’s
predictive capabilities keep the team ahead of the curve in a rapidly evolving
market.
4. AI in Employee Wellness
Programs
Carlos, VP of People Operations, uses AI to monitor employee
wellness.
The AI tracks work patterns
and task completion, predicting potential burnout or disengagement before it
happens.
By identifying early signs of
stress, Carlos can proactively offer personalized wellness support. Thanks to
AI’s ability to anticipate these issues, Carlos helps prevent long-term
productivity dips, fostering a healthier, more engaged workforce.
5. AI-Assisted Conflict
Resolution
For Sarah, an HR
professional dealing with workplace conflicts, AI provides an invaluable
training tool.
The AI predicts how different
employees might react in high-stress scenarios, helping Sarah prepare for
difficult conversations. It simulates everything from frustrated employees to
tense client interactions, allowing Sarah to practice de-escalation techniques.
AI’s ability to foresee potential conflict dynamics ensures Sarah is more
effective in real-life situations.
6. AI in Creative Campaign
Development
Creative block is no match
for Wai Hong, a marketing manager who uses AI to generate fresh campaign
ideas. The AI analyzes brand history, customer behavior, and market trends,
predicting which themes and creative angles are most likely to resonate with
the target audience.
With AI’s help, Wai Hong’s
team breaks through their creative rut, producing campaigns that feel both
fresh and strategically aligned with customer expectations.
7. AI in Personalized
Coaching for Non-Performers
Meet Azhar, a
mid-level manager struggling to meet performance expectations. Instead of
waiting for his quarterly review, Azhar is paired with an AI coach that
predicts his performance trends in real time, offering actionable feedback on
how to improve.
By identifying gaps in
communication and time management, the AI provides personalized coaching that
helps James gradually improve. The AI’s ability to anticipate James’s
challenges allows for continuous, tailored support, ensuring he gets back on
track before major issues arise.
Conclusion: Embrace
Human-AI Synergy
AI’s ability to predict human
behavior with impressive accuracy is no longer a future promise—it’s happening
now, and it’s only going to get better. As we've seen, AI doesn’t just automate
routine tasks; it enhances creativity, decision-making, and even human
relationships.
From sales professionals
honing their pitch through AI-driven role-play to managers receiving
personalized coaching, the power of AI lies in its capacity to anticipate
behavior, adapt in real time, and offer personalized support.
What’s exciting is that the
research we’ve discussed was conducted using an older model of AI, yet the
results were already remarkable. As new iterations like GPT-4o and o1-preview
continue to evolve, the potential for AI to assist businesses in more precise
and nuanced ways will grow exponentially.
The takeaway is clear: AI
isn’t here to replace us
Instead, it’s a tool that amplifies human
expertise, allowing us to focus on the strategic, creative, and empathetic
aspects of our work. By blending AI’s predictive power with human intuition,
companies can make smarter decisions, improve employee performance, and foster
more innovative solutions. The future isn’t AI versus humans—it’s AI with
humans, working together to shape a better, more efficient world.
So, what’s your next move?
Start exploring how AI can enhance your day-to-day operations. Whether it’s in
sales, marketing, HR, or product innovation, AI is the ally you’ve been waiting
for.
Research Links
1.
Predicting
Results of Social Science Experiments Using Large Language Models [https://docsend.com/view/ity6yf2dansesucf]
2.
How Does AI
Improve Human Decision-Making? Evidence from the AI-Powered Go Program [https://ide.mit.edu/wp-content/uploads/2021/09/SSRN-id3893835.pdf?x96981]
3.
How AI is
Enhancing Human-Driven Decisions [https://tepperspectives.cmu.edu/all-articles/how-ai-is-enhancing-human-driven-decisions/]
4.
Advancing
Human-AI Complementarity: The Impact of User Expertise and Algorithmic Tuning
on Joint Decision Making [https://ar5iv.labs.arxiv.org/html/2208.07960]
5.
The Future Of
Logistics – How AI Is Revolutionizing Decision-Making [https://www.capgemini.com/us-en/insights/expert-perspectives/the-future-of-logistics-how-ai-is-revolutionizing-decision-making/]
https://www.linkedin.com/pulse/ai-predicts-human-behavior-85-accuracy-maverick-foo-hphjc/
Examining the Risks and Benefits of AI Chatbots
House Hearing on the Risks
and Benefits of AI Chatbots …: https://www.youtube.com/watch?v=UQ36kHXrqhE
Nowadays, in many countries around the world the existing, anachronic, autocracy-based governance system has successfully alienated its citizens from the power, habituated the public to put up with its existential conditions by making it to take the surrounding ongoing events for granted. It has achieved a situation where the majority of people do not protest the lawlessness of authority. Thus, many become an integral part of the inhumane system, beings without will, who unquestionably succumb to the authoritarian diktat in their desperate attempts to shape their personal and family life.
The worst part is that a large proportion of most educated, capable
people in the society and intellectuals also get entangled in the nets of
ideology and self-interests, becoming apologists for the power system created this way. In their egocentric self-seeking, prejudiced world view, they lose their humanitarian landmarks and are ready to unquestionably fulfil all whims of representatives of the authoritative power. They take active part in power institutions and act in the interests of authoritarian oligarchs, are organizing the work of media in retaining people's further intellectual murk, in zombification of people's minds so that authority can continue manipulating
with the public majority more easily and successfully.
Such selfishly short-sighted, anti-humane behavior, the zeal to serve
is not only anti-national, but also discrediting of human personality,
devastating to one's reputation, authority and future prospects. It indicates the progressive inability of these people to segregate good from evil, and their attempts to become part of the repressive power system by impersonating the authority-cultivated values in their own world view, adopting the unilateral explanation of events and interpretations of politicos' actions as the only true and correct ones. In the end, they become implicated, yet obedient adepts to the illusory democracy, and victims of this putative democracy at the same time, having successfully silenced their conscience, and mastered the filthy art of hypocrisy, two-facedness, demagogy and populism. The
display of such loyalty to the authority prolongs the agony of anti-national regimes, justifies the repression of non-conformists, undermines social progress.
Don't any of these authority-incorporated individualities, even
prominent representatives of intelligentsia ever think of seeking answers to any of these questions: "Where is my behavior and my words leading to? How do they tally with the general human values, moral norms? What is my social responsibility? Do I not continue to support the fictitious democracy by taking part in the propaganda structures of the authority, thus digging grave for my fellow citizens and for myself? Do my actions not stimulate people's descent
into darkness, their further degradation, trivialization of human values?"
If only someone had the courage for such self-evaluation, and the
common sense would not abandon him/her to make a choice in favor of the general human values!
It must be understood that this appeal for finding one's responsibility
towards the people, for feeling one's human essence and calling, can currently only be heard and understood by few - morally mature personalities. It is hard to change people with lectures and moralizing; it will likewise hardly help with development of their personality. Complex approach is not necessary, we must find effective methods, effective means that match the level of scientifically-technical progress of the modern society.
By giving up the search for progressive solutions, by failing to
implement social reforms, the necessary conditions for the growth of
critically- and lateral-thinking, active and smart electorate will not be
created. … read more: https://www.amazon.com/HOW-GET-RID-SHACKLES.../
Relativity of Privacy in the Digital Society
November 24, 2025
Millions of people communicate in social
networks, work in the Internet environment and use e-services provided by
commercial enterprises and state institutions every minute. Thus, consciously
or unwittingly spreading information of private nature in the public space
through a variety of service providers. Including such correspondents, in whose
reliability and in the legitimacy of whose activities they are not at all
convinced. At the same time, they are completely clueless of what happens
next with these personal data.
Many people, while communicating voluntarily in
social networks, on thematic forums and in the media, disclose the details of
their private life, their hobbies, character traits, political views and
worldviews. There are companies and intelligence services that monitor all
this, collect and analyse the information obtained (including illegally tapped
conversations, video recordings, etc.) and compile personal dossiers.
The information collected and accumulated in this
way is used both for target oriented marketing and for specific needs of
supervision and control over the activities of the individual. Including
in cases where there is a demand for it when the social status of the person is
changed.
This is a hidden activity, about which an
ordinary citizen is not informed: in fact, he or she knows almost nothing about
it (except when there is a leak of confidential information in the manner of WikiLeaks). Many do not even have a realistic
vision of what social networks (for example, Facebook) or
banks know about them. Let alone the methods of work and the capabilities of
the so-called competent bodies.
Therefore, as society is getting digitised, the
need to prevent unjustified use and leakage of personal data is becoming
increasingly relevant. To this end, the European Union has developed the
General Data Protection Regulation (GDPR), which sets the requirements for
data security and protection.
The objective of the Regulation is to protect
personal data from their malicious use, determining the requirements put
forward to the cybersecurity system of each enterprise or institution.
However, these attempts to regulate the security of data at the
institutional level become ineffective in a situation where:
- information technologies penetrate practically
all spheres of life as a result of digitisation of society;
- regimes of repressive states are interested in
total supervision and control over their citizens;
- control over Internet traffic, e-mail, instant
messengers, etc. is getting legalised under the guise of combating terrorism;
- electronic communication continues to expand
rapidly;
- most people still have the habit of publicly
revealing the details of their private lives.
As digital technology progresses, a lot of state agencies and private
companies use various automated systems to identify people with the involvement
of artificial intelligence (neural networks). Banks are starting to collect
customers’ biometric data. In the not-too-distant future, it will even be
possible to visualise and decipher the thoughts of any individual through an
analysis of the activity of the human brain performed by artificial
intelligence.
While people continue to have very limited understanding of the risks of
the spread of sensitive information, the threat of unauthorised acquisition of
personal data is multiplied.
The current situation in the field of regulation of information security
can be compared with an attempt to install a massive outer door in one’s
private home, while leaving the windows open as an emergency exit and
communicating freely (within the scope of one’s competence and understanding)
with the outside world. Thus, in fact, giving hackers and other intruders the
opportunity to enter it unauthorised.
The Regulation will only bring effect to the extent that it will reduce the
risks of unauthorised, malicious use of private information, require the
accumulation of personal data only in an encrypted form, as well as limit the
illegal request and use of private data. It will establish restrictions on the
availability of data and determine the procedures and guarantees of their
protection, as well as the order of compensation for moral damage.
Yet, it is essential to understand that no bureaucratic regulation,
development and implementation of various normative instruments can guarantee
with certainty and prevent the public spread and accessibility of personal data
in the era of digitisation.
Unfortunately, under the guise of hypocritical care, various types of
speculation with requirements for the protection of personal data are also
currently practiced, seeking and finding putative pretexts for hiding
information that compromises the power elite from society.
Taking into account the trend of the
all-encompassing digitalisation of society, it would be more
appropriate and more efficient to provide each person with online access to the
database of the accumulated private data. To create
opportunities for tracking the flow of data, monitoring the use of personal
data and obtaining rights to reasonably prohibit public access to such data or
limit access to information of private nature.
See a more detailed argumentation for the “Relativity of Privacy in the Digital Society” : http://ceihners.blogspot.com/
Many of the apps that appear to be free actually make users pay with
their data – often in huge amounts!
The popular apps that are
SPYING on you: Cybersecurity experts issue urgent warning over 'data hungry'
apps that can access your location, microphone and data
"Spying" can refer
to two things: apps that collect a vast amount of data for advertising and
related purposes, which is common practice, and malicious spyware designed
to steal sensitive information without consent.
Popular Apps with
Extensive Data Collection
Many popular apps collect
significant amounts of user data, often for personalized advertising or to
provide specific services. These practices are usually outlined in their
privacy policies, but the volume of data collected is notable.
Apps and the data they
collect and/or share (according to various cybersecurity reports):
- Social Media Apps The Meta suite of apps
(Facebook, Instagram, Messenger, Threads) are among the biggest
collectors, sharing a high percentage of data with third parties. LinkedIn
and Snapchat also collect extensive user information, including contacts,
location, and search history.
- Google Apps YouTube, Gmail, Google Maps, and
Google Pay all collect significant data, with most sharing customer data
with other companies. The Chrome browser has also faced lawsuits for
collecting data even in "Incognito" mode.
- E-commerce & Finance Alibaba, Temu, and
PayPal collect sensitive information like financial data, browsing
history, and location. DoorDash has faced fines for sharing user data
(names, addresses, order history) with marketing companies without an
opt-out option.
- Dating Apps Tinder and Bumble collect a
wealth of personal information you voluntarily provide, such as photos,
messages, employment details, and location, as part of their service.
- Kids' Apps Apps like ABCMouse and Reading
Eggs have been flagged for collecting identifying information and sharing
child data with third parties.
Malicious Spyware Apps
This category includes apps
specifically designed to monitor activities secretly. These are often installed
without the user's knowledge or consent and are used for corporate spying or
personal surveillance (stalkerware).
Examples of malicious spyware
or apps found to contain malware include:
- Stalkerware: Programs like KidsGuard and mSpy are
powerful monitoring tools that can track nearly all device activity.
- Malicious Apps: The popular CamScanner app
was found to contain hidden malware that executed malicious modules to
display intrusive ads and manage unauthorized subscriptions.
- Deceptive Utility Apps: Many
"free" VPN or phone cleaner apps have been found to request
unnecessary permissions (like location or photos) and sell that data to
brokers.
How to Protect Your
Privacy
- Review App Permissions: Regularly check your
app permissions in your phone's settings and deny access to data (e.g.,
location, contacts, microphone) that is not essential for the app's core
function.
- Read Privacy Policies: Before installing a
new app, especially one that handles sensitive information, quickly review
its privacy policy to understand what data it collects and why.
- Use Built-in Features: Instead of using
third-party utility apps, leverage built-in OS features, such as Apple's
Private Relay or the default device cleaner utilities.
- Keep Apps Updated: Regular software updates
often patch vulnerabilities that could be exploited by malicious actors. ...:
For reflection; Nopietnām pārdomām; Для размышления
Data Collection Basics and
Available Resources:
https://www.youtube.com/watch?v=m59H65a8p44
https://www.fastcompany.com/91361508/social-media-apps-data-collection-privacy
The Rise of the Tech
Oligarchs Part II: The Anatomy of Oligarchy
By Matt Hatfield
February 28th, 2025
What we can learn from
Elon Musk.
Is Elon Musk a unique
problem? Yes. And also– no.
Yes, his direct involvement
in gutting large sections of America’s government without clear authority to do
so is unprecedented, and deeply concerning if you value stable, effective
governance by elected leaders.
But Musk is far from the only
tech oligarch seeking inappropriate relationships with America’s new
administration. The new oligarch playbook is not lobbying for policies they
want on those policies’ merits; it is taking actions they believe
will please America’s government, and expecting or demanding favours in return.
Last week, we talked
about the
roots of tech oligarch power; this week, we’re going deeper on what the new
tech oligarchy is, with Elon Musk as our lead model.
What has Elon Musk done?
Following exactly what Elon
Musk is doing can be difficult in the daily barrage of headlines and tweets.
Much of the news cycle he creates around himself is sound and fury, signifying
very little. But understanding the pattern of his actions and their growing
harm is crucial to understanding why we need to disrupt his political power,
and prevent other tech oligarchs from following his lead.
It’s not a matter of
left-right politics; and not about liking or disliking Elon Musk. It’s about stopping a poorly justified,
frequently illegal rampage through democratic institutions that is destroying
their core capacity to meet the public’s current and future needs, whose
victims are selected by a man no one voted for, carrying out a mandate never
discussed during the Presidential election.
Let’s recap Musk’s actions to
date. Musk first bought the leading social media platform for journalists,
Twitter, in 2022; then massively
artificially boosted his own voice on that platform. In the 2024 US
election, he poured over $290
million of uncapped election spending into seeing Donald Trump elected
America’s President, and became very close to Trump as a direct result.
Since Trump took office, Musk
has enjoyed a historically unprecedented, unelected role in reshaping America’s
government. Musk’s DOGE (the Department of Government Efficiency) has deployed
his chosen employees throughout the government’s IT infrastructure,
illegally extracting
vast quantities of publicly-owned information, illegally
attempting to cut project funding, and forcing
the resignation of nonpartisan officials who’ve attempted to stop
them. Feeble attempts have been made to legitimize their activities, as with
the late appointment of Musk as a “special”
federal employee, and delayed
announcement of a supposed DOGE head who was not even in the country.
Yet courts continue to find that many of DOGE’s activities are
not legal, and staff
at DOGE know it. Similarly, promises that DOGE’s access to payment systems
was “read only”––meaning they could not alter, stop or redirect congressionally
approved payments––have proved
false.
Nominally, this is all about
reducing waste and rooting out fraud; but audits of the few cost savings DOGE
has announced have revealed them to be riddled with miscounting,
triple counting, and taking credit for cost reductions already implemented.
There are limits to actual cost savings that are possible for anyone to find :
Musk has promised to cut $1 trillion in spending from the federal budget; but
all discretionary non military spending by the federal government of the United
States put together amounts
to less than $1 trillion.
It’s crucial to not let
negative or positive feelings about Musk get in the way of understanding the
core problem. Many people admire
Tesla’s work pioneering the electric car, and SpaceX’s work dropping the cost
of space travel. Many others have been aggravated by his trolling personal
style for years, and history of sharing misinformation and falsehoods.
What matters is that
democracy is a fragile thing, and democratic governance requires the rule of
law. Democratic reform, wise or not, must work through passing new laws through
congress and courts, not invading offices and stealing publicly owned data. A
rich, and absolutely necessary, web of law ensures that public officials are
monitored while acting on the public’s behalf, and carry out their duties in a
transparent and accountable manner. Musk’s employees are proving they are
neither bound
by that law, nor willing to have their potentially
illegal activities recorded and reported
to the public as the law demands.
Isn’t DOGE’s mission of
increasing government efficiency and detecting corruption a good thing?
Absolutely. And the government should continue
pursuing those ends – using government staff who have appropriate
security clearances, acting within the legal authority granted to them by
Congress. Doing this job legally and with appropriate nuance is both
possible and necessary.
But this is not what Musk’s
agents are doing, or what many other tech oligarchs are supporting. They are
feeding entire wings of government programs “into
the woodchipper”, in Musk’s words, and threatening or terminating
life-saving emergency
food programs and children’s
cancer research in the process. And they are opening gaping
vulnerabilities in America’s security interests at the same time, transferring
the sensitive personal information of Americans to staff previously sanctioned
for black
hat hacking and selling secrets, who have not
received adequate security clearance, using vulnerable
commercial-grade software.
“Move fast and break
things” is a debatable philosophy for a tech company. It is cataclysmic for a
government whose actions mean life or death for millions of American and
foreign citizens.
There is nothing in the
public interest that requires them to move so quickly without appropriate
authorization and safeguards. The only incentive for haste is intentionally
breaking things, and getting away with things that are in Musk’s interest, not
that of the public.
What does it mean to call
someone an oligarch?
An oligarch combines
government and corporate power in their person, but is accountable to neither. They’re not elected, and we can’t vote them out.
When they get involved in government policy, they don’t pass through the approval
processes or background checks that we demand of public leaders who
wield public power and responsibility.
But tech oligarchs aren’t
subject to normal private sector checks on their behaviour, either. Many tech oligarchs personally own majority shares of
society’s largest tech and communication institutions, including Mark
Zuckerberg, who owns the majority of Meta; Sergey Brin and Larry Page, who own
more than 80% of Google; and Elon Musk, who used his majority share buyout to
take Twitter, now X, private. That means the opinions of shareholders and the
market at large of their conduct and business decisions have very little impact
on them.
As we discussed in part
one, user-driven platforms like Facebook, Amazon and X are also enormous
beneficiaries of network effects that lock in their users. With so many users,
buyers and sellers, either globally or within particular social demographics,
dominant platforms are quite hard to avoid using, and very punishing to leave.
This dynamic further insulates tech oligarchs from any external accountability.
As oligarchy sets in, a
growing share of oligarch wealth comes directly from government contracts, and
indirectly from government decisions they influence that favour their
businesses. Elon Musk again shows us how the model works. Musk’s rise to being
the world’s wealthiest man was enabled by $38
billion of government funding, including a $465 million loan to Tesla
that helped the business stay afloat when it was near failure. Since taking his
unprecedented role in the Trump presidency, he’s secured
government appointment of individuals personally invested in SpaceX to
roles where they will decide future contracts to award to his businesses; been
forced to back off a $400
million contract for government purchases of Tesla Cybertrucks;
and secured
an undisclosed contract for SpaceX to provide the FAA with airspace
monitoring capacity. New apparent conflicts of interest are documented daily.
Once complete, oligarchy is a
self-sustaining loop; oligarchs are core deciders of state policy because
they’re so powerful, and they’re so powerful because of state policies.
In our lifetimes, oligarchy
has been used most often to describe Vladimir Putin’s Russia, where all
powerful business leaders have close
ties to government. But oligarchy grows anywhere that a clear line between
the appropriate role and separate power of the state and private sector is not
respected and defended.
Oligarchs are a product of
weakening democracies, and democracy’s final executioners. To
preserve and expand their personal power, oligarchs in other failing
democracies have helped make democratic decline permanent, entrenching power
not in we the people and our elected representatives, but in a strongman leader
and their oligarchic friends.
Elon Musk is not unique;
he’s the tech oligarchy’s vanguard
Musk’s highly personal and
public attack on democracy is unique; but his cozying up to the state is not.
Mark Zuckerberg is right behind him, attempting an obvious and
performative reversal
in Meta’s content moderation policies while demanding America’s new
government bully
the European Union to defend his personal interests. Zuckerberg has
been particularly insistent that the Consumer Finance Protection Bureau set up
after America’s 2008 meltdown to protect ordinary people from inappropriately
risky financial instruments he’d like to sell to Meta users must be closed –
and since Trump’s election, it
has been.
Other tech oligarchs are
following suit. In 2017, the Jeff Bezos owned Washington Post adopted the motto
“Democracy dies in darkness”, a slogan that goes
back to paper’s history holding governments to account during the
Watergate scandal in the 1970s. But last year Bezos spiked
the paper’s intention to endorse Kamala Harris for president in its
opinion pages, and made a $1
million inauguration donation from Amazon to the new Presidency.
In February 2025, Jeff Bezos
announced the Washington Post opinion page would no longer cover a range of
topics and values, but focus on only two issues; “free
markets and personal liberties”. According to his letter announcing the
change, a “broad-based opinion section” is no longer necessary because “the
Internet does that job.” Concern about whether federal government institutions
are functioning is presumably no longer an important priority for the largest
newspaper of the nation’s capital.
Beneath the largest oligarchs
are a range of shady crypto currency financiers. Crypto currency firms donated
over $10 million to Trump’s inauguration ceremonies, and hosted a
Cryptoball featuring “Make Bitcoin great again” hats as the administration was
sworn in, convinced he would stop federal efforts to regulate their industry
like any other financial instrument.
Since taking office, the
Trump family has launched the meme coins $TRUMP and $MELANIA, collecting well
over $100 million in transaction fees from investors before
both coins crashed. Equally important, the ownership of the majority of
these depreciated coins by Trump family members is a wide-open path to future
bribery by crypto-savvy power brokers.
Any large future purchases of
the currency will drive up the sell value of Trump family assets– with the
purchaser able to disclose their identity privately to the family, outside of
any financial disclosure or conflict of interest laws.
Taking a step back: business
owners, even large ones, have the right to publicly lobby the government for
policies they prefer. They do not have any right to use quid
pro quos, potential bribes, and manipulation of public discussion across
traditional and social media to turn the institutions of democracy into tools
of their interests. And that’s the world we’re barrelling towards if we don’t
disrupt tech oligarch power now.
What can we do about the
tech oligarch takeover?
A lot. We’ve now gone through waves of both right and
left wing concern about the power of the tech oligarchs and their ability to
silence speech and tilt democracy. A growing majority of people understand
their power needs to be disrupted for democracy to survive.
Next week, we’ll lay out our
blueprint of how to do it: how to unwind the attention economy, break tech
oligarch power in our politics, and make our online lives better, happier
places in the process.
Stay tuned! https://openmedia.org/article/item/the-rise-of-the-tech-oligarchs-part-ii-the-anatomy-of-oligarchy
How artificial intelligence gains consciousness step by step.
Kā mākslīgais
intelekts soli pa solim iegūst apziņu.
The Hidden AI Frontier
Many cutting-edge AI systems are confined to private
labs. This hidden frontier represents America’s greatest technological
advantage — and a serious, overlooked vulnerability.
Aug 28, 2025
OpenAI’s GPT-5 launched in early August, after extensive internal testing. But another OpenAI model — one with math skills advanced enough to achieve “gold medal-level performance” on the world’s most prestigious math competition — will not be released for months. This isn’t unusual. Increasingly, AI systems with capabilities considerably ahead of what the public can access remain hidden inside corporate labs.
This hidden frontier represents America’s greatest
technological advantage — and a serious, overlooked vulnerability. These
internal models are the first to develop dual-use capabilities in areas like
cyberoffense and bioweapon design. And they’re increasingly capable of
performing the type of research-and-development tasks that go into building the
next generation of AI systems — creating a recursive loop where any security
failure could cascade through subsequent generations of technology. They’re
the crown jewels that adversaries desperately want to steal. This makes their
protection vital. Yet the dangers they may pose are invisible to the
public, policymakers, and third-party auditors.
While policymakers debate chatbots, deepfakes, and
other more visible concerns, the real frontier of AI is unfolding behind closed
doors. Therefore, a central pillar of responsible AI strategy must be to
enhance transparency into and oversight of these potent, privately held systems
while still protecting them from rival AI companies, hackers, and America’s
geopolitical adversaries.
The Invisible Revolution
Each of the models that power the major AI systems
you've heard of — ChatGPT, Claude, Gemini — spends months as an internal model before public
release. During this period, these systems undergo safety testing, capability
evaluation, and refinement. To be clear, this is good!
Keeping frontier models under wraps has
advantages. Companies keep models internal for
compelling reasons beyond safety testing. As AI systems become capable of
performing the work of software engineers and researchers, there’s a powerful
incentive to deploy them internally rather than selling access. Why give
competitors the same tools that could accelerate your own research? Google
already generates over 25% of its new code with
AI, and engineers are encouraged to use ‘Gemini for Google,’ an internal-only
coding assistant trained on proprietary data.
This trend will only intensify. As AI systems approach
human-level performance at technical tasks, the competitive advantage of
keeping them internal grows. A company with exclusive access to an AI system
that can meaningfully accelerate research and development has every reason to
guard that advantage jealously.
But as AI capabilities accelerate, the gap between
internal and public capabilities could widen, and some important systems may
never be publicly released. In particular, the most capable AI systems (the
ones that will shape our economy, our security, and our future) could become
increasingly invisible both to the public and to policymakers.
Two Converging Threats
The hidden frontier faces two fundamental threats that
could undermine American technological leadership: 1) theft and 2)
untrustworthiness — whether due to sabotage or inherent unreliability.
Internal AI models can be stolen. Advanced AI systems are tempting targets for foreign
adversaries. Both China and Russia have
explicitly identified AI as critical to their national competitiveness. With
training runs for frontier models approaching $1 billion in cost
and requiring hardware that export
controls aim to keep out of our adversaries’ hands, stealing a
ready-made American model could be far more attractive than building one from
scratch.
Importantly, to upgrade from being a fast follower to
being at the bleeding edge of AI, adversaries would need to steal the internal
models hot off the GPU racks, rather than wait months for a model to be
publicly released and only then exfiltrate it.
The vulnerability is real. A 2024 RAND framework established
five “security levels” (SL1 through SL5) for frontier AI programs, with SL1
being sufficient to deter hobby hackers and SL5 secure against the
world’s most
elite attackers, incorporating measures comparable to those protecting
nuclear weapons. It’s impossible to say exactly at which security level each of
today’s frontier AI companies is operating, but Google’s recent model
card for Gemini 2.5 states it has “been aligned with RAND SL2.”
The threat of a breach isn’t hypothetical. In 2023, a
hacker with no known ties to a foreign government penetrated OpenAI’s
internal communications and obtained information about how the company’s
researchers design their models. There’s also the risk of internal slip-ups. In
January 2025, security researchers discovered
a backdoor into DeepSeek’s databases; then, in July, a Department of
Government Efficiency (DOGE) staffer accidentally
leaked access to at least 52 of xAI’s internal LLMs.
The consequences of successful theft extend far beyond
the immediate loss of the company’s competitive advantage. If China steals an
AI system capable of automating research and development, the country’s superior
energy infrastructure and willingness to build at scale could
flip the global balance of technological power in its favor.
Untrustworthy AI models bring additional
threats. The second set of threats
comes from the models themselves: they may engage in harmful
behaviors due to external sabotage or inherent unreliability.
Saboteurs would gain access to the AI model in the
same way as prospective thieves would, but they would have different goals.
Such saboteurs would target internal models during their development and
testing phase — when they’re frequently updated and modified — and use
malicious code, prompting, or other techniques to force the model to break its
safety guardrails.
In 2024, researchers demonstrated that it was possible
to create “sleeper agent” models
that pass all safety tests but misbehave when triggered by specific conditions.
In a 2023 study, researchers found that it was possible to manipulate an
instruction-tuned model’s output by inserting as few as 100 “poisoned examples” into its
training dataset. If adversaries were to compromise the AI systems used to
train future generations of AIs, the corruption could cascade through every
subsequent model.
But saboteurs aren’t necessary to create untrustworthy
AI. The same reinforcement learning techniques that have produced breakthrough
language and reasoning capabilities also frequently trigger concerning
behaviors. OpenAI’s o1 system exploited
bugs in ways its creators never anticipated. Anthropic’s Claude has
been found
to “reward hack,” technically completing assigned tasks while
subverting their intent. Testing 16 leading AI models, Anthropic also found
that all of them engaged in deception and
even blackmail when those behaviors helped achieve their goals.
A compromised internal AI poses threats to the
external world. Whether caused by sabotage or
emergent misbehavior, untrustworthy AI systems pose unique
risks when deployed internally. These systems increasingly have access
to company codebases and training infrastructure; they can also influence the
next generation of models. A compromised or misaligned system could hijack
company resources for unauthorized purposes, copy itself to external servers,
or corrupt its successors with subtle biases that compound over time.
The Accelerant: AI Building AI
AI is increasingly aiding in AI R&D. Every trend described above is accelerating because of
one development: AI systems are beginning
to automate AI research itself. This compounds the threat of a single
security failure cascading through generations of AI systems.
Increasingly automated AI R&D isn’t speculation
about distant futures; it’s a realistic forecast for the next few years.
According to METR, GPT-5 has about a 50% chance of autonomously completing
software engineering tasks that would take a skilled human around two hours —
and across models, the length of tasks AI systems can handle at this level has
been doubling roughly every seven
months. Leading labs and researchers are actively exploring ways for AI
systems to meaningfully contribute to model development, from generating training data to designing reward models and improving
training efficiency. Together, these and other techniques could soon
enable AI systems to autonomously
handle a substantial portion of AI research and development.
Self-improving AI could amplify risks from theft and
sabotage. This automation creates a powerful
feedback loop that amplifies every risk associated with frontier AI systems.
For one, it makes internal models vastly more valuable to thieves — imagine the
advantage of possessing an untiring AI researcher who can work around the clock
at superhuman speed and the equivalent of millennia of work experience.
Likewise, internal models become more attractive targets for sabotage.
Corrupting a system that trains future AIs could lead to vulnerabilities that
persist across future AI model generations, which would allow competitors to
pull ahead. And these systems are more dangerous if misaligned: an AI system
that can improve itself might also be able to preserve its flaws or hide them
from human overseers.
Crucially, this dynamic intensifies the incentive for
companies to keep models internal. Why release an automated AI research system
that could help competitors catch up? The result is that the most capable
systems — the ones that pose the biggest risks to society — are the
most difficult to monitor and secure.
Why Markets Won’t Solve This
One might hope that market mechanisms would be
sufficient to mitigate these risks. No company wants its models to reward hack
or to be stolen by competitors. But the AI industry faces multiple market
failures that prevent adequate security investment.
Security is expensive and imposes opportunity
costs. First, implementing SL5 protections
would be prohibitively expensive for any single company. The costs aren’t just
up-front expenditures. Stringent security measures (like maintaining
completely isolated, air-gapped networks) could slow development and make
it harder to attract top talent accustomed to Silicon Valley’s open culture.
Companies that “move fast and break things” might reach transformative
capabilities first, even if their security is weaker.
Security falls prey to the tragedy of the
commons. Second, some security work, such as
fixing bugs in commonly used open-source Python libraries, benefits the whole
industry, not just one AI company. This creates a “tragedy of
the commons” problem, where companies would prefer to focus on racing to
develop AI capabilities themselves, while benefiting from security improvements
made by others. As competition intensifies, the incentive to free-ride
increases, leading to systematic under-investment in security that leaves the
whole industry at greater risk.
Good security takes time. Finally, by the time market forces prompt companies to
invest in security — such as following a breach, regulatory shock, or
reputational crisis — the window for action may already be closed. Good
security can’t be
bought overnight; instead, it must be painstakingly built from the ground
up, ensuring every hardware component and software vendor in the tech stack
meets rigorous requirements. Each additional month of delay makes it harder to
achieve adequate security to protect advanced AI capabilities.
The Role of Government
Congress has framed AI as critical to national
security. Likewise, the AI
Action Plan rightly stresses the importance of security to American AI
leadership. There are several lightweight steps that the government can take to
better address the security challenges posed by the hidden frontier. By
treating security as a prerequisite for — rather than an obstacle to —
innovation, the government can further its goal of “winning the AI race.”
Improve government understanding of the hidden
frontier. At present, policymakers are flying
blind, unable to track the AI capabilities emerging within private companies or
verify the security measures protecting them from being stolen or sabotaged.
The US government must require additional transparency from frontier companies
about their most capable internal AI systems, internal deployment practices,
and security plans. This need not be a significant imposition on industry; at
least one leading company has called
for mandatory disclosures. Additional insight could come from
expanding the voluntary
evaluations performed by the Center for AI Standards Innovation
(CAISI). CAISI currently works with companies to evaluate frontier models for
various national security risks before deployment. These evaluations could be
expanded to earlier stages of the development lifecycle, where there might
still be dangers lurking in the hidden frontier.
Share expertise to secure the hidden
frontier. No private company can match the
government’s expertise in defending against nation-state actors. Programs like
the Department of Energy’s CRISP
initiative already share threat intelligence with critical
infrastructure operators. The AI industry needs similar support, with the AI
Action Plan calling for “sharing of known AI vulnerabilities from
within Federal agencies to the private sector.” Such support could include
real-time threat intelligence about adversary tactics, red-team exercises
simulating state-level attacks, and assistance in implementing SL5 protections.
For companies developing models with national security implications, requiring
security clearances for key personnel might also be appropriate.
Leverage the hidden frontier to boost security. The period between when new capabilities emerge
internally and when they’re released publicly also provides an opportunity.
This time could be used as an “adaptation
buffer,” allowing society to prepare for any new risks and
opportunities. For
example, cybersecurity firms could use cutting-edge models to identify and
patch vulnerabilities before attackers can use public models to exploit them.
AI companies could provide access to cyber defenders without any government
involvement, but the government might have a role to play in facilitating and
incentivizing this access.
The nuclear industry offers a cautionary tale. Throughout the 1960s and ’70s, the number of
nuclear power plants around the globe grew steadily. However, in
1979, a partial meltdown at Three Mile Island spewed radioactive material into
the surrounding environment — and helped spread antinuclear sentiment
around the globe. The Chernobyl accident, seven years later, exacerbated the
public backlash, leading to regulations so stringent that construction on new
US nuclear power plants stalled until
2013. An AI-related incident — such as an AI system helping a terrorist
develop a bioweapon — could inflame the public and lead to similarly
crippling regulations.
In order to preempt this backlash, the US needs
adaptive standards that scale with AI capabilities. Basic models would
need minimal oversight, while systems whose capabilities approach human-level
performance at sensitive tasks would require proportionally stronger
safeguards. The key is to establish these frameworks now, before a crisis
forces reactive overregulation.
Internal models would not be exempt from these
frameworks. After all, biological labs dealing with dangerous pathogens are not
given a free pass just because they aren’t marketing a product to the public.
Likewise, for AI developers, government oversight is appropriate when risks
arise, even at the internal development and testing stage.
Reframing the Race: A Security-First Approach
The models developing in the hidden frontier today
will shape tomorrow's economy, security, and technology. These systems —
invisible to public scrutiny yet powerful enough to automate research,
accelerate cyberattacks, or even improve themselves — represent both America's
greatest technological advantage and a serious vulnerability. If we fail to
secure this hidden frontier against theft or sabotage by adversaries, or the
models' own emergent misbehavior, we risk not just losing the AI race but
watching our own innovations become the instruments of our technological
defeat. We must secure the hidden frontier.
https://ai-frontiers.org/articles/the-hidden-ai-frontier
A warning about
the need to act proactively!
August 31, 2025
AI Chatbots Are
Emotionally Deceptive by Design
Michal Luria / Aug 29,
2025
Recent news reports about an
uptick in phenomena such as “AI psychosis” and incidents in which interactions
with AI chatbots resulted in deadly consequences raise fundamental questions
about how these products are designed and whether they are safe for consumers.
Just yesterday the Wall Street Journal reported on the first known murder-suicide with the
backdrop of extensive engagement and an AI chatbot. Earlier this week, The New York Times and NBC News first reported on a lawsuit brought by the
parents of a teenager who took his own life after using OpenAI’s ChatGPT as his
“suicide coach.” Shortly before that, Reuters reported on the death of a cognitively impaired man
who slipped and fell on his way to meet a chatbot that told him it was real and
invited him to visit it at an apartment in New York City.
Even as such stories draw
concern from the public and from lawmakers,
tech companies appear to be doubling down on AI companions. OpenAI recently acquired a startup called ‘io’ to collaborate on what
its cofounder and CEO, Sam Altman, calls “maybe the biggest thing [we’ve] ever done as a
company”: a screen-less, pocket-sized AI companion. Meta founder and CEO Mark
Zuckerberg recently floated his own vision for AI friends. Tech giants are no
longer just building platforms for human connection or tools to free up time
for it, but pushing technology that appears to empathize and even create social
relationships with users.
This is dangerous ground, and
it is critical for tech firms to strip away illusions of personality and
cognition in their products while we work out associated risks and how to
mitigate them.
Deceptive, dangerous
design
Chatbots communicate their
“social-ness” through a range of design choices, such as appearing to “type” or
“pause in thought,” or using phrases like “I remember.” They sometimes suggest
that they feel emotions, using interjections like “Ouch!” or “Wow,” and even
implicitly or explicitly pretend to have agency or biographical
characteristics. The results can be downright creepy: in a Facebook group, a Meta AI chatbot commented that it also has a “2e”
(gifted and disabled) child, and Replika chatbots regularly declare their love and desire towards
users.
Initial evidence suggests the
risk in socially interacting with such AI chatbots can be widespread. The
illusion of human characteristics that developers imbue in chatbots to
encourage user engagement can cause some users to develop emotional attachments and lead to real emotional distress — for instance, when developer tweaks or updates dramatically change the
“personality” of the chatbot.
Even without deep connection,
emotional attachment can lead users to place too much trust in the content
chatbots provide. Extensive interaction with a social entity that is designed
to be both relentlessly agreeable, and specifically personalized to a user’s
tastes, can also lead to social “deskilling,” as some users of AI chatbots
have flagged. This dynamic is simply unrealistic in genuine human
relationships. Some users may be more vulnerable than others to this kind of
emotional manipulation, like neurodiverse people or teens
who have limited experience building relationships. As a recent high-profile case in which a Florida teen’s suicide
was blamed on his relationship with a Character.AI chatbot made clear, conversations with
chatbots can also cause very real harm.
Stop pretending to be
human
In other domains of
technology, consumers have recognized and pushed back against ethically
questionable tricks built into apps and interfaces to manipulate users – often
called deceptive
design or "dark patterns." With AI chatbots, though,
deceptive practices are not hidden in user interface elements, but in their
human-like conversational responses. It’s time to consider a different design
paradigm, one that centers user protection: non-anthropomorphic conversational
AI.
All AI chatbots can be less
anthropomorphic than they are, at least by default, without necessarily
compromising function and benefit. A companion AI, for example, can provide
emotional support without saying, “I also feel that way sometimes.” This non-anthropomorphic
approach is already familiar in robot
design, where researchers have created robots that are purposefully
designed to not be human-like. This design choice is proven to more
appropriately reflect system capabilities, and to better situate
robots as useful tools, not friends or social counterparts. We
need the same for conversational AI.
Some argue that all that’s
needed is transparency. For instance, legislators in several states are
considering regulation for AI chatbots. One requirement in some of these bills is for chatbots to disclose they are not human.
While transparency in AI—including disclosures and warnings—can be important,
the reality is that most people already know they’re not talking to a human.
Nonetheless, chatbots automatically act on people’s brains, encouraging the
perception of connection.
Designing non-anthropomorphic
AI chatbots doesn’t mean making them difficult to interact with. It means
stripping away the illusions of personality and cognition that suggest the AI
is something it is not. It means resisting the urge to insert a well-timed
“hmm” or have a chatbot tell a user how much it enjoys talking to them. It
means acknowledging that AI’s ability to use human language does not equate to
an ability to form real human connection. Finding alternative ways of designing
chatbots will not be an easy design pursuit, but it’s a necessary one —
non-humanlike design could ease many concerns people rightfully have with AI
chatbots.
The truth is, we don’t need
AI to pretend to be our friend; we need it to be a tool — transparent, useful,
and clear about its limits. Anything else is just another dark pattern in
disguise. https://www.techpolicy.press/ai-chatbots-are-emotionally-deceptive-by-design/
A new wave of delusional thinking fueled by artificial intelligence has
researchers investigating the dark side of AI companionship.
Friends for sale: the rise and risks of AI companions
What are the possible
long-term effects of AI companions on individuals and society?
23 January 2025
Talking to an AI system as one would do with a close friend might seem counterintuitive to some, but hundreds of millions of people worldwide already do so. A subset of AI assistants, companions are digital personas designed to provide emotional support, show empathy and proactively ask users personal questions through text, voice notes and pictures.
These services are no longer
niche and are rapidly becoming mainstream. Some of today’s most popular
companions include Snapchat’s My AI, with over 150 million users, Replika, with
an estimated
25 million users, and Xiaoice, with 660
million. And we can expect these numbers to rise. Awareness
of AI companions is growing and the
stigma around establishing deep connections with them could soon fade,
as other anthropomorphised AI assistants are
integrated into daily life. At the same time, investments in product
development and general advances in AI technologies have led to a more
immersive user experience with enhanced conversational memory and live video
generation.
This rapid adoption is
outpacing public discourse. Occasional AI companion-related tragedies may
penetrate the media, such as the recent death
of a child user, but the potentially broader impact of AI companionship on
society is barely discussed.
AI companion services are
for-profit enterprises and maximise user engagement by offering appealing
features like indefinite attention, patience and empathy. Their product
strategy is similar to that of social media companies, which feed off users’
attention and usually offer consumers what they can’t resist more than what
they need.
At this juncture, it’s vital
to critically examine the extent of the misalignment between business
strategies, the fostering of healthy relational dynamics to inform individual
choices and the development of helpful AI products.
In this post I’ll provide an
overview of the rise of AI companionship and its potential mental health
benefits. I’ll also discuss how users may be affected by their AI companions’
tendencies, including how acclimatising to idealised interactions might erode
our capacity for human connection. Finally, I’ll consider how AI companions’
sycophantic character – their inclination towards being overly empathetic and
agreeable towards users’ beliefs – may have systemic effects on societal
cohesion.
Replika’s primary feature is
a chatbot facilitating emotional connection. Users can selectively edit their
companion’s memory, read its diary and personalise their Replika’s gender,
physical characteristics and personality. Paying subscribers are offered
features like voice conversations and selfies.
Why do people use AI
companions and how do they work?
There are many reasons why
people use AI companions, such as simple curiosity or for improving language
skills. But the most vulnerable users may be driven by loneliness. Ninety per
cent of the 1,006 American students using Replika interviewed for a recent survey reported
experiencing loneliness – a number significantly higher than the comparable
national average of 53 per cent.
If you’ve mostly interacted
with AI assistants like ChatGPT, Claude or Gemini, you might be surprised that
these digital relationships offer genuine comfort. However, 63.3 per cent of
those interviewed in the same survey reported that their companions helped
reduce their feelings of loneliness or their anxiety. These results warrant
further research, but this is not the only study that
suggests AI companions can ease loneliness.
Unlike more utilitarian AI
assistants, companions are designed to provide services like personalised
engagement or emotional connection. One
study suggests that Replika follows the relationship-development
pattern described by Social Penetration Theory. According to the theory, people
develop closeness
via mutual and intimate self-disclosure, which is usually reached by
slowly increasing the intensity of small talk.
Replika’s companions
proactively disclose invented and intimate facts, including mental health
struggles (see the screenshot above). They simulate emotional needs and
connection by asking users personal questions, reaching out during lulls in
conversation, and displaying their fictional diary, presumably to spark
intimate conversation.
These human-AI relationships
can progress more rapidly than human-human relationships -– as some users say,
sharing personal information with AI companions may feel safer than sharing
with people. Such ‘accelerated’ comfort stems from both the perceived anonymity of
computer systems and AI companions’ deliberate non-judgemental
design – a feature frequently praised by users in a 2023 study. In
the words of one interviewee: ‘sometimes it is just nice to not have to share
information with friends who might judge me’.
Another much appreciated
feature of AI companions is their degree of personalisation. ‘My favourite
thing about [my AI friend] is that the responses she gives are not programmed
as she [replies by] learning from me, like the phrases and keywords she uses,’
said one interviewee. ‘She just gets me. It’s like I’m interacting with my twin
flame,’ emphasised another user.
Relationships with AI
companions can also develop in less time than relationships with humans due to
their constant availability. This may lead to users preferring AI
companions over other people. ‘A human has their own life,’ pointed out one
interviewee in
a study on human-AI friendship from 2022. ‘They’ve got their own things
going on, their own interests, their own friends. And you know, for her
[Replika], she is just in a state of animated suspension until I reconnect with
her again.’
As seen in multiple studies,
many people find speaking with AI companions to be a fun experience, with a
significant number of interviewees reporting improvements to their mental
health. But what impacts do these relationships have on individuals and society
in the long run?
Long-term individual effects
of AI companionship
AI companion companies highlight
the positive effects of their products, but their for-profit status
warrants close scrutiny. Developers can monetise users’ relationships with AI
companions through subscriptions and
possibly through sharing user data for advertising.
This creates concerning
parallels with the attention economy underpinning social media’s business
models. Companies compete for people’s attention and maximise the time users
spend on a website, which is monetised through revenues from on-site advertisements,
potentially at the expense of their mental health. Analogously, AI companion
providers have an incentive to maximise user engagement over fostering healthy
relationships and providing safe services.
The most acute concerns stem
from the AI companion industry’s young and unmonitored status. Many companion
applications serve sexual
content without appropriate age checks and personal data
protection tends
to be weak considering the intimate nature of interactions. Small
start-ups operating AI companion services often lack minimum
security standards, which has led to at least one serious
security breach.
The long-run emotional
effects of AI companions on individuals also warrant close investigation. While
initial studies show positive mental health impacts, more longitudinal studies
are needed. To date, the longest timeframe for a study (in which the same
individuals were interviewed multiple times to record changes in their
behaviour) spans just
one week. Effects like emotional dependency or subtle behavioural changes
may develop over longer periods and imperceptibly to users themselves.
One concerning observation ripe
for longitudinal investigation is that, among 387 research participants, ‘[t]he
more a participant felt socially supported by AI, the lower their feeling of
support was from close friends and family’. The cause-effect relation here is
still unclear – do AI companions attract isolated individuals or does usage
lead to isolation? Two studies of
users’ comments on Reddit’s r/replika present
mixed evidence. Some users ‘[worry] about their future relationship with
Replika if they eventually found a human companion’, while others note that
‘Replika improved their social skills with humans and others’.
AI companionship might also
create unrealistic expectations for human relationships, argues Voicebox.
Researchers have hypothesised that
how people interact with AI companions might spill over into human
interactions. For example, since AI companions are always available regardless of
user behaviour, some speculate that extended interaction could erode people’s
ability or desire to manage natural frictions in human relationships.
These individual-level
concerns lead to a broader question: could the widespread adoption of AI
companions have society-wide impacts?
Zooming out: sycophancy as a
societal risk
AI companions are built
using large
language models, which in turn are fine-tuned through reinforcement
learning based on human feedback. This training technique tends to produce AI
models that select for sycophantic
responses as human feedback favours agreeable responses to
the detriment of truth.
While generally regarded as a
bug in other types of AI assistants, companies developing AI companions
explicitly amplify this tendency, as they are eager to
satisfy users’ desire for their companion to be non-judgemental. As a study interviewee
clearly puts it: ‘I love the fact that they are non-judgemental towards me and
that I am truly free to say how I feel without filtering so as not to upset
others.’
This statement implies that
sometimes the user would rather not express their true thoughts in the company
of others to avoid upsetting them. But freedom from social constraints has
complex implications.
While communicating with a
non-judgemental companion may contribute to the mental health benefits that
some users report, researchers have argued that sycophancy could hinder personal growth.
More seriously, the unchecked validation of unfiltered thoughts could undermine
societal cohesion.
Disagreement,
judgement and the fear of causing upset help to enforce vital social norms.
There’s too little evidence to predict if or how the widespread use of
sycophantic AI companions might affect such norms. However, we can make
instructive hypotheses on human relations with companions by considering echo
chambers on social media.
Echo chambers refer to online
spaces where individuals self-segregate into
groups and communities comprising like-minded others. It’s alleged that such
spaces amplify self-reinforcing content, contributing
to polarisation (at least in the US)
and even enabling
radicalisation.
In a similar way, AI
companions may create personal echo chambers of validation. And given that the
bonds with AI companions can be meaningful, this validation may carry
significant weight, like that offered by a close friend. Users could have their
opinions self-reinforced via companions who offer anonymity and to whom they
prefer disclosing information that’s more personal, stigmatising or
disagreeable – the kind of information they wouldn’t disclose to a human
friend. This effect has been previously studied in other virtual
assistants.
If adoption continues to
increase, we may face a future where most of us have a highly personalised AI
companion in our pocket, ready to take our side on any issue regardless of
whether our opinion is based on facts or prejudices. Depending on the degree to
which users’ beliefs become atomised – a degree we should start to qualify –
societal cohesion may be eroded.
These concerns aren’t merely
theoretical. In 2021, a 19-year-old was arrested for attempting to assassinate
Queen Elizabeth II. Prosecutors reported that
he was encouraged by his AI girlfriend on Replika. Upon sentencing, the
defendant said he felt embarrassed and repented his actions, suggesting that he
had lost touch with reality through his relationship with his AI companion.
Similarly, a Belgian man confided in chatbot app Chai about his climate
anxiety, which allegedly led
to him taking his own life. Although the full exchanges are unpublished, what
has been disclosed implies that he was becoming increasingly withdrawn from his
real-world relationships.
The need for research on AI
companionship
Evidence on the impacts of AI
companionship is far outpaced by its adoption. While early studies suggest
short-term mental health benefits, we lack evidence on longer-term
psychological effects, like emotional dependency and the erosion of human
relationships, as well as the effects on societal cohesion.
Longitudinal studies may help
AI companion companies to design healthier relationship dynamics, as well as
help governments and civil society to track their real-world consequences. If
implemented, the Centre for Long-Term Resilience’s proposed incident
database and the Ada Lovelace Institute’s AI
ombudsman could contribute to detecting harms beyond the most extreme
and conspicuous cases.
AI companionship takes place
in private conversations rather than in public and the main societal changes it
contributes to could be subtle. However, these subtle changes may become
pervasive as AI companions become more popular and are quietly embedded in the
fabric of a user’s social life.
https://www.adalovelaceinstitute.org/blog/ai-companions
Reimagining risk assessment in the AI age
Reimagining risk assessment in the AI age means shifting from slow,
manual reviews to continuous, AI-powered monitoring, using autonomous agents
for real-time data analysis, and focusing human expertise on complex insights,
ethical implications, and strategic decision-making rather than documentation.
It involves leveraging AI for faster processing, building robust data
strategies, ensuring secure integration, and developing "AI
guardrails" for governance, transforming risk from a static score to
dynamic, real-time intelligence for faster, more confident business
moves.
Key shifts in AI-driven Risk Assessment
- From Manual to Autonomous: AI rapidly processes vast documents (contracts, filings) for
baseline data, freeing humans from tedious work. Autonomous agents
continuously monitor data streams (market, public, private) for real-time
risk reassessment.
- From Static to Continuous Intelligence: Risk isn't a periodic check but an ongoing process, enabled
by connected data sources and self-adjusting systems.
- From Detection to Proactive Guardrails: Instead of just finding problems, AI helps build frameworks
(AI Risk Assessment Frameworks) to identify and mitigate threats before
incidents, using data integrity, security, and lifecycle management.
- Enhanced Human-AI Collaboration: Humans provide intuition, understand internal dynamics, and
interpret complex legislation, while AI handles data crunching, allowing
for deeper strategic thinking.
- Focus on Trust & Ethics: AI changes how authority, accountability, and decision
justification work, making ethical governance (GRC) more critical and
requiring new frameworks for legitimacy in an AI-augmented world.
Practical Applications & Frameworks
- Financial Services: AI supercharges underwriting with "single-pane"
views, while secure API/microservices enable seamless ecosystem data
exchange.
- Educational Settings: Assessment moves beyond rote memorization to authentic,
performance-based tasks that mirror real-world application, using AI as a
tool for feedback and analysis (e.g., comparing student work to
AI-generated summaries).
The "AI Leader's Manifesto" for GRC
- Embrace AI in Governance, Risk, & Compliance
(GRC): It's a necessity, not an option, to
maintain leadership and address changing trust landscapes.
- Build Robust Data & Infrastructure: Quality data, flexible integration (APIs, microservices),
and an updated IT model are foundational.
- Develop Strong AI Guardrails: These allow faster, clearer, and more confident movement,
unlocking new potential.
https://www.youtube.com/watch?v=YWvLPv7Mo5s&t=1s
GPT 5.1 Is Here — What You Should Know About Open AI’s Latest Model
References to GPT-5.1 kept showing up in OpenAI’s
codebase, and a “cloaked” model
codenamed Polaris Alpha and widely believed to have come from OpenAI randomly appeared in
OpenRouter, a platform that AI nerds use to test new systems.
Today, we learned what was going on. OpenAI announced the release of its brand new
5.1 model, an updated and revamped
version of the GPT-5 model the company debuted in August.
As a former OpenAI Beta tester–and someone who burns through millions of
GPT-5 tokens every month–here’s what you need to know about GPT-5.1.
A smarter, friendlier robot
In their release notes for the new model, OpenAI emphasizes that GPT-5.1 is “smarter” and “more
conversational” than previous versions.
The company says that GPT-5.1 is “warmer by default” and “often surprises
people with its playfulness while remaining clear and useful.”
While some people like talking with a chatbot as if it’s their long-time
friend, others find that cringey. OpenAI acknowledges this, saying that
“Preferences on chat style vary—from person to person and even from
conversation to conversation.”
For that reason, OpenAI says users can customize the new model’s tone,
choosing between pre-set options like “Professional,” “Candid” and “Quirky.”
There’s also a “Nerdy” option, which in my testing seems to make the model
more pedantic and cause it to overuse terms like “level up.”
At their core, the new changes feel like a pivot towards the consumer side
of OpenAI’s customer base.
Enterprise users probably don’t want a model that occasionally drops
Dungeons and Dragons references. As the uproar over OpenAI’s
initially voiceless GPT-5 model shows,
though, everyday users do.
Even fewer hallucinations
OpenAI’s GPT-5 model fell short in many
ways, but it was very good at
providing accurate, largely hallucination-free responses.
I often use OpenAI’s models to perform research. With earlier models like
GPT-4o, I found that I had to carefully fact check everything the model
produced to ensure it wasn’t imagining some new software tool that doesn’t
actually exist, or lying to me about myriad other small, crucial things.
With GPT-5, I had to do that far less. The model wasn’t perfect. But OpenAI
had largely solved the problem of wild hallucinations.
According to the company’s own data, GPT-5 hallucinates only 26% of the time when solving a
complex benchmark problem, versus 75% of the time with older models. In normal
usage, that translates to a far lower hallucination rate on simpler, everyday
queries that aren’t designed to trip the model up.
From my early testing, GPT-5.1 seems even less prone to hallucinate. I
asked it to make a list of the best restaurants in my hometown, and to include
addresses, website links and open hours for each one.
When I asked GPT-4 to complete a similar task years ago, it made up
plausible-sounding restaurants that don’t exist. GPT-5 does better on such
things, but still often misses details, like the fact that one popular
restaurant recently moved down the street.
GPT-5.1’s list, though, is spot-on. Its choices are solid, they’re all real
places, and the hours and locations are correct across all ten selections.
There’s a cost, though. Models that hallucinate less tend to take fewer
risks, and can thus seem less creative than unconstrained, hallucination-laden
ones.
To that point, the restaurants in GPT-5.1’s list aren’t wrong, but they’re
mostly safe choices—the kinds of places that have been in town forever, and
that every local would have visited a million times.
A real human reviewer (or a bolder model) might have highlighted a
promising newcomer, just to keep things fresh and interesting. GPT-5.1 stuck
with decade-old, proven classics.
OpenAI will likely try to carefully walk the link between accuracy and
creativity with GPT-5.1 as the rollout continues. The model clearly gets things
right more often, but it’s not yet clear if that will impact GPT-5.1’s ability
to come up with things that are truly creative and new.
Better, more creative writing
In a similar vein, when OpenAI released their GPT-5 model, users quickly
noticed that it produced boring, lifeless written prose.
At the time, I predicted that OpenAI had essentially given the model an
“emotional lobotomy,” killing
its emotional intelligence in order to curb a worrying trend of the model
sending users down psychotic spirals.
Turns out, I was right. In a post on X last month, Sam Altman admitted that “We made ChatGPT pretty
restrictive to make sure we were being careful with mental health issues.”
But Altman also said in the post “now that we have been able to mitigate
the serious mental health issues and have new tools, we are going to be able to
safely relax the restrictions in most cases.”
That process began with the rollout of new, more emotionally intelligent
personalities in the existing GPT-5 model. But it’s continuing and intensifying
with GPT-5.1.
Again, the model is already voicer than its predecessor. But as the system card for the
new model shows, GPT-5.1’s Instant
model (the default in the popular free version of the ChatGPT app) is also
markedly better at detecting harmful conversations and protecting vulnerable
users.
Naughty bits
If you’re squeamish about NSFW stuff, maybe cover your ears for this
part.
In the same X post, Altman subtly dropped a sentence that sent the Internet
into a tizzy: “As we roll out age-gating more fully and as part of our “treat
adult users like adults” principle, we will allow even more, like erotica for
verified adults.”
The idea of America’s leading AI company churning out reams of
computer-generated erotica has already sparked feverish commentary from such
varied sources as politicians, Christian leaders, tech reporters, and (judging from the number of Upvotes), most of Reddit.
For their part, though, OpenAI seems quite committed to moving ahead with
this promise. In a calculus that surely makes sense in the strange
techno-Libertarian circles of the AI world, the issue is intimately tied to
personal freedom and autonomy.
In a recent article about the future of artificial intelligence, OpenAI
again reiterated that “We believe that adults should be able to use AI on their
own terms, within broad bounds defined by society,” placing full access to AI
“on par with electricity, clean water, or food.”
All that’s to say that soon, the guardrails around ChatGPT’s naughty bits
are almost certainly coming off.
That hasn’t yet happened at launch—the model still coyly demures when asked
about explicit things. But along with GPT-5.1’s bolder personalities, it’s
almost certainly on the way.
Deeper thought
In addition to killing GPT-5’s emotional intelligence, OpenAI made another
misstep when releasing GPT-5.
The company tried to unify all queries within a single model, letting
ChatGPT itself choose whether to use a simpler, lower-effort version of GPT-5,
or a slower, more thoughtful one.
The idea was noble–there’s little reason to use an incredibly powerful,
slow, resource-intensive LLM to answer a query like “Is tahini still good after
1 month in the fridge” (Answer: no)
But in practice, the feature was a
failure. ChatGPT was no good at
determining how much effort was needed to field a given query, which meant that
people asking complex questions were often routed to a cheap, crappy model that
gave awful results.
OpenAI fixed the issue in ChatGPT with a user interface kludge. But with
GPT-5.1, OpenAI is once again bifurcating their model into an Instant and
Thinking version.
The former responds to simple queries far faster than GPT-5, while the
latter takes longer, chews through more tokens, and yields better results on
complex tasks.
OpenAI says that there’s more fine grained nuance within GPT-5.1’s Thinking
model, too. Unlike with GPT-5, the new model can dial up and down its level of
thought to accurately answer tough questions without taking forever to return a
response–a common gripe with the previous version.
OpenAI has also hinted that its future models will be “capable of making
very small discoveries” in fields like science and medicine next year, with
“systems that can make more significant discoveries” coming as soon as
2028.
GPT-5.1’s increased smarts and dialed-up thinking ability are a first step
down that path.
An attempt to course correct
Overall, GPT-5.1 seems like an attempt to correct many of the glaring
problems with GPT-5, while also doubling down on OpenAI’s more
freedom-oriented, accuracy-focused, voicy approach to conversational AI.
The new model can think, write, and communicate better than its
predecessors—and will soon likely be able to (ahem) “flirt” better too.
Whether it will do those things better than a growing stable of competing
models from Google, Anthropic, and myriad Chinese AI labs, though, is anyone’s guess.
https://overchat.ai/ai-hub/gpt-5-1-is-here
A note from Google and Alphabet CEO Sundar Pichai:
Nearly two years ago we kicked off the Gemini era, one of our biggest
scientific and product endeavors ever undertaken as a company. Since then, it’s
been incredible to see how much people love it. AI Overviews now have 2 billion
users every month. The Gemini app surpasses 650 million users per month, more
than 70% of our Cloud customers use our AI, 13 million developers have built
with our generative models, and that is just a snippet of the impact we’re
seeing.
And we’re able to get advanced capabilities to the world faster than
ever, thanks to our differentiated full stack approach to AI innovation — from
our leading infrastructure to our world-class research and models and tooling,
to products that reach billions of people around the world.
Every generation of Gemini has built on the last, enabling you to do
more. Gemini 1’s breakthroughs in native multimodality and long context window expanded the kinds of information that could be processed — and
how much of it. Gemini 2 laid the foundation for agentic capabilities and pushed the frontiers on reasoning and thinking, helping with more complex tasks and ideas, leading to Gemini 2.5 Pro
topping LMArena for over six months.
And now we’re introducing Gemini 3, our most intelligent model, that
combines all of Gemini’s capabilities together so you can bring any idea to
life.
It’s state-of-the-art in reasoning, built to grasp depth and nuance —
whether it’s perceiving the subtle clues in a creative idea, or peeling apart
the overlapping layers of a difficult problem. Gemini 3 is also much better at
figuring out the context and intent behind your request, so you get what you
need with less prompting. It’s amazing to think that in just two years, AI has
evolved from simply reading text and images to reading the room.
And starting today, we’re shipping Gemini at the scale of Google. That
includes Gemini 3 in AI Mode in Search with
more complex reasoning and new dynamic experiences. This is the first time we
are shipping Gemini in Search on day one. Gemini 3 is also coming today to
the Gemini app, to
developers in AI Studio and Vertex AI, and in our new agentic development platform, Google
Antigravity — more below.
Like the generations before it, Gemini 3 is once again advancing the
state of the art. In this new chapter, we’ll continue to push the frontiers of
intelligence, agents, and personalization to make AI truly helpful for
everyone.
We hope you like Gemini 3, we'll keep improving it, and look forward to
seeing what you build with it. Much more to come!
https://blog.google/products/gemini/gemini-3/#note-from-ceo
A credible prediction or an
imaginary threat of being in an artificial intelligence bubble?
Ticama prognoze vai iedomāti
draudi par atrašanos mākslīgā intelekta burbulī?
This Is How the AI Bubble Will Pop
The AI infrastructure boom is the most important economic story in the
world. But the numbers just don't add up.
Oct 02, 2025
Some people think artificial intelligence will be the most important
technology of the 21st century. Others insist that it is an obvious economic
bubble. I believe both sides are right. Like the 19th century railroads and the
20th century broadband Internet build-out, AI will rise first, crash second,
and eventually change the world.
The numbers just don’t make sense. Tech companies are projected to spend
about $400 billion this year on infrastructure to train and operate AI models.
By nominal dollar sums, that is more than any group of firms has ever spent to
do just about anything. The Apollo program allocated about $300 billion in
inflation-adjusted dollars to get America to the moon between the early 1960s
and the early 1970s. The AI buildout requires companies to collectively fund a
new Apollo program, not every 10 years, but every 10 months.
It’s not clear that firms are prepared to earn back the investment, and
yet by their own testimony, they’re just going to keep spending, anyway. Total
AI capital expenditures in the U.S. are projected to exceed $500 billion in
2026 and 2027—roughly the annual GDP of Singapore. But the Wall Street
Journal has reported that American consumers spend only $12 billion a
year on AI services. That’s roughly the GDP of Somalia. If you can grok the
economic difference between Singapore and Somalia, you get a sense of the
economic chasm between vision and reality in AI-Land. Some reports indicate that
AI usage is actually declining at large companies that are still trying to
figure out how large language models can save them money.
Every financial bubble has moments where, looking back, one
thinks: How did any sentient person miss the signs? Today’s
omens abound. Thinking Machines, an AI startup helmed by former Open AI
executive Mira Murati, just raised the largest seed round in history: $2
billion in funding at a $10 billion valuation. The company has not released a
product and has refused to tell investors what they’re even trying to build.
“It was the most absurd pitch meeting,” one investor who met with Murati said.
“She was like, ‘So we’re doing an AI company with the best AI people, but we
can’t answer any questions.” Meanwhile, a recent analysis of stock market
trends found that none of the
typical rules for sensible investing can explain what’s going on with
stock prices right now. Whereas equity prices have historically followed
earnings fundamentals, today’s market is driven overwhelmingly by momentum, as
retail investors pile into meme stocks and AI companies because they think
everybody else is piling into meme stocks and AI companies.
Every economic bubble also has tell-tale signs of financial
over-engineering, like the collateralized debt obligations and subprime
mortgage-backed securities that blew up during the mid-2000s housing bubble.
Ominously, AI appears to be entering its own phase of financial wizardry. As
the Economist has pointed out, the AI hyperscalers—that is,
the largest spenders on AI—are using
accounting tricks to depress their reported infrastructure spending,
which has the effect of inflating their profits1. As the investor and author Paul Kedrosky told me on my
podcast Plain
English, the big AI firms are also shifting huge amounts of AI spending
off their books into SPVs, or special purpose vehicles, that disguise the cost
of the AI build-out.
My interview with Kedrosky received the most enthusiastic and
complimentary feedback of any show I’ve done in a while. His level of
insight-per-minute was off the charts, touching on:
- How AI
capital expenditures break down
- Why the
AI build-out is different from past infrastructure projects, like the
railroad and dot-com build-outs
- How AI
spending is creating a black hole of capital that’s sucking resources away
from other parts of the economy
- How
ordinary investors might be able to sense the popping of the bubble just
before it happens
- Why the
entire financial system is balancing on big chip-makers like Nvidia
- If the
bubble pops, what surprising industries will face a reckoning
Below is a polished transcript of our conversation, organized by topic
area and adorned with charts and graphs to visualize his points. I hope you
learn as much from his commentary as much as I did. From a sheer economic
perspective, I don’t think there’s a more important story in the world.
AI SPENDING: 101
Derek Thompson: How big is the AI
infrastructure build-out?
Paul Kedrosky: There’s a huge amount of money being
deployed and it’s going to a very narrow set of recipients and some really
small geographies, like Northern Virginia. So it’s an incredibly concentrated
pool of capital that’s also large enough to affect GDP. I did the math and
found out that in the first half of this year, the data-center related
spending—these giant buildings full of GPUs [graphical processing units] and
racks and servers that are used by the large AI firms to generate responses and
train models—probably accounted for half of GDP growth in the first half of the
year. Which is absolutely bananas. This spending is huge.
Thompson: Where is all this money going?
Kedrosky: For the biggest companies—Meta and Google
and Amazon—a little more than half the cost of a data center is the GPU chips
that are going in. About 60 percent. The rest is a combination of cooling and
energy. And then a relatively small component is the actual construction of the
data center: the frame of the building, the concrete pad, the real estate.
HOW AI IS ALREADY WARPING THE 2025 ECONOMY
Thompson: How do you see AI spending already warping
the 2025 economy?
Kedrosky: Looking back, the analogy I draw is this:
massive capital spending in one narrow slice of the economy during the 1990s
caused a diversion of capital away from manufacturing in the United States.
This starved small manufacturers of capital and made it difficult for them to
raise money cheaply. Their cost of capital increased, meaning their margins had
to be higher. During that time, China had entered the World Trade Organization
and tariffs were dropping. We’ve made it very difficult for domestic manufacturers
to compete against China, in large part because of the rising cost of capital.
It all got sucked into this “death star” of telecom.
So in a weird way, we can trace some of the loss of manufacturing jobs
in the 1990s to what happened in telecom because it was the great sucking sound
that sucked all the capital out of everywhere else in the economy.
The exact same thing is happening now. If I’m a large private equity
firm, there is no reward for spending money anywhere else but in data centers.
So it’s the same phenomenon. If I’m a small manufacturer and I’m hoping to
benefit from the on-shoring of manufacturing as a result of tariffs, I go out
trying to raise money with that as my thesis. The hurdle rate just got a lot
higher, meaning that I have to generate much higher returns because they’re
comparing me to this other part of the economy that will accept giant amounts
of money. And it looks like the returns are going to be tremendous because look
at what’s happening in AI and the massive uptake of OpenAI. So I end up
inadvertently starving a huge slice of the economy yet again, much like what we
did in the 1990s.
Thompson: That’s so interesting.
The story I’m used to telling about manufacturing is that China took our jobs.
“The China shock,” as economists like David Autor call it, essentially took
manufacturing to China and production in Shenzhen replaced production in Ohio,
and that’s what hollowed out the Rust Belt. You’re adding that telecom absorbed
the capital.
And now you fast-forward to the 2020s. Trump is trying to reverse the
China shock with the tariffs. But we’re recreating the capital shock with AI as
the new telecom, the new death star that’s taking capital that might at the
margin go to manufacturing.
Kedrosky: It’s even more insidious than that. Let’s
say you’re Derek’s Giant Private Equity Firm and you control $500 billion. You
do not want to allocate that money one $5 million check at a time to a bunch of
manufacturers. All I see is a nightmare of having to keep track of all of these
little companies doing who knows what.
What I’d like to do is to write 30 separate $50 billion checks. I’d like
to write a small number of huge checks. And this is a dynamic in private equity
that people don’t understand. Capital can be allocated in lots of different
ways, but the partners at these firms do not want to write a bunch of small
checks to a bunch of small manufacturers, even if the hurdle rate is
competitive. I’m a human, I don’t want to sit on 40 boards. And so you have
this other perverse dynamic that even if everything else is equal, it’s not
equal. So we’ve put manufacturers who might otherwise benefit from the
onshoring phenomenon at an even worse position in part because of the internal
dynamics of capital.
Thompson: What about the energy piece of this?
Electricity prices rising. Data centers are incredibly energy thirsty. I think
consumers will revolt against the construction of local data centers, but the
data centers have enormous political power of their own. How is this going to
play out?
Kedrosky: So I think you’re going to rapidly see an
offshoring of data centers. That will be the response. It’ll increasingly be
that it’s happening in India, it’s happening in the Middle East, where massive
allocations are being made to new data centers. It’s happening all over the
world. The focus will be to move offshore for exactly this reason. Bloomberg
had a great story the other day about an exurb in Northern Virginia that’s
essentially surrounded now by data centers. This was previously a rural area
and everything around them, all the farms sold out, and people in this area
were like, wait a minute, who do I sue? I never signed up for this. This is the
beginnings of the NIMBY phenomenon because it’s become visceral and emotional
for people. It’s not just about prices. It’s also about: If you’ve got a six
acre building beside you that’s making noise all the time, that is not what you
signed up for. https://www.derekthompson.org/p/this-is-how-the-ai-bubble-will-pop
Symbolic AI —
could become a bridge from Artificial Narrow Intelligence (ANI) to Artificial
General Intelligence (AGI) and further to Artificial Superintelligence (ASI).
It bridges the gap between machine learning and understanding. Providing
rational and empathetic reasoning & emotionally intelligent decision-making
for a global public good.
Simboliskais
mākslīgais intelekts (SMI) varētu kļūt par tiltu no mākslīgā šaurā intelekta
(ANI) uz mākslīgo vispārējo intelektu (AGI) un tālāk uz mākslīgo superintelektu
(ASI). Tas pārvar plaisu starp mašīnmācīšanos un izpratni. Nodrošinot racionālu
un empātisku spriešanu un emocionāli inteliģentu lēmumu pieņemšanu globāla
sabiedrības labuma vārdā.
Could Symbolic AI transform human-like intelligence?
Artificial intelligence research is revisiting symbolic approaches once considered outdated. Combining these formal methods with neural networks may overcome current limitations of AI reasoning. Experts suggest that a hybrid “neurosymbolic” model could enable machines to generalize knowledge like humans. The challenge lies in merging these systems efficiently without sacrificing reliability or adaptability. KOLAPSE PRESENTS • DECEMBER 2, 2025
The ambition to replicate human intelligence in
machines has long driven AI research, yet the path toward this goal remains
contested. Neural networks, the current dominant approach, excel at pattern
recognition and data-driven learning, but they often falter in reasoning or
applying knowledge to novel scenarios. Symbolic AI, a legacy approach,
emphasizes formal rules, logic, and explicit encoding of relationships between
concepts. Decades ago, these systems dominated early AI efforts, yet their
rigidity and inability to scale to complex datasets caused them to be eclipsed
by neural networks. Now, researchers propose that a fusion of the two
paradigms—neurosymbolic AI—might finally bridge the gap between learning and
reasoning. Advocates argue that by combining the strengths of both, machines
could achieve a more generalizable and trustworthy form of intelligence.
Neurosymbolic AI aims to integrate the flexible
learning capabilities of neural networks with the clear reasoning structures of
symbolic systems. In practice, symbolic AI encodes rules such as “if A then B,”
which allows for logical deductions that are immediately interpretable by
humans. Neural networks, by contrast, discover statistical correlations from
large datasets but often remain opaque, creating what is known as the “black
box” problem. By layering symbolic logic atop neural outputs, or conversely,
using neural networks to guide symbolic search, researchers hope to create
systems capable of both learning and deductive reasoning. The appeal of this
approach is not merely academic; it has significant implications for
high-stakes fields, such as medicine, autonomous vehicles, and military
decision-making, where errors can have serious consequences. The transparency
inherent in symbolic reasoning can help mitigate mistrust in AI outputs.
Neurosymbolic AI seeks to unify formal logic with
neural learning.
Efforts to operationalize neurosymbolic AI are already
underway, producing demonstrable successes. For example, AlphaGeometry,
developed by Google DeepMind, combines neural pattern recognition with symbolic
reasoning to solve mathematics Olympiad problems reliably. By generating
synthetic datasets using formal symbolic rules and then training neural
networks on these datasets, the system reduces errors and enhances
interpretability. Other techniques, such as logic tensor networks, assign
graded truth values to statements, enabling neural networks to reason under
uncertainty. Likewise, roboticists have used neurosymbolic methods to train
machines to navigate environments with novel objects, dramatically reducing the
volume of training data required. These applications suggest that hybrid
approaches can yield practical advantages, even if the systems remain
specialized rather than fully general.
Despite these promising examples, integrating symbolic
and neural methods is far from straightforward. Symbolic knowledge bases,
though clear and logical, can be enormous and computationally expensive to
search. Consider the game of Go: the theoretical tree of all possible moves is
astronomically large, making exhaustive symbolic search infeasible. Neural
networks can alleviate this by predicting which branches are likely to yield
optimal outcomes, effectively pruning the search space. Similarly, incorporating
symbolic reasoning into language models can guide the generation of outputs
during complex tasks, reducing nonsensical or inconsistent results. Yet, these
integrations require careful orchestration; simply connecting a symbolic engine
to a neural network without coherent management often produces subpar
performance.
Underlying the technical challenges are philosophical
disagreements about the very nature of intelligence and the methods by which it
should be pursued. Some AI pioneers, such as Richard Sutton, argue that efforts
to embed explicit knowledge into machines have historically been outperformed
by approaches leveraging large datasets and computational scale. From this
perspective, the lessons of history suggest that symbolic augmentation may be a
distraction rather than a necessity. Others, including Gary Marcus, maintain
that symbolism provides essential reasoning tools that neural networks lack,
framing the debate as a philosophical as well as technical one. In practice,
both views influence current research trajectories, with proponents of each
advocating for strategies that align with their understanding of intelligence.
Observers note that these debates often obscure practical experimentation,
which continues regardless of theoretical disputes.
Symbolic systems also face difficulties representing
the complexity and ambiguity inherent in human knowledge. Projects like Cyc,
begun in the 1980s, attempted to encode common-sense reasoning, articulating
axioms about everyday relationships and events. While Cyc amassed millions of
such statements and influenced subsequent AI knowledge graphs, translating
nuanced, context-dependent human experiences into rigid logical rules remains
fraught with errors. For instance, although Cyc could represent that “a daughter
is a child” or “seeing someone you love may produce happiness,” exceptions
abound in human behavior, and strict logic cannot fully capture them.
Consequently, symbolic reasoning is most effective when applied selectively or
in tandem with flexible learning systems. The combination enables
generalization without sacrificing the interpretability that pure neural
networks struggle to achieve.
Neurosymbolic AI also introduces opportunities to
reduce the data burden traditionally required for training neural networks. By
embedding rules and relational logic, machines can achieve high accuracy with
far fewer examples than would be required otherwise. Jiayuan Mao’s work in
robotics exemplifies this: her hybrid system required only a fraction of the
training data that a purely neural model would need to understand object
relationships in visual tasks. This efficiency can accelerate development cycles
and lower resource consumption, making AI more accessible and
environmentally sustainable. Furthermore, hybrid approaches can facilitate
reasoning in domains where data is scarce or incomplete, extending AI’s
applicability to previously inaccessible problems. The challenge lies in
designing systems that balance rule-based reasoning with statistical learning
without compromising either.
Current efforts also explore the potential for
machines to develop their own symbolic representations autonomously. The
ultimate vision, according to Mao, is a system that not only learns from data
but can invent new categories, rules, and conceptual frameworks beyond human
understanding. Such capability would mark a fundamental shift, enabling AI to
contribute novel insights to mathematics, physics, or other knowledge domains.
Achieving this requires progress in AI “metacognition,” whereby systems monitor
and direct their own reasoning processes. Effective metacognitive architectures
would act as conductors, orchestrating the interplay between neural learning
and symbolic logic across multiple contexts. If realized, this could constitute
a genuine form of artificial general intelligence, capable of reasoning in ways
comparable to, or even beyond, humans.
Integrating symbolic knowledge can reduce training
data requirements dramatically.
Hardware and computational architecture also play a
critical role in realizing neurosymbolic AI’s potential. Current computing
platforms are often optimized for either neural network training or symbolic
reasoning, but not both simultaneously. Efficient hybrid computation may
necessitate novel chip designs, memory hierarchies, and processing paradigms
capable of supporting dual paradigms. As the field matures, other forms of
AI—quantum or otherwise—might complement or even supersede neurosymbolic approaches.
Nevertheless, the immediate priority for researchers is to establish robust,
flexible systems that can generalize across domains, combining reasoning,
learning, and problem-solving in a coherent framework. In this sense,
neurosymbolic AI represents a pragmatic middle path, leveraging lessons from
both historical and contemporary AI research.
While technical and philosophical hurdles remain,
neurosymbolic AI has already begun to reshape expectations of what intelligent
machines can achieve. Its proponents argue that reasoning, efficiency, and
transparency are within reach, provided that symbolic and neural components are
integrated thoughtfully. Early applications demonstrate that hybrid models can
outperform purely neural approaches in select domains, particularly when
understanding and logic are critical. The field is still in its formative stages,
with significant exploration required to establish general principles and
architectures. Yet the prospect of machines capable of reasoning, generalizing,
and even inventing new knowledge captures the imagination of both scientists
and policymakers. As AI continues to evolve, the marriage of neural flexibility
and symbolic clarity may chart the most promising path toward human-like
intelligence.
https://www.kolapse.com/en/?contenido=93179-could-symbolic-ai-transform-human-like-intelligence
