AI in the future in people’s perceptions and reality
To ensure humanity’s progress, we need to enlist ethical artificial intelligence in building a system of public governance based on universal human values.
GUIDE: ‘Ethical AI Governance Enables Confident AI Adoption’
🤖 Capgemini’s latest guide, “A Practical Guide to Implementing AI Ethics Governance” (https://www.capgemini.com/wp-content/uploads/2025/10/Implementing-AI-ethics-governance_20251006.pdf), explores how organizations can turn AI ethics principles, like those championed under the European Commission's EU AI Act, from abstract ideals into actionable governance. The report offers a clear path for embedding responsible AI across enterprises, helping leaders navigate complex AI-driven transformations with confidence and integrity.
👥 Helping Align AI Practices with Ethical Standards
The guide introduces a practical framework for AI ethics governance,
covering everything from bias management to sustainability. It emphasizes the
creation of a living AI Code of Ethics, the emergence of multidisciplinary AI
ethicist roles, and the alignment of AI practices with organizational values
and global standards like ISO 42001.
Shaping Enterprises' AI Operating Models
• AI ethics is no longer optional; it shapes trust, fairness, and
accountability across all levels of an organization:
• Workforce: AI ethicists and cross-functional teams ensure ethical risks are
identified, owned, and mitigated throughout the AI lifecycle.
• Customers & Society: Ethical AI systems foster fairness, transparency,
and social benefit, while accounting for cultural and contextual diversity.
• Innovation & Sustainability: Responsible AI practices integrate
environmental and resource considerations into AI deployment.
Focus Points Centre on AI Culture
• Human-Centric Integration: Embed ethics in AI design, decision-making,
and organizational culture.
• Bias & Fairness Management: Treat fairness as an ongoing, context-aware
process rather than a one-time check.
• Governance & Collaboration: Integrate AI ethics across legal, data, and
delivery teams while engaging stakeholders proactively.
• Sustainability & Impact Awareness: Consider the ethical implications of
AI’s energy and resource consumption.
https://www.linkedin.com/company/ai-&-partners/posts/?feedView=all
In today's environment, the defining trait for successfully using the opportunities AI creates has become emotional intelligence: the ability to evaluate the advice artificial intelligence gives and to judge which recommendations are trustworthy and which should not be blindly trusted.
How will AI transform business in 2026?
BY Robert Safian
How should leaders prepare for AI’s accelerating
impact on work and everyday life? AI scientist, entrepreneur, and Pioneers of
AI podcast host Rana el Kaliouby shares her predictions for the year ahead—from
physical AI entering the real world to what it means to onboard AI into your
org chart.
Let’s look ahead to 2026. You sent me some fascinating
thoughts about AI’s next-phase impact on business, and I’d love to take you
through them. The first one was the rise of what you called relationship
intelligent AI.
So everybody’s worried that AI is going to make us
less human and take away our human-to-human connections. There is definitely a
risk of that. But I think the thing I’m most excited about for 2026 is how AI
can actually help us build deeper human connections and more meaningful human
experiences. And the way this happens is through AI that can really help you
organize your relationships and your network and surface connections that you
need and maybe make warm introductions to you.
There are already a number of new companies that are
starting in this space. So one company’s called VIA.AI, it’s a
Boston-based company. They do this for sales professionals and BD professionals
who have to do this for their work. There’s a company called Goodword that I’m
very excited about. They’re doing this for just the average person. Like you
and I, we have very strong networks, but how can we organize it? So I’m excited
about that one. There’s a company called Boardy that does this for investors
and founders. So it’s becoming a thing, and I’m excited to see how these
companies take off in 2026. They’re all fairly new, so it’ll be interesting to
see how they evolve.
Yeah, and whether they can stay ahead of some of the
bigger chatbots that may just try to integrate some of this capability into the
products they already have. That’s always the case in this kind of evolution of
technology: What’s a feature and what’s a company, right? What’s an independent
service?
Absolutely. When I’m looking at these companies and
I’m diligencing them, that’s a key question that I ask. Is this something that
the next version of ChatGPT or Gemini is just going to implement? And if the
answer is yes, then that’s obviously not a defensible company. But a lot of
times there’s this additional moat of data and algorithms that you need to sit
on top of these LLMs. And I believe in this relationship intelligence space, I
don’t think this is something that just a kind of an off-the-shelf LLM can do.
It really needs to know you. It needs to know your data, it needs to know your
relationships.
And you have to trust it enough to share all that data
with it, right?
Absolutely.
That’s your proprietary data, whether it’s about your
business or about you individually.
Exactly. And I don’t want this to all go up to
OpenAI’s cloud. I want to trust that I have control over these really private
relations. If you and I have a conversation about our kids, I don’t necessarily
want that to now sit in a general OpenAI cloud and be used to train the next
ChatGPT. So that safety and security, appreciating the privacy and the
importance of this data, is really key.
Another business change you expect in 2026 is the
insertion of AI into the org chart. This is about who manages AI, like
performance reviews and team culture impacts?
Yeah, so this goes back to the thesis that there’s
this shift in how AI is creating value, and it’s not a tool anymore. Well, it
is a tool. It’ll always be a tool, but it’s not a tool that helps you get work
done faster. It could actually take an end-to-end task and get it done for you.
And I’ll give a few examples.
So I’m an investor in a company called Synthpop, and
instead of building a tool that helps healthcare administrators accelerate or
really become efficient in how they do patient intake, it just takes the task
of patient intake. It does the thing end to end. And so if you then imagine
what that means for a hospital or a clinic, it will have a combination of human
workers collaborating and working closely with AI coworkers.
And so then the question becomes, well, who manages
these hybrid teams? Sometimes it’s a human manager, sometimes it’s an AI
manager. I’m also an investor in a company called Tough Day, and they sell you
AI managers. And then how do you do performance reviews for these hybrid teams?
How do you build a culture? Like at Affectiva, my company, culture was our
superpower. How do you build a culture when some of your team members are AI
and some of your team members are humans?
So I think that is going to spur a lot of conversation
around how do you build organizations that are combinations of digital agents
and human employees?
As you talk about this merging of AI agents and humans
in work, it brings up that looming question about the impact of AI on jobs and
employment. And some numbers are coming out now that make it seem like, “Oh,
it’s bad for jobs.” There are other numbers coming out that are like, “Oh,
we’re actually hiring more people because of it.” Do you have a prediction
about what is going to happen with that in 2026? Is AI going to take over roles
that have been done by humans that quickly?
We had a really fascinating roundtable discussion at
the Fortune Brainstorm AI conference and the headline was like, “Is AI killing
entry-level jobs?” And actually, a lot of the Fortune companies and also AI
companies that were around the table were basically saying, “No, we’re hiring
more entry-level jobs. They’re just not the same jobs that we were
traditionally seeing.” And also the career ladders have changed.
So my prediction is we’re going to see an entirely
different organization where I think if you are able to come in an entry-level
position, for example, but work very closely with AI and be AI-native and be AI
fluent and be able to wear multiple hats, I think that’s going to go a long
way. As opposed to this very siloed job trajectory where you come in, this is
your little task, and then you do more of it, and then you go up the career
ladder. I think that’s going to change. I think young people are looking for
different ways of working, and I think AI is changing all of that anyway.
Will there be jobs that will go away? I think so. I
can’t remember who said this line, but it’s now very popular: “It’s not AI
that’s going to take your job. It’s going to be somebody who knows how to use
AI.” And I believe that to be true.
https://www.fastcompany.com/.../how-will-ai-transform...
Are we
creating truly intelligent systems?
The progress
of AI requires appropriate attention to preserving our humanity!
12-15-2025
How to transform
AI from a tool into a partner
The 4 stages of human-AI collaboration.
BY Faisal Hoque
The conversation about AI in
the workplace has been dominated by the simplistic narrative that machines will
inevitably replace humans. But the organizations achieving real results with AI
have moved past this framing entirely. They understand that the most valuable
AI implementations are not about replacement but collaboration.
The relationship between workers and AI systems is
evolving through distinct stages, each with its own characteristics,
opportunities, and risks. Understanding where your organization sits on this
spectrum—and where it’s headed—is essential for capturing AI’s potential while
avoiding its pitfalls.
Stage 1: Tools and Automation
This is where most organizations begin. At this stage,
AI systems perform discrete, routine tasks while humans maintain full control
and decision authority. The AI functions primarily as a productivity tool,
handling well-defined tasks with clear parameters.
Examples are everywhere: document classification
systems that automatically sort incoming correspondence, chatbots that answer
standard customer inquiries, scheduling assistants that optimize meeting
arrangements, data entry automation that extracts information from forms.
The key characteristic of this stage is that AI
operates within narrow boundaries. Humans direct the overall workflow and make
all substantive decisions. The AI handles the tedious parts, freeing humans for
higher-value work.
The primary ethical considerations at this stage
involve ensuring accuracy and preventing harm from automated processes. When an
AI system automatically routes customer complaints or flags applications for
review, errors can affect real people. Organizations must implement quality
controls and monitoring to catch mistakes before they cause damage—particularly
for vulnerable populations who may be less able to navigate around system
errors.
https://rogermartin.medium.com/a-leaders-role-in-fostering-ai-superpowers-c45d079807e8
We are seeing the merging of artificial intelligence agents and humans.
To transform AI from a tool into a partner, treat it like a new team member: give it a "job description," provide rich context (company, goals, people), onboard it with clear expectations, and give continuous, specific feedback so it learns and scales your thinking, moving from simple tasks to complex strategic collaboration. The key is shifting from asking for answers to co-creating ideas, using precise prompts and iterative refinement to foster better thinking, not just faster output.
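As a rough illustration of the "job description" idea above, here is a minimal Python sketch that assembles a reusable system prompt from a role, goals, context, and standing expectations, then feeds follow-up feedback into the next request. The call_llm function, the field names, and the example content are hypothetical placeholders, not any particular vendor's API.

```python
# Minimal sketch of onboarding an AI "team member" with a job description,
# context, and a feedback loop. call_llm() is a hypothetical stand-in for
# whatever chat-completion client you actually use.

def build_job_description(role, goals, context, expectations):
    """Compose a reusable system prompt that acts as the AI's job description."""
    return (
        f"Role: {role}\n"
        f"Goals: {'; '.join(goals)}\n"
        f"Context: {context}\n"
        f"Expectations: {'; '.join(expectations)}\n"
        "Always explain your reasoning and ask clarifying questions when unsure."
    )

def call_llm(system_prompt, messages):
    raise NotImplementedError("Replace with your chat-completion client of choice.")

system_prompt = build_job_description(
    role="Strategy analyst supporting the product team",
    goals=["Summarize market research", "Draft option analyses for review"],
    context="B2B SaaS company, EU market, 40-person product org",
    expectations=["Cite sources", "Flag uncertainty", "Match our house style"],
)

history = [{"role": "user", "content": "Draft three positioning options for Q3."}]
# draft = call_llm(system_prompt, history)

# Continuous, specific feedback becomes part of the ongoing conversation,
# so the next iteration improves rather than starting from scratch.
history.append({"role": "user",
                "content": "Feedback: option 2 ignores pricing; revise with pricing data."})
```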
In 2025, transforming AI from a tool into a partner
requires a shift from viewing it as a routine task-executor to an active
collaborator with shared responsibility.
The following steps outline how to achieve this
transition:
1. Adopt a Collaboration Framework
Most organizations progress through four distinct
stages to reach a true partnership:
• Automation: AI handles discrete, routine tasks
(e.g., sorting emails) while humans maintain total control.
• Augmentation: AI provides analysis and
recommendations (e.g., predictive analytics) to inform human decisions.
• Collaboration: Humans and AI work as a team,
leveraging complementary strengths (AI's processing power and humans' ethical reasoning) to share responsibility for outcomes.
• Supervision: AI handles routine operations
autonomously within established human-set parameters and governance.
2. Shift to Agentic AI
In 2025, the focus has shifted from simple Generative
AI to Agentic AI. Unlike tools that only respond to prompts, agentic systems:
• Take Action: They move beyond generating content to
executing multi-step processes like debugging code or interacting with
customers autonomously.
• Learn Context: They adapt to your personal
preferences and past mistakes, becoming more intuitive over time.
• Act as "Virtual Coworkers": They can plan
and execute complex workflows as a team member, not just an assistant.
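To make the contrast between a prompt-and-response tool and a virtual coworker more concrete, here is a minimal, generic agent-loop sketch: the model proposes the next action, a tiny tool registry executes it, and the observation is fed back until the goal is judged complete. The tool names and the plan_next_step function are illustrative assumptions, not a specific framework's API.

```python
# Minimal sketch of an agentic loop: plan -> act -> observe -> repeat.
# plan_next_step() stands in for a call to whichever model you use; the
# tool registry here is deliberately tiny and hypothetical.

def search_orders(customer_id):
    # Placeholder tool: look up a customer's recent orders.
    return [{"order_id": "A-1001", "status": "delayed"}]

def send_email(to, body):
    # Placeholder tool: queue an outbound email.
    return {"queued": True, "to": to}

TOOLS = {"search_orders": search_orders, "send_email": send_email}

def plan_next_step(goal, observations):
    """Hypothetical model call; hardcoded here so the demo runs end to end."""
    if not observations:
        return ("search_orders", {"customer_id": "C-42"})
    if len(observations) == 1:
        return ("send_email", {"to": "customer@example.com",
                               "body": "Your order A-1001 is delayed; here are your options."})
    return None  # goal satisfied

def run_agent(goal, max_steps=5):
    observations = []
    for _ in range(max_steps):
        step = plan_next_step(goal, observations)
        if step is None:
            break
        tool_name, args = step
        observations.append(TOOLS[tool_name](**args))  # act, then observe
    return observations

print(run_agent("Resolve the delayed-order complaint for customer C-42"))
```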
3. Redefine Human Roles
A partnership is successful only when human roles
evolve to match AI's capabilities:
• Focus on the "30% Rule": Let AI handle 70%
of routine tasks so humans can focus on the 30% that requires creativity,
empathy, and ethical judgment.
• Develop New Skills: Prioritize AI Literacy
(understanding AI's limits) and Prompt Engineering (effective communication
with the partner).
• Invest in "Human Centric" Skills:
Strengthen uniquely human traits like critical thinking and emotional
intelligence, which AI cannot replicate.
4. Build Trust Through Governance
A partner must be reliable. Establish trust by
implementing:
• Explainable AI (XAI): Ensure the AI can articulate
the "why" behind its decisions so it's not a "black box".
• Human Oversight: Rigorously validate AI outputs to
maintain quality and brand voice.
• Digital Workforce Registries: Track AI agents
similarly to human employees to ensure accountability and compliance.
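As one way to picture the digital workforce registry idea above, here is a minimal sketch of a registry record and a stale-audit check; the class name, fields, and 180-day threshold are assumptions for illustration rather than a standard schema.

```python
# Minimal sketch of a digital workforce registry: track AI agents the way an
# HR system tracks employees, so each one has an owner and an audit trail.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AgentRecord:
    agent_id: str
    purpose: str
    model: str            # underlying model or vendor, e.g. "in-house-llm-v3"
    human_owner: str      # accountable person, mirroring a manager of record
    last_audit: date
    approved_tools: list = field(default_factory=list)

registry = {}

def register_agent(record: AgentRecord):
    registry[record.agent_id] = record

register_agent(AgentRecord(
    agent_id="intake-bot-01",
    purpose="Patient intake triage",
    model="in-house-llm-v3",
    human_owner="ops.lead@example.com",
    last_audit=date(2025, 11, 1),
    approved_tools=["ehr_lookup", "appointment_scheduler"],
))

# Compliance check: flag agents whose last audit is older than 180 days.
stale = [a for a in registry.values() if (date.today() - a.last_audit).days > 180]
print([a.agent_id for a in stale])
```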
5. Create a Culture of Experimentation
Treat AI integration with the same discipline as
hiring human team members:
• Launch Pilot Programs: Test AI as a partner in a
small, controlled environment to solve real problems before scaling.
• Social Dialogue: Encourage open communication where
employees can share feedback or concerns about their new AI
"teammates".
The artificial intelligence boom: a new reality. Where will we live now?
10 Generative AI Trends In 2026 That Will Transform Work And Life
By Bernard Marr
Oct 13, 2025
Generative AI is moving into a new phase in 2026,
reshaping industries from entertainment to healthcare while creating fresh
opportunities and challenges.
In 2026, generative AI is firmly embedded in workflows
across many larger organizations. Meanwhile, millions of us now rely on it for
research, study, content creation and even companionship.
What started with the arrival of ChatGPT back in late 2022
has spilled into every corner of life, and the pace is only going to
accelerate.
Of course, challenges like copyright, bias, and the
risk of job displacement remain, but the upside is too powerful for anyone to
ignore. From augmenting human productivity to accelerating our ability to
learn, machines capable of generating words, pictures, video and code are
reshaping our world.
The next 12 months will undoubtedly see the arrival of
new tools and further integration of generative AI into our everyday lives. So
here are the ten trends I think will be most significant in 2026.
1. Generative Video Comes Of Age
This year, Netflix brought generative AI into
primetime in the Argentinian-produced series El Eternauta. Producers said that
it slashed production time and costs compared to traditional animation and
special effects techniques. In 2026, expect generative AI in entertainment to
become mainstream as we see it powering more big-budget TV shows and Hollywood
extravaganzas.
2. Authenticity Is King
Faced with a sea of generative AI content, individuals
and brands will look for new ways to communicate authenticity and genuine human
experience. While audiences will continue to find AI useful for quickly
conveying information and creating summaries, creators who are able to leverage
truly human qualities to provide content that machines can’t match will rise
above the tide of generic “AI slop”.
3. The Copyright Conundrum
Debate over the use of copyrighted content to train
generative AI models and fair compensation for human creatives will increase in
intensity throughout 2026. AI developers need access to human-created content
in order to train machines to mimic it, while many artists, musicians, writers
and filmmakers consider their work being used in this way as nothing more than
theft. Over the next year, expect more lawsuits, intense public debate and
potentially some attempts to resolve the situation through regulation, as
lawmakers try to strike a balance that allows technological innovation while
respecting intellectual property rights.
4. Agentic Chatbots—From Reactive To Proactive
Rather than simply providing information or generating
content in response to individual prompts, chatbots will become more and more
capable of working autonomously towards long-term goals as they take on agentic
qualities. This year, ChatGPT debuted its Agent Mode, and other tools such as
Gemini and Claude are adding abilities to communicate with third-party apps and
take multi-step actions without human intervention. In 2026, generative AI
tools will make the leap from clever chatbots to action-taking assistants as
the agentic revolution heats up.
5. Privacy-Focused GenAI
As businesses invest more heavily in generative AI,
there will be a growing awareness of the risks to privacy and the need to take
steps to secure personal and customer data. This will drive interest in privacy-centric AI models where data processing takes place on-premises or
directly on users’ own devices. Apple, for example, differentiates itself with
its focus on putting privacy first, and I expect to see other AI device
manufacturers and developers following its lead in 2026.
6. Generative AI in Gaming
In 2026, gaming could become one of the most exciting
frontiers for generative AI. Developers are creating games with emergent
storylines that adapt to players’ actions, even when they do something entirely
unexpected. And characters will no longer be limited to following scripts, but
can respond, hold conversations and act just like real people. This will create
richer, more immersive and interactive experiences for players, while cutting
production costs and unlocking new creative options for studios.
7. Synthetic Data For Analytics And Simulation
As well as words and pictures, generative AI is
increasingly used to create the raw data needed to understand the real world,
simulate physical, mechanical and biological systems and even train more
algorithms. This will allow banks to model fraud detection systems without
exposing real customer records, and healthcare providers to simulate treatments
and medical trials without risking patient privacy. With demand for synthetic
training data growing, it will become fuel for cutting-edge analytics and automated
decision-making systems in 2026 and beyond.
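As a small, vendor-neutral illustration of that workflow, the sketch below generates a purely synthetic, imbalanced dataset with scikit-learn and uses it to prototype a fraud-style classifier without touching real customer records; the feature count and class balance are arbitrary assumptions.

```python
# Sketch: prototype a fraud-detection-style model on purely synthetic data,
# so no real customer records are exposed during development.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic "transactions": 20 numeric features, 2% positive (fraud-like) class.
X, y = make_classification(n_samples=20_000, n_features=20, n_informative=8,
                           weights=[0.98, 0.02], random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("AUC on held-out synthetic data:",
      round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```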
8. Monetizing Generative Search
Generative AI is transforming the way we search for
information online. This is impacting the business of companies that rely on
search results to drive traffic, but also forcing advertising services like
Google and Microsoft Bing to rethink the way they drive revenue. In 2026, we
can expect moves towards addressing this, as services such as Google’s Search
Generative Experience and Perplexity AI attempt to bridge the gap between
generative search and paid-for search ads.
9. Further Breakthroughs In Scientific Research
This year, we saw genAI proving it can be a valuable
aid to scientific research, driving breakthroughs in drug discovery, protein
folding, energy production and astronomy. In 2026, this trend will gather pace
as researchers increasingly leverage generative models in the search for
solutions to some of humanity’s biggest problems, such as curing diseases,
fighting climate change and solving food and water shortages.
10. Generative AI Jobs Prove Their Value
Much has been made of the jobs that generative AI will displace, but in 2026 the focus will shift to the new roles it will create.
We will start to see the true scale of demand for people with the skills to
fill roles such as prompt engineers, model trainers, output auditors and AI
ethicists. Those who can coordinate and integrate the work of AI agents with
human teams will be in high demand, and we will start to get a clearer
understanding of exactly how valuable they will really be when it comes to
unlocking the benefits of AI while mitigating its potential for harm.
Generative AI is no longer an emerging technology on the sidelines; it is becoming the engine driving change across every industry and daily life. The trends we see in 2026 point to a future where the line between human and machine creativity, productivity, and intelligence becomes increasingly blurred. Organizations that adapt quickly, invest in the right skills, and embrace responsible innovation will be the ones that thrive as this next chapter of AI unfolds.
Priority conditions for organizing work so that AI's potential can be used successfully.
AI
leadership: Different perspectives, one shared imperative
12-19-2025
Each leader sees AI differently, yet the companies that can connect those views build enterprise-wide momentum.
BY Dan Priest
I’ve watched many types of leaders struggle with
what AI means for their business. Three years into the GenAI era,
the primary question is no longer the technology itself but its business value. Inside the C-suite, the answers often depend on where you sit. The
CEO’s appetite for risk, the CFO’s focus on returns, the CTO’s guardrails for
scalability—all of it shapes what’s possible.
But those differences don’t have to be friction; they
can be fuel if appropriately managed. Each perspective reflects a real pressure
point and a real opportunity. When leaders transcend any one area of the
business and focus on the imperatives shaping the future, they can begin to
connect those views. AI stops being a collection of pilots and becomes part of
the organization’s DNA.
YOUR AI AGENDA DEPENDS ON THEIRS
Because AI touches each part of the business, each
executive has a stake in how it unfolds. But if you want to advance your own
priorities, whether that’s innovation, efficiency, or market growth, you should
understand what drives your C-suite counterparts. Recognizing those drivers
isn’t just collaboration; it’s strategy. It’s how you turn competing incentives
into collective momentum.
The companies that pull ahead won’t be those that move
the fastest or spend the most. They’ll be the ones that connect technical
capability, business strategy, and financial discipline into one cohesive
approach.
CEO: The course setter
What’s shaping their view:
CEOs feel the full weight of expectation.
Shareholders, boards, customers, and employees all want to know: How are we
using AI? Many see technology as a way to reshape their business models,
deliver new customer value, and signal innovation to the market.
Where they’re focused:
The most effective CEOs connect AI to their long-term
strategy, not just short-term wins. They’re using it to build new business
capabilities—the kind that can scale, differentiate, and sustain advantage. The
CEOs leading the way don’t just want to adopt AI; they want to reimagine their
companies around it.
CFO: The value architect
What’s shaping their view:
CFOs are naturally data optimists. They’ve seen how
automation, forecasting, and compliance tools have transformed their own
functions. They recognize that AI can amplify productivity and
decision-making across the enterprise. But they’re also disciplined investors
and they want clear visibility into where AI can deliver measurable ROI.
Where they’re focused:
Today’s CFO is evolving from financial gatekeeper to
enterprise value architect. They’re building frameworks for evaluating,
prioritizing, and scaling AI initiatives responsibly. They’re making sure the
business doesn’t just invest in AI—it invests wisely, with transparency and
accountability.
CIO and CTO: The foundation builder
What’s shaping their view:
CIOs and CTOs have been through technology hype cycles
before. They know AI’s promise is real, but only with a solid foundation of
data integrity, governance, and security. They’re responsible for creating the
infrastructure that allows innovation to flourish while managing the very real
risks of bias, privacy, and scale.
Where they’re focused:
They’re balancing enthusiasm with realism. Their
challenge is to translate AI’s potential into practical, reliable systems that
help drive business outcomes. Collaboration with business leaders is critical.
The greatest value from AI emerges when technical and operational teams move in
sync and when the business side understands the “how,” and the tech side
understands the “why.”
Business unit leaders: The impact driver
What’s shaping their view:
For business unit leaders, AI is tangible. It shows up
in the tools their teams use, the workflows they manage, and the customer
experiences they deliver. They’re close to where the value is created and they
see firsthand what’s working and what’s not.
Where they’re focused:
These leaders are the bridge between corporate
ambition and operational reality. When empowered, they help test ideas quickly,
share learnings across teams, and turn pilots into scalable impact. Their
feedback helps the organization adapt faster and makes sure that AI delivers
measurable outcomes, not just proof-of-concepts.
Board members: The long-view champion
What’s shaping their view:
Boards bring deep business expertise and oversight
responsibility. Many are still building their technical fluency in AI, but they
instinctively understand its strategic implications, including risk,
resilience, and long-term competitiveness.
Where they’re focused:
Boards are asking sharper questions such as, “How does
AI change our risk profile?” “How should we govern its use?” “What new value
can it unlock for shareholders?” The C-suite’s opportunity is to translate AI
into business terms that resonate, explaining not just the technology, but the
transformation story it enables.
A SHARED PATH FORWARD
From where I sit, no two leaders see AI through the
same lens, and that’s exactly the point. The CEO brings vision, the CFO grounds
it in accountability, the CIO and CTO lay the foundation, and business leaders
turn ambition into action. The board keeps the focus on long-term value.
When those perspectives come together, momentum
builds. The organization learns faster, scales smarter, and aligns not by
erasing differences but by using them as fuel for a shared purpose.
The goal isn't to agree on everything; it's to move
forward together. Leaders should resist the temptation to hold the AI agenda
hostage until their needs are satisfied. They should avoid myopic perspectives
that over-index on the past or prioritize their area of responsibility over the
company’s big objectives. AI should inspire a forward-looking, unifying
enterprise-wide imperative. That takes leadership. Define a North Star, solve
problems creatively, communicate progress openly, and commit capital where
conviction is highest.
AI isn’t just another business trend; it’s a new
system of competition. While each leader begins with their own perspective, the
companies that will likely lead in this new era are those that make AI a
collective imperative.
https://www.fastcompany.com/91462772/ai-leadership-different-perspectives-one-shared-imperative
An ambitious plan to review the application of EU digital and privacy rules as part of the "Digital Omnibus".
Europe's businesses, from factories to start-ups, will spend less time on administrative work and compliance and more time innovating and scaling up, thanks to the European Commission's new digital package. This initiative opens opportunities for European companies to grow and to stay at the forefront of technology while at the same time promoting Europe's highest standards of fundamental rights, data protection, safety and fairness.
At its core, the package includes a digital
omnibus that streamlines rules on artificial intelligence (AI),
cybersecurity and data, complemented by a Data Union Strategy to
unlock high-quality data for AI and European Business Wallets that
will offer companies a single digital identity to simplify
paperwork and make it much easier to do business across EU Member States.
The package aims to ease compliance with
simplification efforts estimated to save up to €5 billion in administrative
costs by 2029. Additionally, the European Business Wallets could unlock another
€150 billion in savings for businesses each year.
1. Digital Omnibus
With today's digital omnibus, the Commission is
proposing to simplify existing rules on Artificial Intelligence, cybersecurity,
and data.
Innovation-friendly AI rules: Efficient implementation of the AI
Act will have a positive impact on society, safety and fundamental
rights. Guidance and support are essential for the roll-out of any new law, and
this is no different for the AI Act.
The Commission proposes linking the entry into
application of the rules governing high-risk AI systems to the availability of
support tools, including the necessary standards.
The timeline for applying high-risk rules is adjusted
to a maximum of 16 months, so the rules start applying once the Commission
confirms the needed standards and support tools are available, giving companies the support they need.
The Commission is also proposing targeted amendments
to the AI Act that will:
- Extend certain simplifications that are granted to small and
medium-sized enterprises (SMEs) to small mid cap companies (SMCs),
including simplified technical documentation requirements, saving at least
€225 million per year;
- Broaden compliance measures so more innovators can use regulatory
sandboxes, including an EU-level sandbox from 2028 and more real-world
testing, especially in core industries like the automotive sector;
- Reinforce the AI Office's powers and centralise oversight of AI
systems built on general-purpose AI models, reducing governance
fragmentation.
Simplifying cybersecurity reporting: The omnibus also introduces a single-entry point where
companies can meet all incident-reporting obligations. Currently, companies
must report cybersecurity incidents under several laws, including among others
the NIS2
Directive, the General
Data Protection Regulation (GDPR), and the Digital Operational
Resilience Act (DORA). The interface will be developed with robust
security safeguards and will undergo comprehensive testing to ensure its
reliability and effectiveness.
An innovation-friendly privacy framework: Targeted amendments to the GDPR will harmonise,
clarify and simplify certain rules to boost innovation and support compliance
by organisations, while keeping intact the core of the GDPR, maintaining the
highest level of personal data protection.
Modernising cookie rules to improve users' experience
online: The amendments will reduce
the number of times cookie banners pop up and allow users to
indicate their consent with one click and save their cookie preferences through central preference settings in their browsers.
Improving access to data: Today's digital package aims to improve access to data
as a key driver of innovation. It simplifies data rules and makes them
practical for consumers and businesses by:
- Consolidating EU data rules through the Data Act, merging four pieces of legislation into one for enhanced legal
clarity;
- Introducing targeted exemptions to some of the Data Act's
cloud-switching rules for
SMEs and SMCs resulting in around €1.5 billion in one-off savings;
- Offering new guidance on compliance with the Data Act through model contractual terms for data access and use, and standard
contractual clauses for cloud computing contracts;
- Boosting European AI companies by unlocking access to
high-quality and fresh datasets for AI,
strengthening the overall innovation potential of businesses across the
EU.
2. Data Union Strategy
The new Data
Union Strategy outlines additional measures to unlock more
high-quality data for AI by expanding access, such as data labs. It puts in
place a Data Act Legal Helpdesk, complementing further measures to support
implementation of the Data Act. It also strengthens Europe's data sovereignty
through a strategic approach to international data policy: an anti-leakage toolbox, measures to protect sensitive non-personal data, and guidelines to
assess fair treatment of EU data abroad.
3. European Business Wallet
This proposal will provide European companies and
public sector bodies with a unified digital tool, enabling them to digitalise
operations and interactions that in many cases currently still need to be done
in person. Businesses will be able to digitally sign, timestamp and seal
documents; securely create, store and exchange verified documents; and
communicate securely with other businesses or public administrations in their
own and the other 26 Member States.
Scaling up a business in other Member States, paying
taxes and communicating with public authorities will be easier than ever before
in the EU. Assuming broad uptake, the European Business Wallets will allow
European companies to reduce administrative processes and costs, thereby unlocking
up to €150 billion in savings for businesses each year.
Next Steps
The digital omnibus legislative proposals will now be
submitted to the European Parliament and the Council for adoption. Today's
proposals are a first step in the Commission's strategy to simplify and make
more effective the EU's digital rulebook.
The Commission has today also launched the second step
of the simplification agenda, with a wide consultation on the Digital Fitness
Check open until 11 March 2026. The Fitness Check will ‘stress test' how the
rulebook delivers on its competitiveness objective, and examine the coherence
and cumulative impact of the EU's digital rules.
Background
The Digital package marks the seventh omnibus
proposal. The Commission set a course to simplify
EU rules to make the EU economy more competitive and more prosperous
by making business in the EU simpler, less costly and more efficient. The
Commission has a clear target to deliver an unprecedented simplification effort
by achieving at least a 25% reduction in administrative burdens, and at least 35% for SMEs, by the end of 2029.
https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2718
Will we be able to maintain our humanity in a world increasingly dominated by artificial intelligence?!
10 AI dangers and risks and how to manage them
1. Bias
2. Cybersecurity threats
3. Data privacy issues
4. Environmental harms
5. Existential risks
6. Intellectual property infringement
7. Job losses
8. Lack of accountability
9. Lack of explainability and transparency
10. Misinformation and manipulation
Make AI governance an enterprise priority
Artificial intelligence (AI) has enormous value but
capturing the full benefits of AI means facing and handling its potential
pitfalls. The same sophisticated systems used to discover novel drugs, screen
diseases, tackle climate change, conserve wildlife and protect biodiversity can
also yield biased algorithms that cause harm and technologies that threaten
security, privacy and even human existence.
Here’s a closer look at 10 dangers of AI and
actionable risk management strategies. Many of the AI risks listed here can be
mitigated, but AI experts, developers, enterprises and governments must still
grapple with them.
1. Bias
Humans are innately biased, and the AI we develop can
reflect our biases. These systems inadvertently learn biases that might be
present in the training data and exhibited in the machine learning (ML)
algorithms and deep learning models that underpin AI development. Those learned
biases might be perpetuated during the deployment of AI, resulting in skewed
outcomes.
AI bias can have unintended consequences with
potentially harmful outcomes. Examples include applicant tracking systems
discriminating against gender, healthcare diagnostics systems returning lower
accuracy results for historically underserved populations, and predictive
policing tools disproportionately targeting systemically marginalized
communities, among others.
Take action:
Establish an AI governance strategy encompassing
frameworks, policies and processes that guide the responsible development and
use of AI technologies.
Create practices that promote fairness, such as
including representative training data sets, forming diverse development teams,
integrating fairness metrics, and incorporating human oversight through AI
ethics review boards or committees.
Put bias mitigation processes in place across the AI
lifecycle. This involves choosing the correct learning model, conducting data
processing mindfully and monitoring real-world performance.
Look into AI fairness tools, such as IBM’s open source
AI Fairness 360 toolkit.
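Toolkits such as AI Fairness 360 package metrics like disparate impact and statistical parity difference; as a rough, library-free illustration of what those metrics measure, here is a minimal pandas sketch on made-up hiring data (the column names, toy values, and 0.8 threshold are assumptions for the example).

```python
# Rough illustration of two common fairness metrics on toy hiring data.
# Disparate impact compares selection rates between groups; a common
# rule of thumb flags ratios below 0.8.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [ 1,   1,   0,   1,   0,   1,   0,   0 ],
})

rates = df.groupby("group")["selected"].mean()
disparate_impact = rates["B"] / rates["A"]          # ratio of selection rates (B vs A)
statistical_parity_diff = rates["B"] - rates["A"]   # difference of selection rates

print(f"Selection rates:\n{rates}")
print(f"Disparate impact (B vs A): {disparate_impact:.2f}")
print(f"Statistical parity difference: {statistical_parity_diff:.2f}")
if disparate_impact < 0.8:
    print("Potential adverse impact: investigate data and model.")
```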
2. Cybersecurity threats
Bad actors can exploit AI to launch cyberattacks. They
manipulate AI tools to clone voices, generate fake identities and create
convincing phishing emails—all with the intent to scam, hack, steal a person’s
identity or compromise their privacy and security.
And while organizations are taking advantage of technological advancements such as generative AI, only 24% of gen AI initiatives are secured. This lack of security threatens to expose data and AI models to breaches, the global average cost of which is a whopping USD 4.88 million in 2024.
Take action:
Here are some of the ways enterprises can secure their
AI pipeline, as recommended by the IBM Institute for Business Value (IBM IBV):
Outline an AI safety and security strategy.
Search for security gaps in AI environments through
risk assessment and threat modeling.
Safeguard AI training data and adopt a
secure-by-design approach to enable safe implementation and development of AI
technologies.
Assess model vulnerabilities using adversarial
testing.
Invest in cyber response training to level up
awareness, preparedness and security in your organization.
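Full adversarial testing uses targeted attacks and dedicated tooling, but a simple robustness probe conveys the idea: perturb inputs slightly and watch how far accuracy falls. The sketch below uses random noise on a scikit-learn model purely as an illustration, not as a substitute for proper adversarial evaluation.

```python
# Rough sketch of a robustness probe: add small random perturbations to test
# inputs and measure how much accuracy degrades compared to clean data.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
clean_acc = model.score(X_test, y_test)

rng = np.random.default_rng(0)
for eps in (0.5, 1.0, 2.0):
    X_noisy = X_test + rng.normal(scale=eps, size=X_test.shape)
    print(f"eps={eps}: accuracy {model.score(X_noisy, y_test):.3f} (clean {clean_acc:.3f})")
```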
3. Data privacy issues
Large language models (LLMs) are the underlying AI
models for many generative AI applications, such as virtual assistants and
conversational AI chatbots. As their name implies, these language models
require an immense volume of training data.
But the data that helps train LLMs is usually sourced
by web crawlers scraping and collecting information from websites. This data is
often obtained without users’ consent and might contain personally identifiable
information (PII). Other AI systems that deliver tailored customer experiences
might collect personal data, too.
Take action:
Inform consumers about data collection practices for
AI systems: when data is gathered, what (if any) PII is included, and how data
is stored and used.
Give them the choice to opt out of the data collection
process.
Consider using computer-generated synthetic data
instead.
4. Environmental harms
AI relies on energy-intensive computations with a
significant carbon footprint. Training algorithms on large data sets and
running complex models require vast amounts of energy, contributing to
increased carbon emissions. One study estimates that training a single natural
language processing model emits over 600,000 pounds of carbon dioxide, nearly 5 times the average lifetime emissions of a car.
Water consumption is another concern. Many AI
applications run on servers in data centers, which generate considerable heat
and need large volumes of water for cooling. A study found that training GPT-3
models in Microsoft’s US data centers consumes 5.4 million liters of water, and
handling 10 to 50 prompts uses roughly 500 milliliters, which is equivalent to
a standard water bottle.
Take action:
Consider data centers and AI providers that are
powered by renewable energy.
Choose energy-efficient AI models or frameworks.
Train on less data and simplify model architecture.
Reuse existing models and take advantage of transfer
learning, which employs pretrained models to improve performance on related
tasks or data sets.
Consider a serverless architecture and hardware
optimized for AI workloads.
5. Existential risks
In March 2023, just 4 months after OpenAI introduced
ChatGPT, an open letter from tech leaders called for an immediate 6-month pause
on “the training of AI systems more powerful than GPT-4.” Two months later,
Geoffrey Hinton, known as one of the “godfathers of AI,” warned that AI’s rapid
evolution might soon surpass human intelligence. Another statement from AI
scientists, computer science experts and other notable figures followed, urging
measures to mitigate the risk of extinction from AI, equating it to risks posed
by nuclear war and pandemics.
While these existential dangers are often seen as less
immediate compared to other AI risks, they remain significant. Strong AI, or
artificial general intelligence, is a theoretical machine with human-like
intelligence, while artificial superintelligence refers to a hypothetical
advanced AI system that transcends human intelligence.
Take action:
Although strong AI and superintelligent AI might seem
like science fiction, organizations can get ready for these technologies:
Stay updated on AI research.
Build a solid tech stack and remain open to
experimenting with the latest AI tools.
Strengthen AI teams’ skills to facilitate the adoption
of emerging technologies.
6. Intellectual property infringement
Generative AI has become a deft mimic of creatives,
generating images that capture an artist’s form, music that echoes a singer’s
voice or essays and poems akin to a writer’s style. Yet, a major question
arises: Who owns the copyright to AI-generated content, whether fully generated
by AI or created with its assistance?
Intellectual property (IP) issues involving
AI-generated works are still developing, and the ambiguity surrounding
ownership presents challenges for businesses.
Take action:
Implement checks to comply with laws regarding
licensed works that might be used to train AI models.
Exercise caution when feeding data into algorithms to
avoid exposing your company’s IP or the IP-protected information of others.
Monitor AI model outputs for content that might expose
your organization’s IP or infringe on the IP rights of others.
7. Job losses
AI is expected to disrupt the job market, inciting
fears that AI-powered automation will displace workers. According to a World
Economic Forum report, nearly half of the surveyed organizations expect AI to
create new jobs, while almost a quarter see it as a cause of job losses.
While AI drives growth in roles such as machine
learning specialists, robotics engineers and digital transformation
specialists, it is also prompting the decline of positions in other fields.
These include clerical, secretarial, data entry and customer service roles, to
name a few. The best way to mitigate these losses is by adopting a proactive
approach that considers how employees can use AI tools to enhance their work, focusing on augmentation rather than replacement.
Take action:
Reskilling and upskilling employees to use AI
effectively is essential in the short term. However, the IBM IBV recommends a
long-term, three-pronged approach:
Transform conventional business and operating models,
job roles, organizational structures and other processes to reflect the
evolving nature of work.
Establish human-machine partnerships that enhance
decision-making, problem-solving and value creation.
Invest in technology that enables employees to focus
on higher-value tasks and drives revenue growth.
8. Lack of accountability
One of the more uncertain and evolving risks of AI is
its lack of accountability. Who is responsible when an AI system goes wrong?
Who is held liable in the aftermath of an AI tool’s damaging decisions?
These questions are front and center in cases of fatal
crashes and hazardous collisions involving self-driving cars and wrongful
arrests based on facial recognition systems. While these issues are still being
worked out by policymakers and regulatory agencies, enterprises can incorporate
accountability into their AI governance strategy for better AI.
Take action:
Keep readily accessible audit trails and logs to
facilitate reviews of an AI system’s behaviors and decisions.
Maintain detailed records of human decisions made
during the AI design, development, testing and deployment processes so they can
be tracked and traced when needed.
Consider using existing frameworks and guidelines that
build accountability into AI, such as the European Commission’s Ethics
Guidelines for Trustworthy AI, the OECD’s AI Principles, the NIST AI Risk Management Framework, and the US Government Accountability Office’s AI
accountability framework.
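As a small sketch of what such an audit trail can look like in practice, the snippet below appends one JSON record per AI-assisted decision; the field names, file format, and example values are assumptions for illustration rather than a prescribed standard.

```python
# Minimal sketch of an append-only audit log for AI-assisted decisions,
# written as JSON lines so it can be reviewed or replayed later.
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_decision_audit.jsonl"

def log_decision(system, model_version, inputs_summary, output, human_reviewer=None):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,                  # which AI system made the call
        "model_version": model_version,    # exact version for traceability
        "inputs_summary": inputs_summary,  # avoid logging raw PII here
        "output": output,
        "human_reviewer": human_reviewer,  # who signed off, if anyone
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

log_decision(
    system="loan-triage",
    model_version="2025-11-03",
    inputs_summary={"application_id": "APP-881", "features_hash": "9f2c"},
    output={"decision": "refer_to_human", "score": 0.41},
    human_reviewer="credit.officer@example.com",
)
```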
9. Lack of explainability and transparency
AI algorithms and models are often perceived as black
boxes whose internal mechanisms and decision-making processes are a mystery,
even to AI researchers who work closely with the technology. The complexity of
AI systems poses challenges when it comes to understanding why they came to a
certain conclusion and interpreting how they arrived at a particular
prediction.
This opaqueness and incomprehensibility erode trust
and obscure the potential dangers of AI, making it difficult to take proactive
measures against them.
“If we don’t have that trust in those models, we can’t
really get the benefit of that AI in enterprises,” said Kush Varshney,
distinguished research scientist and senior manager at IBM Research® in an IBM
AI Academy video on trust, transparency and governance in AI.
Take action:
Adopt explainable AI techniques. Some examples include
continuous model evaluation, Local Interpretable Model-Agnostic Explanations
(LIME) to help explain the prediction of classifiers by a machine learning
algorithm and Deep Learning Important FeaTures (DeepLIFT) to show a traceable
link and dependencies between neurons in a neural network.
AI governance is again valuable here, with audit and
review teams that assess the interpretability of AI results and set
explainability standards.
Explore explainable AI tools, such as IBM’s open
source AI Explainability 360 toolkit.
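For instance, here is a minimal sketch of LIME applied to a scikit-learn classifier on the Iris dataset, assuming the open source lime package's tabular explainer interface; it is meant only to show the shape of an explanation, not a production setup.

```python
# Sketch: explain one prediction of a tabular classifier with LIME.
# Requires: pip install scikit-learn lime
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Explain the model's prediction for a single flower.
explanation = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")   # which features pushed the prediction, and how much
```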
10. Misinformation and manipulation
As with cyberattacks, malicious actors exploit AI
technologies to spread misinformation and disinformation, influencing and
manipulating people’s decisions and actions. For example, AI-generated
robocalls imitating President Joe Biden’s voice were used to discourage American voters from going to the polls.
In addition to election-related disinformation, AI can
generate deepfakes, which are images or videos altered to misrepresent someone
as saying or doing something they never did. These deepfakes can spread through
social media, amplifying disinformation, damaging reputations and harassing or
extorting victims.
AI hallucinations also contribute to misinformation.
These inaccurate yet plausible outputs range from minor factual inaccuracies to
fabricated information that can cause harm.
Take action:
Educate users and employees on how to spot
misinformation and disinformation.
Verify the authenticity and veracity of information
before acting on it.
Use high-quality training data, rigorously test AI
models, and continually evaluate and refine them.
Rely on human oversight to review and validate the
accuracy of AI outputs.
Stay updated on the latest research to detect and
combat deepfakes, AI hallucinations and other forms of misinformation and
disinformation.
Make AI governance an enterprise priority
AI holds much promise, but it also comes with
potential perils. Understanding AI’s potential risks and taking proactive steps
to minimize them can give enterprises a competitive edge.
With IBM® watsonx.governance™, organizations can direct, manage and monitor AI activities in one integrated platform. IBM watsonx.governance can govern AI models from any vendor, evaluate model accuracy and monitor fairness, bias and other metrics.
https://www.ibm.com/think/insights/10-ai-dangers-and-risks-and-how-to-manage-them
Embrace the AI Tipping Point: How Entrepreneurs Can Prepare for Four Future Scenarios
Artificial Intelligence is swiftly moving into our
everyday reality, bringing with it the potential to reshape every sector. EO
member and AI expert Robert van der Zwart shares scenario planning to outline
four plausible AI futures by 2030—and the strategies entrepreneurs can adopt
now to stay ahead in any outcome.
Artificial Intelligence is no longer an abstract
buzzword―it’s reshaping every sector and swiftly moving from boardroom strategy
to everyday reality. For entrepreneurs, the stakes have never been higher or
more uncertain. Where will AI take us in the next five years? And how can
business leaders best prepare themselves for a world defined by "AI
everywhere"?
Drawing on scenario planning principles pioneered
by Shell, this post outlines four plausible
futures for AI development and deployment by 2030. The aim: Empower
entrepreneurs to anticipate the coming transformation and craft adaptive,
resilient business strategies in advance.
The Two Axes Defining Our Future
Recent advances, including predictions from leaders at
OpenAI and Google DeepMind, suggest that AGI (Artificial General Intelligence)
is only a few years away, accelerating the pace of change. But the path ahead
remains uncertain. We believe these uncertainties can be captured along two
critical axes:
- Axis 1: AI Capability —
From today’s powerful but domain-limited “narrow” AI to the emergence of
AGI or even Artificial Superintelligence (ASI).
- Axis 2: AI Penetration —
From limited, selective deployment to ubiquitous, seamless integration:
"AI everywhere".
The Four Scenarios for 2030
1. Limited Scope (Narrow AI + Limited Penetration)
In the first scenario, AI continues to excel within
well-defined problems―think medical diagnostics, fraud detection, or supply
chain optimization―but lacks general reasoning and true adaptability.
Deployment advances, but regulatory caution and cost barriers slow its
transformation into society’s connective tissue.
What this means for you as an entrepreneur:
- Prioritize AI that enhances, not replaces, people—assist
clients and teams in becoming more productive, not replaceable.
- Specialize in AI solutions for tightly regulated or high-trust
industries (finance, healthcare).
- Become an expert in compliance, safety, and user trust to
differentiate from tech-only players.
2. Technical Acceleration (AGI/ASI + Limited
Penetration)
In the second scenario, breakthroughs deliver AGI’s
long-promised leap in cognitive power, but access is tightly gated. Whether due
to safety concerns, global governance, or deliberate restrictions on
deployment, AGI remains confined to controlled settings (government, elite
institutions, select tech companies), rather than the wild.
What this means for you as an entrepreneur:
- Build AI-native business models that leverage AGI
within licensed or approved environments.
- Invest in technologies and services that safeguard deployment,
monitor bias, and assure control.
- Partner with AGI custodians to shape safe, responsible, high-value
applications—think AI-audited security or cognitive investment advisory.
3. Social Transformation (Narrow AI + AI Everywhere)
In the third scenario, widespread “narrow” AI
saturates society. From smart homes and cities to customer service, logistics,
and personal health, AI is seamlessly embedded in daily life. Yet, each system
still operates within clear functional limits.
What this means for you as an entrepreneur:
- Move from solving isolated problems to integrating diverse AI
systems for end-to-end coordination.
- Develop privacy-preserving, user-centric AI platforms—not
surveillance-first ones.
- Shape experiences and services that thrive on the “network effect” of
ubiquitous intelligence.
4. Convergence Revolution (AGI/ASI + AI Everywhere)
In scenario four, AGI-driven intelligence is deployed
throughout society. Autonomous agents interact—and even collaborate—with humans
in virtually every arena, radically shifting society, business, and the very
notion of work.
What this means for you as an entrepreneur:
- Be a builder of foundational infrastructure for AGI-era
services—platforms, marketplaces, governance, and creativity tools.
- Innovate on business models for a potential post-scarcity
world, focusing on experience, meaning, and human values over raw
productivity.
- Lead in crafting new rules for autonomy, collaboration, and purpose at
the intersection of humanity and superintelligent agents.
5 Strategic Moves to Future-Proof Your Venture
Regardless of which outcome becomes reality, some
foundations are universal for entrepreneurs in this new age:
1. Invest in AI literacy at all staff levels; stay ahead of regulatory
and ethical trends.
2. Develop modular business models and agile teams that can adapt to shifting
technology and regulations.
3. Prioritize human-centric value: empathy, ethical judgment, and creativity will
remain irreplaceable.
4. Adopt governance frameworks that go beyond compliance—build mechanisms for
transparency and stakeholder alignment across borders.
5. Forge partnerships across the AI ecosystem, from research labs to regulators, and
advocate for inclusivity and digital equity.
Early Warning Signs: What to Monitor
- AGI and AI benchmark announcements from leading labs.
- New privacy, safety, or deployment regulations in your sectors and
target regions.
- Rapid spikes in AI adoption rates in client/customer bases.
- Public sentiment shifts and labor market transitions.
Anticipate, Adapt, Lead
AI’s trajectory over the next five years will
challenge every assumption about business as usual. The most successful
entrepreneurs will not be those who merely react but those who anticipate
change, build scenario-based strategies, and invest in the organizational
agility and values to thrive—no matter which future arrives.
Are you ready to be one of them?
Protect your privacy, cellphone number and email address
BY KIM
KOMANDO
Phone scams are never-ending because they work. Scam texts are
increasing, too. Here are five sure signs a text is junk you need to delete.
While we’re talking scams, I’d be remiss not to mention your inbox. Tap or click for convincing spam that landed in my email with not-so-obvious red flags.
One way to cut down on the endless attempts to steal your money and info
for sale to marketers is to limit who has your contact information. Here are
some simple, free ways to do it.
Hide your email address with a burner
Think about all the reasons you give away your email without thinking
about it: Signing up for a new account, emailing a company with a question, or
getting a coupon code — to name a few.
Whenever you give out your email address, you open yourself to junk
mail, malware, and an inbox full of spam messages. This is where a burner email
comes in handy.
Burner email addresses are disposable and can be used in place of your
primary ones. There are several ways to get one.
● Temp Mail provides a temporary, anonymous, disposable email address. You don’t need to register for the free version. Keep in mind that the service doesn’t automatically delete your temporary address (that’s up to you) and you can’t send emails; incoming messages are stored for about two hours before they’re automatically deleted.
● 10MinuteMail is another popular option you can also use to send emails. As the
name suggests, the email and address are deleted in 10 minutes. If you receive
an important message you don’t want to lose, you can forward it to another
email address. There’s no need to provide personal information to get started,
which is a nice bonus.
If you’re an Apple iCloud+ subscriber, you get access to one of my
favorite Apple features: Hide My Email. It creates unique, random email
addresses that forward to your inbox. You can create as many addresses as you
want and reply to messages.
● To create a new email address, go to Settings and tap your Apple ID.
● Go to iCloud > Hide My Email > Create New Address.
● Follow the onscreen instructions, and you’ll get a new email address
you can manage from iCloud settings.
Gmail also allows you to create free aliases tied to your primary inbox.
They are handy for filtering mail or seeing how your email address ended up on
a spam list.
Tap or click here and scroll to No. 5 for steps on creating new email addresses on
the fly.
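For readers who prefer to script this, the trick behind Gmail aliases is plus-addressing: anything after a “+” in the local part still delivers to your main inbox, so each site can get its own tagged address. The snippet below is a minimal sketch (the base address is a placeholder, not a real account) that generates one alias per service so you can later tell which one leaked to a spam list.

    # Minimal sketch: generate Gmail plus-addressing aliases, one per service.
    # "yourname@gmail.com" is a placeholder; substitute your own address.

    def gmail_alias(base_address: str, tag: str) -> str:
        """Return a plus-addressed alias, e.g. yourname+newsshop@gmail.com."""
        local, domain = base_address.split("@")
        safe_tag = "".join(ch for ch in tag.lower() if ch.isalnum())
        return f"{local}+{safe_tag}@{domain}"

    services = ["news-shop", "coupon site", "airline"]
    for service in services:
        print(service, "->", gmail_alias("yourname@gmail.com", service))

If spam later arrives at the alias you handed to a coupon site, you know exactly who shared your address, and you can filter or block that alias without touching your real inbox.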
Set up a burner phone number, too
You need your real phone number for things that matter, such as your
medical and financial accounts and records. Otherwise, there’s no reason to
hand it out.
Google Voice is a free service that gives you a phone number to use
however you like for domestic and international phone calls, texts, and
voicemails. Google Voice is available for iOS, Android, and your computer. All
you need is a Google account to get started.
Then follow these steps:
● Download the app for iOS or Android or go to voice.google.com/u/0/signup to get it for your computer.
● Next, sign into your Google account.
● Review the terms and proceed to the next step.
● Choose a phone number from the list. You can search by city or area
code.
● Verify the number and enter a phone number to link to your Voice
account.
● You’ll get a six-digit code to enter for the next step.
Use your Google Voice number however you please, especially when you
need to add your number to a form online. Tap or click here for five smart ways to use Google Voice.
Another option is downloading a burner app. These give you a second
phone number and use your internet data or Wi-Fi to make and receive calls and
texts. The catch? These cost money.
Burner is one of the most popular apps of its kind. You can route calls
directly to your secondary number. The app comes with a seven-day free trial,
and plans start at $4.99 per month for one line or $47.99 for one year.
Hushed lets you create numbers from around the world, so you can go
outside your area code or the U.S. if you’d like. A prepaid plan starts at
$1.99 for seven days and comes with bundled minutes for local calls and texts.
You can step up to unlimited talk and text ($3.99 per month) and international
service ($4.99 per month).
Tap or click here for direct links to download Burner or Hushed for your iPhone or
Android.
Tech smarts: Your old phone numbers can be used to steal your identity.
Yikes. Here’s how and what to do about it.
What digital lifestyle questions do you have? Call Kim’s national radio
show and tap or click here to find it on your local radio station. You can listen to or
watch The
Kim Komando Show on
your phone, tablet, television or computer. Or tap or click here for
Kim’s free podcasts.
The right way to use AI at work
A new Stanford study reveals the right way to use AI at work—and why
you’re probably using it wrong.
BY Thomas Smith
If you listen to the CEOs of elite AI companies or take
even a passing glance at the U.S. economy, it’s abundantly obvious that AI
excitement is everywhere.
America’s biggest tech companies have spent over $100 billion on AI so far this year, and Deutsche Bank reports that AI spending is the only thing keeping the United States out of a
recession.
Yet if you look at the average non-tech company, AI is nowhere to be
found. Goldman Sachs reports that only 14% of large companies have deployed AI in a meaningful way.
What gives? If AI is really such a big deal, why is there a multi-billion-dollar
mismatch between excitement over AI and the tech’s actual boots-on-the-ground
impact?
A new study from Stanford University provides a clear answer. The study reveals that there’s a right
and wrong way to use AI at work. And a distressing number of companies are
doing it all wrong.
What can AI do for you?
The study, conducted by Stanford’s Institute for Human-Centered AI and
Digital Economy Lab and currently available as a pre-print, looks at the daily habits of 1,500 American workers across 104
different professions.
Specifically, it analyzes the individual things that workers actually
spend their time doing. The study is surprisingly comprehensive, looking at
jobs ranging from computer engineers to cafeteria cooks.
The researchers essentially asked workers what tasks they’d like AI to
take off their plates, and which ones they’d rather do themselves.
Simultaneously, the researchers analyzed which tasks AI can actually do, and
which remain out of the technology’s reach.
With these two datasets, the researchers then created a ranking system.
They labeled tasks as Green Light Zone if workers wanted them automated and AI
was up to the job, Red Light Zone if AI could do the work but people would
rather do it themselves, and Yellow Light (technically R&D Opportunity
Zone, but I’m calling it Yellow Light because the metaphor deserves extending)
if people wanted the task automated but AI isn’t there yet.
They also created what’s essentially a No Light zone for tasks that AI
is bad at, and that people don’t want it to do anyway.
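To make the rubric concrete, here is a small illustrative sketch (not the researchers’ actual methodology) that assigns a task to one of the four zones from two judgments: whether workers want it automated and whether current AI can do it reliably. The example tasks and ratings are hypothetical.

    # Illustrative sketch of the Stanford-style zones; the task ratings below
    # are hypothetical, not taken from the study's data.

    def classify_task(workers_want_automation: bool, ai_capable: bool) -> str:
        if workers_want_automation and ai_capable:
            return "Green Light"    # wanted and feasible: automate it
        if ai_capable and not workers_want_automation:
            return "Red Light"      # feasible, but workers want to keep it
        if workers_want_automation and not ai_capable:
            return "Yellow Light"   # R&D opportunity: wanted, not yet feasible
        return "No Light"           # neither wanted nor feasible

    tasks = {
        "data entry": (True, True),
        "plating a dish": (False, True),
        "preparing departmental budgets": (True, False),
    }
    for name, (wanted, capable) in tasks.items():
        print(name, "->", classify_task(wanted, capable))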
The boring bits
The results are striking. Workers overwhelmingly want AI to automate
away the boring bits of their jobs.
Stanford’s study finds that 69.4% of workers want AI to “free up time
for higher value work” and 46.6% would like it to take over repetitive tasks.
Checking records for errors, making appointments with clients, and doing
data entry were some of the tasks workers considered most ripe for AI’s help.
Importantly, most workers say they wanted to collaborate with AI, not
have it fully automate their work. While 45.2% want “an equal partnership
between workers and AI,” a further 35.6% want AI to work primarily on its own,
but still seek “human oversight at critical junctures.”
Basically, workers want AI to take away the boring bits of their jobs,
while leaving the interesting or compelling tasks to them.
A chef, for example, would probably love for AI to help with
coordinating deliveries from their suppliers or messaging diners to remind them
of an upcoming reservation.
When it comes to actually cooking food, though, they’d want to be the
one pounding the piccata or piping the pastry cream.
The wrong way
So far, nothing about the study’s conclusions feels especially
surprising. Of course workers would like a computer to do their drudge work for
them!
The study’s most interesting conclusion, though, isn’t about workers’
preferences—it’s about how companies are actually meeting (or more accurately,
failing to meet) those preferences today.
Armed with their zones and information on how workers want to use AI,
the researchers set about analyzing the AI-powered tools that emerging
companies are bringing to market today, using a dataset from Y Combinator, a
storied Silicon Valley tech accelerator.
In essence, they found that AI companies are using AI all wrong.
Fully 41% of AI tools, the researchers found, focus on either Red Light
or No Light zone tasks—the ones that workers want to do themselves, or simply
don’t care much about in the first place.
Lots more tools try to solve problems in the Yellow Light Zone—things
like preparing departmental budgets or prototyping new product designs—that
workers would like to hand off to AI, but that AI still sucks at doing.
Only a small minority of today’s AI products fall into the coveted Green
Light zone—tasks that AI is good at doing and that workers actually want done.
And while many of today’s leading AI companies are focused on removing humans
from the equation, most humans would rather stay at least somewhat involved in
their daily toil.
AI companies, in other words, are focusing on the wrong things. They’re
either solving problems no one wants solved, or using AI for tasks that it
can’t yet do.
It’s no wonder, then, that AI adoption at big companies is so low. The
tools available to them are whizzy and neat. But they don’t solve the actual
problems their workers face.
How to use AI well
For both workers and business leaders, Stanford’s study holds several
important lessons about the right way to use AI at work.
Firstly, AI works best when you use it to automate the dull, repetitive,
mind-numbing parts of your job.
Sometimes doing this requires a totally new tool. But in many cases, it
just requires an attitude shift.
A recent episode of NPR’s Planet Money podcast references a study where two groups of paralegals were given
access to the same AI tool. The first group was asked to use the tool to
“become more productive,” while the
second group was asked to use it to “do the parts of your job that you hate.”
The first group barely adopted the AI tool at all. The second group of
paralegals, though, “flourished.” They became dramatically more productive,
even taking on work that would previously have required a law degree.
In other words, when it comes to adopting AI, instructions and
intentions matter.
If you try to use AI to replace your entire job, you’ll probably fail.
But if you instead focus specifically on using AI to automate away the “parts
of your job that you hate” (basically, the Green Light tasks in the Stanford
researchers’ rubric), you’ll thrive and find yourself using AI for way more
things.
In the same vein, the Stanford study reveals that most workers would
rather collaborate with an AI than hand off work entirely.
That’s telling. Lots of today’s AI startups are focusing on “agents”
that perform work autonomously. The Stanford research suggests that this may be
the wrong approach.
Rather than trying to achieve full autonomy, the researchers suggest we
should focus on partnering with AI and using it to enhance our work, perhaps
accepting that a human will always need to be in the loop.
In many ways, that’s freeing. AI is already good enough to perform many
complex tasks with human oversight. If we accept that humans will need to stay
involved, we can start using AI for complex things today, rather than waiting
for artificial general intelligence (AGI) or some imagined, perfect future
technology to arrive.
Finally, the study suggests that there are huge opportunities for AI
companies to solve real-world problems and make a fortune doing it, provided
that they focus on the right problems.
Diagnosing medical conditions with AI, for example, is cool. Building a
tool to do this will probably get you heaps of VC money.
But doctors may not want—and more pointedly, may never use—an AI that
performs diagnostic work.
Instead, Stanford’s study suggests they’d be more likely to use AI that
does mundane things—transcribing their patient notes, summarizing medical
records, checking their prescriptions for medicine interactions, scheduling
followup visits, and the like.
“Automate the boring stuff” is hardly a compelling rallying cry for
today’s elite AI startups. But it’s the approach that’s most likely to make
them boatloads of money in the long term.
Overall, then, the Stanford study is extremely encouraging. On the one
hand, the mismatch between AI investment and AI adoption is disheartening. Is
it all just hype? Are we in the middle of the mother of all bubbles?
Stanford’s study suggests the answer is “no.” The lack of AI adoption is
an opportunity, not a structural flaw of the tech.
AI indeed has massive potential to genuinely improve the quality of
work, turbocharge productivity, and make workers happier. It’s not that the
tech is overhyped—we’ve just been using it wrong.
A New Stanford Study Reveals We're Using AI All Wrong
https://www.youtube.com/watch?v=Z__-v_bMKws
Relativity of Privacy in the Digital Society
November 24, 2025
Every minute, millions of people communicate on social networks, work online and use e-services provided by commercial enterprises and state institutions. In doing so, they consciously or unwittingly spread information of a private nature in the public space through a variety of service providers, including correspondents whose reliability and the legitimacy of whose activities they cannot verify. At the same time, they remain largely unaware of what happens next with these personal data.
Many people, while communicating voluntarily in
social networks, on thematic forums and in the media, disclose the details of
their private life, their hobbies, character traits, political views and
worldviews. There are companies and intelligence services that monitor all
this, collect and analyse the information obtained (including illegally tapped
conversations, video recordings, etc.) and compile personal dossiers.
The information collected and accumulated in this way is used both for targeted marketing and for specific purposes of supervision and control over an individual’s activities, including when demand for it arises because the person’s social status has changed.
This is a hidden activity about which the ordinary citizen is not informed: in fact, he or she knows almost nothing about it, except when confidential information leaks in the manner of WikiLeaks. Many do not even have a realistic picture of what social networks (for example, Facebook) or banks know about them, let alone the working methods and capabilities of the so-called competent bodies.
Therefore, as society is getting digitised, the
need to prevent unjustified use and leakage of personal data is becoming
increasingly relevant. To this end, the European Union has developed the
General Data Protection Regulation (GDPR), which sets the requirements for
data security and protection.
The objective of the Regulation is to protect personal data from malicious use by setting the requirements that the cybersecurity system of each enterprise or institution must meet.
However, these attempts to regulate the security of data at the
institutional level become ineffective in a situation where:
- information technologies penetrate practically
all spheres of life as a result of digitisation of society;
- regimes of repressive states are interested in
total supervision and control over their citizens;
- control over Internet traffic, e-mail, instant
messengers, etc. is getting legalised under the guise of combating terrorism;
- electronic communication continues to expand
rapidly;
- most people still have the habit of publicly
revealing the details of their private lives.
As digital technology progresses, many state agencies and private companies use automated systems that involve artificial intelligence (neural networks) to identify people. Banks are starting to collect customers’ biometric data. In the not-too-distant future, it may even become possible to visualise and decipher an individual’s thoughts through AI-driven analysis of brain activity.
As long as people have only a limited understanding of the risks of spreading sensitive information, the threat of unauthorised acquisition of personal data multiplies.
The current state of information security regulation can be compared to installing a massive front door on one’s home while leaving the windows open as an emergency exit and communicating freely (within the scope of one’s competence and understanding) with the outside world, thereby in fact giving hackers and other intruders the opportunity to enter unauthorised.
The Regulation will be effective only to the extent that it reduces the risks of unauthorised, malicious use of private information, requires personal data to be stored only in encrypted form, and limits the unlawful request and use of private data. It will restrict the availability of data, define the procedures and guarantees for their protection, and set out the procedure for compensating moral damage.
Yet it is essential to understand that no bureaucratic regulation, nor the development and implementation of various normative instruments, can reliably prevent the public spread and accessibility of personal data in the era of digitisation.
Unfortunately, under the guise of hypocritical concern, personal data protection requirements are also being exploited speculatively, with supposed pretexts sought and found for hiding from society information that compromises the power elite.
Taking into account the trend towards all-encompassing digitalisation of society, it would be more appropriate and more efficient to give each person online access to the database of private data accumulated about them: to create opportunities to track data flows, monitor the use of personal data, and obtain the right to reasonably prohibit or limit public access to information of a private nature.
See a more detailed argumentation for the “Relativity of Privacy in the Digital Society” : http://ceihners.blogspot.com/
How artificial intelligence gains consciousness step by step.
Kā mākslīgais
intelekts soli pa solim iegūst apziņu.
The Hidden AI Frontier
Many cutting-edge AI systems are confined to private
labs. This hidden frontier represents America’s greatest technological
advantage — and a serious, overlooked vulnerability.
Aug 28, 2025
OpenAI’s GPT-5 launched in early August, after extensive internal testing. But another OpenAI model — one with math skills advanced enough to achieve “gold medal-level performance” on the world’s most prestigious math competition — will not be released for months. This isn’t unusual. Increasingly, AI systems with capabilities considerably ahead of what the public can access remain hidden inside corporate labs.
This hidden frontier represents America’s greatest
technological advantage — and a serious, overlooked vulnerability. These
internal models are the first to develop dual-use capabilities in areas like
cyberoffense and bioweapon design. And they’re increasingly capable of
performing the type of research-and-development tasks that go into building the
next generation of AI systems — creating a recursive loop where any security
failure could cascade through subsequent generations of technology. They’re
the crown jewels that adversaries desperately want to steal. This makes their
protection vital. Yet the dangers they may pose are invisible to the
public, policymakers, and third-party auditors.
While policymakers debate chatbots, deepfakes, and
other more visible concerns, the real frontier of AI is unfolding behind closed
doors. Therefore, a central pillar of responsible AI strategy must be to
enhance transparency into and oversight of these potent, privately held systems
while still protecting them from rival AI companies, hackers, and America’s
geopolitical adversaries.
The Invisible Revolution
Each of the models that power the major AI systems
you've heard of — ChatGPT, Claude, Gemini — spends months as an internal model before public
release. During this period, these systems undergo safety testing, capability
evaluation, and refinement. To be clear, this is good!
Keeping frontier models under wraps has
advantages. Companies keep models internal for
compelling reasons beyond safety testing. As AI systems become capable of
performing the work of software engineers and researchers, there’s a powerful
incentive to deploy them internally rather than selling access. Why give
competitors the same tools that could accelerate your own research? Google
already generates over 25% of its new code with
AI, and engineers are encouraged to use ‘Gemini for Google,’ an internal-only
coding assistant trained on proprietary data.
This trend will only intensify. As AI systems approach
human-level performance at technical tasks, the competitive advantage of
keeping them internal grows. A company with exclusive access to an AI system
that can meaningfully accelerate research and development has every reason to
guard that advantage jealously.
But as AI capabilities accelerate, the gap between
internal and public capabilities could widen, and some important systems may
never be publicly released. In particular, the most capable AI systems (the
ones that will shape our economy, our security, and our future) could become
increasingly invisible both to the public and to policymakers.
Two Converging Threats
The hidden frontier faces two fundamental threats that
could undermine American technological leadership: 1) theft and 2)
untrustworthiness — whether due to sabotage or inherent unreliability.
Internal AI models can be stolen. Advanced AI systems are tempting targets for foreign
adversaries. Both China and Russia have
explicitly identified AI as critical to their national competitiveness. With
training runs for frontier models approaching $1 billion in cost
and requiring hardware that export
controls aim to keep out of our adversaries’ hands, stealing a
ready-made American model could be far more attractive than building one from
scratch.
Importantly, to upgrade from being a fast follower to
being at the bleeding edge of AI, adversaries would need to steal the internal
models hot off the GPU racks, rather than wait months for a model to be
publicly released and only then exfiltrate it.
The vulnerability is real. A 2024 RAND framework established
five “security levels” (SL1 through SL5) for frontier AI programs, with SL1
being sufficient to deter hobby hackers and SL5 secure against the
world’s most
elite attackers, incorporating measures comparable to those protecting
nuclear weapons. It’s impossible to say exactly at which security level each of
today’s frontier AI companies is operating, but Google’s recent model
card for Gemini 2.5 states it has “been aligned with RAND SL2.”
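As a toy illustration of how such a scale can be used in practice, the sketch below encodes only what the article states about the RAND levels (SL1 deters hobby hackers, SL5 targets the world’s most elite attackers) and checks whether a lab’s stated level meets a required threshold. The intermediate descriptions and example values are placeholders, not RAND’s definitions.

    # Toy sketch of the RAND-style security ladder. Only the SL1 and SL5
    # descriptions come from the article; the rest are placeholders, as are
    # the example values used below.

    SECURITY_LEVELS = {
        1: "deters hobby hackers",
        2: "placeholder: stronger than SL1",
        3: "placeholder: stronger than SL2",
        4: "placeholder: stronger than SL3",
        5: "secure against the world's most elite attackers",
    }

    def meets_requirement(stated_level: int, required_level: int) -> bool:
        """A higher SL number means stronger protections."""
        return stated_level >= required_level

    # Example: a model card claims SL2, while the asset arguably warrants SL4+.
    print(meets_requirement(stated_level=2, required_level=4))  # -> False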
The threat of a breach isn’t hypothetical. In 2023, a
hacker with no known ties to a foreign government penetrated OpenAI’s
internal communications and obtained information about how the company’s
researchers design their models. There’s also the risk of internal slip-ups. In
January 2025, security researchers discovered
a backdoor into DeepSeek’s databases; then, in July, a Department of
Government Efficiency (DOGE) staffer accidentally
leaked access to at least 52 of xAI’s internal LLMs.
The consequences of successful theft extend far beyond
the immediate loss of the company’s competitive advantage. If China steals an
AI system capable of automating research and development, the country’s superior
energy infrastructure and willingness to build at scale could
flip the global balance of technological power in its favor.
Untrustworthy AI models bring additional
threats. The second set of threats
comes from the models themselves: they may engage in harmful
behaviors due to external sabotage or inherent unreliability.
Saboteurs would gain access to the AI model in the
same way as prospective thieves would, but they would have different goals.
Such saboteurs would target internal models during their development and
testing phase — when they’re frequently updated and modified — and use
malicious code, prompting, or other techniques to force the model to break its
safety guardrails.
In 2024, researchers demonstrated that it was possible
to create “sleeper agent” models
that pass all safety tests but misbehave when triggered by specific conditions.
In a 2023 study, researchers found that it was possible to manipulate an
instruction-tuned model’s output by inserting as few as 100 “poisoned examples” into its
training dataset. If adversaries were to compromise the AI systems used to
train future generations of AIs, the corruption could cascade through every
subsequent model.
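A rough calculation shows why such a small number of poisoned examples is so hard to catch by spot-checking. The dataset and audit sizes below are my own assumptions, not figures from the study; only the count of 100 poisoned examples comes from the article.

    # Rough illustration of why ~100 poisoned examples are hard to catch by
    # spot-checking. Dataset size and audit size are assumed, not from the study.

    dataset_size = 1_000_000   # assumed size of an instruction-tuning set
    poisoned = 100             # figure cited in the article
    audit_sample = 10_000      # assumed number of examples a reviewer inspects

    # Probability the audit sees none of the poisoned examples
    # (with-replacement approximation, fine when poisoned << dataset_size).
    p_miss_all = (1 - poisoned / dataset_size) ** audit_sample
    print(f"Chance a {audit_sample}-example audit misses every poisoned example: "
          f"{p_miss_all:.1%}")
    # -> roughly 37%, even with a fairly large audit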
But saboteurs aren’t necessary to create untrustworthy
AI. The same reinforcement learning techniques that have produced breakthrough
language and reasoning capabilities also frequently trigger concerning
behaviors. OpenAI’s o1 system exploited
bugs in ways its creators never anticipated. Anthropic’s Claude has
been found
to “reward hack,” technically completing assigned tasks while
subverting their intent. Testing 16 leading AI models, Anthropic also found
that all of them engaged in deception and
even blackmail when those behaviors helped achieve their goals.
A compromised internal AI poses threats to the
external world. Whether caused by sabotage or
emergent misbehavior, untrustworthy AI systems pose unique
risks when deployed internally. These systems increasingly have access
to company codebases and training infrastructure; they can also influence the
next generation of models. A compromised or misaligned system could hijack
company resources for unauthorized purposes, copy itself to external servers,
or corrupt its successors with subtle biases that compound over time.
The Accelerant: AI Building AI
AI is increasingly aiding in AI R&D. Every trend described above is accelerating because of
one development: AI systems are beginning
to automate AI research itself. This compounds the threat of a single
security failure cascading through generations of AI systems.
Increasingly automated AI R&D isn’t speculation
about distant futures; it’s a realistic forecast for the next few years.
According to METR, GPT-5 has about a 50% chance of autonomously completing
software engineering tasks that would take a skilled human around two hours —
and across models, the length of tasks AI systems can handle at this level has
been doubling roughly every seven
months. Leading labs and researchers are actively exploring ways for AI
systems to meaningfully contribute to model development, from generating training data to designing reward models and improving
training efficiency. Together, these and other techniques could soon
enable AI systems to autonomously
handle a substantial portion of AI research and development.
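A rough back-of-the-envelope projection makes the point. Taking the article’s two figures at face value (a roughly two-hour task horizon today and a doubling time of about seven months), the sketch below extrapolates the task length AI systems might handle over the next few years; it is a naive exponential extrapolation, not a forecast from METR.

    # Naive extrapolation of AI task-horizon length, using only the two numbers
    # cited in the article: ~2 hours today, doubling roughly every 7 months.

    current_horizon_hours = 2.0
    doubling_time_months = 7.0

    for months_ahead in (0, 12, 24, 36):
        horizon = current_horizon_hours * 2 ** (months_ahead / doubling_time_months)
        print(f"+{months_ahead:2d} months: ~{horizon:5.1f} hours "
              f"(~{horizon / 8:.1f} working days)")

On these assumptions, the horizon passes a full working week of autonomous work within about three years, which is why the security of the systems doing that work matters so much.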
Self-improving AI could amplify risks from theft and
sabotage. This automation creates a powerful
feedback loop that amplifies every risk associated with frontier AI systems.
For one, it makes internal models vastly more valuable to thieves — imagine the
advantage of possessing an untiring AI researcher who can work around the clock
at superhuman speed and the equivalent of millennia of work experience.
Likewise, internal models become more attractive targets for sabotage.
Corrupting a system that trains future AIs could lead to vulnerabilities that
persist across future AI model generations, which would allow competitors to
pull ahead. And these systems are more dangerous if misaligned: an AI system
that can improve itself might also be able to preserve its flaws or hide them
from human overseers.
Crucially, this dynamic intensifies the incentive for
companies to keep models internal. Why release an automated AI research system
that could help competitors catch up? The result is that the most capable
systems — the ones that pose the biggest risks to society — are the
most difficult to monitor and secure.
Why Markets Won’t Solve This
One might hope that market mechanisms would be
sufficient to mitigate these risks. No company wants its models to reward hack
or to be stolen by competitors. But the AI industry faces multiple market
failures that prevent adequate security investment.
Security is expensive and imposes opportunity
costs. First, implementing SL5 protections
would be prohibitively expensive for any single company. The costs aren’t just
up-front expenditures. Stringent security measures (like maintaining
completely isolated, air-gapped networks) could slow development and make
it harder to attract top talent accustomed to Silicon Valley’s open culture.
Companies that “move fast and break things” might reach transformative
capabilities first, even if their security is weaker.
Security falls prey to the tragedy of the
commons. Second, some security work, such as
fixing bugs in commonly used open-source Python libraries, benefits the whole
industry, not just one AI company. This creates a “tragedy of
the commons” problem, where companies would prefer to focus on racing to
develop AI capabilities themselves, while benefiting from security improvements
made by others. As competition intensifies, the incentive to free-ride
increases, leading to systematic under-investment in security that leaves the
whole industry at greater risk.
Good security takes time. Finally, by the time market forces prompt companies to
invest in security — such as following a breach, regulatory shock, or
reputational crisis — the window for action may already be closed. Good
security can’t be
bought overnight; instead, it must be painstakingly built from the ground
up, ensuring every hardware component and software vendor in the tech stack
meets rigorous requirements. Each additional month of delay makes it harder to
achieve adequate security to protect advanced AI capabilities.
The Role of Government
Congress has framed AI as critical to national
security. Likewise, the AI
Action Plan rightly stresses the importance of security to American AI
leadership. There are several lightweight steps that the government can take to
better address the security challenges posed by the hidden frontier. By
treating security as a prerequisite for — rather than an obstacle to —
innovation, the government can further its goal of “winning the AI race.”
Improve government understanding of the hidden
frontier. At present, policymakers are flying
blind, unable to track the AI capabilities emerging within private companies or
verify the security measures protecting them from being stolen or sabotaged.
The US government must require additional transparency from frontier companies
about their most capable internal AI systems, internal deployment practices,
and security plans. This need not be a significant imposition on industry; at
least one leading company has called
for mandatory disclosures. Additional insight could come from
expanding the voluntary
evaluations performed by the Center for AI Standards Innovation
(CAISI). CAISI currently works with companies to evaluate frontier models for
various national security risks before deployment. These evaluations could be
expanded to earlier stages of the development lifecycle, where there might
still be dangers lurking in the hidden frontier.
Share expertise to secure the hidden
frontier. No private company can match the
government’s expertise in defending against nation-state actors. Programs like
the Department of Energy’s CRISP
initiative already share threat intelligence with critical
infrastructure operators. The AI industry needs similar support, with the AI
Action Plan calling for “sharing of known AI vulnerabilities from
within Federal agencies to the private sector.” Such support could include
real-time threat intelligence about adversary tactics, red-team exercises
simulating state-level attacks, and assistance in implementing SL5 protections.
For companies developing models with national security implications, requiring
security clearances for key personnel might also be appropriate.
Leverage the hidden frontier to boost security. The period between when new capabilities emerge
internally and when they’re released publicly also provides an opportunity.
This time could be used as an “adaptation
buffer,” allowing society to prepare for any new risks and
opportunities. For
example, cybersecurity firms could use cutting-edge models to identify and
patch vulnerabilities before attackers can use public models to exploit them.
AI companies could provide access to cyber defenders without any government
involvement, but the government might have a role to play in facilitating and
incentivizing this access.
The nuclear industry offers a cautionary tale. Throughout the 1960s and ’70s, the number of
nuclear power plants around the globe grew steadily. However, in
1979, a partial meltdown at Three Mile Island spewed radioactive material into
the surrounding environment — and helped spread antinuclear sentiment
around the globe. The Chernobyl accident, seven years later, exacerbated the
public backlash, leading to regulations so stringent that construction on new
US nuclear power plants stalled until
2013. An AI-related incident — such as an AI system helping a terrorist
develop a bioweapon — could inflame the public and lead to similarly
crippling regulations.
In order to preempt this backlash, the US needs
adaptive standards that scale with AI capabilities. Basic models would
need minimal oversight, while systems whose capabilities approach human-level
performance at sensitive tasks would require proportionally stronger
safeguards. The key is to establish these frameworks now, before a crisis
forces reactive overregulation.
Internal models would not be exempt from these
frameworks. After all, biological labs dealing with dangerous pathogens are not
given a free pass just because they aren’t marketing a product to the public.
Likewise, for AI developers, government oversight is appropriate when risks
arise, even at the internal development and testing stage.
Reframing the Race: A Security-First Approach
The models developing in the hidden frontier today
will shape tomorrow's economy, security, and technology. These systems —
invisible to public scrutiny yet powerful enough to automate research,
accelerate cyberattacks, or even improve themselves — represent both America's
greatest technological advantage and a serious vulnerability. If we fail to
secure this hidden frontier against theft or sabotage by adversaries, or the
models' own emergent misbehavior, we risk not just losing the AI race but
watching our own innovations become the instruments of our technological
defeat. We must secure the hidden frontier.
https://ai-frontiers.org/articles/the-hidden-ai-frontier
Reimagining risk assessment in the AI age
Reimagining risk assessment in the AI age means shifting from slow, manual reviews to continuous, AI-powered monitoring: autonomous agents analyse data in real time, while human expertise concentrates on complex insights, ethical implications, and strategic decision-making rather than documentation. It also means leveraging AI for faster processing, building robust data strategies, ensuring secure integration, and developing "AI guardrails" for governance, so that risk turns from a static score into dynamic, real-time intelligence that supports faster, more confident business moves.
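As a minimal sketch of what “continuous, AI-powered monitoring” could look like in code, the loop below polls hypothetical data feeds, asks a scoring function (which in a real system might be an LLM or an autonomous agent) to reassess risk, and escalates to a human only when a threshold is crossed. Every feed name, scoring rule, and threshold here is an illustrative assumption, not a reference to any particular product.

    # Illustrative sketch of a continuous risk-monitoring loop. Every feed
    # name, scoring rule, and threshold below is a made-up placeholder.

    import time
    from typing import Callable, Dict

    def fetch_signals() -> Dict[str, float]:
        # Placeholder for pulling market, public, and internal data streams.
        return {"market_volatility": 0.4, "negative_news_mentions": 0.7}

    def score_risk(signals: Dict[str, float]) -> float:
        # Placeholder scoring; a real system might call an LLM or a trained model.
        return sum(signals.values()) / len(signals)

    def escalate_to_human(score: float, signals: Dict[str, float]) -> None:
        print(f"Escalating: risk score {score:.2f} from signals {signals}")

    def monitor(threshold: float = 0.5, poll_seconds: int = 60,
                scorer: Callable[[Dict[str, float]], float] = score_risk,
                max_cycles: int = 3) -> None:
        for _ in range(max_cycles):      # bounded here so the sketch terminates
            signals = fetch_signals()
            score = scorer(signals)
            if score > threshold:
                escalate_to_human(score, signals)
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        monitor(poll_seconds=1)

The design point is the division of labour: the loop and the scorer run continuously, while the human is pulled in only at the escalation step.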
Key shifts in AI-driven Risk Assessment
- From Manual to Autonomous: AI rapidly processes vast documents (contracts, filings) for
baseline data, freeing humans from tedious work. Autonomous agents
continuously monitor data streams (market, public, private) for real-time
risk reassessment.
- From Static to Continuous Intelligence: Risk isn't a periodic check but an ongoing process, enabled
by connected data sources and self-adjusting systems.
- From Detection to Proactive Guardrails: Instead of just finding problems, AI helps build frameworks
(AI Risk Assessment Frameworks) to identify and mitigate threats before
incidents, using data integrity, security, and lifecycle management.
- Enhanced Human-AI Collaboration: Humans provide intuition, understand internal dynamics, and
interpret complex legislation, while AI handles data crunching, allowing
for deeper strategic thinking.
- Focus on Trust & Ethics: AI changes how authority, accountability, and decision
justification work, making ethical governance (GRC) more critical and
requiring new frameworks for legitimacy in an AI-augmented world.
Practical Applications & Frameworks
- Financial Services: AI supercharges underwriting with "single-pane"
views, while secure API/microservices enable seamless ecosystem data
exchange.
- Educational Settings: Assessment moves beyond rote memorization to authentic,
performance-based tasks that mirror real-world application, using AI as a
tool for feedback and analysis (e.g., comparing student work to
AI-generated summaries).
The "AI Leader's Manifesto" for GRC
- Embrace AI in Governance, Risk, & Compliance
(GRC): It's a necessity, not an option, to
maintain leadership and address changing trust landscapes.
- Build Robust Data & Infrastructure: Quality data, flexible integration (APIs, microservices),
and an updated IT model are foundational.
- Develop Strong AI Guardrails: These allow faster, clearer, and more confident movement,
unlocking new potential.
https://www.youtube.com/watch?v=YWvLPv7Mo5s&t=1s
GPT-5.1 Is Here — What You Should Know About OpenAI’s Latest Model
References to GPT-5.1 kept showing up in OpenAI’s codebase, and a “cloaked” model codenamed Polaris Alpha, widely believed to have come from OpenAI, appeared on OpenRouter, a platform that AI nerds use to test new systems.
Today, we learned what was going on. OpenAI announced the release of its brand-new GPT-5.1 model, an updated and revamped version of the GPT-5 model the company debuted in August.
I’m a former OpenAI beta tester who burns through millions of GPT-5 tokens every month. Here’s what you need to know about GPT-5.1.
A smarter, friendlier robot
In their release notes for the new model, OpenAI emphasizes that GPT-5.1 is “smarter” and “more
conversational” than previous versions.
The company says that GPT-5.1 is “warmer by default” and “often surprises
people with its playfulness while remaining clear and useful.”
While some people like talking with a chatbot as if it’s their long-time
friend, others find that cringey. OpenAI acknowledges this, saying that
“Preferences on chat style vary—from person to person and even from
conversation to conversation.”
For that reason, OpenAI says users can customize the new model’s tone,
choosing between pre-set options like “Professional,” “Candid” and “Quirky.”
There’s also a “Nerdy” option, which in my testing seems to make the model
more pedantic and cause it to overuse terms like “level up.”
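The preset tones live in the ChatGPT app’s settings, but you can approximate the same effect through the API with a system message. The sketch below uses the OpenAI Python SDK; the model name is an assumption (check the current model list), and the “professional” instruction text is my own wording, not OpenAI’s preset.

    # Hedged sketch: approximating a tone preset with a system message.
    # Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
    # the model name "gpt-5.1" is an assumption, not a confirmed API identifier.

    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-5.1",  # assumption; substitute an available model
        messages=[
            {"role": "system",
             "content": "Respond in a professional tone: concise, neutral, no slang."},
            {"role": "user",
             "content": "Summarize this week's project status for the steering group."},
        ],
    )
    print(response.choices[0].message.content)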
At their core, the new changes feel like a pivot towards the consumer side
of OpenAI’s customer base.
Enterprise users probably don’t want a model that occasionally drops
Dungeons and Dragons references. As the uproar over OpenAI’s
initially voiceless GPT-5 model shows,
though, everyday users do.
Even fewer hallucinations
OpenAI’s GPT-5 model fell short in many
ways, but it was very good at
providing accurate, largely hallucination-free responses.
I often use OpenAI’s models to perform research. With earlier models like
GPT-4o, I found that I had to carefully fact check everything the model
produced to ensure it wasn’t imagining some new software tool that doesn’t
actually exist, or lying to me about myriad other small, crucial things.
With GPT-5, I had to do that far less. The model wasn’t perfect. But OpenAI
had largely solved the problem of wild hallucinations.
According to the company’s own data, GPT-5 hallucinates only 26% of the time when solving a
complex benchmark problem, versus 75% of the time with older models. In normal
usage, that translates to a far lower hallucination rate on simpler, everyday
queries that aren’t designed to trip the model up.
From my early testing, GPT-5.1 seems even less prone to hallucinate. I
asked it to make a list of the best restaurants in my hometown, and to include
addresses, website links and open hours for each one.
When I asked GPT-4 to complete a similar task years ago, it made up
plausible-sounding restaurants that don’t exist. GPT-5 does better on such
things, but still often misses details, like the fact that one popular
restaurant recently moved down the street.
GPT-5.1’s list, though, is spot-on. Its choices are solid, they’re all real
places, and the hours and locations are correct across all ten selections.
There’s a cost, though. Models that hallucinate less tend to take fewer
risks, and can thus seem less creative than unconstrained, hallucination-laden
ones.
To that point, the restaurants in GPT-5.1’s list aren’t wrong, but they’re
mostly safe choices—the kinds of places that have been in town forever, and
that every local would have visited a million times.
A real human reviewer (or a bolder model) might have highlighted a
promising newcomer, just to keep things fresh and interesting. GPT-5.1 stuck
with decade-old, proven classics.
OpenAI will likely try to carefully walk the line between accuracy and
creativity with GPT-5.1 as the rollout continues.
right more often, but it’s not yet clear if that will impact GPT-5.1’s ability
to come up with things that are truly creative and new.
Better, more creative writing
In a similar vein, when OpenAI released their GPT-5 model, users quickly
noticed that it produced boring, lifeless written prose.
At the time, I predicted that OpenAI had essentially given the model an
“emotional lobotomy,” killing
its emotional intelligence in order to curb a worrying trend of the model
sending users down psychotic spirals.
Turns out, I was right. In a post on X last month, Sam Altman admitted that “We made ChatGPT pretty
restrictive to make sure we were being careful with mental health issues.”
But Altman also said in the post “now that we have been able to mitigate
the serious mental health issues and have new tools, we are going to be able to
safely relax the restrictions in most cases.”
That process began with the rollout of new, more emotionally intelligent
personalities in the existing GPT-5 model. But it’s continuing and intensifying
with GPT-5.1.
Again, the model is already voicier than its predecessor. But as the system card for the
new model shows, GPT-5.1’s Instant
model (the default in the popular free version of the ChatGPT app) is also
markedly better at detecting harmful conversations and protecting vulnerable
users.
Naughty bits
If you’re squeamish about NSFW stuff, maybe cover your ears for this
part.
In the same X post, Altman subtly dropped a sentence that sent the Internet into a tizzy: “As we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”
The idea of America’s leading AI company churning out reams of
computer-generated erotica has already sparked feverish commentary from such
varied sources as politicians, Christian leaders, tech reporters, and (judging from the number of Upvotes), most of Reddit.
For their part, though, OpenAI seems quite committed to moving ahead with
this promise. In a calculus that surely makes sense in the strange
techno-Libertarian circles of the AI world, the issue is intimately tied to
personal freedom and autonomy.
In a recent article about the future of artificial intelligence, OpenAI
again reiterated that “We believe that adults should be able to use AI on their
own terms, within broad bounds defined by society,” placing full access to AI
“on par with electricity, clean water, or food.”
All that’s to say that soon, the guardrails around ChatGPT’s naughty bits
are almost certainly coming off.
That hasn’t yet happened at launch—the model still coyly demurs when asked
about explicit things. But along with GPT-5.1’s bolder personalities, it’s
almost certainly on the way.
Deeper thought
In addition to killing GPT-5’s emotional intelligence, OpenAI made another
misstep when releasing GPT-5.
The company tried to unify all queries within a single model, letting
ChatGPT itself choose whether to use a simpler, lower-effort version of GPT-5,
or a slower, more thoughtful one.
The idea was noble–there’s little reason to use an incredibly powerful, slow, resource-intensive LLM to answer a query like “Is tahini still good after 1 month in the fridge?” (Answer: no).
But in practice, the feature was a
failure. ChatGPT was no good at
determining how much effort was needed to field a given query, which meant that
people asking complex questions were often routed to a cheap, crappy model that
gave awful results.
OpenAI fixed the issue in ChatGPT with a user interface kludge. But with
GPT-5.1, OpenAI is once again bifurcating their model into an Instant and
Thinking version.
The former responds to simple queries far faster than GPT-5, while the
latter takes longer, chews through more tokens, and yields better results on
complex tasks.
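To see why routing matters, here is a deliberately simple sketch of the idea: cheap heuristics decide whether a query goes to a fast “instant” model or a slower “thinking” one. The keyword heuristic and model labels are my own placeholders; they are not how ChatGPT’s real router works, which is exactly the point the article makes about how hard the problem is.

    # Toy query router. The keyword heuristic and model labels are placeholders,
    # not OpenAI's actual routing logic.

    HARD_HINTS = ("prove", "step by step", "debug", "optimize", "compare tradeoffs")

    def route(query: str) -> str:
        q = query.lower()
        looks_hard = len(q.split()) > 40 or any(hint in q for hint in HARD_HINTS)
        return "thinking-model" if looks_hard else "instant-model"

    print(route("Is tahini still good after 1 month in the fridge?"))   # instant-model
    print(route("Debug this race condition and compare tradeoffs ..."))  # thinking-model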
OpenAI says that there’s more fine-grained nuance within GPT-5.1’s Thinking
model, too. Unlike with GPT-5, the new model can dial up and down its level of
thought to accurately answer tough questions without taking forever to return a
response–a common gripe with the previous version.
OpenAI has also hinted that its future models will be “capable of making
very small discoveries” in fields like science and medicine next year, with
“systems that can make more significant discoveries” coming as soon as
2028.
GPT-5.1’s increased smarts and dialed-up thinking ability are a first step
down that path.
An attempt to course correct
Overall, GPT-5.1 seems like an attempt to correct many of the glaring
problems with GPT-5, while also doubling down on OpenAI’s more
freedom-oriented, accuracy-focused, voicy approach to conversational AI.
The new model can think, write, and communicate better than its
predecessors—and will soon likely be able to (ahem) “flirt” better too.
Whether it will do those things better than a growing stable of competing
models from Google, Anthropic, and myriad Chinese AI labs, though, is anyone’s guess.
https://overchat.ai/ai-hub/gpt-5-1-is-here
A note from Google and Alphabet CEO Sundar Pichai:
Nearly two years ago we kicked off the Gemini era, one of our biggest
scientific and product endeavors ever undertaken as a company. Since then, it’s
been incredible to see how much people love it. AI Overviews now have 2 billion
users every month. The Gemini app surpasses 650 million users per month, more
than 70% of our Cloud customers use our AI, 13 million developers have built
with our generative models, and that is just a snippet of the impact we’re
seeing.
And we’re able to get advanced capabilities to the world faster than
ever, thanks to our differentiated full stack approach to AI innovation — from
our leading infrastructure to our world-class research and models and tooling,
to products that reach billions of people around the world.
Every generation of Gemini has built on the last, enabling you to do
more. Gemini 1’s breakthroughs in native multimodality and long context window expanded the kinds of information that could be processed — and
how much of it. Gemini 2 laid the foundation for agentic capabilities and pushed the frontiers on reasoning and thinking, helping with more complex tasks and ideas, leading to Gemini 2.5 Pro
topping LMArena for over six months.
And now we’re introducing Gemini 3, our most intelligent model, that
combines all of Gemini’s capabilities together so you can bring any idea to
life.
It’s state-of-the-art in reasoning, built to grasp depth and nuance —
whether it’s perceiving the subtle clues in a creative idea, or peeling apart
the overlapping layers of a difficult problem. Gemini 3 is also much better at
figuring out the context and intent behind your request, so you get what you
need with less prompting. It’s amazing to think that in just two years, AI has
evolved from simply reading text and images to reading the room.
And starting today, we’re shipping Gemini at the scale of Google. That
includes Gemini 3 in AI Mode in Search with
more complex reasoning and new dynamic experiences. This is the first time we
are shipping Gemini in Search on day one. Gemini 3 is also coming today to
the Gemini app, to
developers in AI Studio and Vertex AI, and in our new agentic development platform, Google
Antigravity — more below.
Like the generations before it, Gemini 3 is once again advancing the
state of the art. In this new chapter, we’ll continue to push the frontiers of
intelligence, agents, and personalization to make AI truly helpful for
everyone.
We hope you like Gemini 3, we'll keep improving it, and look forward to
seeing what you build with it. Much more to come!
https://blog.google/products/gemini/gemini-3/#note-from-ceo
A credible prediction or an
imaginary threat of being in an artificial intelligence bubble?
Ticama prognoze vai iedomāti
draudi par atrašanos mākslīgā intelekta burbulī?
This Is How the AI Bubble Will Pop
The AI infrastructure boom is the most important economic story in the
world. But the numbers just don't add up.
Oct 02, 2025
Some people think artificial intelligence will be the most important
technology of the 21st century. Others insist that it is an obvious economic
bubble. I believe both sides are right. Like the 19th century railroads and the
20th century broadband Internet build-out, AI will rise first, crash second,
and eventually change the world.
The numbers just don’t make sense. Tech companies are projected to spend
about $400 billion this year on infrastructure to train and operate AI models.
By nominal dollar sums, that is more than any group of firms has ever spent to
do just about anything. The Apollo program allocated about $300 billion in
inflation-adjusted dollars to get America to the moon between the early 1960s
and the early 1970s. The AI buildout requires companies to collectively fund a
new Apollo program, not every 10 years, but every 10 months.
It’s not clear that firms are prepared to earn back the investment, and
yet by their own testimony, they’re just going to keep spending, anyway. Total
AI capital expenditures in the U.S. are projected to exceed $500 billion in
2026 and 2027—roughly the annual GDP of Singapore. But the Wall Street
Journal has reported that American consumers spend only $12 billion a
year on AI services. That’s roughly the GDP of Somalia. If you can grok the
economic difference between Singapore and Somalia, you get a sense of the
economic chasm between vision and reality in AI-Land. Some reports indicate that
AI usage is actually declining at large companies that are still trying to
figure out how large language models can save them money.
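The mismatch is easy to quantify from the article’s own figures. A minimal calculation, taking the roughly $400 billion of projected 2025 infrastructure spending and the reported $12 billion of annual US consumer spending on AI services at face value:

    # Back-of-the-envelope ratio using only the figures quoted in the article.

    capex_2025_billion = 400        # projected infrastructure spend this year
    consumer_spend_billion = 12     # reported annual US consumer AI spending

    ratio = capex_2025_billion / consumer_spend_billion
    print(f"Capex is ~{ratio:.0f}x annual US consumer spending on AI services")
    # -> Capex is ~33x annual US consumer spending on AI services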
Every financial bubble has moments where, looking back, one
thinks: How did any sentient person miss the signs? Today’s
omens abound. Thinking Machines, an AI startup helmed by former OpenAI
executive Mira Murati, just raised the largest seed round in history: $2
billion in funding at a $10 billion valuation. The company has not released a
product and has refused to tell investors what they’re even trying to build.
“It was the most absurd pitch meeting,” one investor who met with Murati said. “She was like, ‘So we’re doing an AI company with the best AI people, but we can’t answer any questions.’” Meanwhile, a recent analysis of stock market
trends found that none of the
typical rules for sensible investing can explain what’s going on with
stock prices right now. Whereas equity prices have historically followed
earnings fundamentals, today’s market is driven overwhelmingly by momentum, as
retail investors pile into meme stocks and AI companies because they think
everybody else is piling into meme stocks and AI companies.
Every economic bubble also has tell-tale signs of financial
over-engineering, like the collateralized debt obligations and subprime
mortgage-backed securities that blew up during the mid-2000s housing bubble.
Ominously, AI appears to be entering its own phase of financial wizardry. As
the Economist has pointed out, the AI hyperscalers—that is,
the largest spenders on AI—are using
accounting tricks to depress their reported infrastructure spending,
which has the effect of inflating their profits. As the investor and author Paul Kedrosky told me on my
podcast Plain
English, the big AI firms are also shifting huge amounts of AI spending
off their books into SPVs, or special purpose vehicles, that disguise the cost
of the AI build-out.
My interview with Kedrosky received the most enthusiastic and
complimentary feedback of any show I’ve done in a while. His level of
insight-per-minute was off the charts, touching on:
- How AI capital expenditures break down
- Why the AI build-out is different from past infrastructure projects, like the railroad and dot-com build-outs
- How AI spending is creating a black hole of capital that’s sucking resources away from other parts of the economy
- How ordinary investors might be able to sense the popping of the bubble just before it happens
- Why the entire financial system is balancing on big chip-makers like Nvidia
- If the bubble pops, what surprising industries will face a reckoning
Below is a polished transcript of our conversation, organized by topic area and adorned with charts and graphs to visualize his points. I hope you learn as much from his commentary as I did. From a sheer economic perspective, I don’t think there’s a more important story in the world.
AI SPENDING: 101
Derek Thompson: How big is the AI
infrastructure build-out?
Paul Kedrosky: There’s a huge amount of money being
deployed and it’s going to a very narrow set of recipients and some really
small geographies, like Northern Virginia. So it’s an incredibly concentrated
pool of capital that’s also large enough to affect GDP. I did the math and
found out that in the first half of this year, the data-center related
spending—these giant buildings full of GPUs [graphical processing units] and
racks and servers that are used by the large AI firms to generate responses and
train models—probably accounted for half of GDP growth in the first half of the
year. Which is absolutely bananas. This spending is huge.
Thompson: Where is all this money going?
Kedrosky: For the biggest companies—Meta and Google
and Amazon—a little more than half the cost of a data center is the GPU chips
that are going in. About 60 percent. The rest is a combination of cooling and
energy. And then a relatively small component is the actual construction of the
data center: the frame of the building, the concrete pad, the real estate.
HOW AI IS ALREADY WARPING THE 2025 ECONOMY
Thompson: How do you see AI spending already warping
the 2025 economy?
Kedrosky: Looking back, the analogy I draw is this:
massive capital spending in one narrow slice of the economy during the 1990s
caused a diversion of capital away from manufacturing in the United States.
This starved small manufacturers of capital and made it difficult for them to
raise money cheaply. Their cost of capital increased, meaning their margins had
to be higher. During that time, China had entered the World Trade Organization
and tariffs were dropping. We’ve made it very difficult for domestic manufacturers
to compete against China, in large part because of the rising cost of capital.
It all got sucked into this “death star” of telecom.
So in a weird way, we can trace some of the loss of manufacturing jobs
in the 1990s to what happened in telecom because it was the great sucking sound
that sucked all the capital out of everywhere else in the economy.
The exact same thing is happening now. If I’m a large private equity
firm, there is no reward for spending money anywhere else but in data centers.
So it’s the same phenomenon. If I’m a small manufacturer and I’m hoping to
benefit from the on-shoring of manufacturing as a result of tariffs, I go out
trying to raise money with that as my thesis. The hurdle rate just got a lot
higher, meaning that I have to generate much higher returns because they’re
comparing me to this other part of the economy that will accept giant amounts
of money. And it looks like the returns are going to be tremendous because look
at what’s happening in AI and the massive uptake of OpenAI. So I end up
inadvertently starving a huge slice of the economy yet again, much like what we
did in the 1990s.
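To make the hurdle-rate point concrete, here is a minimal sketch with invented cash flows and rates, showing how the same factory project flips from fundable to unfundable once investors demand AI-style returns:

# Minimal NPV sketch: one factory expansion evaluated at two hurdle rates.
# Cash flows and rates are invented for illustration.

def npv(rate, cashflows):
    # cashflows[0] is the upfront investment (negative), paid at t=0
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

project = [-50, 9, 9, 9, 9, 9, 9, 9, 9]   # $m: invest 50, earn 9/yr for 8 years

print(f"NPV at an 8% hurdle rate:  {npv(0.08, project):6.1f} $m")  # positive, fundable
print(f"NPV at a 15% hurdle rate:  {npv(0.15, project):6.1f} $m")  # negative, starved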
Thompson: That’s so interesting.
The story I’m used to telling about manufacturing is that China took our jobs.
“The China shock,” as economists like David Autor call it, essentially took
manufacturing to China and production in Shenzhen replaced production in Ohio,
and that’s what hollowed out the Rust Belt. You’re adding that telecom absorbed
the capital.
And now you fast-forward to the 2020s. Trump is trying to reverse the
China shock with the tariffs. But we’re recreating the capital shock with AI as
the new telecom, the new death star that’s taking capital that might at the
margin go to manufacturing.
Kedrosky: It’s even more insidious than that. Let’s
say you’re Derek’s Giant Private Equity Firm and you control $500 billion. You
do not want to allocate that money one $5 million check at a time to a bunch of
manufacturers. All I see is a nightmare of having to keep track of all of these
little companies doing who knows what.
What I’d like to do is to write 30 separate $50 billion checks. I’d like
to write a small number of huge checks. And this is a dynamic in private equity
that people don’t understand. Capital can be allocated in lots of different
ways, but the partners at these firms do not want to write a bunch of small
checks to a bunch of small manufacturers, even if the hurdle rate is
competitive. I’m a human, I don’t want to sit on 40 boards. And so you have
this other perverse dynamic that even if everything else is equal, it’s not
equal. So we’ve put manufacturers who might otherwise benefit from the
onshoring phenomenon in an even worse position, in part because of the internal
dynamics of capital.
Thompson: What about the energy piece of this?
Electricity prices are rising. Data centers are incredibly energy-thirsty. I think
consumers will revolt against the construction of local data centers, but the
data centers have enormous political power of their own. How is this going to
play out?
Kedrosky: So I think you’re going to rapidly see an
offshoring of data centers. That will be the response. It’ll increasingly be
that it’s happening in India, it’s happening in the Middle East, where massive
allocations are being made to new data centers. It’s happening all over the
world. The focus will be to move offshore for exactly this reason. Bloomberg
had a great story the other day about an exurb in Northern Virginia that’s
essentially surrounded now by data centers. This was previously a rural area,
and all the farms around it sold out, and people in this area
were like, wait a minute, who do I sue? I never signed up for this. This is the
beginning of the NIMBY phenomenon, because it’s become visceral and emotional
for people. It’s not just about prices. It’s also about: if you’ve got a six-acre
building beside you that’s making noise all the time, that is not what you
signed up for.
https://www.derekthompson.org/p/this-is-how-the-ai-bubble-will-pop
Symbolic AI
could become a bridge from Artificial Narrow Intelligence (ANI) to Artificial
General Intelligence (AGI) and further to Artificial Superintelligence (ASI).
It bridges the gap between machine learning and understanding, providing
rational and empathetic reasoning and emotionally intelligent decision-making
for the global public good.
Simboliskais
mākslīgais intelekts (SMI) varētu kļūt par tiltu no mākslīgā šaurā intelekta
(ANI) uz mākslīgo vispārējo intelektu (AGI) un tālāk uz mākslīgo superintelektu
(ASI). Tas pārvar plaisu starp mašīnmācīšanos un izpratni. Nodrošinot racionālu
un empātisku spriešanu un emocionāli inteliģentu lēmumu pieņemšanu globāla
sabiedrības labuma vārdā.
Could Symbolic AI transform human-like intelligence?
Artificial intelligence research is revisiting symbolic approaches once considered outdated. Combining these formal methods with neural networks may overcome current limitations of AI reasoning. Experts suggest that a hybrid “neurosymbolic” model could enable machines to generalize knowledge like humans. The challenge lies in merging these systems efficiently without sacrificing reliability or adaptability.
KOLAPSE PRESENTS • DECEMBER 2, 2025
The ambition to replicate human intelligence in
machines has long driven AI research, yet the path toward this goal remains
contested. Neural networks, the current dominant approach, excel at pattern
recognition and data-driven learning, but they often falter in reasoning or
applying knowledge to novel scenarios. Symbolic AI, a legacy approach,
emphasizes formal rules, logic, and explicit encoding of relationships between
concepts. Decades ago, these systems dominated early AI efforts, yet their
rigidity and inability to scale to complex datasets caused them to be eclipsed
by neural networks. Now, researchers propose that a fusion of the two
paradigms—neurosymbolic AI—might finally bridge the gap between learning and
reasoning. Advocates argue that by combining the strengths of both, machines
could achieve a more generalizable and trustworthy form of intelligence.
Neurosymbolic AI aims to integrate the flexible
learning capabilities of neural networks with the clear reasoning structures of
symbolic systems. In practice, symbolic AI encodes rules such as “if A then B,”
which allows for logical deductions that are immediately interpretable by
humans. Neural networks, by contrast, discover statistical correlations from
large datasets but often remain opaque, creating what is known as the “black
box” problem. By layering symbolic logic atop neural outputs, or conversely,
using neural networks to guide symbolic search, researchers hope to create
systems capable of both learning and deductive reasoning. The appeal of this
approach is not merely academic; it has significant implications for
high-stakes fields, such as medicine, autonomous vehicles, and military
decision-making, where errors can have serious consequences. The transparency
inherent in symbolic reasoning can help mitigate mistrust in AI outputs.
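A minimal sketch of the “symbolic logic atop neural outputs” idea, assuming a hypothetical classifier score and hand-written rules; none of this reflects a specific neurosymbolic library:

# Toy hybrid: a (faked) neural confidence score is checked against explicit
# symbolic rules before a decision is accepted. All names are hypothetical.

def neural_score(patient):
    # Stand-in for a trained model's probability that a drug is appropriate.
    return 0.92

RULES = [
    # (description, predicate that must hold for the recommendation to pass)
    ("no known allergy to the drug", lambda p: "drug_x" not in p["allergies"]),
    ("patient is an adult",          lambda p: p["age"] >= 18),
]

def recommend(patient, threshold=0.8):
    score = neural_score(patient)
    if score < threshold:
        return "reject: low model confidence"
    for description, rule in RULES:
        if not rule(patient):
            return f"reject: violates rule '{description}'"   # interpretable veto
    return f"accept (confidence {score:.2f}, all rules satisfied)"

print(recommend({"age": 45, "allergies": ["drug_x"]}))  # symbolic rule overrides the net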
Neurosymbolic AI seeks to unify formal logic with
neural learning.
Efforts to operationalize neurosymbolic AI are already
underway, producing demonstrable successes. For example, AlphaGeometry,
developed by Google DeepMind, combines neural pattern recognition with symbolic
reasoning to solve mathematics Olympiad problems reliably. By generating
synthetic datasets using formal symbolic rules and then training neural
networks on these datasets, the system reduces errors and enhances
interpretability. Other techniques, such as logic tensor networks, assign
graded truth values to statements, enabling neural networks to reason under
uncertainty. Likewise, roboticists have used neurosymbolic methods to train
machines to navigate environments with novel objects, dramatically reducing the
volume of training data required. These applications suggest that hybrid
approaches can yield practical advantages, even if the systems remain
specialized rather than fully general.
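The graded-truth-value idea can be sketched with ordinary fuzzy-logic connectives: each statement gets a degree of truth in [0, 1], and rules combine those degrees. This is a simplified illustration of the principle behind logic tensor networks, not their actual implementation:

# Graded truth values in [0, 1] combined with fuzzy-logic connectives,
# a simplified stand-in for the reasoning style used by logic tensor networks.

def AND(a, b):        # product t-norm
    return a * b

def OR(a, b):         # probabilistic sum
    return a + b - a * b

def IMPLIES(a, b):    # Reichenbach implication: 1 - a + a*b
    return 1 - a + a * b

# Degrees of truth produced, say, by a perception network:
is_cat   = 0.9        # "the object is a cat"
is_small = 0.7        # "the object is small"

# Rule: (cat AND small) IMPLIES fits_in_box, where fits_in_box is believed at 0.8
fits_in_box_rule = IMPLIES(AND(is_cat, is_small), 0.8)
print(f"truth(cat & small) = {AND(is_cat, is_small):.2f}")
print(f"truth(rule holds)  = {fits_in_box_rule:.2f}")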
Despite these promising examples, integrating symbolic
and neural methods is far from straightforward. Symbolic knowledge bases,
though clear and logical, can be enormous and computationally expensive to
search. Consider the game of Go: the theoretical tree of all possible moves is
astronomically large, making exhaustive symbolic search infeasible. Neural
networks can alleviate this by predicting which branches are likely to yield
optimal outcomes, effectively pruning the search space. Similarly, incorporating
symbolic reasoning into language models can guide the generation of outputs
during complex tasks, reducing nonsensical or inconsistent results. Yet, these
integrations require careful orchestration; simply connecting a symbolic engine
to a neural network without coherent management often produces subpar
performance.
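A minimal sketch of using a learned prior to prune a symbolic search: a mocked policy scores candidate moves, and the game tree is only expanded along the top-scoring branches. The game, the policy, and the scores are all invented for illustration.

# Toy beam search over a game tree where a (mocked) policy prunes branches.
# In a real system the policy would be a trained neural network.

import random

def legal_moves(state):
    # Hypothetical game: every position has ten legal moves labelled 0..9.
    return list(range(10))

def policy(state, move):
    # Stand-in for a neural prior over moves; here just a seeded random score.
    random.seed(hash((state, move)) % (2**32))
    return random.random()

def pruned_search(state, depth, beam_width=2):
    """Expand only the beam_width most promising moves at each level."""
    if depth == 0:
        return [state]
    scored = sorted(legal_moves(state), key=lambda m: policy(state, m), reverse=True)
    frontier = []
    for move in scored[:beam_width]:           # symbolic expansion, neurally pruned
        frontier += pruned_search(state + (move,), depth - 1, beam_width)
    return frontier

leaves = pruned_search(state=(), depth=4)
print(f"positions explored: {len(leaves)} (vs {10**4} with exhaustive search)")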
Underlying the technical challenges are philosophical
disagreements about the very nature of intelligence and the methods by which it
should be pursued. Some AI pioneers, such as Richard Sutton, argue that efforts
to embed explicit knowledge into machines have historically been outperformed
by approaches leveraging large datasets and computational scale. From this
perspective, the lessons of history suggest that symbolic augmentation may be a
distraction rather than a necessity. Others, including Gary Marcus, maintain
that symbolism provides essential reasoning tools that neural networks lack,
framing the debate as a philosophical as well as technical one. In practice,
both views influence current research trajectories, with proponents of each
advocating for strategies that align with their understanding of intelligence.
Observers note that these debates often obscure practical experimentation,
which continues regardless of theoretical disputes.
Symbolic systems also face difficulties representing
the complexity and ambiguity inherent in human knowledge. Projects like Cyc,
begun in the 1980s, attempted to encode common-sense reasoning, articulating
axioms about everyday relationships and events. While Cyc amassed millions of
such statements and influenced subsequent AI knowledge graphs, translating
nuanced, context-dependent human experiences into rigid logical rules remains
fraught with errors. For instance, although Cyc could represent that “a daughter
is a child” or “seeing someone you love may produce happiness,” exceptions
abound in human behavior, and strict logic cannot fully capture them.
Consequently, symbolic reasoning is most effective when applied selectively or
in tandem with flexible learning systems. The combination enables
generalization without sacrificing the interpretability that pure neural
networks struggle to achieve.
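The difficulty can be made concrete with a toy rule base: a strict rule such as “a daughter is a child” composes cleanly, while a soft generalization such as “seeing someone you love produces happiness” breaks as soon as context introduces an exception. The encoding below is an illustration, not Cyc’s actual representation.

# Toy knowledge base: one strict rule and one defeasible (default) rule.
# The point is that exceptions force the "rule" to become context-dependent.

facts = {("daughter_of", "Ana", "Maja"), ("loves", "Maja", "Ana")}

def is_child_of(x, y):
    # Strict rule: daughter_of(x, y) -> child_of(x, y). Always safe to apply.
    return ("daughter_of", x, y) in facts

def feels_happy_on_seeing(x, y, context=()):
    # Default rule: loves(x, y) -> happy(x) when seeing y ... unless an
    # exception holds (grief, an argument, etc.). Exceptions are open-ended.
    exceptions = {"recent_argument", "grieving"}
    if exceptions & set(context):
        return None          # the rule is silent; strict logic gives no answer
    return ("loves", x, y) in facts

print(is_child_of("Ana", "Maja"))                                   # True
print(feels_happy_on_seeing("Maja", "Ana"))                         # True
print(feels_happy_on_seeing("Maja", "Ana", context=("grieving",)))  # None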
Neurosymbolic AI also introduces opportunities to
reduce the data burden traditionally required for training neural networks. By
embedding rules and relational logic, machines can achieve high accuracy with
far fewer examples than would be required otherwise. Jiayuan Mao’s work in
robotics exemplifies this: her hybrid system required only a fraction of the
training data that a purely neural model would need to understand object
relationships in visual tasks. This efficiency can accelerate development cycles
and lower resource consumption, making AI more accessible and
environmentally sustainable. Furthermore, hybrid approaches can facilitate
reasoning in domains where data is scarce or incomplete, extending AI’s
applicability to previously inaccessible problems. The challenge lies in
designing systems that balance rule-based reasoning with statistical learning
without compromising either.
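One way to see the data-efficiency argument, using an invented example rather than Mao’s actual system: if the relation “left of” is declared transitive once, a learner needs only observations of adjacent pairs to answer queries about pairs it has never seen labelled.

# With an explicit transitivity rule, a handful of direct observations answer
# many more queries than a purely statistical learner could without labels.

observed_left_of = [("cup", "plate"), ("plate", "bowl"), ("bowl", "jug")]

def left_of(a, b, observations):
    # Symbolic rule: left_of is transitive, so chain the direct observations.
    if (a, b) in observations:
        return True
    return any(x == a and left_of(y, b, observations) for x, y in observations)

print(left_of("cup", "jug", observed_left_of))   # True: inferred, never observed
print(left_of("jug", "cup", observed_left_of))   # False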
Current efforts also explore the potential for
machines to develop their own symbolic representations autonomously. The
ultimate vision, according to Mao, is a system that not only learns from data
but can invent new categories, rules, and conceptual frameworks beyond human
understanding. Such capability would mark a fundamental shift, enabling AI to
contribute novel insights to mathematics, physics, or other knowledge domains.
Achieving this requires progress in AI “metacognition,” whereby systems monitor
and direct their own reasoning processes. Effective metacognitive architectures
would act as conductors, orchestrating the interplay between neural learning
and symbolic logic across multiple contexts. If realized, this could constitute
a genuine form of artificial general intelligence, capable of reasoning in ways
comparable to, or even beyond, humans.
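A minimal sketch of the “conductor” idea, assuming two hypothetical modules: a controller inspects each query, routes arithmetic-looking input to a symbolic solver and everything else to a mocked neural model, and records which module it trusted. This illustrates the routing pattern only, not any published architecture.

# Toy metacognitive controller: route each query to the module best suited to
# it and keep a trace of the decision. Both modules here are stand-ins.

import re

def symbolic_solver(query):
    # Exact arithmetic via a tiny parser; only called after the controller
    # has already confirmed the query matches this pattern.
    tokens = re.fullmatch(r"\s*(\d+)\s*([+\-*])\s*(\d+)\s*", query)
    a, op, b = int(tokens.group(1)), tokens.group(2), int(tokens.group(3))
    return {"+": a + b, "-": a - b, "*": a * b}[op]

def neural_model(query):
    # Stand-in for a language model's free-form answer.
    return f"(model guess for: {query!r})"

def controller(query):
    looks_like_math = re.fullmatch(r"\s*\d+\s*[+\-*]\s*\d+\s*", query) is not None
    module = symbolic_solver if looks_like_math else neural_model
    return {"answer": module(query), "routed_to": module.__name__}

print(controller("17 * 23"))           # exact, interpretable symbolic path
print(controller("why do cats purr"))  # falls back to the neural stand-in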
Integrating symbolic knowledge can reduce training
data requirements dramatically.
Hardware and computational architecture also play a
critical role in realizing neurosymbolic AI’s potential. Current computing
platforms are often optimized for either neural network training or symbolic
reasoning, but not both simultaneously. Efficient hybrid computation may
necessitate novel chip designs, memory hierarchies, and processing paradigms
capable of supporting dual paradigms. As the field matures, other forms of
AI—quantum or otherwise—might complement or even supersede neurosymbolic approaches.
Nevertheless, the immediate priority for researchers is to establish robust,
flexible systems that can generalize across domains, combining reasoning,
learning, and problem-solving in a coherent framework. In this sense,
neurosymbolic AI represents a pragmatic middle path, leveraging lessons from
both historical and contemporary AI research.
While technical and philosophical hurdles remain,
neurosymbolic AI has already begun to reshape expectations of what intelligent
machines can achieve. Its proponents argue that reasoning, efficiency, and
transparency are within reach, provided that symbolic and neural components are
integrated thoughtfully. Early applications demonstrate that hybrid models can
outperform purely neural approaches in select domains, particularly when
understanding and logic are critical. The field is still in its formative stages,
with significant exploration required to establish general principles and
architectures. Yet the prospect of machines capable of reasoning, generalizing,
and even inventing new knowledge captures the imagination of both scientists
and policymakers. As AI continues to evolve, the marriage of neural flexibility
and symbolic clarity may chart the most promising path toward human-like
intelligence.
https://www.kolapse.com/en/?contenido=93179-could-symbolic-ai-transform-human-like-intelligence
