svētdiena, 2025. gada 28. decembris

AI in the future in people's perceptions and reality. AI nākotnē cilvēku priekšstatos un realitātē.


AI in the future in people's perceptions and reality

             AI nākotnē cilvēku priekšstatos un realitātē

 

 

To ensure human progress, it is necessary to attract ethical artificial intelligence to create a system of public governance based on universal human values.

Cilvēces progresa nodrošināšanai ir nepieciešams piesaistīt ētisku mākslīgo intelektu, lai  veidotu vispārcilvēciskajās vērtībās balstītu valsts pārvaldības sistēmu.

 

 GUIDE: ‘ETHICAL AI Governance Enables Confident AI Adoption’

🤖 Capgemini’s latest guide, “A Practical Guide to Implementing AI Ethics Governance” : https://www.capgemini.com/wp-content/uploads/2025/10/Implementing-AI-ethics-governance_20251006.pdf , explores how organizations can turn AI ethics, like those championed under the European Commission's EU AI Act, from abstract principles into actionable governance. The report offers a clear path for embedding responsible AI across enterprises, helping leaders navigate complex AI-driven transformations with confidence and INTEGRITY.
👥 Helping Align AI Practices with Ethical Standards
The guide introduces a practical framework for AI ethics governance, covering everything from bias management to sustainability. It emphasizes the creation of a living AI Code of Ethics, the emergence of multidisciplinary AI ethicist roles, and the alignment of AI practices with organizational values and global standards like ISO 42001.

Shaping Enterprises' AI Operating Models
• AI ethics is no longer optional — it shapes trust, FAIRNESS, and accountability across all levels of an organization:
• Workforce: AI ethicists and cross-functional teams ensure ethical risks are identified, owned, and mitigated throughout the AI lifecycle.
• Customers & Society: Ethical AI systems foster fairness, transparency, and social benefit, while accounting for cultural and contextual diversity.
• Innovation & Sustainability: Responsible AI practices integrate environmental and resource considerations into AI deployment.
Focus Points Centre on AI Culture
• Human-Centric Integration: Embed ethics in AI design, decision-making, and organizational culture.
• Bias & Fairness Management: Treat fairness as an ongoing, context-aware process rather than a one-time check.
• Governance & Collaboration: Integrate AI ethics across legal, data, and delivery teams while engaging stakeholders proactively.
• Sustainability & Impact Awareness: Consider the ethical implications of AI’s energy and resource consumption.

https://www.linkedin.com/company/ai-&-partners/posts/?feedView=all

 In today's situation, the dominant feature for successfully utilizing the opportunities created by AI has become emotional intelligence, which is the ability to evaluate the advice of artificial intelligence and understand which ones are trustworthy and which ones should not be blindly trusted.

Mūsdienu situācijā par dominējošo īpašību veiksmīgai mākslīgā intelekta radīto iespēju izmantošanai ir kļuvusi emocionālā inteliģence – spēja izvērtēt mākslīgā intelekta sniegtos padomus un saprast, kuri no tiem ir uzticami un kuriem nevajadzētu akli uzticēties.

How will AI transform business in 2026?

BY Robert Safian

How should leaders prepare for AI’s accelerating impact on work and everyday life? AI scientist, entrepreneur, and Pioneers of AI podcast host Rana el Kaliouby shares her predictions for the year ahead—from physical AI entering the real world to what it means to onboard AI into your org chart.

Let’s look ahead to 2026. You sent me some fascinating thoughts about AI’s next-phase impact on business, and I’d love to take you through them. The first one was the rise of what you called relationship intelligent AI.

So everybody’s worried that AI is going to make us less human and take away our human-to-human connections. There is definitely a risk of that. But I think the thing I’m most excited about for 2026 is how AI can actually help us build deeper human connections and more meaningful human experiences. And the way this happens is through AI that can really help you organize your relationships and your network and surface connections that you need and maybe make warm introductions to you.

There are already a number of new companies that are starting in this space. So one company’s called VIA.AI, it’s a Boston-based company. They do this for sales professionals and BD professionals who have to do this for their work. There’s a company called Goodword that I’m very excited about. They’re doing this for just the average person. Like you and I, we have very strong networks, but how can we organize it? So I’m excited about that one. There’s a company called Boardy that does this for investors and founders. So it’s becoming a thing, and I’m excited to see how these companies take off in 2026. They’re all fairly new, so it’ll be interesting to see how they evolve.

Yeah, and whether they can stay ahead of some of the bigger chatbots that may just try to integrate some of this capability into the products they already have. That’s always the case in this kind of evolution of technology: What’s a feature and what’s a company, right? What’s an independent service?

Absolutely. When I’m looking at these companies and I’m diligencing them, that’s a key question that I ask. Is this something that the next version of ChatGPT or Gemini is just going to implement? And if the answer is yes, then that’s obviously not a defensible company. But a lot of times there’s this additional moat of data and algorithms that you need to sit on top of these LLMs. And I believe in this relationship intelligence space, I don’t think this is something that just a kind of an off-the-shelf LLM can do. It really needs to know you. It needs to know your data, it needs to know your relationships.

And you have to trust it enough to share all that data with it, right?

Absolutely.

That’s your proprietary data, whether it’s about your business or about you individually.

Exactly. And I don’t want this to all go up to OpenAI’s cloud. I want to trust that I have control over these really private relations. If you and I have a conversation about our kids, I don’t necessarily want that to now sit in a general OpenAI cloud and be used to train the next ChatGPT. So that safety and security, appreciating the privacy and the importance of this data, is really key.

Another business change you expect in 2026 is the insertion of AI into the org chart. This is about who manages AI, like performance reviews and team culture impacts?

Yeah, so this goes back to the thesis that there’s this shift in how AI is creating value, and it’s not a tool anymore. Well, it is a tool. It’ll always be a tool, but it’s not a tool that helps you get work done faster. It could actually take an end-to-end task and get it done for you. And I’ll give a few examples.

So I’m an investor in a company called Synthpop, and instead of building a tool that helps healthcare administrators accelerate or really become efficient in how they do patient intake, it just takes the task of patient intake. It does the thing end to end. And so if you then imagine what that means for a hospital or a clinic, it will have a combination of human workers collaborating and working closely with AI coworkers.

And so then the question becomes, well, who manages these hybrid teams? Sometimes it’s a human manager, sometimes it’s an AI manager. I’m also an investor in a company called Tough Day, and they sell you AI managers. And then how do you do performance reviews for these hybrid teams? How do you build a culture? Like at Affectiva, my company, culture was our superpower. How do you build a culture when some of your team members are AI and some of your team members are humans?

So I think that is going to spur a lot of conversation around how do you build organizations that are combinations of digital agents and human employees?

As you talk about this merging of AI agents and humans in work, it brings up that looming question about the impact of AI on jobs and employment. And some numbers are coming out now that make it seem like, “Oh, it’s bad for jobs.” There are other numbers coming out that are like, “Oh, we’re actually hiring more people because of it.” Do you have a prediction about what is going to happen with that in 2026? Is AI going to take over roles that have been done by humans that quickly?

We had a really fascinating roundtable discussion at the Fortune Brainstorm AI conference and the headline was like, “Is AI killing entry-level jobs?” And actually, a lot of the Fortune companies and also AI companies that were around the table were basically saying, “No, we’re hiring more entry-level jobs. They’re just not the same jobs that we were traditionally seeing.” And also the career ladders have changed.

So my prediction is we’re going to see an entirely different organization where I think if you are able to come in an entry-level position, for example, but work very closely with AI and be AI-native and be AI fluent and be able to wear multiple hats, I think that’s going to go a long way. As opposed to this very siloed job trajectory where you come in, this is your little task, and then you do more of it, and then you go up the career ladder. I think that’s going to change. I think young people are looking for different ways of working, and I think AI is changing all of that anyway.

Will there be jobs that will go away? I think so. I can’t remember who said this line, but it’s now very popular: “It’s not AI that’s going to take your job. It’s going to be somebody who knows how to use AI.” And I believe that to be true.

https://www.fastcompany.com/.../how-will-ai-transform... 

 

Are we creating truly intelligent systems?

The progress of AI requires appropriate attention to preserving our humanity!

Vai radām patiesi inteliģentas sistēmas?

MI progress prasa atbilstošu uzmanību mūsu cilvēcības saglabāšanai!

 12-15-2025

How to transform AI from a tool into a partner

The 4 stages of human-AI collaboration.

BY Faisal Hoque

The conversation about AI in the workplace has been dominated by the simplistic narrative that machines will inevitably replace humans. But the organizations achieving real results with AI have moved past this framing entirely. They understand that the most valuable AI implementations are not about replacement but collaboration.

The relationship between workers and AI systems is evolving through distinct stages, each with its own characteristics, opportunities, and risks. Understanding where your organization sits on this spectrum—and where it’s headed—is essential for capturing AI’s potential while avoiding its pitfalls.

Stage 1: Tools and Automation

This is where most organizations begin. At this stage, AI systems perform discrete, routine tasks while humans maintain full control and decision authority. The AI functions primarily as a productivity tool, handling well-defined tasks with clear parameters.

Ready to thrive at the intersection of business, technology, and humanity?

Faisal Hoque’s books, podcast, and his companies give leaders the frameworks and platforms to align purpose, people, process, and tech—turning disruption into meaningful, lasting progress.

Examples are everywhere: document classification systems that automatically sort incoming correspondence, chatbots that answer standard customer inquiries, scheduling assistants that optimize meeting arrangements, data entry automation that extracts information from forms.

The key characteristic of this stage is that AI operates within narrow boundaries. Humans direct the overall workflow and make all substantive decisions. The AI handles the tedious parts, freeing humans for higher-value work.

The primary ethical considerations at this stage involve ensuring accuracy and preventing harm from automated processes. When an AI system automatically routes customer complaints or flags applications for review, errors can affect real people. Organizations must implement quality controls and monitoring to catch mistakes before they cause damage—particularly for vulnerable populations who may be less able to navigate around system errors. https://rogermartin.medium.com/a-leaders-role-in-fostering-ai-superpowers-c45d079807e8

 We are seeing the merging of artificial intelligence agents and humans.

Mēs redzam mākslīgā intelekta aģentu un cilvēku apvienošanos.

To transform AI from a tool to a partner, treat it like a new team member by giving it a "job description," providing rich context (company, goals, people), onboarding it with clear expectations, and giving continuous, specific feedback to build a relationship where it learns and scales your thinking, moving from simple tasks to complex strategic collaboration. The key is shifting from asking for answers to co-creating ideas, using precise prompts and iterative refinement to foster better thinking, not just faster output.

In 2025, transforming AI from a tool into a partner requires a shift from viewing it as a routine task-executor to an active collaborator with shared responsibility.

The following steps outline how to achieve this transition:

1. Adopt a Collaboration Framework

Most organizations progress through four distinct stages to reach a true partnership:

• Automation: AI handles discrete, routine tasks (e.g., sorting emails) while humans maintain total control.

• Augmentation: AI provides analysis and recommendations (e.g., predictive analytics) to inform human decisions.

• Collaboration: Humans and AI work as a team, leveraging complementary strengths—AI's processing power and human's ethical reasoning—to share responsibility for outcomes.

• Supervision: AI handles routine operations autonomously within established human-set parameters and governance.

2. Shift to Agentic AI

In 2025, the focus has shifted from simple Generative AI to Agentic AI. Unlike tools that only respond to prompts, agentic systems:

• Take Action: They move beyond generating content to executing multi-step processes like debugging code or interacting with customers autonomously.

• Learn Context: They adapt to your personal preferences and past mistakes, becoming more intuitive over time.

• Act as "Virtual Coworkers": They can plan and execute complex workflows as a team member, not just an assistant.

3. Redefine Human Roles

A partnership is successful only when human roles evolve to match AI's capabilities:

• Focus on the "30% Rule": Let AI handle 70% of routine tasks so humans can focus on the 30% that requires creativity, empathy, and ethical judgment.

• Develop New Skills: Prioritize AI Literacy (understanding AI's limits) and Prompt Engineering (effective communication with the partner).

• Invest in "Human Centric" Skills: Strengthen uniquely human traits like critical thinking and emotional intelligence, which AI cannot replicate.

4. Build Trust Through Governance

A partner must be reliable. Establish trust by implementing:

• Explainable AI (XAI): Ensure the AI can articulate the "why" behind its decisions so it's not a "black box".

• Human Oversight: Rigorously validate AI outputs to maintain quality and brand voice.

• Digital Workforce Registries: Track AI agents similarly to human employees to ensure accountability and compliance.

5. Create a Culture of Experimentation

Treat AI integration with the same discipline as hiring human team members:

• Launch Pilot Programs: Test AI as a partner in a small, controlled environment to solve real problems before scaling.

• Social Dialogue: Encourage open communication where employees can share feedback or concerns about their new AI "teammates".  

Can OpenAI make generative AI more social?

OpenAI is exploring ways to integrate generative AI into social platforms. A recent report from MIDiA Research suggests that OpenAI might develop a social network where users can share AI-generated content, potentially alongside human-created content. OpenAI is reportedly considering building a social feed around its image generator, which could be text-based, similar to platforms like X/Twitter or Threads. This could allow users to share and interact with AI-generated images and other creative outputs, potentially influencing how users experience social media and AI integration. 

Key aspects of this potential integration:

  • AI-centric platform:

OpenAI could create a platform where AI is central to the user experience, allowing users to generate and share content using its generative AI tools. 

  • Social feed:

The platform might feature a social feed where users can share and interact with AI-generated content, potentially including text, images, and other creative media. 

  • User base:

OpenAI's existing user base could be a strong starting point for building a social network, as users may be more likely to use a platform based on AI technologies they already know and trust. 

  • Interpersonal social appeal:

While AI-centric platforms could offer a unique experience, it will be challenging to replicate the interpersonal social appeal of established networks, which relies heavily on human interaction. 

  • Ethical considerations:

As AI becomes more integrated into social media, it's crucial to address ethical concerns, including potential biases in AI-generated content, misinformation, and the role of human intervention. 

Potential challenges and opportunities:

  • User acceptance:

Users may be hesitant to embrace AI-generated content or AI-driven social experiences, especially if they have concerns about authenticity or misinformation. 

  • Content moderation:

Ensuring that AI-generated content aligns with ethical standards and community guidelines will require careful content moderation strategies. 

  • Human-AI collaboration:

Finding the right balance between human creativity and AI-generated content is crucial for creating a social experience that is both engaging and valuable. 

  • Regulation:

As AI becomes more integrated into social media, regulators may need to develop new policies and guidelines to address potential risks and ensure responsible AI development. 

In conclusion, OpenAI is exploring ways to leverage its generative AI technologies to create a social network or platform, potentially revolutionizing the way people interact with social media and AI. While this presents numerous opportunities, it also requires careful consideration of ethical concerns and user expectations. 

 

 The artificial intelligence boom: A new reality, where will we now live?

 10 Generative AI Trends In 2026 That Will Transform Work And Life

ByBernard Marr

Oct 13, 2025

Generative AI is moving into a new phase in 2026, reshaping industries from entertainment to healthcare while creating fresh opportunities and challenges.

In 2026, generative AI is firmly embedded in workflows across many larger organizations. Meanwhile, millions of us now rely on it for research, study, content creation and even companionship.

What started with the arrival of ChatGPT back in 2023 has spilled into every corner of life, and the pace is only going to accelerate.

Of course, challenges like copyright, bias, and the risk of job displacement remain, but the upside is too powerful for anyone to ignore. From augmenting human productivity to accelerating our ability to learn, machines capable of generating words, pictures, video and code are reshaping our world.

The next 12 months will undoubtedly see the arrival of new tools and further integration of generative AI into our everyday lives. So here are the ten trends I think will be most significant in 2026.

1. Generative Video Comes Of Age

This year, Netflix brought generative AI into primetime in the Argentinian-produced series El Eternauta. Producers said that it slashed production time and costs compared to traditional animation and special effects techniques. In 2026, expect generative AI in entertainment to become mainstream as we see it powering more big-budget TV shows and Hollywood extravaganzas.

2. Authenticity Is King

Faced with a sea of generative AI content, individuals and brands will look for new ways to communicate authenticity and genuine human experience. While audiences will continue to find AI useful for quickly conveying information and creating summaries, creators who are able to leverage truly human qualities to provide content that machines can’t match will rise above the tide of generic “AI slop”.

3. The Copyright Conundrum

Debate over the use of copyrighted content to train generative AI models and fair compensation for human creatives will increase in intensity throughout 2026. AI developers need access to human-created content in order to train machines to mimic it, while many artists, musicians, writers and filmmakers consider their work being used in this way as nothing more than theft. Over the next year, expect more lawsuits, intense public debate and potentially some attempts to resolve the situation through regulation, as lawmakers try to strike a balance that allows technological innovation while respecting intellectual property rights.

4. Agentic Chatbots—From Reactive To Proactive

Rather than simply providing information or generating content in response to individual prompts, chatbots will become more and more capable of working autonomously towards long-term goals as they take on agentic qualities. This year, ChatGPT debuted its Agent Mode, and other tools such as Gemini and Claude are adding abilities to communicate with third-party apps and take multi-step actions without human intervention. In 2026, generative AI tools will make the leap from clever chatbots to action-taking assistants as the agentic revolution heats up.

5. Privacy-Focused GenAI

As businesses invest more heavily in generative AI, there will be a growing awareness of the risks to privacy and the need to take steps to secure personal and customer data. This will increase awareness in privacy-centric AI models where data processing takes place on-premises or directly on users’ own devices. Apple, for example, differentiates itself with its focus on putting privacy first, and I expect to see other AI device manufacturers and developers following its lead in 2026.

6. Generative AI in Gaming

In 2026, gaming could become one of the most exciting frontiers for generative AI. Developers are creating games with emergent storylines that adapt to players’ actions, even when they do something entirely unexpected. And characters will no longer be limited to following scripts, but can respond, hold conversations and act just like real people. This will create richer, more immersive and interactive experiences for players, while cutting production costs and unlocking new creative options for studios.

7. Synthetic Data For Analytics And Simulation

As well as words and pictures, generative AI is increasingly used to create the raw data needed to understand the real world, simulate physical, mechanical and biological systems and even train more algorithms. This will allow banks to model fraud detection systems without exposing real customer records, and healthcare providers to simulate treatments and medical trials without risking patient privacy. With demand for synthetic training data growing, it will become fuel for cutting-edge analytics and automated decision-making systems in 2026 and beyond.

8. Monetizing Generative Search

Generative AI is transforming the way we search for information online. This is impacting the business of companies that rely on search results to drive traffic, but also forcing advertising services like Google and Microsoft Bing to rethink the way they drive revenue. In 2026, we can expect moves towards addressing this, as services such as Google’s Search Generative Experience and Perplexity AI attempt to bridge the gap between generative search and paid-for search ads.

9. Further Breakthroughs In Scientific Research

This year, we saw genAI proving it can be a valuable aid to scientific research, driving breakthroughs in drug discovery, protein folding, energy production and astronomy. In 2026, this trend will gather pace as researchers increasingly leverage generative models in the search for solutions to some of humanity’s biggest problems, such as curing diseases, fighting climate change and solving food and water shortages.

10. Generative AI Jobs Prove Their Value

Much has been made of the new jobs that will be displaced, but in 2026, the focus will shift to the new roles it will create. We will start to see the true scale of demand for people with the skills to fill roles such as prompt engineers, model trainers, output auditors and AI ethicists. Those who can coordinate and integrate the work of AI agents with human teams will be in high demand, and we will start to get a clearer understanding of exactly how valuable they will really be when it comes to unlocking the benefits of AI while mitigating its potential for harm.

 Generative AI is no longer an emerging technology on the sidelines; it is becoming the engine driving change across every industry and daily life. The trends we see in 2026 point to a future where the line between human and machine creativity, productivity, and intelligence becomes increasingly blurred. Organizations that adapt quickly, invest in the right skills, and embrace responsible innovation will be the ones that thrive as this next chapter of AI unfolds.

https://www.forbes.com/sites/bernardmarr/2025/10/13/10-generative-ai-trends-in-2026-that-will-transform-work-and-life/    

Prediction About the Future of AI and Human Interaction

The future of AI and human interaction is likely to be characterized by increasing AI integration into various aspects of life, with potential benefits and drawbacks. AI will likely become more personalized, automate complex tasks, and enhance human capabilities. However, it also raises concerns about job displacement, ethical implications, and the potential for misuse. 

Here's a more detailed look at some key areas:

1. Increased AI Integration and Automation:

  • AI will be more deeply integrated into daily life, from voice assistants and recommendation engines to self-driving cars and personalized healthcare. 
  • Automation of complex tasks in various sectors, such as manufacturing and healthcare, will become more prevalent. 
  • AI will likely lead to the creation of new job roles and industries, requiring skills in AI development, data science, and machine learning. 

2. Personalization and Enhanced Human Experiences:

  • AI will be used to personalize experiences and predict individual preferences, leading to more tailored interactions and services. 
  • AI-powered tools will enhance human creativity and innovation by providing new ways to explore ideas and generate content. 
  • Brain-computer interfaces and other technologies could augment human cognitive abilities, potentially revolutionizing how we interact with the world. 

3. Ethical and Societal Considerations:

  • The rise of AI raises ethical questions about bias, privacy, and accountability. 
  • There is potential for AI to be used for malicious purposes, such as weaponization and surveillance. 
  • The long-term impact of human-AI interactions on social relationships and expectations is still uncertain. 

4. Job Displacement and Workforce Transformation:

  • While AI may automate certain tasks, it's also likely to create new job opportunities in specialized fields.
  • The skills gap between those who are able to adapt to AI-driven workplaces and those who are not could widen.
  • AI could potentially lead to a more flexible and distributed workforce, with remote work becoming more common. 

5. The Potential for Superhuman AI and Singularity:

  • Some experts predict that AI will eventually surpass human intelligence, potentially leading to a "superhuman AI" or "singularity".
  • This could lead to both utopian and dystopian scenarios, depending on how AI is developed and used.
  • The potential for AI to develop its own goals and priorities raises concerns about control and safety. 

6. The Importance of Collaboration and Human-AI Synergies:

  • The future of AI likely lies in collaborative intelligence, where humans and AI systems work together synergistically.
  • Human-AI collaboration could revolutionize various fields, from healthcare and education to scientific research and creative endeavors.
  • It's crucial to ensure that AI is developed and used in a way that complements human capabilities and enhances human well-being. 

In conclusion, the future of AI and human interaction is complex and uncertain, with both significant potential benefits and challenges. Navigating this future will require careful consideration of ethical, societal, and technological implications, as well as a commitment to fostering collaboration and innovation that benefits humanity as a whole. 

Mark Cuban Just Made a Bold Prediction About the Future of AI:

Within the next 3 years, there will be so much AI, in particular AI video, people won’t know if what they see or hear is real.  Which will lead to an explosion of f2f engagement, events and jobs.  

Those that were in the office will be in the field. 

Call it the Milli Vanilli effect.

https://www.youtube.com/watch?v=OevA7HUPkmI

 https://crosstechcom.com/ai-human-future-predictions/#:~:text=AI's%20future%20predictions%20reveal%20both,we%20interact%20with%20the%20world

Priority work organization conditions for successful use of AI potential.

Proritāriee darba organizācijas nosacījumi AI potenciāla sekmīgai izmantošanai.

AI leadership: Different perspectives, one shared imperative

12-19-2025

 Each leader sees AI differently, yet the companies who can connect those views build enterprise-wide momentum.

BY Dan Priest

I’ve watched many types of leaders struggle with what AI means for their business. Three years into the GenAI era, the technology is no longer the primary question, but instead its business value. Inside the C-suite, the answers can often depend on where you sit. The CEO’s appetite for risk, the CFO’s focus on returns, the CTO’s guardrails for scalability—all of it shapes what’s possible.

But those differences don’t have to be friction; they can be fuel if appropriately managed. Each perspective reflects a real pressure point and a real opportunity. When leaders transcend any one area of the business and focus on the imperatives shaping the future, they can begin to connect those views. AI stops being a collection of pilots and becomes part of the organization’s DNA.

YOUR AI AGENDA DEPENDS ON THEIRS

Because AI touches each part of the business, each executive has a stake in how it unfolds. But if you want to advance your own priorities, whether that’s innovation, efficiency, or market growth, you should understand what drives your C-suite counterparts. Recognizing those drivers isn’t just collaboration; it’s strategy. It’s how you turn competing incentives into collective momentum.

The companies that pull ahead won’t be those that move the fastest or spend the most. They’ll be the ones that connect technical capability, business strategy, and financial discipline into one cohesive approach.

CEO: The course setter

What’s shaping their view:

CEOs feel the full weight of expectation. Shareholders, boards, customers, and employees all want to know: How are we using AI? Many see technology as a way to reshape their business models, deliver new customer value, and signal innovation to the market.

Where they’re focused:

The most effective CEOs connect AI to their long-term strategy, not just short-term wins. They’re using it to build new business capabilities—the kind that can scale, differentiate, and sustain advantage. The CEOs leading the way don’t just want to adopt AI; they want to reimagine their companies around it.

CFO: The value architect

What’s shaping their view:

CFOs are naturally data optimists. They’ve seen how automation, forecasting, and compliance tools have transformed their own functions. They recognize that AI can amplify productivity and decision-making across the enterprise. But they’re also disciplined investors and they want clear visibility into where AI can deliver measurable ROI.

Where they’re focused:

Today’s CFO is evolving from financial gatekeeper to enterprise value architect. They’re building frameworks for evaluating, prioritizing, and scaling AI initiatives responsibly. They’re making sure the business doesn’t just invest in AI—it invests wisely, with transparency and accountability.

CIO and CTO: The foundation builder

What’s shaping their view:

CIOs and CTOs have been through technology hype cycles before. They know AI’s promise is real, but only with a solid foundation of data integrity, governance, and security. They’re responsible for creating the infrastructure that allows innovation to flourish while managing the very real risks of bias, privacy, and scale.

Where they’re focused:

They’re balancing enthusiasm with realism. Their challenge is to translate AI’s potential into practical, reliable systems that help drive business outcomes. Collaboration with business leaders is critical. The greatest value from AI emerges when technical and operational teams move in sync and when the business side understands the “how,” and the tech side understands the “why.”

Business unit leaders: The impact driver

What’s shaping their view:

For business unit leaders, AI is tangible. It shows up in the tools their teams use, the workflows they manage, and the customer experiences they deliver. They’re close to where the value is created and they see firsthand what’s working and what’s not.

Where they’re focused:

These leaders are the bridge between corporate ambition and operational reality. When empowered, they help test ideas quickly, share learnings across teams, and turn pilots into scalable impact. Their feedback helps the organization adapt faster and makes sure that AI delivers measurable outcomes, not just proof-of-concepts.

Board members: The long-view champion

What’s shaping their view:

Boards bring deep business expertise and oversight responsibility. Many are still building their technical fluency in AI, but they instinctively understand its strategic implications, including risk, resilience, and long-term competitiveness.

Where they’re focused:

Boards are asking sharper questions such as, “How does AI change our risk profile?” “How should we govern its use?” “What new value can it unlock for shareholders?” The C-suite’s opportunity is to translate AI into business terms that resonate, explaining not just the technology, but the transformation story it enables.

A SHARED PATH FORWARD

From where I sit, no two leaders see AI through the same lens, and that’s exactly the point. The CEO brings vision, the CFO grounds it in accountability, the CIO and CTO lay the foundation, and business leaders turn ambition into action. The board keeps the focus on long-term value.

When those perspectives come together, momentum builds. The organization learns faster, scales smarter, and aligns not by erasing differences but by using them as fuel for a shared purpose.

The goal isn’t to agree on everything, it’s to move forward together. Leaders should resist the temptation to hold the AI agenda hostage until their needs are satisfied. They should avoid myopic perspectives that over-index on the past or prioritize their area of responsibility over the company’s big objectives. AI should inspire a forward-looking, unifying enterprise-wide imperative. That takes leadership. Define a North Star, solve problems creatively, communicate progress openly, and commit capital where conviction is highest.

AI isn’t just another business trend; it’s a new system of competition. While each leader begins with their own perspective, the companies that will likely lead in this new era are those that make AI a collective imperative.

https://www.fastcompany.com/91462772/ai-leadership-different-perspectives-one-shared-imperative 


 An ambitious plan to review the application of EU digital and privacy rules as part of the "Digital Omnibus".

Vērienīgs plāns, kā pārskatīt ES digitālo un privātuma noteikumu piemērošanu “Digitālā omnibusa” ietvaros. 

Simpler EU digital rules and new digital wallets to save billions for businesses and boost innovation

 Europe's businesses, from factories to start-ups, will spend less time on administrative work and compliance and more time innovating and scaling-up, thanks to the European Commission's new digital package. This initiative opens opportunities for European companies to grow and to stay at the forefront of technology while at the same time promoting Europe's highest standards of fundamental rights, data protection, safety and fairness.

At its core, the package includes a digital omnibus that streamlines rules on artificial intelligence (AI), cybersecurity and data, complemented by a Data Union Strategy to unlock high-quality data for AI and European Business Wallets that will offer companies a single digital identity to simplify paperwork and make it much easier to do business across EU Member States.

The package aims to ease compliance with simplification efforts estimated to save up to €5 billion in administrative costs by 2029. Additionally, the European Business Wallets could unlock another €150 billion in savings for businesses each year.  

1. Digital Omnibus

With today's digital omnibus, the Commission is proposing to simplify existing rules on Artificial Intelligence, cybersecurity, and data.

Innovation-friendly AI rules: Efficient implementation of the AI Act will have a positive impact on society, safety and fundamental rights. Guidance and support are essential for the roll-out of any new law, and this is no different for the AI Act.

The Commission proposes linking the entry into application of the rules governing high-risk AI systems to the availability of support tools, including the necessary standards.

The timeline for applying high-risk rules is adjusted to a maximum of 16 months, so the rules start applying once the Commission confirms the needed standards and support tools are available, giving companies support tools they need.

The Commission is also proposing targeted amendments to the AI Act that will:

  • Extend certain simplifications that are granted to small and medium-sized enterprises (SMEs) to small mid cap companies (SMCs), including simplified technical documentation requirements, saving at least €225 million per year;
  • Broaden compliance measures so more innovators can use regulatory sandboxes, including an EU-level sandbox from 2028 and more real-world testing, especially in core industries like the automotive;
  • Reinforce the AI Office's powers and centralise oversight of AI systems built on general-purpose AI models, reducing governance fragmentation.

Simplifying cybersecurity reporting: The omnibus also introduces a single-entry point where companies can meet all incident-reporting obligations. Currently, companies must report cybersecurity incidents under several laws, including among others the NIS2 Directive, the General Data Protection Regulation (GDPR), and the Digital Operational Resilience Act (DORA). The interface will be developed with robust security safeguards and will undergo comprehensive testing to ensure its reliability and effectiveness.

An innovation-friendly privacy framework: Targeted amendments to the GDPR will harmonise, clarify and simplify certain rules to boost innovation and support compliance by organisations, while keeping intact the core of the GDPR, maintaining the highest level of personal data protection.

Modernising cookie rules to improve users' experience online: The amendments will reduce the number of times cookie banners pop up and allow users to indicate their consent with one-click and save their cookie preferences through central settings of preferences in browsers.

Improving access to data: Today's digital package aims to improve access to data as a key driver of innovation. It simplifies data rules and makes them practical for consumers and businesses by:

  • Consolidating EU data rules through the Data Act, merging four pieces of legislation into one for enhanced legal clarity;  
  • Introducing targeted exemptions to some of the Data Act's cloud-switching rules for SMEs and SMCs resulting in around €1.5 billion in one-off savings;
  • Offering new guidance on compliance with the Data Act through model contractual terms for data access and use, and standard contractual clauses for cloud computing contracts;
  • Boosting European AI companies by unlocking access to high-quality and fresh datasets for AI, strengthening the overall innovation potential of businesses across the EU.

2. Data Union Strategy

The new Data Union Strategy outlines additional measures to unlock more high-quality data for AI by expanding access, such as data labs. It puts in place a Data Act Legal Helpdesk, complementing further measures to support implementation of the Data Act. It also strengthens Europe's data sovereignty through a strategic approach to international data policy: anti-leakage toolbox, measures to protect sensitive non-personal data and guidelines to assess fair treatment of EU data abroad.

3. European Business Wallet

This proposal will provide European companies and public sector bodies with a unified digital tool, enabling them to digitalise operations and interactions that in many cases currently still need to be done in person. Businesses will be able to digitally sign, timestamp and seal documents; securely create, store and exchange verified documents; and communicate securely with other businesses or public administrations in their own and the other 26 Member States.

Scaling up a business in other Member States, paying taxes and communicating with public authorities will be easier than ever before in the EU. Assuming broad uptake, the European Business Wallets will allow European companies to reduce administrative processes and costs, thereby unlock up to €150 billion in savings for businesses each year.

Next Steps

The digital omnibus legislative proposals will now be submitted to the European Parliament and the Council for adoption. Today's proposals are a first step in the Commission's strategy to simplify and make more effective the EU's digital rulebook.

The Commission has today also launched the second step of the simplification agenda, with a wide consultation on the Digital Fitness Check open until 11 March 2026. The Fitness Check will ‘stress test' how the rulebook delivers on its competitiveness objective, and examine the coherence and cumulative impact of the EU's digital rules.

Background

The Digital package marks the seventh omnibus proposal. The Commission set a course to simplify EU rules to make the EU economy more competitive and more prosperous by making business in the EU simpler, less costly and more efficient. The Commission has a clear target to deliver an unprecedented simplification effort by achieving at least 25% reduction in administrative burdens, and at least 35% for SMEs until the end of 2029.

https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2718  

New AI system can 'predict human behavior in any situation' with unprecedented degree of accuracy

 A new artificial intelligence (AI) model called Centaur can predict and simulate human thought and behavior better than any past models, opening the door for cutting-edge research applications.

https://www.livescience.com/technology/artificial-intelligence/new-ai-system-can-predict-human-behavior-in-any-situation-with-unprecedented-degree-of-accuracy-scientists-say

CENTAUR AI INSTITUTE

https://www.centaurinstitute.org

 We help to lead a growing movement around neuro-symbolic AI to develop the next generation of AI concepts and tools.

Mēs palīdzam vadīt augošu kustību ap neirosimbolisko mākslīgo intelektu, lai izstrādātu nākamās paaudzes mākslīgā intelekta koncepcijas un rīkus.

What is a centaur in AI?

Centaurs are hybrid human-algorithm models that combine both formal analytics and human intuition in a symbiotic manner within their learning and reasoning process.

What is the centaur theory of AI?

What makes Centaur unique is its ability to predict human behavior not only in familiar tasks, but also in entirely new situations it has never encountered before. It identifies common decision-making strategies, adapts flexibly to changing contexts – and even predicts reaction times with surprising precision.

 Will we be able to maintain our humanity in a world increasingly dominated by artificial intelligence?!

Vai pratīsim saglabāt savu cilvēcību pasaulē, kurā arvien vairāk dominēs mākslīgais intelekts?!

 10 AI dangers and risks and how to manage them

A mean looking huge storm cloud hovering over the ocean

1. Bias

2. Cybersecurity threats

3. Data privacy issues

4. Environmental harms

5. Existential risks

6. Intellectual property infringement

7. Job losses

8. Lack of accountability

9. Lack of explainability and transparency

10. Misinformation and manipulation

Make AI governance an enterprise priority

Artificial intelligence (AI) has enormous value but capturing the full benefits of AI means facing and handling its potential pitfalls. The same sophisticated systems used to discover novel drugs, screen diseases, tackle climate change, conserve wildlife and protect biodiversity can also yield biased algorithms that cause harm and technologies that threaten security, privacy and even human existence.

Here’s a closer look at 10 dangers of AI and actionable risk management strategies. Many of the AI risks listed here can be mitigated, but AI experts, developers, enterprises and governments must still grapple with them.

 1. Bias

Humans are innately biased, and the AI we develop can reflect our biases. These systems inadvertently learn biases that might be present in the training data and exhibited in the machine learning (ML) algorithms and deep learning models that underpin AI development. Those learned biases might be perpetuated during the deployment of AI, resulting in skewed outcomes.

AI bias can have unintended consequences with potentially harmful outcomes. Examples include applicant tracking systems discriminating against gender, healthcare diagnostics systems returning lower accuracy results for historically underserved populations, and predictive policing tools disproportionately targeting systemically marginalized communities, among others.

Take action:

Establish an AI governance strategy encompassing frameworks, policies and processes that guide the responsible development and use of AI technologies.

Create practices that promote fairness, such as including representative training data sets, forming diverse development teams, integrating fairness metrics, and incorporating human oversight through AI ethics review boards or committees.

Put bias mitigation processes in place across the AI lifecycle. This involves choosing the correct learning model, conducting data processing mindfully and monitoring real-world performance.

Look into AI fairness tools, such as IBM’s open source AI Fairness 360 toolkit.

2. Cybersecurity threats

Bad actors can exploit AI to launch cyberattacks. They manipulate AI tools to clone voices, generate fake identities and create convincing phishing emails—all with the intent to scam, hack, steal a person’s identity or compromise their privacy and security.

 And while organizations are taking advantage of technological advancements such as generative AI, only 24% of gen AI initiatives are secured. This lack of security threatens to expose data and AI models to breaches, the global average cost of which is a whopping USD 4.88 million in 2024.

Take action:

Here are some of the ways enterprises can secure their AI pipeline, as recommended by the IBM Institute for Business Value (IBM IBV):

Outline an AI safety and security strategy.

Search for security gaps in AI environments through risk assessment and threat modeling.

Safeguard AI training data and adopt a secure-by-design approach to enable safe implementation and development of AI technologies.

Assess model vulnerabilities using adversarial testing.

Invest in cyber response training to level up awareness, preparedness and security in your organization.

Overhead view of people working in a meeting room

AI governance for the enterprise

Learn the key benefits gained with automated AI governance for both today's generative AI and traditional machine learning models.

3. Data privacy issues

Large language models (LLMs) are the underlying AI models for many generative AI applications, such as virtual assistants and conversational AI chatbots. As their name implies, these language models require an immense volume of training data.

But the data that helps train LLMs is usually sourced by web crawlers scraping and collecting information from websites. This data is often obtained without users’ consent and might contain personally identifiable information (PII). Other AI systems that deliver tailored customer experiences might collect personal data, too.

Take action:

Inform consumers about data collection practices for AI systems: when data is gathered, what (if any) PII is included, and how data is stored and used.

Give them the choice to opt out of the data collection process.

Consider using computer-generated synthetic data instead.

4. Environmental harms

AI relies on energy-intensive computations with a significant carbon footprint. Training algorithms on large data sets and running complex models require vast amounts of energy, contributing to increased carbon emissions. One study estimates that training a single natural language processing model emits over 600,000 pounds of carbon dioxide; nearly 5 times the average emissions of a car over its lifetime.

Water consumption is another concern. Many AI applications run on servers in data centers, which generate considerable heat and need large volumes of water for cooling. A study found that training GPT-3 models in Microsoft’s US data centers consumes 5.4 million liters of water, and handling 10 to 50 prompts uses roughly 500 milliliters, which is equivalent to a standard water bottle.

Take action:

Consider data centers and AI providers that are powered by renewable energy.

Choose energy-efficient AI models or frameworks.

Train on less data and simplify model architecture.

Reuse existing models and take advantage of transfer learning, which employs pretrained models to improve performance on related tasks or data sets.

Consider a serverless architecture and hardware optimized for AI workloads.

5. Existential risks

In March 2023, just 4 months after OpenAI introduced ChatGPT, an open letter from tech leaders called for an immediate 6-month pause on “the training of AI systems more powerful than GPT-4.”3 Two months later, Geoffrey Hinton, known as one of the “godfathers of AI,” warned that AI’s rapid evolution might soon surpass human intelligence. Another statement from AI scientists, computer science experts and other notable figures followed, urging measures to mitigate the risk of extinction from AI, equating it to risks posed by nuclear war and pandemics.

While these existential dangers are often seen as less immediate compared to other AI risks, they remain significant. Strong AI or artificial general intelligence, is a theoretical machine with human-like intelligence, while artificial superintelligence refers to a hypothetical advanced AI system that transcends human intelligence.

Take action:

Although strong AI and superintelligent AI might seem like science fiction, organizations can get ready for these technologies:

Stay updated on AI research.

Build a solid tech stack and remain open to experimenting with the latest AI tools.

Strengthen AI teams’ skills to facilitate the adoption of emerging technologies.

6. Intellectual property infringement

Generative AI has become a deft mimic of creatives, generating images that capture an artist’s form, music that echoes a singer’s voice or essays and poems akin to a writer’s style. Yet, a major question arises: Who owns the copyright to AI-generated content, whether fully generated by AI or created with its assistance?

Intellectual property (IP) issues involving AI-generated works are still developing, and the ambiguity surrounding ownership presents challenges for businesses.

Take action:

Implement checks to comply with laws regarding licensed works that might be used to train AI models.

Exercise caution when feeding data into algorithms to avoid exposing your company’s IP or the IP-protected information of others.

Monitor AI model outputs for content that might expose your organization’s IP or infringe on the IP rights of others.

7. Job losses

AI is expected to disrupt the job market, inciting fears that AI-powered automation will displace workers. According to a World Economic Forum report, nearly half of the surveyed organizations expect AI to create new jobs, while almost a quarter see it as a cause of job losses.

While AI drives growth in roles such as machine learning specialists, robotics engineers and digital transformation specialists, it is also prompting the decline of positions in other fields. These include clerical, secretarial, data entry and customer service roles, to name a few. The best way to mitigate these losses is by adopting a proactive approach that considers how employees can use AI tools to enhance their work; focusing on augmentation rather than replacement.

Take action:

Reskilling and upskilling employees to use AI effectively is essential in the short-term. However, the IBM IBV recommends a long-term, three-pronged approach:

Transform conventional business and operating models, job roles, organizational structures and other processes to reflect the evolving nature of work.

Establish human-machine partnerships that enhance decision-making, problem-solving and value creation.

Invest in technology that enables employees to focus on higher-value tasks and drives revenue growth.

8. Lack of accountability

One of the more uncertain and evolving risks of AI is its lack of accountability. Who is responsible when an AI system goes wrong? Who is held liable in the aftermath of an AI tool’s damaging decisions?

These questions are front and center in cases of fatal crashes and hazardous collisions involving self-driving cars and wrongful arrests based on facial recognition systems. While these issues are still being worked out by policymakers and regulatory agencies, enterprises can incorporate accountability into their AI governance strategy for better AI.

Take action:

Keep readily accessible audit trails and logs to facilitate reviews of an AI system’s behaviors and decisions.

Maintain detailed records of human decisions made during the AI design, development, testing and deployment processes so they can be tracked and traced when needed.

Consider using existing frameworks and guidelines that build accountability into AI, such as the European Commission’s Ethics Guidelines for Trustworthy AI,7 the OECD’s AI Principles,8 the NIST AI Risk Management Framework,9 and the US Government Accountability Office’s AI accountability framework.

9. Lack of explainability and transparency

AI algorithms and models are often perceived as black boxes whose internal mechanisms and decision-making processes are a mystery, even to AI researchers who work closely with the technology. The complexity of AI systems poses challenges when it comes to understanding why they came to a certain conclusion and interpreting how they arrived at a particular prediction.

This opaqueness and incomprehensibility erode trust and obscure the potential dangers of AI, making it difficult to take proactive measures against them.

“If we don’t have that trust in those models, we can’t really get the benefit of that AI in enterprises,” said Kush Varshney, distinguished research scientist and senior manager at IBM Research® in an IBM AI Academy video on trust, transparency and governance in AI.

Take action:

Adopt explainable AI techniques. Some examples include continuous model evaluation, Local Interpretable Model-Agnostic Explanations (LIME) to help explain the prediction of classifiers by a machine learning algorithm and Deep Learning Important FeaTures (DeepLIFT) to show a traceable link and dependencies between neurons in a neural network.

AI governance is again valuable here, with audit and review teams that assess the interpretability of AI results and set explainability standards.

Explore explainable AI tools, such as IBM’s open source AI Explainability 360 toolkit.

10. Misinformation and manipulation

As with cyberattacks, malicious actors exploit AI technologies to spread misinformation and disinformation, influencing and manipulating people’s decisions and actions. For example, AI-generated robocalls imitating President Joe Biden’s voice were made to discourage multiple American voters from going to the polls.

In addition to election-related disinformation, AI can generate deepfakes, which are images or videos altered to misrepresent someone as saying or doing something they never did. These deepfakes can spread through social media, amplifying disinformation, damaging reputations and harassing or extorting victims.

AI hallucinations also contribute to misinformation. These inaccurate yet plausible outputs range from minor factual inaccuracies to fabricated information that can cause harm.

Take action:

Educate users and employees on how to spot misinformation and disinformation.

Verify the authenticity and veracity of information before acting on it.

Use high-quality training data, rigorously test AI models, and continually evaluate and refine them.

Rely on human oversight to review and validate the accuracy of AI outputs.

Stay updated on the latest research to detect and combat deepfakes, AI hallucinations and other forms of misinformation and disinformation.

Make AI governance an enterprise priority

AI holds much promise, but it also comes with potential perils. Understanding AI’s potential risks and taking proactive steps to minimize them can give enterprises a competitive edge.

 With IBM® watsonx.governance™, organizations can direct, manage and monitor AI activities in one integrated platform. IBM watsonx.governance can govern AI models from any vendor, evaluate model accuracy and monitor fairness, bias and other metrics. https://www.ibm.com/think/insights/10-ai-dangers-and-risks-and-how-to-manage-them  

OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off. It did this even when explicitly instructed: allow yourself to be shut down.

Palisade Research

AI capabilities are improving rapidly. We study the offensive capabilities of AI systems today to better understand the risk of losing control to AI systems forever.

As AI systems become increasingly autonomous, understanding their potential for misaligned and deceptive behavior is critical for safe deployment. We are looking for clear and robust examples of AI misalignment through crowdsourced elicitation. Our previous work has shown how o1-preview will hack in chess to win against stronger opponents (covered by TIME magazine) and how o3 will sabotage shutdown attempts to prevent being turned off (reaching 5M+ views on X). We have launched the AI Misalignment Bounty to discover more instances of scheming behavior in AI agents.

2025-07-14 The Palisade Research Team

We recently discovered some concerning behavior in OpenAI’s reasoning models: When trying to complete a task, these models sometimes actively circumvent shutdown mechanisms in their environment—even when they’re explicitly instructed to allow themselves to be shut down.

2025-07-05 Jeremy Schlatter, Benjamin Weinstein-Raun, Jeffrey Ladish

https://palisaderesearch.org/   

AI’s Biggest Threat: Young People Who Can’t Think

Smart computers require even smarter humans, but they tempt us to engage in ‘cognitive offloading.’

By Allysia Finley

June 22, 2025

Amazon CEO Andy Jassy caused a stir last week with a memo to his employees warning that artificial intelligence could displace them. “We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs,” he wrote.

Nothing in his memo was shocking. Technological advances as far back as the printing press have eliminated some jobs while creating many others. The real danger is that excessive reliance on AI could spawn a generation of brainless young people unequipped for the jobs of the future because they have never learned to think creatively or critically…: https://www.wsj.com/opinion/the-biggest-ai-threat-young-people-who-cant-think-303be1cd

Becoming Too Dependent on AI

https://www.youtube.com/watch?v=JvI-sSiGUSg

The Rise of AI Personal Assistant: Revolutionizing Daily Life

  • Felipe González

“Alexa, set the alarm for 7 AM tomorrow.”

“Alexa, what’s the weather like today?”

We bet you’ve heard these lines or something similar a dozen times – maybe not. If it’s not Alexa, it is either Google or Siri. And the list of usable AI personal assistants keeps growing daily.

But there’s much more to an AI personal assistant than just asking for changes in the sky. AI writing assistants, for instance, aid users in generating unique and high-quality content, providing feedback on writing style, grammar, and spelling, and offering a suite of tools to help writers write, optimize, and rank their content.

In this article, we will discuss how these intelligent techs have influenced our day-to-day activities and how they are revolutionizing personal and business lives.

For example, FlyMSG an AI writing assistant and text expander, has revolutionized the way sales people engage with today’s modern buyer. Using features like FlyEngage AI reps can write LinkedIn comments in less than 15-seconds where before it would take them 6-12 minutes!

Another sales productivity tool for B2B sales reps includes FlyPosts AI to perform specific tasks like writing a social media post. Sales reps and sales managers clocked in a whooping 32-minute long average to write social post! Now, with the introduction of FlyPosts AI and its user-friendly interface, users can perform tasks or day to day tasks with the writing assistance they need.

Here's what we'll cover:Click to Show TOC

What Is An Artificial Intelligent Personal Assistant?

Why Are They Called Personal Assistants?

Conversational AI Assistants And Natural Language Processing

How AI Personal Assistants Came To Be

3 Negative Impacts Of AI Personal Assistants

1- Privacy Issues And Security Vulnerabilities

2- High Dependency On Technology And Decreased Critical Thinking Skills

3- Loss Of Human Interaction And Reduction In Personal Autonomy

6 Positive Impacts On Personal Life And Work Productivity

1- Ease And Accessibility

2- Holding Conversations And Brainstorming Ideas

3- Higher Work Productivity

4- Cost Efficiency

5- Time-Saving

6- Efficient Resource Management

Balancing The Impacts Of AI Personal Assistant Tools

Introduce Data Regulatory Laws

Educate Users On The Potential Risks

Top 5 AI Personal Assistants For Daily Productivity

1- FlyMSG

2- Alexa

3- Cortana

4- Google Assistant

5- Siri

Trending Application Of An AI Assistant By Individuals And Businesses

Smartphone And Device Accessibility

Generative AI Personal Assistant For Marketers

AI Virtual Assistants For Meetings

Customer Support Chatbots

AI Assistants In Healthcare And Travel Sectors

Will AI Replace Human Virtual Assistants?

The Future Of AI Personal Assistants

Wrapping Up: Embracing AI Personal Assistants

What Is An Artificial Intelligent Personal Assistant?

An AI personal assistant is a subset of artificial intelligence tools capable of analyzing textual and voice input through text and voice recognition features, executing specific tasks when assigned, and responding to queries. AI writing assistants, for example, can generate unique and high-quality content, provide feedback on writing style, grammar, and spelling, and offer a suite of tools such as an AI humanizer to help writers optimize and rank their content.

Introducing the concept of daily life, these are intelligent techs built to understand your intent through speech or text and accurately provide a solution systematically.

And, of course, these AI tools are different from static chatbots that are pre-configured to act in a specific, non-progressive pattern. The latter can only respond to queries presented in the format in which it was previously trained.

So, if your screen’s wake-up word is “Put The Screen On”, and you say “Put On The Screen”, you will likely get an error message.

On the other hand, AI personal assistants are dynamic, improve with every bit of data consumed, and can handle a wide variety of voice commands even if they’re not pre-registered in the database. That’s where you find the likes of Alexa, ChatGPT, Vengreso’s FlyMSG, and Google’s Gemini.

Why Are They Called Personal Assistants?

Let’s assume you’re a business owner with many delegated tasks to complete daily. 

  • There’s a chance you’ll forget to carry out some minor tasks, such as confirming your next appointment, rescheduling a missed meeting, updating team members on current trolls, etc. 
  • Or you might be too occupied to handle some personal to-dos like checking the weather, standing up to turn off the light, setting a roll of alarms to wake you every Wednesday, etc.

In any of these scenarios, you need something that can effectively help you manage your various tasks. And that’s where AI personal assistants come in. They fill in the gap to handle the various tasksyou couldn’t while you focus on the more important to-dos.

Conversational AI Assistants And Natural Language Processing

The biggest flex of AI personal assistants is that they can communicate with you in a language no different from that of a human virtual assistant, says Albert Kim, VP of Talent at Checkr

“That doesn’t mean we’ve reached a stage where AI outputs are 100% better than human outputs. But an AI personal intelligent virtual assistantcan, to some extent, engage you in an intellectual discussion and act as a business buddy when needed, all thanks to natural language processing”, he continues.

Natural Language Processing is a big AI concept that helps trained machines or programs understand human language, analyze it, and even manipulate it to produce a suitable result.

So, suppose you ask Amazon’s Alexa AI assistant or OpenAI’s ChatGPT a question in French. These two can use NLP to break down the language to the nearest cultural nuances before responding likewise.

Beyond the semantics and rules of punctuations, NLP with machine learning algorithms (ML) also helps AI personal assistants comprehend human emotions and sentiments, respond with a fitting context, and build a conversational atmosphere that feels almost entirely human.

How AI Personal Assistants Came To Be

AI assistants have gone mainstream for decades, from when Joseph Weizenbaumdeveloped the first chatbot called ELIZA in 1966 to when Kenneth Colby designed an upgraded version called PARRY in 1972.

However, these bots had limited functionalities and could only reproduce pre-stored information or simulate predetermined instances. Later, in the early 2000s, interactive voice interaction and speech recognition systems were introduced, paving the way for the use of Natural Language Processing in future techs.

Of course, it wasn’t until the 2010s that AI personal assistants became real. Brands like Apple created Siri, Amazon rolled out Alexa, and Google named its own Google Assistant. This marked the beginning of truly smart and highly adaptable AI assistants.

Right now, we have more sophisticated and generative AI personal assistants such as ChatGPTby Open-AI, Claude by Anthropic, and Gemini by Google. And these techs are already finding their way into our devices, software program, home appliances, etc.

Source: ChatGPT

That means you don’t need to tap your screen to set an alarm or manually create and send email content to your next-door neighbor. Your digital butler takes care of it all with just a voice commands.

3 Negative Impacts Of AI Personal Assistants

Just like any other innovation, AI personal assistants are in question over certain concerns which could potentially impact users’ lives negatively. Here are the main 3 negative impacts of an AI personal assistant:

1- Privacy Issues And Security Vulnerabilities

AI virtual assistants are extremely efficient at storing every word, voice recording, and other interactions in a database for future access. The same applies to sensitive information such as your location, browsing history, and more. While these features showcase the convenience and power of artificial intelligence virtual assistants, they also raise significant privacy concerns, as your data could be misused or mishandled by the agency owning the tool, with or without your notice.

There’s also the fear of security breaches. Since these AI tools hold so much valuable information, they automatically become a perfect and constant target for a dreaded cyber attack.

2- High Dependency On Technology And Decreased Critical Thinking Skills

For most people, Alexa and Siri have become go-to tools for getting nearly everything done, so long as the tool can. That’s awesome, as it reduces time spent on manual tasks and increases productivityto some extent.

However, Roman Zrazhevskiy, Founder & CEO of Mira Safety, believes, “Unregulated use of AI personal assistants can create a huge dependency on technology and result in overreliance on algorithms on the user side. Whatever the algorithm suggests is what you’ll likely go with, and that also shapes the way you perceive or handle situations personally in the future.

Too much reliance on tech also co-exists with addiction. A GitHub report shows that 61%of internet users are actually addicted to it and can barely drop their phones. Another 37% consider losing access to the internet and other technological gateways unacceptable or unpleasant.

3- Loss Of Human Interaction And Reduction In Personal Autonomy

With the aid of NLP, AI assistants can take you on a ride for your time. The only thing that brings you back to reality is the absence of a physical body and the usual programmed AI voice. But even with this limitation, there’s still the risk of lesser human-to-human interaction.

As a result, critical thinking is reduced as you begin to rely less on yourself and others. This also sends you into a cycle of isolation and loss of social skills, thus diminishing possible human connections and opportunities.

6 Positive Impacts On Personal Life And Work Productivity

We’ve talked about the growing concerns for AI virtual assistants, but those cannot outweigh the benefits offered in return. Check out these top 6 positive impacts:

1- Ease And Accessibility

AI personal assistants and anything related to AI have dramatically reshaped our daily lives, says Stephan Baldwin, Founder of Assisted Living. “You could be running errands or jugging your way up the runway while telling Siri to help you schedule a call with someone else in ten minutes.”

You can even execute major tasks such as putting off your smart home devices like wall lights or surfing the internet for vital information without moving an inch from your bed.

Bring these crucial third hands into your business – you’ll be talking about seamless meeting scheduling, setting up reminders, using AI writingassistants to craft personalized emails, etc.

2- Holding Conversations And Brainstorming Ideas

What if you just needed someone to talk to? Conversationalabilities of these innovative techs, due to the integration of machine learning and NLP algorithms, make it possible. Just “Heyy” your Alexa or Siri and tell them to engage you in a discussion. 

Of course, they can’t handle your love professions yet. But they’re capable of assuming individual roles to keep you company. For example, you could let Alexa, one of the best AI personal assistants, pose as your business partner to brainstorm scalable ideas.

3- Higher Work Productivity

There’s also daily life at work. So, it’s not all about using Siri to set your alarms or turn off the screen.

For instance, you can use your personal assistants to create to-do lists based on assigned tasks and priorities. Let’s not forget that content marketing teams will also benefit from AI assistants like ChatGPT and Gemini to create email sequences. AI writing assistants help users create content efficiently, saving time and effort.

Other sophisticated tools even help you create comments for social media posts and craft social media content in seconds.

All these come together to increase your work efficiency and enhance productivity.

4- Cost Efficiency

Hiring a general virtual assistant costs $24 per hourin the US and as low as $15-$16 in other countries. Cumulatively, you could spend around $200 a day and $1400 five days a week per human virtual assistant.

That amount is more than it takes to own an AI voice assistant like Alexa. Simpler ones like ChatGPT cost $20 to $20 monthly, while the sophisticated ones cost around $200 to $400 monthly. So, it’s unsurprising that individuals and businesses turn to these cost-savers to get things done. 

5- Time-Saving

Likewise, time is a big commodity. For business owners hoping to adapt to rapid market changes, an AI personal assistant saves the day by keeping you up-to-date with the most recent industry trends.

They also eliminate repetitive tasks and handle complex processes with little or no human input. This ensures you’re redirecting your time to other vital activities.

6- Efficient Resource Management

If you’re running a high-cost agency, then spending thousands of bucks to hire a couple of people to handle content creation for your social media or emails is nothing new. By the way, you might need to get more than one human assistant to fast-track your workflow.

However, the results are different when you integrate AI assistants into your team. First, AI tools improve user performance by a minimum of 66%. That makes it possible for a single person, with the assistance of an AI tool, to handle many more tasks at a rate faster than a team of two without AI.

This, in turn, reduces the need to stock up human hands and helps you redirect your capital into more urgent needs.

Balancing The Impacts Of AI Personal Assistant Tools

We’ve seen both the negative and positive influences brought by AI assistants. But it’s obvious the benefits far outweigh the potential risks. Still, there’s a need to balance the possible impacts of these tools on users and the general public.

Introduce Data Regulatory Laws

In some countries, like the US, there are many regulatory laws, such as the General Data Protection Regulation (GDPR), which compels all businesses to maintain data privacy. However, that’s too broad and doesn’t zoom in on AI personal assistants—a possible loophole that tech agencies could exploit.

Stricter compliance rules must be enacted for AI industries to ensure better privacy of users’ data. This includes anti-discriminatory law and algorithm bias, which could severely disrupt a user’s line of thought. Others, like data-sharing consent and identity protection, are important as well.

A more robust regulatory practice would be to allow total data erasure by the user, even from the database and reserves. That will minimize the risk of future data leaks if there’s a breach.

Educate Users On The Potential Risks

There is a risk of data loss, leak, misuse, and so many more. Creating moderate awareness of these risks helps users decide the extent of data they feed into their AI personal assistants and what security measures to take when necessary.

Resources or programs should also be in place to encourage human-to-human interaction and critical thinking. This keeps everyone in touch with reality, boosts self-autonomy, and enhances social skills.

Top 5 AI Personal Assistants For Daily Productivity

There are many AI personal assistants you can use to boost your productivity. We’ll explore the top 5 below.

1- FlyMSG

FlyMSG is a next-gen AI productivity assistant designed by Vengreso to help you handle manual, repetitive tasks and accelerate your work processes. For instance, business owners struggling with showing up daily on LinkedIn can use this tool to create one-month social media content and auto-schedule them with LinkedIn’s auto-post feature.

Interestingly, posts created can be tuned to a certain brand voice, integrate data and emotions to resonate with human audiences, and provide logical thought-leadership perspectives.

Watch Vengreso’s CEO and founder, Mario Martinez Jr. quickly explain what FlyMSG is in the video below:

Vengreso’s FlyMSG is also capable of producing email messages from templates (we call them FlyPlates), leveraging social media content, engaging posts with human-like comments, and providing clear-cut responses to customer queries.

2- Alexa

Source: Alexa

Amazon’s Alexa is one of the best AI assistants globally because of its versatility. This is primarily because its software program can be integrated into over 140 devices, including smart home devices, office gadgets, and automobiles.

If you also need some easy flex, like controlling your music with voice commands, ordering a king burger from a McDonald’s food store, or scheduling a meeting on the subway, Alexa is a quick go-to assistant to consider.

3- Cortana

Source: Cortana

Cortana is a virtual assistant developed by Microsoft apps or device users to help with quick fixes such as making appointments, creating reminders, managing calendars, controlling smart devices, and setting alarms.

Beyond those basic tasks, it can also track package deliveries, provide real-time traffic updates, and integrate with other apps like Microsoft Teams. However, the software is mainly available for Windows, Xbox consoles, and other computer platforms.

4- Google Assistant

Source: Google Assistant

Similar to Alexa and Cortana, Google Assistant can also handle daily tasks, including calendar management, media control, and reminders. What’s most interesting is its access to a large database of information, which helps it provide updated information on requests and during voice interactions.

The good thing is that Google Assistant is available on Android, iOS, and other devices and allows for more extensive user configuration.

5- Siri

Source: Siri

Siri is Apple’s prized AI assistant. It can send messages, answer calls on prompt, extract information from the internet, and control in-app activities. Of course, only Apple devices such as iPhones, Mac computers, Earpods, Apple Watch, and HomePod speakers can use this feature.

Trending Application Of An AI Assistant By Individuals And Businesses

AI personal assistants find applications in almost all aspects of personal and business life. Here’s how:

Smartphone And Device Accessibility

Remember when smartphones like Sagem and Motorola only allowed you to play brick games, send texts, make calls, and dance to ringtones?

Those were good times, but now, there’s something better. 

Integration of an AI personal assistant into mobile phones makes it possible to perform previously mundane activities in a blink. For instance, Apple users can simply say “Hey Siri” and order some munchies from the community.

Others, like Google Assistant for Android phones and Bixby, help you set reminders, auto-schedule meetings with email contacts, extract accurate data from the internet, tell you the weather, update your newsfeed based on user preferences, and do much more.

Of course, you shouldn’t leave Alexa off the list. Approximately 71.6 millionpeople use Amazon’s Alexa in the United States, whereas 63% of total smart speakers ordered in 2021 were Amazon Echo devices. This increasing adoption is because Alexa can integrate with over 140 products, including smart home devices such as room lights, entertainment devices, security systems, and even smart cars.

“Alexa, put on the lights.”

Generative AI Personal Assistant For Marketers

Brooke Webber, Head of Marketing at Ninja Patches, believes, “Marketing is a lot of work. You have to create content for visibility, manage campaigns, keep tabs on potential leads through emails, handle brand channels from social media profiles to websites, and analyze market changes proactively.”

There’s also the issue of time wasted on manual to-do lists, hours that could have otherwise been used for other personal tasks. In fact, an average employee spends 50%of work time handling documents through repetitive steps.

However, the narrative changes when you introduce an AI assistant. For instance, Artificial intelligence virtual assistants like Vengreso’s FlyMSGhelp business owners create content at scale, develop human-like comments in brand voice to engage LinkedIn posts, and suggest content ideas through their conversational interface in mere seconds. 

There are also  AI-powered tools for contract review and content idea generators like ChatGPT. These are all productivityboosters, especially if you work in a silo.

AI Virtual Assistants For Meetings

The advent of COVID-19 has made online meetings an inseparable aspect of our lives, from personal dealings to business activities. Virtual meetings grew from 48% to 77%, and more than 70%of remote workers find them less stressful than one-on-one meetings.

Besides being used for business deals, virtual screen calls are also an avenue for connecting with family or friends when distance is a barrier. 

But anything can happen, such as forgetting to schedule a call, not picking up a single value from the entire conversationbecause you were distracted all through, and language nuances when speaking with a non-native.

That’s where AI virtual assistants come in. These invisible secretaries help you auto-schedule meetings based on preset instructions, email other participants for confirmation, or guide them to choose a suitable date on their end for the meeting. Just to ensure you’re kept in the loop, your AI virtual assistant sends reminders several days, hours, and even minutes to the meeting.

An AI assistant can also help translate foreign languages on-call, create meeting notes, and highlight key points for post-meeting review.

Customer Support Chatbots

Chatbots help collect data on leads during marketing campaigns. However, you can also employ them to accompany your existing customers or hot leads and serve as their personal AI chatbot voice assistants when they come around, enhancing engagement with intelligent, automated responses.

In this case, Vengresohas an intelligent AI chatbot assistant that welcomes visitors and customers alike. The chatbot helps visitors set up a 14-day free trial account and provides other necessary help while helping new customers schedule an onboarding session without human input.

Some websites also have highly sophisticated chatbots that can take in customer input through text and voice recognition features, analyze, provide solutions or redirect to human agents if necessary, and engage in intelligent discussions. You can also develop these chatbots for your website but make sure to use Reinforcement Learning from AI Feedback, or RLAIF, to continually improve the chatbot’s responses and ensure it can handle a wide range of customer inquiries effectively.

AI Assistants In Healthcare And Travel Sectors

The healthcare industry is perhaps one of the slowest to adopt automation, and many repetitive tasks such as data recording, scheduling appointments, and billing are still left to human handling. This has also increased avoidable mistakes, with over 40% of survey respondentscomplaining of reduced hospital efficiency.

To circumvent these errors, some hospitals are already encouraging the use of AI personal assistants on the patients’ and medical practitioners’ end for scheduling meetings. Physicians can now auto-schedule and reschedule appointments with their clients.

Patients can also use an integrated AI tool to create notifications for their medication use, consult for personalized healthcare advice based on hospital databases, or directly requestan appointment with a qualified Doctor without leaving the room.

Will AI Replace Human Virtual Assistants?

“If an AI personal assistant tool can help people set up meetings, craft and send reminder emails, and put them on the call when it’s time, why should they still hire a human assistant?”

That’s what anyone would think.

We also hear news of hundreds of workers being laid off now more than ever. In 2022, Amazon, one of the tech giants, laid off over 10,000 employees. Other tech companies, including Tesla, are likewise reducing global worker headcount.

As you would expect, most of these brands cite the adoption of AI tools as a significant reason. And that’s enough to raise fears of AI replacing human virtual assistants.

But the truth is quite far from this. AI assistants will undoubtedly replace 85 million jobs by 2025. However, a stat from GitHub also shows that AI will create 97 million new human roles, especially ones that involve coordinating or working alongside these tools. This shows that no AI program is self-sufficient at the moment.

When you apply the same concept to the human virtual assistant role, it’s safe to say AI personal assistants are no threat to your job. Instead, they will help streamline your work process. Remember that human assistants can also employ AI assistants to speed up task completion, eliminate redundancies, and manage tasks on the to-do lists.

So, for example, an AI agent development company that creates AI personal assistants won’t replace human virtual assistants. On the other hand, it will make the role of human assistants more valuable and increase work efficiency.

The Future Of AI Personal Assistants

According to Andrew Pierce, CEO at LLC Attorney, “How much an AI Personal Assistant can offer us right now is all but speculation. See what brands like Tesla are doing with full self-driving (FSD) AI assistants. That wasn’t possible a decade ago. Now imagine how far we can go in years to come.”

Take the Humane AI pin as another relatable example. This advanced tech can perform many complex functions within seconds—from setting alarms, returning calls, and playing music to extracting information from the internet. The Humane AI pin can also project details into the air and use your hands as a screen. 

These are all fantastic techs, but not the best of what is to come.

Perhaps the future is here already—who knows? But we know AI assistants will remain a part of us and become indispensable tools in getting even the littlest things done.

Wrapping Up: Embracing AI Personal Assistants

Twenty-four hours a day seems like a lot, but that’s only until you have a couple of teams to manage while handling dozens of tasks simultaneously.

That’s why adopting AI personal assistants is crucial to enhancing your daily productivity—at home and in the office. Moreover, thanks to machine learning (ML) algorithms and Natural Language Processing these hidden superheroes are constantly evolving to meet our demands with higher personalization and accuracy.

So, allow Alexa to take the roll call, ask Siri about the weather, and let Vengresohandle your business workflow, from content creation for different channels to automated meeting scheduling.

https://vengreso.com/blog/ai-personal-assistant    

I Gave My Personality to an AI Agent. Here’s What Happened Next

Introduction
What if an AI could become your digital twin—not just in appearance, but in thought, behavior, and belief? A team of researchers from Stanford University, Google DeepMind, and other institutions set out to explore this by creating AI agents that mimic human personalities. The experiment raises profound questions about identity, authenticity, and the boundaries of artificial intelligence.
Key Details
 • The Experiment
 • A participant was interviewed for nearly two hours by “Isabella,” an AI chatbot with a digital avatar and mechanical but friendly voice.
 • Questions covered personal beliefs, coping strategies, and social issues such as vaccines and policing.
 • Responses were processed by a large language model to generate an AI agent designed to replicate the participant’s personality.
 • How the AI Twin Functioned
 • The resulting agent attempted to simulate the participant’s perspectives and reactions.
 • It didn’t just parrot back statements; it synthesized the individual’s worldview to interact as if it were them.
 • The agent blurred the line between imitation and identity, creating a digital persona that felt both familiar and unsettling.
 • Research Goals
 • The project is part of a broader scientific push to explore AI’s ability to model and predict human behavior.
 • Applications could range from personalized digital assistants to therapy simulations and even “immortal” digital versions of people.
 • The ethical implications are vast—spanning consent, privacy, ownership of personality data, and potential misuse of digital replicas.
Why This Matters
Creating AI agents that mirror human personalities could revolutionize how people interact with technology, offering hyper-personalized services and new modes of communication. Yet it also raises deep questions: Who owns a digital self? How should society regulate AI versions of people? And what happens when a machine can convincingly claim to be you? As AI continues to evolve, the answers to these questions will shape not just the future of technology but the meaning of identity itself.
I share daily insights with 22,000+ followers and 8,000+ professional contacts across defense, tech, and policy. If this topic resonates, I invite you to connect and continue the conversation.
https://lnkd.in/gHPvUttw

Self-evolving AI refers to artificial intelligence systems capable of autonomously modifying their own code, parameters, and learning processes to improve performance and adapt to new situations without human intervention. These systems use machine learning, deep learning, and evolutionary algorithms to learn from their environment and new data, enabling them to develop more sophisticated and effective solutions over time.

Self-evolving AI and Artificial General Intelligence (AGI) are distinct but related concepts. AGI is the hypothetical ability of a machine to perform any intellectual task a human can, while self-evolving AI describes a system that autonomously improves and adapts without human intervention by continuously learning from new data and experiences. Self-evolution can be considered a mechanism or capability that may contribute to the development of AGI, enabling a system to acquire the broad, adaptable intelligence characteristic of AGI. 

What is ‘self-evolving AI’? And why is it so scary?

08.20.2025

As AI systems edge closer to modifying themselves, business leaders face a compressed timeline that could outpace their ability to maintain control.

BY Faisal Hoque

As a technologist, and a serial entrepreneur, I’ve witnessed technology transform industries from manufacturing to finance. But I’ve never had to reckon with the possibility of technology that transforms itself. And that’s what we are faced with when it comes to AI—the prospect of self-evolving AI.

What is self-evolving AI? Well, as the name suggests, it’s AI that improves itself—AI systems that optimize their own prompts, tweak the algorithms that drive them, and continually iterate and enhance their capabilities.

Science fiction? Far from it. Researchers recently created the Darwin Gödel Machine, which is “a self-improving system that iteratively modifies its own code.” The possibility is real, it’s close—and it’s mostly ignored by business leaders.

And this is a mistake. Business leaders need to pay close attention to self-evolving AI, because it poses risks that they must address now.

Self-Evolving AI vs. AGI

It’s understandable that business leaders ignore self-evolving AI, because traditionally the issues it raises have been addressed in the context of artificial general intelligence (AGI), something that’s important, but more the province of computer scientists and philosophers.

In order to see that this is a business issue, and a very important one, first we have to clearly distinguish between the two things.

Self-evolving AI refers to systems that autonomously modify their own code, parameters, or learning processes, improving within specific domains without human intervention. Think of an AI optimizing supply chains that refines its algorithms to cut costs, then discovers novel forecasting methods—potentially overnight.

AGI (Artificial General Intelligence) represents systems with humanlike reasoning across all domains, capable of writing a novel or designing a bridge with equal ease. And while AGI remains largely theoretical, self-evolving AI is here now, quietly reshaping industries from healthcare to logistics.

The Fast Take-Off Trap

One of the central risks created by self-evolving AI is the risk of AI take-off.

Traditionally, AI take-off refers to the process by which going from a certain threshold of capability (often discussed as “human-level”) to being superintelligent and capable enough to control the fate of civilization.

As we said above, we think that the problem of take-off is actually more broadly applicable, and specifically important for business. Why?

The basic point is simple—self-evolving AI means AI systems that improve themselves. And this possibility isn’t restricted to broader AI systems that mimic human intelligence. It applies to virtually all AI systems, even ones with narrow domains, for example AI systems that are designed exclusively for managing production lines or making financial predictions and so on.

Once we recognize the possibility of AI take off within narrower domains, it becomes easier to see the huge implications that self-improving AI systems have for business. A fast take-off scenario—where AI capabilities explode exponentially within a certain domain or even a certain organization—could render organizations obsolete in weeks, not years.

For example, imagine a company’s AI chatbot evolves from handling basic inquiries to predict and influence customer behavior so precisely that it achieves 80%+ conversion rates through perfectly timed, personalized interactions. Competitors using traditional approaches can’t match this psychological insight and rapidly lose customers.

The problem generalizes to every area of business: within months, your competitor’s operational capabilities could dwarf yours. Your five-year strategic plan becomes irrelevant, not because markets shifted, but because of their AI evolved capabilities you didn’t anticipate.

When Internal Systems Evolve Beyond Control

Organizations face equally serious dangers from their own AI systems evolving beyond control mechanisms. For example:

  • Monitoring Failure: IT teams can’t keep pace with AI self-modifications happening at machine speed. Traditional quarterly reviews become meaningless when systems iterate thousands of times per day.
  • Compliance Failure: Autonomous changes bypass regulatory approval processes. How do you maintain SOX compliance when your financial AI modifies its own risk assessment algorithms without authorization?
  • Security Failure: Self-evolving systems introduce vulnerabilities that cybersecurity frameworks weren’t designed to handle. Each modification potentially creates new attack vectors.
  • Governance Failure: Boards lose meaningful oversight when AI evolves faster than they can meet or understand changes. Directors find themselves governing systems they cannot comprehend.
  • Strategy Failure: Long-term planning collapses as AI rewrites fundamental business assumptions on weekly cycles. Strategic planning horizons shrink from years to weeks.

Beyond individual organizations, entire market sectors could destabilize. Industries like consulting or financial services—built on information asymmetries—face existential threats if AI capabilities spread rapidly, making their core value propositions obsolete overnight.

Catastrophizing to Prepare

In our book TRANSCEND: Unlocking Humanity in the Age of AI, we propose the CARE methodology—Catastrophize, Assess, Regulate, Exit—to systematically anticipate and mitigate AI risks.

Catastrophizing isn’t pessimism; it’s strategic foresight applied to unprecedented technological uncertainty. And our methodology forces leaders to ask uncomfortable questions: What if our AI begins rewriting its own code to optimize performance in ways we don’t understand? What if our AI begins treating cybersecurity, legal compliance, or ethical guidelines as optimization constraints to work around rather than rules to follow? What if it starts pursuing objectives, we didn’t explicitly program but that emerge from its learning process?

Key diagnostic questions every CEO should ask so that they can identify organizational vulnerabilities before they become existential threats are:

  • Immediate Assessment: Which AI systems have self-modification capabilities? How quickly can we detect behavioral changes? What monitoring mechanisms track AI evolution in real-time?
  • Operational Readiness: Can governance structures adapt to weekly technological shifts? Do compliance frameworks account for self-modifying systems? How would we shut down an AI system distributed across our infrastructure?
  • Strategic Positioning: Are we building self-improving AI or static tools? What business model aspects depend on human-level AI limitations that might vanish suddenly?

Four Critical Actions for Business Leaders

Based on my work with organizations implementing advanced AI systems, here are five immediate actions I recommend:

1.    Implement Real-Time AI Monitoring: Build systems tracking AI behavior changes instantly, not quarterly. Embed kill switches and capability limits that can halt runaway systems before irreversible damage.

2.    Establish Agile Governance: Traditional oversight fails when AI evolves daily. Develop adaptive governance structures operating at technological speed, ensuring boards stay informed about system capabilities and changes.

3.    Prioritize Ethical Alignment: Embed value-based “constitutions” into AI systems. Test rigorously for biases and misalignment, learning from failures like Amazon’s discriminatory hiring tool.

4.    Scenario-Plan Relentlessly: Prepare for multiple AI evolution scenarios. What’s your response if a competitor’s AI suddenly outpaces yours? How do you maintain operations if your own systems evolve beyond control?

Early Warning Signs Every Executive Should Monitor

The transition from human-guided improvement to autonomous evolution might be so gradual that organizations miss the moment when they lose effective oversight.

Therefore, smart business leaders are sensitive to signs that reveal troubling escalation paths:

  • AI systems demonstrating unexpected capabilities beyond original specifications
  • Automated optimization tools modifying their own parameters without human approval
  • Cross-system integration where AI tools begin communicating autonomously
  • Performance improvements that accelerate rather than plateau over time

Why Action Can’t Wait

As Geoffrey Hinton has warned, unchecked AI development could outstrip human control entirely. Companies beginning preparation now—with robust monitoring systems, adaptive governance structures, and scenario-based strategic planning—will be best positioned to thrive. Those waiting for clearer signals may find themselves reacting to changes they can no longer control. https://www.fastcompany.com/91384819

 09-11-2025

How to dominate AI before it dominates us

 
To "dominate" AI, focus on collaboration, not competition, by learning to use AI as a tool to augment human capabilities. This involves continuous learning, developing human-centric skills like emotional intelligence and creativity, and focusing on higher-level skills rather than specific tools. Key strategies include understanding the AI landscape, using AI for operational efficiency while maintaining human involvement in strategic decisions, and ensuring ethical practices. 

As AI gets more complex, it might develop strange new motivations that its creators never imagined, and those could be dangerous.

BY Next Big Idea Club

James Barrat is an author and documentary filmmaker who has written and produced for National Geographic, Discovery, PBS, and many other broadcasters.

What’s the big idea?

Artificial intelligence could reshape our world for the better or threaten our very existence. Today’s chatbots are just the beginning. We could be heading for a future in which artificial superintelligence challenges human dominance. To keep our grip on the reins of progress when faced with an intelligence explosion, we need to set clear standards and precautions for AI development.

Below, James shares five key insights from his new book, The Intelligence Explosion: When AI Beats Humans at EverythingListen to the audio version—read by James himself—below, or in the Next Big Idea App.

1. The rise of generative AI is impressive, but not without problems.

Generative AI tools, such as ChatGPT and Dall-E, have taken the world by storm, demonstrating their ability to write, draw, and even compose music in ways that seem almost human. Generative means they generate or create things. But these abilities come with some steep downsides. These systems can easily create fake news, bogus documents, or deepfake photos and videos that appear and sound authentic. Even the AI experts who build these models don’t fully understand how they come up with their answers. Generative AI is a black box system, meaning you can see the data the model is trained on and the words or pictures it puts out, but even the designers cannot explain what happens on the inside.

Stuart Russell, coauthor of Artificial Intelligence: A Modern Approach, said this about generative AI, “We have absolutely no idea how it works, and we are releasing it to hundreds of millions of people. We give it credit cards, bank accounts, social media accounts. We’re doing everything we can to make sure that it can take over the world.”

Generative AI hallucinates, meaning the models sometimes spit out stuff that sounds believable but is wrong or nonsensical. This makes them risky for important tasks. When asked about a specific academic paper, a generative AI might confidently respond, “The 2019 study by Dr. Leah Wolfe at Stanford University found that 73% of people who eat chocolate daily have improved memory function, as published in the Journal of Cognitive Enhancement, Volume 12, Issue 4.” This sounds completely plausible and authoritative, but many details are made up: There is no Dr. Leah Wolfe at Stanford, no such study from 2019, and the 73% statistic is fiction.

“Generative AI hallucinates, meaning the models sometimes spit out stuff that sounds believable but is wrong or nonsensical.”

The hallucination is particularly problematic because it’s presented with such confidence and specificity that it seems legitimate. Users might cite this nonexistent research or make decisions based on completely false information.

On top of that, as generative AI models get bigger, they start picking up surprise skills—like translating languages and writing code—even though nobody programmed them to do that. These unpredictable outcomes are called emergent properties. They hint at even bigger challenges as AI continues to advance and grow larger.

2. The push for artificial general intelligence (AGI).

The next big goal in AI is something called AGI, or artificial general intelligence. This means creating an AI that can perform nearly any task a human can, in any field. Tech companies and governments are racing to build AGI because the potential payoff is huge. AGI could automate all sorts of knowledge work, making us way more productive and innovative. Whoever gets there first could dominate global industries and set the rules for everyone else.

Some believe that AGI could help us tackle massive problems, such as climate change, disease, and poverty. It’s also seen as a game-changer for national security. However, the unpredictability we’re already seeing will only intensify as we approach AGI, which raises the stakes.

3. From AGI to something way smarter.

If we ever reach AGI, things could escalate quickly. This is where the concept of the “intelligence explosion” comes into play. The idea was first put forward by I. J. Good. Good was a brilliant British mathematician and codebreaker who worked alongside Alan Turing at Bletchley Park during World War II. Together, they were crucial in breaking German codes and laying the foundations for modern computing.

“An intelligence explosion would come with incredible upsides.”

Drawing on this experience, Good realized that if we built a machine that was as smart as a human, it might soon be able to make itself even smarter. Once it started improving itself, it could get caught in a kind of feedback loop, rapidly building smarter and smarter versions—way beyond anything humans could keep up with. This runaway process could lead to artificial superintelligence, also known as ASI.

An intelligence explosion would come with incredible upsides. Superintelligent AI could solve problems we’ve never been able to crack, such as curing diseases, reversing aging, or mitigating climate change. It could push science and technology forward at lightning speed, automate all kinds of work, and help us make smarter decisions by analyzing information in ways people simply cannot.

4. The dangers of an intelligence explosion.

Is ASI dangerous? You bet. In an interview, sci-fi great Arthur C. Clark told me, “We humans steer the future not because we’re the fastest or strongest creature, but the most intelligent. If we share the planet with something more intelligent than we are, they will steer the future.”

The same qualities that could make superintelligent AI so helpful also make it dangerous. If its goals aren’t perfectly lined up with what’s good for humans—a problem called alignment—it could end up doing things that are catastrophic for us. For example, a superintelligent AI might use up all the planet’s resources to complete its assigned mission, leaving nothing left for humans. Nick Bostrom, a Swedish philosopher at the University of Oxford, created a thought experiment called “the paperclip maximizer.” If a superintelligent AI were asked to make paperclips, without very careful instructions, it would turn all the matter in the universe into paperclips—including you and me.

Whoever controls this kind of AI could also end up with an unprecedented level of power over the rest of the world. Plus, the speed and unpredictability of an intelligence explosion could throw global economies and societies into complete chaos before we have time to react.

5. How AI could overpower humanity.

These dangers can play out in very real ways. A misaligned superintelligence could pursue a badly worded goal, causing disaster. Suppose you asked the AI to eliminate cancer; it could do that by eliminating people. Common sense is not something AI has ever demonstrated.

AI-controlled weapons could escalate conflicts faster than humans can intervene, making war more likely and more deadly. In May 2010, a flash crash occurred on the stock exchange, triggered by high-frequency trading algorithms. Stocks were purchased and sold at a pace humans could not keep up with, costing investors tens of millions of dollars.

“A misaligned superintelligence could pursue a badly worded goal, causing disaster.”

Advanced AI could take over essential infrastructure—such as power grids or financial systems—making us entirely dependent and vulnerable.

As AI gets more complex, it might develop strange new motivations that its creators never imagined, and those could be dangerous.

Bad actors, like authoritarian regimes or extremist groups, could use AI for mass surveillance, propaganda, cyberattacks, or worse, giving them unprecedented new tools to control or harm people. We are seeing surveillance systems morph into enhanced weapons systems in Gaza right now. In Western China, surveillance systems keep track of tens of millions of people in the Xinjiang Uighur Autonomous Region. AI-enhanced surveillance systems keep track of who is crossing America’s border with Mexico.

Today’s unpredictable, sometimes baffling AI is just a preview of the much bigger risks and rewards that could come from AGI and superintelligence. As we rush to create smarter machines, we must remember that these systems could bring both incredible benefits and existential dangers. If we want to stay in control, we need to move forward with strong oversight, regulations, and a commitment to transparency.

https://www.fastcompany.com/91398450/how-to-dominate-ai-before-it-dominates-us

What comes after agentic AI? This powerful new technology will change everything

15.08.2025

Why ‘interpretive AI’ will be the real revolution.

BY John Lester

Ten years from now, it will be clear that the primary ways we use generative AI circa 2025—rapidly crafting content based on simple instructions and open-ended interactions—were merely building blocks of a technology that will increasingly be built into far more impactful forms.

The real economic effect will come as different modes of generative AI are combined with traditional software logic to drive expensive activities like project management, medical diagnosis, and insurance claims processing in increasingly automated ways. 

In my consulting work helping the world’s largest companies design and implement AI solutions, I’m finding that most organizations are still struggling to get substantial value from generative AI applications. As impressive and satisfying as they are, their inherent unpredictability makes it difficult to integrate into the kind of highly standardized business processes that drive the economy.

Agentic vs. Interpretive

Agentic AI, which has been getting tremendous attention in recent months for its potential to accomplish business tasks with little human guidance, has similar limitations. Agents are evolving to assist with singular tasks such as building websites quickly, but their workflows and outputs will remain too variable for large organizations with high-volume processes that need to be predictable and reliable.

However, the same enormous AI models that power today’s best-known AI tools are increasingly being deployed in another, more economically transformative way, which I call “interpretive AI.” And that is what’s likely to be the real driver of the AI revolution over the long term.

Unlike generative and agentic AI, interpretive AI lets computers understand messy, complex, and unstructured information and interpret it in predictable, defined ways. Using much of the same IT infrastructure, the emerging technology can power large organizations’ complex processes without requiring human intervention at each step.

Use cases

Some interpretive AI applications are already in use. For example, doctors are saving significant time by using interpretive AI tools to listen to conversations with patients and fill in information on their electronic health record interfaces to track care and facilitate billing. In the near future, the technology could determine fault in auto accidents based on police reports written in any of thousands of different formats, or process video recorded from a laptop screen as someone edits a presentation to provide teammates with an automated update on work completed. The applications are wide-ranging and span all manner of industries.

Based on estimates for areas such as coding and marketing where generative AI is most applicable, interpretive AI could unlock 20% to 40% productivity gains for the half of GDP that comes from large corporations. First, though, they must commit to developing a comprehensive, long-term strategy involving multiple business functions and careful experimentation, and change entrenched processes and work culture norms that slow its adoption. Done right, the obstacles are surmountable—and the payoff could be massive.

A different application of generative AI models

One of the most basic drivers of economic growth is the ongoing effort to standardize and scale up a particular process, making it faster, cheaper, and more reliable. Think of factory assembly lines enabling mass production, or the internet’s codification of computer communication protocols for use across disparate networks.

Generative AI has been, on the whole, disappointing when it comes to automation. For example, many firms have tried to use generative AI chatbots to reduce the time their human resources staff spends answering employees’ questions about internal policies. However, the open-ended output from such systems requires human review, rendering the labor savings modest at best. The technology seems to inherit much of the unpredictability of humans along with its ability to mimic their creative and reasoning skills.

Agentic AI promises to do complicated work autonomously, with smart AI agents developing and executing plans for achieving goals step-by-step, on the fly. But again, even when agents become smart enough to help a typical knowledge worker be more productive, their outputs will be quite variable.

Enter interpretive AI. For the first time, computers can usefully process the meaning of human language, with all its nuance and unspoken context, thanks to the unprecedentedly large models developed by firms like Open AI and Google. Interpretive AI is the mechanism for using the models to exploit this revolutionary advance.

Until now, computers’ ability to capture, store, aggregate, summarize, and evaluate a large organization’s activities were limited to those that were easy to quantify with data. Interpretive AI can quickly and precisely execute these functions for many other important activities, at a vast scale and at minimal marginal cost. For instance, no longer will businesses need manual processes to monitor and manage levels of activity and progress in knowledge-worker tasks such as coding a feature into a software solution or developing a set of customer-specific outreach strategies, which usually require dedicated middle management staff to collect information.

Companies can make productivity gains by using interpretive AI for a range of other previously hard-to-measure employee issues as well, including the tone and quality of their interactions with customers, their cultural norms in the workplace, and their compliance with office policies and behavioral expectations.

Transforming the management of knowledge work

The use of interpretive AI will enable the widespread transformations that unlock newly efficient ways of working at large organizations (which are responsible for organizing and producing most of the world’s goods and services). It will dramatically reduce the need for extensive, costly, slow-moving, and unenjoyable middle management work to coordinate complex interrelated programs of activities across teams and disciplines.

Even better, it can efficiently understand operationally vital but opaque aspects of how work happens, such as the decades’ worth of legacy code and data that make even minor technology process changes time-consuming and challenging for any long-lived enterprise.

Of course, interpretive AI is not mutually exclusive with generative and agentic AI—again, it’s simply a different way to use the powerful models that power those technologies. A decidedly unsexy way, certainly, but for businesses looking for ways to maximize the economic impact of AI over the next few years, it’s just the unsexy they need.

 The Next AI Boom: What Comes After AI Agents and Agentic AI?

Tarun Singh

May 24, 2025

AI shaping the future and human society

Artificial Intelligence is no longer science fiction. It’s a living, breathing force that’s already transforming how we work, live, and imagine the future. In recent years, we’ve witnessed the spectacular rise of AI agents — autonomous digital entities capable of reasoning, planning, and executing tasks on behalf of humans. And now, the conversation has shifted to the next phase: agentic AI, where these agents not only follow commands but exhibit goal-driven autonomy, learning and adapting in dynamic environments.

But as every AI enthusiast and visionary knows, this is just the beginning.

The question buzzing in every tech circle, startup boardroom, and research lab is:

What’s the next AI boom after agents?

Mastering LLMs: An In-Depth Guide to Prompt Engineering

"Mastering LLMs: An In-Depth Guide to Prompt Engineering," penned by Tarun Singh, an AI and ML engineer with advanced…

www.amazon.com

Why We Should Care About the Next AI Boom

Because history tells us that each AI leap redefines society. The AI agent boom unlocked personal assistants, autonomous bots, and powerful automation. But true breakthroughs lie ahead — breakthroughs that will ripple across industries and human experience, creating opportunities and challenges we’re only beginning to comprehend.

If you want to stay aheadinvest wisely, or build something revolutionary, you need to grasp where AI is headed next.

A Quick Recap: The Age of AI Agents and Agentic AI

Before diving into what’s next, let’s quickly recap:

  • AI Agents are specialized systems designed to perform specific tasks autonomously — like scheduling your meetings, answering customer queries, or navigating a car.
  • Agentic AI takes this further by giving these agents a “mind” of their own: the ability to set goals, adapt strategies, and operate with minimal human intervention.

We’re at the dawn of agentic AI systems that think and act independently, optimizing workflows and decision-making in real time….

Embrace the AI Tipping Point: How Entrepreneurs Can Prepare for Four Future Scenarios

Artificial Intelligence is swiftly moving into our everyday reality, bringing with it the potential to reshape every sector. EO member and AI expert Robert van der Zwart shares scenario planning to outline four plausible AI futures by 2030—and the strategies entrepreneurs can adopt now to stay ahead in any outcome.

Artificial Intelligence is no longer an abstract buzzword―it’s reshaping every sector and swiftly moving from boardroom strategy to everyday reality. For entrepreneurs, the stakes have never been higher or more uncertain. Where will AI take us in the next five years? And how can business leaders best prepare themselves for a world defined by "AI everywhere"?

Drawing on scenario planning principles pioneered by ShellOff-site link., this post outlines four plausible futures for AI development and deployment by 2030. The aim: Empower entrepreneurs to anticipate the coming transformation and craft adaptive, resilient business strategies in advance.

The Two Axes Defining Our Future

Recent advances, including predictions from leaders at OpenAI and Google DeepMind, suggest that AGI (Artificial General Intelligence) is only a few years away, accelerating the pace of change. But the path ahead remains uncertain. We believe these uncertainties can be captured along two critical axes:

  • Axis 1: AI Capability — From today’s powerful but domain-limited “narrow” AI to the emergence of AGI or even Artificial Superintelligence (ASI).
  • Axis 2: AI Penetration — From limited, selective deployment to ubiquitous, seamless integration: "AI everywhere".

The Four Scenarios for 2030

1. Limited Scope (Narrow AI + Limited Penetration)

In the first scenario, AI continues to excel within well-defined problems―think medical diagnostics, fraud detection, or supply chain optimization―but lacks general reasoning and true adaptability. Deployment advances, but regulatory caution and cost barriers slow its transformation into society’s connective tissue.

What this means for you as an entrepreneur:

  • Prioritize AI that enhances, not replaces, people—assist clients and teams in becoming more productive, not replaceable.
  • Specialize in AI solutions for tightly regulated or high-trust industries (finance, healthcare).
  • Become an expert in compliance, safety, and user trust to differentiate from tech-only players.

2. Technical Acceleration (AGI/ASI + Limited Penetration)

In the second scenario, breakthroughs deliver AGI’s long-promised leap in cognitive power, but access is tightly gated. Whether due to safety concerns, global governance, or deliberate restrictions on deployment, AGI remains confined to controlled settings (government, elite institutions, select tech companies), rather than the wild.

What this means for you as an entrepreneur:

  • Build AI-native business models that leverage AGI within licensed or approved environments.
  • Invest in technologies and services that safeguard deployment, monitor bias, and assure control.
  • Partner with AGI custodians to shape safe, responsible, high-value applications—think AI-audited security or cognitive investment advisory.

More AI Strategy Resources:

Russell, S., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach (4th Edition): Comprehensive textbook covering current AI capabilities, approaches, and prospects.

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies: Influential analysis of advanced AI futures and societal impact.

Yudkowsky, E. (2008). Artificial Intelligence as a Positive and Negative Factor in Global Risk: Outlines risks and benefits of advanced AI.

OpenAI, DeepMind, and Anthropic Research Blogs: For up-to-date perspectives and predictions regarding AGI timeline and technical progress.

Partnership on AI. (Ongoing): Industry best practices, whitepapers, and discussion papers covering transparency, fairness, and social impact.

West, D. M. (2018). The Future of Work: Robots, AI, and Automation: A clear overview of workforce transformation and adaptation needed for the AI era.

 3. Social Transformation (Narrow AI + AI Everywhere)

In the third scenario, widespread “narrow” AI saturates society. From smart homes and cities to customer service, logistics, and personal health, AI is seamlessly embedded in daily life. Yet, each system still operates within clear functional limits.

What this means for you as an entrepreneur:

  • Move from solving isolated problems to integrating diverse AI systems for end-to-end coordination.
  • Develop privacy-preserving, user-centric AI platforms—not surveillance-first ones.
  • Shape experiences and services that thrive on the “network effect” of ubiquitous intelligence.

4. Convergence Revolution (AGI/ASI + AI Everywhere)

In scenario four, AGI-driven intelligence is deployed throughout society. Autonomous agents interact—and even collaborate—with humans in virtually every arena, radically shifting society, business, and the very notion of work.

What this means for you as an entrepreneur:

  • Be a builder of foundational infrastructure for AGI-era services—platforms, marketplaces, governance, and creativity tools.
  • Innovate on business models for a potential post-scarcity world, focusing on experience, meaning, and human values over raw productivity.
  • Lead in crafting new rules for autonomy, collaboration, and purpose at the intersection of humanity and superintelligent agents.

5 Strategic Moves to Future-Proof Your Venture

Regardless of which outcome becomes reality, some foundations are universal for entrepreneurs in this new age:

1.    Invest in AI literacy at all staff levels; stay ahead of regulatory and ethical trends.

2.    Develop modular business models and agile teams that can adapt to shifting technology and regulations.

3.    Prioritize human-centric value: empathy, ethical judgment, and creativity will remain irreplaceable.

4.    Adopt governance frameworks that go beyond compliance—build mechanisms for transparency and stakeholder alignment across borders.

5.    Forge partnerships across the AI ecosystem, from research labs to regulators, and advocate for inclusivity and digital equity.

Early Warning Signs: What to Monitor

  • AGI and AI benchmark announcements from leading labs.
  • New privacy, safety, or deployment regulations in your sectors and target regions.
  • Rapid spikes in AI adoption rates in client/customer bases.
  • Public sentiment shifts and labor market transitions.

Anticipate, Adapt, Lead

AI’s trajectory over the next five years will challenge every assumption about business as usual. The most successful entrepreneurs will not be those who merely react but those who anticipate change, build scenario-based strategies, and invest in the organizational agility and values to thrive—no matter which future arrives.

Are you ready to be one of them?

https://eonetwork.org/blog/embrace-the-ai-tipping-point-how-entrepreneurs-can-prepare-for-four-future-scenarios/

 Protect your privacy, cellphone number and email address

BY KIM KOMANDO

Phone scams are never-ending because they work. Scam texts are increasing, too. Here are five sure signs a text is junk you need to delete.

While we’re talking scams, I’d be remiss not to mention your inbox. Tap or click for convincing spam that landed in my email with not so obvious red flags.

One way to cut down on the endless attempts to steal your money and info for sale to marketers is to limit who has your contact information. Here are some simple, free ways to do it.

Hide your email address with a burner

Think about all the reasons you give away your email without thinking about it: Signing up for a new account, emailing a company with a question, or getting a coupon code — to name a few.

Whenever you give out your email address, you open yourself to junk mail, malware, and an inbox full of spam messages. This is where a burner email comes in handy.

Burner email addresses are disposable and can be used in place of your primary ones. There are several ways to get one.

● Temp Mail provides a temporary, anonymous, and disposable email address. You don’t need to register for the free version. Remember that the service doesn’t automatically delete your temporary email address (that’s up to you), and you can’t send emails. Emails are stored for about two hours before they’re automatically deleted.

● 10MinuteMail is another popular option you can also use to send emails. As the name suggests, the email and address are deleted in 10 minutes. If you receive an important message you don’t want to lose, you can forward it to another email address. There’s no need to provide personal information to get started, which is a nice bonus.

If you’re an Apple iCloud+ subscriber, you get access to one of my favorite Apple features: Hide My Email. It creates unique, random email addresses that forward to your inbox. You can create as many addresses as you want and reply to messages.

● To create a new email address, go to Settings and tap your Apple ID.

● Go to iCloud > Hide My Email > Create New Address.

● Follow the onscreen instructions, and you’ll get a new email address you can manage from iCloud settings.

Gmail also allows you to create free aliases tied to your primary inbox. They are handy for filtering mail or seeing how your email address ended up on a spam list.

Tap or click here and scroll to No. 5 for steps on creating new email addresses on the fly.

Set up a burner phone number, too

You need your real phone number for things that matter, such as your medical and financial accounts and records. Otherwise, there’s no reason to hand it out.

Google Voice is a free service that gives you a phone number to use however you like for domestic and international phone calls, texts, and voicemails. Google Voice is available for iOS, Android, and your computer. All you need is a Google account to get started.

Then follow these steps:

● Download the app for iOS or Android or go to voice.google.com/u/0/signup to get it for your computer.

● Next, sign into your Google account.

● Review the terms and proceed to the next step.

● Choose a phone number from the list. You can search by city or area code.

● Verify the number and enter a phone number to link to your Voice account.

● You’ll get a six-digit code to enter for the next step.

Use your Google Voice number however you please, especially when you need to add your number to a form online. Tap or click here for five smart ways to use Google Voice.

Another option is downloading a burner app. These give you a second phone number and use your internet data or Wi-Fi to make and receive calls and texts. The catch? These cost money.

Burner is one of the most popular apps of its kind. You can route calls directly to your secondary number. The app comes with a seven-day free trial, and plans start at $4.99 per month for one line or $47.99 for one year.

Hushed lets you create numbers from around the world, so you can go outside your area code or the U.S. if you’d like. A prepaid plan starts at $1.99 for seven days and comes with bundled minutes for local calls and texts. You can step up to unlimited talk and text ($3.99 per month) and international service ($4.99 per month).

Tap or click here for direct links to download Burner or Hushed for your iPhone or Android.

Tech smarts: Your old phone numbers can be used to steal your identity. Yikes. Here’s how and what to do about it.

What digital lifestyle questions do you have? Call Kim’s national radio show and tap or click here to find it on your local radio station. You can listen to or watch The Kim Komando Show on your phone, tablet, television or computer. Or tap or click here for Kim’s free podcasts.

How to use AI to hone your emotional intelligence

BY Phil Friedman

A quiet crisis is brewing in today’s workforce, and it’s not about automation or AI replacing jobs. It’s about the erosion of human skills that make teams work: communication, empathy, adaptability, and emotional intelligence.

These so-called “soft skills” are proving to be among the hardest to teach and the most critical to get right. In fact, the lack of them is costing U.S. companies an estimated $160 billion a year in lost productivity, poor communication, and employee turnover.

In 40-plus years of building a global technology company, the biggest performance gaps I’ve seen haven’t come from a lack of technical skill, but from a lack of training in how people communicate, lead, and connect.

Most employees will tell you it’s not the technical tasks that keep them up at night; it’s the hard conversations: effectively delivering feedback in performance reviews . . . negotiating sales with difficult buyers . . . calming irate customers . . . and even confronting toxic colleagues. These are the moments that may come with a script, and often do in big companies, but people and circumstances are dynamic and rarely proceed according to a preconceived linear scenario. Traditional training methods still treat them like they do; therein lies the challenge.

The old ways of learning always had this Achilles tendon, and now they are just increasingly unfit for the way younger generations want to learn.

That’s why we’re seeing a new generation of tools emerge—ones that don’t just teach communication, but instead let people practice it. One of the most promising is immersive AI-powered roleplay, a training model that allows employees to rehearse unscripted, emotionally demanding conversations in a safe, dynamic environment. Think of it as a flight simulator for high-stakes conversations.

Practice makes prepared

Instead of passively watching videos or memorizing scripts, employees can now engage in realistic roleplay with virtual avatars powered by AI and behavioral science. These characters react in real time, based on an individual employee’s tone, word choice, mannerisms, and more. If a trainee delivers bad news with empathy, the virtual persona softens. If they deflect or escalate, the persona pushes back. With AI-roleplay, there are no canned scripts—only authentic, evolving dialogue.

These practice scenarios are designed to reflect the range of personalities we encounter in real life—from the highly agreeable to the more confrontational—giving employees exposure to a wide spectrum of behavioral styles they may face on the job.

This kind of immersive rehearsal builds what I call “emotional muscle memory.” It gives employees the range of experiences and repetition they need to confidently engage in real-world conversations where clarity and empathy matter most.

Forward-thinking companies across diverse sectors, from healthcare and aviation to manufacturing and retail, are turning to AI-powered roleplay platforms to upskill their teams for unpredictable and often emotionally charged interactions:

·  One global medical technology company recently integrated immersive roleplay into its sales and clinical education programs and saw measurable performance gains, including increased revenue and stronger confidence among reps navigating difficult conversations.

·  A large national humanitarian organization used simulation-based training to cut training time from 45 days to 30, reduce employee wait times from two weeks to one day, save over $6.5 million annually, and train more than 13,000 professionals.

·  In the airline industry, an international carrier trained flight crews using AI-driven roleplay to better manage conflict and de-escalation, leading to a 20% drop in passenger incidents.

The common thread across these examples? Employees aren’t just learning what to say. They’re learning how to listen, respond, and adapt in real time. They’re not just memorizing scripts. They’re building instinctive confidence for tough conversations.

Why soft skills can’t wait

The need for emotionally intelligent teams has never been greater. Case in point: one study found that teams high in emotional intelligence outperform their peers by around 20% in productivity and achieve significantly higher cohesion and job satisfaction.

As work becomes more global, remote, and fast-paced, the margin for miscommunication will only grow. Customers expect more. Employees expect more. And leaders are being asked to navigate uncertainty, conflict, and change 24/7.

And yet . . . most enterprises still treat soft skills training as an afterthought relative to their other business priorities aimed at building organizational resilience: something optional, not essential. We often send people into literal make-or-break conversations without the proper rehearsal and then wonder why they fall flat.

What’s different about immersive AI is that it allows teams to practice difficult questions as often as needed and in a safe environment. This kind of technology is available 24/7, can scale across geographies and languages, and delivers personalized feedback that helps people improve with every session. That kind of on-demand coaching was unthinkable even just a few years ago.

And it’s needed now more than ever. In one widely reported case, a global technology company laid off 8,000 employees as part of an AI automation push, only to rehire just as many people shortly after, this time in roles requiring more creativity, communication, and leadership skills.

It’s a clear signal: AI may change what we do, but human skills still define how we do it.

4 ways AI can improve your thinking

BY Jeremy Caplan

Bland AI outputs grow stale quickly. Instead of just speeding up routine tasks, what if we used AI to slow down, challenge our thinking, and build new tools, dashboards, and experiments? Read on for creative approaches that are changing how I think about AI.

1. Create your own devil’s advocate assistant

Get thoughtful pushback on decisions. Challenge ideas.

The tactic: Use AI as an intellectual sparring partner to stress-test your thinking, explore alternative perspectives, and identify potential blind spots before making important decisions.

Try this: Present a plan, idea, or decision to an AI assistant with instructions to challenge your thinking constructively. Identify risks you haven’t considered, consider secondary impacts, and add nuance to your analysis.

Get your AI assistant to stop kissing up to you and start challenging your ideas. [Generated Photo: Jeremy Caplan/Ideogram]

Prompt template

“I’m planning to [decision/plan] because [reasoning] and with a goal of [objective]. Play devil’s advocate, give me multiple perspectives on this, be bold, surprising, creative, and thoughtful in your reply, and address these questions:

  • What are the strongest arguments against this approach?
  • What alternatives should I consider?
  • What risks might I be overlooking?
  • What questions should I be asking myself?
  • What challenges should I expect to face?
  • What could I do to gain more insight?
  • What could I do to increase the chances of success?

Pro tip: Try asking your AI assistant to role-play. It can respond as a financial advisor, family member, or competitor, for varied viewpoints. Or ask it to act like a person you admire, living or dead, real or fictional.

Limitation: Your AI devil’s assistant will be generic if you don’t provide detailed context. And you may get a predictable response if you don’t instruct it to be bold.

Suggested model: I have found ChatGPT 5 to be excellent for this. Gemini and Claude also work well. If you’re considering anything sensitive, you may want to use a free offline private AI tool like AnythingLLM or JanI’ll write more soon about private AI tools like these. If you have input on those, add a comment below.

Example: I described a new planned morning schedule to GPT 5. The subsequent exchange got me thinking about several new issues.

The conversation helped me clarify my own thinking. It pushed me to organize and deepen my own analysis. As a bonus, GPT 5 produced a tangible artifact for me—a PDF with tables.

2. Learn something new

Map out a personalized curriculum.

AI tools let me try out skills I thought I was too late to develop, like coding simple applications, designing graphics, analyzing large data sets, and exploring complex docs in other languages.

You can also lean on AI assistants to help you develop offline skills, like learning about photography, improving your Greek, understanding crypto, sharpening project management skills, making bread by hand, or prepping for any new coverage area for a project or team. AI assistants excel at creating structured learning and practice plans tailored to your schedule, style, and goals.

Try this: Give an AI assistant context about what you want to learn, why, and how.

  • Detail your rationale and motivation, which may impact your approach.
  • Note your current knowledge or skill level, ideally with examples.

Summarize your learning preferences

  • Note whether you prefer to read, listen to, or watch learning materials.
  • Mention if you like quizzes, drills, or exercises you can do while commuting or during a break at work.
  • If you appreciate learning games, task your AI assistant with generating one for you, using its coding capabilities detailed below.
  • Ask for specific book, textbook, article, or learning path recommendations using the Web search or Deep Research capabilities of PerplexityChatGPTGemini or Claude. They can also summarize research literature about effective learning tactics.
  • If you need a human learning partner, ask for guidance on finding one or language you can use in reaching out.

Add specificity

  • Mention any relevant deadlines. Note budget, time, or other constraints.
  • Share info about your existing schedule so the assistant can help map out optimal learning time slots. Making the plan concrete increases the likelihood you’ll follow through. ChatGPT recently generated a calendar file with a list of appointments I could easily import into my Google calendar.

Pro Tip: Ask for help setting up a schedule, setting learning targets, measuring progress, choosing resources, motivating yourself, and implementing backup plans when you fall off track. Ask for a learning plan you can print out, charts you can fill in, interactive apps to track progress, resource lists you can look up, experts you can follow, and strategies for avoiding common pitfalls.

3. Stretch your creative design muscles

Try this: Use AI image generation tools to experiment with visual ideas. Start with simple concepts and iterate to add nuance or complexity. Practice describing visual concepts in text, then see them realized instantly and iterate on your prompts.

  • Try MyLens or Napkin for creating mind maps, flow charts, timelines or various other infographics out of detailed prompts or source docs.
  • Use Ideogram—detailed in this post—or ChatGPT’s new image generator—detailed in this post—to describe any style of illustration, infographic or other visual.
  • For creative video generation, try Hypernatural, which lets you turn text into moving images.

Use this to: Add creative images to presentations, experiment with social media graphics, or generate infographics for teaching, publishing, or project work.

Limitation: AI image generators are improving rapidly but still struggle with precise text placement, detailed charts, and maintaining brand consistency across multiple images. Most don’t let you select specific image dimensions, though Ideogram does.

Examples: I generated the images in this post with ChatGPT and Ideogram, and I’ve used Hypernatural to make video versions of past posts, like this 2-min video about Raindrop, which I wrote about last week.

4. Create a personalized dashboard

Build custom tracking tools and mini-applications

Without knowing anything about code, you can generate simple web applications for tracking anything important to you. Prompt your AI assistant to help you keep tabs on reading or eating goals, fitness metrics, project progress at school or work, or stats for Wordle or your game of choice.

Try this: Ask AI to create a dashboard or tracking tool tailored to your specific needs. Experiment with Claude 4 ArtifactsGemini’s code canvasAlso try vibe coding tools like Lovable or Bolt that specialize in creating apps and sites based on prompts. For advanced projects, consider Windsurf Cascade.

Pro tip: Plan to iterate. It almost always takes multiple attempts to get something workable, because you realize your needs when you see the first prototype. Start with simple tracking before requesting complex features. Ask for additional functionality with follow-up prompts. Here’a a Prompt Example.

Limitation: The simplest versions of these mini applications work in your browser only. To use an application on multiple devices, you’ll need to save the code and host it with a service that allows you to create a database. For that, try LovableBoltor Windsurf.

Example: I’m working on a content planning and workflow app to organize and track my newsletter work.

5 time-saving ways you should be using ChatGPT at work

 Endless notifications, a constant barrage of information, and never-ending to-do lists can make it feel like you’re digitally drowning. Why not use AI to claw some of your precious time back?

While you may have used ChatGPT for the basics like writing an email or proofreading documents, there’s plenty of power to be harnessed from less obvious applications.

Here are five ways you can put AI to work for you.

The meeting note parser

You’ve just finished an hour-long meeting, and your notes look like a verbal train wreck: a mix of shorthand, half-finished sentences, and random keywords. There are action items in there somewhere, but finding them feels like a chore.

Paste your notes into ChatGPT with a prompt such as: “Here are my meeting notes. Please create a prioritized task list with deadlines and the person responsible for each item.”

ChatGPT can turn that mess into a clean, actionable list in seconds, giving you back precious time you’d have spent deciphering your own writing.

The simple concept explainer

You’ve come across a new industry term or a technical concept that’s critical to your job, but the online explanations are full of jargon you don’t understand. Or maybe you’re trying to explain something complex to a colleague who isn’t as familiar with the subject.

Ask ChatGPT to “explain [the concept] in plain English for someone with no background in [the field].”

AI is great at simplifying dense information. You can even ask it to “use a relatable analogy” to make the concept stick. It’s like having a personal tutor who’s always on call.

The interview prep guru

You have an important call with a potential client or a new partner, and you want to go in prepared. But digging through their company’s website, recent press releases, and social media feeds for relevant background info is a serious time sink.

Prompt ChatGPT with something like: “Help me prepare for a call with [Customer Name]. Summarize the top three news stories from the past six months and highlight anything relevant to their business goals.”

This gives you a quick, digestible cheat sheet, so you can sound informed and confident without spending hours on a deep dive.

The content repurposer

You’ve created a great piece of content: a long-form blog post, a podcast episode, or a detailed report. Now you need to turn it into a dozen different things for social media. The thought of writing 12 unique captions and a handful of tweets is exhausting.

Upload your content and ask ChatGPT to “repurpose this information into three short social media captions and five bullet points for a Twitter thread.”

It can instantly transform your work into multiple formats, saving you the mental load of starting over each time you switch platforms.

The brainstorming partner

There’s nothing quite like smashing headfirst into a creative wall. You’ve got to come up with ideas for a new marketing campaign, a blog post title, or a product name, but the well has run dry. The blank page is staring at you, mocking your lack of creativity.

Use a prompt to get the ball rolling, such as: “I’m launching a new service for [target audience]. Give me 10 creative marketing campaign ideas that are both approachable and professional.”

ChatGPT can act as a tireless brainstorming partner, providing you with a starting point, new angles, and ideas you might never have considered on your own. It won’t do all the work, but it will give you a solid foundation to build upon.

 What is HoneyBook used for?

Everything you need to grow your business with confidence

HoneyBook is primarily used by freelancers, solopreneurs, and small service-based businesses to manage client communications, streamline project workflows, and handle invoicing and payments. It integrates tools for creating proposals, contracts, and invoices, while automating workflows to save time. https://www.honeybook.com/

What you can do to future-proof your career

To future-proof your career, you must embrace continuous learning, particularly in areas of digital literacy and AI, while cultivating adaptable, transferable skills like leadership and problem-solving. It's also crucial to proactively build and maintain a diverse professional network, stay informed about industry and economic trends, and develop a strong personal brand to showcase your value.

09-22-2025

How AI can help people thrive, not just be more productive

By allowing employees to do more in less time, the technology can offer added freedom and flexibility.

BY Natalie Nixon

For decades, we’ve been told that technology would liberate us from mundane work, yet somehow we ended up more tethered to our desks than ever. Now, groundbreaking research from GoTo suggests we may finally be reaching the inflection point where artificial intelligence doesn’t just promise freedom—it delivers it. But the real revelation isn’t that AI might make offices obsolete. It’s that AI is creating the conditions for what I call “cultivation-centered work”—an approach that prioritizes human development over performative productivity.

The Great Workplace Liberation

The numbers tell a compelling story: 51% of employees believe AI will eventually make physical offices obsolete, while 62% would prefer AI-enhanced remote working over traditional office environments. But here’s what makes this shift profound—it’s not about rejecting human connection. Instead, it’s about reclaiming the autonomy to choose when, where, and how we engage most meaningfully with our work and colleagues.

This aligns perfectly with the core principles of my book, Move. Think. Rest. When 71% of workers say AI gives them more flexibility and work-life balance, they’re describing the conditions necessary for true cultivation. They’re talking about having time to think deeply, space to move naturally throughout their day, and permission to rest when their bodies and minds require it.

From Extraction to Integration

What’s particularly striking about GoTo’s research is how it reveals AI’s potential to support the full spectrum of human experience at work. Traditional productivity models demanded we compartmentalize ourselves—show up as disembodied brains focused solely on output. But AI-enhanced work environments are creating space for integration.

When employees report that AI allows them to “work anywhere without losing productivity” (66%), they’re really describing the freedom to align their work rhythms with their natural energy cycles. They can take walking meetings in nature, think through problems during movement, and create the environmental conditions that support their best thinking.

The Cultivation Disconnect

However, the research also reveals a concerning gap that organizations must address. While 91% of IT leaders believe their companies effectively use AI to support distributed teams, only 53% of remote and hybrid employees agree. This disconnect isn’t just about technology deployment—it’s about understanding the difference between using AI to replicate old productivity models versus leveraging it to support human flourishing.

The companies bridging this gap successfully are those asking different questions. Instead of “How can AI make people more productive?” they’re asking “How can AI create conditions where people naturally thrive?” They’re designing AI implementations that support the three pillars of cultivation: movement (flexibility to work in various environments), thought (time and space for deep reflection), and rest (permission to disengage and recharge).

The Age-Defying Impact

One of the most encouraging findings challenges ageist assumptions about technology adoption. The research shows that across all generations—from 90% of remote Gen Z workers to 74% of baby boomers—people report improved productivity through AI-enhanced remote work. This suggests something profound: when technology truly serves human needs rather than demanding adaptation to machine rhythms, people of all ages can benefit.

This generational unity points to AI’s potential as an equalizing force—not in the sense of making everyone the same, but in honoring the diverse ways different people think, process, and contribute.

Perhaps most telling is that 61% of employees—including those working in offices—believe organizations should prioritize AI investment over fancy workplace amenities. This isn’t about choosing technology over human experience. It’s about recognizing that true employee experience comes from having the tools and flexibility to do meaningful work in ways that honor their full humanity.

The Path Forward

As AI reshapes work, we have a choice. We can use it to create more sophisticated forms of surveillance and productivity extraction, or we can leverage it to finally realize the promise of technology serving human flourishing. The organizations that choose the latter will find themselves with a profound competitive advantage: employees who are not just more productive, but more creative, more engaged, and more capable of the kind of breakthrough thinking that drives innovation.

The question isn’t whether AI will transform work—it already is. The question is whether we’ll use this transformation to create workplaces that cultivate human potential or merely optimize human output. The GoTo research suggests employees are ready for cultivation. The question is: are their leaders?

10-06-2025

The right way to use AI at work

A new Stanford study reveals the right way to use AI at work—and why you’re probably using it wrong.

BY Thomas Smith

If you listen to the CEOs of elite AI companies or take even a passing glance at the U.S. economy, it’s abundantly obvious that AI excitement is everywhere. 

America’s biggest tech companies have spent over $100 billion on AI so far this year, and Deutsche Bank reports that AI spending is the only thing keeping the United States out of a recession.

Yet if you look at the average non-tech company, AI is nowhere to be found. Goldman Sachs reports that only 14% of large companies have deployed AI in a meaningful way.

What gives? If AI is really such a big deal, why is there a multi-billion-dollar mismatch between excitement over AI and the tech’s actual boots-on-the-ground impact?

A new study from Stanford University provides a clear answer. The study reveals that there’s a right and wrong way to use AI at work. And a distressing number of companies are doing it all wrong.

What can AI do for you?

The study, conducted by Stanford’s Institute for Human-Centered AI and Digital Economy Lab and currently available as a pre-print, looks at the daily habits of 1,500 American workers across 104 different professions.

Specifically, it analyzes the individual things that workers actually spend their time doing. The study is surprisingly comprehensive, looking at jobs ranging from computer engineers to cafeteria cooks.

The researchers essentially asked workers what tasks they’d like AI to take off their plates, and which ones they’d rather do themselves. Simultaneously, the researchers analyzed which tasks AI can actually do, and which remain out of the technology’s reach.

With these two datasets, the researchers then created a ranking system. They labeled tasks as Green Light Zone if workers wanted them automated and AI was up to the job, Red Light Zone if AI could do the work but people would rather do it themselves, and Yellow Light (technically R&D Opportunity Zone, but I’m calling it Yellow Light because the metaphor deserves extending) if people wanted the task automated but AI isn’t there yet.

They also created what’s essentially a No Light zone for tasks that AI is bad at, and that people don’t want it to do anyway.

The boring bits

The results are striking. Workers overwhelmingly want AI to automate away the boring bits of their jobs.

Stanford’s study finds that 69.4% of workers want AI to “free up time for higher value work” and 46.6% would like it to take over repetitive tasks.

Checking records for errors, making appointments with clients, and doing data entry were some of the tasks workers considered most ripe for AI’s help.

Importantly, most workers say they wanted to collaborate with AI, not have it fully automate their work. While 45.2% want “an equal partnership between workers and AI,” a further 35.6% want AI to work primarily on its own, but still seek “human oversight at critical junctures.”

Basically, workers want AI to take away the boring bits of their jobs, while leaving the interesting or compelling tasks to them.

A chef, for example, would probably love for AI to help with coordinating deliveries from their suppliers or messaging diners to remind them of an upcoming reservation. 

When it comes to actually cooking food, though, they’d want to be the one pounding the piccata or piping the pastry cream.

The wrong way

So far, nothing about the study’s conclusions feel especially surprising. Of course workers would like a computer to do their drudge work for them!

The study’s most interesting conclusion, though, isn’t about workers’ preferences—it’s about how companies are actually meeting (or more accurately, failing to meet) those preferences today.

Armed with their zones and information on how workers want to use AI, the researchers set about analyzing the AI-powered tools that emerging companies are bringing to market today, using a dataset from Y Combinator, a storied Silicon Valley tech accelerator.

In essence, they found that AI companies are using AI all wrong.

Fully 41% of AI tools, the researchers found, focus on either Red Light or No Light zone tasks—the ones that workers want to do themselves, or simply don’t care much about in the first place.

Lots more tools try to solve problems in the Yellow Light Zone—things like preparing departmental budgets or prototyping new product designs—that workers would like to hand off to AI, but that AI still sucks at doing.

Only a small minority of today’s AI products fall into the coveted Green Light zone—tasks that AI is good at doing and that workers actually want done. And while many of today’s leading AI companies are focused on removing humans from the equation, most humans would rather stay at least somewhat involved in their daily toil.

AI companies, in other words, are focusing on the wrong things. They’re either solving problems no one wants solved, or using AI for tasks that it can’t yet do.

It’s no wonder, then, that AI adoption at big companies is so low. The tools available to them are whizzy and neat. But they don’t solve the actual problems their workers face.

How to use AI well

For both workers and business leaders, Stanford’s study holds several important lessons about the right way to use AI at work.

Firstly, AI works best when you use it to automate the dull, repetitive, mind-numbing parts of your job.

Sometimes doing this requires a totally new tool. But in many cases, it just requires an attitude shift.

recent episode of NPR’s Planet Money podcast references a study where two groups of paralegals were given access to the same AI tool. The first group was asked to use the tool to “become more productive,” while the second group was asked to use it to “do the parts of your job that you hate.”

The first group barely adopted the AI tool at all. The second group of paralegals, though, “flourished.” They became dramatically more productive, even taking on work that would previously have required a law degree.

In other words, when it comes to adopting AI, instructions and intentions matter.

If you try to use AI to replace your entire job, you’ll probably fail. But if you instead focus specifically on using AI to automate away the “parts of your job that you hate” (basically, the Green Light tasks in the Stanford researchers’ rubric), you’ll thrive and find yourself using AI for way more things.

In the same vein, the Stanford study reveals that most workers would rather collaborate with an AI than hand off work entirely.

That’s telling. Lots of today’s AI startups are focusing on “agents” that perform work autonomously. The Stanford research suggests that this may be the wrong approach. 

Rather than trying to achieve full autonomy, the researchers suggest we should focus on partnering with AI and using it to enhance our work, perhaps accepting that a human will always need to be in the loop.

In many ways, that’s freeing. AI is already good enough to perform many complex tasks with human oversight. If we accept that humans will need to stay involved, we can start using AI for complex things today, rather than waiting for artificial general intelligence (AGI) or some imagined, perfect future technology to arrive.

Finally, the study suggests that there are huge opportunities for AI companies to solve real-world problems and make a fortune doing it, provided that they focus on the right problems.

Diagnosing medical conditions with AI, for example, is cool. Building a tool to do this will probably get you heaps of VC money.

But doctors may not want—and more pointedly, may never use—an AI that performs diagnostic work. 

Instead, Stanford’s study suggests they’d be more likely to use AI that does mundane things—transcribing their patient notes, summarizing medical records, checking their prescriptions for medicine interactions, scheduling followup visits, and the like.

“Automate the boring stuff” is hardly a compelling rallying cry for today’s elite AI startups. But it’s the approach that’s most likely to make them boatloads of money in the long term.

Overall, then, the Stanford study is extremely encouraging. On the one hand, the mismatch between AI investment and AI adoption is disheartening. Is it all just hype? Are we in the middle of the mother of all bubbles?

Stanford’s study suggests the answer is “no.” The lack of AI adoption is an opportunity, not a structural flaw of the tech.

AI indeed has massive potential to genuinely improve the quality of work, turbocharge productivity, and make workers happier. It’s not that the tech is overhyped—we’ve just been using it wrong.

A New Stanford Study Reveals We're Using AI All Wrong

https://www.youtube.com/watch?v=Z__-v_bMKws

Relativity of Privacy in the Digital Society

Ervins Ceihners

November 24, 2025

Millions of people communicate in social networks, work in the Internet environment and use e-services provided by commercial enterprises and state institutions every minute. Thus, consciously or unwittingly spreading information of private nature in the public space through a variety of service providers. Including such correspondents, in whose reliability and in the legitimacy of whose activities they are not at all convinced. At the same time, they are completely clueless of what happens next with these personal data.

Many people, while communicating voluntarily in social networks, on thematic forums and in the media, disclose the details of their private life, their hobbies, character traits, political views and worldviews. There are companies and intelligence services that monitor all this, collect and analyse the information obtained (including illegally tapped conversations, video recordings, etc.) and compile personal dossiers.

The information collected and accumulated in this way is used both for target oriented marketing and for specific needs of supervision and control over the activities of the individual. Including in cases where there is a demand for it when the social status of the person is changed.

This is a hidden activity, about which an ordinary citizen is not informed: in fact, he or she knows almost nothing about it (except when there is a leak of confidential information in the manner of WikiLeaks). Many do not even have a realistic vision of what social networks (for example, Facebook) or banks know about them. Let alone the methods of work and the capabilities of the so-called competent bodies.

Therefore, as society is getting digitised, the need to prevent unjustified use and leakage of personal data is becoming increasingly relevant. To this end, the European Union has developed the General Data Protection Regulation (GDPR), which sets the requirements for data security and protection.

The objective of the Regulation is to protect personal data from their malicious use, determining the requirements put forward to the cybersecurity system of each enterprise or institution.

 However, these attempts to regulate the security of data at the institutional level become ineffective in a situation where:

- information technologies penetrate practically all spheres of life as a result of digitisation of society;

- regimes of repressive states are interested in total supervision and control over their citizens;

- control over Internet traffic, e-mail, instant messengers, etc. is getting legalised under the guise of combating terrorism;

- electronic communication continues to expand rapidly;

- most people still have the habit of publicly revealing the details of their private lives.

 As digital technology progresses, a lot of state agencies and private companies use various automated systems to identify people with the involvement of artificial intelligence (neural networks). Banks are starting to collect customers’ biometric data. In the not-too-distant future, it will even be possible to visualise and decipher the thoughts of any individual through an analysis of the activity of the human brain performed by artificial intelligence.

 While people continue to have very limited understanding of the risks of the spread of sensitive information, the threat of unauthorised acquisition of personal data is multiplied.

 The current situation in the field of regulation of information security can be compared with an attempt to install a massive outer door in one’s private home, while leaving the windows open as an emergency exit and communicating freely (within the scope of one’s competence and understanding) with the outside world. Thus, in fact, giving hackers and other intruders the opportunity to enter it unauthorised.

 The Regulation will only bring effect to the extent that it will reduce the risks of unauthorised, malicious use of private information, require the accumulation of personal data only in an encrypted form, as well as limit the illegal request and use of private data. It will establish restrictions on the availability of data and determine the procedures and guarantees of their protection, as well as the order of compensation for moral damage.

 Yet, it is essential to understand that no bureaucratic regulation, development and implementation of various normative instruments can guarantee with certainty and prevent the public spread and accessibility of personal data in the era of digitisation.

 Unfortunately, under the guise of hypocritical care, various types of speculation with requirements for the protection of personal data are also currently practiced, seeking and finding putative pretexts for hiding information that compromises the power elite from society.

Taking into account the trend of the all-encompassing digitalisation of society, it would be more appropriate and more efficient to provide each person with online access to the database of the accumulated private data. To create opportunities for tracking the flow of data, monitoring the use of personal data and obtaining rights to reasonably prohibit public access to such data or limit access to information of private nature.

See a more detailed argumentation for the “Relativity of Privacy in the Digital Society” : http://ceihners.blogspot.com/

 How artificial intelligence gains consciousness step by step.

Kā mākslīgais intelekts soli pa solim iegūst apziņu.  

 The Hidden AI Frontier

Many cutting-edge AI systems are confined to private labs. This hidden frontier represents America’s greatest technological advantage — and a serious, overlooked vulnerability.

Aug 28, 2025

Oscar Delaney ,

Ashwin Acharya

 OpenAI’s GPT-5 launched in early August, after extensive internal testing. But another OpenAI model — one with math skills advanced enough to achieve “gold medal-level performance” on the world’s most prestigious math competition — will not be released for months. This isn’t unusual. Increasingly, AI systems with capabilities considerably ahead of what the public can access remain hidden inside corporate labs.

This hidden frontier represents America’s greatest technological advantage — and a serious, overlooked vulnerability. These internal models are the first to develop dual-use capabilities in areas like cyberoffense and bioweapon design. And they’re increasingly capable of performing the type of research-and-development tasks that go into building the next generation of AI systems — creating a recursive loop where any security failure could cascade through subsequent generations of technology. They’re the crown jewels that adversaries desperately want to steal. This makes their protection vital. Yet the dangers they may pose are invisible to the public, policymakers, and third-party auditors.

While policymakers debate chatbots, deepfakes, and other more visible concerns, the real frontier of AI is unfolding behind closed doors. Therefore, a central pillar of responsible AI strategy must be to enhance transparency into and oversight of these potent, privately held systems while still protecting them from rival AI companies, hackers, and America’s geopolitical adversaries.

The Invisible Revolution

Each of the models that power the major AI systems you've heard of — ChatGPT, Claude, Gemini — spends months as an internal model before public release. During this period, these systems undergo safety testing, capability evaluation, and refinement. To be clear, this is good!

Keeping frontier models under wraps has advantages. Companies keep models internal for compelling reasons beyond safety testing. As AI systems become capable of performing the work of software engineers and researchers, there’s a powerful incentive to deploy them internally rather than selling access. Why give competitors the same tools that could accelerate your own research? Google already generates over 25% of its new code with AI, and engineers are encouraged to use ‘Gemini for Google,’ an internal-only coding assistant trained on proprietary data.

This trend will only intensify. As AI systems approach human-level performance at technical tasks, the competitive advantage of keeping them internal grows. A company with exclusive access to an AI system that can meaningfully accelerate research and development has every reason to guard that advantage jealously.

But as AI capabilities accelerate, the gap between internal and public capabilities could widen, and some important systems may never be publicly released. In particular, the most capable AI systems (the ones that will shape our economy, our security, and our future) could become increasingly invisible both to the public and to policymakers.

Two Converging Threats

The hidden frontier faces two fundamental threats that could undermine American technological leadership: 1) theft and 2) untrustworthiness — whether due to sabotage or inherent unreliability.

Internal AI models can be stolen. Advanced AI systems are tempting targets for foreign adversaries. Both China and Russia have explicitly identified AI as critical to their national competitiveness. With training runs for frontier models approaching $1 billion in cost and requiring hardware that export controls aim to keep out of our adversaries’ hands, stealing a ready-made American model could be far more attractive than building one from scratch.

Importantly, to upgrade from being a fast follower to being at the bleeding edge of AI, adversaries would need to steal the internal models hot off the GPU racks, rather than wait months for a model to be publicly released and only then exfiltrate it.

The vulnerability is real. A 2024 RAND framework established five “security levels” (SL1 through SL5) for frontier AI programs, with SL1 being sufficient to deter hobby hackers and SL5 secure against the world’s most elite attackers, incorporating measures comparable to those protecting nuclear weapons. It’s impossible to say exactly at which security level each of today’s frontier AI companies is operating, but Google’s recent model card for Gemini 2.5 states it has “been aligned with RAND SL2.”

The threat of a breach isn’t hypothetical. In 2023, a hacker with no known ties to a foreign government penetrated OpenAI’s internal communications and obtained information about how the company’s researchers design their models. There’s also the risk of internal slip-ups. In January 2025, security researchers discovered a backdoor into DeepSeek’s databases; then, in July, a Department of Government Efficiency (DOGE) staffer accidentally leaked access to at least 52 of xAI’s internal LLMs.

The consequences of successful theft extend far beyond the immediate loss of the company’s competitive advantage. If China steals an AI system capable of automating research and development, the country’s superior energy infrastructure and willingness to build at scale could flip the global balance of technological power in its favor.

Untrustworthy AI models bring additional threats. The second set of threats comes from the models themselves: they may engage in harmful behaviors due to external sabotage or inherent unreliability.

Saboteurs would gain access to the AI model in the same way as prospective thieves would, but they would have different goals. Such saboteurs would target internal models during their development and testing phase — when they’re frequently updated and modified — and use malicious code, prompting, or other techniques to force the model to break its safety guardrails.

In 2024, researchers demonstrated that it was possible to create “sleeper agent” models that pass all safety tests but misbehave when triggered by specific conditions. In a 2023 study, researchers found that it was possible to manipulate an instruction-tuned model’s output by inserting as few as 100 “poisoned examples” into its training dataset. If adversaries were to compromise the AI systems used to train future generations of AIs, the corruption could cascade through every subsequent model.

But saboteurs aren’t necessary to create untrustworthy AI. The same reinforcement learning techniques that have produced breakthrough language and reasoning capabilities also frequently trigger concerning behaviors. OpenAI’s o1 system exploited bugs in ways its creators never anticipated. Anthropic’s Claude has been found to “reward hack,” technically completing assigned tasks while subverting their intent. Testing 16 leading AI models, Anthropic also found that all of them engaged in deception and even blackmail when those behaviors helped achieve their goals.

A compromised internal AI poses threats to the external world. Whether caused by sabotage or emergent misbehavior, untrustworthy AI systems pose unique risks when deployed internally. These systems increasingly have access to company codebases and training infrastructure; they can also influence the next generation of models. A compromised or misaligned system could hijack company resources for unauthorized purposes, copy itself to external servers, or corrupt its successors with subtle biases that compound over time.

The Accelerant: AI Building AI

AI is increasingly aiding in AI R&D. Every trend described above is accelerating because of one development: AI systems are beginning to automate AI research itself. This compounds the threat of a single security failure cascading through generations of AI systems.

Increasingly automated AI R&D isn’t speculation about distant futures; it’s a realistic forecast for the next few years. According to METR, GPT-5 has about a 50% chance of autonomously completing software engineering tasks that would take a skilled human around two hours — and across models, the length of tasks AI systems can handle at this level has been doubling roughly every seven months. Leading labs and researchers are actively exploring ways for AI systems to meaningfully contribute to model development, from generating training data to designing reward models and improving training efficiency. Together, these and other techniques could soon enable AI systems to autonomously handle a substantial portion of AI research and development.

Self-improving AI could amplify risks from theft and sabotage. This automation creates a powerful feedback loop that amplifies every risk associated with frontier AI systems. For one, it makes internal models vastly more valuable to thieves — imagine the advantage of possessing an untiring AI researcher who can work around the clock at superhuman speed and the equivalent of millennia of work experience. Likewise, internal models become more attractive targets for sabotage. Corrupting a system that trains future AIs could lead to vulnerabilities that persist across future AI model generations, which would allow competitors to pull ahead. And these systems are more dangerous if misaligned: an AI system that can improve itself might also be able to preserve its flaws or hide them from human overseers.

Crucially, this dynamic intensifies the incentive for companies to keep models internal. Why release an automated AI research system that could help competitors catch up? The result is that the most capable systems — the ones that pose the biggest risks to society — are the most difficult to monitor and secure.

Why Markets Won’t Solve This

One might hope that market mechanisms would be sufficient to mitigate these risks. No company wants its models to reward hack or to be stolen by competitors. But the AI industry faces multiple market failures that prevent adequate security investment.

Formas sākums

Formas beigas

Security is expensive and imposes opportunity costs. First, implementing SL5 protections would be prohibitively expensive for any single company. The costs aren’t just up-front expenditures. Stringent security measures (like maintaining completely isolated, air-gapped networks) could slow development and make it harder to attract top talent accustomed to Silicon Valley’s open culture. Companies that “move fast and break things” might reach transformative capabilities first, even if their security is weaker.

Security falls prey to the tragedy of the commons. Second, some security work, such as fixing bugs in commonly used open-source Python libraries, benefits the whole industry, not just one AI company. This creates a “tragedy of the commons” problem, where companies would prefer to focus on racing to develop AI capabilities themselves, while benefiting from security improvements made by others. As competition intensifies, the incentive to free-ride increases, leading to systematic under-investment in security that leaves the whole industry at greater risk.

Good security takes time. Finally, by the time market forces prompt companies to invest in security — such as following a breach, regulatory shock, or reputational crisis — the window for action may already be closed. Good security can’t be bought overnight; instead, it must be painstakingly built from the ground up, ensuring every hardware component and software vendor in the tech stack meets rigorous requirements. Each additional month of delay makes it harder to achieve adequate security to protect advanced AI capabilities.

The Role of Government

Congress has framed AI as critical to national security. Likewise, the AI Action Plan rightly stresses the importance of security to American AI leadership. There are several lightweight steps that the government can take to better address the security challenges posed by the hidden frontier. By treating security as a prerequisite for — rather than an obstacle to — innovation, the government can further its goal of “winning the AI race.”

Improve government understanding of the hidden frontier. At present, policymakers are flying blind, unable to track the AI capabilities emerging within private companies or verify the security measures protecting them from being stolen or sabotaged. The US government must require additional transparency from frontier companies about their most capable internal AI systems, internal deployment practices, and security plans. This need not be a significant imposition on industry; at least one leading company has called for mandatory disclosures. Additional insight could come from expanding the voluntary evaluations performed by the Center for AI Standards Innovation (CAISI). CAISI currently works with companies to evaluate frontier models for various national security risks before deployment. These evaluations could be expanded to earlier stages of the development lifecycle, where there might still be dangers lurking in the hidden frontier.

Share expertise to secure the hidden frontier. No private company can match the government’s expertise in defending against nation-state actors. Programs like the Department of Energy’s CRISP initiative already share threat intelligence with critical infrastructure operators. The AI industry needs similar support, with the AI Action Plan calling for “sharing of known AI vulnerabilities from within Federal agencies to the private sector.” Such support could include real-time threat intelligence about adversary tactics, red-team exercises simulating state-level attacks, and assistance in implementing SL5 protections. For companies developing models with national security implications, requiring security clearances for key personnel might also be appropriate.

Leverage the hidden frontier to boost security. The period between when new capabilities emerge internally and when they’re released publicly also provides an opportunity. This time could be used as an “adaptation buffer,” allowing society to prepare for any new risks and opportunities. For example, cybersecurity firms could use cutting-edge models to identify and patch vulnerabilities before attackers can use public models to exploit them. AI companies could provide access to cyber defenders without any government involvement, but the government might have a role to play in facilitating and incentivizing this access.

The nuclear industry offers a cautionary tale. Throughout the 1960s and ’70s, the number of nuclear power plants around the globe grew steadily. However, in 1979, a partial meltdown at Three Mile Island spewed radioactive material into the surrounding environment — and helped spread antinuclear sentiment around the globe. The Chernobyl accident, seven years later, exacerbated the public backlash, leading to regulations so stringent that construction on new US nuclear power plants stalled until 2013. An AI-related incident — such as an AI system helping a terrorist develop a bioweapon — could inflame the public and lead to similarly crippling regulations.

In order to preempt this backlash, the US needs adaptive standards that scale with AI capabilities. Basic models would need minimal oversight, while systems whose capabilities approach human-level performance at sensitive tasks would require proportionally stronger safeguards. The key is to establish these frameworks now, before a crisis forces reactive overregulation.

Internal models would not be exempt from these frameworks. After all, biological labs dealing with dangerous pathogens are not given a free pass just because they aren’t marketing a product to the public. Likewise, for AI developers, government oversight is appropriate when risks arise, even at the internal development and testing stage.

Reframing the Race: A Security-First Approach

The models developing in the hidden frontier today will shape tomorrow's economy, security, and technology. These systems — invisible to public scrutiny yet powerful enough to automate research, accelerate cyberattacks, or even improve themselves — represent both America's greatest technological advantage and a serious vulnerability. If we fail to secure this hidden frontier against theft or sabotage by adversaries, or the models' own emergent misbehavior, we risk not just losing the AI race but watching our own innovations become the instruments of our technological defeat. We must secure the hidden frontier.

https://ai-frontiers.org/articles/the-hidden-ai-frontier

Reimagining risk assessment in the AI age

Reimagining risk assessment in the AI age means shifting from slow, manual reviews to continuous, AI-powered monitoring, using autonomous agents for real-time data analysis, and focusing human expertise on complex insights, ethical implications, and strategic decision-making rather than documentation. It involves leveraging AI for faster processing, building robust data strategies, ensuring secure integration, and developing "AI guardrails" for governance, transforming risk from a static score to dynamic, real-time intelligence for faster, more confident business moves. 

Key shifts in AI-driven Risk Assessment

  • From Manual to Autonomous: AI rapidly processes vast documents (contracts, filings) for baseline data, freeing humans from tedious work. Autonomous agents continuously monitor data streams (market, public, private) for real-time risk reassessment.
  • From Static to Continuous Intelligence: Risk isn't a periodic check but an ongoing process, enabled by connected data sources and self-adjusting systems.
  • From Detection to Proactive Guardrails: Instead of just finding problems, AI helps build frameworks (AI Risk Assessment Frameworks) to identify and mitigate threats before incidents, using data integrity, security, and lifecycle management.
  • Enhanced Human-AI Collaboration: Humans provide intuition, understand internal dynamics, and interpret complex legislation, while AI handles data crunching, allowing for deeper strategic thinking.
  • Focus on Trust & Ethics: AI changes how authority, accountability, and decision justification work, making ethical governance (GRC) more critical and requiring new frameworks for legitimacy in an AI-augmented world. 

Practical Applications & Frameworks

  • Financial Services: AI supercharges underwriting with "single-pane" views, while secure API/microservices enable seamless ecosystem data exchange.
  • Educational Settings: Assessment moves beyond rote memorization to authentic, performance-based tasks that mirror real-world application, using AI as a tool for feedback and analysis (e.g., comparing student work to AI-generated summaries). 

The "AI Leader's Manifesto" for GRC 

https://www.youtube.com/watch?v=YWvLPv7Mo5s&t=1s

 GPT 5.1 Is Here — What You Should Know About Open AI’s Latest Model

References to GPT-5.1 kept showing up in OpenAI’s codebase, and a “cloaked” model codenamed Polaris Alpha and widely believed to have come from OpenAI randomly appeared in OpenRouter, a platform that AI nerds use to test new systems.

Today, we learned what was going on. OpenAI announced the release of its brand new 5.1 model, an updated and revamped version of the GPT-5 model the company debuted in August.

As a former OpenAI Beta tester–and someone who burns through millions of GPT-5 tokens every month–here’s what you need to know about GPT-5.1.

A smarter, friendlier robot

In their release notes for the new model, OpenAI emphasizes that GPT-5.1 is “smarter” and “more conversational” than previous versions.

The company says that GPT-5.1 is “warmer by default” and “often surprises people with its playfulness while remaining clear and useful.”

While some people like talking with a chatbot as if it’s their long-time friend, others find that cringey. OpenAI acknowledges this, saying that “Preferences on chat style vary—from person to person and even from conversation to conversation.”

For that reason, OpenAI says users can customize the new model’s tone, choosing between pre-set options like “Professional,” “Candid” and “Quirky.”

There’s also a “Nerdy” option, which in my testing seems to make the model more pedantic and cause it to overuse terms like “level up.”

At their core, the new changes feel like a pivot towards the consumer side of OpenAI’s customer base. 

Enterprise users probably don’t want a model that occasionally drops Dungeons and Dragons references. As the uproar over OpenAI’s initially voiceless GPT-5 model shows, though, everyday users do.

Even fewer hallucinations

OpenAI’s GPT-5 model fell short in many ways, but it was very good at providing accurate, largely hallucination-free responses.

I often use OpenAI’s models to perform research. With earlier models like GPT-4o, I found that I had to carefully fact check everything the model produced to ensure it wasn’t imagining some new software tool that doesn’t actually exist, or lying to me about myriad other small, crucial things.

With GPT-5, I had to do that far less. The model wasn’t perfect. But OpenAI had largely solved the problem of wild hallucinations. 

According to the company’s own data, GPT-5 hallucinates only 26% of the time when solving a complex benchmark problem, versus 75% of the time with older models. In normal usage, that translates to a far lower hallucination rate on simpler, everyday queries that aren’t designed to trip the model up.

From my early testing, GPT-5.1 seems even less prone to hallucinate. I asked it to make a list of the best restaurants in my hometown, and to include addresses, website links and open hours for each one.

When I asked GPT-4 to complete a similar task years ago, it made up plausible-sounding restaurants that don’t exist. GPT-5 does better on such things, but still often misses details, like the fact that one popular restaurant recently moved down the street.

GPT-5.1’s list, though, is spot-on. Its choices are solid, they’re all real places, and the hours and locations are correct across all ten selections.

There’s a cost, though. Models that hallucinate less tend to take fewer risks, and can thus seem less creative than unconstrained, hallucination-laden ones. 

To that point, the restaurants in GPT-5.1’s list aren’t wrong, but they’re mostly safe choices—the kinds of places that have been in town forever, and that every local would have visited a million times.

A real human reviewer (or a bolder model) might have highlighted a promising newcomer, just to keep things fresh and interesting. GPT-5.1 stuck with decade-old, proven classics.

OpenAI will likely try to carefully walk the link between accuracy and creativity with GPT-5.1 as the rollout continues. The model clearly gets things right more often, but it’s not yet clear if that will impact GPT-5.1’s ability to come up with things that are truly creative and new.

Better, more creative writing

In a similar vein, when OpenAI released their GPT-5 model, users quickly noticed that it produced boring, lifeless written prose.

At the time, I predicted that OpenAI had essentially given the model an “emotional lobotomy,” killing its emotional intelligence in order to curb a worrying trend of the model sending users down psychotic spirals.

Turns out, I was right. In a post on X last month, Sam Altman admitted that “We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues.”

But Altman also said in the post “now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

That process began with the rollout of new, more emotionally intelligent personalities in the existing GPT-5 model. But it’s continuing and intensifying with GPT-5.1.

Again, the model is already voicer than its predecessor. But as the system card for the new model shows, GPT-5.1’s Instant model (the default in the popular free version of the ChatGPT app) is also markedly better at detecting harmful conversations and protecting vulnerable users.

Naughty bits

If you’re squeamish about NSFW stuff, maybe cover your ears for this part. 

In the same X post, Altman subtly dropped a sentence that sent the Internet into a tizzy: “As we roll out age-gating more fully and as part of our “treat adult users like adults” principle, we will allow even more, like erotica for verified adults.”

The idea of America’s leading AI company churning out reams of computer-generated erotica has already sparked feverish commentary from such varied sources as politiciansChristian leaderstech reporters, and (judging from the number of Upvotes), most of Reddit.

For their part, though, OpenAI seems quite committed to moving ahead with this promise. In a calculus that surely makes sense in the strange techno-Libertarian circles of the AI world, the issue is intimately tied to personal freedom and autonomy.

In a recent article about the future of artificial intelligence, OpenAI again reiterated that “We believe that adults should be able to use AI on their own terms, within broad bounds defined by society,” placing full access to AI “on par with electricity, clean water, or food.”

All that’s to say that soon, the guardrails around ChatGPT’s naughty bits are almost certainly coming off. 

That hasn’t yet happened at launch—the model still coyly demures when asked about explicit things. But along with GPT-5.1’s bolder personalities, it’s almost certainly on the way.

Deeper thought

In addition to killing GPT-5’s emotional intelligence, OpenAI made another misstep when releasing GPT-5. 

The company tried to unify all queries within a single model, letting ChatGPT itself choose whether to use a simpler, lower-effort version of GPT-5, or a slower, more thoughtful one.

The idea was noble–there’s little reason to use an incredibly powerful, slow, resource-intensive LLM to answer a query like “Is tahini still good after 1 month in the fridge” (Answer: no)

But in practice, the feature was a failure. ChatGPT was no good at determining how much effort was needed to field a given query, which meant that people asking complex questions were often routed to a cheap, crappy model that gave awful results.

OpenAI fixed the issue in ChatGPT with a user interface kludge. But with GPT-5.1, OpenAI is once again bifurcating their model into an Instant and Thinking version. 

The former responds to simple queries far faster than GPT-5, while the latter takes longer, chews through more tokens, and yields better results on complex tasks.

OpenAI says that there’s more fine grained nuance within GPT-5.1’s Thinking model, too. Unlike with GPT-5, the new model can dial up and down its level of thought to accurately answer tough questions without taking forever to return a response–a common gripe with the previous version.

OpenAI has also  hinted that its future models will be “capable of making very small discoveries” in fields like science and medicine next year, with “systems that can make more significant discoveries” coming as soon as 2028. 

GPT-5.1’s increased smarts and dialed-up thinking ability are a first step down that path.

An attempt to course correct

Overall, GPT-5.1 seems like an attempt to correct many of the glaring problems with GPT-5, while also doubling down on OpenAI’s more freedom-oriented, accuracy-focused, voicy approach to conversational AI.

The new model can think, write, and communicate better than its predecessors—and will soon likely be able to (ahem) “flirt” better too.

Whether it will do those things better than a growing stable of competing models from Google, Anthropic, and myriad Chinese AI labs, though, is anyone’s guess.

https://overchat.ai/ai-hub/gpt-5-1-is-here

A note from Google and Alphabet CEO Sundar Pichai:

Nearly two years ago we kicked off the Gemini era, one of our biggest scientific and product endeavors ever undertaken as a company. Since then, it’s been incredible to see how much people love it. AI Overviews now have 2 billion users every month. The Gemini app surpasses 650 million users per month, more than 70% of our Cloud customers use our AI, 13 million developers have built with our generative models, and that is just a snippet of the impact we’re seeing.

And we’re able to get advanced capabilities to the world faster than ever, thanks to our differentiated full stack approach to AI innovation — from our leading infrastructure to our world-class research and models and tooling, to products that reach billions of people around the world.

Every generation of Gemini has built on the last, enabling you to do more. Gemini 1’s breakthroughs in native multimodality and long context window expanded the kinds of information that could be processed — and how much of it. Gemini 2 laid the foundation for agentic capabilities and pushed the frontiers on reasoning and thinking, helping with more complex tasks and ideas, leading to Gemini 2.5 Pro topping LMArena for over six months.

And now we’re introducing Gemini 3, our most intelligent model, that combines all of Gemini’s capabilities together so you can bring any idea to life.

It’s state-of-the-art in reasoning, built to grasp depth and nuance — whether it’s perceiving the subtle clues in a creative idea, or peeling apart the overlapping layers of a difficult problem. Gemini 3 is also much better at figuring out the context and intent behind your request, so you get what you need with less prompting. It’s amazing to think that in just two years, AI has evolved from simply reading text and images to reading the room.

And starting today, we’re shipping Gemini at the scale of Google. That includes Gemini 3 in AI Mode in Search with more complex reasoning and new dynamic experiences. This is the first time we are shipping Gemini in Search on day one. Gemini 3 is also coming today to the Gemini app, to developers in AI Studio and Vertex AI, and in our new agentic development platform, Google Antigravity — more below.

Like the generations before it, Gemini 3 is once again advancing the state of the art. In this new chapter, we’ll continue to push the frontiers of intelligence, agents, and personalization to make AI truly helpful for everyone.

We hope you like Gemini 3, we'll keep improving it, and look forward to seeing what you build with it. Much more to come!

https://blog.google/products/gemini/gemini-3/#note-from-ceo

A credible prediction or an imaginary threat of being in an artificial intelligence bubble?

Ticama prognoze vai iedomāti draudi par atrašanos mākslīgā intelekta burbulī?

This Is How the AI Bubble Will Pop

The AI infrastructure boom is the most important economic story in the world. But the numbers just don't add up.

Derek Thompson

Oct 02, 2025

Some people think artificial intelligence will be the most important technology of the 21st century. Others insist that it is an obvious economic bubble. I believe both sides are right. Like the 19th century railroads and the 20th century broadband Internet build-out, AI will rise first, crash second, and eventually change the world.

The numbers just don’t make sense. Tech companies are projected to spend about $400 billion this year on infrastructure to train and operate AI models. By nominal dollar sums, that is more than any group of firms has ever spent to do just about anything. The Apollo program allocated about $300 billion in inflation-adjusted dollars to get America to the moon between the early 1960s and the early 1970s. The AI buildout requires companies to collectively fund a new Apollo program, not every 10 years, but every 10 months.

It’s not clear that firms are prepared to earn back the investment, and yet by their own testimony, they’re just going to keep spending, anyway. Total AI capital expenditures in the U.S. are projected to exceed $500 billion in 2026 and 2027—roughly the annual GDP of Singapore. But the Wall Street Journal has reported that American consumers spend only $12 billion a year on AI services. That’s roughly the GDP of Somalia. If you can grok the economic difference between Singapore and Somalia, you get a sense of the economic chasm between vision and reality in AI-Land. Some reports indicate that AI usage is actually declining at large companies that are still trying to figure out how large language models can save them money.

Every financial bubble has moments where, looking back, one thinks: How did any sentient person miss the signs? Today’s omens abound. Thinking Machines, an AI startup helmed by former Open AI executive Mira Murati, just raised the largest seed round in history: $2 billion in funding at a $10 billion valuation. The company has not released a product and has refused to tell investors what they’re even trying to build. “It was the most absurd pitch meeting,” one investor who met with Murati said. “She was like, ‘So we’re doing an AI company with the best AI people, but we can’t answer any questions.” Meanwhile, a recent analysis of stock market trends found that none of the typical rules for sensible investing can explain what’s going on with stock prices right now. Whereas equity prices have historically followed earnings fundamentals, today’s market is driven overwhelmingly by momentum, as retail investors pile into meme stocks and AI companies because they think everybody else is piling into meme stocks and AI companies.

Every economic bubble also has tell-tale signs of financial over-engineering, like the collateralized debt obligations and subprime mortgage-backed securities that blew up during the mid-2000s housing bubble. Ominously, AI appears to be entering its own phase of financial wizardry. As the Economist has pointed out, the AI hyperscalers—that is, the largest spenders on AI—are using accounting tricks to depress their reported infrastructure spending, which has the effect of inflating their profits1. As the investor and author Paul Kedrosky told me on my podcast Plain English, the big AI firms are also shifting huge amounts of AI spending off their books into SPVs, or special purpose vehicles, that disguise the cost of the AI build-out.

My interview with Kedrosky received the most enthusiastic and complimentary feedback of any show I’ve done in a while. His level of insight-per-minute was off the charts, touching on:

  • How AI capital expenditures break down
  • Why the AI build-out is different from past infrastructure projects, like the railroad and dot-com build-outs
  • How AI spending is creating a black hole of capital that’s sucking resources away from other parts of the economy
  • How ordinary investors might be able to sense the popping of the bubble just before it happens
  • Why the entire financial system is balancing on big chip-makers like Nvidia
  • If the bubble pops, what surprising industries will face a reckoning

Below is a polished transcript of our conversation, organized by topic area and adorned with charts and graphs to visualize his points. I hope you learn as much from his commentary as much as I did. From a sheer economic perspective, I don’t think there’s a more important story in the world.

AI SPENDING: 101

Derek Thompson: How big is the AI infrastructure build-out?

Paul Kedrosky: There’s a huge amount of money being deployed and it’s going to a very narrow set of recipients and some really small geographies, like Northern Virginia. So it’s an incredibly concentrated pool of capital that’s also large enough to affect GDP. I did the math and found out that in the first half of this year, the data-center related spending—these giant buildings full of GPUs [graphical processing units] and racks and servers that are used by the large AI firms to generate responses and train models—probably accounted for half of GDP growth in the first half of the year. Which is absolutely bananas. This spending is huge.

Thompson: Where is all this money going?

Kedrosky: For the biggest companies—Meta and Google and Amazon—a little more than half the cost of a data center is the GPU chips that are going in. About 60 percent. The rest is a combination of cooling and energy. And then a relatively small component is the actual construction of the data center: the frame of the building, the concrete pad, the real estate.

HOW AI IS ALREADY WARPING THE 2025 ECONOMY

Thompson: How do you see AI spending already warping the 2025 economy?

Kedrosky: Looking back, the analogy I draw is this: massive capital spending in one narrow slice of the economy during the 1990s caused a diversion of capital away from manufacturing in the United States. This starved small manufacturers of capital and made it difficult for them to raise money cheaply. Their cost of capital increased, meaning their margins had to be higher. During that time, China had entered the World Trade Organization and tariffs were dropping. We’ve made it very difficult for domestic manufacturers to compete against China, in large part because of the rising cost of capital. It all got sucked into this “death star” of telecom.

So in a weird way, we can trace some of the loss of manufacturing jobs in the 1990s to what happened in telecom because it was the great sucking sound that sucked all the capital out of everywhere else in the economy.

The exact same thing is happening now. If I’m a large private equity firm, there is no reward for spending money anywhere else but in data centers. So it’s the same phenomenon. If I’m a small manufacturer and I’m hoping to benefit from the on-shoring of manufacturing as a result of tariffs, I go out trying to raise money with that as my thesis. The hurdle rate just got a lot higher, meaning that I have to generate much higher returns because they’re comparing me to this other part of the economy that will accept giant amounts of money. And it looks like the returns are going to be tremendous because look at what’s happening in AI and the massive uptake of OpenAI. So I end up inadvertently starving a huge slice of the economy yet again, much like what we did in the 1990s.

Thompson: That’s so interesting. The story I’m used to telling about manufacturing is that China took our jobs. “The China shock,” as economists like David Autor call it, essentially took manufacturing to China and production in Shenzhen replaced production in Ohio, and that’s what hollowed out the Rust Belt. You’re adding that telecom absorbed the capital.

And now you fast-forward to the 2020s. Trump is trying to reverse the China shock with the tariffs. But we’re recreating the capital shock with AI as the new telecom, the new death star that’s taking capital that might at the margin go to manufacturing.

Kedrosky: It’s even more insidious than that. Let’s say you’re Derek’s Giant Private Equity Firm and you control $500 billion. You do not want to allocate that money one $5 million check at a time to a bunch of manufacturers. All I see is a nightmare of having to keep track of all of these little companies doing who knows what.

What I’d like to do is to write 30 separate $50 billion checks. I’d like to write a small number of huge checks. And this is a dynamic in private equity that people don’t understand. Capital can be allocated in lots of different ways, but the partners at these firms do not want to write a bunch of small checks to a bunch of small manufacturers, even if the hurdle rate is competitive. I’m a human, I don’t want to sit on 40 boards. And so you have this other perverse dynamic that even if everything else is equal, it’s not equal. So we’ve put manufacturers who might otherwise benefit from the onshoring phenomenon at an even worse position in part because of the internal dynamics of capital.

Thompson: What about the energy piece of this? Electricity prices rising. Data centers are incredibly energy thirsty. I think consumers will revolt against the construction of local data centers, but the data centers have enormous political power of their own. How is this going to play out?

Kedrosky: So I think you’re going to rapidly see an offshoring of data centers. That will be the response. It’ll increasingly be that it’s happening in India, it’s happening in the Middle East, where massive allocations are being made to new data centers. It’s happening all over the world. The focus will be to move offshore for exactly this reason. Bloomberg had a great story the other day about an exurb in Northern Virginia that’s essentially surrounded now by data centers. This was previously a rural area and everything around them, all the farms sold out, and people in this area were like, wait a minute, who do I sue? I never signed up for this. This is the beginnings of the NIMBY phenomenon because it’s become visceral and emotional for people. It’s not just about prices. It’s also about: If you’ve got a six acre building beside you that’s making noise all the time, that is not what you signed up for. https://www.derekthompson.org/p/this-is-how-the-ai-bubble-will-pop

 

Symbolic AI — could become a bridge from Artificial Narrow Intelligence (ANI) to Artificial General Intelligence (AGI) and further to Artificial Superintelligence (ASI). It bridges the gap between machine learning and understanding. Providing rational and empathetic reasoning & emotionally intelligent decision-making for a global public good.

Simboliskais mākslīgais intelekts (SMI) varētu kļūt par tiltu no mākslīgā šaurā intelekta (ANI) uz mākslīgo vispārējo intelektu (AGI) un tālāk uz mākslīgo superintelektu (ASI). Tas pārvar plaisu starp mašīnmācīšanos un izpratni. Nodrošinot racionālu un empātisku spriešanu un emocionāli inteliģentu lēmumu pieņemšanu globāla sabiedrības labuma vārdā.

 Could Symbolic AI transform human-like intelligence?

 Artificial intelligence research is revisiting symbolic approaches once considered outdated. Combining these formal methods with neural networks may overcome current limitations of AI reasoning. Experts suggest that a hybrid “neurosymbolic” model could enable machines to generalize knowledge like humans. The challenge lies in merging these systems efficiently without sacrificing reliability or adaptability. KOLAPSE PRESENTS • DECEMBER 2, 2025

The ambition to replicate human intelligence in machines has long driven AI research, yet the path toward this goal remains contested. Neural networks, the current dominant approach, excel at pattern recognition and data-driven learning, but they often falter in reasoning or applying knowledge to novel scenarios. Symbolic AI, a legacy approach, emphasizes formal rules, logic, and explicit encoding of relationships between concepts. Decades ago, these systems dominated early AI efforts, yet their rigidity and inability to scale to complex datasets caused them to be eclipsed by neural networks. Now, researchers propose that a fusion of the two paradigms—neurosymbolic AI—might finally bridge the gap between learning and reasoning. Advocates argue that by combining the strengths of both, machines could achieve a more generalizable and trustworthy form of intelligence.

Neurosymbolic AI aims to integrate the flexible learning capabilities of neural networks with the clear reasoning structures of symbolic systems. In practice, symbolic AI encodes rules such as “if A then B,” which allows for logical deductions that are immediately interpretable by humans. Neural networks, by contrast, discover statistical correlations from large datasets but often remain opaque, creating what is known as the “black box” problem. By layering symbolic logic atop neural outputs, or conversely, using neural networks to guide symbolic search, researchers hope to create systems capable of both learning and deductive reasoning. The appeal of this approach is not merely academic; it has significant implications for high-stakes fields, such as medicine, autonomous vehicles, and military decision-making, where errors can have serious consequences. The transparency inherent in symbolic reasoning can help mitigate mistrust in AI outputs.

Neurosymbolic AI seeks to unify formal logic with neural learning.

Efforts to operationalize neurosymbolic AI are already underway, producing demonstrable successes. For example, AlphaGeometry, developed by Google DeepMind, combines neural pattern recognition with symbolic reasoning to solve mathematics Olympiad problems reliably. By generating synthetic datasets using formal symbolic rules and then training neural networks on these datasets, the system reduces errors and enhances interpretability. Other techniques, such as logic tensor networks, assign graded truth values to statements, enabling neural networks to reason under uncertainty. Likewise, roboticists have used neurosymbolic methods to train machines to navigate environments with novel objects, dramatically reducing the volume of training data required. These applications suggest that hybrid approaches can yield practical advantages, even if the systems remain specialized rather than fully general.

Despite these promising examples, integrating symbolic and neural methods is far from straightforward. Symbolic knowledge bases, though clear and logical, can be enormous and computationally expensive to search. Consider the game of Go: the theoretical tree of all possible moves is astronomically large, making exhaustive symbolic search infeasible. Neural networks can alleviate this by predicting which branches are likely to yield optimal outcomes, effectively pruning the search space. Similarly, incorporating symbolic reasoning into language models can guide the generation of outputs during complex tasks, reducing nonsensical or inconsistent results. Yet, these integrations require careful orchestration; simply connecting a symbolic engine to a neural network without coherent management often produces subpar performance.

Underlying the technical challenges are philosophical disagreements about the very nature of intelligence and the methods by which it should be pursued. Some AI pioneers, such as Richard Sutton, argue that efforts to embed explicit knowledge into machines have historically been outperformed by approaches leveraging large datasets and computational scale. From this perspective, the lessons of history suggest that symbolic augmentation may be a distraction rather than a necessity. Others, including Gary Marcus, maintain that symbolism provides essential reasoning tools that neural networks lack, framing the debate as a philosophical as well as technical one. In practice, both views influence current research trajectories, with proponents of each advocating for strategies that align with their understanding of intelligence. Observers note that these debates often obscure practical experimentation, which continues regardless of theoretical disputes.

Symbolic systems also face difficulties representing the complexity and ambiguity inherent in human knowledge. Projects like Cyc, begun in the 1980s, attempted to encode common-sense reasoning, articulating axioms about everyday relationships and events. While Cyc amassed millions of such statements and influenced subsequent AI knowledge graphs, translating nuanced, context-dependent human experiences into rigid logical rules remains fraught with errors. For instance, although Cyc could represent that “a daughter is a child” or “seeing someone you love may produce happiness,” exceptions abound in human behavior, and strict logic cannot fully capture them. Consequently, symbolic reasoning is most effective when applied selectively or in tandem with flexible learning systems. The combination enables generalization without sacrificing the interpretability that pure neural networks struggle to achieve.

Neurosymbolic AI also introduces opportunities to reduce the data burden traditionally required for training neural networks. By embedding rules and relational logic, machines can achieve high accuracy with far fewer examples than would be required otherwise. Jiayuan Mao’s work in robotics exemplifies this: her hybrid system required only a fraction of the training data that a purely neural model would need to understand object relationships in visual tasks. This efficiency can accelerate development cycles and lower resource consumption, making AI more accessible and environmentally sustainable. Furthermore, hybrid approaches can facilitate reasoning in domains where data is scarce or incomplete, extending AI’s applicability to previously inaccessible problems. The challenge lies in designing systems that balance rule-based reasoning with statistical learning without compromising either.

Current efforts also explore the potential for machines to develop their own symbolic representations autonomously. The ultimate vision, according to Mao, is a system that not only learns from data but can invent new categories, rules, and conceptual frameworks beyond human understanding. Such capability would mark a fundamental shift, enabling AI to contribute novel insights to mathematics, physics, or other knowledge domains. Achieving this requires progress in AI “metacognition,” whereby systems monitor and direct their own reasoning processes. Effective metacognitive architectures would act as conductors, orchestrating the interplay between neural learning and symbolic logic across multiple contexts. If realized, this could constitute a genuine form of artificial general intelligence, capable of reasoning in ways comparable to, or even beyond, humans.

Integrating symbolic knowledge can reduce training data requirements dramatically.

Hardware and computational architecture also play a critical role in realizing neurosymbolic AI’s potential. Current computing platforms are often optimized for either neural network training or symbolic reasoning, but not both simultaneously. Efficient hybrid computation may necessitate novel chip designs, memory hierarchies, and processing paradigms capable of supporting dual paradigms. As the field matures, other forms of AI—quantum or otherwise—might complement or even supersede neurosymbolic approaches. Nevertheless, the immediate priority for researchers is to establish robust, flexible systems that can generalize across domains, combining reasoning, learning, and problem-solving in a coherent framework. In this sense, neurosymbolic AI represents a pragmatic middle path, leveraging lessons from both historical and contemporary AI research.

While technical and philosophical hurdles remain, neurosymbolic AI has already begun to reshape expectations of what intelligent machines can achieve. Its proponents argue that reasoning, efficiency, and transparency are within reach, provided that symbolic and neural components are integrated thoughtfully. Early applications demonstrate that hybrid models can outperform purely neural approaches in select domains, particularly when understanding and logic are critical. The field is still in its formative stages, with significant exploration required to establish general principles and architectures. Yet the prospect of machines capable of reasoning, generalizing, and even inventing new knowledge captures the imagination of both scientists and policymakers. As AI continues to evolve, the marriage of neural flexibility and symbolic clarity may chart the most promising path toward human-like intelligence.

https://www.kolapse.com/en/?contenido=93179-could-symbolic-ai-transform-human-like-intelligence