Web Summit 2025

Agentic AI Sparks Global Debate: Progress or Precaution?

November 16, 2025 05:27 AM
This dancing robot from China was a smash hit at the conference

At this year’s Web Summit, one of the world’s largest technology gatherings, artificial intelligence wasn’t just another topic on the agenda — it dominated the conversation. In the vast pavilions filled with start-ups, developers, corporate leaders and curious onlookers, a new phrase seemed to echo endlessly: agentic AI.

Some of these systems took the form of wearable devices designed to anticipate your needs. Others lived as software embedded in business workflows, capable of booking flights, managing logistics or handling customer service queries autonomously. More than 20 panels explored their promise and perils.

To the industry, agentic AI — systems capable of performing specific tasks independently — is the new frontier. To many observers, it is also the next big risk.

From Turing to Siri: Agentic AI’s long history

While “agentic AI” may sound like a freshly minted buzzword, the idea isn’t new. Babak Hodjat, the chief AI officer at Cognizant and one of the key architects behind Apple’s Siri, points out that the concept dates back decades.

“Back then, the fact that Siri itself was multi-agentic was a detail that we didn't even talk about,” Hodjat said from Lisbon. “Historically, the first person who talked about something like an agent was Alan Turing.”

Turing’s early theories imagined machines capable not only of answering questions, but of making decisions. Today’s agentic systems expand that vision by taking action in the real world — altering data, initiating transactions, or modifying digital environments without human instruction.

Risks amplified by autonomy

This new level of independence brings with it heightened risks.

According to the IBM Responsible Technology Board’s 2025 report, agentic AI systems can unintentionally modify datasets, introducing or amplifying biases. If such changes go undetected, the results could be “irreversible” and “difficult to correct at scale.”

Experts fear scenarios where:

  • An AI agent rebalances financial data incorrectly

  • A customer-service bot begins issuing inappropriate refunds

  • A logistics agent misroutes critical medical supplies

  • An automated system changes public-facing information without oversight

All are outcomes that, once set in motion, could ripple through real-world systems.

But Hodjat argues the danger comes less from the systems themselves and more from human behaviour.

“People are over-trusting AI,” he warns. “They take responses at face value without digging in… It is incumbent upon all of us to learn the boundaries — where we can trust these systems and where we cannot — and educate not just ourselves, but also our children.”

His message echoes concerns long voiced in Europe, where public scepticism toward AI adoption has been notably higher than in the US or Asia.

Too cautious — or not cautious enough?

The debate over AI’s future often splits into two camps: one fearing runaway technology, the other fearing stagnation if innovation slows.

This tension has intensified in Europe following the enforcement of the EU AI Act, the world’s first major regulatory framework governing artificial intelligence.

Under the Act, companies face strict rules on how AI may be trained, deployed and monitored — particularly for systems considered “high risk.” In the UK, where AI companies are still governed primarily by GDPR and a patchwork of existing regulations, the long-term approach remains uncertain.

Jarek Kutylowski, CEO of German language technology leader DeepL, believes that Europe may be leaning too far toward caution.

“Looking at the apparent risks is easy,” Kutylowski says. “But the risks of missing out — of not adopting the technology fast enough — are probably the bigger threat.”

He argues that Europe risks losing economic ground to the US and China, where AI integration into business and government is moving at a far more aggressive pace.

“You won't see it until we start falling behind,” he warns. “Until our economies cannot capitalise on the productivity gains that other parts of the world will see.”

Tech journalist Omer Faruk Naim said, “After attending the sessions and speaking with innovators on the ground, one thing is clear: the conversation around AI is no longer about curiosity — it’s about urgency. What I witnessed in Lisbon is a global tech community pushing forward at a pace that regulations alone cannot match.”

Naim also said, “Agentic AI will redefine how we work, create, and interact, but it also demands maturity from us as users and policymakers. Europe’s caution is understandable, even responsible in some respects, but if it turns into hesitation, we risk losing the competitive edge that drives meaningful progress. The real challenge now is not choosing between innovation and safety, but learning how to pursue both with equal seriousness. That balance — not fear — should shape our future.”

The global divide widens

The AI race is not simply about building smarter tools — it is increasingly tied to national competitiveness, digital sovereignty and geopolitical influence.

In the US, tech giants are pouring billions into advanced AI models and agentic systems. China is accelerating its own domestic AI ecosystem with state backing. Meanwhile, Europe is moving more cautiously, prioritising privacy, safety and ethics.

Supporters of the EU’s approach say these guardrails prevent exploitative or discriminatory AI applications. Critics argue they may stifle innovation before it can fully develop.

A future that cannot be paused

For Kutylowski, attempts to slow AI’s advance are not only ineffective — they are unrealistic.

“I do not believe technological progress can be stopped in any way,” he says. “It is more a question of how we pragmatically embrace what is coming ahead.”

That pragmatic embrace, however, remains deeply contentious.

As AI systems move from tools that support human decisions to agents capable of making them, societies must navigate a delicate balance between protection and progress.

The paradox of the moment

We are living through a strange paradox:

  • Many people rely on AI more than ever — trusting chatbots, using automated assistants, and handing off tasks to invisible algorithms.

  • Yet governments, especially in Europe, are imposing increasing restrictions on how AI can behave.

Are we too reliant? Perhaps. Too cautious? Possibly.

The answer may lie somewhere between fear and optimism — between an embrace of life-changing innovation and a sober awareness of the risks.

The world may not agree on the pace of AI development, but one thing is clear: the technology will not wait for us to decide.