
KubeCon EU 2026 recap: AI everywhere, but platform challenges remain

7 min read · by the Klutch community


It’s been two weeks since KubeCon + CloudNativeCon EU 2026, and we’ve taken some time to reflect on what we saw, heard, and discussed at the conference. Between booth conversations, talks, and time spent on the expo floor, a few clear themes emerged: some we expected, while others were more revealing about how teams are actually operating today.

AI is driving the agenda AND reshaping infrastructure

Unsurprisingly, AI dominated KubeCon EU.

Booths showcasing AI-related solutions consistently drew the largest crowds, and sessions covering LLMs and data pipelines were packed. One statistic shared during a talk stood out in particular: around 80% of AI workloads are now managed on Kubernetes. Whether or not that number is exact, it reflects a broader shift: Kubernetes is increasingly the platform where teams run not just applications, but complex, stateful, and compute-intensive workloads.

That shift is already changing infrastructure decisions. Teams are moving toward GPU-based environments to support AI workloads, with NVIDIA technologies becoming the de facto standard for training, fine-tuning, and inference. At the same time, interest in vector and graph databases is growing, especially for use cases like retrieval-augmented generation, where LLMs need access to internal company data and documentation.

What’s clear is that AI adoption is no longer theoretical. It’s happening, and it’s introducing new layers of operational complexity.

Cost is becoming the reality check

Alongside the excitement, there’s a growing awareness of cost.

Many teams are experimenting with AI, but the infrastructure requirements add up quickly. The more data AI systems process, the more expensive they become to run. As a result, conversations around cost optimization are becoming just as prominent as conversations about innovation.

This is reflected in the rise of AI-driven observability tools, many of which aim to reduce operational overhead by automatically analyzing logs, summarizing incidents, and even suggesting remediation steps. At the same time, OpenTelemetry continues to emerge as the standard for telemetry collection, as teams look to simplify and consolidate how they manage observability data.

The noticeable shift here is that teams aren’t just asking what they can build with AI; they’re asking what they can afford to run.

Sovereignty is moving fast

Another theme that stood out—particularly given the European context of the event—was digital sovereignty.

This wasn’t just a side conversation. It showed up across talks, co-located events, and many case studies from large European organizations. What’s changed is that sovereignty is no longer theoretical; it’s being actively implemented in production environments.

Organizations in sectors like finance, telecommunications, public infrastructure, and government are increasingly focused on maintaining control over where their data lives and how their platforms are operated. Rather than relying entirely on hyperscalers, many are building their own cloud-native platforms using open-source technologies.

The approaches vary, but the direction is consistent: teams are combining Kubernetes with open-source tooling to create platforms that can run across private infrastructure, public cloud, or hybrid environments—while keeping control over data, compliance, and operations.

In practice, this looks like:

  • running workloads across multiple clouds while maintaining a private cloud footprint
  • standardizing operations through Kubernetes and GitOps
  • reducing provisioning times through automation
  • and designing platforms with regulatory requirements in mind from the start
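The GitOps standardization mentioned above typically takes the shape of a declarative application definition that lives in Git and is continuously reconciled into the cluster. As a minimal sketch using Argo CD (one common choice, not the only one), it might look like this; the repository URL, paths, and names are placeholders:

```yaml
# Illustrative Argo CD Application: Git is the source of truth, and the
# controller continuously reconciles the cluster to match the repo.
# repoURL, path, and namespace values below are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/deployments.git
    targetRevision: main
    path: apps/payments
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true      # remove cluster resources that were deleted from Git
      selfHeal: true   # revert manual drift back to the declared state
```

The same pattern works across private, public, or hybrid environments, which is part of why it pairs well with the sovereignty requirements described above: the declared state stays in a repository the organization controls.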

This shift isn’t driven so much by ideology as by regulatory and operational realities. For many European organizations, sovereignty is becoming a requirement, not a preference. We’re proud to offer sovereignty-friendly on-premises solutions and to be EU-based ourselves.

For platform teams, this introduces another layer of complexity: not just managing infrastructure across environments, but doing so in a way that remains portable, compliant, and under their control.

The push toward self-service is real

Alongside these shifts (AI adoption, rising costs, and an increasing focus on sovereignty), another strong theme across conversations was the move toward self-service.

Organizations are actively trying to reduce dependency on centralized operations teams and give developers more direct access to the resources they need. This isn’t limited to Kubernetes itself. Many teams are looking for ways to manage services across a mix of environments—Kubernetes, virtual machines (VMs), and managed cloud services—through a more unified experience.

What stood out is that this shift isn’t just conceptual anymore. Teams are actively evaluating tools (hello, Klutch!) and approaches that allow them to standardize workflows and integrate third-party components more easily. The broader direction is clear: platform teams are being asked to deliver internal platforms that are flexible, extensible, and developer-friendly.

Database provisioning is still a bottleneck

Despite all the progress in areas like AI and platform engineering, one insight came up again and again in conversations: many teams are still provisioning databases through ticketing systems.

Developers submit requests, operations teams handle provisioning manually, and the process remains slow and difficult to scale. This stands in stark contrast to the expectations teams now have for speed and autonomy.

At the same time, there’s clear interest in modernizing this approach. Teams are exploring running databases in Kubernetes, using managed services, and adopting tools that provide more automation and consistency. But in many cases, the day-to-day reality hasn’t caught up yet—especially if you haven’t explored open-source Klutch.

That fragmentation introduces a new challenge. Even when teams move toward more automated or self-service approaches, they often lack a consistent way to provision and operate data services across these different environments. As a result, complexity shifts rather than disappears.

This is exactly the kind of problem platform teams are now trying to solve.

Where Klutch fits in

Many of the conversations we had at KubeCon reflected this need for a more unified approach.

Klutch is designed to address this gap by acting as a control plane for data services, providing a consistent way to provision and manage them—regardless of where they run. It supports Kubernetes-native services, VM-based data services, and integrations with managed cloud offerings, all through a single, standardized interface.
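In Kubernetes terms, this control-plane pattern usually surfaces as a claim-style custom resource: a developer applies a small manifest describing what they need, and the control plane provisions the actual service behind the scenes. The sketch below is a hypothetical illustration of that pattern only; the API group, kind, and field names are invented for this example and do not reflect Klutch’s actual schema:

```yaml
# Hypothetical claim resource illustrating the control-plane pattern.
# The API group, kind, and all field names here are invented for
# illustration and are not Klutch's actual API.
apiVersion: example.dataservices.io/v1
kind: PostgresqlClaim
metadata:
  name: orders-db
  namespace: team-orders
spec:
  version: "16"
  plan: small            # sizing preset defined by the platform team
  environment: on-prem   # could equally target VMs or a managed cloud service
  backup:
    schedule: daily
```

The developer applies this with `kubectl apply` and never files a ticket; the control plane reconciles the claim into a running database on whatever infrastructure the platform team has configured, which is how self-service and central control coexist.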

This becomes especially relevant in environments where sovereignty matters. Teams need the flexibility to run data services on infrastructure they control—whether that’s private cloud, on-premises, or selected cloud providers—without introducing fragmentation or operational overhead.

The goal is to bring self-service capabilities, automation, and lifecycle management into one place, so developers can request what they need without relying on manual processes, while platform teams retain control, consistency, and compliance across environments.

Clearing up a misconception about Klutch

One interesting takeaway from our time at the booth was a recurring misconception: several visitors assumed that Klutch is only for PostgreSQL.

In reality, Klutch is designed as a control plane for managing data services more broadly. It provides a consistent way to provision and operate services across different environments, whether they run in Kubernetes, on VMs, or via managed cloud offerings like AWS.

The goal is to bring self-service, automation, and standardization to data service management, regardless of where those services live.

Closing out KubeCon EU 2026

KubeCon EU 2026 confirmed that AI is shaping the direction of the cloud-native ecosystem, but it also highlighted the practical challenges that come with it. Infrastructure complexity is increasing, costs are becoming harder to ignore, and platform teams are under pressure to deliver more streamlined, self-service experiences.

At the same time, some of the most fundamental workflows (e.g., database provisioning) still haven’t caught up.

Many teams are finding that adopting the next new technology isn’t their actual challenge; it’s making everything work together in a way that’s scalable, efficient, and usable in day-to-day operations.