Things we learned at Devoxx UK 2025

By Alex Durham

May 15, 2025

Devoxx UK 2025 kicked off with a memorable display of balloons and a keynote featuring fascinating examples of demoscene art. The conference halls were packed with developers eager to explore everything from quantum computing to the latest in AI development.

Lenses.io at Devoxx 2025

What immediately stood out was how the technical landscape has shifted since previous years. Where Devoxx once featured primarily talks about CI/CD pipelines and JVM languages, this year's agenda was dominated by AI integration, data processing at scale, and event-driven architectures.

And within this changing landscape, one pattern was unmistakable.

Data streaming went mainstream

Remember when technologies like Apache Kafka were that specialized domain only your data platform team handled? Those days are over. Event streaming has fully embedded itself in modern application development, and the questions have evolved from "how do we set this up?" to "how do we stop it from becoming our organization's next spaghetti mess?"

About 70% of developers we spoke with either worked with Apache Kafka regularly or understood its core concepts. Here is Victor deep in conversation with one of them:

Lenses.io at Devoxx 2025

Two years ago, that number would have been closer to 40%. But with wider adoption comes new challenges. Instead of worrying about basic connectivity, developers now struggle with visibility, governance, and making sense of event flows across more complex systems. You know that the data you currently need is flowing through this pipeline, somewhere…

Here's what we learned from the talks that caught our attention, focusing on developer tooling, data streaming, and cutting through the AI hype.

Find your way through the code (and data) jungle 

Peter Werry cut to the chase in "AI Developer Tools Are Focused on the Wrong Problem." While every AI tool vendor claims to make devs 10x more productive through magical code generation, Werry pointed out what we all know but rarely say: writing code isn't actually the hard part.

The real time-sink is figuring out how existing codebases work before you can even think about what to write. As Werry explained, the average developer loses 1-2 hours daily searching through GitHub, Confluence, Slack, and Jira trying to understand why things work the way they do.

At Lenses.io, we've seen the same pattern with developers working with data streams. They're burning hours trying to understand what data exists, where it came from, where it's going, how it's structured, who owns it, and how they can use it.

https://www.youtube.com/watch?v=WbDcRHEe76s

Your brain has limited RAM

Speaking of cognitive overload, Bas de Groot's talk on "Balancing Cognitive Load" hit where it hurts. In today's cloud-native ecosystem, we're expecting developers to juggle Kubernetes, observability tools, CI/CD pipelines, and data infrastructure, all while writing business logic.

De Groot cited recent research from the DevEx working group showing that excessive cognitive load doesn't just slow developers down; it actively kills creativity and problem-solving capacity. But fascinatingly, too little cognitive load leads to overengineering (we've all been there, creating complex architectures for simple problems just because we were bored).

This perfectly explains why many data platforms fail: they either overwhelm users with complexity or underestimate their capabilities. As ThoughtWorks recently noted in their Technology Radar, developer productivity tools must respect engineers' intelligence while removing unnecessary friction.

Speaking of which, here's Vaclav, one of our front-end devs, scratching his head over why they missed the script at the Java for front-end booth.

Lenses.io at Devoxx 2025

Find your meme twin with embeddings and vector databases

Guy Royse's session on vector databases provided a perfect mix of technical depth and entertainment value. Using internet memes as his dataset, Royse showed how unstructured data like images can be transformed into embeddings (high-dimensional vectors) for similarity search.

The practical implementation – a system that finds your meme character doppelgänger – demystified a core concept behind many modern AI systems. 
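The nearest-neighbour lookup at the heart of a system like this can be sketched in a few lines. Below is a minimal, self-contained illustration (not Royse's actual code); the meme names, the 4-dimensional vectors, and the query are made-up stand-ins for real image embeddings, which typically have hundreds of dimensions and come from a trained model:

```python
import math

def cosine_similarity(a, b):
    """Angle-based similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy 4-dimensional "embeddings" -- hypothetical values for illustration only
memes = {
    "distracted_boyfriend": [0.9, 0.1, 0.3, 0.0],
    "success_kid":          [0.1, 0.8, 0.2, 0.5],
    "doge":                 [0.2, 0.7, 0.1, 0.6],
}

query = [0.88, 0.12, 0.32, 0.02]  # embedding of the uploaded photo
best = max(memes, key=lambda name: cosine_similarity(query, memes[name]))
print(best)  # the nearest neighbour is your "meme twin"
```

A vector database does exactly this, but with approximate indexes (e.g. HNSW) so the search stays fast across millions of embeddings instead of a three-entry dict.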

And we couldn't help but notice that Royse had already found his meme twin: our very own Drew Oetzel.

Lenses.io at Devoxx 2025

AI: packed rooms and practical applications

AI was clearly the hottest topic at Devoxx UK – if a talk was on AI, the room was sure to be packed. There were some great demonstrations on getting LLMs running locally and setting up chatbots using technologies like LangChain4j and Quarkus.

The concept of agentic AI generated particular excitement, with discussions around how agents might perform tasks autonomously. For many attendees, the interest was in how AI can assist the development process, particularly in generating code.

However, there was notable wariness throughout these sessions. Many talks paralleled our own findings – the need to avoid AI hallucinations and the importance of breaking problems into smaller, verifiable steps instead of expecting AI to do all your work for you.

Josh Reini's session on "Build and Evaluate Trustworthy Data Agents" tackled this head-on, demonstrating how to connect data sources to AI systems with a focus on evaluation and verification. His approach using Snowflake AI observability highlighted that trust requires visibility.

With AI hallucinations still a constant concern (OpenAI reported approximately 33% hallucination rates even in their most advanced models), Reini's approach to building trust through visibility matters. 

The platform engineering reality check 

Paula Kennedy gave us the reality check we needed in "Platform Engineering: Evolution, Trends, and Future Impact." She traced how we've gone from the DevOps revolution to "DevOps is dead" to internal platforms and now platform engineering… all in just over a decade.

Her most compelling point was that successful platform teams focus on reducing cognitive load, not just providing tools. According to Kennedy, the most mature internal platforms let developers ship code faster, more safely, and with less maintenance burden, without having to become experts in every underlying technology.

Netflix’s stream processing evolution

Sujay Jain's talk on "From Data Movement to Data Insights: Evolving Data Mesh at Netflix with Streaming SQL" was a masterclass in simplifying complexity.

Netflix's journey from using low-level Flink DataStream APIs to embracing SQL as the interface for stream processing shows what happens when you prioritize developer experience. Their Data Mesh SQL Processor now allows teams to transform streaming data without becoming Flink experts.

Taming chaotic data streams

Practical sessions showed how Apache Kafka, Flink, and Iceberg can transform unpredictable data floods into neatly organized datasets. One interesting perspective focused on writing to Apache Iceberg tables using Snowflake, demonstrating how streaming data can connect with modern analytics platforms.

Our developer advocate Drew Oetzel shared the story behind K2K, the new Lenses universal Kafka replicator. After years of watching customers struggle with complex, vendor-locked replication solutions, we decided to build something that just works, regardless of which Kafka flavor you're using.

Lenses.io Kafka replication at Devoxx 2025

K2K tackles the nightmare scenario of copying data between clusters with different Schema Registries, automatically translating schema IDs during replication. It also lets you rename, merge, or split topics during replication, and filter or mask sensitive data to meet compliance requirements.
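To see why schema-ID translation is needed at all: records serialized with a Confluent-compatible Schema Registry carry a framing header (a zero magic byte followed by a 4-byte big-endian schema ID), and the same schema usually gets a different ID in the destination cluster's registry. The sketch below illustrates that rewrite in general terms; it is not K2K's implementation, and the ID mapping is a hypothetical example:

```python
import struct

MAGIC_BYTE = 0  # Confluent framing: 1 magic byte + 4-byte big-endian schema ID

def translate_schema_id(value: bytes, id_map: dict) -> bytes:
    """Rewrite the embedded schema ID so the replicated record resolves
    against the destination cluster's Schema Registry (illustrative only)."""
    if len(value) < 5 or value[0] != MAGIC_BYTE:
        return value  # not Schema Registry framed; pass through untouched
    (source_id,) = struct.unpack_from(">I", value, 1)
    target_id = id_map[source_id]  # hypothetical source -> destination mapping
    return bytes([MAGIC_BYTE]) + struct.pack(">I", target_id) + value[5:]

# A record registered as schema 42 at the source, where the same schema
# happens to have ID 7 in the destination registry:
record = bytes([MAGIC_BYTE]) + struct.pack(">I", 42) + b"\x02\x06foo"
translated = translate_schema_id(record, {42: 7})
```

Tools like MirrorMaker copy these bytes verbatim, which is exactly why consumers on the destination cluster can end up deserializing against the wrong schema.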

The tool exemplifies our "less manual work, more getting stuff done" philosophy with its Kubernetes-native design and intuitive UI. And there's an alpha version to try out, for everyone who's tired of wrestling with MirrorMaker configurations.

What does it all mean?

A clear pattern emerged from Devoxx UK 2025: the industry is finally acknowledging that developer experience matters deeply, especially in data engineering. The days of "just deal with that complexity" are over.

Here's what we're taking away:

  • Context is everything: Whether it's understanding code or data, the hunt for missing context is the biggest productivity killer. Tools that provide context (not just code) win.

  • SQL is having its streaming moment: The familiar power of SQL is transforming stream processing, making it accessible without requiring specialized expertise.

  • Cognitive load is the new performance metric: The best tools and platforms aren't only technically capable, they're designed to minimize mental overhead.

  • Trust requires visibility: Especially with AI, systems need to go beyond delivering results to show their work.

A huge thank you to the Devoxx UK team for a flawlessly organized event – from the excellent venue to the diverse lineup of talks. Devoxx continues to draw an impressively varied crowd, and we loved the enthusiasm from attendees eager to learn new technologies, especially those not yet working with Kafka. (As for the party... that probably deserves its own post.)


Dive deeper: