Confluent Current 2025 highlights

By Alex Durham

May 21, 2025

Current 2025 featured two days of engineers figuring out how streaming tech needs to evolve in an AI-driven world. Gone are the days when talks focused on basic Kafka setup. This year, everyone was tackling complex integrations, developer happiness, and practical AI implementation.

Still, the event drew a range of people, with plenty of new faces stopping by the Lenses.io booth – clear evidence that Kafka and data streaming continue to attract newcomers. Our conversations about developer experience and our latest Kafka AI agents reinforced that while streaming technology is maturing, there's still an exciting influx of new learners and adopters curious about what's possible.

Confluent Current 2025 - Lenses.io team

Batch vs. Streaming? Same arguments, new motivation

The opening keynote brought two worlds together: the wall between batch and streaming is crumbling, and AI is the motivation. Jay Kreps explained it clearly: "UI-centric apps are giving way to data-intensive apps... meaning software is now running parts of the business, not people."

The old mental model – batch for state, streaming for events – is merging into something more practical. Kreps used the census example: "Census data is hard coded as a batch process that runs every 10 years. It doesn't match the current state of things." Confluent's answer is Tableflow, bringing together these approaches by converting Kafka topics into Apache Iceberg or Delta Lake tables. This continuously updates analytical systems with operational data, while maintaining full historical context. 
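Tableflow itself is a managed Confluent feature, but the underlying pattern – continuously materializing a Kafka topic as an Iceberg table – can be sketched with open source pieces. Below is a minimal PyFlink sketch under assumed names: a JSON `orders` topic, a local broker, and a Hadoop-catalog Iceberg warehouse. None of it reflects Tableflow's actual mechanics.

```python
# Minimal sketch of the Kafka-topic-to-Iceberg pattern that Tableflow
# automates. Topic, broker, and warehouse settings are illustrative.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Expose the operational Kafka topic as a dynamic table.
t_env.execute_sql("""
    CREATE TABLE orders_stream (
        order_id STRING,
        amount   DECIMAL(10, 2),
        ts       TIMESTAMP(3)
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders',
        'properties.bootstrap.servers' = 'localhost:9092',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")

# Declare an Iceberg table that analytical engines can query directly.
t_env.execute_sql("""
    CREATE TABLE orders_iceberg (
        order_id STRING,
        amount   DECIMAL(10, 2),
        ts       TIMESTAMP(3)
    ) WITH (
        'connector' = 'iceberg',
        'catalog-name' = 'demo',
        'catalog-type' = 'hadoop',
        'warehouse' = 'file:///tmp/iceberg-warehouse'
    )
""")

# Continuously feed operational events into the analytical table.
t_env.execute_sql("INSERT INTO orders_iceberg SELECT * FROM orders_stream")
```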

AI + Streaming

RAG is having a moment

If there was one pattern that dominated discussions, it was Retrieval Augmented Generation (RAG) for streaming vector data. The conference featured several sessions showing how companies are implementing this in their production environments.

A standout talk by StarTree introduced a real-time RAG architecture (originally built by Uber) that enables analysis on fresh vector embeddings. Traditional RAG systems struggle to keep context fresh and retrieval latency low; this architecture addresses that fundamental challenge.
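The ingestion side of that pattern reduces to a simple loop. Here's a toy Python sketch (our illustration, not StarTree's or Uber's architecture): consume events as they arrive, embed them, and upsert into a vector index keyed by document ID so retrieval always sees the latest version. The topic name, `embed()` function, and in-memory index are placeholders.

```python
# Toy streaming-RAG ingestion loop: consume fresh events, embed them, and
# upsert into a vector index so retrieved context never goes stale.
import json
from confluent_kafka import Consumer

# Stand-in for a real vector store: doc id -> embedding, so re-ingested
# events overwrite stale vectors rather than piling up next to them.
vector_index: dict[str, list[float]] = {}

def embed(text: str) -> list[float]:
    """Placeholder: call your embedding model of choice here."""
    raise NotImplementedError

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # illustrative
    "group.id": "rag-ingest",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["support-tickets"])     # illustrative topic

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    event = json.loads(msg.value())
    # Upsert keyed by id: retrieval always sees the freshest version.
    vector_index[event["id"]] = embed(event["text"])
```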

In the finance sector, Alpian demonstrated how they're building AI applications while maintaining regulatory compliance. Their approach emphasized that "context needs to be fresh, otherwise data isn't relevant" – achieved through Kafka-based streaming architectures.

Stream Processing as AI Agents

Airy shared how Apache Flink jobs can serve as AI agents: "When the AI agent makes a call to the foundation model, it can rely on having the perfect context available at low latency."

What makes this approach significant is the accountability aspect: "For the first time, you have accountability that you can not only observe what decision was made, but also what reasoning led to that decision." This addresses a critical gap in most AI systems.

They demonstrated AI generating complex Flink SQL to detect fraud patterns (such as a series of small transactions followed by a large one), as well as a natural language interface that lets business users ask questions like "What's our revenue today in stores?" and get immediate answers without SQL expertise.
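For a flavor of what such a generated query can look like, here's a hand-written sketch (ours, not Airy's output) using Flink SQL's MATCH_RECOGNIZE to flag a run of small transactions followed by a large one. The table schema, thresholds, and time window are invented.

```python
# Hand-written sketch of a Flink SQL pattern query for "several small
# transactions followed by a large one". Assumes a `transactions` table
# (account_id, amount, ts) with ts as a time attribute is registered.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

FRAUD_SQL = """
SELECT *
FROM transactions
    MATCH_RECOGNIZE (
        PARTITION BY account_id
        ORDER BY ts
        MEASURES
            FIRST(SMALL.ts)  AS first_small_ts,
            LAST(BIG.amount) AS large_amount
        ONE ROW PER MATCH
        AFTER MATCH SKIP PAST LAST ROW
        PATTERN (SMALL{3,} BIG) WITHIN INTERVAL '10' MINUTE
        DEFINE
            SMALL AS SMALL.amount <   10.00,
            BIG   AS BIG.amount   > 1000.00
    )
"""

t_env.execute_sql(FRAUD_SQL).print()
```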

Speaking of agents, our booth attracted curious crowds with our new robot team members: Streamy, who demoed our recently announced Lenses AI agents to unsuspecting passersby, and Iris, who offered wildly inaccurate but flattering predictions about attendees’ ages.

No grander lesson here, other than that robots spark joy.

Confluent Current 2025 - Robots

AI and developer productivity

How much control should you have?

ShadowTraffic's Michael Drogalis gave a session on "Kafka productivity tools in the age of AI", which cut through the hype to address pressing developer concerns. The foundation models themselves keep changing; things are moving so fast that it's a confusing time to be an engineer.

Drogalis challenged the idea that AI will magically make developers more productive, framing it as a question of "interface vs. implementation." How much control should you have as a developer? He placed LLM tools on a spectrum from "manual (auto complete)" to "copilot (assisted - defer control)" to "agent (write a whole module - takes over control)."

He reported a positive experience with the copilot-style approach: such tools are "congruent with a lot of what we're building" because they "don't implement, but give you configuration to check. Lets you control the output. Review it, share it, modify, fork it."

Self-service data during steep Kafka adoption

Adidas shared an honest account of their journey with Kafka adoption. Their central team became a bottleneck, with requests piling up and resolution times extending from hours to days. Manual processes led to mistakes and operational challenges.

Their solution was a vendor-agnostic self-service platform that delegated topic and schema management to stakeholders. The results were impressive – resolution times were cut "from days to seconds." Their AsyncAPI documentation model enabled better discovery and reduced duplication across teams.

OpenAI's talk on "Taming the Kafka Chaos" complemented this perfectly, revealing how they're simplifying Kafka consumption for their engineering teams, while their session on "Changing engines mid-flight: Kafka migrations" covered keeping systems running during infrastructure changes.

It was an honest conversation in which they also pointed to the trade-offs they needed to make. OpenAI combined different tools, including the open source uForwarder from Uber, though that solution felt somewhat specific to their needs.

Data democracy: Stream processing for the masses with data mesh

Netflix demonstrated their approach to democratization with their Data Mesh platform. The challenge they addressed was clear: how to make stream processing available to people who don't want to become stream processing experts, using data mesh principles.

Their SQL processor provides immediate feedback as users write queries, automatic schema inference, and interactive testing with sample data. The platform has been remarkably successful – over 1,200 SQL processors created within a year, processing 100 million events per second across 5,000+ pipelines.

As the speaker noted: "When we were building the SQL processor, we had no idea it would get so popular so quickly." Their experience with SQL highlights the value of making powerful technology more accessible.
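To make "accessible" concrete, here's a hypothetical example of the kind of short, declarative streaming SQL such a platform lets a product team write (ours, not a query from the talk), with schema inference, deployment, and scaling handled behind the scenes.

```python
# Hypothetical illustration of a self-service streaming SQL job: a few
# declarative lines counting playback errors per device type per minute.
# Table and column names are invented.
PLAYBACK_ERRORS_SQL = """
SELECT
    device_type,
    TUMBLE_START(event_time, INTERVAL '1' MINUTE) AS window_start,
    COUNT(*) AS errors
FROM playback_events
WHERE event_type = 'playback_error'
GROUP BY device_type, TUMBLE(event_time, INTERVAL '1' MINUTE)
"""
```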

Enterprise-grade streaming

On-prem still on the roadmap

Confluent revealed that "Confluent 8 is going to be their biggest ON PREM release ever," showing they're still investing in customers who can't go all-in on cloud.

Their cross-cloud cluster linking now works across AWS and Azure (with GCP coming next), letting you build truly resilient multi-cloud streaming setups around Confluent Cloud. Mirroring Kafka clusters for resilience is something we're working on with Lenses K2K, our Kafka to Kafka replicator, and it was great to see confirmation that this pattern is proving popular with larger companies. Confluent is also pushing "governance" through shift-left principles, bringing streaming closer to the source with Flink.
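At its simplest, the mirroring pattern is a consume-and-reproduce loop. The sketch below is a deliberately naive Python version of that idea – not how cluster linking or K2K actually work; real replicators also handle offset translation, schema sync, ACLs, and failover. Cluster addresses and the topic name are illustrative.

```python
# Naive Kafka-to-Kafka mirroring sketch: read from a source cluster,
# re-produce to a target, committing only after the write is handed off.
from confluent_kafka import Consumer, Producer

source = Consumer({
    "bootstrap.servers": "source-cluster:9092",  # illustrative
    "group.id": "mirror",
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,
})
target = Producer({"bootstrap.servers": "target-cluster:9092"})

source.subscribe(["payments"])

while True:
    msg = source.poll(1.0)
    if msg is None or msg.error():
        continue
    # Preserve key and payload so partitioning semantics carry over.
    target.produce("payments", key=msg.key(), value=msg.value())
    target.poll(0)  # serve delivery callbacks
    source.commit(message=msg, asynchronous=False)
```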

E.ON shared a real-world example of using these capabilities to "source renewable energy at the right price for the right contracts" and balance the power grid – showing how mature streaming platforms are running critical infrastructure today.

When Apache Kafka goes sideways

One of the most valuable talks was a post-mortem of a Kafka incident brought to us by Adidas. What started as a routine certificate renewal quickly became problematic, with brokers blocking 80% of connections – a failure that showed up in the "acceptor" metric, a term unfamiliar to many Kafka operators.

Adidas’s willingness to share their experience – "We're sharing our mistakes so you don't have to make them" – provided genuine value. Their insights about connection handling, circuit breaking, and preparation strategies offered practical guidance for teams running Kafka at scale.
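As one concrete illustration of the circuit-breaking idea (our sketch, not Adidas's code): when a broker starts refusing connections or backing up, clients should shed load and cool down rather than pile on. All thresholds here are illustrative.

```python
# Minimal client-side circuit breaker around a Kafka producer: stop
# hammering a struggling broker, back off, then probe again.
import time

class CircuitBreaker:
    def __init__(self, max_failures: int = 5, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at: float | None = None

    def allow(self) -> bool:
        # Half-open: allow a single probe once the cool-down has elapsed.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return False
            self.opened_at = None
        return True

    def record_success(self) -> None:
        self.failures = 0

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures >= self.max_failures:
            self.opened_at = time.monotonic()  # trip the breaker

def send(producer, breaker: CircuitBreaker, topic: str, value: bytes) -> bool:
    if not breaker.allow():
        return False  # shed load instead of piling onto a struggling broker
    try:
        producer.produce(topic, value=value)
        breaker.record_success()
        return True
    except BufferError:  # local queue full: broker likely not keeping up
        breaker.record_failure()
        return False
```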

The event wasn't all technical talks, of course. The party offered a different kind of "streaming" experience, with VR activities that sent our team on an altogether wonkier Kafka journey. Alvaro was particularly mesmerized by the VR rollercoaster.

Confluent Current 2025 VR

So what’s the deal? 

Confluent Current 2025 highlights revealed five key trends:

  • Streaming + AI integration is becoming standard, particularly with RAG patterns

  • Developer experience is gaining importance as organizations focus on usability

  • Self-service platforms are replacing gatekeepers at companies like Adidas and Netflix

  • Batch vs. streaming is evolving into a unified approach

  • Operational excellence is essential as streaming becomes mission-critical

What's interesting is the contrast between forward-looking innovation and fundamental challenges. While Current featured impressively advanced topics about integrating streams with AI, the most valuable sessions addressed basic questions: how to drive adoption, how to make powerful technology accessible, and why we're building this software in the first place.

Companies like Netflix and Adidas demonstrated that meaningful innovation often involves taking complex technology and making it approachable. That's the real achievement.

As streaming and AI continue to converge, we need to balance exploring new possibilities with mastering the fundamentals that enable widespread adoption. Confluent Current 2025 showed that streaming is expanding beyond specialists to become essential infrastructure for everyone – whether seizing the day with AI, or biding their time.


Next, see what we announced at Current 2025

Introducing our Lenses AI agent for SRE, built to lighten the load of heavy Kafka troubleshooting tasks such as fixing unresponsive Kafka consumers: