
Kafka Migrations Need More Than a Replicator

By Jonas Best & Patrick Polster, March 17, 2026
In this article:
  • 01. The real problem with Kafka migrations
  • 02. What a real migration needs
  • 03. K2K + Lenses: Replication with the full migration workflow
  • 04. See it in action: Application Migration to AWS Express Brokers
  • 05. What's next for K2K


Kafka migrations are one of the riskiest infrastructure projects a platform team can take on. 

Miss a dependency and a downstream app starts reprocessing events it already handled, breaking SLAs and eroding trust with application teams. Migrate without visibility and you risk a major production incident.

The instinct is to reach for a replication tool and call it done. 

But replication is only one piece of the puzzle. And teams that treat it as the whole solution are the ones that end up with war stories.

The real problem with Kafka migrations

When engineers talk about "doing a Kafka migration," what they actually mean is a chain of carefully planned and coordinated steps: mapping all your producers, consumers, and topic dependencies; replicating data across clusters; migrating consumer offsets for applications to resume exactly where they left off; validating that nothing is behind or reprocessing; and switching producers over, all with minimal downtime.

Replication tools like MirrorMaker 2 (MM2) are powerful but notoriously complex to configure correctly, and they don't give you the migration workflow, just the data movement. 
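
To give a flavour of that complexity, here is a minimal sketch of a one-way MM2 flow in connect-mirror-maker.properties. The cluster aliases, broker addresses, and topic pattern are placeholders, and a production setup layers security, naming-policy, and tuning settings on top of this:

    clusters = source, target
    source.bootstrap.servers = source-broker:9092
    target.bootstrap.servers = target-broker:9092

    # Enable the source -> target flow for a set of topics.
    source->target.enabled = true
    source->target.topics = orders.*
    replication.factor = 3

    # Emit checkpoints and translate committed group offsets to the target.
    emit.checkpoints.enabled = true
    sync.group.offsets.enabled = true
    sync.group.offsets.interval.seconds = 60

    # Internal topics MM2 creates for offset syncs, checkpoints, and heartbeats.
    offset-syncs.topic.replication.factor = 3
    checkpoints.topic.replication.factor = 3
    heartbeats.topic.replication.factor = 3

Even with all of this right, MM2 only moves data and offsets; it doesn't tell you when it's safe to cut over.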

MSK Replicator and Confluent Replicator are simpler to operate, but each works only within its own ecosystem. None of these tools answer the harder questions:

What am I actually migrating? Are my consumers healthy? Did the offsets land correctly?

A Kafka migration without those answers is a migration with hidden risk baked in.
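
For illustration, here is a minimal sketch of answering the last two questions by hand with the confluent-kafka Python client; the broker address, topic, and group name are placeholders:

    from confluent_kafka import Consumer, TopicPartition

    # Point this at whichever cluster you are inspecting.
    consumer = Consumer({
        "bootstrap.servers": "target-broker:9092",
        "group.id": "orders-service",   # the group under inspection
        "enable.auto.commit": False,
    })

    # All partitions of the topic, then the group's committed offsets for them.
    meta = consumer.list_topics("orders", timeout=10)
    parts = [TopicPartition("orders", p) for p in meta.topics["orders"].partitions]

    for tp in consumer.committed(parts, timeout=10):
        low, high = consumer.get_watermark_offsets(tp, timeout=10)
        lag = (high - tp.offset) if tp.offset >= 0 else (high - low)
        print(f"partition {tp.partition}: committed={tp.offset} end={high} lag={lag}")

    consumer.close()

Multiply this by every group, every topic, and two clusters at once, mid-migration, and it becomes clear why ad-hoc scripts don't scale to a real cutover.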

What a real migration needs

Beyond replication, a Kafka migration requires tooling for:

1. Context for planning: a full picture of your application lineage, including which apps produce to which topics, which consumer groups are active, what the lag looks like, and what depends on what. Without this, you can't build a migration plan you actually trust.

2. Live observability: side-by-side visibility into both clusters as you migrate, so you can watch consumer groups transition from active on the source to active on the target, in real time. Not after the fact.

3. Consumer offset preservation: this is the one that catches teams out. If you restart an application on the new cluster without migrating its offsets, it either starts from the beginning (reprocessing everything) or from the latest offset (losing events). Neither is acceptable (see the sketch after this list).

4. Rollback capability: the ability to pause, validate, and if needed, failover back to the source cluster without losing state.
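
To make the offset problem concrete, here is a minimal sketch of one common workaround, timestamp-based offset translation, again with the confluent-kafka Python client. Brokers, topic, and group are placeholders, and real tooling must also handle empty partitions, truncated logs, and duplicate timestamps:

    from confluent_kafka import Consumer, TopicPartition

    src = Consumer({"bootstrap.servers": "source-broker:9092",
                    "group.id": "orders-service", "enable.auto.commit": False})
    tgt = Consumer({"bootstrap.servers": "target-broker:9092",
                    "group.id": "orders-service", "enable.auto.commit": False})

    meta = src.list_topics("orders", timeout=10)
    parts = [TopicPartition("orders", p) for p in meta.topics["orders"].partitions]

    translated = []
    for tp in src.committed(parts, timeout=10):
        if tp.offset < 1:
            continue  # the group never committed on this partition
        # Fetch the last record the group processed on the source...
        src.assign([TopicPartition(tp.topic, tp.partition, tp.offset - 1)])
        msg = src.poll(timeout=10)
        _, ts_ms = msg.timestamp()
        # ...and find the first target offset at or after that timestamp
        # (offsets_for_times reads the timestamp from the offset field).
        hit = tgt.offsets_for_times(
            [TopicPartition(tp.topic, tp.partition, ts_ms)], timeout=10)[0]
        translated.append(hit)

    # Commit on the target so the app resumes near where it left off. The
    # boundary record may be re-read: at-least-once behaviour at the seam.
    tgt.commit(offsets=translated, asynchronous=False)
    src.close()
    tgt.close()

This is exactly the bookkeeping a dedicated migration job should perform, validate, and make repeatable for you.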

Today, no vendor-agnostic solution covers all four. Confluent comes closest, but its tooling works only within the Confluent ecosystem and is expensive.

And this isn't just a problem for individual teams; it's a bottleneck for the entire industry, delaying migrations by months, sometimes years.

The Kafka ecosystem has seen real innovation: diskless architectures, new managed services, next-generation brokers. But without a viable migration path, most organisations can't adopt them, and vendors can't reach the existing installed base.

This is the gap that K2K, deeply integrated into Lenses, is built to close. Unsurprisingly, Kafka vendors themselves were among the first to take notice.

K2K + Lenses: Replication with the full migration workflow

K2K started as an ambitious engineering challenge: build a Kubernetes-native replicator that works across any Kafka, whether on-premises, cloud, edge, or any managed flavour in between.

MM2 works everywhere but is complex. MSK Replicator is simple but lives in the AWS ecosystem. Confluent Replicator requires Confluent on both ends. All three are based on the somewhat troublesome Kafka Connect framework. 

K2K was designed to be both compatible and user-friendly: a cloud-native replicator that deploys into existing infrastructure without dragging in new dependencies.

But the real unlock came when we integrated K2K into the Lenses developer experience and built dedicated migration workflows around it, for topic/schema replication, application migration, and everything in between. Replication became one step in a coordinated process, with topology views for planning, live offset dashboards for monitoring, and dedicated migration jobs for consumer offset preservation. Everything in one place to migrate with confidence.

See it in action: Application Migration to AWS Express Brokers

In a recent webinar with AWS, we demoed a full Kafka cluster migration of two banking applications to AWS Express Brokers: a five-step workflow covering topic replication, consumer offset migration, application restart, and producer cutover, with full visibility throughout in Lenses 6.2.

K2K in Lenses enables a smooth migration experience with minimal downtime, no data loss or reprocessing, and full visibility at every step.


What's next for K2K

There's a lot more to come. 

Consumer group migration is already supported. Continuous offset replication (keeping offsets in sync across clusters as you run) is a major feature on the near-term roadmap. 

Everything shown in this post will be available in Lenses 6.2 and K2K 1.3. Talk to us now for an early preview if you’re already planning a migration. 

Or download the 6.1 community edition to experience self-service data replication with K2K now.
