Adamos Loizou
The Kafka replicator comparison guide
Why we built Lenses K2K for platform-neutral Kafka-to-Kafka replication.

Let's talk about a problem that might sound simple but gets complex quickly: copying data from one Kafka cluster to another. As our Kafka usage grows, many of us find ourselves managing multiple clusters and needing to share data between them. Or, worse still, needing to share data with an external cluster.
During a London meetup, we explored why this happens, what existing solutions offer, and why we decided to build our own Kafka replicator. Here's what we learned.
We launched Lenses 6.0 (Panoptes) in 2024 with an ambition: build a simple way to work with all your data streams from one place - no matter which Kafka vendors you use.
First off, if you're running Kafka, you might wonder why anyone would need multiple clusters in the first place. We surveyed our meetup audience and found about half were running multiple clusters. Why?
Whether for disaster recovery, migration, safe testing or data sharing, organizations want to confidently move data from Cluster A to Cluster B. Each scenario demands slightly different capabilities from your replication solution, from topic configuration to robust governance.
Let's look at what's already out there. We developed a simple "matrix" to evaluate replication solutions across nine factors, color-coded from great (orange) to okay (peach) to not-so-great (grey).
MirrorMaker 2 (MM2), the open-source standard built into Apache Kafka:
The Good:
It's free and open-source
It's vendor agnostic, working with any Kafka flavor
Exactly-once delivery support
The Not-So-Good:
Complex to set up and use (unless you're a Kafka Connect expert)
No commercial support
Can be difficult to isolate workloads effectively, leading to resource contention and operational challenges
Starting to show its age with limitations in workload management and modern deployment patterns
Limited transformation and routing capabilities. You'd need to write your own SMTs to filter or obfuscate messages, and even custom code to rename topics; the sketch below shows what that involves.
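To make that concrete, here is roughly what writing a custom masking SMT looks like. This is a minimal, illustrative sketch: the class name, the field.name config key, and the Map-only handling are our own choices, and a production transform would also need to cover Struct schemas, tombstones, and record keys.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.transforms.Transformation;

/**
 * Illustrative masking SMT: replaces one field of a schemaless (Map) value
 * with asterisks before the record is written to the target cluster.
 */
public class MaskField<R extends ConnectRecord<R>> implements Transformation<R> {

    private String fieldName;

    @Override
    public void configure(Map<String, ?> configs) {
        fieldName = (String) configs.get("field.name");
    }

    @Override
    @SuppressWarnings("unchecked")
    public R apply(R record) {
        if (!(record.value() instanceof Map)) {
            return record; // pass through anything we don't understand
        }
        Map<String, Object> masked = new HashMap<>((Map<String, Object>) record.value());
        masked.computeIfPresent(fieldName, (k, v) -> "****");
        return record.newRecord(record.topic(), record.kafkaPartition(),
                record.keySchema(), record.key(),
                null, masked, record.timestamp());
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef().define("field.name", ConfigDef.Type.STRING,
                ConfigDef.Importance.HIGH, "Name of the field to mask");
    }

    @Override
    public void close() {
    }
}
```

You would then build this into a JAR, drop it on every Connect worker's plugin path, and wire it into the MM2 configuration by hand, which is exactly the operational overhead the "complex to set up" point above is about.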
Confluent Replicator, Confluent's commercial take on MM2:
The Good:
Provides enterprise support
Works with any Kafka flavor
Some additional features over MM2
The Not-So-Good:
Still built on the same foundations as MM2, inheriting its complexity
Lacks exactly-once semantics
Expensive
Not Kubernetes-native
Limited on transformations and routing
Amazon MSK Replicator, AWS's managed MM2 service:
The Good:
Fully managed SaaS (less operational overhead)
Provides some support through AWS
Offers basic observability and operations through standard AWS tooling
Relatively easy to use through AWS console
The Not-So-Good:
Vendor-specific (target must be AWS MSK)
No exactly-once semantics
Zero transformation capabilities
Zero routing capabilities
Locked to AWS ecosystem
Confluent Cluster Linking, Confluent's alternative native approach:
The Good:
Native replication built into Kafka (not MM2-based)
Excellent at exactly-once replication
Includes consumer offset translation (a sketch follows this section)
Easy to use
Fully managed
Good support
The Not-So-Good:
Vendor-specific (target must be a Confluent cluster)
No transformation capabilities
No filtering capabilities
No routing capabilities
Expensive (consumption-based pricing)
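A note on offset translation, since it's easy to gloss over: committed consumer offsets on the source cluster don't automatically mean anything on a replica, so without translation a failed-over consumer either reprocesses everything or skips data. For a concrete picture of the mechanism, here is how the same idea looks with MM2's open-source client helper, which reads the checkpoint topics MM2 maintains on the target cluster. The address, the "source" cluster alias, and the group name below are placeholders.

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.mirror.RemoteClusterUtils;

public class FailoverOffsets {

    public static void main(String[] args) throws Exception {
        // Consumer config pointing at the *target* cluster, where MM2 stores
        // its checkpoint topics.
        Map<String, Object> targetClusterConfig = new HashMap<>();
        targetClusterConfig.put("bootstrap.servers", "target:9092");

        Map<TopicPartition, OffsetAndMetadata> offsets = RemoteClusterUtils.translateOffsets(
                targetClusterConfig, "source", "my-group", Duration.ofSeconds(30));

        // A failed-over consumer can now seek to (or commit) these offsets on
        // the target cluster and resume where it left off on the source.
        offsets.forEach((tp, om) -> System.out.println(tp + " -> " + om.offset()));
    }
}
```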
Learning from these existing solutions, we wanted to create something that combines the best features while addressing the shortcomings. Here's what makes Lenses K2K different:
Kubernetes-first design: Rather than building on Kafka Connect, K2K ships as a lean container that deploys natively in Kubernetes and scales easily.
Vendor agnostic: Works with any Kafka flavor, giving you freedom to choose your infrastructure, on-prem or cloud.
Smart Schema Registry handling: One unique challenge we solved is copying data between clusters backed by different Schema Registries - making it work even with Avro or Protobuf data (see the wire-format sketch after this list).
Flexible data routing: Want to rename topics during replication? Merge topics? Split topics? K2K handles these cases.
Data filtering and masking: Critical for safe testing and compliance when sharing data across environments or organizations.
Exactly-once delivery: Ensures your data arrives without duplicates (see the transactional sketch after this list).
Control plane integration: Manage everything through an intuitive UI with full observability.
Self-serve: Combined with Lenses IAM, your teams can replicate topics with little more than select, click, and go.
GitOps: K2K is fully declarative in YAML. Make a change, push it through CI/CD, and Lenses will ensure it's reflected in your environments.
Auto-scaling: To save compute costs, the replicator automatically scales up or down with the volume of data that needs to be replicated.
Free community edition: Use it without cost, with optional enterprise support when you need it.
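On the Schema Registry point above: with Confluent's wire format, every Avro or Protobuf message starts with a magic byte and a four-byte schema ID that is only meaningful to the registry that issued it. Copy the bytes verbatim to a cluster backed by a different registry and consumers will resolve the wrong schema, or none at all. Here is a minimal sketch of reading that header; the re-registration against the target registry and the header rewrite a replicator must then perform are elided, and this is not K2K's actual code.

```java
import java.nio.ByteBuffer;

/** Reads the Schema Registry header from a serialized Avro/Protobuf message. */
public class WireFormat {

    public static int schemaId(byte[] payload) {
        ByteBuffer buf = ByteBuffer.wrap(payload);
        if (buf.get() != 0x0) {                 // magic byte of the Confluent wire format
            throw new IllegalArgumentException("Not a Schema Registry encoded message");
        }
        return buf.getInt();                    // 4-byte schema ID, big-endian
    }
}
```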
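And on exactly-once delivery: in Kafka this rests on the transactional consume-produce pattern, where copied records and consumer offsets commit atomically. The sketch below shows the textbook single-cluster version in the plain Java client, copying a placeholder "orders" topic to "orders.copy". It is not K2K's implementation; a cross-cluster replicator additionally has to track source offsets on the target side within the same transaction, but the mechanics are the same.

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class ExactlyOnceCopy {

    public static void main(String[] args) {
        Properties cp = new Properties();
        cp.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        cp.put(ConsumerConfig.GROUP_ID_CONFIG, "copier");
        cp.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");   // ignore aborted data
        cp.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");         // offsets go in the txn
        cp.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        cp.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        Properties pp = new Properties();
        pp.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        pp.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "copier-1");        // enables transactions
        pp.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        pp.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(cp);
             KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(pp)) {
            producer.initTransactions();
            consumer.subscribe(List.of("orders"));                         // placeholder topic
            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(500));
                if (records.isEmpty()) continue;
                producer.beginTransaction();
                for (ConsumerRecord<byte[], byte[]> r : records) {
                    producer.send(new ProducerRecord<>("orders.copy", r.key(), r.value()));
                }
                // Records and offsets commit atomically: a crash-and-retry can
                // never leave duplicates behind.
                producer.sendOffsetsToTransaction(nextOffsets(records), consumer.groupMetadata());
                producer.commitTransaction();
            }
        }
    }

    private static Map<TopicPartition, OffsetAndMetadata> nextOffsets(
            ConsumerRecords<byte[], byte[]> records) {
        Map<TopicPartition, OffsetAndMetadata> next = new HashMap<>();
        for (TopicPartition tp : records.partitions()) {
            List<ConsumerRecord<byte[], byte[]>> batch = records.records(tp);
            next.put(tp, new OffsetAndMetadata(batch.get(batch.size() - 1).offset() + 1));
        }
        return next;
    }
}
```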
There are two options to deploy Lenses K2K:
Standalone container: deployed and managed through your standard CI/CD practices
Through Lenses (from version 6.1): onto your Kubernetes cluster directly or via CI/CD
We are launching a free community edition of Lenses K2K in alpha as a standalone deployment before we release it in Lenses 6.1 this summer.
Want to test it out and give us feedback? Get access to our registry.
We're also considering open-sourcing the codebase. Let us know if that's important to you, and whether you would be interested in contributing to the project.