Christina Daskalaki
Kafka is a ubiquitous component of a modern data platform.
It has acted as the buffer, landing zone, and pipeline that integrates your data to drive analytics, or surfaces it, after a few hops, in a business service.
More recently, though, it has become the backbone for new digital services, with consumer-facing applications processing data live off the stream.
As such, Kafka is being adopted by dozens (if not hundreds) of software and data engineering teams in your organization.
What happens when you’re not supporting a few developers, but hundreds of different teams? With this mass appeal come new challenges: demands for new frameworks, different tools for those using Kafka - and pressure to improve developer productivity.
Our latest release responds to this need: enterprise-wide adoption of Kafka with the best possible developer experience. Our engineering team has put in thousands of hours to build greater support for, and easier access to, a wider set of technologies, engineered for performance and scale.
AVRO has been the common serialization format for big data processing, especially for teams developing on JVM frameworks.
But as new engineering teams adopt Kafka, supporting other serialization formats has become important - Protobuf in particular, popularized by its wide language support and by gRPC.
This is why you’ll be pleased that Lenses now fully supports Protobuf managed in a schema registry across the entire Lenses platform.
You can now create and evolve Protobuf schemas just as you could with AVRO, and you can query data and build streaming applications with SQL - for example, an application that converts data from an AVRO producer for a Protobuf-based consumer, as sketched below.
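As a minimal sketch, such a conversion might look something like this in Lenses SQL; the topic names are hypothetical, and the STORE ... AS format clause is an assumption about the exact syntax:

```sql
-- Sketch: read AVRO-serialized events and re-publish them as Protobuf
-- for a Protobuf-based consumer. Topic names are hypothetical.
SET defaults.topic.autocreate=true;

INSERT INTO orders_protobuf
STORE VALUE AS PROTOBUF
SELECT STREAM *
FROM orders_avro;
```

Since the Protobuf schema is managed in the schema registry, a downstream consumer can resolve it just as an AVRO consumer would.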
The Lenses Kafka Topology has been one of our most popular features. It shines a light on what is otherwise a black box, mapping the flow of data across your streaming applications, from custom microservices to stream processing and data integration applications.
It is not uncommon to see hundreds if not thousands of streaming applications connected to your Kafka cluster.
This is a governance minefield!
You need to know where data is going and how it is being processed, as well as how it's associated with your applications, their owners, and their SLAs.
Our Topology was engineered many years ago, when teams had a few dozen streaming microservices; today, our customers have thousands. It was time for a rework.
Firstly, it has had a major facelift - look at that beautiful UX.
But it is not just botox. Behind the scenes, the Kafka Topology has been rebuilt in React from a legacy Angular codebase. It can now scale to render many thousands of applications, with a metadata search bar to help you navigate and pinpoint any application in your estate.
The Lenses streaming SQL engine is designed to help you build streaming applications without needing to understand the nuances of stream processing, and it simplifies deployment over Kubernetes. But we have taken the simplification a step further with a more intuitive user experience for building those applications.
This includes a new consolidated view to deploy both SQL applications and external apps such as custom microservices.
To make you even more productive when defining your SQL, we’ve vastly improved error handling and added a handy panel of copy/paste SQL snippets for common tasks such as re-keying and converting serialization formats; see the sketch below.
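For illustration, a re-keying processor might look something like this sketch; the topic and field names are hypothetical, and the _key alias for setting the record key is an assumption about the exact syntax:

```sql
-- Sketch of a re-keying processor: promote the customer_id value field
-- to the record key so the topic can be partitioned by customer.
-- Topic and field names are hypothetical.
SET defaults.topic.autocreate=true;

INSERT INTO orders_by_customer
SELECT STREAM customer_id AS _key, *
FROM orders;
```

The snippets panel gives you the exact, current syntax for patterns like this, so you can paste and adapt rather than memorize.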
When deploying apps over Kubernetes, you now have more control over parameters (such as memory) allocated to your application, as well as far more visibility into the state and config of the pod.
Behind the scenes, there are performance improvements to how your apps run and a new versioning system that decouples your apps from the version of Lenses you are running (imagine running 5.0 SQL apps with version 5.1 of Lenses).
One of the biggest changes in this release is how we manage configuration and connections to external systems.
Previously, configuring Lenses would involve changes to config files followed by a restart. This involved trial and error and led to frustration.
Configuration can now be managed through APIs, applied dynamically and with proper error handling. Config files are still used to store state for those adopting GitOps, of course.
One of the best applications of these new APIs is a streamlined installation wizard that helps you connect Lenses to your data platform. This includes everything from uploading your TLS certs to configuring Broker JMX collection. For a complex environment, it can reduce the time to set up Lenses from hours to a few minutes.
Once Lenses is up and running, many customers will be especially pleased that they can modify their environment on the go, such as adding a Kafka Connect cluster or configuring JMX collection, without requiring any restarts.
Things have changed since Kafka’s earlier growth spurt.
Beyond the products of Twitter, Uber, Netflix, and LinkedIn, Kafka now underpins the transformation we’re seeing in every industry.
We saw the pressure you were under to leverage Kafka in new and important ways, and we responded.
We hope that Lenses 5.0 energizes and enables any developer across the stack, and across many different businesses, to easily and continuously build amazing digital services.
We are proud to have laid the groundwork for the next phase of Lenses, to stay in step with Kafka as it gains momentum with the community.
As always, we’re looking forward to hearing what you think.