Building and operating a data platform requires a multitude of different technologies.
It’s easy to pick the latest and greatest technologies. The difficulty is delivering that infrastructure as a single safe and productive experience that allows anyone to build and operate real-time data applications. Our mission is to make this happen.
You focus on selecting the data infrastructure you want. We’ll bind it together and integrate your corporate tools to create a real-time application and data operations portal. There’s no need for deep infrastructure expertise for every single operation: your teams can focus on delivering value with real-time data, building data intensity.
After thousands of engineering hours, we're thrilled to announce release 3.1. As well as laying the foundations for some bigger things to come (more on that later...), we have packed the release with capabilities to help you in the following areas.
You may know us for our association with Apache Kafka.
Many teams that operate Kafka also run Elasticsearch. Like two peas in a pod, the two technologies are always close.
We’ve heard from you that having a single application and data operation portal across both technologies would simplify your operations.
Self-service data discoverability & observability across your organisation is a vital part of promoting DataOps.
Developers require a global view of schemas and metadata across all data technologies to be efficient.
In 3.1, we enable you to create secure and centralised connections into Elasticsearch to look deep inside.
Not only does this give you visibility of the indexes, metadata and entities (including sharding and replication information) in your Elasticsearch environment, but it also provides data observability, letting you explore the data with the same SQL engine that our customers love for Kafka.
As you would expect, the experience between Kafka and Elasticsearch is seamless: data is protected with unified data policies (to anonymise or discover sensitive data) and role-based security backed by namespaces.
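To illustrate what a unified data policy means in practice, here is a minimal Python sketch (with hypothetical field names and masking behaviour, not Lenses' actual implementation): the same anonymisation rule applies to a record whether it was read from a Kafka topic or an Elasticsearch index.

```python
# Minimal sketch of a unified anonymisation policy (hypothetical,
# not Lenses' actual implementation): one policy, applied identically
# to records from any connected data store.

SENSITIVE_FIELDS = {"email", "credit_card"}  # assumed policy definition

def anonymise(record: dict) -> dict:
    """Return a copy of the record with sensitive fields masked."""
    return {
        key: "***REDACTED***" if key in SENSITIVE_FIELDS else value
        for key, value in record.items()
    }

kafka_record = {"user": "alice", "email": "alice@example.com", "amount": 42}
es_document = {"user": "bob", "credit_card": "4111-1111-1111-1111"}

print(anonymise(kafka_record))  # email field masked
print(anonymise(es_document))   # credit_card field masked
```

The point of the sketch is that sensitive data is protected in one place, rather than re-implemented per data store.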
For a developer, this provides numerous productivity benefits.
For example, suppose you have a dataset in Kafka and an analytics team wants you to stream it into Elasticsearch.
You explore the ES indexes and metadata, use SQL to examine the data and work out how to map it to their schema, then create a sink connector and finally validate that the data is mapped correctly in Elasticsearch. You've avoided the endless back and forth between different teams and tools, while at the same time ensuring compliance.
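The sink-connector step of that workflow can be sketched as a configuration payload. This is a hedged illustration: the connector class is the widely used Confluent Elasticsearch sink, but the connector name, topic and URL are placeholders invented for this example.

```python
import json

# Hypothetical Kafka Connect sink configuration for the workflow above:
# stream the "orders" topic into Elasticsearch. Names and URLs are
# placeholders for illustration only.
sink_config = {
    "name": "orders-es-sink",
    "config": {
        "connector.class":
            "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "topics": "orders",
        "connection.url": "http://elasticsearch:9200",
        "key.ignore": "true",     # derive document IDs from topic/partition/offset
        "schema.ignore": "true",  # let Elasticsearch infer the mapping
    },
}

# This JSON payload is what you would submit to the Kafka Connect REST API.
print(json.dumps(sink_config, indent=2))
```

Once the connector is running, you can use the same SQL engine to confirm the documents landed in the index with the mapping the analytics team expects.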
Support for Elasticsearch connections is part of a large pluggable framework that our engineering teams have been busy working on.
Connections, with security, to many different data stores will arrive in the coming releases. These connections will open up more possibilities and help us on our mission to support you whatever underlying infrastructure you deploy. No spoilers, so stay tuned!
Managing Apache Kafka consumers has always been a challenge for operations. Doing so in a safe, self-service, efficient and audited fashion is even more difficult.
We already allowed individual consumer instance offsets to be changed, typically to skip a particular corrupted message.
With Lenses 3.1, you can now easily manage consumer offsets dynamically and at scale.
Move the consumers of an entire group, across all (or some) Kafka topics, to a specific common offset or a common point in time. If you had 100+ instances of your application and wanted to replay messages by resetting the offsets to last Tuesday, you could do it in a couple of clicks.
Like all features in Lenses, this is available via a web interface or our APIs, and fully audited of course.
Whilst building a data platform, we hear that complying with internal IT and Security policies can be one of the biggest challenges.
Teams and projects may not be able to onboard and go live on a data platform until those key requirements are met.
Alerts associated with the platform, flows and the state or health of microservices should be triggered and routed correctly across the different teams within your business.
In a large organisation, different tenants of the data platform (such as product teams) may have different internal solutions and processes to manage alerts.
So for this release, we now allow teams to create different alert channels to popular alert-management solutions such as Slack, PagerDuty, Datadog, Prometheus and AWS CloudWatch.
For example, you could have high consumer group lag alerts for different flows routed to different Slack channels.
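Per-team routing of this kind can be sketched as a small rule engine. The rules, categories and channel names below are hypothetical, invented for illustration, and do not reflect Lenses' actual configuration format.

```python
# Hypothetical alert-routing rules (illustration only): each rule matches
# alerts by attribute and sends them to a team-specific channel.
ROUTING_RULES = [
    {"match": {"category": "consumer-lag", "flow": "payments"},
     "channel": "slack://#payments-alerts"},
    {"match": {"category": "consumer-lag", "flow": "fraud"},
     "channel": "slack://#fraud-alerts"},
    {"match": {"category": "infra"},
     "channel": "pagerduty://platform-oncall"},
]

def route(alert: dict) -> list:
    """Return every channel whose match criteria all appear in the alert."""
    return [
        rule["channel"]
        for rule in ROUTING_RULES
        if all(alert.get(k) == v for k, v in rule["match"].items())
    ]

alert = {"category": "consumer-lag", "flow": "payments", "lag": 120000}
print(route(alert))  # only the payments team's Slack channel
```

Each tenant of the platform can thus keep its own alerting process while the platform team manages the rules centrally.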
A successful data platform grows with the number of teams and users that use it.
Beyond a certain size or criticality of service, data project teams need to comply with stricter corporate identity management policies, ensuring employees have a single identity across all business applications.
Lenses already supports a number of different authentication strategies, including LDAP, AD and Kerberos (and the use of multiple strategies simultaneously). With this release, we have added support for Azure AD SSO over SAML 2.0. Stay tuned for coming support for Okta, OneLogin, Keycloak & Google!
Data platform teams increasingly face the challenge of managing multiple different Kafka distributions across different clouds and data centres.
Lenses 3.1 introduces the first public release of multi-cluster capabilities.
From a single global portal, one can now access and monitor the health of a larger set of Kafka deployments across any environment.
You can download the portal now for free and connect it to your existing Lenses instance from here.
Read the full release notes of these features and more!
Enjoy the release!