Confidence with Apache Kafka starts with visibility. This goes beyond simply being able to monitor your Kafka infrastructure.
Apache Kafka can be a complex black box, requiring monitoring of many services, including Schema Registry, Kafka Connect and real-time data flows.
To keep data operations productive and avoid incidents, teams need complete visibility into the health of their Kafka infrastructure and data flows.
Engineers need self-service data access to monitor and heal the applications they deploy, without constant ticketing and frenzied Slack messages.
What if you had Kafka monitoring tooling that allowed your entire streaming platform to be observed and navigated by everyone in ops, not just the Kafka-literate?
"Lenses has been critical for us making our teams productive with Kafka and having production-ready confidence across hundreds of developers."
Ella Vidra, VP of IT Engineering at Playtika
It’s important to monitor the health of your Kafka deployment to maintain reliable performance in the applications that depend on it. Here are some best practices for Kafka monitoring.
What components should I monitor?
Monitoring these components should help you answer the following questions.
Monitor the performance and view the state of your Kafka projects from a single role-based and secured UI. Spinning up full-service visibility helps you stay on top of your data and application health.
See a global view of all topics and their configuration through your Kafka monitoring tools. Drill down to inspect the data, or zoom into consumer performance, messages and partitions, without a command line.
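Consumer performance monitoring largely comes down to lag: the gap between a partition's latest offset and the offset the consumer group has committed. A minimal sketch of that calculation, using illustrative offset values rather than data from a real cluster:

```python
# Sketch: computing consumer lag per partition.
# Real tooling fetches these offsets from the cluster; the
# numbers below are illustrative placeholders.

def consumer_lag(end_offsets, committed_offsets):
    """Return lag per partition: latest offset minus committed offset."""
    return {
        partition: end_offsets[partition] - committed_offsets.get(partition, 0)
        for partition in end_offsets
    }

end = {0: 1500, 1: 980, 2: 2040}        # latest offset per partition
committed = {0: 1500, 1: 900, 2: 1890}  # consumer group's committed offsets

lag = consumer_lag(end, committed)
# partition 0 is caught up; partitions 1 and 2 are lagging
```

A monitoring UI surfaces exactly this figure per consumer group so that a growing lag, which signals a slow or stalled consumer, is visible at a glance.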
View, create, edit and delete Kafka topics, quotas and ACLs from a single unified UI and API with full role-based access controls and audits. Tenants can then export these as configuration files and port them across environments using GitOps practices.
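In a GitOps workflow, topic definitions live in version control and are applied per environment. A hedged sketch of what exporting a topic definition to a portable file might look like (the field names here are illustrative, not Lenses' actual export schema):

```python
import json

# Sketch: exporting a topic definition as a declarative config file
# that can be committed to Git and applied to another environment.
# Field names are illustrative, not any tool's actual schema.

topic = {
    "name": "payments.events",
    "partitions": 12,
    "replication_factor": 3,
    "configs": {
        "retention.ms": 604800000,   # 7 days
        "cleanup.policy": "delete",
    },
}

def export_topic(topic_def):
    """Serialize a topic definition to a JSON document for Git."""
    return json.dumps(topic_def, indent=2, sort_keys=True)

print(export_topic(topic))
```

Because the file is plain, declarative text, promoting a topic from staging to production becomes a reviewed pull request rather than an ad-hoc CLI change.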
Exploring a universe of events using a simple SQL-like syntax helps Babylon Health find and act on patterns in patient data.
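The idea behind SQL-like querying of streams is that a predicate selects matching events from a topic and a projection picks the fields you care about. A toy Python sketch of that idea (the event shape and field names are invented for illustration; Lenses' actual SQL dialect differs):

```python
# Toy sketch of SQL-like filtering over streaming events.
# Roughly: SELECT patient_id FROM vitals WHERE heart_rate > 120
# Event fields are invented for illustration.

events = [
    {"patient_id": "p1", "heart_rate": 88},
    {"patient_id": "p2", "heart_rate": 131},
    {"patient_id": "p3", "heart_rate": 124},
]

def select(stream, where, project):
    """Filter events by a predicate, then project the chosen fields."""
    return [project(e) for e in stream if where(e)]

flagged = select(events,
                 where=lambda e: e["heart_rate"] > 120,
                 project=lambda e: e["patient_id"])
# flagged == ["p2", "p3"]
```

Expressing the same pattern in a declarative query lets non-specialists search live topics without writing consumer code.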