
Using Lenses to scale SQL processors in Kubernetes

By Andrew Stevenson, January 10, 2018
In this article:
  • Kubernetes
  • Configuration is code

In a previous post we showed how to scale out Lenses SQL processors with Kafka Connect. Connect is one of three execution modes for Lenses SQL processors: the others are in-process, intended mainly for developers, and Kubernetes, the subject of this post.



Kubernetes

Kubernetes is quickly becoming the default choice for container orchestration. Originally developed at Google, it provides a rock-solid platform for streaming applications and microservices in general. We have been working with Kubernetes successfully with several clients, and Lenses v1.1 is now the first “Kubernetes native” streaming application platform! I’ve also talked here about how we have used Helm, a Kubernetes package manager, to create repeatable deployments for Kafka Connect.

Containers allow developers to quickly compose applications and make them portable; Kubernetes manages these containers, making sure your desired state is achieved. For example, we can use the Deployment resource and set the replicas (our desired state) to 3, i.e. we want three instances running. Kubernetes will handle this for us.
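The desired-state idea above can be sketched as a minimal Deployment manifest. This is an illustrative fragment only; the resource name and container image are made up, not taken from Lenses:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sql-processor            # illustrative name
spec:
  replicas: 3                    # desired state: three instances running
  selector:
    matchLabels:
      app: sql-processor
  template:
    metadata:
      labels:
        app: sql-processor
    spec:
      containers:
        - name: processor
          image: example/sql-processor:latest   # placeholder image
```

If a pod dies, Kubernetes notices the actual state (2 replicas) no longer matches the desired state (3) and starts a replacement.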

Configuration is code

That’s great, but for Kafka I would still need to write my Java application, compile it, test it, and build my Docker image. Using Lenses and Kubernetes in combination, we can deploy and manage SQL processors on the data flowing through Kafka with ease. If we see a consumer group struggling, we can scale up: Kubernetes handles the deployment for us, and Kafka’s consumer group semantics rebalance the load across the consumers. Magic.
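To see why scaling up helps, here is a toy sketch (not Lenses code) of what a rebalance does: when a new consumer instance joins the group, the topic’s partitions are redistributed across the members. The round-robin strategy below is a simplification of Kafka’s real assignment logic, and the runner names are hypothetical:

```python
def assign_partitions(partitions, consumers):
    """Spread topic partitions across consumer instances round-robin,
    roughly as Kafka's group coordinator does on a rebalance."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

partitions = list(range(6))
# One processor instance owns every partition...
print(assign_partitions(partitions, ["runner-0"]))
# ...scale the deployment to three replicas and the load is shared:
print(assign_partitions(partitions, ["runner-0", "runner-1", "runner-2"]))
```

Scaling a Deployment from one replica to three therefore cuts each instance’s share of partitions, with no application changes.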

Lenses SQL Processors and Connectors are configuration, driven by SQL.


Data pipelines with NO coding.
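As an illustration, a processor is just a SQL statement that continuously reads one topic and writes to another. The topic and field names below are hypothetical, and the exact Lenses SQL syntax may vary between versions:

```sql
-- Illustrative only: topic and field names are made up
INSERT INTO payments_gbp
SELECT amount, currency, customer_id
FROM payments
WHERE currency = 'GBP'
```

That single statement is the whole “application”; scaling it is a matter of changing the runner count, not rebuilding a JAR.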

At Lenses.io we are excited by Kubernetes and the potential it offers to make deployment and management of streaming technologies easier. It’s our motto: make streaming easy.

Check out the video below for a demo of deploying processors from outside Lenses, and of creating and scaling them from within. Naturally, Lenses has APIs backing the frontend, so all of this can be done via a continuous integration and deployment pipeline.

If you are paying attention to the video and not distracted by the cool visualization and music, you will see the global topology view at the end. Ever wondered what your organization’s data flow looks like? Where is your data going? GDPR? Is this my desired state? More on this in future posts.

Additional Resources

Find out more about Lenses and Lenses SQL processors in the Lenses Documentation.

Download Lenses now from the Downloads page.

Relevant Blogs

  • How to explore data in Kafka topics with Lenses - part 1
  • Kafka stream processing via SQL - part 2
  • Apache Kafka Streaming, count on Lenses SQL. Quick and easy way to perform count aggregates.
  • Describe and execute Kafka stream topologies with Lenses SQL
  • Kafka Connect and Kubernetes

