Introducing Kafka Skills for AI Engineering Agents
If you've written a line of code in the last 18 months, you already know this. Tools like Claude, Codex, Bob, Kiro and Cursor have made agentic software engineering the default. Most developers today are writing prompts as much as they are writing code.
That shift changes what ‘developer experience’ means. Clean UIs, useful tools and good docs are still the foundation, but the focus has shifted to ensuring that the coding agent in a developer's hands actually knows what it is doing.
The problem with AI agents and Kafka today
Out of the box, AI coding agents are generalists. They know a lot about a lot. But they do not know your data, and they do not know how to build data-intensive pipelines and products on top of streaming data.
Like a human engineer, they need to understand the many nuances of streaming data and infrastructure.
For example: the distribution and profile of the data, the cardinality of fields, data quality, how a schema evolves, the optimal topic configuration for the number of brokers in the cluster. These all fundamentally shape how a pipeline or application should be written, and they are the gap between code that runs in a demo and code that holds up in production.
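To make one of those nuances concrete: the number of partitions on a topic caps how far a consumer group can scale, so an agent that ignores it will happily suggest more consumers than can ever do useful work. A minimal sketch of that constraint (the function name and numbers are illustrative, not taken from the skills library):

```python
def effective_parallelism(partitions: int, consumers: int) -> int:
    """In a Kafka consumer group, each partition is assigned to at most
    one consumer, so consumers beyond the partition count sit idle."""
    return min(partitions, consumers)

# A 6-partition topic gains nothing from a 10-consumer group:
active = effective_parallelism(partitions=6, consumers=10)  # 6 do work
idle = 10 - active                                          # 4 sit idle
```

An agent briefed with this fact will ask about partition counts before sizing a consumer group, instead of generating a deployment that silently wastes resources.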
Agents frequently get this stuff wrong. They give you confident, plausible answers that miss the details that matter. At best it costs tokens and cycles. At worst it causes a serious production issue.
Ask an agent to ‘build a consumer for the orders topic’ and it will give you something that compiles and runs. But it probably will not handle deserialization errors properly, or set up dead letter queues correctly. The code looks right. In production it breaks, and it is hard to debug.
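Here is a minimal sketch of the kind of handling that generated code usually omits: attempt deserialization, and route bad records to a dead letter topic instead of crashing or silently dropping them. The topic names and the `route_record` helper are invented for illustration (not from the skills repo), and the actual Kafka produce/consume calls are left out so the logic stands alone:

```python
import json

DLQ_TOPIC = "orders.dlq"  # hypothetical dead letter topic name

def route_record(raw: bytes):
    """Try to deserialize a message; on failure, return a DLQ envelope
    instead of raising, so one poison message cannot stall the consumer."""
    try:
        return ("orders", json.loads(raw))
    except (UnicodeDecodeError, json.JSONDecodeError) as exc:
        return (DLQ_TOPIC, {
            "raw": raw.decode("utf-8", errors="replace"),
            "error": type(exc).__name__,
        })

topic, record = route_record(b'{"order_id": 42}')   # routed to "orders"
topic, record = route_record(b'not json at all')    # routed to "orders.dlq"
```

In a real consumer loop, the DLQ envelope would be produced to the dead letter topic before the offset is committed; that ordering is exactly the sort of detail a skill file spells out.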
The reason is straightforward: agents are only as good as the context they have access to. Give them better instructions and they give you better output.
What are Skills?
Skills are structured context files, typically Markdown, that tell an AI agent exactly how to approach a specific domain or task. Think of them as the expert briefing you would give a new engineer before they wrote their first line of Kafka code in your codebase.
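For illustration only, a skill file might look something like this. The name, headings and advice below are invented to show the shape of a skill, not copied from the repo:

```markdown
# Skill: consuming-from-kafka

## When to use
The user asks for a Kafka consumer, subscriber, or message processor.

## Before writing code, ask
- How many partitions does the topic have? (this caps consumer parallelism)
- Is there a schema registry? Which serialization format?
- What should happen to messages that fail to deserialize?

## Rules
- Never auto-commit offsets before processing completes.
- Always route undeserializable messages to a dead letter topic.
```

Because the format is plain Markdown, the agent reads it the same way a new engineer would read an onboarding doc, and you can edit it just as easily.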
When connected to a Kafka MCP server, Skills enable agents running in Claude to ask the right questions and write production-grade applications that process data with far more precision: consumers that scale efficiently with topic partitioning, producers with dead letter queue policies, proper handling of null values and data quality problems. The result is fewer applications that seem to work but fall apart when deployed in production.
What we are releasing
We are open-sourcing a library of Kafka Skills, hosted on the Lenses GitHub.
The files are grouped by use case, because the process a data engineer follows to build for Kafka is different from how a traditional backend developer does it. A data engineer cares about schema compatibility, pipeline reliability, and data quality guarantees. A backend developer wants to produce and consume correctly without getting buried in Kafka internals. A streaming developer needs to reason about state, windowing, and exactly-once processing. We built the initial set around these role-based workflows, refined with real customers, so they reflect how engineers actually write software on Kafka rather than how the docs say they should.
The repo ships with Lenses-observed examples by default. But the structure is intentionally open: if you are running a different Kafka MCP server, you can add your own examples and submit a PR.
Why it is a community project
No single team has seen every Kafka problem. The engineer running 200 topics on a multi-tenant cluster knows things we do not. The team that spent a month debugging a connector edge case has context that belongs in a skill file. Skills are only as valuable as the breadth of real scenarios they cover, and the collective knowledge of the Kafka community is far deeper than anything one company can capture alone.
The files are also meant to be adapted. Every team's Kafka setup is different. Fork a skill, tailor it to your environment, and it becomes something specific to how your team works. General enough to be useful out of the box, simple enough to make your own.
If you have found yourself coaching an AI agent through the same Kafka problem more than twice, that is a skill waiting to be written.
Where to start
- Browse the repo: https://github.com/lensesio/agentic-engineering-for-apache-kafka
- Drop the Skills into your AI copilot's skills folder (we’ve prepared it for Cursor and Claude Code) and your agent will automatically know when to apply them alongside your Kafka MCP server.
- Feel the difference immediately — the files are plain text and straightforward to adapt, regardless of which MCP server you are using.
- Found something missing? Open an issue or PR. That is how this gets better.