Removing Kafka bottlenecks with DataOps
Our CTO, Andrew Stevenson, was interviewed by Alan Shimel for TechStrong TV. The discussion covered hot data topics such as DataOps, DevOps, and practices for successfully enabling Kafka. Andrew recounts his journey from civil engineering to founding Lenses.io with Antonios, our CEO, to help organizations succeed with real-time data.
Tech Intensity brings DataOps into the spotlight
The DataOps discussion is more timely than ever, and as Alan agrees, the term is becoming more than a “buzzword”. People are realizing the close relationship between their data, their data applications, and DevOps, as well as how optimizing their data processes drives project success. However, adopting open-source technologies such as Apache Kafka to fuel real-time data experiences is a daunting challenge. The choice is stark: hire a team of highly skilled and hard-to-find developers to build and maintain this technology, or invest in a smarter, more scalable approach that opens up data access to a broader set of users, including more business stakeholders.
The world is changing fast and we are witnessing major tech intensity. A wave of new platforms takes the heavy lifting out of infrastructure, so end users can focus on driving value from their data, regardless of industry. In this light, technology should not be adopted for its own sake. It should be a loyal servant to the business. And DataOps, alongside DevOps, leads this movement.
It’s about bringing data to a broader set of users so they can collaborate efficiently. As Andrew says,
“Technology should be an enabler and we often tend to lose sight of that. People from any industry should be able to understand it and use it”.
Andrew talks more about data and tech intensity in Network Computing here.
The best of both worlds: tech to serve data intensity
Apache Kafka is a must-have for companies that want to accelerate digital revenue and become players in the data economy. What if you could leverage real-time data easily, without development bottlenecks? Moving forward, we need to empower those who understand the data, not only those who understand the technology. These are the people who can bring data to market and turn it into value. That means investing in the right technology so that anyone who speaks the language of data can safely access it.
This approach to DataOps can help every industry understand and generate more value from its data. Take Babylon Health, for example: a healthcare startup focused on delivering and expanding healthcare in the developing world. What they really needed was technology to make their doctors and nurses more efficient.
Vortexa, an energy commodity trading company, struggled to run Apache Kafka on their own until they adopted Lenses over their real-time data. They finally had the visibility they needed via a Topology view, and they could query their topics instantly with the Lenses SQL engine. This helped them automate and simplify many of their admin processes and remove bottlenecks from their platform team. By shedding light on the inner workings of their Kafka, Lenses enabled them to optimize their data flows and deliver greater value.
Adapting to the new reality of COVID-19
Many of our customers were not ready to proceed with their original plans to embrace real-time data, and they struggled to prioritize projects during COVID-19. Lenses can help with the move to a world of data democratization and self-service: teams no longer depend on a central gatekeeper to manage a multi-tenant solution on top of Kafka, and developers can onboard easily.
To adapt to this new world and define a data-driven roadmap, technology teams need to ask themselves a couple of key questions:
Is open-source technology the most efficient and effective option?
Will you need to scale, and how will you do this?
Look closely at the technologies out there, always be mindful of what you need, and remember: you are not here to serve the technology; technology is here to serve you.