With version 5.1, Lenses is now offering enterprise support for our popular open-source Secret Provider to customers.
In this blog, we’ll explain how secrets for Kafka Connect connectors can be safely protected using Secret Managers and walk you through configuring the Lenses S3 Sink Connector with the Lenses Secret Provider plugin and AWS Secret Manager.
Security is of paramount importance in the information age. The world is becoming more complex and so is keeping information systems secure.
Passwords, PINs, access keys, certificates, key stores, trust stores - these are all virtual credentials our systems use to authenticate a caller's identity before allowing access to data or operations.
Regular rotation of secrets is good practice for reducing risk: if a secret is obtained by a nefarious third party, it will only have a limited shelf life.
But managing and rotating secrets would be a full-time job if not automated. The cascade effect of changing passwords on downstream systems can lead to extra work, confusion, and system errors caused by mismatched credentials. That’s why we use technologies like password safes and secret managers.
Kafka Connect is one of the building blocks of streaming architecture, and as an interface with other systems it needs to be able to have the valid secrets for the systems it interacts with.
The clue is in the name - the secret provider provides secrets to the system that needs them - in this blog, a Kafka Connector. This way connectors don’t have to worry about where the secrets come from or when they expire; they can just focus on using them.
A good example is our AWS S3 Sink Connector with your secrets held in a Hashicorp Vault store. It doesn’t make sense for the AWS S3 connector and every other connector to have to support all the different secret stores - this would create a lot of duplication of work! So one secret provider can interface with all the connectors using Kafka Connect’s plug-in architecture.
Secret providers use indirect references. The Connect workers are configured with connection details, e.g. AWS Secret Manager. These providers are then hooked into and referenced in each Connector configuration.
Example: worker.props
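A minimal sketch of what this looks like - the provider class, the `my-aws-secret` secret name, and the `my_username_key`/`my_password_key` keys are illustrative, so check the Lenses Secret Provider documentation for the exact property names in your version:

```properties
# worker.props - declare the aws config provider on every Connect worker
config.providers=aws
config.providers.aws.class=io.lenses.connect.secrets.providers.AWSSecretProvider
config.providers.aws.param.aws.auth.method=credentials
config.providers.aws.param.aws.access.key=<my-access-key>
config.providers.aws.param.aws.secret.key=<my-secret-key>
config.providers.aws.param.aws.region=eu-west-1

# connector configuration - indirect references resolved by the provider
username=${aws:my-aws-secret:my_username_key}
password=${aws:my-aws-secret:my_password_key}
```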
In this example the config provider, aws, is declared in the worker properties, the provider class (located in our plugin path) is set, and the provider is then referenced in the connector configuration. The provider will look up the value of my_username_key in an AWS Secrets Manager secret called my-aws-secret, and the resulting values are passed to the connector and assigned to the username and password.
A secret in a Secret Manager may have automatic rotation configured. If it does, there will be a record of either when the next rotation will occur or the TTL (time-to-live) of the secret. After this point we can assume that the secret may have changed.
In the secret provider we want to support any connector, but there is no way of notifying a connector that a secret has changed (unless support is built into the connector specifically for this purpose). Connect does have a subscribe method in the API signature for external secrets, but it is not implemented. So if the secret changes, i.e. rotates, we need to restart the connector.
There’s a gotcha here: we need to ensure the connector restarts correctly, or not at all. According to the spec, Connect adds a property config.reload.action that controls the behaviour of the connector upon encountering a config change.
After a bit of digging, however, it turns out the property is actually config.action.reload, not config.reload.action. Documentation is always a sticking point with any software.
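The behaviour is set per connector; restart is the default, and none disables automatic restarts:

```properties
# connector config - what Connect does when an externalized secret changes
config.action.reload=restart
```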
Setting up Secret Manager in AWS couldn’t be easier. You might be using the AWS Console, CloudFormation, Terraform or have some manual scripts via the AWS command line.
In my example I have set up a secret in AWS Secrets Manager called “rotateTest/myAwsSecretPath” with the following content:
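The secret is a JSON document; the key names and values below are illustrative:

```json
{
  "access_key": "AKIAEXAMPLEKEY",
  "secret_key": "exampleSecretKeyValue"
}
```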
For a quick start, we can create this secret using the AWS CLI command:
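Assuming the AWS CLI is configured with permissions for Secrets Manager, a command along these lines creates the secret (key names and values illustrative):

```shell
aws secretsmanager create-secret \
  --name "rotateTest/myAwsSecretPath" \
  --secret-string '{"access_key":"AKIAEXAMPLEKEY","secret_key":"exampleSecretKeyValue"}' \
  --region eu-west-1
```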
AWS doesn’t support setting a TTL on the secret directly; instead you use rotation rules, configured by setting up automatic rotation with a Lambda function. Most AWS services allow you to enable secret rotation, for example RDS.
Secrets Manager will then track when to call the Lambda, but generating the new secret and updating it in whatever system the secret is for is the job of your Lambda function, because the secret could be for any service - for example, your own custom APIs.
More in this AWS article on Rotating Secrets
I’m going to be using the Lenses S3 connector as this has proven to be one of our most popular connectors and it's installed by default in the Lenses Kafka Docker Box.
Sign up for the Lenses Box here. Once signed up you will receive an email with the trial license ID for the Box.
Here is a sample configuration for the S3 sink that will copy the messages from the “backblaze_smart” topic to “mytestbucket” in S3.
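A sketch of that configuration, using the Lenses Stream Reactor S3 sink - the connector class and the connect.s3.* property names can differ between versions, so treat them as illustrative and check the connector docs:

```properties
name=s3-sink
connector.class=io.lenses.streamreactor.connect.aws.s3.sink.S3SinkConnector
topics=backblaze_smart
tasks.max=1
connect.s3.kcql=INSERT INTO mytestbucket:backblaze_smart SELECT * FROM backblaze_smart STOREAS `JSON`
connect.s3.aws.auth.mode=Credentials
connect.s3.aws.access.key=<my-access-key>
connect.s3.aws.secret.key=<my-secret-key>
connect.s3.aws.region=eu-west-1
```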
But I don’t want to have to include my aws.access.key and aws.secret.key in every request.
They would leak out in Connect’s API requests, and we don’t want them lying around in connector config files, making their way into Git.
Additionally, you don’t want to expose them to anyone setting up connectors, even though Lenses guards access to connectors and who can deploy them. Managing secrets may be the responsibility of a different team, and manually editing the configuration each time a secret is rotated is a logistical nightmare.
So how do we use the Lenses AWS Secret Provider instead?
To install the Lenses secret provider we need to get it onto the plugin path of Kafka Connect and then configure it. You do this by copying it into the plugin.path of each worker in your Connect cluster. Below is an example of typical plugin path locations.
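For example (the paths are illustrative and depend on your installation):

```properties
# worker.props - comma-separated list of directories scanned for plugins
plugin.path=/usr/share/java,/usr/share/connect-plugins,/opt/secret-provider
```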
Luckily the Lenses Box already has the Lenses secret providers installed.
Even though the secret providers are installed in the Lenses Box, we still need to configure them with the following:
aws.auth.method - the authentication method: credentials for explicit keys, or default to use the standard AWS provider chain
aws.access.key - AWS IAM access key
aws.secret.key - AWS IAM secret key
aws.region - AWS region the secret manager is running in
For the Lenses Box we need to pass them in as environment variables. The Box translates any variable prefixed with CONNECT_ into the Connect worker's properties file.
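Translating the provider settings into Box environment variables looks roughly like this - dots become underscores and names are upper-cased (values illustrative):

```shell
export CONNECT_CONFIG_PROVIDERS=aws
export CONNECT_CONFIG_PROVIDERS_AWS_CLASS=io.lenses.connect.secrets.providers.AWSSecretProvider
export CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_AUTH_METHOD=credentials
export CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_ACCESS_KEY=<my-access-key>
export CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_SECRET_KEY=<my-secret-key>
export CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_REGION=eu-west-1
```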
The secret providers also support AWS profiles, which is especially helpful if your Connect workers are deployed on an EC2 instance.
Here is an example of configuring your S3 Sink with the provider's aws.auth.method set to credentials:
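A sketch, reusing the rotateTest/myAwsSecretPath secret from earlier (the access_key/secret_key key names inside the secret are illustrative):

```properties
name=s3-sink
connector.class=io.lenses.streamreactor.connect.aws.s3.sink.S3SinkConnector
topics=backblaze_smart
tasks.max=1
connect.s3.kcql=INSERT INTO mytestbucket:backblaze_smart SELECT * FROM backblaze_smart STOREAS `JSON`
connect.s3.aws.auth.mode=Credentials
connect.s3.aws.access.key=${aws:rotateTest/myAwsSecretPath:access_key}
connect.s3.aws.secret.key=${aws:rotateTest/myAwsSecretPath:secret_key}
connect.s3.aws.region=eu-west-1
```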
Now let's run the following command at your console to start the Lenses Box, taking care to set the license ID from the email you received earlier.
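Something along these lines, substituting YOUR_LICENSE_ID with the ID from your email and passing the provider settings in as CONNECT_ environment variables (values illustrative; check the Lenses Box docs for the exact invocation):

```shell
docker run --rm \
  -p 3030:3030 -p 9092:9092 \
  -e ADV_HOST=127.0.0.1 \
  -e EULA="https://licenses.lenses.io/download/lensesdl?id=YOUR_LICENSE_ID" \
  -e CONNECT_CONFIG_PROVIDERS=aws \
  -e CONNECT_CONFIG_PROVIDERS_AWS_CLASS=io.lenses.connect.secrets.providers.AWSSecretProvider \
  -e CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_AUTH_METHOD=credentials \
  -e CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_ACCESS_KEY=<my-access-key> \
  -e CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_SECRET_KEY=<my-secret-key> \
  -e CONNECT_CONFIG_PROVIDERS_AWS_PARAM_AWS_REGION=eu-west-1 \
  lensesio/box
```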
Once the Box is up, log in with admin/admin, navigate to the Connectors screen, select Create Connector, paste in the configuration and hit deploy. And that’s it - we can deploy a secured connector via Lenses.
Not only will the Lenses Secret Provider Kafka Connect plugin retrieve your secret from AWS Secrets Manager during connector start-up, it will also schedule a restart when the time-to-live of your secret expires.
When the connector restarts, the secret is retrieved again with the updated value.
You can check the docs for the plugin.
Any questions? Ask Marios in our new community answers site or join our Slack Community and ask us directly.