Introduction

GitOps is a great way to define the state of your cluster in a version-controlled environment and have a controller keep everything in sync. This all sounds great, but what if you have secrets you want to keep in sync too?

Kubernetes and Secrets

There are various options for having secrets available to your pods in Kubernetes:

  1. Native secrets object
  2. A hosted secrets service like Google’s Secret Manager
  3. A self-hosted service like HashiCorp Vault

Option 1 is nice and easy: your secrets are injected as environment variables or files, depending on the manifest. It’s a tried-and-true method, well defined and simple, but not as secure as it could be. Option 2 has little operational burden but requires code changes. Option 3 carries the most operational burden but is also the most secure, as you get to manage all your data with your own keys.
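To illustrate option 1, here’s a minimal sketch of a native Secret consumed as an environment variable (all names and values here are illustrative):

```yaml
# A native Secret and a pod consuming it as an env var.
# Note the value is only base64-encoded in the manifest.
apiVersion: v1
kind: Secret
metadata:
  name: db-creds
type: Opaque
data:
  password: cGFzc3dvcmQxMjM=   # "password123"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: alpine:3.8
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-creds
          key: password
```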

There would also be some work to make options 2 and 3 more GitOps-friendly. Traditionally, you interact with them via a GUI or API rather than pulling things from a git repo. Of course, this can be remedied with code, but adding another link to your security chain can introduce risk. (I might be wrong about this bit; I’m not that well versed in these products.)

What did we do?

We have a small team, so we wanted to introduce some security practices without causing too much burden, while keeping things easy. So we decided against options 2 and 3 and instead kept everything in native Secret objects.

Now of course, we don’t want plain-text secrets in git repos, even if the repo is private. So we can’t just use the native Secret object on its own, as its values are merely base64-encoded. We needed something to provide a level of encryption. There were two options:

  1. Bitnami Sealed Secrets
  2. Mozilla Sops
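The point above about native Secrets being merely encoded is easy to demonstrate: base64 is reversible by anyone holding the manifest, no key required.

```shell
# base64 is encoding, not encryption -- decoding needs no key
echo 'cGFzc3dvcmQxMjM=' | base64 -d   # prints: password123
```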

The big difference between the two is that Sealed Secrets is slightly more secure, in that a public key is used to encrypt manifests and a private key held in a controller is used to decrypt them. This can make local decryption a bit gnarly, as you’d need to pull the key from Kubernetes. Sops uses symmetric encryption, with integrations into several key management services to make this easy.
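As context for how that key-manager integration looks in practice, sops can read its key configuration from a .sops.yaml file at the repo root. A minimal sketch, assuming a GCP KMS key (all resource names here are illustrative):

```yaml
# .sops.yaml -- tells sops which key to use for files matching the regex
creation_rules:
  - path_regex: .*\.json$
    gcp_kms: projects/my-project/locations/global/keyRings/sops/cryptoKeys/sops-key
```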

We went with Sops backed by Google’s KMS so that:

1. We can control authentication/authorisation via Google and Terraform with 2FA enabled

2. It provides a nicer way of modifying secrets locally while storing them encrypted, without needing access to Kubernetes. This is important for the way we work and for reducing reliance on accessing clusters directly.
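Point 2 in practice: with the KMS key configured, editing an encrypted file locally is a one-liner — sops decrypts to a temporary file, opens your editor, and re-encrypts on save. A sketch (file name illustrative):

```shell
# encrypt a manifest in place; the key comes from .sops.yaml
sops -e -i manifest.json

# open a decrypted view in $EDITOR; re-encrypted automatically on save
sops manifest.json
```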

It’s not perfect; we know Vault is more secure, but it’s a decent middle ground. The imperfections: secrets can still be stored locally in plain text, and the Sops decryption mechanism generates temporary plain-text files, so if you don’t manage your git process properly, you could inadvertently commit them.

Integration into GitOps

Now that we’ve picked our tool, we can integrate it into our GitOps setup. Our controller of choice is ArgoCD, so we know it will need the rights to decrypt manifests, as well as the Sops binary to do that with. Here’s how we add the binary to the repo server component:

initContainers:
- name: download-sops
  image: alpine:3.8
  command: [sh, -c]
  args:
  - wget https://github.com/mozilla/sops/releases/download/v3.5.0/sops-v3.5.0.linux && mv sops-v3.5.0.linux /custom-tools/sops && chmod +x /custom-tools/sops
  volumeMounts:
  - name: custom-tools
    mountPath: /custom-tools

Sops uses the Google Cloud SDK, so we need to mount the credentials and specify their location:

...
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
  value: /app/config/gcp/gcp-creds.json
...
volumes:
- secret:
    secretName: gcp-creds
  name: gcp-creds
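For GOOGLE_APPLICATION_CREDENTIALS to resolve, the gcp-creds volume also needs mounting into the repo server container at that path. A sketch of the corresponding mount, assuming the secret contains a gcp-creds.json key:

```yaml
volumeMounts:
- name: gcp-creds
  mountPath: /app/config/gcp
  readOnly: true
```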

Of course, the IAM service account will need the roles/cloudkms.cryptoKeyDecrypter GCP role.
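We manage this binding through Terraform, but for illustration the equivalent gcloud command looks something like this (project, keyring, key, and service account names are all hypothetical):

```shell
# grant decrypt rights on the sops key to the repo server's service account
gcloud kms keys add-iam-policy-binding sops-key \
  --keyring=sops --location=global \
  --member=serviceAccount:argocd-repo-server@my-project.iam.gserviceaccount.com \
  --role=roles/cloudkms.cryptoKeyDecrypter
```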

Now that we’ve given ArgoCD the rights, we need to modify the process so they’re taken into account. ArgoCD simply looks for a Kubernetes manifest within the directory we pointed it at, but now that manifest is encrypted. Thankfully, ArgoCD allows you to add custom plugins via its ConfigMap:

configManagementPlugins: |
  - name: sops
    generate:
      command: [sh, -c]
      args: ["sops -d --input-type=json --output-type=json $MANIFEST_PATH"]

The job of a custom plugin is to output a valid Kubernetes manifest via stdout; ArgoCD will then take that output and process it. That sops command does just that. But where does $MANIFEST_PATH come from?

ArgoCD is driven by its Application custom resource. This resource has a section for defining where things come from, called source, which looks like this:

...
"source": {
  "repoURL": <gitrepo>,
  "targetRevision": <revision_of_git_repo>,
  "path": <directory_of_manifests>,
  "plugin": {
    "name": "sops",
    "env": [
      {
        "name": "MANIFEST_PATH",
        "value": "manifest.json"
      }
    ]
  }
},
...

As you can see from this and the plugin command, the working directory is the path directory we specified, so we don’t need to build the path up. The env key can contain any number of key-value pairs; we’re using it to define the manifest filename.
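The flip side of the plugin’s sops -d happens at commit time: the manifest is encrypted before it ever lands in git. A sketch of that step, using the same manifest.json filename:

```shell
# encrypt the rendered manifest in place before committing;
# ArgoCD's plugin runs the matching `sops -d` at sync time
sops -e -i --input-type=json --output-type=json manifest.json
```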

Conclusion

Now we store manifests encrypted within the git repo and have them decrypted by ArgoCD, increasing security over plain-text secrets stored in git, though still not as strong as something like Vault.

One thing to note is that this makes review more difficult, as you’ll just see encrypted text in value fields. We’ve integrated GitOps into our pipeline so we don’t have this issue and encrypt the entire manifest, but it’s worth looking at the Sops configuration for specifying which fields are encrypted and which are left in the clear. Shout out to Dan P for doing most of the work for this.
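On that last point, sops’ creation rules support an encrypted_regex option that encrypts only matching fields, leaving the rest readable for review. A sketch for a Kubernetes Secret (resource names illustrative):

```yaml
# .sops.yaml -- encrypt only the data fields of a Secret,
# leaving metadata and kind readable in diffs
creation_rules:
  - path_regex: .*secret.*\.yaml$
    encrypted_regex: ^(data|stringData)$
    gcp_kms: projects/my-project/locations/global/keyRings/sops/cryptoKeys/sops-key
```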