External Secrets Operator integrates with HashiCorp Vault for secret management.
The KV Secrets Engine is the only one supported by this provider. For other secrets engines, please refer to the Vault Generator.
First, create a SecretStore with a vault backend. For the sake of simplicity we'll use a static root token:
```yaml
apiVersion: external-secrets.io/v1
kind: SecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "http://my.vault.server:8200"
      path: "secret"
      # Version is the Vault KV secret engine version.
      # This can be either "v1" or "v2", defaults to "v2"
      version: "v2"
      auth:
        # points to a secret that contains a vault token
        # https://www.vaultproject.io/docs/auth/token
        tokenSecretRef:
          name: "vault-token"
          key: "token"
---
apiVersion: v1
kind: Secret
metadata:
  name: vault-token
data:
  token: cm9vdA== # "root"
```
NOTE: In case of a ClusterSecretStore, be sure to provide namespace in tokenSecretRef with the namespace of the secret we just created.
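As a convenience, the same token Secret can also be created imperatively. This is a sketch equivalent to the manifest above; the literal `root` matches the base64 value `cm9vdA==`:

```shell
# Create the vault-token Secret referenced by tokenSecretRef
kubectl create secret generic vault-token \
  --from-literal=token=root
```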
Then create a simple k/v pair at path `secret/foo`:

```shell
vault kv put secret/foo my-value=s3cr3t
```
You can check the KV engine version using the following command; the Options column should indicate `[version:2]`:

```shell
vault secrets list -detailed
```
If you are using `version: 1`, just remember to update your SecretStore manifest appropriately.
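For example, a KV v1 store only needs the version field changed. This is a sketch; server, path, and token secret values are carried over from the example above:

```yaml
apiVersion: external-secrets.io/v1
kind: SecretStore
metadata:
  name: vault-backend-v1
spec:
  provider:
    vault:
      server: "http://my.vault.server:8200"
      path: "secret"
      version: "v1" # KV v1 engine
      auth:
        tokenSecretRef:
          name: "vault-token"
          key: "token"
```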
Now create an ExternalSecret that uses the above SecretStore:
```yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: vault-example
spec:
  refreshInterval: "15s"
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: example-sync
  data:
    - secretKey: foobar
      remoteRef:
        key: foo
        property: my-value
    # metadataPolicy to fetch all the labels in JSON format
    - secretKey: tags
      remoteRef:
        metadataPolicy: Fetch
        key: foo
    # metadataPolicy to fetch a specific label (dev) from the source secret
    - secretKey: developer
      remoteRef:
        metadataPolicy: Fetch
        key: foo
        property: dev
---
# That will automatically create a Kubernetes Secret with:
# apiVersion: v1
# kind: Secret
# metadata:
#   name: example-sync
# data:
#   foobar: czNjcjN0
```
Keep in mind that fetching the labels with `metadataPolicy: Fetch` only works with KV secrets engine version v2.
If you leave remoteRef.property empty, you can fetch all key/value pairs for a given path. This returns the JSON-encoded secret value for that path.
```yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: vault-example
spec:
  # ...
  data:
    - secretKey: foobar
      remoteRef:
        key: /dev/package.json
```
Vault supports nested key/value pairs. You can specify a gjson expression at remoteRef.property to get a nested value.
Given the following secret - assume its path is /dev/config:
```json
{
  "foo": {
    "nested": {
      "bar": "mysecret"
    }
  }
}
```
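A nested secret like this can be written from the Vault CLI by passing the JSON payload as a file. This is a sketch assuming a local `config.json` file and the `secret` mount from the SecretStore above:

```shell
# config.json contains: {"foo":{"nested":{"bar":"mysecret"}}}
vault kv put secret/dev/config @config.json
```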
You can set the remoteRef.property to point to the nested key using a gjson expression.
```yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: vault-example
spec:
  # ...
  data:
    - secretKey: foobar
      remoteRef:
        key: /dev/config
        property: foo.nested.bar
---
# creates a secret with:
# foobar=mysecret
```
If you set remoteRef.property to just foo, you get the JSON-encoded value of that property: `{"nested":{"bar":"mysecret"}}`.
You can extract multiple keys from a nested secret using dataFrom.
Given the following secret - assume its path is /dev/config:
```json
{
  "foo": {
    "nested": {
      "bar": "mysecret",
      "baz": "bang"
    }
  }
}
```
You can set the remoteRef.property to point to the nested key using a gjson expression.
```yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: vault-example
spec:
  # ...
  dataFrom:
    - extract:
        key: /dev/config
        property: foo.nested
```
That results in a secret with these values:

```
bar=mysecret
baz=bang
```
You can extract multiple secrets from HashiCorp Vault by using dataFrom.find. Currently, dataFrom.find allows users to fetch secret names that match a given regexp pattern, or fetch secrets whose custom_metadata tags match a predefined set.
!!! warning
    The way HashiCorp Vault currently allows LIST operations is through the existence of secret metadata. If you delete a secret, you also need to delete the secret's metadata, otherwise Find operations will fail.
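For example, fully removing a KV v2 secret so it no longer breaks Find results takes two steps. This is a sketch using the `secret/foo` path from earlier:

```shell
# Delete the secret's data
vault kv delete secret/foo
# Delete all versions and the metadata that LIST operations rely on
vault kv metadata delete secret/foo
```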
Given the following secret - assume its path is /dev/config:
```json
{
  "foo": {
    "nested": {
      "bar": "mysecret",
      "baz": "bang"
    }
  }
}
```
Also assume the secret has the following custom_metadata:

```json
{
  "environment": "dev",
  "component": "app-1"
}
```
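Such custom_metadata can be attached from the Vault CLI (KV v2 only). A sketch using the example path:

```shell
vault kv metadata put \
  -custom-metadata=environment=dev \
  -custom-metadata=component=app-1 \
  secret/dev/config
```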
It is possible to find this secret in any of the following ways:
```yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: vault-example
spec:
  # ...
  dataFrom:
    - find: # will return every secret with 'dev' in it (including paths)
        name:
          regexp: dev
    - find: # will return every secret matching environment:dev tags from dev/ folder and beyond
        tags:
          environment: dev
```
This will generate a secret with:

```json
{
  "dev_config": "{\"foo\":{\"nested\":{\"bar\":\"mysecret\",\"baz\":\"bang\"}}}"
}
```
Currently, Find operations are recursive throughout a given Vault folder, starting at the provider.path definition. It is recommended to narrow the scope of the search by setting a find.path variable. This is also useful to automatically reduce the resulting secret key names:
```yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: vault-example
spec:
  # ...
  dataFrom:
    - find: # will return every secret from dev/ folder
        path: dev
        name:
          regexp: ".*"
    - find: # will return every secret matching environment:dev tags from dev/ folder
        path: dev
        tags:
          environment: dev
```
This will generate a secret with:

```json
{
  "config": "{\"foo\": {\"nested\": {\"bar\": \"mysecret\",\"baz\": \"bang\"}}}"
}
```
We support eight different modes of authentication: token-based, appRole, kubernetes-native, ldap, userPass, jwt/oidc, awsAuth, and tlsCert; each one comes with its own trade-offs. Depending on the authentication method you need to adapt your environment.
If you're using Vault namespaces, you can authenticate into one namespace and use the vault token against a different namespace, if desired.
A static token is stored in a Kind=Secret and is used to authenticate with vault.
{% include 'vault-token-store.yaml' %}
NOTE: In case of a ClusterSecretStore, be sure to provide namespace in tokenSecretRef with the namespace where the secret resides.
AppRole authentication reads the secret id from a
Kind=Secret and uses the specified roleId to acquire a temporary token to fetch secrets.
{% include 'vault-approle-store.yaml' %}
NOTE: In case of a ClusterSecretStore, be sure to provide namespace in secretRef with the namespace where the secret resides.
Kubernetes-native authentication has three options of obtaining credentials for vault:

1. a service account referenced in serviceAccountRef
2. a static JWT stored in a Kind=Secret referenced by the secretRef
3. the mounted service account token of the external-secrets operator itself

Vault validates the service account token by using the TokenReview API. ⚠️ You have to bind the system:auth-delegator ClusterRole to the service account that is used for authentication. Please follow the Vault documentation.
{% include 'vault-kubernetes-store.yaml' %}
NOTE: In case of a ClusterSecretStore, be sure to provide namespace in serviceAccountRef or in secretRef, if used.
LDAP authentication uses
username/password pair to get an access token. Username is stored directly in
a Kind=SecretStore or Kind=ClusterSecretStore resource, password is stored
in a Kind=Secret referenced by the secretRef.
{% include 'vault-ldap-store.yaml' %}
NOTE: In case of a ClusterSecretStore, be sure to provide namespace in secretRef with the namespace where the secret resides.
UserPass authentication uses
username/password pair to get an access token. Username is stored directly in
a Kind=SecretStore or Kind=ClusterSecretStore resource, password is stored
in a Kind=Secret referenced by the secretRef.
{% include 'vault-userpass-store.yaml' %}
NOTE: In case of a ClusterSecretStore, be sure to provide namespace in secretRef with the namespace where the secret resides.
JWT/OIDC uses either a
JWT token stored in a Kind=Secret and referenced by the
secretRef or a temporary Kubernetes service account token retrieved via the TokenRequest API. Optionally a role field can be defined in a Kind=SecretStore
or Kind=ClusterSecretStore resource.
{% include 'vault-jwt-store.yaml' %}
NOTE: In case of a ClusterSecretStore, be sure to provide namespace in secretRef with the namespace where the secret resides.
AWS IAM uses either a set of AWS programmatic access credentials stored in a Kind=Secret and referenced by the secretRef, or obtains the authentication token from an IRSA-enabled service account.
TLS certificates auth method allows authentication using SSL/TLS client certificates which are either signed by a CA or self-signed. SSL/TLS client certificates are defined as having an ExtKeyUsage extension with the usage set to either ClientAuth or Any.
Under specific compliance requirements, the Vault server can be set up to enforce mutual authentication from clients across all APIs by configuring the server with tls_require_and_verify_client_cert = true. This configuration differs fundamentally from the TLS certificates auth method. While the TLS certificates auth method allows the issuance of a Vault token through the /v1/auth/cert/login API, the mTLS configuration solely focuses on TLS transport layer authentication and lacks any authorization-related capabilities. It's important to note that the Vault token must still be included in the request, following any of the supported authentication methods mentioned earlier.
{% include 'vault-mtls-store.yaml' %}
You can store Access Key ID & Secret Access Key in a Kind=Secret and reference it from a SecretStore.
{% include 'vault-iam-store-static-creds.yaml' %}
NOTE: In case of a ClusterSecretStore, be sure to provide namespace in accessKeyIDSecretRef and secretAccessKeySecretRef with the namespaces where the secrets reside.
This feature lets you use short-lived service account tokens to authenticate with AWS. You must have Service Account Volume Projection enabled - it is by default on EKS. See EKS guide on how to set up IAM roles for service accounts.
The big advantage of this approach is that ESO runs without any credentials.
{% include 'vault-iam-store-sa.yaml' %}
Reference the service account from above in the Secret Store:
{% include 'vault-iam-store.yaml' %}
This is basically a zero-configuration authentication approach that inherits the credentials from the controller's pod identity.
This approach supports both IRSA (IAM Roles for Service Accounts) and AWS Pod Identity:
The provider automatically detects which authentication method is available and uses the appropriate one.
{% include 'vault-iam-store-controller-pod-identity.yaml' %}
NOTE: In case of a ClusterSecretStore, be sure to provide namespace for serviceAccountRef with the namespace where the service account resides.
{% include 'vault-jwt-store.yaml' %}
NOTE: In case of a ClusterSecretStore, be sure to provide namespace in secretRef with the namespace where the secret resides.
Vault supports the PushSecret feature, which allows you to sync a given Kubernetes secret key into a HashiCorp Vault secret. To do so, it is expected that the secret key is a valid JSON object or that the property attribute has been specified under the remoteRef.
To use PushSecret, you need to grant create, read, and update permissions on the path where you want to push secrets, for both the data and the metadata of the secret. Use it with care!
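A minimal Vault policy granting those permissions might look like the following sketch; the `secret/` mount and the `push/*` path are hypothetical placeholders for your own layout:

```shell
vault policy write eso-pushsecret - <<EOF
# KV v2 data path
path "secret/data/push/*" {
  capabilities = ["create", "read", "update"]
}
# KV v2 metadata path
path "secret/metadata/push/*" {
  capabilities = ["create", "read", "update"]
}
EOF
```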
!!! note
    Since the Vault KV v1 API does not support storing secret metadata, PushSecret will add a `custom_metadata` map to each secret it manages in Vault. This means that pushing secret keys named `custom_metadata` is not supported with Vault KV v1.
Here is an example of how to set up PushSecret:
{% include 'vault-pushsecret.yaml' %}
Note that in this example, we are generating two secrets in the target vault with the same structure but using different input formats.
Vault KV v2 supports Check-And-Set (CAS) operations to prevent unintentional overwrites when multiple clients modify the same secret. When CAS is enabled in your Vault configuration, External Secrets Operator can be configured to include the required version parameter in write operations.
To enable CAS support, add the checkAndSet configuration to your Vault provider:
```yaml
apiVersion: external-secrets.io/v1
kind: SecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "http://my.vault.server:8200"
      path: "secret"
      version: "v2" # CAS only works with KV v2
      checkAndSet:
        required: true # Enable CAS for all write operations
      auth:
        # ... authentication config
```
!!! note "CAS Requirements"
    - CAS is only supported with Vault KV v2 stores
    - When `checkAndSet.required` is true, all PushSecret operations will include version information
    - For new secrets, External Secrets Operator uses CAS version 0
    - For existing secrets, it automatically retrieves the current version before updating
    - CAS helps prevent conflicts when multiple External Secrets instances manage the same secrets
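On the Vault side, CAS can be required either per secret or for an entire KV v2 mount. A sketch; the paths are examples:

```shell
# Require CAS for a single secret
vault kv metadata put -cas-required=true secret/foo
# Require CAS for every secret in the mount
vault write secret/config cas_required=true
```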
When using Vault Enterprise with performance standby nodes, any follower can handle read requests immediately after the provider has authenticated. Since Vault becomes eventually consistent in this mode, these requests can fail if the login has not yet propagated to each server's local state.
Below are two different solutions to this scenario. You'll need to review them and pick the best fit for your environment and Vault configuration.
Vault namespaces are an enterprise feature that support multi-tenancy. You can specify a vault namespace using the namespace property when you define a SecretStore:
```yaml
apiVersion: external-secrets.io/v1
kind: SecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "http://my.vault.server:8200"
      # See https://www.vaultproject.io/docs/enterprise/namespaces
      namespace: "ns1"
      path: "secret"
      version: "v2"
      auth:
        # ...
```
In some situations your authentication backend may be in one namespace and your secrets in another. You can authenticate into one namespace and use that token against another by setting provider.vault.namespace and provider.vault.auth.namespace to different values. If provider.vault.auth.namespace is unset but provider.vault.namespace is set, it defaults to the provider.vault.namespace value.
```yaml
apiVersion: external-secrets.io/v1
kind: SecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "http://my.vault.server:8200"
      # See https://www.vaultproject.io/docs/enterprise/namespaces
      namespace: "app-team"
      path: "secret"
      version: "v2"
      auth:
        namespace: "kubernetes-team"
        # ...
```
Vault 1.10.0 and later encodes information in the token to detect the case when a server is behind. If a Vault server does not have information about the provided token, Vault returns a 412 error so clients know to retry.
A method supported in Vault 1.7 and later is to utilize the X-Vault-Index header returned on all write requests (including logins). Passing this header back on subsequent requests instructs the Vault client to retry the request until the server has an index greater than or equal to that returned with the last write. This has a performance cost, however, because the read is blocked until the follower's local state has caught up.
Vault also supports proxying inconsistent requests to the current cluster leader for immediate read-after-write consistency.
Vault 1.10.0 and later support a replication configuration that detects when forwarding should occur and does it transparently to the client.
In Vault 1.7 forwarding can be achieved by setting the X-Vault-Inconsistent
header to forward-active-node. By default, this behavior is disabled and must
be explicitly enabled in the server's replication configuration.
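Both client-side mechanisms boil down to plain HTTP headers on the Vault API. The following is a hypothetical curl invocation for illustration only; `$INDEX_FROM_LAST_WRITE` stands in for the X-Vault-Index value returned by a previous write or login:

```shell
# Retry reads until this standby has caught up to the last write,
# or have it forward the request to the active node.
curl \
  -H "X-Vault-Token: $VAULT_TOKEN" \
  -H "X-Vault-Index: $INDEX_FROM_LAST_WRITE" \
  -H "X-Vault-Inconsistent: forward-active-node" \
  "$VAULT_ADDR/v1/secret/data/foo"
```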