Welcome to the KES documentation site. These pages give a high-level overview of how KES works, information about KES components, general architecture, and access controls.
For more detailed documentation on setting up KES, see the Configuration Guide.
Consider a basic setup with one application instance and one KES server.
The application connects to the KES server via TLS. Then, the application uses the KES server API to perform operations like creating a new cryptographic key. The KES server talks to a central key-management system (KMS).
The central KMS contains all of the state information, including the cryptographic keys. For any stateful operation, like creating a cryptographic master key, the KES server reaches out to the KMS.
The KES server directly handles stateless operations, like generating a new data encryption key (DEK), requiring no interaction with the central KMS. As the majority of key-management operations are stateless, the KES server handles the load, including operations for encryption, decryption, and key derivation.
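To see why DEK generation can be stateless, consider that a DEK can be deterministically re-derived from a master key plus a random per-key value, with no lookup in the KMS. The following is a minimal sketch of this idea using HKDF (RFC 5869); it is not KES's actual derivation scheme, and all names are hypothetical.

```python
import hashlib
import hmac
import os

def hkdf_sha256(key: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869): extract-then-expand with HMAC-SHA256."""
    prk = hmac.new(salt, key, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

master_key = os.urandom(32)  # stateful: held by the central key store
nonce = os.urandom(16)       # random per-DEK value, returned alongside the DEK

dek = hkdf_sha256(master_key, nonce, b"dek-derivation")
# Stateless: the same (master key, nonce) pair re-derives the same DEK,
# so serving this request needs no round-trip to the central KMS.
assert dek == hkdf_sha256(master_key, nonce, b"dek-derivation")
```

Because derivation is a pure function of its inputs, any number of such requests can be served from the KES server alone.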
Larger workloads demand more resources, requiring more application instances. If all of these instances talked to a traditional KMS directly, e.g. a dedicated server or hardware appliance, they would eventually exceed the KMS's capabilities.
Kubernetes automatically adds or removes resources based on the current workload. However, a hardware security appliance designed to protect cryptographic keys typically cannot automatically scale up. For those appliances that support clustering, scaling means buying more expensive hardware.
In contrast, KES scales horizontally with the application.
The KES server decouples the application from the KMS / Key Store and can handle almost all application requests on its own. It only has to talk to the Key Store when creating or deleting a cryptographic key.
Similarly, the KES server only uses the KMS to encrypt or decrypt the cryptographic keys stored at or fetched from the Key Store. Therefore, the KES server reduces the load on the KMS / Key Store up to several orders of magnitude.
In general, all KES server operations require authentication and authorization. KES uses the same application-independent mechanism for both: mutual TLS authentication (mTLS).
The KES client needs a private key / public key pair and an X.509 certificate. In the following section, we explicitly distinguish the public key from the certificate to explain how authentication and authorization work.
KES relies on mutual TLS (mTLS) for authentication. Both the KES client and the KES server need their own private key / certificate pair.
By default, each mTLS peer has to trust the issuer of the peer’s certificate. This means that the client must trust the issuer of the server’s certificate and the server must trust the issuer of the client’s certificate. If the same authority issued both the client’s certificate and the server’s certificate then the client and the server each only have to trust a single entity. If different authorities issued the client’s certificate and the server’s certificate, then the client and the server must each trust both authorities.
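The single-authority case above can be reproduced with plain OpenSSL. This is a minimal sketch with hypothetical file and subject names: one self-signed CA issues the client certificate, so a server only needs to trust ca.crt.

```shell
# Create a self-signed certificate authority (hypothetical names).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ca.key -out ca.crt -days 365 -subj "/CN=Example CA"

# Create a client key pair and a certificate signing request.
openssl req -newkey rsa:2048 -nodes \
  -keyout client.key -out client.csr -subj "/CN=kes-client"

# Let the CA issue the client certificate.
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out client.crt -days 365

# A peer that trusts ca.crt accepts this certificate as authentic.
openssl verify -CAfile ca.crt client.crt
```

Issuing the server certificate from the same CA means each side trusts just that one authority.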
Via the Extended Key Usage extension, a certificate describes the valid use cases for its public key.
In the case of mTLS, the client certificate must have an Extended Key Usage containing ClientAuth.
Similarly, the server certificate must have an Extended Key Usage containing ServerAuth.
If your setup is not working as expected, check that the certificates contain the correct Extended Key Usage values.
View a certificate in a human-readable format with the following command:
openssl x509 -noout -text -in <your-certificate.cert>
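If a certificate is missing the needed usage, you can create one that carries it. The following sketch (hypothetical file names, requires OpenSSL 1.1.1 or newer for -addext) issues a self-signed client certificate with the ClientAuth usage and then inspects just that extension:

```shell
# Self-signed client certificate with an explicit Extended Key Usage.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout client.key -out client.crt -days 365 \
  -subj "/CN=kes-client" \
  -addext "extendedKeyUsage = clientAuth"

# Show only the Extended Key Usage extension of the certificate.
openssl x509 -noout -ext extendedKeyUsage -in client.crt
```

OpenSSL prints the usage as "TLS Web Client Authentication"; a server certificate would show "TLS Web Server Authentication" instead.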
In general, a KES server only accepts TLS connections from clients that can present a valid and authentic TLS certificate (📜) during the TLS handshake.
- A valid certificate means that the certificate is both well-formed and not expired.
- An authentic certificate means KES trusts the certificate authority that signed and issued the certificate.
When a KES client tries to establish a connection to the KES server, the TLS protocol checks that:
- The KES client has the private key (🗝️) that corresponds to the public key in the certificate (📜) presented by the client.
- The certificate presented by a client was issued by a Certificate Authority (CA) that the KES server trusts.
If the TLS handshake succeeds, then the KES server considers the request authentic.
Disabling Authentication During Testing
It is possible to skip certificate verification during testing or development.
- Start the KES server with the --auth=off option for testing or development.
- Clients must still provide a certificate, but the server does not verify whether a trusted CA issued it. Instead, the client can present a self-signed certificate.
After determining the authenticity of a request, the KES server checks the client's authorization to perform the requested operation. KES relies on a role and policy-based authorization model. The authorization check compares the request to the policy associated with the client.
When the KES server receives an authentic client request, it computes the client identity from the client certificate using the client's public key. After computing the identity, the KES server checks whether the identity has an associated named policy. If such an identity-policy mapping exists, the KES server validates that the request complies with the policy. Otherwise, the server rejects the request.
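The lookup described above can be sketched in a few lines. The exact bytes KES hashes are an implementation detail; assume here, hypothetically, that the identity is the SHA-256 hex digest of the certificate's DER-encoded public key:

```python
import hashlib

def compute_identity(public_key_der: bytes) -> str:
    """Sketch: derive a client identity as the SHA-256 hex of its public key."""
    return hashlib.sha256(public_key_der).hexdigest()

# Hypothetical identity-policy mapping held by the server.
identity_policy = {compute_identity(b"alice-public-key"): "my-policy"}

# On each request: compute the identity, then look up its named policy.
identity = compute_identity(b"alice-public-key")
print(identity_policy.get(identity, "no policy: reject request"))  # → my-policy
```

If the computed identity has no associated policy, the request is rejected before any policy rule is even evaluated.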
The KES server considers a request as authorized if the following statements are true:
- An identity was successfully computed from the client's certificate.
- A policy associated with the identity exists.
- The associated policy explicitly allows the operation that the request wants to perform.
The KES server policies determine whether to allow a client request. A policy contains a set of rules that define which API operations are allowed or denied on which resources. KES uses policy definitions designed for human-readability and comprehension rather than flexibility.
In general, policy patterns are paths that match the KES server API, such as /v1/key/create/my-key.
Write each allow/deny rule as a glob pattern. A glob pattern allows a single rule to match an entire class of requests.
A KES server evaluates a policy as follows:
- Evaluate all deny patterns. If any deny pattern matches, reject the request with a prohibited by policy error.
- Evaluate all allow patterns. If at least one allow pattern matches, KES accepts the request.
- If no allow pattern matches, reject the request with a prohibited by policy error.
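The deny-first evaluation order can be sketched as follows. This uses Python's fnmatch for glob matching; KES's actual glob semantics may differ in details (for example, whether * crosses path segments), so treat this only as an illustration of the evaluation order:

```python
from fnmatch import fnmatchcase

def evaluate(policy: dict, api_path: str) -> bool:
    """Deny patterns win; otherwise at least one allow pattern must match."""
    if any(fnmatchcase(api_path, p) for p in policy.get("deny", [])):
        return False  # rejected with "prohibited by policy"
    return any(fnmatchcase(api_path, p) for p in policy.get("allow", []))

my_policy = {
    "allow": [
        "/v1/metrics",
        "/v1/key/create/my-key",
        "/v1/key/generate/my-key*",
        "/v1/key/decrypt/my-key*",
    ],
    "deny": ["/v1/key/*/my-key-internal*"],
}

print(evaluate(my_policy, "/v1/key/generate/my-key2"))        # → True
print(evaluate(my_policy, "/v1/key/create/my-key-internal"))  # → False (deny wins)
print(evaluate(my_policy, "/v1/key/create/my-key2"))          # → False (no allow match)
```

Note that /v1/key/create/my-key-internal matches an allow pattern too, yet is still rejected: deny rules are evaluated first and take precedence.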
Let’s take a look at an example policy:
policy:
  my-policy:
    allow:
    - /v1/metrics
    - /v1/key/create/my-key
    - /v1/key/generate/my-key*
    - /v1/key/decrypt/my-key*
    deny:
    - /v1/key/*/my-key-internal*
my-policy contains four allow rules and one deny rule.
KES processes the deny rule first. my-policy contains a deny rule that prevents any key API operation (/v1/key/*/) for all resources (i.e. keys) with the name prefix my-key-internal. If a client submits any type of API operation using a key with that prefix, KES prohibits it.
For example, KES would reject any of the following under this policy:
- /v1/key/create/my-key-internal
- /v1/key/generate/my-key-internal-2
- /v1/key/decrypt/my-key-internal
If the request does not match any deny pattern, KES evaluates the request against the allow patterns.
In the case of my-policy, KES allows requests under the policy to create a key named my-key. If the user tries to create a key named my-key2 or any other name, the request fails with the prohibited by policy error, since no allow rule matches the request.
When the user requests to generate new data encryption keys (DEKs) or to decrypt encrypted DEKs, the policy allows any key with a name prefix of my-key. KES allows either /v1/key/generate/my-key or /v1/key/generate/my-key2, but prohibits /v1/key/generate/my-key-internal, because the deny rule takes precedence.
- For more information about policies and more examples, refer to: Policy Configuration
- For a comprehensive overview of the KES server APIs, refer to: Server API
The policy-identity mapping is a one-to-many relation: you may associate many identities with the same policy, but you can associate an identity with only one policy at a time on a KES server.
Multiple KES servers can each have their own policy-identity relationships. For example, KES server Server1 may associate the identity Ann with one policy, while Server2 associates the same identity Ann with a different policy. The two KES servers Server1 and Server2 have distinct and independent policy-identity relationships.
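A plain dictionary captures this relation. All identity and policy names below are hypothetical: many identities (keys) may map to the same policy (value), each identity maps to exactly one policy, and each server keeps its own mapping:

```python
# Per-server identity -> policy mappings (hypothetical names).
server1 = {
    "identity-ann": "ops-policy",
    "identity-bob": "ops-policy",   # many identities, one shared policy
}
server2 = {
    "identity-ann": "readonly-policy",  # same identity, independent server
}

print(server1["identity-ann"], server2["identity-ann"])  # → ops-policy readonly-policy
```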
As previously described, the KES server computes the client identity from its certificate. This identity is normally a cryptographic SHA-256 value. However, when specifying the identity-policy mapping, it is perfectly valid to associate an arbitrary identity value with a policy. The associated identity can be "foobar123" or any other value.
This is particularly useful for dealing with the special root identity.
The KES server has a special root identity that you must specify, either in the KES configuration file or with the --root CLI option. root acts like any other identity, except that it cannot be associated with a policy. Instead, root can perform arbitrary API operations. The root identity is especially useful for initial provisioning and management tasks.
Centrally managed or automated deployments, such as Kubernetes, do not require the root identity; for them it serves only as a security risk. If an attacker gains access to the root identity's private key and certificate, the attacker can perform arbitrary operations.
Even though a root identity must always be specified, you can effectively disable it by specifying a root identity value that can never be an actual SHA-256 hash value, for example --root=_ (underscore) or --root=disabled. Since KES never computes a cryptographic identity equal to _ or disabled, it becomes impossible to perform an operation as root.
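Assuming the configuration file accepts a top-level root field (as the --root CLI option suggests), disabling root in the config could look like this sketch:

```yaml
# Hypothetical config fragment: "_" can never equal a SHA-256 hex value,
# so no client certificate can ever map to the root identity.
root: _
```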
While root can perform arbitrary API operations, it cannot change the root identity itself. The root identity can only be specified or changed through the CLI or the configuration file. Therefore, an attacker cannot become the root identity by tricking the current root. The attacker either has to compromise the root identity's private key or change the initial server configuration.