Using Kyverno To Enforce AWS Load Balancer Annotations For Centralized Logging To S3
Introduction
In this guide, I will go over the steps to use Kyverno to automatically configure the annotations that enable access logs for an AWS Network Load Balancer (NLB) to be forwarded to an S3 bucket, for a Kubernetes kind: Service of type: LoadBalancer.
In brief, Kyverno is a policy engine for Kubernetes that lets administrators take advantage of Validating and Mutating webhooks that can validate and modify Kubernetes resources to ensure a consistent environment. Learn more about Kyverno here.
In this example I will be targeting the configuration of an AWS Load Balancer of type NLB. However, you can apply the same approach to any similar mutating policy where you want to ensure an expected set of annotations is defined on a resource.
This method can be useful when you want to centralize the creation and management of an S3 bucket, so that all Load Balancers created within the cluster log to the same bucket for easy retrieval.
Prerequisites
In this guide I will be using Kubernetes version 1.21 in the local and AWS EKS environments. The creation of the following resources will not be covered in this guide.
- An S3 bucket (named poc-lb-logs-ue2 in this example) with a policy attached to allow write permissions from the NLB (see below at AWS Testing for an example policy). The bucket must be in the same region as the NLB.
- A certificate ARN for a certificate that is created/imported in the AWS Certificate Manager. For access logs to propagate to the S3 bucket, the Load Balancer must have a TLS listener, which requires a certificate ARN annotation to be defined in the Load Balancer manifest. More info here. Example certificate ARN: arn:aws:acm:us-east-2:(ACCOUNT_ID):certificate/(CERTIFICATE_UID)
- aws-load-balancer-controller should be installed on the EKS cluster in order to handle the creation and configuration of the AWS Load Balancer resources. Installation steps are here. For this example, the installed version is v2.4.2 (you can verify the installed version as shown just below).
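If you want to confirm which controller version is already installed on the cluster, you can check the deployment's image tag. This assumes the controller was installed into the kube-system namespace, which is the default in the installation guide:
kubectl get deployment aws-load-balancer-controller -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'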
Local binaries to have installed:
- kubectl (1.21.13) - Install
- krew (v0.4.3) - Install
- k3d (v5.4.1) - Install
- helm (v3.8.2) - Install
Local Testing
We will want to create a local testing environment to verify the behavior of the Kyverno mutating policy. To do this we’ll create a local test cluster with k3d at a specific Kubernetes version and install Kyverno and the mutating policy manifest.
Create k3d cluster
k3d is a tool, similar to kind and minikube, that facilitates the installation of lightweight k3s Kubernetes images that are suitable for local testing.
With k3d installed, run the following to install a cluster at Kubernetes version v1.21.12.
k3d cluster create k3d-1-21-12-medium --image rancher/k3s:v1.21.12-k3s1
To view all available k3d image versions, see the k3s docker hub repository.
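Before installing anything onto the cluster, it’s worth a quick sanity check that the cluster is up and that kubectl is pointed at it:
k3d cluster list
kubectl get nodes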
Install kyverno
We’ll use helm to install Kyverno onto the cluster.
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno kyverno/kyverno -n kyverno --create-namespace --set replicaCount=1 --version v2.4.1
To verify it’s installed:
> kubectl get deployments -n kyverno
NAME READY UP-TO-DATE AVAILABLE AGE
kyverno 1/1 1 1 6m7s
To verify that the kyverno CRDs are installed and can be used:
> kubectl api-resources |grep kyverno
clusterpolicies               cpol    kyverno.io/v1          false   ClusterPolicy
clusterreportchangerequests   crcr    kyverno.io/v1alpha2    false   ClusterReportChangeRequest
generaterequests              gr      kyverno.io/v1          true    GenerateRequest
policies                      pol     kyverno.io/v1          true    Policy
reportchangerequests          rcr     kyverno.io/v1alpha2    true    ReportChangeRequest
updaterequests                ur      kyverno.io/v1beta1     true    UpdateRequest
Install kyverno mutating policy
Policy reference:
# injection-policy.yaml
---
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: inject-s3-logging-annotation-to-lb
  annotations:
    policies.kyverno.io/title: Inject S3 Logging Annotation to LB
    policies.kyverno.io/category: Logging
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Service
    policies.kyverno.io/description: >-
      Any new service of LoadBalancer type will be injected with the annotation for AWS S3 bucket access logging
spec:
  background: false
  validationFailureAction: enforce
  rules:
    - name: inject-nlb-s3-annotation
      match:
        resources:
          kinds:
            - Service
      preconditions:
        all:
          - key: "{{request.object.spec.type}}"
            operator: Equals
            value: "LoadBalancer"
      mutate:
        patchStrategicMerge:
          metadata:
            annotations:
              +(service.beta.kubernetes.io/aws-load-balancer-attributes): "access_logs.s3.enabled=true,access_logs.s3.bucket=poc-lb-logs-ue2,access_logs.s3.prefix={{request.object.metadata.namespace}}/{{request.object.metadata.name}}"
Anatomy of a mutating policy
Given the above policy, we can examine a few details:
- kind: ClusterPolicy means this is a cluster-wide policy (you can use kind: Policy for namespaced policies).
- background: false means the policy will not apply to existing resources, only those created AFTER the policy is installed.
- +(): The parentheses around an annotation key are an 'add-if-not-present anchor', meaning the policy will only add the annotation value IF the annotation is not already defined in the resource manifest; otherwise it will skip it. See the example after this list.
- The prefix value "{{request.object.metadata.namespace}}/{{request.object.metadata.name}}" is a Kyverno variable lookup which uses the kind: Service metadata to populate the values. This designates the folder within the S3 bucket where logs will flow: the namespace where the Load Balancer was created and the name of the Load Balancer. This would be default/test-load-balancer for this example.
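As a quick illustration of the add-if-not-present anchor: if an incoming Service already defines the attributes annotation itself (the value below is hypothetical, just for illustration), the policy leaves it alone rather than overwriting it; only Services without the key get the S3 logging value injected.
# hypothetical Service snippet - the policy skips this one because the annotation key already exists
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-attributes: "access_logs.s3.enabled=false"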
Install and verify the policy
Now that Kyverno is installed and we understand what the policy is doing, we can install the mutating policy, which will watch for new Kubernetes resources of kind: Service and type: LoadBalancer.
Install and verify:
❯ kubectl apply -f injection-policy.yaml
clusterpolicy.kyverno.io/inject-s3-logging-annotation-to-lb created

❯ kubectl get clusterpolicy
NAME BACKGROUND ACTION READY
inject-s3-logging-annotation-to-lb false enforce true
Create Test Load Balancer and Verify Injection
Now that the mutating policy is installed, we can create a new resource of kind: Service with type: LoadBalancer to ensure that Kyverno is processing the mutation correctly. Since we're testing locally, the annotations will not have any effect just yet.
# lb-example.yaml
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubernetes.io/ingress.class: nlb
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:(ACCOUNT_ID):certificate/(CERTIFICATE_UID)
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
  labels:
    app: test-load-balancer
  name: test-load-balancer
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  selector:
    app: test-load-balancer
  sessionAffinity: None
  type: LoadBalancer
  ports:
    - name: dummy-port
      port: 443
      protocol: TCP
      targetPort: 99
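Apply the manifest (assuming it is saved as lb-example.yaml, matching the comment at the top of the file):
kubectl apply -f lb-example.yaml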
In the logs for the kyverno controller we’ll see information regarding the injection:
> kubectl logs po/kyverno-59f98cc95-g25hk |grep mutation
I0603 17:20:29.998021 1 mutation.go:25] WebhookServer "msg"="mutation rules from policy applied successfully" "action"="mutate" "gvk"="/v1, Kind=Service" "operation"="CREATE" "resource"="default/Service/test-load-balancer" "policy"="inject-s3-logging-annotation-to-lb" "rules"=["inject-nlb-s3-annotation"]
We can check the resource and see the annotations have been added:
> kubectl get svc/test-load-balancer -n default -oyaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubernetes.io/ingress.class: nlb
    policies.kyverno.io/last-applied-patches: |
      inject-nlb-s3-annotation.inject-s3-logging-annotation-to-lb.kyverno.io: added /metadata/annotations/service.beta.kubernetes.io~1aws-load-balancer-attributes
    service.beta.kubernetes.io/aws-load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=poc-lb-logs-ue2,access_logs.s3.prefix=default/test-load-balancer
...
(REST OMITTED)
Note that the annotation metadata.annotations.policies.kyverno.io/last-applied-patches is now populated with a log message indicating that the policy has been applied to this resource.
metadata:
  annotations:
    policies.kyverno.io/last-applied-patches: |
      inject-nlb-s3-annotation.inject-s3-logging-annotation-to-lb.kyverno.io: added /metadata/annotations/service.beta.kubernetes.io~1aws-load-balancer-attributes
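If you only want the injected attributes value rather than the full YAML, a jsonpath query works as well (note the escaped dots in the annotation key):
kubectl get svc test-load-balancer -n default \
  -o jsonpath='{.metadata.annotations.service\.beta\.kubernetes\.io/aws-load-balancer-attributes}'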
AWS Testing
Now that we’ve tested that the mutating policy is working locally, we can verify that the policy does what we expect in a live environment. We’ll install Kyverno, create an S3 bucket, attach an access-logging bucket policy to it, and verify that the access logs for the Load Balancer land in the expected S3 bucket.
Create S3 bucket with permissions
Create a new S3 bucket named poc-lb-logs-ue2 and select the same region where the EKS cluster is (i.e. where the NLB will be created). In this case, us-east-2.
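If you prefer the CLI over the console, the bucket can be created with something along these lines (bucket name and region from this example; LocationConstraint is required for regions other than us-east-1):
aws s3api create-bucket --bucket poc-lb-logs-ue2 --region us-east-2 \
  --create-bucket-configuration LocationConstraint=us-east-2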
See AWS Documentation for more information on bucket policies for access logs.
Attach the following policy to allow logs to be pushed to the bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::033677994240:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::poc-lb-logs-ue2/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::poc-lb-logs-ue2/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::poc-lb-logs-ue2"
    }
  ]
}
Based on the documentation, 033677994240 in this example is the account ID to use for Load Balancers that are created in the us-east-2 region.
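Assuming the policy above is saved locally as bucket-policy.json (a file name chosen here just for illustration), it can be attached from the CLI:
aws s3api put-bucket-policy --bucket poc-lb-logs-ue2 --policy file://bucket-policy.json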
Create Test Load Balancer and Verify Logging in S3
Return to the steps above under Install kyverno and Install kyverno mutating policy to set up Kyverno on the EKS cluster.
Create the LoadBalancer (same manifest as before):
# lb-example.yaml
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubernetes.io/ingress.class: nlb
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:(ACCOUNT_ID):certificate/(CERTIFICATE_UID)
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
  labels:
    app: test-load-balancer
  name: test-load-balancer
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  selector:
    app: test-load-balancer
  sessionAffinity: None
  type: LoadBalancer
  ports:
    - name: dummy-port
      port: 443
      protocol: TCP
      targetPort: 99
You should see the same expected annotations get added to the service with kubectl get svc/test-load-balancer -n default -oyaml.
metadata:
  annotations:
    ... (REST OMITTED)
    service.beta.kubernetes.io/aws-load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=poc-lb-logs-ue2,access_logs.s3.prefix=default/test-load-balancer
    ... (REST OMITTED)
As long as aws-load-balancer-controller is configured appropriately, a new NLB will be created for the service. The kind: Service resource status will be updated to show the hostname of the Load Balancer that was created:
status:
  loadBalancer:
    ingress:
      - hostname: (REDACTED)
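You can also pull the hostname directly with a jsonpath query, for example:
kubectl get svc test-load-balancer -n default \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'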
You should then start to see access logs appear in the S3 bucket shortly after the annotation is added, once the TLS listener begins receiving traffic.
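To check what has landed in the bucket from the CLI (bucket and prefix from this example):
aws s3 ls s3://poc-lb-logs-ue2/default/test-load-balancer/ --recursive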
For more information on how to interpret the access logs for an NLB, see the AWS Documentation for Access logs for your Network Load Balancer.