Generate Resources

Create additional resources based on resource creation or updates.

A generate rule can be used to create additional resources when a new resource is created or when the source is updated. This is useful to create supporting resources, such as new RoleBindings or NetworkPolicies for a Namespace.

The generate rule supports match and exclude blocks, like other rules. Hence, the trigger for applying this rule can be the creation of any resource. It is also possible to match or exclude API requests based on subjects, roles, etc.

The generate rule is triggered during the API CREATE operation. To keep resources synchronized across changes, you can use the synchronize property. When synchronize is set to true, the generated resource is kept in sync with the source resource (which can be defined as part of the policy or may be an existing resource), and generated resources cannot be modified by users. If synchronize is set to false, users can update or delete the generated resource directly.

When using a generate rule, the origin resource can either be an existing resource in Kubernetes or a new resource defined in the rule itself. When the origin is a pre-existing resource, such as a ConfigMap or Secret, the clone object is used. When the origin is a new resource defined within the manifest of the rule, the data object is used. These are mutually exclusive, and only one may be specified per rule.

Kubernetes has many default resource types, even before considering CustomResources defined by CustomResourceDefinitions (CRDs). While Kyverno can generate CustomResources as well, both these and certain default Kubernetes resources may require granting additional privileges to the ClusterRole responsible for generate behavior. To enable Kyverno to generate these other types, edit the ClusterRole, typically named kyverno:generatecontroller, and add or update the rules to cover the resources and verbs needed.
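
For example, assuming you wanted Kyverno to generate a hypothetical custom resource mykinds in the API group mycorp.com, the ClusterRole might be extended with a rule like the one below. This is only a sketch; the group and resource names are illustrative, and the existing rules in the ClusterRole should be left in place.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kyverno:generatecontroller
rules:
# ...existing rules remain here...
# illustrative addition: allow Kyverno to manage the hypothetical
# "mykinds" resource in the "mycorp.com" group
- apiGroups:
  - mycorp.com
  resources:
  - mykinds
  verbs:
  - create
  - update
  - delete
  - get
  - list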

Kyverno creates an intermediate object called a GenerateRequest, which is used to queue work items for the final resource generation. To get the details and status of a generated resource, check the details of the GenerateRequest. The following command lists all GenerateRequests:

kubectl get generaterequests -A
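
To inspect an individual request, substitute the namespace and name reported by the command above; the placeholders below are not real object names.

kubectl -n <namespace> describe generaterequest <name>
kubectl -n <namespace> get generaterequest <name> -o yaml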

A GenerateRequest status can have one of four values:

- Completed: the generate request controller created the resources defined in the policy
- Failed: the generate request controller failed to process the rules
- Pending: the request has not yet been processed, or the resource has not been created
- Skip: the generate policy was triggered by adding a label or annotation to an existing resource, but the corresponding selector is not defined in the policy itself

Generate a ConfigMap using inline data

This policy sets the Zookeeper and Kafka connection strings for all namespaces based upon a ConfigMap defined within the rule itself. Notice that this rule defines the generate.data object, so the rule will create a new ConfigMap called zk-kafka-address using the data specified in the rule’s manifest.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: zk-kafka-address
spec:
  rules:
  - name: zk-kafka-address
    match:
      resources:
        kinds:
        - Namespace
    exclude:
      resources:
        namespaces:
        - kube-system
        - default
        - kube-public
        - kyverno
    generate:
      synchronize: true
      kind: ConfigMap
      name: zk-kafka-address
      # generate the resource in the new namespace
      namespace: "{{request.object.metadata.name}}"
      data:
        kind: ConfigMap
        metadata:
          labels:
            somekey: somevalue
        data:
          ZK_ADDRESS: "192.168.10.10:2181,192.168.10.11:2181,192.168.10.12:2181"
          KAFKA_ADDRESS: "192.168.10.13:9092,192.168.10.14:9092,192.168.10.15:9092"
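
As a quick way to exercise the policy above, assuming it is installed, create a new Namespace (the name zk-test is arbitrary) and confirm the ConfigMap was generated:

kubectl create namespace zk-test
kubectl -n zk-test get configmap zk-kafka-address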

Clone a ConfigMap and propagate changes

In this policy, the source of the data is an existing ConfigMap resource named config-template which is stored in the default namespace. Notice how the generate rule here instead uses the generate.clone object when the origin data exists within Kubernetes.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: basic-policy
spec:
  rules:
  - name: Clone ConfigMap
    match:
      resources:
        kinds:
        - Namespace
    exclude:
      resources:
        namespaces:
        - kube-system
        - default
        - kube-public
        - kyverno
    generate:
      # Kind of generated resource
      kind: ConfigMap
      # Name of the generated resource
      name: default-config
      # namespace for the generated resource
      namespace: "{{request.object.metadata.name}}"
      # propagate changes from the upstream resource
      synchronize: true
      clone:
        namespace: default
        name: config-template
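
Assuming the policy above is installed and a source ConfigMap exists, you can observe synchronization by changing the source and re-reading a generated copy. The commands below are a sketch; the team-a Namespace and the key/value data are illustrative.

# create the source ConfigMap and a Namespace to trigger the rule
kubectl -n default create configmap config-template --from-literal=key=value
kubectl create namespace team-a

# modify the source; with synchronize: true the change propagates
kubectl -n default patch configmap config-template --type merge -p '{"data":{"key":"new-value"}}'
kubectl -n team-a get configmap default-config -o yaml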

Generating Bindings

In order for Kyverno to generate a new RoleBinding or ClusterRoleBinding resource, the Kyverno ServiceAccount must first be bound to the same Role or ClusterRole which you’re attempting to generate. If this is not done, Kubernetes blocks the request because it sees a possible privilege escalation attempt from the Kyverno ServiceAccount. This is not a Kyverno function but rather how Kubernetes RBAC is designed to work.

For example, if you wish to write a generate rule which creates a new RoleBinding resource granting some user the admin role over a new Namespace, the Kyverno ServiceAccount must have a ClusterRoleBinding in place for that same admin role.

Create a new ClusterRoleBinding for the Kyverno ServiceAccount, which by default is called kyverno.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kyverno:generate-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: kyverno
  namespace: kyverno
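
As a sanity check, you can impersonate the Kyverno ServiceAccount and ask the API server whether it now holds a permission conferred by the admin ClusterRole. This assumes the default kyverno ServiceAccount in the kyverno Namespace; the deployments check is just one representative permission.

kubectl auth can-i create deployments --as=system:serviceaccount:kyverno:kyverno -n default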

Now, create a generate rule as you normally would which assigns a test user named steven to the admin ClusterRole for a new Namespace. The built-in ClusterRole named admin in this rule must match the ClusterRole granted to the Kyverno ServiceAccount in the previous ClusterRoleBinding.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: steven-rolebinding
spec:
  rules:
  - name: steven-rolebinding
    match:
      resources:
        kinds:
        - Namespace
    generate:
      kind: RoleBinding
      name: steven-rolebinding
      namespace: "{{request.object.metadata.name}}"
      data:
        subjects:
        - kind: User
          name: steven
          apiGroup: rbac.authorization.k8s.io
        roleRef:
          kind: ClusterRole
          name: admin
          apiGroup: rbac.authorization.k8s.io

When a new Namespace is created, Kyverno will generate a new RoleBinding called steven-rolebinding which grants the user steven the admin ClusterRole over said new Namespace.
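
To verify, create a Namespace (the name steven-test is arbitrary) and inspect the generated RoleBinding:

kubectl create namespace steven-test
kubectl -n steven-test get rolebinding steven-rolebinding -o yaml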

Generate a NetworkPolicy

In this example, new namespaces will receive a NetworkPolicy that denies all inbound and outbound traffic. Similar to the first example, the generate.data object is used to define, as an overlay pattern, the spec for the NetworkPolicy resource.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: default
spec:
  rules:
  - name: deny-all-traffic
    match:
      resources:
        kinds:
        - Namespace
    exclude:
      resources:
        namespaces:
        - kube-system
        - default
        - kube-public
        - kyverno
    generate:
      kind: NetworkPolicy
      name: deny-all-traffic
      namespace: "{{request.object.metadata.name}}"
      data:
        spec:
          # select all pods in the namespace
          podSelector: {}
          policyTypes:
          - Ingress
          - Egress
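
As with the earlier examples, a new Namespace should receive the policy. A quick check, using an arbitrary Namespace name:

kubectl create namespace netpol-test
kubectl -n netpol-test get networkpolicy deny-all-traffic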

Linking resources with ownerReferences

In some cases, a triggering (source) resource and generated (downstream) resource need to share the same lifecycle: when the triggering resource is deleted, so too should the generated resource. This is valuable because some resources are only needed in the presence of another, for example a Service of type LoadBalancer necessitating a specific network policy in some CNI plug-ins. While Kyverno will not take care of this task internally, Kubernetes can if the ownerReferences field is set in the generated resource. In the example below, the generated ConfigMap specifies the metadata.ownerReferences[] object, including the uid field referencing the triggering Service resource, which forms an owner-dependent relationship. If the Service is later deleted, the ConfigMap will be as well. See the Kubernetes documentation for more details, including an important caveat around the scoping of these references: namespaced resources cannot be the owners of cluster-scoped resources, and cross-namespace references are also disallowed.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: demo-ownerref
spec:
  background: false
  rules:
  - name: demo-ownerref-svc-cm
    match:
      resources:
        kinds:
        - Service
    generate:
      kind: ConfigMap
      name: "{{request.object.metadata.name}}-gen-cm"
      namespace: "{{request.namespace}}"
      synchronize: false
      data:
        metadata:
          ownerReferences:
          - apiVersion: v1
            kind: Service
            name: "{{request.object.metadata.name}}"
            uid: "{{request.object.metadata.uid}}"
        data:
          foo: bar
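
One way to see the owner-dependent relationship in action, assuming the policy above is installed (the Service name demo is illustrative):

# create a Service, which triggers generation of the ConfigMap
kubectl -n default create service clusterip demo --tcp=80:80
kubectl -n default get configmap demo-gen-cm

# delete the Service; once garbage collection runs, the ConfigMap
# should be removed as well (kubectl get will report NotFound)
kubectl -n default delete service demo
kubectl -n default get configmap demo-gen-cm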

Generating resources into existing namespaces

Use of a generate rule is common when creating new resources from the point after which the policy was created. For example, a Kyverno generate policy is created so that all future namespaces can receive a standard set of Kubernetes resources. However, it is also possible to generate resources into existing resources, namely Namespaces. This can be extremely useful when deploying Kyverno to an existing cluster in use, where you wish policy to apply retroactively.

Normally, Kyverno does not alter existing objects in any way, as a central tenet of its design. However, using this method of controlled roll-out, you may use generate rules to create new objects in existing namespaces. To do so, follow these steps:

  1. Identify a Kubernetes label or annotation which is not yet defined on any Namespace but can be added to existing ones, signaling to Kyverno that these namespaces should be targets for generate rules. The metadata can be anything, but it should be descriptive for this purpose, not in use anywhere else, and not use reserved prefixes such as kubernetes.io or kyverno.io.

  2. Create a ClusterPolicy with a rule containing a match statement which matches on kind Namespace as well as the label or annotation you have set aside. The sync-secret policy below matches not only on Namespaces but also on the label mycorp-rollout=true, and copies into these namespaces a Secret called corp-secret stored in the default Namespace.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-secret
spec:
  rules:
  - name: sync-secret
    match:
      resources:
        kinds:
        - Namespace
        selector:
          matchLabels:
            mycorp-rollout: "true"
    generate:
      kind: Secret
      name: corp-secret
      namespace: "{{request.object.metadata.name}}"
      synchronize: true
      clone:
        namespace: default
        name: corp-secret

  3. Create the policy as usual.

  4. On an existing namespace where you wish to have the Secret corp-secret copied into it, label it with mycorp-rollout=true. This step must be completed after the ClusterPolicy exists. If it is labeled before, Kyverno will not see the request.

$ kubectl label ns prod-bus-app1 mycorp-rollout=true

namespace/prod-bus-app1 labeled

  5. Check the Namespace you just labeled to see if the Secret exists.

$ kubectl -n prod-bus-app1 get secret

NAME          TYPE     DATA   AGE
corp-secret   Opaque   2      10s

  6. Repeat these steps as needed on any additional namespaces where you wish this ClusterPolicy to apply its generate rule.

If you would like Kyverno to remove the resource it generated into these existing namespaces, you may unlabel the namespace.

$ kubectl label ns prod-bus-app1 mycorp-rollout-

The Secret from the previous example should be removed.
