Helm hooks examples in Kubernetes for beginners

Helm hooks provide a means to hook into events in the release process and take action. This is useful if you want to bundle actions as part of a release, for example building in the ability to back up a database as part of the upgrade process while ensuring that the backup occurs prior to upgrading the Kubernetes resources.

 

Overview of Helm Hooks

Hooks are written like regular templates, and the functionality they encapsulate is provided through containers running in the Kubernetes cluster alongside the other resources for your application. What distinguishes a hook from other resources is a special annotation: when Helm sees the helm.sh/hook annotation, it treats the resource as a hook instead of a resource to be installed as part of the application deployed by the chart.


The following hooks are available in helm charts:

  • pre-install: Executes after resources are rendered but prior to those resources being uploaded to Kubernetes.
  • post-install: Executes after resources have been uploaded to Kubernetes.
  • pre-delete: Executes on a deletion request prior to any resources being deleted from Kubernetes.
  • post-delete: Executes after all resources have been deleted from Kubernetes.
  • pre-upgrade: Executes after resources are rendered but prior to resources being updated in Kubernetes.
  • post-upgrade: Executes after resources have been upgraded in Kubernetes.
  • pre-rollback: Executes after resources have been rendered but prior to any resources in Kubernetes being rolled back.
  • post-rollback: Executes after resources have been rolled back in Kubernetes.
  • test: Executes when the helm test command is run.

A single resource can implement more than one hook by listing them as a comma-separated list. For example:

annotations:
  "helm.sh/hook": pre-install,pre-upgrade

The weight, specified with the helm.sh/hook-weight annotation, is a number represented as a string (it must always be quoted as a string). The weight can be a positive or negative number and defaults to 0. Before executing hooks, Helm sorts them by weight in ascending order.
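As a sketch with hypothetical resource names, two pre-install hooks with different weights run in ascending weight order, so db-init executes before cache-warm:

```yaml
# Runs first: lower weight sorts earlier.
apiVersion: batch/v1
kind: Job
metadata:
  name: db-init
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-1"
---
# Runs second.
apiVersion: batch/v1
kind: Job
metadata:
  name: cache-warm
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "1"
```

The Job specs are omitted for brevity; only the annotations matter for ordering.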

The deletion policy, set using the annotation key helm.sh/hook-delete-policy, is a comma-separated list of policy options. Following are the three possible deletion policies:

  • before-hook-creation: The previous resource is deleted before a new instance of this hook is launched (this is the default).
  • hook-succeeded: The Kubernetes resource is deleted after the hook runs successfully.
  • hook-failed: The Kubernetes resource is deleted if the hook failed while executing.

 

Helm hooks vs init containers

A common point of confusion is how helm hooks differ from init containers, since at least the pre-install hook seems similar to an init container. The key differences are:

  • An init container executes before the main container(s) of every pod it is defined in. For example, if you have a deployment or statefulset with 3 replicas, each with a single container, then the init container is executed for each of those pods. If a pod crashes, the init container is executed again before the pod's container comes back up.
  • A helm hook, on the other hand, is a separate Kubernetes object, typically a Job. You can arrange for this Job to run exactly once on each helm install or upgrade and at no other time, which makes it a reasonable place to run things like migrations. A pre-install hook is executed only once, before the release's pods come up; even if a pod later crashes, the replacement pod starts without the hook running again, as it was a one-time Job.
  • A helm hook creates a completely separate Pod or Job, which means it cannot reach the main pods directly over localhost or share their volume mounts, while an init container can do so.
  • An init container, which is comparable to a pre-install hook, is limited in that it can only perform initial tasks before the main pod starts; it cannot run tasks that need to execute after the pod is started, for example any clean-up activity.
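To make the contrast concrete, here is a minimal sketch (the names are hypothetical) of an init container, which runs inside every replica's pod and again on every pod restart, unlike a hook, which is a standalone object created once per release event:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      # This init container runs inside each of the 3 replica pods
      # before the main container starts, and runs again whenever
      # a pod is recreated.
      initContainers:
      - name: wait-for-db
        image: busybox
        command: ['sh', '-c', 'until nc -z db 5432; do sleep 2; done']
      containers:
      - name: web
        image: nginx
```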

I have already brought up a multi-node kubernetes cluster using kubeadm and will use the /helm-charts directory to store all my helm charts.


 

Example-1: Create pre-install and post-install hook pods

In this example we will use a pre-install and a post-install hook, each running as a Pod with the helm chart. The first step is to create a chart:

[root@controller helm-charts]# helm create chart-2
Creating chart-2

Following is the structure of the chart; by default helm creates a chart for an nginx deployment.

(screenshot: directory structure of the newly created chart-2)

We will create our own template files so I will delete the existing YAML files:

[root@controller helm-charts]# rm -rf chart-2/templates/*

Next we will create a pre-install helm hook as a Pod; the helm.sh/hook annotation is defined in the annotations section:

[root@controller helm-charts]# cat chart-2/templates/pre-install-hook.yaml
apiVersion: v1
kind: Pod
metadata:
  name: preinstall-hook
  annotations:
    "helm.sh/hook": "pre-install"
spec:
  containers:
  - name: pre-install-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'echo The pre-install hook is running && sleep 20' ]
  restartPolicy: Never
  terminationGracePeriodSeconds: 0

Similarly I will create one post-install helm hook as a Pod:

[root@controller helm-charts]# cat chart-2/templates/post-install-hook.yaml
apiVersion: v1
kind: Pod
metadata:
  name: postinstall-hook
  annotations:
    "helm.sh/hook": "post-install"
spec:
  containers:
  - name: post-install-container
    image: busybox
    imagePullPolicy: Always
    command: ['sh', '-c', 'echo The post-install hook is running && sleep 15' ]
  restartPolicy: Never
  terminationGracePeriodSeconds: 0

Now we will also create one statefulset, as the post-install hook will be executed only once our main resources have been created in Kubernetes. Following is the YAML file for our statefulset:

[root@controller helm-charts]# cat chart-2/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-statefulset
spec:
  selector:
    matchLabels:
      name: nginx-statefulset
  serviceName: nginx-statefulset
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx-statefulset
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx-statefulset
        image: nginx
        imagePullPolicy: Always

So our pre-install hook will sleep for 20 seconds while the post-install hook will sleep for 15 seconds. This will help us understand the sequence and execution time of individual helm hooks.

Let us lint the chart to make sure there are no issues:

[root@controller helm-charts]# helm lint chart-2/
==> Linting chart-2/
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed

Next, before we install the chart, let us perform a --dry-run to validate the rendered templates:

[root@controller helm-charts]# helm install --dry-run helm-hooks ./chart-2

If everything is correct then you should get the following output (I have trimmed it):

[root@controller helm-charts]# helm install --dry-run helm-hooks ./chart-2
NAME: helm-hooks
LAST DEPLOYED: Thu Mar 18 12:52:21 2021
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
HOOKS:
---
# Source: chart-2/templates/post-install-hook.yaml
apiVersion: v1
kind: Pod
...

So now we can install our chart; we will deploy it with the name "helm-hooks":

[root@controller helm-charts]# helm install helm-hooks ./chart-2
NAME: helm-hooks
LAST DEPLOYED: Thu Mar 18 12:53:33 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

Once deployed, check the list of pods. We now have 3 pods, where preinstall-hook and postinstall-hook are marked as Completed while the main pod is in Running state:

(screenshot: kubectl get pods showing preinstall-hook and postinstall-hook Completed and nginx-statefulset-0 Running)

We can check the time of execution for both pre and post install hooks:

(screenshot: start and completion timestamps of the pre-install and post-install hook pods)

As expected, the pre-install helm hook ran for 20 seconds, while the post-install hook ran for 15 seconds after the main statefulset pod was created.

We will delete these hook pods and the helm chart:

[root@controller helm-charts]# helm uninstall helm-hooks
release "helm-hooks" uninstalled

As you can see, deleting the helm chart does not delete the pods which were created by the hooks:

[root@controller helm-charts]# kubectl get pods
NAME               READY   STATUS      RESTARTS   AGE
postinstall-hook   0/1     Completed   0          12m
preinstall-hook    0/1     Completed   0          12m

This is why we should use the "helm.sh/hook-delete-policy" annotation to delete a hook resource once it has successfully executed. For now let us manually delete these pods:

[root@controller ~]# kubectl delete pod postinstall-hook preinstall-hook
pod "postinstall-hook" deleted
pod "preinstall-hook" deleted
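To avoid this manual cleanup, we could add a delete policy to the hook annotations. A sketch of what that would look like applied to the preinstall-hook pod from this example:

```yaml
metadata:
  name: preinstall-hook
  annotations:
    "helm.sh/hook": "pre-install"
    # Delete the pod once it succeeds, and also remove any leftover
    # instance before the hook is launched again on a re-install.
    "helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation
```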

 

Example-2: Create pre-install ConfigMap, Secret and Job hooks

We can use helm hooks to provide variables to a Pod through a ConfigMap and Secret, although I would prefer to just create normal ConfigMaps and Secrets instead of hooks for this purpose.

Let us create chart-3 to demonstrate the pre-install hook use case with a ConfigMap and Secret:

[root@controller helm-charts]# helm create chart-3
Creating chart-3

We will delete the existing templates so we can create our own template files:

[root@controller helm-charts]# rm -rf chart-3/templates/*

[root@controller helm-charts]# tree chart-3/
chart-3/
├── charts
├── Chart.yaml
├── templates
└── values.yaml

2 directories, 2 files

Next we will create our ConfigMap, Secret, and a Job which uses them:

[root@controller helm-charts]# cat chart-3/templates/pre-install-hook.yaml
apiVersion: v1
kind: Secret
metadata:
  name: pre-install-secret
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "0"
#    "helm.sh/hook-delete-policy": before-hook-creation
type: Opaque
data:
  KEY3: YWRtaW4=
  KEY4: UGFzc3cwcmQ=
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: pre-install-configmap
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "1"
#    "helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
data:
   KEY1: value1
   KEY2: value2
---
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-install-job
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "2"
#    "helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
spec:
  backoffLimit: 0
  template:
    metadata:
      name: testing-hooks
    spec:
      restartPolicy: Never
      containers:
      - name: hook-test
        image: alpine
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', 'printenv | grep KEY']
        envFrom:
           - configMapRef:
               name: pre-install-configmap
           - secretRef:
               name: pre-install-secret

Here,

  • We have a ConfigMap pre-install hook which sets the KEY1 and KEY2 environment variables.
  • The Secret pre-install hook sets the KEY3 and KEY4 environment variables; their values are base64 encoded. You can generate such values with "echo -n TEXT | base64" from the shell.
  • I am setting environment variables here, but you can also store these values in a file and mount the file using volumes.
  • The third pre-install hook is a Job which uses the ConfigMap and Secret.
  • To make sure our environment variables are properly applied, the Job runs printenv and greps for our KEYs.
  • We have explicitly commented out the hook-delete-policy so that we can verify the output of the printenv command executed as part of the Job.
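The base64 values used in the Secret can be generated from the shell; note the -n flag so that echo does not append a trailing newline, which would change the encoding:

```shell
# Encode the plain-text values for the Secret's data section.
echo -n 'admin' | base64       # YWRtaW4=
echo -n 'Passw0rd' | base64    # UGFzc3cwcmQ=
```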

We will also create one statefulset so that the pre-install hooks execute before the main pod is instantiated:

[root@controller helm-charts]# cat chart-3/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-statefulset
spec:
  selector:
    matchLabels:
      name: nginx-statefulset
  serviceName: nginx-statefulset
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx-statefulset
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx-statefulset
        image: nginx
        imagePullPolicy: Always

Next let us lint our chart templates to identify any possible issues:

[root@controller helm-charts]# helm lint chart-3/
==> Linting chart-3/
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed

Next we will execute the helm install command with --dry-run to identify any potential instantiation issues (output is trimmed):

[root@controller helm-charts]# helm install --dry-run helm-hooks chart-3/
NAME: helm-hooks
LAST DEPLOYED: Thu Mar 18 13:55:50 2021
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
HOOKS:
---
# Source: chart-3/templates/pre-install-hook.yaml
apiVersion: v1
kind: Secret
...

Since the --dry-run execution was successful, now we can install our helm chart:

[root@controller helm-charts]# helm install helm-hooks chart-3/
NAME: helm-hooks
LAST DEPLOYED: Thu Mar 18 13:45:28 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

The deployment of the hooks was successful; check the list of pods:

[root@controller helm-charts]# kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
nginx-statefulset-0     0/1     ContainerCreating   0          4s
pre-install-job-5xmjd   0/1     Completed           0          6s

Our pre-install job completed successfully and the main pod is now creating its container. Let's check the logs of pre-install-job to make sure the environment variables were properly assigned:

[root@controller helm-charts]# kubectl logs pre-install-job-5xmjd
KEY1=value1
KEY2=value2
KEY3=admin
KEY4=Passw0rd

As you can see, the secret values were decoded and assigned as environment variables in our Job.
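Kubernetes performs this base64 decoding automatically before injecting the values via envFrom; we can reproduce the decoding step from the shell to confirm what the container receives:

```shell
# Decode the Secret's stored values; this is what Kubernetes does
# internally before exposing them as environment variables.
echo 'YWRtaW4=' | base64 -d        # admin
echo 'UGFzc3cwcmQ=' | base64 -d    # Passw0rd
```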

Let me uninstall the helm chart and delete the pre-install job before we move on to our next example:

[root@controller helm-charts]# helm uninstall helm-hooks
release "helm-hooks" uninstalled

[root@controller helm-charts]# kubectl delete job pre-install-job
job.batch "pre-install-job" deleted

 

Example-3: Create pre-install hooks with ConfigMap and Secret and use them in the main Pod

In the last example we created a ConfigMap and Secret as helm pre-install hooks but used them only inside our Job. In this example we will use them inside our main statefulset pod.

We will create chart-4 and delete the existing template files:

[root@controller helm-charts]# helm create chart-4
Creating chart-4

[root@controller helm-charts]# rm -rf chart-4/templates/*

[root@controller helm-charts]# tree chart-4/
chart-4/
├── charts
├── Chart.yaml
├── templates
└── values.yaml

2 directories, 2 files

We will reuse the ConfigMap and Secret from the previous example. I have created separate files for the ConfigMap and the Secret, and removed the Job as it is not required in this example.

Sample pre-install helm hook for the ConfigMap:

[root@controller helm-charts]# cat chart-4/templates/pre-install-ConfigMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pre-install-configmap
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "1"
#    "helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
data:
   KEY1: value1
   KEY2: value2

Sample pre-install helm hook for the Secret:

[root@controller helm-charts]# cat chart-4/templates/pre-install-Secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: pre-install-secret
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "0"
#    "helm.sh/hook-delete-policy": before-hook-creation
type: Opaque
data:
  KEY3: YWRtaW4=
  KEY4: UGFzc3cwcmQ=

Following is the template for our main pod i.e. statefulset:

[root@controller helm-charts]# cat chart-4/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-statefulset
spec:
  selector:
    matchLabels:
      name: nginx-statefulset
  serviceName: nginx-statefulset
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx-statefulset
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx-statefulset
        image: nginx
        imagePullPolicy: Always
        envFrom:
           - configMapRef:
               name: pre-install-configmap
           - secretRef:
               name: pre-install-secret

The statefulset references the ConfigMap and Secret via envFrom. Make sure the names of the ConfigMap and Secret match those used in the individual YAML files.

Next we follow our routine procedure: first lint the chart to look for any issues:

[root@controller helm-charts]# helm lint chart-4
==> Linting chart-4
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed

Next perform a dry run installation of the helm chart (output trimmed):

[root@controller helm-charts]# helm install --dry-run helm-hooks chart-4/
NAME: helm-hooks
LAST DEPLOYED: Thu Mar 18 14:21:35 2021
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
HOOKS:
---
# Source: chart-4/templates/pre-install-Secret.yaml
apiVersion: v1
kind: Secret
metadata:
...

Since the dry run was successfully executed, now we can go ahead and install our helm chart:

[root@controller helm-charts]# helm install helm-hooks chart-4/
NAME: helm-hooks
LAST DEPLOYED: Thu Mar 18 14:22:23 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

Verify the list of available pods:

[root@controller helm-charts]# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          9s

Next run printenv inside the container of this Pod to make sure the environment variables are properly set:

[root@controller helm-charts]# kubectl exec -it nginx-statefulset-0 -- printenv | grep KEY
KEY1=value1
KEY2=value2
KEY3=admin
KEY4=Passw0rd

So the variables from our pre-install ConfigMap and Secret hooks are properly applied to our main pod. Let us uninstall this chart before going to the next example:

[root@controller helm-charts]# helm uninstall helm-hooks
release "helm-hooks" uninstalled

You can also store these values in files instead of setting them as environment variables. To mount the ConfigMap and Secret as files, modify the pre-install hook files as shown below:

Sample pre-install helm hook to store the Secret in a separate file (note that stringData takes plain text, so the base64 strings below will be written to the file verbatim):

[root@controller helm-charts]# cat chart-4/templates/pre-install-Secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: pre-install-secret
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "0"
#    "helm.sh/hook-delete-policy": before-hook-creation
type: Opaque
stringData:
  variables-secrets.conf: |
     KEY3: YWRtaW4=
     KEY4: UGFzc3cwcmQ=

Sample pre-install helm hook to store the ConfigMap in a separate file:

[root@controller helm-charts]# cat chart-4/templates/pre-install-ConfigMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pre-install-configmap
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "1"
#    "helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
data:
  variables-configmap.conf: |
     KEY1=value1
     KEY2=value2

We will also modify our statefulset.yaml to consume these values and store them as files under different mount points:

[root@controller helm-charts]# cat chart-4/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-statefulset
spec:
  selector:
    matchLabels:
      name: nginx-statefulset
  serviceName: nginx-statefulset
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx-statefulset
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx-statefulset
        image: nginx
        imagePullPolicy: Always
        envFrom:
           - secretRef:
               name: pre-install-secret
        volumeMounts:
         - name: variables-configmap
           mountPath: /mnt/var1
         - name: variables-secrets
           mountPath: /mnt/var2
      volumes:
      - name: variables-configmap
        configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
          name: pre-install-configmap
      - name: variables-secrets
        secret:
        # Provide the name of the Secret containing the files you want
        # to add to the container
          secretName: pre-install-secret

Now let us install this chart:

[root@controller helm-charts]# helm install helm-hooks chart-4/
NAME: helm-hooks
LAST DEPLOYED: Thu Mar 18 15:39:23 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

Verify that your statefulset pod is running:

[root@controller helm-charts]# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          4s

We can connect to this Pod and check the variable files which were created from our pre-install hook resources:

[root@controller helm-charts]# kubectl exec -it nginx-statefulset-0 -- /bin/bash

root@nginx-statefulset-0:/# ls -l /mnt
total 4
drwxrwxrwx 3 root root 4096 Mar 18 10:17 var1
drwxrwxrwt 3 root root  100 Mar 18 10:17 var2

root@nginx-statefulset-0:/# ls -l /mnt/var1/
total 0
lrwxrwxrwx 1 root root 31 Mar 18 10:17 variables-configmap.conf -> ..data/variables-configmap.conf

root@nginx-statefulset-0:/# ls -l /mnt/var2/
total 0
lrwxrwxrwx 1 root root 29 Mar 18 10:17 variables-secrets.conf -> ..data/variables-secrets.conf

root@nginx-statefulset-0:/# cat /mnt/var1/variables-configmap.conf
KEY1=value1
KEY2=value2

root@nginx-statefulset-0:/# cat /mnt/var2/variables-secrets.conf
KEY3: YWRtaW4=
KEY4: UGFzc3cwcmQ=

Note that the secret file still contains the base64 strings: stringData stores the supplied text verbatim (only values placed under the data field are base64-decoded by Kubernetes), so the literal strings we provided are what land in the file.

Summary

In this tutorial we explored helm hooks and their usage. We covered the pre-install and post-install hooks in detail with multiple examples. We also created ConfigMaps and Secrets using pre-install hooks which could be used by a Job or by the main Pod. By default, Helm keeps the Kubernetes resources created for hooks until the hook is run again, which provides the ability to inspect the logs or other information about a hook after it has run. If the hooks were deleted right after execution it would be hard to analyse the logs from the respective Jobs, which is why we commented out the hook-delete-policy annotations.

 

