Thanos Blog

Integrate ArgoCD to deploy application through Helmfile

29 Feb 2024


When my team was tasked with automating the deployment of a project described by a Helmfile, we ran into a challenge: our objective was not only to automate the deployment but also to feed environment variables into the process.

A helpful solution was proposed by Christian Huth in his article, which you can find here. However, his solution did not handle environment variables. In summary, the approach outlined in the article uses the cmp plugin feature of ArgoCD, which spawns a sidecar container with access to the Helmfile CLI tool. ArgoCD communicates with this sidecar container to retrieve all the Kubernetes manifests required by the application.
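For context, environment variables reach the plugin through the Application manifest: ArgoCD prefixes every entry under spec.source.plugin.env with ARGOCD_ENV_ before exposing it to the sidecar. A minimal sketch, where the application name, repository URL, and DOMAIN variable are placeholders for illustration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                # hypothetical application name
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  source:
    repoURL: https://example.com/my/repo.git   # placeholder repository
    targetRevision: main
    path: .
    plugin:
      env:
        - name: DOMAIN        # visible as ARGOCD_ENV_DOMAIN inside the sidecar
          value: example.com
```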

To follow along with the steps outlined in this article, you’ll need to deploy ArgoCD using its official Helm chart. If you don’t already have a Kubernetes cluster set up, you can refer to one of the following articles for instructions on deploying a local k3s cluster (work in progress) or a free tier AWS EKS cluster (work in progress).
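If you are installing ArgoCD from scratch, the chart can be deployed roughly like this; the release name and namespace are illustrative, and values.yaml is the values file whose relevant sections are shown below:

```shell
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argocd argo/argo-cd \
  --namespace argocd --create-namespace \
  -f values.yaml   # values file containing the cmp plugin configuration
```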

Configuring the integration between Helmfile and ArgoCD involves two main parts within the values file of the ArgoCD Helm chart: configs.cmp.plugins and repoServer.extraContainers.

configs.cmp.plugins

configs:
  cmp:
    create: true
    plugins:
      helmfile:
        allowConcurrency: true
        discover:
          fileName: helmfile.yaml
        generate:
          command:
            - bash
            - "-c"
            - |
              # Collect ARGOCD_ENV_-prefixed variables and strip the prefix
              declare -a domain_array
              while IFS= read -r line; do
                domain_array+=("$line")
              done < <(env | grep 'ARGOCD_ENV_' | sed 's/ARGOCD_ENV_\(.*\)=\(.*\)/\1=\2/')
              # Export each variable under its unprefixed name
              for item in "${domain_array[@]}"; do
                eval "export $item"
              done
              # Render the Helmfile into plain Kubernetes manifests
              helmfile -n "$ARGOCD_APP_NAMESPACE" template --include-crds -q
        lockRepo: false

repoServer.extraContainers

repoServer:
  extraContainers:
    - name: helmfile
      image: ghcr.io/helmfile/helmfile:v0.157.0
      command: ["/var/run/argocd/argocd-cmp-server"]
      env:
        - name: HELM_CACHE_HOME
          value: /tmp/helm/cache
        - name: HELM_CONFIG_HOME
          value: /tmp/helm/config
        - name: HELMFILE_CACHE_HOME
          value: /tmp/helmfile/cache
        - name: HELMFILE_TEMPDIR
          value: /tmp/helmfile/tmp
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
      volumeMounts:
        - mountPath: /var/run/argocd
          name: var-files
        - mountPath: /home/argocd/cmp-server/plugins
          name: plugins
        - mountPath: /home/argocd/cmp-server/config/plugin.yaml
          subPath: helmfile.yaml
          name: argocd-cmp-cm
        - mountPath: /tmp
          name: cmp-tmp
  volumes:
    - name: argocd-cmp-cm
      configMap:
        name: argocd-cmp-cm
    - name: cmp-tmp
      emptyDir: {}

It’s important to note that if you wish to use Helmfile environments, as described in the reference article, you will need to adjust the generate command accordingly.
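As a sketch, assuming the environment name is passed in through the Application’s plugin env (here as a hypothetical HELMFILE_ENVIRONMENT variable), the generate command could select it with Helmfile’s -e flag:

```shell
helmfile -n "$ARGOCD_APP_NAMESPACE" \
  -e "$ARGOCD_ENV_HELMFILE_ENVIRONMENT" \
  template --include-crds -q
```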

The script under configs.cmp.plugins.helmfile.generate.command plays a pivotal role in converting Helmfile configurations into Kubernetes manifests while addressing the challenge with environment variables. Let’s break down how it works:

1. The script initializes an empty array.
2. It populates the array with all environment variables prefixed with ARGOCD_ENV_, stripping the prefix with sed.
3. It exports the cleaned variables using the eval command.
4. Finally, it renders the Helmfile into Kubernetes manifests with helmfile template.
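The prefix handling in steps 2 and 3 can be reproduced in isolation; ARGOCD_ENV_DOMAIN below is an assumed example variable, not something the plugin defines:

```shell
# Simulate a variable injected by ArgoCD (hypothetical example value)
export ARGOCD_ENV_DOMAIN="example.com"

# Strip the ARGOCD_ENV_ prefix, exactly as the plugin script does with sed
env | grep 'ARGOCD_ENV_' | sed 's/ARGOCD_ENV_\(.*\)=\(.*\)/\1=\2/'
# prints: DOMAIN=example.com
```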

By understanding and implementing these configurations, you can effectively integrate Helmfile with ArgoCD, streamlining your deployment process while keeping support for environment variables.