Home-Lab Refresh: Kubernetes Cluster ArgoCD

February 2022

While testing the installation of Pi-hole, I found that Flux does not properly reconcile resources that have been changed manually in Kubernetes. This is an open issue with Flux, with no sign of resolution, so I’ve decided to switch from Flux to ArgoCD.

To do this, we first need to remove Flux from the git repository and then add what we need for ArgoCD.

  1. Flux removal
  2. ArgoCD - Git preparation
  3. ArgoCD - CLI Install
  4. ArgoCD - K8s Install
  5. ArgoCD - App of Apps
  6. ArgoCD - Bootstrapping

Flux removal

To remove Flux we simply need to delete the configuration that was added previously. Since the configuration is stored in git, this is as easy as removing it from the repository and pushing the commit to GitHub.

Restoring the Kubernetes cluster is just a matter of rebuilding it from scratch, which is straightforward since we are using Terraform and Ansible to provision and configure the servers. I’ve created a rebuild script to do much of this work; it’s fairly straightforward and does the following (a sketch of the script follows the list).

  • Dynamically grabs a list of the Kubernetes servers from the Ansible inventory.
  • Taints the relevant resources in the Terraform state.
  • Runs a Terraform apply to remove and then recreate the resources previously tainted.
  • Removes any saved SSH host keys.
  • Runs the k8s-all.yml Ansible Playbook against the newly created servers.
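
In outline, the script looks something like the following; a minimal sketch only, where the inventory group name and the Terraform resource addresses are assumptions rather than the exact contents of the script.

#!/usr/bin/env bash
# rebuild-k8s.sh - illustrative sketch only.
set -euo pipefail

# Dynamically grab the Kubernetes hosts from the Ansible inventory
# (assumes a "kubernetes" group exists in the inventory).
HOSTS=$(ansible-inventory -i ansible/production --list | jq -r '.kubernetes.hosts[]')

# Taint the VM resources so Terraform destroys and recreates them
# (assumes one kvm_virtual_machine module instance per host).
cd terraform/infrastructure/kubernetes
for host in ${HOSTS}; do
  terraform taint "module.${host}.libvirt_domain.vm"
done
terraform apply -auto-approve
cd -

# Remove any saved SSH host keys, then configure the new servers.
for host in ${HOSTS}; do
  ssh-keygen -R "${host}"
done
ansible-playbook -i ansible/production ansible/k8s-all.yml

Tainting rather than destroying keeps the rest of the Terraform state untouched, so only the Kubernetes VMs are recreated.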

After running this script, we have a fresh Kubernetes cluster to work with.

root@k8s-controller-01:~# kubectl get nodes
NAME                STATUS   ROLES                  AGE     VERSION
k8s-controller-01   Ready    control-plane,master   4m12s   v1.23.4
k8s-controller-02   Ready    control-plane,master   2m39s   v1.23.4
k8s-controller-03   Ready    control-plane,master   2m57s   v1.23.4
k8s-worker-01       Ready    <none>                 106s    v1.23.4
k8s-worker-02       Ready    <none>                 106s    v1.23.4
k8s-worker-03       Ready    <none>                 106s    v1.23.4

ArgoCD - Git preparation

Now we can move on to installing ArgoCD onto Kubernetes. The ArgoCD getting started docs have general steps for quickly setting it up, but I want this to be fully managed using git.

Luckily we can manage ArgoCD itself using ArgoCD. To achieve this we’ll use the App-of-Apps pattern to deploy applications declaratively.

Currently the git repository looks like the following.

[user@workstation homelab]$ tree -dL 3
.
├── ansible
│   ├── group_vars
│   ├── host_vars
│   └── roles
│       ├── common
│       ├── db_server
│       ├── dns_server
│       ├── docker
│       ├── hardware
│       ├── kubernetes_common
│       ├── kubernetes_controller
│       ├── kubernetes_worker
│       ├── kvm_hypervisor
│       └── pacemaker
├── pki
└── terraform
    ├── infrastructure
    │   ├── core_services
    │   ├── hypervisors
    │   └── kubernetes
    └── modules
        └── kvm_virtual_machine

22 directories

After creating the ArgoCD directories, it now looks like the following.

[user@workstation homelab]$ tree -dL 3
.
├── ansible
│   ├── group_vars
│   ├── host_vars
│   └── roles
│       ├── common
│       ├── db_server
│       ├── dns_server
│       ├── docker
│       ├── hardware
│       ├── kubernetes_common
│       ├── kubernetes_controller
│       ├── kubernetes_worker
│       ├── kvm_hypervisor
│       └── pacemaker
├── kubernetes
│   ├── apps
│   ├── core
│   │   ├── apps
│   │   └── argocd-system
│   └── infrastructure
├── pki
└── terraform
    ├── infrastructure
    │   ├── core_services
    │   ├── hypervisors
    │   └── kubernetes
    └── modules
        └── kvm_virtual_machine

28 directories

With this new structure in place, we need to add the files required by ArgoCD.

[user@workstation homelab]$ tree kubernetes
kubernetes
├── apps
├── core
│   ├── apps
│   │   ├── Chart.yaml
│   │   ├── templates
│   │   │   └── argocd-system.yaml
│   │   └── values.yaml
│   └── argocd-system
│       ├── Chart.yaml
│       └── values.yaml
├── infrastructure
└── README.md

6 directories, 6 files

Newly created ArgoCD files

The new ArgoCD files we have created are:

kubernetes/core/apps/Chart.yaml

---
apiVersion: v2
name: applications
version: 1.0.0
type: application

kubernetes/core/apps/templates/argocd-system.yaml

---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd-system
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: argocd-system
    server: {{ .Values.spec.destination.server }}
  project: default
  source:
    path: kubernetes/core/argocd-system
    repoURL: {{ .Values.spec.source.repoURL }}
    targetRevision: {{ .Values.spec.source.targetRevision }}
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true

kubernetes/core/apps/values.yaml

---
spec:
  destination:
    server: https://kubernetes.default.svc
  source:
    repoURL: https://github.com/eyulf/homelab.git
    targetRevision: main
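
The argocd-system.yaml template pulls spec.destination.server, spec.source.repoURL, and spec.source.targetRevision from this values.yaml when the chart is rendered. To confirm the values are injected as expected before committing, the chart can be rendered locally:

[user@workstation homelab]$ helm template core-apps kubernetes/core/apps

This prints the fully rendered Application manifest to stdout.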

kubernetes/core/argocd-system/Chart.yaml

---
name: argocd
apiVersion: v2
version: 1.0.0
dependencies:
  - name: argo-cd
    version: 3.28.1
    repository: https://argoproj.github.io/argo-helm

kubernetes/core/argocd-system/values.yaml

---
templates: {}
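
An effectively empty values file like this just runs the upstream chart with its defaults. If we later want to override anything in the wrapped argo-cd chart, Helm nests a dependency’s values under its name; a hypothetical example:

---
argo-cd:
  server:
    extraArgs:
      - --insecure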

ArgoCD - CLI Install

Now that we have the git repository ready for ArgoCD to use, we need to install ArgoCD on the Kubernetes controllers so that we can bootstrap it. This is done using updated Ansible configuration.

Variables

The following additional variables have been used for this.

ansible/group_vars/k8s_controllers.yml

argocd_version: 'v2.2.5'
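
Within the kubernetes_controller role, the task this variable drives boils down to fetching the release binary; a minimal sketch, assuming the standard GitHub release asset name (this is not the exact role contents):

- name: argocd | download argocd binary
  ansible.builtin.get_url:
    url: "https://github.com/argoproj/argo-cd/releases/download/{{ argocd_version }}/argocd-linux-amd64"
    dest: /usr/local/bin/argocd
    mode: '0755'
  tags:
    - argocd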

Commands

cd ansible
ansible-playbook -i production k8s-controllers.yml -t argocd

Output

[user@workstation ansible]$ ansible-playbook -i production k8s-controllers.yml -t argocd

PLAY [k8s_controllers] ************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************
ok: [k8s-controller-03]
ok: [k8s-controller-01]
ok: [k8s-controller-02]
[WARNING]: flush_handlers task does not support when conditional

TASK [kubernetes_controller : kubernetes-controller | set variables] **************************************************
ok: [k8s-controller-01] => (item=/homelab/ansible/roles/kubernetes_controller/vars/default.yml)
ok: [k8s-controller-02] => (item=/homelab/ansible/roles/kubernetes_controller/vars/default.yml)
ok: [k8s-controller-03] => (item=/homelab/ansible/roles/kubernetes_controller/vars/default.yml)

TASK [kubernetes_controller : argocd | download argocd binary] ***************************************************
changed: [k8s-controller-02]
changed: [k8s-controller-03]
changed: [k8s-controller-01]

PLAY RECAP ************************************************************************************************************
k8s-controller-01          : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
k8s-controller-02          : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
k8s-controller-03          : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Next we need to install ArgoCD into Kubernetes itself.

ArgoCD - K8s Install

Now that we have both the ArgoCD CLI installed and the git repository prepared, we need to start using them. The first task is to install the ArgoCD Kubernetes pods; since we are using Helm, we will use the ArgoCD Helm chart.

First we need to pull the git repository onto the Kubernetes controller.

root@k8s-controller-01:~# git clone https://github.com/eyulf/homelab.git homelab
Cloning into 'homelab'...
remote: Enumerating objects: 1025, done.
remote: Counting objects: 100% (1025/1025), done.
remote: Compressing objects: 100% (636/636), done.
remote: Total 1025 (delta 351), reused 910 (delta 237), pack-reused 0
Receiving objects: 100% (1025/1025), 516.00 KiB | 4.26 MiB/s, done.
Resolving deltas: 100% (351/351), done.

Now we can simply use the Helm chart we previously added to the git repository.

root@k8s-controller-01:~# cd homelab/kubernetes/core/argocd-system/
root@k8s-controller-01:~/homelab/kubernetes/core/argocd-system# helm dependencies update
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config
Getting updates for unmanaged Helm repositories...
...Successfully got an update from the "https://argoproj.github.io/argo-helm" chart repository
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "projectcalico" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading argo-cd from repo https://argoproj.github.io/argo-helm
Deleting outdated charts
root@k8s-controller-01:~/homelab/kubernetes/core/argocd-system# kubectl create namespace argocd-system
namespace/argocd-system created
root@k8s-controller-01:~/homelab/kubernetes/core/argocd-system# helm install -n argocd-system argocd . -f values.yaml
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config
NAME: argocd
LAST DEPLOYED: Thu Feb 24 8:38:36 2022
NAMESPACE: argocd-system
STATUS: deployed
REVISION: 1
TEST SUITE: None

Once the new pods are deployed, ArgoCD is installed on the cluster. Looking closer at what Helm has done, we can see the new resources.

adminuser@k8s-controller-01:~$ sudo kubectl get all -n argocd-system
NAME                                             READY   STATUS    RESTARTS   AGE
argocd-application-controller-6d79d8f8c9-vnvdw   1/1     Running   0          96s
argocd-dex-server-867457f597-ntw2j               1/1     Running   0          96s
argocd-redis-84675744fc-sbtv9                    1/1     Running   0          96s
argocd-repo-server-89ccd47b8-xdfrb               1/1     Running   0          96s
argocd-server-6d5b59bc4c-6mzhx                   1/1     Running   0          96s

NAME                                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/argocd-application-controller   ClusterIP   10.104.168.34   <none>        8082/TCP            96s
service/argocd-dex-server               ClusterIP   10.109.69.211   <none>        5556/TCP,5557/TCP   96s
service/argocd-redis                    ClusterIP   10.104.201.26   <none>        6379/TCP            96s
service/argocd-repo-server              ClusterIP   10.108.185.85   <none>        8081/TCP            96s
service/argocd-server                   ClusterIP   10.105.79.87    <none>        80/TCP,443/TCP      96s

NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/argocd-application-controller   1/1     1            1           96s
deployment.apps/argocd-dex-server               1/1     1            1           96s
deployment.apps/argocd-redis                    1/1     1            1           96s
deployment.apps/argocd-repo-server              1/1     1            1           96s
deployment.apps/argocd-server                   1/1     1            1           96s

NAME                                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/argocd-application-controller-6d79d8f8c9   1         1         1       96s
replicaset.apps/argocd-dex-server-867457f597               1         1         1       96s
replicaset.apps/argocd-redis-84675744fc                    1         1         1       96s
replicaset.apps/argocd-repo-server-89ccd47b8               1         1         1       96s
replicaset.apps/argocd-server-6d5b59bc4c                   1         1         1       96s

This has installed ArgoCD, but it is not quite ready to use yet. As we can see in the following output, there are no Apps that ArgoCD is managing. Note the --core flag, which makes the CLI talk directly to the Kubernetes API server instead of the argocd-server API.

adminuser@k8s-controller-01:~$ sudo kubectl config set-context --current --namespace=argocd-system
Context "default" modified.
adminuser@k8s-controller-01:~$ sudo argocd app list --core
NAME  CLUSTER  NAMESPACE  PROJECT  STATUS  HEALTH  SYNCPOLICY  CONDITIONS  REPO  PATH  TARGET

We now need to install the first ArgoCD App, which in the App of Apps structure will link to other Apps.

ArgoCD - App of Apps

To get started with actually using ArgoCD, we need to deploy our first App: the App of Apps. Its chart was added when we prepared the repository, so we just need to create the App referencing the git repository.

adminuser@k8s-controller-01:~$ sudo argocd app create core-apps \
>      --dest-namespace argocd-system \
>      --dest-server https://kubernetes.default.svc \
>      --repo https://github.com/eyulf/homelab.git \
>      --revision main \
>      --path kubernetes/core/apps \
>      --sync-policy auto \
>      --core
application 'core-apps' created
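
This argocd app create call is the only imperative step in the whole setup. For reference, the same App could instead be expressed as a declarative manifest mirroring the flags above; a sketch:

---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: core-apps
  namespace: argocd-system
spec:
  destination:
    namespace: argocd-system
    server: https://kubernetes.default.svc
  project: default
  source:
    path: kubernetes/core/apps
    repoURL: https://github.com/eyulf/homelab.git
    targetRevision: main
  syncPolicy:
    automated: {}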

With this applied, ArgoCD now has the App of Apps configured and has deployed the Apps it references.

adminuser@k8s-controller-01:~$ sudo argocd app list --core
NAME       CLUSTER                         NAMESPACE      PROJECT  STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO                                  PATH                           TARGET
argocd     https://kubernetes.default.svc  argocd-system  default                   Auto        <none>      https://github.com/eyulf/homelab.git  kubernetes/core/argocd-system  main
core-apps  https://kubernetes.default.svc  argocd-system  default  Synced  Healthy  Auto        <none>      https://github.com/eyulf/homelab.git  kubernetes/core/apps           main

With this done, we can now look at managing the Calico pods using ArgoCD. First we need to add the Calico configuration to the git repository.

The new ArgoCD files we have created are:

kubernetes/core/apps/templates/calico-system.yaml

---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: calico
  namespace: argocd-system
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: calico-system
    server: {{ .Values.spec.destination.server }}
  project: default
  source:
    path: kubernetes/core/calico-system
    repoURL: {{ .Values.spec.source.repoURL }}
    targetRevision: {{ .Values.spec.source.targetRevision }}
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true

kubernetes/core/calico-system/Chart.yaml

---
name: calico
apiVersion: v2
version: 1.0.0
dependencies:
  - name: tigera-operator
    version: v3.21.4
    repository: https://docs.projectcalico.org/charts

kubernetes/core/calico-system/values.yaml

---

Now we can manually sync the changes; if we don’t, ArgoCD will do it automatically anyway, since by default it polls the repository every three minutes. Doing it manually allows us to capture the output shown below.

adminuser@k8s-controller-01:~$ sudo argocd app sync core-apps --core
TIMESTAMP                  GROUP              KIND    NAMESPACE                     NAME    STATUS   HEALTH        HOOK  MESSAGE
2022-02-24T20:44:51+11:00  argoproj.io  Application  argocd-system                argocd    Synced                       
2022-02-24T20:44:53+11:00  argoproj.io  Application  argocd-system                argocd    Synced                       application.argoproj.io/argocd unchanged
2022-02-24T20:44:53+11:00  argoproj.io  Application  argocd-system                calico   Running   Synced              application.argoproj.io/calico created
2022-02-24T20:44:53+11:00  argoproj.io  Application  argocd-system                calico  OutOfSync   Synced              application.argoproj.io/calico created

Name:               core-apps
Project:            default
Server:             https://kubernetes.default.svc
Namespace:          argocd-system
URL:                https://argocd.example.com/applications/core-apps
Repo:               https://github.com/eyulf/homelab.git
Target:             main
Path:               kubernetes/core/apps
SyncWindow:         Sync Allowed
Sync Policy:        Automated
Sync Status:        Synced to main (82ae65d)
Health Status:      Healthy

Operation:          Sync
Sync Revision:      82ae65d9274102b1ee67c374fd82fee339d2a3e6
Phase:              Succeeded
Start:              2022-02-24 20:44:51 +1100 AEDT
Finished:           2022-02-24 20:44:53 +1100 AEDT
Duration:           2s
Message:            successfully synced (all tasks run)

GROUP        KIND         NAMESPACE      NAME    STATUS  HEALTH  HOOK  MESSAGE
argoproj.io  Application  argocd-system  argocd  Synced                application.argoproj.io/argocd unchanged
argoproj.io  Application  argocd-system  calico  Synced                application.argoproj.io/calico created

Now Calico shows up as an App in ArgoCD.

adminuser@k8s-controller-01:~$ sudo argocd app list --core
NAME       CLUSTER                         NAMESPACE      PROJECT  STATUS  HEALTH       SYNCPOLICY  CONDITIONS  REPO                                  PATH                           TARGET
argocd     https://kubernetes.default.svc  argocd-system  default  Synced  Progressing  Auto        <none>      https://github.com/eyulf/homelab.git  kubernetes/core/argocd-system  main
calico     https://kubernetes.default.svc  calico-system  default  Synced  Healthy      Auto        <none>      https://github.com/eyulf/homelab.git  kubernetes/core/calico-system  main
core-apps  https://kubernetes.default.svc  argocd-system  default  Synced  Healthy      Auto        <none>      https://github.com/eyulf/homelab.git  kubernetes/core/apps           main

There is no change to the deployed Kubernetes infrastructure though, as we already had Calico installed.

adminuser@k8s-controller-01:~$ sudo kubectl get all -A
NAMESPACE         NAME                                                 READY   STATUS    RESTARTS      AGE
argocd-system     pod/argocd-application-controller-6d79d8f8c9-w2cw6   1/1     Running   0             8m5s
argocd-system     pod/argocd-dex-server-867457f597-lrplq               1/1     Running   0             8m5s
argocd-system     pod/argocd-redis-84675744fc-tdh2h                    1/1     Running   0             8m6s
argocd-system     pod/argocd-repo-server-89ccd47b8-qxgj6               1/1     Running   0             8m5s
argocd-system     pod/argocd-server-6d5b59bc4c-pgsjt                   1/1     Running   0             8m5s
calico-system     pod/calico-kube-controllers-7dddfdd6c9-bpjf6         1/1     Running   0             9m26s
calico-system     pod/calico-node-9zb7p                                1/1     Running   0             9m27s
calico-system     pod/calico-node-cxjqm                                1/1     Running   0             9m16s
calico-system     pod/calico-node-g6v6k                                1/1     Running   0             9m27s
calico-system     pod/calico-node-pgktq                                1/1     Running   0             9m16s
calico-system     pod/calico-node-qhvmg                                1/1     Running   0             9m27s
calico-system     pod/calico-node-xxscz                                1/1     Running   0             9m16s
calico-system     pod/calico-typha-79c78c5c75-6gswb                    1/1     Running   0             9m10s
calico-system     pod/calico-typha-79c78c5c75-7fs25                    1/1     Running   0             9m20s
calico-system     pod/calico-typha-79c78c5c75-prndc                    1/1     Running   0             9m27s
kube-system       pod/coredns-64897985d-2vsn8                          1/1     Running   0             13m
kube-system       pod/coredns-64897985d-m9lk6                          1/1     Running   0             13m
kube-system       pod/etcd-k8s-controller-01                           1/1     Running   0             13m
kube-system       pod/etcd-k8s-controller-02                           1/1     Running   0             12m
kube-system       pod/etcd-k8s-controller-03                           1/1     Running   0             11m
kube-system       pod/kube-apiserver-k8s-controller-01                 1/1     Running   0             10m
kube-system       pod/kube-apiserver-k8s-controller-02                 1/1     Running   0             10m
kube-system       pod/kube-apiserver-k8s-controller-03                 1/1     Running   0             10m
kube-system       pod/kube-controller-manager-k8s-controller-01        1/1     Running   1 (12m ago)   13m
kube-system       pod/kube-controller-manager-k8s-controller-02        1/1     Running   0             12m
kube-system       pod/kube-controller-manager-k8s-controller-03        1/1     Running   0             12m
kube-system       pod/kube-proxy-2cxmw                                 1/1     Running   0             9m16s
kube-system       pod/kube-proxy-5n6zx                                 1/1     Running   0             9m16s
kube-system       pod/kube-proxy-649cc                                 1/1     Running   0             13m
kube-system       pod/kube-proxy-67dgm                                 1/1     Running   0             9m16s
kube-system       pod/kube-proxy-cdfxp                                 1/1     Running   0             12m
kube-system       pod/kube-proxy-gxb92                                 1/1     Running   0             10m
kube-system       pod/kube-scheduler-k8s-controller-01                 1/1     Running   1 (12m ago)   13m
kube-system       pod/kube-scheduler-k8s-controller-02                 1/1     Running   0             12m
kube-system       pod/kube-scheduler-k8s-controller-03                 1/1     Running   0             12m
tigera-operator   pod/tigera-operator-768d489967-4ws76                 1/1     Running   0             9m48s

NAMESPACE       NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
argocd-system   service/argocd-application-controller     ClusterIP   10.106.241.238   <none>        8082/TCP                 8m8s
argocd-system   service/argocd-dex-server                 ClusterIP   10.111.34.39     <none>        5556/TCP,5557/TCP        8m8s
argocd-system   service/argocd-redis                      ClusterIP   10.101.126.58    <none>        6379/TCP                 8m8s
argocd-system   service/argocd-repo-server                ClusterIP   10.103.62.188    <none>        8081/TCP                 8m9s
argocd-system   service/argocd-server                     ClusterIP   10.105.231.176   <none>        80/TCP,443/TCP           8m8s
calico-system   service/calico-kube-controllers-metrics   ClusterIP   10.104.146.41    <none>        9094/TCP                 7m58s
calico-system   service/calico-typha                      ClusterIP   10.102.242.198   <none>        5473/TCP                 9m27s
default         service/kubernetes                        ClusterIP   10.96.0.1        <none>        443/TCP                  13m
kube-system     service/kube-dns                          ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   13m

NAMESPACE       NAME                                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR              AGE
calico-system   daemonset.apps/calico-node              6         6         6       6            6           kubernetes.io/os=linux     9m27s
calico-system   daemonset.apps/calico-windows-upgrade   0         0         0       0            0           kubernetes.io/os=windows   9m26s
kube-system     daemonset.apps/kube-proxy               6         6         6       6            6           kubernetes.io/os=linux     13m

NAMESPACE         NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
argocd-system     deployment.apps/argocd-application-controller   1/1     1            1           8m8s
argocd-system     deployment.apps/argocd-dex-server               1/1     1            1           8m8s
argocd-system     deployment.apps/argocd-redis                    1/1     1            1           8m8s
argocd-system     deployment.apps/argocd-repo-server              1/1     1            1           8m8s
argocd-system     deployment.apps/argocd-server                   1/1     1            1           8m8s
calico-system     deployment.apps/calico-kube-controllers         1/1     1            1           9m27s
calico-system     deployment.apps/calico-typha                    3/3     3            3           9m27s
kube-system       deployment.apps/coredns                         2/2     2            2           13m
tigera-operator   deployment.apps/tigera-operator                 1/1     1            1           9m48s

NAMESPACE         NAME                                                       DESIRED   CURRENT   READY   AGE
argocd-system     replicaset.apps/argocd-application-controller-6d79d8f8c9   1         1         1       8m8s
argocd-system     replicaset.apps/argocd-dex-server-867457f597               1         1         1       8m8s
argocd-system     replicaset.apps/argocd-redis-84675744fc                    1         1         1       8m8s
argocd-system     replicaset.apps/argocd-repo-server-89ccd47b8               1         1         1       8m8s
argocd-system     replicaset.apps/argocd-server-6d5b59bc4c                   1         1         1       8m8s
calico-system     replicaset.apps/calico-kube-controllers-7dddfdd6c9         1         1         1       9m27s
calico-system     replicaset.apps/calico-typha-79c78c5c75                    3         3         3       9m27s
kube-system       replicaset.apps/coredns-64897985d                          2         2         2       13m
tigera-operator   replicaset.apps/tigera-operator-768d489967                 1         1         1       9m48s

ArgoCD - Bootstrapping

We want to be able to rebuild the Kubernetes cluster with minimal manual effort. To achieve this we need to add the steps we followed above into Ansible (sketched below) and commit the changes to git. To prove the bootstrapping works, I rebuilt the cluster again using the rebuild script I created.

Variables

The following additional variables have been used for this.

ansible/group_vars/k8s_controllers.yml

git_repo_url: https://github.com/eyulf/homelab.git
git_branch: main
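
In outline, the new bootstrap tasks mirror the manual steps from earlier; a rough sketch (module choices, paths, and task layout are assumptions, not the exact role contents):

- name: argocd | clone the homelab repository
  ansible.builtin.git:
    repo: "{{ git_repo_url }}"
    dest: /root/homelab
    version: "{{ git_branch }}"

# Sketch only: these shell steps are not idempotent as written.
- name: argocd | install the argocd helm chart
  ansible.builtin.shell: |
    helm dependencies update
    kubectl create namespace argocd-system
    helm install -n argocd-system argocd . -f values.yaml
  args:
    chdir: /root/homelab/kubernetes/core/argocd-system
  run_once: true

- name: argocd | create the app of apps
  ansible.builtin.command: >
    argocd app create core-apps
    --dest-namespace argocd-system
    --dest-server https://kubernetes.default.svc
    --repo {{ git_repo_url }}
    --revision {{ git_branch }}
    --path kubernetes/core/apps
    --sync-policy auto
    --core
  run_once: true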

Commands

./rebuild-k8s.sh

Upon logging into the controller we can confirm that ArgoCD is installed and has the App of Apps App present.

adminuser@k8s-controller-01:~$ sudo argocd app list --core
NAME       CLUSTER                         NAMESPACE      PROJECT  STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO                                  PATH                           TARGET
argocd     https://kubernetes.default.svc  argocd-system  default  Synced  Healthy  Auto        <none>      https://github.com/eyulf/homelab.git  kubernetes/core/argocd-system  main
calico     https://kubernetes.default.svc  calico-system  default  Synced  Healthy  Auto        <none>      https://github.com/eyulf/homelab.git  kubernetes/core/calico-system  main
core-apps  https://kubernetes.default.svc  argocd-system  default  Synced  Healthy  Auto        <none>      https://github.com/eyulf/homelab.git  kubernetes/core/apps           main

Now we can continue with using the Kubernetes cluster and installing Pi-hole.

Next up: Kubernetes - Pi-Hole