Install Longhorn with FluxCD
This guide walks you through installing the Longhorn distributed block storage system on your Kubernetes cluster, using FluxCD for GitOps-based deployment and management.
Prerequisites
- A running Kubernetes cluster (see High Availability K3s Setup)
- FluxCD installed and configured (see FluxCD Setup)
- A Git repository for your GitOps configuration
- Worker nodes with adequate disk space for storage pools
- Optional: an ingress controller (to expose the Longhorn UI)
Architecture Overview
Longhorn provides distributed block storage with the following components:
- Longhorn Manager: Orchestrates volumes and handles API requests
- Longhorn Engine: Handles actual read/write operations for volumes
- Longhorn UI: Web-based management interface
- CSI Driver: Kubernetes Container Storage Interface implementation
- Instance Manager: Manages Longhorn engine and replica instances
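Once deployed, these components run as ordinary workloads in the Longhorn namespace. A quick way to see them (assuming the `longhorn` namespace created later in this guide):

```sh
# Managers run as a DaemonSet on every node; the UI and CSI components run as Deployments
kubectl -n longhorn get daemonsets,deployments

# Instance managers and engine/replica processes show up as pods
kubectl -n longhorn get pods -o wide
```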
Installation Steps
Prepare storage requirements

Verify your worker nodes have adequate storage and meet Longhorn requirements:

```sh
# Check available disk space on worker nodes
kubectl get nodes -o wide

# Verify each worker node has at least 10GB of available storage.
# Longhorn will use local storage from each node.
```
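Beyond raw disk space, Longhorn expects `open-iscsi` (the `iscsid` service) on every node that will store replicas. A minimal sketch for Debian/Ubuntu-based nodes (adjust the package manager for your distribution; the optional `longhornctl` preflight check is a separate download from the Longhorn project):

```sh
# Install and enable open-iscsi on each node (Debian/Ubuntu example)
sudo apt-get update && sudo apt-get install -y open-iscsi
sudo systemctl enable --now iscsid

# Optional: run Longhorn's preflight checker if you have longhornctl installed
# longhornctl check preflight
```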
Create Longhorn namespace and source

Add the Longhorn Helm repository as a FluxCD source in your GitOps repository:

```yaml
# clusters/CLUSTER_NAME/infrastructure/longhorn/helmrepository.yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: longhorn
  namespace: flux-system
spec:
  interval: 1h
  url: https://charts.longhorn.io
```

```yaml
# clusters/CLUSTER_NAME/infrastructure/longhorn/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: longhorn
  labels:
    name: longhorn
```
Configure Longhorn Helm release

Create the Longhorn Helm release configuration:

```yaml
# clusters/CLUSTER_NAME/infrastructure/longhorn/helmrelease.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: longhorn
  namespace: longhorn
spec:
  chart:
    spec:
      chart: longhorn
      version: '1.9.x'
      interval: 5h
      sourceRef:
        kind: HelmRepository
        name: longhorn
        namespace: flux-system
  releaseName: longhorn
  interval: 1h
  values:
    ingress: # optional
      enabled: true
      host: longhorn.local.peerlab.be
  dependsOn: # optional
    - name: traefik
      namespace: traefik
```
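If you want to tune storage behaviour at install time, the chart exposes additional values, for example the replica count of the default StorageClass and the data path used on each node. A hedged sketch of what that could look like under `spec.values` (key names follow the chart's `persistence` and `defaultSettings` sections; verify them against the chart version you deploy):

```yaml
# Example additions under spec.values in the HelmRelease above
persistence:
  defaultClassReplicaCount: 3        # replicas per volume for the default StorageClass
defaultSettings:
  defaultDataPath: /var/lib/longhorn # where Longhorn stores replica data on each node
```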
Create kustomization files

Create kustomization files to manage the deployment:

```yaml
# clusters/CLUSTER_NAME/infrastructure/longhorn/kustomization.yaml
resources:
  - namespace.yaml
  - helmrepository.yaml
  - helmrelease.yaml
```

```yaml
# clusters/CLUSTER_NAME/infrastructure/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ./longhorn
```
Update main cluster kustomization

Add the infrastructure Kustomization to your main cluster configuration:

```yaml
# clusters/CLUSTER_NAME/infrastructure.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 10m0s
  path: ./infrastructure/peercluster
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  dependsOn:
    - name: flux-system
```

When done, your FluxCD repository directory structure should look something like this:

```
flux-repo/
├── apps/
│   ├── base/
│   └── peercluster/
├── clusters/
│   └── peercluster/
│       ├── flux-system/
│       ├── apps.yaml
│       └── infrastructure.yaml
└── infrastructure/
    ├── base/
    │   ├── kustomization.yaml
    │   └── longhorn/
    │       ├── helmrelease.yaml
    │       ├── helmrepository.yaml
    │       ├── kustomization.yaml
    │       └── namespace.yaml
    └── peercluster/
        └── kustomization.yaml
```
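After the files are in place, commit and push them; FluxCD will pick the changes up on its next sync. To force an immediate reconciliation instead of waiting for the interval (using the `flux-system` GitRepository and `infrastructure` Kustomization names from above):

```sh
git add clusters/ infrastructure/
git commit -m "Add Longhorn via FluxCD"
git push

# Trigger reconciliation immediately
flux reconcile source git flux-system
flux reconcile kustomization infrastructure --with-source
```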
Verify installation

After committing and pushing your changes, verify the installation:

```sh
# Check FluxCD reconciliation
flux get helmreleases -A

# Check Longhorn pods
kubectl get pods -n longhorn

# Verify the storage class
kubectl get storageclass

# Check Longhorn manager status
kubectl get daemonset longhorn-manager -n longhorn
```
Configuration
Default Storage Class
Longhorn will automatically become the default storage class. To verify:
```sh
kubectl get storageclass
# Should show longhorn marked as (default)
```
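If you prefer to keep another StorageClass (for example k3s's bundled `local-path`) as the default, the standard Kubernetes annotation applies; a sketch, assuming the StorageClass is named `longhorn`:

```sh
# Remove the default flag from the longhorn StorageClass
kubectl patch storageclass longhorn \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
```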
Access Longhorn UI
Access the Longhorn management interface:
```sh
# Port forward to access the UI locally
kubectl port-forward -n longhorn svc/longhorn-frontend 8080:80

# Access via browser at: http://localhost:8080
```
Ingress Configuration
We configured ingress in our Helm release, so you can access the UI directly at the configured hostname (in our example: longhorn.local.peerlab.be).
Example HAProxy Configuration (if needed for your setup):
```
# ... existing config ...

# Traefik HTTP Load Balancer
frontend traefik_http_frontend
    bind *:80
    mode http
    default_backend traefik_http_backend

backend traefik_http_backend
    mode http
    balance roundrobin
    option tcp-check
    server k3s-server-01 192.168.1.11:30097 check
    server k3s-server-02 192.168.1.12:30097 check
    server k3s-server-03 192.168.1.13:30097 check
    server k3s-worker-01 192.168.1.21:30097 check
    server k3s-worker-02 192.168.1.22:30097 check
    server k3s-worker-03 192.168.1.23:30097 check

# Traefik HTTPS Load Balancer
frontend traefik_https_frontend
    bind *:443
    mode tcp
    default_backend traefik_https_backend

backend traefik_https_backend
    mode tcp
    balance roundrobin
    server k3s-server-01 192.168.1.11:30551 check
    server k3s-server-02 192.168.1.12:30551 check
    server k3s-server-03 192.168.1.13:30551 check
    server k3s-worker-01 192.168.1.21:30551 check
    server k3s-worker-02 192.168.1.22:30551 check
    server k3s-worker-03 192.168.1.23:30551 check
```
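For the ingress path to work, the hostname from the Helm values has to resolve to the load balancer. A quick sanity check from a client machine (the 192.168.1.10 address is a placeholder; substitute your HAProxy host):

```sh
# Fetch the Longhorn UI through the load balancer, overriding DNS with a Host header
curl -I -H "Host: longhorn.local.peerlab.be" http://192.168.1.10/
```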
Testing and Validation
Create Test Volume
Test Longhorn with a simple PVC:
```yaml
# test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
```
```sh
# Apply the test PVC
kubectl apply -f test-pvc.yaml

# Check PVC status
kubectl get pvc longhorn-test-pvc

# Check the volume in the Longhorn UI
```
Test Pod with Persistent Storage
```yaml
# test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: longhorn-test-pod
spec:
  containers:
    - name: test-container
      image: nginx:alpine
      volumeMounts:
        - name: test-volume
          mountPath: /data
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: longhorn-test-pvc
```
```sh
# Apply the test pod
kubectl apply -f test-pod.yaml

# Verify the pod is running
kubectl get pod longhorn-test-pod

# Test data persistence
kubectl exec longhorn-test-pod -- sh -c 'echo "test data" > /data/test.txt'
kubectl exec longhorn-test-pod -- cat /data/test.txt
```
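To confirm the data actually lives on the Longhorn volume rather than inside the container, you can delete the pod, recreate it from the same manifest, and read the file back:

```sh
# Delete and recreate the pod; the PVC and its data remain
kubectl delete pod longhorn-test-pod
kubectl apply -f test-pod.yaml
kubectl wait --for=condition=Ready pod/longhorn-test-pod --timeout=120s

# The file written earlier should still be there
kubectl exec longhorn-test-pod -- cat /data/test.txt
```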
Clean Up Test Resources
After testing, remove the test resources to clean up your cluster:
```sh
# Delete the test pod
kubectl delete pod longhorn-test-pod

# Delete the test PVC (this will also delete the associated PV)
kubectl delete pvc longhorn-test-pvc
```
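To double-check the cleanup, verify that the bound PersistentVolume is gone and that the volume no longer appears in Longhorn (it should also disappear from the UI):

```sh
# No PV bound to the test claim should remain
kubectl get pv

# The Longhorn volume custom resource should be gone as well
kubectl -n longhorn get volumes.longhorn.io
```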