How to use Trilio for Kubernetes backups

This guide describes how to deploy Trilio for Kubernetes and manage backups for Kubernetes-based applications on Virtuozzo Infrastructure.

About Trilio for Kubernetes

Trilio for Kubernetes (T4K) is a cloud-native, application-centric data protection and recovery solution designed specifically for Kubernetes environments. It provides backup and recovery services that are integrated with Kubernetes to help manage and protect applications across their entire lifecycle.


Note: Trilio provides a full-fledged user interface for managing the cluster and workload lifecycle. However, this guide uses the command-line interface to keep things simple and give a quick overview of the capabilities.

Prerequisites

1. Deploy a Virtuozzo Infrastructure cluster.

2. Create the compute cluster with the Kubernetes and load balancing services.

3. Configure a storage policy named standard for boot volumes on Kubernetes master nodes. Ensure that the selected policy is available for all projects where you are planning to deploy Kubernetes.

4. Create a Kubernetes cluster. The guide is certified to work with Kubernetes versions v1.25.7–v1.27.8.

5. Use the table below to assess readiness:

[Table: T4K requirements]

6. Ensure that you have the credentials (the access key and secret key) to the object storage service. In this guide, we will use the S3 object storage and a bucket provided by Virtuozzo Infrastructure.

7. Create a storage class with the snapshot functionality enabled:

7.1. Create the csi-cinder-snapclass storage class with the storage policy standard. The storage policy must be available in your project.

# cat > storage-class.yaml <<\EOT
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cinder-snapclass
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: cinder.csi.openstack.org
parameters:
  type: standard
EOT

Apply the configuration file:

# kubectl create -f storage-class.yaml

7.2. Install the required custom resource definitions (CRD) for volume snapshots:

7.2.1. Check the existing CRDs:

# kubectl api-resources | grep volumesnapshot
volumesnapshotclasses    snapshot.storage.k8s.io/v1beta1        false        VolumeSnapshotClass
volumesnapshotcontents   snapshot.storage.k8s.io/v1beta1        false        VolumeSnapshotContent
volumesnapshots          snapshot.storage.k8s.io/v1beta1        true         VolumeSnapshot
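If you need to script this check, the required CRD names can be compared against the first column of the output above. A minimal sketch in Python (the sample output below is hypothetical, with one CRD missing):

```python
# The three CRDs that the volume snapshot feature requires:
REQUIRED = {"volumesnapshotclasses", "volumesnapshotcontents", "volumesnapshots"}

# Sample `kubectl api-resources | grep volumesnapshot` output
# (hypothetical: volumesnapshotcontents has not been installed yet):
output = """\
volumesnapshotclasses    snapshot.storage.k8s.io/v1beta1   false   VolumeSnapshotClass
volumesnapshots          snapshot.storage.k8s.io/v1beta1   true    VolumeSnapshot
"""

# The CRD name is the first whitespace-separated field of each line:
present = {line.split()[0] for line in output.splitlines() if line.strip()}
missing = REQUIRED - present
print(sorted(missing))  # ['volumesnapshotcontents']
```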

7.2.2. Install the missing CRDs (install only one version):

# git clone https://github.com/kubernetes-csi/external-snapshotter/
# cd ./external-snapshotter
# git checkout release-5.0
# kubectl apply -f client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
# kubectl apply -f client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
# kubectl apply -f client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
# kubectl apply -f deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml -n kube-system
# kubectl apply -f deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml -n kube-system
Important: The CSI snapshotter version must not be higher than release-5.0.

7.3. Create a snapshot class:

# cat > snapshotclass.yaml <<\EOT
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-cinder-snapclass
driver: cinder.csi.openstack.org
deletionPolicy: Delete
parameters:
  force-create: "true" 
EOT

Apply the configuration file:

# kubectl apply -f snapshotclass.yaml

8. Check the compatibility matrix for the Kubernetes version supported by Trilio:

[Table: T4K compatibility matrix]

Note: In this guide, we are using Trilio 4.0.2 and Kubernetes v1.27.8.

Installing Trilio for Kubernetes

1. Add the repository where the Trilio operator helm chart is located:

# helm repo add triliovault-operator https://charts.k8strilio.net/trilio-stable/k8s-triliovault-operator

2. Install the chart from the added repository. Note that we add a custom configuration to change the default NodePort service type to LoadBalancer, so that you can access the T4K interface at a public IP address. If you do not want to assign a public IP, you can instead access the T4K control plane by using the kubectl port-forward feature:

# helm install tvm triliovault-operator/k8s-triliovault-operator --version 4.0.2 -n trilio-system --create-namespace --set installT4K.ComponentConfiguration.ingressController.service.type=LoadBalancer 

Verifying the Trilio installation

1. Verify that the Trilio operator has started:

# kubectl -n trilio-system wait --for=condition=ready pod -l "release=tvm"

2. Check the status of the deployment:

# kubectl -n trilio-system get triliovaultmanagers.triliovault.trilio.io triliovault-manager -o yaml

3. Verify that you can access the control plane. If you have deployed the chart with the LoadBalancer service type and want to access it at an external floating IP address, you can look up this address from the command line:

# kubectl get ing -n trilio-system

Or

# kubectl get svc k8s-triliovault-ingress-nginx-controller -n trilio-system

Installing the license

1. Request a trial license from Trilio.

2. Apply the license to your T4K application:

# kubectl apply -f virtuozzo_trial.yaml -n trilio-system

Creating an S3 backup target

In this guide, we are using a Virtuozzo S3 storage bucket.

1. Create a sample secret with your access and secret keys:

# cat > sample-secret.yaml <<\EOT
apiVersion: v1
kind: Secret
metadata:
  name: sample-secret
type: Opaque
stringData:
  accessKey: AKIAS5B35DGFSTY7T55D
  secretKey: xWBupfGvkgkhaH8ansJU1wRhFoGoWFPmhXD6/vVDcode
EOT

Apply the configuration file:

# kubectl apply -f sample-secret.yaml

2. Create a backup target. To enable immutability, enable object locking on your bucket and set the corresponding option (ObjectLockingEnabled) in the target YAML file. Alternatively, you can enable immutability later in the UI.

# cat > demo-s3-target.yaml <<\EOT
apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: virtuozzo-s3-target
spec:
  type: ObjectStore
  vendor: Other
  objectStoreCredentials:
    bucketName: se-backup
    url: https://your.virtuozzoS3.com
    credentialSecret:
      name: sample-secret
      namespace: default
  thresholdCapacity: 100Gi
EOT

Apply the configuration file:

# kubectl apply -f demo-s3-target.yaml

3. Create a retention policy:

# cat > default-policy.yaml <<\EOT
apiVersion: triliovault.trilio.io/v1
kind: Policy
metadata:
  name: demo-policy
spec:
  type: Retention
  default: false
  retentionConfig:
    latest: 2
    weekly: 1
    dayOfWeek: Wednesday
    monthly: 1
    dateOfMonth: 15
    monthOfYear: March
    yearly: 1
EOT

Apply the configuration file:

# kubectl apply -f default-policy.yaml
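As a rough mental model of how such a policy thins out backups, the sketch below keeps the `latest` most recent backups plus up to `weekly` backups taken on the configured weekday. This is only an illustrative approximation; the authoritative retention semantics (including the monthly and yearly rules) are defined by Trilio.

```python
from datetime import date

def retained(backup_dates, latest=2, weekly=1, day_of_week="Wednesday"):
    """Toy retention model: keep the `latest` newest backups, plus up to
    `weekly` backups taken on `day_of_week` (illustration only)."""
    ordered = sorted(backup_dates, reverse=True)       # newest first
    keep = set(ordered[:latest])                       # 'latest' rule
    on_weekday = [d for d in ordered if d.strftime("%A") == day_of_week]
    keep.update(on_weekday[:weekly])                   # 'weekly' rule
    return sorted(keep)

backups = [date(2024, 3, 4), date(2024, 3, 6),   # Mar 6, 2024 is a Wednesday
           date(2024, 3, 11), date(2024, 3, 14)]
print(retained(backups))  # the two newest backups plus the Wednesday one
```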


Testing application backup

1. Create an application to test the backup functionality:

# cat > mysql.yaml <<\EOT
## Secret for mysql password
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
  labels:
    app: k8s-demo-app
    tier: frontend
type: Opaque
data:
  password: dHJpbGlvcGFzcw==
## password base64 encoded, plain text: triliopass
## "echo -n triliopass | base64" -> to get the encoded password
---
## PVC for mysql PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: k8s-demo-app
    tier: mysql
spec:
  storageClassName: csi-cinder-snapclass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
## Mysql app deployment
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: k8s-demo-app-mysql
  labels:
    app: k8s-demo-app
    tier: mysql
spec:
  selector:
    matchLabels:
      app: k8s-demo-app
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: k8s-demo-app
        tier: mysql
    spec:
      containers:
      - image: mysql:latest
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
---
## Service for mysql app
apiVersion: v1
kind: Service
metadata:
  name: k8s-demo-app-mysql
  labels:
    app: k8s-demo-app
    tier: mysql
spec:
  type: ClusterIP
  ports:
    - port: 3306
  selector:
    app: k8s-demo-app
    tier: mysql
---
## Deployment for frontend webserver
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-demo-app-frontend
  labels:
    app: k8s-demo-app
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-demo-app
      tier: frontend
  template:
    metadata:
      labels:
        app: k8s-demo-app
        tier: frontend
    spec:
      containers:
      - name: demoapp-frontend
        image: docker.io/trilio/k8s-demo-app:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
## Service for frontend
apiVersion: v1
kind: Service
metadata:
  name: k8s-demo-app-frontend
  labels:
    app: k8s-demo-app
    tier: frontend
spec:
  ports:
  - name: web
    port: 80
  selector:
    app: k8s-demo-app
    tier: frontend
EOT

Apply the configuration file:

# kubectl apply -f mysql.yaml
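The Secret in the manifest above stores the MySQL password base64-encoded, as its comments note. The same encoding can be reproduced in a couple of lines of Python (equivalent to `echo -n triliopass | base64`):

```python
import base64

plain = "triliopass"
encoded = base64.b64encode(plain.encode()).decode()
print(encoded)  # dHJpbGlvcGFzcw==

# Decoding recovers the original password:
assert base64.b64decode(encoded).decode() == plain
```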

2. Create a backup plan:

# cat > backup-plan-K8s-demo-app.yaml <<\EOT
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: mysql-label-backupplan
spec:
  backupConfig:
    target:
      namespace: default
      name: virtuozzo-s3-target
    retentionPolicy:
      namespace: default
      name: demo-policy
  backupPlanComponents:
    custom:
      - matchLabels:
          app: k8s-demo-app
EOT

Apply the configuration file:

# kubectl apply -f backup-plan-K8s-demo-app.yaml


3. Create a backup:

# cat > backup-demo-mysql-app.yaml <<\EOT
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: mysql-label-backup
spec:
  type: Full
  backupPlan:
    name: mysql-label-backupplan
    namespace: default
EOT

Apply the configuration file:

# kubectl apply -f backup-demo-mysql-app.yaml


4. Now, let’s simulate a case where the application is deleted by mistake:

# kubectl delete -f mysql.yaml

5. Check your resources:

# kubectl get pods -l "app=k8s-demo-app"
No resources found in default namespace.

As we can see in the command output, no resources labeled app=k8s-demo-app exist in the default namespace.
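The `-l` selector above and the backup plan's `matchLabels` both use the same equality-based rule: every key/value pair in the selector must be present on the object (a logical AND). A minimal sketch of that rule:

```python
def match_labels(selector, labels):
    """Return True if every key/value pair in `selector` appears in
    `labels` -- the semantics of Kubernetes equality-based selectors."""
    return all(labels.get(key) == value for key, value in selector.items())

pod_labels = {"app": "k8s-demo-app", "tier": "mysql"}
print(match_labels({"app": "k8s-demo-app"}, pod_labels))                      # True
print(match_labels({"app": "k8s-demo-app", "tier": "frontend"}, pod_labels))  # False
```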

6. Restore the application from the created backup:

# cat > restore-mysql-app.yaml <<\EOT
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: demo-restore
spec:
  source:
    type: Backup
    backup:
      name: mysql-label-backup
      namespace: default
  skipIfAlreadyExists: true
EOT

Apply the configuration file:

# kubectl apply -f restore-mysql-app.yaml

In a few minutes, the application is fully restored.


To learn more about Trilio for Kubernetes, refer to the official documentation.

Enjoy!