The UpCloud CSI driver deployment is bundled with sidecar containers and an optional snapshot validation webhook service. Version-specific deployment manifests, `upcloud-csi-crd.yaml` and `upcloud-csi-setup.yaml`, can be found under the release assets.

UpCloud's Managed Kubernetes service (UKS) includes a pre-installed CSI driver; it does not need to be installed separately.
- Kubernetes v1.16+
- UpCloud account
Execute the following command to add your UpCloud credentials as a Kubernetes secret. Replace `$UPCLOUD_PASSWORD` and `$UPCLOUD_USERNAME` with your UpCloud API credentials if they are not defined as environment variables.
$ kubectl -n kube-system create secret generic upcloud --from-literal=password=$UPCLOUD_PASSWORD --from-literal=username=$UPCLOUD_USERNAME
After the secret has been created, you can verify that the `upcloud` secret exists in the `kube-system` namespace:
$ kubectl -n kube-system get secret upcloud
NAME TYPE DATA AGE
upcloud Opaque 2 18h
Deploy the custom resource definitions and roles required by the CSI driver:
$ kubectl apply -f https://github.com/UpCloudLtd/upcloud-csi/releases/latest/download/crd-upcloud-csi.yaml
$ kubectl apply -f https://github.com/UpCloudLtd/upcloud-csi/releases/latest/download/rbac-upcloud-csi.yaml
Deploy the CSI driver with the related Kubernetes volume attachment, driver registration, and provisioning sidecars:
$ kubectl apply -f https://github.com/UpCloudLtd/upcloud-csi/releases/latest/download/setup-upcloud-csi.yaml
The snapshot validation webhook is an optional service that provides tightened validation of snapshot objects. The service is optional but recommended if volume snapshots are used in the cluster. The validation service requires a proper CA certificate, an x509 certificate, and a matching private key for secure communication. More information, along with an example of how to deploy the certificates, can be found in the official snapshot webhook example.

The manifest `snapshot-webhook-upcloud-csi.yaml` can be used to deploy the webhook service. The manifest assumes that a secret named `snapshot-validation-secret` exists and is populated with a valid x509 certificate `cert.pem` (CA cert, if any, concatenated after the server cert) and a matching private key `key.pem`. If a custom CA is used (e.g. when using a self-signed certificate), the `caBundle` field needs to be set with the CA data as its value.
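The secret can be shaped along the following lines; this is a minimal sketch, and the namespace and PEM contents are placeholders you must replace with your own certificate material:

```yaml
# Hypothetical Secret carrying the webhook's TLS material.
apiVersion: v1
kind: Secret
metadata:
  name: snapshot-validation-secret
  namespace: kube-system   # assumption: use the namespace the webhook runs in
type: Opaque
stringData:
  cert.pem: |
    -----BEGIN CERTIFICATE-----
    ...server certificate, with the CA cert (if any) concatenated after it...
    -----END CERTIFICATE-----
  key.pem: |
    -----BEGIN PRIVATE KEY-----
    ...matching private key...
    -----END PRIVATE KEY-----
```

Equivalently, if the PEM files already exist on disk, `kubectl -n kube-system create secret generic snapshot-validation-secret --from-file=cert.pem --from-file=key.pem` builds the same secret from them.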
$ kubectl apply -f https://raw.githubusercontent.com/UpCloudLtd/upcloud-csi/main/deploy/kubernetes/snapshot-webhook-upcloud-csi.yaml
It's possible to select the disk type between `HDD`, `Standard`, and `MaxIOPS`. Details about the different storage tiers can be found in the UpCloud product documentation.

To set the desired type, set the `storageClassName` field in the PVC to one of:
- upcloud-block-storage-maxiops
- upcloud-block-storage-hdd
- upcloud-block-storage-standard

If the `storageClassName` field is not set, the default provisioned option will be `upcloud-block-storage-maxiops`.
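For example, a claim pinned to the HDD tier could look like the sketch below; the claim name and size are illustrative:

```yaml
# Hypothetical PVC requesting an HDD-tier volume via storageClassName.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-hdd            # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi            # illustrative size
  storageClassName: upcloud-block-storage-hdd
```

Omitting `storageClassName` entirely would fall back to the MaxIOPS default described above.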
These storage classes use `Retain` as their `reclaimPolicy`, which causes the CSI driver to preserve the underlying storage when the PVC object is deleted. To also clean up the storage, define a new storage class that uses `Delete` as its `reclaimPolicy`, e.g.:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: upcloud-block-storage-custom
parameters:
  tier: maxiops
provisioner: storage.csi.upcloud.com
reclaimPolicy: Delete
allowVolumeExpansion: true
```
The storage class name above is just an example; it can be anything.
In the `example` directory you can find two manifests for deploying a pod and a persistent volume claim to test CSI driver operations:
$ kubectl apply -f https://raw.githubusercontent.com/UpCloudLtd/upcloud-csi/main/example/test-pvc.yaml
$ kubectl apply -f https://raw.githubusercontent.com/UpCloudLtd/upcloud-csi/main/example/test-pod.yaml
Check that the pod is deployed with `Running` status and is already using the PVC:
$ kubectl get pods -l app=csi-app
If the pod is not running, you can check possible causes for the problem from the PVC and deployment events:
$ kubectl describe deployments.apps csi-app
$ kubectl describe pvc csi-pvc
To check the persistence feature, create a sample file in the pod; later, delete the pod, re-create it from the YAML manifest, and notice that the file is still in the mounted directory:
$ kubectl exec -it deployments/csi-app -- /bin/sh -c "touch /data/persistent-file.txt"
$ kubectl exec -it deployments/csi-app -- /bin/sh -c "ls -1 /data/"
lost+found
persistent-file.txt
Delete the pod deployment and wait until it's deleted:
$ kubectl delete deployments/csi-app
deployment.apps "csi-app" deleted
$ kubectl get pods -l app=csi-app -w
Recreate the pod, wait until it's running, and check the contents of the `/data` folder:
$ kubectl apply -f https://raw.githubusercontent.com/UpCloudLtd/upcloud-csi/main/example/test-pod.yaml
pod/csi-app created
$ kubectl get pods -l app=csi-app -w
$ kubectl exec -it deployments/csi-app -- /bin/sh -c "ls -1 /data/"
lost+found
persistent-file.txt
More examples are available in the UKS instructions repository.
Kubernetes CSI sidecar containers are a set of standard containers which contain common logic to watch the Kubernetes API, trigger appropriate operations against the UpCloud CSI driver container, and update the Kubernetes API as appropriate.
Watches the Kubernetes API server for `PersistentVolumeClaim` objects and triggers `CreateVolume` and `DeleteVolume` operations against the driver. Provides the ability to request a volume be pre-populated from a data source during provisioning.
- Image: k8s.gcr.io/sig-storage/csi-provisioner
- Controller capability: `CREATE_DELETE_VOLUME`
Watches the Kubernetes API server for `VolumeAttachment` objects and triggers `ControllerPublishVolume` and `ControllerUnpublishVolume` operations against the driver.
- Image: k8s.gcr.io/sig-storage/csi-attacher
- Controller capability: `PUBLISH_UNPUBLISH_VOLUME`
external-snapshotter Beta/GA
Watches the Kubernetes API server for `VolumeSnapshot` and `VolumeSnapshotContent` CRD objects and triggers `CreateSnapshot`, `DeleteSnapshot`, and `ListSnapshots` operations against the driver.
- Image: k8s.gcr.io/sig-storage/csi-snapshotter
- Controller capabilities: `CREATE_DELETE_SNAPSHOT`, `LIST_SNAPSHOTS`
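As a sketch, a snapshot of the example PVC could be requested with a `VolumeSnapshot` object like the one below; the snapshot class name is an assumption and must match a `VolumeSnapshotClass` actually installed in the cluster:

```yaml
# Hypothetical VolumeSnapshot; creating it triggers CreateSnapshot in the driver.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: csi-pvc-snapshot                              # illustrative name
spec:
  volumeSnapshotClassName: upcloud-csi-snapshotclass  # assumption: replace with a real class
  source:
    persistentVolumeClaimName: csi-pvc                # the example PVC from above
```

Deleting the `VolumeSnapshot` object correspondingly triggers `DeleteSnapshot`.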
Watches the Kubernetes API server for `PersistentVolumeClaim` object edits and triggers the `ControllerExpandVolume` operation against the driver if the volume size is increased.
- Image: k8s.gcr.io/sig-storage/csi-resizer
- Plugin capability: `VolumeExpansion_OFFLINE`
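Expansion is triggered simply by raising the claim's requested size, provided the storage class has `allowVolumeExpansion: true`; a sketch of the relevant PVC fragment (claim size and previous value are illustrative):

```yaml
# Edit the PVC and raise spec.resources.requests.storage;
# the resizer then calls ControllerExpandVolume against the driver.
spec:
  resources:
    requests:
      storage: 20Gi   # illustrative: previously e.g. 10Gi
```

The same edit can be applied in place with `kubectl patch pvc csi-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'`.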
Fetches driver information using `NodeGetInfo` from the driver and registers it with the kubelet on that node using a Unix domain socket. The kubelet then triggers `NodeGetInfo`, `NodeStageVolume`, and `NodePublishVolume` operations against the driver.
- Image: k8s.gcr.io/sig-storage/csi-node-driver-registrar