Merge pull request #78 from saikat-royc/readme
Readme
k8s-ci-robot authored Nov 17, 2020
2 parents d085a29 + ad73a27 commit 793666e
Showing 12 changed files with 241 additions and 39 deletions.
2 changes: 1 addition & 1 deletion README.md
Original file line number Diff line number Diff line change
@@ -51,7 +51,7 @@ Note that non-default networks require extra [firewall setup](https://cloud.goog
* Volume resizing: CSI Filestore driver supports volume expansion for all supported Filestore tiers.
* Labels: Filestore supports labels per instance, which is a map of key value pairs. Filestore CSI driver enables user provided labels
to be stamped on the instance. User can provide labels by using 'labels' key in StorageClass.parameters. In addition, Filestore instance can
be labelled with information about what PVC/PV the instance was created for. To obtain the PVC/PV information, '--extra-create-metadata' flag needs to be set on the CSI external-provisioner sidecar. User provided label keys and values must comply with the naming convention as specified [here](https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements)
be labelled with information about what PVC/PV the instance was created for. To obtain the PVC/PV information, '--extra-create-metadata' flag needs to be set on the CSI external-provisioner sidecar. User provided label keys and values must comply with the naming convention as specified [here](https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements). Please see [this](examples/kubernetes/sc-labels.yaml) storage class example to apply custom user-provided labels to the Filestore instance.
* Topology preferences: Filestore performance and network usage are affected by topology. For example, it is recommended to run
workloads in the same zone where the Cloud Filestore instance is provisioned. The following table describes how provisioning can be tuned by topology. The volumeBindingMode is specified in the StorageClass used for provisioning. 'strict-topology' is a flag passed to the CSI provisioner sidecar. 'allowedTopology' is also specified in the StorageClass. The Filestore driver will use the first topology in the preferred list, or, if that list is empty, the first in the requisite list. If the topology feature is not enabled in the CSI provisioner (--feature-gates=Topology=false), CreateVolume.accessibility_requirements will be nil, and the driver simply creates the instance in the zone where the driver deployment is running.
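As an illustration of the knobs above, a StorageClass sketch pinning provisioning to a zone (the class name and zone are illustrative, and the topology key shown is the GKE zonal one; verify it against your driver deployment):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-filestore-topo        # illustrative name
provisioner: filestore.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer   # delay provisioning until a pod is scheduled
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - us-central1-a               # illustrative zone
```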

95 changes: 95 additions & 0 deletions docs/kubernetes/fsgroup.md
@@ -0,0 +1,95 @@
# CSI driver FsGroup User Guide

>**Attention:** 'CSIVolumeFSGroupPolicy' is a Kubernetes feature that is Beta in 1.20+ and Alpha in 1.19.
>**Attention:** The CSIDriver object 'fsGroupPolicy' field was added in Kubernetes 1.19 and cannot be set when using an older Kubernetes release. This workaround is applicable to 1.19+ k8s versions. For 1.20+ k8s versions the feature will be enabled by default, but the workaround will still be needed until [issue](https://github.com/kubernetes-sigs/gcp-filestore-csi-driver/issues/77) is resolved.

The 'CSIVolumeFSGroupPolicy' feature lets CSI drivers explicitly declare support for fsGroup.
Until the feature is beta, Kubernetes applies fsGroup only to CSI volumes that are RWO (ReadWriteOnce). Kubernetes uses fsGroup to change the permissions and ownership of the volume to match the fsGroup requested in the pod's SecurityContext. As a workaround until the CSIVolumeFSGroupPolicy feature is beta, we can deploy a PV backing a Filestore instance in RWO mode, apply the fsGroup, and then recreate the PV with RWX (ReadWriteMany) mode so that it can be used for multi-reader-writer workloads. This workaround does not require pods to run containers as the root user. To read more about CSIVolumeFSGroupPolicy, see [here](https://kubernetes-csi.github.io/docs/csi-driver-object.html).
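Once a cluster supports the `fsGroupPolicy` field, declaring fsGroup support directly would make the RWO-to-RWX recreation steps below unnecessary. A sketch of such a CSIDriver object (the driver's actual shipped manifest may set other fields):

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: filestore.csi.storage.gke.io
spec:
  attachRequired: false
  fsGroupPolicy: File   # kubelet applies fsGroup regardless of access mode or fsType
```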


### FsGroup example

1. Create `StorageClass`

```console
$ kubectl apply -f ./examples/kubernetes/fsgroup/demo-sc.yaml
```
If the Filestore instance is going to use a non-default network, set up the `network` parameter in the StorageClass.
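For example, uncommenting the `network` key in `demo-sc.yaml` would look like this sketch (the VPC name `my-vpc` is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-filestore
provisioner: filestore.csi.storage.gke.io
parameters:
  network: my-vpc   # illustrative non-default VPC name
allowVolumeExpansion: true
```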

2. Create a PV with accessModes `ReadWriteOnce` and ReclaimPolicy `Retain`.

**Note:** The `volumeHandle` should be updated
based on the zone, Filestore instance name, and share name created. The `storage` value
should match the size of the underlying instance. The volumeAttributes `ip` must
point to the Filestore instance IP, and `volume` must point to the [fileshare](https://cloud.google.com/filestore/docs/reference/rest/v1beta1/projects.locations.instances#FileShareConfig) name.

```console
$ kubectl apply -f ./examples/kubernetes/fsgroup/preprov-pv.yaml
```
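The PVC checked below (and the pod used in the next step) are defined in `preprov-pod-pvc-rwo.yaml`, added by this commit; apply it as well:

```console
$ kubectl apply -f ./examples/kubernetes/fsgroup/preprov-pod-pvc-rwo.yaml
```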

```console
$ kubectl get pvc preprov-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
preprov-pvc Bound my-pre-pv 1Ti RWO csi-filestore 9m14s
```

3. Verify that the pod is up and running, and that the fsGroup ownership change has been applied to the volume.
```console
$ kubectl exec busybox-pod -- ls -l /tmp
total 16
drwxrws--- 2 root 4000 16384 Nov 16 23:25 lost+found
```

4. Now the dummy pod and the PVC can be deleted.
```console
$ kubectl delete po busybox-pod
pod "busybox-pod" deleted
```

Since the PV has a 'Retain' reclaim policy, the underlying PV and Filestore instance will not be deleted. Once the PVC is deleted, the PV enters the 'Released' phase.
```console
$ kubectl delete pvc preprov-pvc
persistentvolumeclaim "preprov-pvc" deleted
```

5. Edit the PV to change the access mode to RWX (ReadWriteMany), and remove the claimRef so that the PV becomes 'Available' again.
```console
$ kubectl patch pv my-pre-pv -p '{"spec":{"accessModes":["ReadWriteMany"]}}'
$ kubectl patch pv my-pre-pv -p '{"spec":{"claimRef":null}}'
```
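The second patch clears `claimRef`; without it the PV would stay 'Released' and refuse new claims. After both patches, the relevant part of the PV spec should look roughly like this sketch:

```yaml
spec:
  accessModes:
  - ReadWriteMany        # widened from ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: csi-filestore
  # claimRef has been cleared, so the PV returns to 'Available'
```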

```console
$ kubectl get pv my-pre-pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pre-pv 1Ti RWX Retain Available csi-filestore 9m54s
```

6. Re-use the same PV through a new RWX PVC in a multi-pod deployment and ensure that the deployment is up and running.
```console
$ kubectl apply -f ./examples/kubernetes/fsgroup/demo-deployment.yaml
```

```console
$ kubectl get deployment web-server-deployment
NAME READY UP-TO-DATE AVAILABLE AGE
web-server-deployment 3/3 3 3 12m
```

7. Check the volume ownership by running exec in each pod of the deployment.
```console
$ kubectl exec web-server-deployment-679dc45b5b-6xdvr -- ls -l /usr/share/nginx/html
total 16
drwxrws--- 2 root 4000 16384 Nov 16 23:25 lost+found
```

```console
$ kubectl exec web-server-deployment-679dc45b5b-phcxp -- ls -l /usr/share/nginx/html
total 16
drwxrws--- 2 root 4000 16384 Nov 16 23:25 lost+found
```

```console
$ kubectl exec web-server-deployment-679dc45b5b-z2n8s -- ls -l /usr/share/nginx/html
total 16
drwxrws--- 2 root 4000 16384 Nov 16 23:25 lost+found
```
2 changes: 1 addition & 1 deletion docs/kubernetes/resize.md
@@ -8,7 +8,7 @@
This example dynamically provisions a filestore instance and performs online resize of the instance (i.e while the volume is mounted on a Pod). For more details about CSI VolumeExpansion capability see [here](https://kubernetes-csi.github.io/docs/volume-expansion.html)

1. Ensure the resize field `allowVolumeExpansion` is set to true in the example zonal StorageClass.
```
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
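With `allowVolumeExpansion: true`, expansion is then triggered simply by raising the PVC's storage request; a sketch (PVC name and sizes are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-fs   # illustrative name
spec:
  resources:
    requests:
      storage: 2Ti    # raised from 1Ti to trigger online expansion
```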
59 changes: 30 additions & 29 deletions docs/kubernetes/topology.md
@@ -38,11 +38,12 @@ This example dynamically provisions a filestore instance and uses storage class
2. Wait for PVC to reach 'Bound' status.
```console
$ kubectl get pvc test-pvc-fs-immediate-binding-allowedtopo
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-pvc-fs-immediate-binding-allowedtopo Bound pvc-64e6ce36-523d-4172-b3b3-3c1080ab0b9e 1Ti RWX csi-filestore-immediate-binding-allowedtopo 5m7s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-pvc-fs-immediate-binding-allowedtopo Bound pvc-64e6ce36-523d-4172-b3b3-3c1080ab0b9e 1Ti RWX csi-filestore-immediate-binding-allowedtopo 5m7s
```

3. Verify that the `volumeHandle` captured in the PersistentVolume object specifies the intended zone.
```yaml
```yaml
kubectl get pv pvc-64e6ce36-523d-4172-b3b3-3c1080ab0b9e -o yaml
apiVersion: v1
kind: PersistentVolume
@@ -85,32 +86,32 @@ This example dynamically provisions a filestore instance and uses storage class
$ gcloud beta filestore instances describe pvc-64e6ce36-523d-4172-b3b3-3c1080ab0b9e --zone us-central1-a
```
```yaml
createTime: '2020-11-13T03:46:18.870400740Z'
fileShares:
- capacityGb: '1024'
name: vol1
nfsExportOptions:
- accessMode: READ_WRITE
ipRanges:
- 10.0.0.0/8
- 172.16.0.0/12
- 192.168.0.0/16
squashMode: NO_ROOT_SQUASH
labels:
kubernetes_io_created-for_pv_name: pvc-64e6ce36-523d-4172-b3b3-3c1080ab0b9e
kubernetes_io_created-for_pvc_name: test-pvc-fs-immediate-binding-allowedtopo
kubernetes_io_created-for_pvc_namespace: default
storage_gke_io_created-by: filestore_csi_storage_gke_io
name: projects/<your-gcp-project>/locations/us-central1-a/instances/pvc-64e6ce36-523d-4172-b3b3-3c1080ab0b9e
networks:
- ipAddresses:
- <Filestore instance IP>
modes:
- MODE_IPV4
network: default
reservedIpRange: <IP CIDR>
state: READY
tier: STANDARD
createTime: '2020-11-13T03:46:18.870400740Z'
fileShares:
- capacityGb: '1024'
name: vol1
nfsExportOptions:
- accessMode: READ_WRITE
ipRanges:
- 10.0.0.0/8
- 172.16.0.0/12
- 192.168.0.0/16
squashMode: NO_ROOT_SQUASH
labels:
kubernetes_io_created-for_pv_name: pvc-64e6ce36-523d-4172-b3b3-3c1080ab0b9e
kubernetes_io_created-for_pvc_name: test-pvc-fs-immediate-binding-allowedtopo
kubernetes_io_created-for_pvc_namespace: default
storage_gke_io_created-by: filestore_csi_storage_gke_io
name: projects/<your-gcp-project>/locations/us-central1-a/instances/pvc-64e6ce36-523d-4172-b3b3-3c1080ab0b9e
networks:
- ipAddresses:
- <Filestore instance IP>
modes:
- MODE_IPV4
network: default
reservedIpRange: <IP CIDR>
state: READY
tier: STANDARD
```

5. Ensure that the deployment is up and running.
38 changes: 38 additions & 0 deletions examples/kubernetes/fsgroup/demo-deployment.yaml
@@ -0,0 +1,38 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-server-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- mountPath: /usr/share/nginx/html
name: mypvc
volumes:
- name: mypvc
persistentVolumeClaim:
claimName: test-pvc-rwm
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-pvc-rwm
spec:
accessModes:
- ReadWriteMany
storageClassName: csi-filestore
resources:
requests:
storage: 1Ti
8 changes: 8 additions & 0 deletions examples/kubernetes/fsgroup/demo-sc.yaml
@@ -0,0 +1,8 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: csi-filestore
provisioner: filestore.csi.storage.gke.io
parameters:
# network: default
allowVolumeExpansion: true
38 changes: 38 additions & 0 deletions examples/kubernetes/fsgroup/preprov-pod-pvc-rwo.yaml
@@ -0,0 +1,38 @@
apiVersion: v1
kind: Pod
metadata:
name: busybox-pod
labels:
app: busybox
spec:
containers:
- image: busybox
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
name: busybox
volumeMounts:
- mountPath: /tmp/
name: mypvc
volumes:
- name: mypvc
persistentVolumeClaim:
claimName: preprov-pvc
restartPolicy: Always
securityContext:
runAsGroup: 4000 # Replace with desired GID
runAsUser: 100 # Replace with desired UID
    fsGroup: 4000 # Replace with desired GID. The value of this field will be applied to the volume.
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: preprov-pvc
spec:
accessModes:
- ReadWriteOnce
storageClassName: csi-filestore
resources:
requests:
storage: 1Ti
22 changes: 22 additions & 0 deletions examples/kubernetes/fsgroup/preprov-pv.yaml
@@ -0,0 +1,22 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pre-pv
annotations:
pv.kubernetes.io/provisioned-by: filestore.csi.storage.gke.io
spec:
storageClassName: "csi-filestore"
capacity:
storage: 1Ti
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: "Retain"
volumeMode: "Filesystem"
csi:
driver: "filestore.csi.storage.gke.io"
    fsType: "nfs" # This field is needed: if the PV fsType is not specified, the kubelet skips applying fsGroup to RWO volumes
    # Modify this to use the zone, filestore instance and share name.
    volumeHandle: "modeInstance/<zone>/<filestore-instance-name>/<filestore-share-name>"
volumeAttributes:
ip: <Filestore Instance IP> # Modify this to Pre-provisioned Filestore instance IP
volume: vol1 # Modify this to Pre-provisioned Filestore instance share name
4 changes: 2 additions & 2 deletions examples/kubernetes/pre-provision/preprov-pv.yaml
@@ -14,8 +14,8 @@ spec:
volumeMode: "Filesystem"
csi:
driver: "filestore.csi.storage.gke.io"
# Modify this to use the one, filestore instance and share name
volumeHandle: "modeInstance/us-central1-c/test-preprov-fs/vol1"
# Modify this to use the zone, filestore instance and share name.
volumeHandle: "modeInstance/<zone>/<filestore-instance-name>/<filestore-share-name>"
volumeAttributes:
ip: <Filestore Instance IP> # Modify this to Pre-provisioned Filestore instance IP
volume: vol1 # Modify this to Pre-provisioned Filestore instance share name
3 changes: 0 additions & 3 deletions test/k8s-integration/config/test-config-template.in
@@ -16,9 +16,6 @@ DriverInfo:
StressTestOptions:
NumPods: 10
NumRestarts: 10
SupportedMountOption:
debug:
nouid32:
SupportedSizeRange:
Min: {{.MinimumVolumeSize}}
Max: 64Ti
2 changes: 1 addition & 1 deletion test/k8s-integration/main.go
@@ -61,7 +61,7 @@ var (
testFocus = flag.String("test-focus", "External.Storage", "test focus for Kubernetes e2e")

// SA for dev overlay
devOverlaySA = flag.String("dev-overlay-sa", "gcp-filestore-csi-driver-sa@saikatroyc-gke-dev.iam.gserviceaccount.com", "default SA that will be plumbed to the GCE instances")
devOverlaySA = flag.String("dev-overlay-sa", "", "default SA that will be plumbed to the GCE instances")
)

const (
7 changes: 5 additions & 2 deletions test/run-k8s-integration-local.sh
@@ -10,8 +10,8 @@ expansion_test_focus="External.*Storage.*allowExpansion"
# This version of the command builds and deploys the GCP Filestore CSI driver for the dev overlay.
# Points to a local K8s repository to get the e2e test binary, does not bring up
# or tear down the kubernetes cluster. In addition, it runs a subset of tests based on the test focus ginkgo string.
# E.g. run command: GCE_FS_CSI_STAGING_IMAGE=gcr.io/<your-gcp-project>/gcp-filestore-csi-driver KTOP=$GOPATH/src/k8s.io/kubernetes/ test/run-k8s-integration-local.sh | tee log
# For 'dev' overlay, GCE_FS_DEV_OVERLAY_SA_NAME is expected to be set. If deploy/project_setup.sh is used to create the SA, it would be 'gcp-filestore-csi-driver-sa@<your-gcp-project>.iam.gserviceaccount.com'

# E.g. run command: GCE_FS_CSI_STAGING_IMAGE=gcr.io/<your-gcp-project>/gcp-filestore-csi-driver KTOP=$GOPATH/src/k8s.io/kubernetes/ GCE_FS_DEV_OVERLAY_SA_NAME=gcp-filestore-csi-driver-sa@<your-gcp-project>.iam.gserviceaccount.com test/run-k8s-integration-local.sh | tee log

${PKGDIR}/bin/k8s-integration-test --run-in-prow=false \
--staging-image=${GCE_FS_CSI_STAGING_IMAGE} \
--deploy-overlay-name=dev --bringup-cluster=false --teardown-cluster=false --teardown-driver=false --test-focus=${subpath_test_focus} --local-k8s-dir=$KTOP \
--deploy-overlay-name=dev --dev-overlay-sa=${GCE_FS_DEV_OVERLAY_SA_NAME:-} --bringup-cluster=false --teardown-cluster=false --teardown-driver=false --test-focus=${subpath_test_focus} --local-k8s-dir=$KTOP \
--do-driver-build=true --gce-zone="us-central1-b" --num-nodes=${NUM_NODES:-3}
