diff --git a/docs/advanced/node-local-storage.md b/docs/advanced/node-local-storage.md
index cca8c9b9..9da4e817 100644
--- a/docs/advanced/node-local-storage.md
+++ b/docs/advanced/node-local-storage.md
@@ -20,9 +20,10 @@ ported to other providers for this to work.
 This way of using node local storage works by exposing a filesystem mounted in
 the host as a Persistent Volume to pods.
 
-A single Persistent Volume can be bound to only one PVC, so only one pod will be
-able to use one Persistent Volume in the host. However, see the section below to
-see how to create more than one Persistent Volume per node.
+A single Persistent Volume can be bound to only one PVC in ReadWriteOnce access
+mode, so only one pod will be able to use one Persistent Volume in the host.
+However, see the section below to see how to create more than one Persistent
+Volume per node.
 
 The Kubernetes documentation on Local Persistent Volumes using filesystem mode is
 [here](https://kubernetes.io/docs/concepts/storage/volumes/#local) (mixed with
@@ -98,7 +99,7 @@ spec:
   persistentVolumeReclaimPolicy: Retain
   storageClassName: local-storage
   local:
-    path: /mnt/
+    path: /mnt/node-local-storage
   nodeAffinity:
     required:
       nodeSelectorTerms:
@@ -121,7 +122,7 @@ spec:
   persistentVolumeReclaimPolicy: Retain
   storageClassName: local-storage
   local:
-    path: /mnt/
+    path: /mnt/node-local-storage
   nodeAffinity:
     required:
       nodeSelectorTerms:
@@ -208,13 +209,13 @@ worker-1-pv 10Gi RWO Retain Bound default/local-
 ```
 
 You can also connect to the pod and write to the volume which is mounted on
-`/usr/test-pod`, you will see in the host on `/mnt`.
+`/usr/test-pod`; on the host it appears under `/mnt/node-local-storage`.
 
 If you delete a pod, as it is part of this Stateful Set, it will be recreated
 and continue to use the volume.
 
-What if you want to delete this application and use the PVs for a different
-application?
+**What if you want to delete this application and use the PVs for a different
+application?**
 
 Basically, as the retain policy is set to `Retain` (just to be safe), you need
 to do some manual operations documented [here][1]. Note that using the `Delete`
@@ -244,10 +245,10 @@ directory can be used for node local storage. You can **manually** mount differe
 devices on `/mnt/sda` or paths inside `/mnt` at your convenience.
 
 1. Using `setup_raid = "true"` will create a single RAID 0 array with all
-the spare disks and mount it in `/mnt/`. Therefore all the node storage will be
-available to only one PV, thus it will be available to only one PVC and one pod.
-This is the only automated setup supported in Lokomotive at the time of this
-writing.
+the spare disks and mount it in `/mnt/node-local-storage`. Therefore, all the
+node storage will be available to only one PV, which in turn can be used by only
+one PVC and one pod. This is the only automated setup supported in Lokomotive
+at the time of writing.
 
 1. You can share the volume in different PVs if you create a sub-directory per
 PV, bind mount each of them and only expose the subdirectories. It is explained
@@ -342,9 +343,8 @@ https://docs.okd.io/latest/install_config/configuring_local.html#local-volume-ra
 
 ## Using OpenEBS Local PV
 
-OpenEBS seems to have a mode to use node local storage, but is not available in
-any stable release as the time of writing. For that reason, it was not evaluated
-and left for future research.
+OpenEBS seems to have a mode to use node local storage; it is listed here so it
+can be explored in the future.
 
 Some links that might be relevant when looking into this:
 
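
Reviewer note: below is a minimal sketch of how a workload could consume the local PVs defined in the patched document, assuming the `local-storage` StorageClass is the usual no-provisioner class for local PVs with `volumeBindingMode: WaitForFirstConsumer`. The StorageClass name, the 10Gi size, the ReadWriteOnce access mode and the `/usr/test-pod` mount path are taken from the document; the StatefulSet and claim names, image, and replica count are illustrative only and are not part of the patch.

```yaml
# Sketch only: StorageClass name, 10Gi size, RWO access mode and the
# /usr/test-pod mount path come from the document; names, image and
# replica count below are hypothetical.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
# Delay binding until a pod is scheduled, so a PV on the right node is picked.
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: local-test          # hypothetical name
spec:
  serviceName: local-test
  replicas: 2
  selector:
    matchLabels:
      app: local-test
  template:
    metadata:
      labels:
        app: local-test
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: data
          mountPath: /usr/test-pod   # mount path mentioned in the document
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-storage
      resources:
        requests:
          storage: 10Gi              # matches the PV size shown in the document
```

With `WaitForFirstConsumer`, each PVC stays unbound until its pod is scheduled, which lets the scheduler choose a node that actually has a matching local PV under `/mnt/node-local-storage`.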