Quobyte CSI is an implementation of the Container Storage Interface (CSI). It enables easy integration of Quobyte storage into Kubernetes. The current Quobyte CSI driver supports the following functionality
Choose a Quobyte CSI release from the available releases and follow the instructions specific to that release.
Quobyte CSI requires the Quobyte client, which makes Quobyte volumes available on each node under /mounts. Please see Deploy Quobyte clients for Quobyte client installation instructions.
Note: The Quobyte CSI driver automatically deletes application pods with stale Quobyte CSI volumes and leaves new pod creation to Kubernetes. For Kubernetes to reschedule a new pod automatically, applications should be deployed as a Deployment/ReplicaSet/StatefulSet, not as a plain Pod.
Add the quobyte-csi-driver helm repository to your helm repos
helm repo add quobyte-csi-driver https://quobyte.github.io/quobyte-csi-driver/helm
If the quobyte-csi-driver helm repo already exists in your helm repositories, update it to get the new Quobyte CSI driver releases
helm repo update quobyte-csi-driver
List all available Quobyte CSI versions
helm search repo quobyte-csi-driver/quobyte-csi-driver -l
List all customization options for Quobyte CSI driver
helm show values quobyte-csi-driver/quobyte-csi-driver [--version <chart-version>] # or use other "show <subcommands>"
Edit the Quobyte CSI driver configuration (./quobyte-csi-driver/values.yaml) and configure the CSI driver with the Quobyte API endpoint and other required information.
(optional) Generate the driver deployment .yaml and verify the configuration
helm template ./quobyte-csi-driver --debug > csi-driver.yaml
Deploy the Quobyte CSI driver with customizations
# Deploys the helm chart with the name "quobyte-csi".
# Please change "quobyte-csi" as required.
helm install quobyte-csi quobyte-csi-driver/quobyte-csi-driver [--version <chart-version>] \
  --set quobyte.apiURL="<your-api-url>" ....
or
helm install quobyte-csi quobyte-csi-driver/quobyte-csi-driver [--version <chart-version>] \
  -f <your-customized-values.yaml> [--set quobyte.apiURL="<your-api-url>" .. other overrides]
Verify the status of Quobyte CSI driver pods
Deploying the Quobyte CSI driver should create a CSIDriver object with your csiProvisionerName (this may take a few seconds)
CSI_PROVISIONER="<YOUR-csiProvisionerName>"
kubectl get CSIDriver | grep ^${CSI_PROVISIONER}
The Quobyte CSI driver is ready for use if you see a quobyte-csi-controller-x pod running on any one node and a quobyte-csi-node-xxxxx pod running on every node of the Kubernetes cluster.
CSI_PROVISIONER=$(echo $CSI_PROVISIONER | tr "." "-")
kubectl -n kube-system get po -owide | grep ^quobyte-csi-.*-${CSI_PROVISIONER}
Make sure your CSI driver is running against the expected Quobyte API endpoint
kubectl -n kube-system exec -it \
"$(kubectl get po -n kube-system | grep -m 1 ^quobyte-csi-node-$CSI_PROVISIONER \
| cut -f 1 -d' ')" -c quobyte-csi-driver -- env | grep QUOBYTE_API_URL
The above command should print your Quobyte API endpoint. Otherwise, uninstall the Quobyte CSI driver and install it again with the correct Quobyte API URL.
Note: Kubernetes storage classes are immutable. Do not delete existing definitions; such a deletion could cause issues for existing PVs/PVCs.
Note: This section uses example/ deployment files for demonstration. They should be modified with your deployment configuration, such as the namespace, Quobyte registry, Quobyte API user credentials, etc.
We use the quobyte namespace for the examples. Create the namespace
kubectl create ns quobyte
Quobyte requires a secret to authenticate volume create and delete requests. Create this secret with your Quobyte API login credentials (Kubernetes requires base64 encoding for secret data, which can be obtained with the command echo -n "value" | base64). Please encode your user name, password (and optionally access key information) in base64 and update example/quobyte-admin-credentials.yaml. If provided, the access key ensures that only authorized users can access the tenant and volumes (users must be restricted to their own namespace in the Kubernetes cluster).
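For illustration, the encoding step looks like the following; "admin" and "quobyte" are placeholder values, not real credentials:

```shell
# Base64-encode secret values for the Kubernetes Secret.
# The -n flag matters: without it, a trailing newline ends up
# inside the encoded credential and authentication fails.
echo -n "admin" | base64     # -> YWRtaW4=
echo -n "quobyte" | base64   # -> cXVvYnl0ZQ==
```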
kubectl create -f example/quobyte-admin-credentials.yaml
Create a storage class with the provisioner set to csi.quobyte.com along with other configuration parameters. You can create multiple storage classes by varying parameters such as quobyteTenant.
kubectl create -f example/StorageClass.yaml
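As a rough sketch, example/StorageClass.yaml might look like the following; the provisioner, the quobyteTenant parameter, and the csi.storage.k8s.io/* secret keys come from this guide, while the object name, secret name, and reclaim policy are illustrative assumptions:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: quobyte-csi            # assumed name
provisioner: csi.quobyte.com   # must match your csiProvisionerName
parameters:
  quobyteTenant: "My Tenant"
  csi.storage.k8s.io/provisioner-secret-name: quobyte-admin-credentials
  csi.storage.k8s.io/provisioner-secret-namespace: quobyte
reclaimPolicy: Delete
```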
Creating a PVC that references the storage class created in the previous step provisions a dynamic volume. The secret csi.storage.k8s.io/provisioner-secret-name from the namespace csi.storage.k8s.io/provisioner-secret-namespace in the referenced StorageClass is used to authenticate volume creation and deletion.
Create PVC to trigger dynamic provisioning
kubectl create -f example/pvc-dynamic-provision.yaml
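A sketch of what example/pvc-dynamic-provision.yaml could contain; the PVC name, storage class name, access mode, and size are assumptions for illustration:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: quobyte-dynamic-pvc    # assumed name
  namespace: quobyte
spec:
  storageClassName: quobyte-csi  # must reference your Quobyte storage class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```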
Mount the PVC in a pod as shown in the following example
kubectl create -f example/nginx-demo-pod-with-dynamic-vol.yaml
Wait for the pod to be in running state
kubectl get po -w | grep 'nginx-dynamic-vol'
Once the pod is running, copy the index file to the deployed nginx pod
kubectl cp example/index.html nginx-dynamic-vol:/tmp
kubectl exec -it nginx-dynamic-vol -- mv /tmp/index.html /usr/share/nginx/html/
kubectl exec -it nginx-dynamic-vol -- chown -R nginx:nginx /usr/share/nginx/html/
Access the home page served by nginx pod from the command line
curl http://$(kubectl get pods nginx-dynamic-vol -o yaml | grep ' podIP:' | awk '{print $2}'):80
The above command should retrieve the Quobyte CSI welcome page (in raw HTML). If you encounter an error, check whether you need to forward your local port to the pod.
NOTE: Depending on your cluster setup (for example, kind clusters), you may need to forward a local port to the container to access the nginx pod's port. In that case, you can use
kubectl port-forward nginx-dynamic-vol 8086:80
and then try curl localhost:8086
Quobyte CSI requires the volume UUID to be passed to the PV as the VolumeHandle. The VolumeHandle should be of the format <Tenant_Name/UUID>|<Volume_Name>, and nodePublishSecretRef with Quobyte API login credentials should be specified as shown in the example PV example/pv-existing-vol.yaml. The VolumeHandle can also be |<Volume_UUID>. To use the pre-provisioned test volume belonging to the tenant My Tenant, the user needs to create a PV with volumeHandle: My Tenant|test as shown in the example PV.
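Putting this together, the PV could be sketched as follows; the driver name and volumeHandle format come from this guide, while the PV name, capacity, storage class, and secret name are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: quobyte-existing-vol-pv   # assumed name
spec:
  storageClassName: quobyte-csi   # must match the PVC's storage class
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: csi.quobyte.com
    volumeHandle: "My Tenant|test"   # <Tenant_Name/UUID>|<Volume_Name>
    nodePublishSecretRef:
      name: quobyte-admin-credentials
      namespace: quobyte
```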
Edit example/pv-existing-vol.yaml and point it to the pre-provisioned volume in Quobyte storage through volumeHandle. Create the PV with the pre-provisioned volume.
kubectl create -f example/pv-existing-vol.yaml
Create a PVC that matches the storage requirements of the above PV (make sure both the PV and the PVC refer to the same storage class). The created PVC automatically binds to the PV.
kubectl create -f example/pvc-existing-vol.yaml
Create a pod referring to the PVC as shown in the example below
kubectl create -f example/nginx-demo-pod-with-existing-vol.yaml
Wait for the pod to be in running state
kubectl get po -w | grep 'nginx-existing-vol'
Once the pod is running, copy the index file to the deployed nginx pod
kubectl cp example/index.html nginx-existing-vol:/tmp
kubectl exec -it nginx-existing-vol -- mv /tmp/index.html /usr/share/nginx/html/
kubectl exec -it nginx-existing-vol -- chown -R nginx:nginx /usr/share/nginx/html/
Access the home page served by nginx pod from the command line
curl http://$(kubectl get pods nginx-existing-vol -o yaml | grep ' podIP:' | awk '{print $2}'):80
The above command should retrieve the Quobyte CSI welcome page (in raw HTML). If you encounter an error, check whether you need to forward your local port to the pod.
NOTE: Depending on your cluster setup (for example, kind clusters), you may need to forward a local port to the container to access the nginx pod's port. In that case, you can use
kubectl port-forward nginx-existing-vol 8086:80
and then try curl localhost:8086
Ensure the Quobyte CSI driver is deployed with enableSnapshots: true
Provision a PVC for a Quobyte volume by following the instructions above
Populate backing volume with nginx index file
VOLUME="<Quobyte-Volume>" # volume for which snapshot will be taken
wget https://raw.githubusercontent.com/quobyte/quobyte-csi/master/example/index.html -P <values.clientMountPoint>/mounts/$VOLUME
Create volume snapshot secrets
Our examples use the same secret everywhere a secret is required. Please create and configure secrets as per your requirements.
kubectl create -f example/quobyte-admin-credentials.yaml
Create volume snapshot class
kubectl create -f example/volume-snapshot-class.yaml
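A sketch of what example/volume-snapshot-class.yaml might contain; the class name and secret name are assumptions, the snapshotter-secret keys follow the standard CSI secret-parameter convention:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: quobyte-snapshot-class   # assumed name
driver: csi.quobyte.com
deletionPolicy: Delete
parameters:
  csi.storage.k8s.io/snapshotter-secret-name: quobyte-admin-credentials
  csi.storage.k8s.io/snapshotter-secret-namespace: quobyte
```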
Create dynamic volume snapshot
kubectl create -f example/volume-snapshot-dynamic-provision.yaml
The above command should dynamically create the required volumesnapshotcontent object.
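For reference, example/volume-snapshot-dynamic-provision.yaml could look roughly like this; all object names here are assumptions:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: quobyte-dynamic-snapshot   # assumed name
  namespace: quobyte
spec:
  volumeSnapshotClassName: quobyte-snapshot-class
  source:
    persistentVolumeClaimName: quobyte-dynamic-pvc  # PVC to snapshot
```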
(optional) Verify the created volumesnapshot and volumesnapshotcontent objects
kubectl get volumesnapshot
kubectl get volumesnapshotcontent
Restore snapshot and create PVC
kubectl create -f example/restore-snapshot-pvc-dynamic-provision.yaml
This should create a PVC and a PV for the restored snapshot
Create pod with restored snapshot
kubectl create -f example/nginx-demo-pod-with-dynamic-snapshot-vol.yaml
Create volume snapshot class
kubectl create -f example/volume-snapshot-class.yaml
Create volume snapshot secrets
Our examples use the same secret everywhere a secret is required. Please create and configure secrets as per your requirements.
kubectl create -f example/quobyte-admin-credentials.yaml
Create a VolumeSnapshotContent object for the pre-provisioned volume with the required configuration
kubectl create -f example/volume-snapshot-content-pre-provisioned.yaml
Create a VolumeSnapshot object by adjusting the example snapshot object; its name and namespace must match the volumeSnapshotRef details from step 2
kubectl create -f example/volume-snapshot-pre-provisioned.yaml
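The two objects could be sketched as follows; note that the VolumeSnapshot's name and namespace match the volumeSnapshotRef. All names and the snapshotHandle value are assumptions for illustration:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: quobyte-pre-provisioned-snapshot-content   # assumed name
spec:
  deletionPolicy: Retain
  driver: csi.quobyte.com
  source:
    snapshotHandle: "<snapshot-id>"   # snapshot identifier on the Quobyte side
  volumeSnapshotRef:
    name: quobyte-pre-provisioned-snapshot
    namespace: quobyte
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  # name/namespace must match the volumeSnapshotRef above
  name: quobyte-pre-provisioned-snapshot
  namespace: quobyte
spec:
  source:
    volumeSnapshotContentName: quobyte-pre-provisioned-snapshot-content
```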
(optional) Verify the created volumesnapshot and volumesnapshotcontent objects
kubectl get volumesnapshot
kubectl get volumesnapshotcontent
Restore the snapshot and create a PVC
kubectl create -f example/restore-snapshot-pvc-pre-provisioned.yaml
Create pod with restored snapshot
kubectl create -f example/nginx-demo-pod-with-pre-provisioned-snapshot-vol.yaml
Delete Quobyte CSI containers and corresponding RBAC
List available helm charts
helm list
Delete the intended chart
helm delete <Quobyte-CSI-chart-name>
The setup below is required once per Kubernetes cluster. Install the snapshot CRDs and the snapshot controller
kubectl create -f quobyte-csi-driver/k8s-snapshot-crd.yaml
kubectl create -f quobyte-csi-driver/k8s-snapshot-controller.yaml
To remove the snapshot controller and CRDs
kubectl delete -f quobyte-csi-driver/k8s-snapshot-controller.yaml
kubectl delete -f quobyte-csi-driver/k8s-snapshot-crd.yaml