Infinidat recommends using the InfiniBox CSI Driver for every current Kubernetes environment. Contact Infinidat support for legacy Kubernetes dynamic volume provisioner documentation.

Introduction

The screenshots in this document are for guidance only and might not show the latest version number.

Overview

The InfiniBox CSI Driver is a plugin that enables InfiniBox storage management in Kubernetes environments. 

Deploying the InfiniBox CSI Driver requires:

  • One or more secrets (one per InfiniBox)
  • A controller instance (one per cluster)
  • One or more node instances (one per worker node)

Use the InfiniBox CSI Driver to:

  • Manage multiple InfiniBox storage arrays
  • Provision and remove Persistent Volumes (PVs)
  • Take snapshots and restore from snapshots
  • Create clones of PVs
  • Create raw block storage 
  • Extend (resize) PVs
  • Import external datasets as PVs

The InfiniBox CSI Driver can be deployed using either a Helm chart or an OpenShift Operator.

The following access protocols are supported:

  • iSCSI
  • Fibre Channel (FC)
  • NFS
  • NFS-TreeQ - for very large clusters with hundreds of thousands of PVs per InfiniBox system

Software requirements

Software              Version
InfiniBox             5.0.0 or above
Container platform    Kubernetes 1.20-1.23
                      Red Hat OpenShift 4.6 EUS - 4.10
Operating system      Ubuntu 16.04 / 18.04 / 20.04
                      CentOS 7.x / 8.x
                      RHEL 7.x / 8.x

Upstream Kubernetes interfaces change frequently, and these changes can break CSI Driver functionality. For this reason, NFS-TreeQ is currently not available in Kubernetes 1.20+, or in derivative distributions such as Red Hat OpenShift 4.7+.

Contact Infinidat support or your Technical Advisor for more information.

InfiniBox prerequisites

  • A dedicated pool for every Kubernetes storage class (recommended)

  • A pool admin (recommended) or an admin account 

  • A network space configured for iSCSI, NFS, or NFS-TreeQ access

Kubernetes cluster prerequisites

All worker nodes in the cluster must be able to access InfiniBox via the protocols you intend to attach.

  • For Ethernet attachment, ensure that your router/firewall configurations allow the traffic.
  • For Fibre Channel (FC) attachment, ensure that zoning is configured correctly.

To ensure proper host configuration for iSCSI and FC connectivity, it is recommended to deploy Host PowerTools on all worker nodes.

Do not pre-create hosts for worker nodes within InfiniBox. The InfiniBox CSI Driver handles this, registering worker nodes automatically, and may behave unpredictably if matching hosts are already registered on the InfiniBox.

The InfiniBox CSI Driver, like similar CSI Drivers, manages low-level connectivity such as mounts and multipathing settings. This means that your Kubernetes cluster must allow "privileged pods", on both the API server and the kubelet. Privileged pods are enabled by default in many environments, including kubeadm, Rancher K3s, GCE, and GKE.

Ensure that:

  • kube-apiserver is started with the --allow-privileged=true flag.
  • All PodSecurityContext, PodSecurityPolicy (deprecated in Kubernetes 1.21), and other security mechanisms allow running privileged containers on the relevant nodes.
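
For example, on a kubeadm-based control plane you can confirm the API server flag by inspecting the static pod manifest (the path below assumes kubeadm defaults):

grep -- '--allow-privileged' /etc/kubernetes/manifests/kube-apiserver.yaml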

For iSCSI access, nodes must have:

  • Multipath driver
  • iscsid
  • File system software (XFS / EXT3 / EXT4)
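
For example, on Ubuntu the required packages can be installed and enabled as follows (package names are distribution-specific; on RHEL/CentOS they are iscsi-initiator-utils and device-mapper-multipath):

sudo apt-get install -y open-iscsi multipath-tools xfsprogs
sudo systemctl enable --now iscsid multipathd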

For FC access, nodes must have:

  • Multipath driver
  • FC HBA driver
  • File system software (XFS / EXT3 / EXT4)
  • Proper FC zoning

For VMware-based deployments, FC is only supported in passthrough mode.

For NFS or NFS-TreeQ, nodes must have:

  • NFS client software
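
For example, on Ubuntu (use nfs-utils on RHEL/CentOS):

sudo apt-get install -y nfs-common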

Important note: If you plan to use CSI snapshots, make sure your Kubernetes cluster has a snapshot controller (CSI Snapshotter). Some Kubernetes platforms, such as OpenShift, include it by default. Contact Infinidat support if you need further assistance.
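
To check whether the snapshot CRDs and a snapshot controller are already present, you can run, for example:

$ kubectl get crd volumesnapshots.snapshot.storage.k8s.io
$ kubectl get pods --all-namespaces | grep -i snapshot-controller

The controller pod name and namespace vary by distribution.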

Installation

Downloading InfiniBox CSI Driver version 2.1.2

InfiniBox CSI Driver version 2.1.2 is available on GitHub. To download this version, run:

git clone --single-branch --branch v2.1.2 https://github.com/Infinidat/infinibox-csi-driver.git

Installing InfiniBox CSI Driver using Helm chart

Go to the infinibox-csi-driver/deploy/helm/infinibox-csi-driver folder.

Update the Infinibox_Cred section in values.yaml (a sketch follows the list below):

  • hostname: InfiniBox management interface IP address or host name 

  • username / password: InfiniBox credentials

  • inbound_user / inbound_secret / outbound_user / outbound_secret: optional credentials for iSCSI CHAP authentication

  • SecretName: name to be used later in the StorageClass to define a specific InfiniBox for persistent volumes
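
A minimal sketch of the section (placeholder values; the exact layout may differ between driver versions):

Infinibox_Cred:
  hostname: ibox0001.company.com
  username: admin
  password: "123456"
  SecretName: infinibox-creds
  # inbound_user / inbound_secret / outbound_user / outbound_secret:
  # optional - set only when using iSCSI CHAP authentication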

It is recommended to use a dedicated namespace, such as infi, for InfiniBox CSI Driver deployment. To create the namespace, run:

kubectl create namespace infi

Install the driver using Helm:

helm install csi-infinibox -n=infi ./
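
To verify that the controller and node pods are running:

kubectl get pods -n infi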

Installing InfiniBox CSI Driver using Operator (OpenShift only)

InfiniBox CSI Driver supports an Operator that can be deployed using the Red Hat OpenShift OperatorHub or other standard Operator deployment methods.

To deploy InfiniBox CSI Driver via Operator in OpenShift:

  1. In the OpenShift console, browse to the Operators > OperatorHub view, and search for Infinidat.
  2. Select the Operator, and click Install.
  3. In the Installation mode section, select A specific namespace to use a dedicated namespace such as infi, or the default All namespaces on the cluster.
  4. In the Update approval section, select Automatic if you want OpenShift to auto-update your InfiniBox CSI Driver whenever an update becomes available, or select Manual.
  5. Click the Install button to install the Operator.

    The CSI Driver is created and installed in the next steps.

  6. Browse to the Operators > Installed Operators view > InfiniBox CSI Driver - Operator.
  7. Open the Operator details of the newly-installed Operator, select the InfiniboxCsiDriver tab, and click the Create InfiniboxCsiDriver button.
  8. Update the InfiniBox credentials in the YAML file as needed, and click Create.
    If you created a dedicated namespace as suggested above, be sure to enter it correctly here.
  9. Browse to the Workloads > Pods view, and confirm that the Operator and the CSI Driver are running:
    • " controller-manager* "
    • "infiniboxcsidriver-sample-driver-o"

If your controller-manager pod shows CrashLoopBackOff status or a similar out-of-memory type error, you might need to increase the relevant memory limit as defined in the Operator ClusterServiceVersion, and then redeploy the Operator. Contact Infinidat support for guidance.

Upgrades - using Operator (OpenShift only)

Before upgrading, please contact your Technical Advisor for details about each version and its compatibility with your environment.

InfiniBox CSI Driver follows standard Kubernetes upgrade methods. New version deployment does not affect existing persistent volumes (PVs) or persistent volume claims (PVCs), unless otherwise indicated in the relevant release notes.

For OpenShift Operator-based deployment, the default InfiniBox CSI Driver installation value for 'Update approval' is set to Automatic, so that your CSI Driver is always up to date. If the value was changed to Manual, all upgrades must be done manually.

To upgrade manually:

  1. In the OpenShift console, browse to the Operators > Installed Operators view to see if an upgrade for InfiniBox CSI Driver - Operator is available.
  2. Select InfiniBox CSI Driver - Operator, and click Upgrade available.
  3. Click the Install button.
  4. Review the details, and click the Preview manual InstallPlan button to begin the upgrade.
  5. When the upgrade finishes, the Status value changes to Complete.

Usage

Sample yaml files for different protocols are available at https://github.com/Infinidat/infinibox-csi-driver/tree/master/deploy/examples.

oc / kubectl

Note: Depending on your platform, use the 'oc' command (OpenShift) or 'kubectl' (vanilla Kubernetes) in the examples below.

Defining a StorageClass

A StorageClass provides a way for administrators to describe the “classes” of storage they offer. Different classes might map to quality-of-service levels, backup policies, or other arbitrary policies determined by the cluster administrators. This concept of different classes for different purposes is sometimes called “profiles” in other storage systems. See Kubernetes documentation for more details.

A StorageClass with an InfiniBox CSI Driver maps to a specific pool on an InfiniBox. No two StorageClasses are allowed within the same InfiniBox pool.

Important note: If you plan to use "Filesystem" accessMode and XFS fsType with InfiniBox block devices (via iSCSI or FC), and you intend to attach a volume and its snapshots to the same node, enable the allowXfsUuidRegeneration option in the CSI Driver values.yaml so the driver will automatically generate a new XFS UUID and avoid conflicts.  Without this option, the filesystem and its snapshots would have the same XFS UUID, and this would cause conflicts when you try to attach the snapshot to the node. This is a global setting applicable to all StorageClasses.

StorageClass example for Fibre Channel protocol

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: infi-fc-storageclass-demo
provisioner: infinibox-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters: 
  # this section points to the secret file with the credentials (defining InfiniBox IP, user name and password)
  csi.storage.k8s.io/provisioner-secret-name: infinibox-creds
  csi.storage.k8s.io/provisioner-secret-namespace: infi
  csi.storage.k8s.io/controller-publish-secret-name: infinibox-creds
  csi.storage.k8s.io/controller-publish-secret-namespace: infi
  csi.storage.k8s.io/node-stage-secret-name: infinibox-creds
  csi.storage.k8s.io/node-stage-secret-namespace: infi
  csi.storage.k8s.io/node-publish-secret-name: infinibox-creds
  csi.storage.k8s.io/node-publish-secret-namespace: infi
  csi.storage.k8s.io/controller-expand-secret-name: infinibox-creds
  csi.storage.k8s.io/controller-expand-secret-namespace: infi

  # define file system for the provisioned volume. Supported options are xfs, ext3, ext4
  fstype: xfs

  # define InfiniBox-specific parameters 
  storage_protocol: "fc"
  pool_name: "k8s_csi"
  provision_type: "THIN"
  ssd_enabled: "true"

  # define how many volumes can be created per Worker node. It's recommended not to create more than 50 volumes per host
  max_vols_per_host: "20"

StorageClass example for iSCSI protocol

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: infi-iscsi-storageclass-demo
provisioner: infinibox-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters: 
  # this section points to the secret file with the credentials (defining InfiniBox IP, user name and password)
  csi.storage.k8s.io/provisioner-secret-name: infinibox-creds
  csi.storage.k8s.io/provisioner-secret-namespace: infi
  csi.storage.k8s.io/controller-publish-secret-name: infinibox-creds
  csi.storage.k8s.io/controller-publish-secret-namespace: infi
  csi.storage.k8s.io/node-stage-secret-name: infinibox-creds
  csi.storage.k8s.io/node-stage-secret-namespace: infi
  csi.storage.k8s.io/node-publish-secret-name: infinibox-creds
  csi.storage.k8s.io/node-publish-secret-namespace: infi
  csi.storage.k8s.io/controller-expand-secret-name: infinibox-creds
  csi.storage.k8s.io/controller-expand-secret-namespace: infi

  # define whether CHAP should be used to protect access to volumes. Supported options: none, chap, mutual_chap
  useCHAP: "mutual_chap" 

  # define file system and permissions for the provisioned volume
  fstype: xfs  				# options are xfs, ext3, ext4
  uid: 1000    				# optional - set uid if volume mountpoint should be chown'ed
  gid: 1000    				# optional - set gid if volume mountpoint should be chown'ed
  unix_permissions: 777 	# optional - set permissions if volume mountpoint should be chmod'ed

  # define InfiniBox-specific parameters 
  storage_protocol: "iscsi"
  network_space: "iscsi1"
  pool_name: "k8s_csi"
  provision_type: "THIN"
  ssd_enabled: "true"

  # define how many volumes can be created per Worker node. It's recommended not to create more than 50 volumes per host
  max_vols_per_host: "20"

StorageClass example for NFS protocol

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibox-nfs-storageclass-demo
provisioner: infinibox-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
mountOptions: # optional: defaults shown below, be sure to include vers=3 if you override
  - vers=3
  - tcp
  - rsize=262144
  - wsize=262144
parameters:
    # reference secret with InfiniBox credentials
    csi.storage.k8s.io/controller-expand-secret-name: infinibox-creds
    csi.storage.k8s.io/controller-expand-secret-namespace: infi
    csi.storage.k8s.io/controller-publish-secret-name: infinibox-creds
    csi.storage.k8s.io/controller-publish-secret-namespace: infi
    csi.storage.k8s.io/node-publish-secret-name: infinibox-creds
    csi.storage.k8s.io/node-publish-secret-namespace: infi
    csi.storage.k8s.io/node-stage-secret-name: infinibox-creds
    csi.storage.k8s.io/node-stage-secret-namespace: infi
    csi.storage.k8s.io/provisioner-secret-name: infinibox-creds
    csi.storage.k8s.io/provisioner-secret-namespace: infi

    # InfiniBox configuration
    storage_protocol: nfs
    network_space: my_nfs_network_space # InfiniBox network space name
    nfs_export_permissions: "[{'access':'RW','client':'192.168.147.190-192.168.147.199','no_root_squash':false}]" # add node IPs here
    pool_name: my_nfs_pool # InfiniBox pool name
    provision_type: THIN
    ssd_enabled: "true"

    # optional parameters
    # snapdir_visible: "true"   # optional: specify whether .snapshot directory is visible
    # uid: 1000                 # optional: override default UID for filesystem mount 
    # gid: 1000                 # optional: override default GID for filesystem mount
    # unix_permissions: 777     # optional: override default permissions for filesystem mount
    # privileged_ports_only: no # optional: force use of privileged ports only

StorageClass example for NFS-TreeQ

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: infi-nfs-storageclass-demo
provisioner: infinibox-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters: 
   # this section points to the secret file with the credentials (defining InfiniBox IP, user name and password)
   csi.storage.k8s.io/provisioner-secret-name: infinibox-creds
   csi.storage.k8s.io/provisioner-secret-namespace: infi
   csi.storage.k8s.io/controller-publish-secret-name: infinibox-creds
   csi.storage.k8s.io/controller-publish-secret-namespace: infi
   csi.storage.k8s.io/node-stage-secret-name: infinibox-creds
   csi.storage.k8s.io/node-stage-secret-namespace: infi
   csi.storage.k8s.io/node-publish-secret-name: infinibox-creds
   csi.storage.k8s.io/node-publish-secret-namespace: infi
   csi.storage.k8s.io/controller-expand-secret-name: infinibox-creds
   csi.storage.k8s.io/controller-expand-secret-namespace: infi

   # define InfiniBox-specific parameters
   network_space: "nas"
   pool_name: "k8s_csi"
   provision_type: "THIN"
   storage_protocol: "nfs_treeq"
   ssd_enabled: "true"

   # define NFS client mount options
   nfs_mount_options: vers=3,hard,rsize=1048576,wsize=1048576

   # define NFS export rules
   nfs_export_permissions: "[{'access':'RW','client':'192.168.0.1-192.168.0.255','no_root_squash':true}]"

   # define max amount of file systems to be created by the driver
   max_filesystems: "2000"
   # define max amount of treeqs to be created by the driver per file system
   max_treeqs_per_filesystem: "2000"
   # define max size of a single file system
   max_filesystem_size: 4tib

Creating a sample StorageClass

$ kubectl create -f storageclass.yaml
storageclass.storage.k8s.io/ibox-nfs-storageclass-demo created

$ kubectl get storageclass
NAME                         PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE  ALLOWVOLUMEEXPANSION  AGE
ibox-nfs-storageclass-demo   infinibox-csi-driver   Delete          Immediate          true                  9s

Defining a persistent volume claim

A persistent volume claim (PVC) is a request for the platform to create a persistent volume (PV). Each PVC contains the spec (specification) and status of the claim.

PVC example for Fibre Channel or iSCSI

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
  namespace: infi
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block # Supported options: Block, Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: infi-storageclass-demo

PVC example for NFS or NFS-TreeQ

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: infi-pvc-demo
  namespace: infi
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: infi-storageclass-demo

Creating a sample PVC

$ kubectl create -f pvc.yaml
persistentvolumeclaim/infi-pvc-demo created
$ kubectl get pvc -n infi
NAME            STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS                 AGE
infi-pvc-demo   Bound    csi-d38f4662c8   1Gi        RWX            infi-nfs-storageclass-demo   4s

$ kubectl get pv csi-d38f4662c8
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS                 REASON   AGE
csi-d38f4662c8   1Gi        RWX            Delete           Bound    infi/infi-pvc-demo   infi-nfs-storageclass-demo            26s


The PV name (csi-d38f4662c8 in the example above) is also a filesystem or a volume name on the InfiniBox.


The overall mount path length is limited to 128 characters due to upstream Kubernetes data structures in Kubernetes version 1.21 and earlier. The standard mount path prefix on Linux hosts consumes 91 characters, leaving a maximum of 37 characters for persistent volume names.

Defining a volume snapshot class

$ cat snapshotclass.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: infi-snapshotclass-demo
driver: infinibox-csi-driver
deletionPolicy: Delete
parameters:
  csi.storage.k8s.io/snapshotter-secret-name: infinibox-creds
  csi.storage.k8s.io/snapshotter-secret-namespace: infi

$ kubectl create -f snapshotclass.yaml
volumesnapshotclass.snapshot.storage.k8s.io/infi-snapshotclass-demo created
$ kubectl get volumesnapshotclass
NAME                      AGE
infi-snapshotclass-demo   14s

Defining a snapshot

A snapshot of a PVC is a read-only InfiniBox snapshot for the relevant PV.

$ cat snapshot.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: infi-pvc-snapshot-demo
  namespace: infi
spec:
  volumeSnapshotClassName: infi-snapshotclass-demo
  source:
    persistentVolumeClaimName: infi-pvc-demo

$ kubectl create -f snapshot.yaml
volumesnapshot.snapshot.storage.k8s.io/infi-pvc-snapshot-demo created
$ kubectl get volumesnapshot -n infi
NAME                     AGE
infi-pvc-snapshot-demo   10s
$ kubectl get volumesnapshotcontent
NAME                                              AGE
snapcontent-40cf3378-4dde-42c9-87f0-8c6a9771e40e  18s


The VolumeSnapshotContent name includes the identifier used for the InfiniBox snapshot name. In the example above, snapcontent-40cf3378-4dde-42c9-87f0-8c6a9771e40e corresponds to the InfiniBox snapshot csi-40cf33784d.
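
To inspect the content object, including the snapshot handle, you can run:

$ kubectl describe volumesnapshotcontent snapcontent-40cf3378-4dde-42c9-87f0-8c6a9771e40e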

Defining a PV restore from a snapshot

A PVC can be defined as a restore from a previously created snapshot. For InfiniBox, the underlying PV is created as a ReadWrite snapshot of a ReadOnly snapshot.

$ cat restoresnapshot.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: infi-snapshot-pvc-restore-demo-2
  namespace: infi
spec:
  storageClassName: infi-nfs-storageclass-demo
  dataSource:
    name: infi-pvc-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: "snapshot.storage.k8s.io"
  accessModes: 
  - ReadWriteOnce
  resources:
     requests:
       storage: 2Gi

$ kubectl create -f restoresnapshot.yaml
persistentvolumeclaim/infi-snapshot-pvc-restore-demo-2 created
$ kubectl get pvc infi-snapshot-pvc-restore-demo-2 -n infi
NAME                               STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS                 AGE
infi-snapshot-pvc-restore-demo-2   Bound    csi-40e9d7d588   1Gi        RWO            infi-nfs-storageclass-demo   13s

Defining a clone

A new PVC can be created as a ReadWrite snapshot of another PVC, creating an instant clone.

$ cat clonepvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: infi-pvc-clone-demo
  namespace: infi
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: infi-nfs-storageclass-demo
  resources:
    requests:
      storage: 2Gi
  dataSource:
    kind: PersistentVolumeClaim
    name: infi-pvc-demo

$ kubectl create -f clonepvc.yaml
persistentvolumeclaim/infi-pvc-clone-demo created

$ kubectl get pvc infi-pvc-clone-demo -n infi
NAME                  STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS                 AGE
infi-pvc-clone-demo   Bound    csi-eb7aa34161   1Gi        RWO            infi-nfs-storageclass-demo   9s

Expanding a PV

An existing PV can be expanded using the kubectl edit command. Kubernetes interprets a change to the spec "storage" field as a request for more space, and it triggers automatic volume resizing.

$ kubectl edit pvc infi-pvc-demo -n infi
....
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi   # <<<<< modify this field
  storageClassName: infi-nfs-storageclass-demo
....
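
An equivalent non-interactive approach is to patch the PVC directly, for example:

$ kubectl patch pvc infi-pvc-demo -n infi -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'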


This works instantly for NFS or NFS-TreeQ PVCs. For iSCSI and FC, the application pod must be restarted, and additional operations might be required to resize the filesystem using the xfs_growfs or resize2fs command.
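
For example, after the pod is restarted (the mount point and device below are placeholders):

xfs_growfs <mount point>    # XFS - takes the mount point
resize2fs <device>          # ext3/ext4 - takes the block device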

Dataset deletion considerations

The CSI spec assumes that datasets are unrelated, allowing snapshots to have lifecycles independent of their originating datasets. This does not align with the InfiniBox snapshot implementation. To minimize the impact, the InfiniBox CSI Driver currently attempts to delete descendant datasets automatically when deletion of the parent dataset is requested. For example, if you try to delete a PV with existing snapshots, the snapshots are also deleted. However, this fails for any datasets that are currently attached to hosts. As a best practice, before deleting a dataset, confirm that you also want to delete all its descendants.
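
For example, to list snapshots that still reference PVCs in the namespace before deleting:

$ kubectl get volumesnapshot -n infi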

Importing an existing volume

To import an existing PV and make it manageable by the CSI Driver, manually create a PV yaml file that describes the parameters of the existing volume.

Importing an existing iSCSI PV

$ cat importpv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: infinibox-csi-driver
  name: gtvolpv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  csi:
    controllerExpandSecretRef:
      name: infinibox-creds
      namespace: infi
    controllerPublishSecretRef:
      name: infinibox-creds
      namespace: infi
    driver: infinibox-csi-driver
    nodePublishSecretRef:
      name: infinibox-creds
      namespace: infi
    nodeStageSecretRef:
      name: infinibox-creds
      namespace: infi
    volumeAttributes:
      Name: "gtvolpv"
      fstype: "ext4"
      max_vols_per_host: "100"
      network_space: "iscsi1"
      portals: "172.20.37.54,172.20.37.55,172.20.37.57"
      storage_protocol: "iscsi"
      useCHAP: "none"
      iqn: iqn.2009-11.com.infinidat:storage:infinibox-sn-36000-2436
    volumeHandle: 9676520$$iscsi
  persistentVolumeReclaimPolicy: Delete
  storageClassName: infi-iscsi-storageclass-demo
  volumeMode: Filesystem

$ cat importpvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: infi-import-pvc-demo
  namespace: infi
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: infi-iscsi-storageclass-demo
  volumeName: gtvolpv
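
Create the PV first, and then the PVC that binds to it:

$ kubectl create -f importpv.yaml
$ kubectl create -f importpvc.yaml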

Importing an existing NFS PV

$ cat importpv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: infinibox-csi-driver
  name: gtfspv
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 2Gi
  csi:
    controllerExpandSecretRef:
      name: infinibox-creds
      namespace: infi
    controllerPublishSecretRef:
      name: infinibox-creds
      namespace: infi
    driver: infinibox-csi-driver
    nodePublishSecretRef:
      name: infinibox-creds
      namespace: infi
    nodeStageSecretRef:
      name: infinibox-creds
      namespace: infi
    volumeAttributes:
      ipAddress: 172.20.37.53
      volPathd: /gtfs_pv
      storage_protocol: nfs
      exportID: "10098" #InfiniBox export ID
    volumeHandle: 7955656$$nfs #InfiniBox file system ID
  persistentVolumeReclaimPolicy: Delete
  storageClassName: infi-nfs-storageclass-demo
  volumeMode: Filesystem



$ cat importpvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: infi-import-pvc-demo
  namespace: infi
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: infi-nfs-storageclass-demo
  volumeName: gtfspv
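
After creating both objects, verify that the claim is bound to the imported volume:

$ kubectl get pv gtfspv
$ kubectl get pvc infi-import-pvc-demo -n infi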

Managing additional InfiniBox storage arrays

A separate secret must be defined within the Kubernetes cluster for every managed InfiniBox array. It must include an InfiniBox hostname, administrator credentials, and optional CHAP authentication credentials (for iSCSI), all encoded in Base64.

To encode an entry:

$ echo -n infi0001.company.com | base64
aW5maTAwMDEuY29tcGFueS5jb20=
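
To verify an encoded value, decode it back:

$ echo aW5maTAwMDEuY29tcGFueS5jb20= | base64 -d
infi0001.company.com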

Sample secret file:

apiVersion: v1
kind: Secret
metadata:
  name: infi0001-credentials
  namespace: infi
type: Opaque
data:
  hostname: aW5maTAwMDEuY29tcGFueS5jb20=
  node.session.auth.password: MC4wMDB1czA3Ym9mdGpv
  node.session.auth.password_in: MC4wMDI2OHJ6dm1wMHI3
  node.session.auth.username: aXFuLjIwMjAtMDYuY29tLmNzaS1kcml2ZXItaXNjc2kuaW5maW5pZGF0OmNvbW1vbmlu
  node.session.auth.username_in: aXFuLjIwMjAtMDYuY29tLmNzaS1kcml2ZXItaXNjc2kuaW5maW5pZGF0OmNvbW1vbm91dA==
  password: MTIzNDU2
  username: azhzYWRtaW4=
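
Save the secret to a file (for example, secret.yaml) and apply it:

$ kubectl create -f secret.yaml

Then reference the secret name (infi0001-credentials) and its namespace in the StorageClass secret parameters to direct volumes to this array.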

Uninstalling

Uninstalling InfiniBox CSI Driver using Helm chart

To uninstall the driver in the infi namespace, run:

helm uninstall csi-infinibox -n=infi

Replace infi in the command if you installed into a different namespace.

Uninstalling InfiniBox CSI Driver using OpenShift Operator

Uninstalling the Operator also removes the CSI Driver itself.

The following instructions apply to Operator versions 2.1.x and above. Contact Infinidat support for uninstallation instructions for earlier Operator versions.

Uninstalling using the GUI

  1. Browse to the Operators > Installed Operators view, and in the menu option for InfiniBox CSI Driver - Operator, click Uninstall Operator.
  2. In the confirmation window, click Uninstall.

Uninstalling using CLI

  1. Run the following command to confirm that the operator is installed:
    $ oc get subscription infinibox-operator-certified -n openshift-operators -o yaml | grep currentCSV
  2. Run the following command to completely remove the operator:
    $ oc delete subscription infinibox-operator-certified -n openshift-operators

Troubleshooting

Use standard Kubernetes troubleshooting actions to debug CSI Driver issues. By default, debug logging is enabled in the values configuration file ('values.yaml') and can be disabled:

Enabled → logLevel: "debug"

Disabled → logLevel: "info"

To generate a log file for the CSI Driver, run:

kubectl logs <pod name> <container name>

# For OpenShift run:
oc logs <pod name> <container name>

Pods of interest related to the InfiniBox CSI Driver include:

  • Controller (infiniboxcsidriver-sample-driver-0), which includes 5 containers:
    • driver (the main container to focus on for troubleshooting)
    • resizer
    • snapshotter
    • provisioner
    • attacher
  • Node - select an instance running on the relevant worker. Each instance includes 2 containers:
    • driver (the main container to focus on for troubleshooting)
    • registrar
  • Operator - for operator-managed deployments only

Relevant error messages can be found using the kubectl describe command.
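
For example:

$ kubectl describe pvc infi-pvc-demo -n infi
$ kubectl describe pod csi-infinibox-driver-0 -n infi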

$ kubectl get nodes
NAME         STATUS  ROLES   AGE  VERSION
gtouret-k51  Ready   master  21d  v1.18.0
gtouret-k52  Ready   <none>  21d  v1.18.0
gtouret-k53  Ready   <none>  21d  v1.18.0

$ kubectl get pods -n infi -o wide
NAME                      READY  STATUS   RESTARTS  AGE   IP             NODE 
csi-infinibox-driver-0    5/5    Running  0         2d4h  10.244.2.55    gtouret-k53 
csi-infinibox-node-85g4t  2/2    Running  0         2d4h  172.20.87.214  gtouret-k51 
csi-infinibox-node-jw9hx  2/2    Running  0         2d4h  172.20.87.99   gtouret-k52 
csi-infinibox-node-qspr4  2/2    Running  0         2d4h  172.20.78.63   gtouret-k53

$ kubectl logs csi-infinibox-driver-0 driver -n infi | tail -2

time="2020-04-25T05:07:03Z" level=info msg="Called createVolumeFrmPVCSource"
time="2020-04-25T05:07:03Z" level=info msg="Request made for method: GET and apiuri /api/rest/filesystems/10748988"



