Infinidat recommends using the InfiniBox CSI Driver for every Kubernetes environment running version 1.13 or newer.

The legacy Kubernetes provisioner documentation is available here.

Introduction

Overview

The InfiniBox CSI Driver (plugin) enables InfiniBox storage management in Kubernetes environments.

Deploying the plugin requires:

  • One or more secrets (one per InfiniBox)
  • A Controller instance (one per cluster), and 
  • One or more Node instances (one per Worker node)

The plugin supports the following features:

  • Manage multiple InfiniBox storage arrays
  • Provision and remove PVs (Persistent Volumes)
  • Take snapshots and restore from snapshots
  • Create clones of PVs
  • Create raw block storage 
  • Extend (resize) PVs
  • Import external datasets as PVs

The plugin can be deployed using either a Helm chart or an OpenShift Operator.

The following access protocols are supported:

  • iSCSI
  • FC
  • NFS
  • NFS-TreeQ - for very large clusters with hundreds of thousands of PVs per InfiniBox system

Software requirements

  • InfiniBox: v4.x / 5.x

  • Kubernetes:
    • Kubernetes 1.13+ (1.17+ is recommended)
    • Red Hat OpenShift 4.x
    • VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) 1.5+ (previously known as Pivotal Kubernetes Service or VMware Enterprise PKS)

  • Operating systems:
    • Ubuntu 16.04 / 18.04
    • CentOS 7.x / 8.x
    • RHEL 7.x / 8.x

InfiniBox prerequisites

  • A dedicated pool for every Kubernetes storage class

  • A pool admin (recommended) or an admin account 

  • A Network Space configured for iSCSI, NFS or NFS-TreeQ access

Kubernetes cluster prerequisites

All worker nodes in the cluster must be configured for proper access to InfiniBox.

Deploying Host Power Tools on all worker nodes is recommended to ensure proper host configuration for Fibre Channel (FC) and iSCSI connectivity.

For FC access:

  • Multipath driver
  • FC HBA driver
  • File system software (XFS / EXT3 / EXT4)
  • All worker nodes must be properly zoned
  • Note: for VMware-based deployment, FC is supported only in passthrough mode

For iSCSI access:

  • Multipath driver
  • iscsid
  • File system software (XFS / EXT3 / EXT4)

For NFS or NFS-TreeQ:

  • NFS client software
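The prerequisites above can be verified on a worker node with a few commands (a sketch; service and package names assume a systemd-based RHEL/CentOS or Ubuntu host):

```shell
# Multipath driver loaded and running (FC and iSCSI):
systemctl is-active multipathd
multipath -ll

# iSCSI daemon (iSCSI access only):
systemctl is-active iscsid

# Filesystem utilities (XFS / EXT3 / EXT4):
which mkfs.xfs mkfs.ext4

# NFS client software (NFS / NFS-TreeQ access only):
rpm -q nfs-utils 2>/dev/null || dpkg -s nfs-common 2>/dev/null
```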

With older Kubernetes versions (pre-1.17), some CSI features, such as snapshots, clones, or raw block volumes, might not be supported or might be disabled by default. Refer to the relevant Kubernetes documentation for details on feature-gate enablement.
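For example, on a pre-1.17 cluster, snapshot and raw-block support may have to be switched on explicitly via feature gates (a sketch; the exact gate names and their defaults vary by release, so verify them against the release notes of your Kubernetes version before applying):

```shell
# Example feature-gate flags for the control plane and kubelets
# (hypothetical invocation; in practice these flags are added to the
# static pod manifests or the kubelet configuration):
kube-apiserver --feature-gates=VolumeSnapshotDataSource=true,ExpandCSIVolumes=true ...
kubelet --feature-gates=ExpandCSIVolumes=true ...
```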

Installation

Downloading the driver

This documentation refers to the InfiniBox CSI Driver version 1.1.0 which is available on GitHub. Use the following command to download this version:

git clone --single-branch --branch 1.1.0 https://github.com/Infinidat/infinibox-csi-driver.git

Installing the driver using Helm chart

Go to the infinibox-csi-driver/deploy/helm/infinibox-csi-driver folder.

Update the Infinibox_Cred section in values.yaml:

  • hostname: IP or host name for the InfiniBox management interface

  • username / password: InfiniBox credentials

  • inbound_user / inbound_secret / outbound_user / outbound_secret: optional credentials for iSCSI CHAP authentication

  • SecretName: defines the secret name, referenced later in the StorageClass to select the specific InfiniBox used for persistent volumes

It is recommended to use a dedicated namespace for the InfiniBox CSI Driver deployment. Create the namespace using the following command:

kubectl create namespace ibox

Install the driver using Helm (version 3.x is recommended):

helm install csi-infinibox -n=ibox ./

Uninstalling the driver using Helm chart

To uninstall the driver, run:

helm uninstall csi-infinibox -n=ibox

Installing the driver using OpenShift Operator

Using installation script

Go to the infinibox-csi-driver/deploy/operator/ folder.

Modify the Infinibox_Cred section in infinibox-operator/deploy/crds/infinibox-csi-driver-service.yaml:

  • hostname: IP or host name for the InfiniBox management interface

  • username / password: InfiniBox credentials

  • inbound_user / inbound_secret / outbound_user / outbound_secret: optional credentials for iSCSI CHAP authentication

  • SecretName: defines the secret name, referenced later in the StorageClass to select the specific InfiniBox used for persistent volumes

Install the operator using the script:

install_operator_openshift.sh 

Note: The script creates a dedicated namespace "infi" for the InfiniBox CSI Driver deployment. Modify the opnmspace value in the script if a different name is required.

Using OpenShift OperatorHub

The InfiniBox Operator can be deployed via the OpenShift OperatorHub.

  • The CSI driver requires elevated permissions. It is recommended to create a dedicated namespace for the Operator and Driver installations. Run the following commands:
oc create namespace ibox
oc create -f infinibox-csi-driver/deploy/operator/scc/iboxcsiaccess_scc.yaml --as system:admin
oc adm policy add-scc-to-user iboxcsiaccess -z infinibox-csi-driver-node -n ibox
oc adm policy add-scc-to-user iboxcsiaccess -z infinibox-csi-driver-driver -n ibox
  • Launch the OpenShift console, browse to the Operators -> OperatorHub view, and search for "infinibox"
  • Click Install, choose "A specific namespace on the cluster", select the namespace, then click "Subscribe"
  • Browse to the Operators -> Installed Operators view and click the Infinibox Operator -> Create Instance link
  • Update the InfiniBox credentials in the YAML file as needed and click CREATE
  • Browse to the Workloads -> Pods tab to validate that the Operator and the CSI driver are running

Uninstalling the driver using OpenShift Operator

Go to the infinibox-csi-driver/deploy/operator/ folder.

Uninstall the operator using the script:

uninstall_operator_openshift.sh 

Usage

Sample yaml files for different protocols are available at https://github.com/Infinidat/infinibox-csi-driver/tree/1.1.0/deploy/examples.

Defining a StorageClass

A StorageClass provides a way for administrators to describe the “classes” of storage they offer. Different classes might map to quality-of-service levels, backup policies, or other arbitrary policies determined by the cluster administrators. Kubernetes itself is unopinionated about what classes represent. This concept is sometimes called “profiles” in other storage systems. See Kubernetes documentation for more details.

With the InfiniBox CSI driver, a StorageClass maps to a specific pool on an InfiniBox. No two StorageClasses may use the same InfiniBox pool.

StorageClass example for Fibre Channel protocol

$ cat storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibox-fc-storageclass-demo
provisioner: infinibox-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters: 
  # this section points to the secret file with the credentials (defining InfiniBox IP, user name and password)
  csi.storage.k8s.io/provisioner-secret-name: infinibox-creds
  csi.storage.k8s.io/provisioner-secret-namespace: ibox
  csi.storage.k8s.io/controller-publish-secret-name: infinibox-creds
  csi.storage.k8s.io/controller-publish-secret-namespace: ibox
  csi.storage.k8s.io/node-stage-secret-name: infinibox-creds
  csi.storage.k8s.io/node-stage-secret-namespace: ibox
  csi.storage.k8s.io/node-publish-secret-name: infinibox-creds
  csi.storage.k8s.io/node-publish-secret-namespace: ibox
  csi.storage.k8s.io/controller-expand-secret-name: infinibox-creds
  csi.storage.k8s.io/controller-expand-secret-namespace: ibox

  # define file system for the provisioned volume. Supported options are xfs, ext3, ext4
  fstype: xfs

  # define InfiniBox-specific parameters 
  pool_name: "k8s_csi"
  provision_type: "THIN"
  storage_protocol: "fc"
  ssd_enabled: "true"

  # define how many volumes can be created per Worker node. It's recommended not to create more than 50 volumes per host
  max_vols_per_host: "20"


StorageClass example for iSCSI protocol

$ cat storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
 name: ibox-iscsi-storageclass-demo
provisioner: infinibox-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters: 
 # this section points to the secret file with the credentials (defining InfiniBox IP, user name and password)
 csi.storage.k8s.io/provisioner-secret-name: infinibox-creds
 csi.storage.k8s.io/provisioner-secret-namespace: ibox
 csi.storage.k8s.io/controller-publish-secret-name: infinibox-creds
 csi.storage.k8s.io/controller-publish-secret-namespace: ibox
 csi.storage.k8s.io/node-stage-secret-name: infinibox-creds
 csi.storage.k8s.io/node-stage-secret-namespace: ibox
 csi.storage.k8s.io/node-publish-secret-name: infinibox-creds
 csi.storage.k8s.io/node-publish-secret-namespace: ibox
 csi.storage.k8s.io/controller-expand-secret-name: infinibox-creds
 csi.storage.k8s.io/controller-expand-secret-namespace: ibox
 # define whether CHAP should be used to protect access to volumes. Supported options: none, chap, mutual_chap
 useCHAP: "mutual_chap" 
 # define file system for the provisioned volume. Supported options are xfs, ext3, ext4
 fstype: xfs
 # define InfiniBox-specific parameters 
 network_space: "iscsi1"
 pool_name: "k8s_csi"
 provision_type: "THIN"
 storage_protocol: "iscsi"
 ssd_enabled: "true"
 # define how many volumes can be created per Worker node. It's recommended not to create more than 50 volumes per host
 max_vols_per_host: "20"

StorageClass example for NFS protocol

$ cat storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
 name: ibox-nfs-storageclass-demo
provisioner: infinibox-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters: 
 # this section points to the secret file with the credentials (defining InfiniBox IP, user name and password)
 csi.storage.k8s.io/provisioner-secret-name: infinibox-creds
 csi.storage.k8s.io/provisioner-secret-namespace: ibox
 csi.storage.k8s.io/controller-publish-secret-name: infinibox-creds
 csi.storage.k8s.io/controller-publish-secret-namespace: ibox
 csi.storage.k8s.io/node-stage-secret-name: infinibox-creds
 csi.storage.k8s.io/node-stage-secret-namespace: ibox
 csi.storage.k8s.io/node-publish-secret-name: infinibox-creds
 csi.storage.k8s.io/node-publish-secret-namespace: ibox
 csi.storage.k8s.io/controller-expand-secret-name: infinibox-creds
 csi.storage.k8s.io/controller-expand-secret-namespace: ibox
 # define InfiniBox-specific parameters 
 network_space: "nas"
 pool_name: "k8s_csi"
 provision_type: "THIN"
 storage_protocol: "nfs"
 ssd_enabled: "true"
 # define NFS client mount options
 nfs_mount_options: hard,rsize=1048576,wsize=1048576
 # define NFS export rules
 nfs_export_permissions: "[{'access':'RW','client':'192.168.0.1-192.168.0.255','no_root_squash':true}]"

StorageClass example for NFS-TreeQ

$ cat storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibox-nfs-storageclass-demo
provisioner: infinibox-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters: 
  # this section points to the secret file with the credentials (defining InfiniBox IP, user name and password)
  csi.storage.k8s.io/provisioner-secret-name: infinibox-creds
  csi.storage.k8s.io/provisioner-secret-namespace: ibox
  csi.storage.k8s.io/controller-publish-secret-name: infinibox-creds
  csi.storage.k8s.io/controller-publish-secret-namespace: ibox
  csi.storage.k8s.io/node-stage-secret-name: infinibox-creds
  csi.storage.k8s.io/node-stage-secret-namespace: ibox
  csi.storage.k8s.io/node-publish-secret-name: infinibox-creds
  csi.storage.k8s.io/node-publish-secret-namespace: ibox
  csi.storage.k8s.io/controller-expand-secret-name: infinibox-creds
  csi.storage.k8s.io/controller-expand-secret-namespace: ibox

  # define InfiniBox-specific parameters
  network_space: "nas"
  pool_name: "k8s_csi"
  provision_type: "THIN"
  storage_protocol: "nfs_treeq"
  ssd_enabled: "true"

  # define NFS client mount options
  nfs_mount_options: hard,rsize=1048576,wsize=1048576

  # define NFS export rules
  nfs_export_permissions: "[{'access':'RW','client':'192.168.0.1-192.168.0.255','no_root_squash':true}]"

  # define the max number of file systems to be created by the driver
  max_filesystems: "2000"
  # define the max number of treeqs to be created by the driver per file system
  max_treeqs_per_filesystem: "2000"
  # define the max size of a single file system
  max_filesystem_size: 4tib

Creating a sample StorageClass

$ kubectl create -f storageclass.yaml
storageclass.storage.k8s.io/ibox-nfs-storageclass-demo created

$ kubectl get storageclass
NAME                         PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE  ALLOWVOLUMEEXPANSION  AGE
ibox-nfs-storageclass-demo   infinibox-csi-driver   Delete          Immediate          true                  9s

Defining a Persistent Volume Claim

A persistent volume claim (PVC) is a request for the platform to create a Persistent Volume (PV). Each PVC contains a spec (the desired state of the claim) and a status (its current state).

PVC example for Fibre Channel or iSCSI

$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
  namespace: ibox
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block # Supported options: Block, Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: ibox-storageclass-demo

PVC example for NFS or NFS-TreeQ

$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ibox-pvc-demo
  namespace: ibox
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ibox-storageclass-demo

Creating a sample PVC

$ kubectl create -f pvc.yaml
persistentvolumeclaim/ibox-pvc-demo created
$ kubectl get pvc -n ibox
NAME           STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS                 AGE
ibox-pvc-demo  Bound    csi-d38f4662c8   1Gi        RWX            ibox-nfs-storageclass-demo   4s

$ kubectl get pv csi-d38f4662c8
NAME            CAPACITY  ACCESS MODES  RECLAIM POLICY   STATUS  CLAIM               STORAGECLASS                REASON  AGE
csi-d38f4662c8  1Gi       RWX           Delete           Bound   ibox/ibox-pvc-demo  ibox-nfs-storageclass-demo          26s


The PV name (csi-d38f4662c8 in the example above) is also the filesystem or volume name on the InfiniBox.

Defining a Volume Snapshot Class

$ cat snapshotclass.yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: ibox-snapshotclass-demo
  namespace: ibox
snapshotter: infinibox-csi-driver
parameters:
  csi.storage.k8s.io/snapshotter-secret-name: infinibox-creds
  csi.storage.k8s.io/snapshotter-secret-namespace: ibox

$ kubectl create -f snapshotclass.yaml
volumesnapshotclass.snapshot.storage.k8s.io/ibox-snapshotclass-demo created
$ kubectl get volumesnapshotclass
NAME                     AGE
ibox-snapshotclass-demo  14s

Defining a Snapshot

A snapshot of a Persistent Volume Claim is a read-only InfiniBox snapshot for the relevant Persistent Volume.

$ cat snapshot.yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: ibox-pvc-snapshot-demo
  namespace: ibox
spec:
  snapshotClassName: ibox-snapshotclass-demo
  source:
    name: ibox-pvc-demo
    kind: PersistentVolumeClaim

$ kubectl create -f snapshot.yaml
volumesnapshot.snapshot.storage.k8s.io/ibox-pvc-snapshot-demo created
$ kubectl get volumesnapshot -n ibox
NAME                    AGE
ibox-pvc-snapshot-demo  10s
$ kubectl get volumesnapshotcontent
NAME                                              AGE
snapcontent-40cf3378-4dde-42c9-87f0-8c6a9771e40e  18s


The Volume Snapshot Content name includes the name of the InfiniBox snapshot. In the example above, the snapshot name will be csi-40cf33784d.

Restoring a Persistent Volume from a Snapshot

A Persistent Volume Claim can be defined as a restore from a previously created snapshot. On InfiniBox, the underlying Persistent Volume will be created as a read-write snapshot of the read-only snapshot.

$ cat restoresnapshot.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ibox-snapshot-pvc-restore-demo-2
  namespace: ibox
spec:
  storageClassName: ibox-nfs-storageclass-demo
  dataSource:
    name: ibox-pvc-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: "snapshot.storage.k8s.io"
  accessModes: 
  - ReadWriteOnce
  resources:
     requests:
       storage: 2Gi

$ kubectl create -f restoresnapshot.yaml
persistentvolumeclaim/ibox-snapshot-pvc-restore-demo-2 created
$ kubectl get pvc ibox-snapshot-pvc-restore-demo-2 -n ibox
NAME                               STATUS   VOLUME          CAPACITY  ACCESS MODES  STORAGECLASS                AGE
ibox-snapshot-pvc-restore-demo-2   Bound    csi-40e9d7d588  1Gi       RWO           ibox-nfs-storageclass-demo  13s

Defining a Clone

A new Persistent Volume Claim can be created as a ReadWrite snapshot of another PVC, representing an instant clone.

$ cat clonepvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ibox-pvc-clone-demo
  namespace: ibox
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ibox-nfs-storageclass-demo
  resources:
    requests:
      storage: 2Gi
  dataSource:
    kind: PersistentVolumeClaim
    name: ibox-pvc-demo

$ kubectl create -f clonepvc.yaml
persistentvolumeclaim/ibox-pvc-clone-demo created

$ kubectl get pvc ibox-pvc-clone-demo -n ibox
NAME                 STATUS  VOLUME          CAPACITY  ACCESS MODES   STORAGECLASS                AGE
ibox-pvc-clone-demo  Bound   csi-eb7aa34161  1Gi       RWO            ibox-nfs-storageclass-demo  9s

Expanding a Persistent Volume

An existing Persistent Volume can be expanded using the kubectl edit command. Kubernetes interprets a change to the "storage" field in the spec as a request for more space and triggers automatic volume resizing.

$ kubectl edit pvc ibox-pvc-demo -n ibox
....
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi <<<<< modify this field
  storageClassName: ibox-nfs-storageclass-demo
....


This works instantly for NFS or NFS-TreeQ PVCs. For iSCSI and Fibre Channel, the application pod must be restarted, and additional operations might be required to resize the filesystem using the xfs_growfs or resize2fs command.
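For block-based PVs, the remaining steps on the worker node might look like the following (a sketch; the device, WWID, and mount-point names are placeholders that depend on your environment):

```shell
# Rescan the SCSI bus so the kernel sees the new LUN size:
rescan-scsi-bus.sh

# Resize the multipath map for the volume (placeholder WWID):
multipathd resize map <wwid>

# Grow the filesystem to fill the expanded device:
xfs_growfs <mount point>      # for XFS volumes
resize2fs <device path>       # for ext3 / ext4 volumes
```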

Importing an existing volume

It is possible to import an existing PV and make it manageable by the CSI driver. To do so, a PV YAML file must be created manually, describing some of the parameters of the existing volume.

Importing an existing iSCSI PV

$ cat importpv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: infinibox-csi-driver
  name: gtvolpv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  csi:
    controllerExpandSecretRef:
      name: infinibox-creds
      namespace: ibox
    controllerPublishSecretRef:
      name: infinibox-creds
      namespace: ibox
    driver: infinibox-csi-driver
    nodePublishSecretRef:
      name: infinibox-creds
      namespace: ibox
    nodeStageSecretRef:
      name: infinibox-creds
      namespace: ibox
    volumeAttributes:
      Name: "gtvolpv"
      fstype: "ext4"
      max_vols_per_host: "100"
      network_space: "iscsi1"
      portals: "172.20.37.54,172.20.37.55,172.20.37.57"
      storage_protocol: "iscsi"
      useCHAP: "none"
      iqn: iqn.2009-11.com.infinidat:storage:infinibox-sn-36000-2436
    volumeHandle: 9676520$$iscsi
  persistentVolumeReclaimPolicy: Delete
  storageClassName: ibox-iscsi-storageclass-demo
  volumeMode: Filesystem

$ cat importpvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ibox-import-pvc-demo
  namespace: ibox
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: ibox-iscsi-storageclass-demo
  volumeName: gtvolpv

Importing an existing NFS PV

$ cat importpv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: infinibox-csi-driver
  name: gtfspv
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 2Gi
  csi:
    controllerExpandSecretRef:
      name: infinibox-creds
      namespace: ibox
    controllerPublishSecretRef:
      name: infinibox-creds
      namespace: ibox
    driver: infinibox-csi-driver
    nodePublishSecretRef:
      name: infinibox-creds
      namespace: ibox
    nodeStageSecretRef:
      name: infinibox-creds
      namespace: ibox
    volumeAttributes:
      ipAddress: 172.20.37.53
      volPathd: /gtfs_pv
      storage_protocol: nfs
      exportID: "10098" #InfiniBox export ID
    volumeHandle: 7955656$$nfs #InfiniBox file system ID
  persistentVolumeReclaimPolicy: Delete
  storageClassName: ibox-nfs-storageclass-demo
  volumeMode: Filesystem



$ cat importpvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ibox-import-pvc-demo
  namespace: ibox
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: ibox-nfs-storageclass-demo
  volumeName: gtfspv

Managing additional InfiniBox storage arrays

A separate secret must be defined within the Kubernetes cluster for every managed InfiniBox array. It must include the InfiniBox hostname, administrator credentials, and optional CHAP authentication credentials (for iSCSI), all encoded in base64.

To encode an entry:

$ echo -n ibox0001.company.com | base64
aWJveDAwMDEuY29tcGFueS5jb20=

Sample secret file:

$ cat ibox0001-creds.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ibox0001-credentials
  namespace: ibox
type: Opaque
data:
  hostname: aWJveDAwMDEuY29tcGFueS5jb20=
  node.session.auth.password: MC4wMDB1czA3Ym9mdGpv
  node.session.auth.password_in: MC4wMDI2OHJ6dm1wMHI3
  node.session.auth.username: aXFuLjIwMjAtMDYuY29tLmNzaS1kcml2ZXItaXNjc2kuaW5maW5pZGF0OmNvbW1vbmlu
  node.session.auth.username_in: aXFuLjIwMjAtMDYuY29tLmNzaS1kcml2ZXItaXNjc2kuaW5maW5pZGF0OmNvbW1vbm91dA==
  password: MTIzNDU2
  username: azhzYWRtaW4=
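To verify an encoded entry, decode it back. Alternatively, kubectl can perform the base64 encoding itself via kubectl create secret generic (shown commented out as a sketch, since it requires cluster access; the field names must match the sample above and the literal values are placeholders):

```shell
# Round-trip check of a base64-encoded secret entry:
encoded=$(echo -n ibox0001.company.com | base64)
echo "$encoded"                        # aWJveDAwMDEuY29tcGFueS5jb20=
echo -n "$encoded" | base64 --decode   # ibox0001.company.com

# Sketch: let kubectl handle the encoding (requires a running cluster):
# kubectl create secret generic ibox0001-credentials -n ibox \
#   --from-literal=hostname=ibox0001.company.com \
#   --from-literal=username=<admin user> \
#   --from-literal=password=<admin password>
```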

Troubleshooting

Standard Kubernetes troubleshooting actions can be used to debug CSI driver issues. 

To generate a log file for the CSI driver, run:

kubectl logs <pod name> <container name>

Pods of interest related to the InfiniBox CSI Driver include:

  • Controller, which includes 5 containers:
    • driver (the main container to focus on in case of issues)
    • resizer
    • snapshotter
    • provisioner
    • attacher
  • Node - select the instance running on the relevant worker node; it includes 2 containers:
    • driver (the main container to focus on in case of issues)
    • registrar
  • Operator - for operator-managed deployments only

Relevant error messages can also be found using the kubectl describe command.
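For example, provisioning or attach errors usually surface as events on the PVC object (command sketch; the object names follow the examples above):

```shell
kubectl describe pvc ibox-pvc-demo -n ibox
# Inspect the Events: section at the end of the output for messages
# reported by the provisioner and attacher sidecar containers.
```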

$ kubectl get nodes
NAME         STATUS  ROLES   AGE  VERSION
gtouret-k51  Ready   master  21d  v1.18.0
gtouret-k52  Ready   <none>  21d  v1.18.0
gtouret-k53  Ready   <none>  21d  v1.18.0

$ kubectl get pods -n ibox -o wide
NAME                      READY  STATUS   RESTARTS  AGE   IP             NODE 
csi-infinibox-driver-0    5/5    Running  0         2d4h  10.244.2.55    gtouret-k53 
csi-infinibox-node-85g4t  2/2    Running  0         2d4h  172.20.87.214  gtouret-k51 
csi-infinibox-node-jw9hx  2/2    Running  0         2d4h  172.20.87.99   gtouret-k52 
csi-infinibox-node-qspr4  2/2    Running  0         2d4h  172.20.78.63   gtouret-k53

$ kubectl logs csi-infinibox-driver-0 driver -n ibox | tail -2

time="2020-04-25T05:07:03Z" level=info msg="Called createVolumeFrmPVCSource"
time="2020-04-25T05:07:03Z" level=info msg="Request made for method: GET and apiuri /api/rest/filesystems/10748988"


