
Purpose

This guide provides instructions for configuring IBM System Storage SAN Volume Controller Software to work with INFINIDAT InfiniBox.

Supported InfiniBox Models

InfiniBox F-series models are supported for use with the IBM SVC.

Supported Firmware Levels

InfiniBox versions

1.5.x, 1.7.x, 2.x, 3.x, 4.x, 5.x

SVC version

7.4.0.3 and above

For the latest details on supported versions, consult the IBM and INFINIDAT support documentation.

Related Documentation

Review the information in this guide in conjunction with the available SVC and InfiniBox documentation. Visit the INFINIDAT web page for the latest information about InfiniBox versions, features, and best practices.

Note: Throughout this guide, SVC also denotes IBM Storwize controllers.

InfiniBox Storage Overview

INFINIDAT features high-end, hyperscale storage with highly dense capacity, high performance, and a wide range of use cases. INFINIDAT offers an environment-friendly system with low power consumption, low CapEx and OpEx, and unmatched TCO.
With more than 90 patent applications (many of which are approved), INFINIDAT is a global leader in the areas of data protection, performance, automation, energy savings, ease of management, and scaling.
InfiniBox is designed to meet the following requirements:

  • Cost - Low CapEx and OpEx.
  • Reliability - Multiple data protection layers.
  • Performance – Load is balanced across all hardware components to achieve optimal utilization.
  • Redundancy - Three nodes in Active-Active-Active configuration, each node has constant access to all of the drives.
  • Availability:
    • No single point of failure
    • Protection against double disk failure 
    • Fast rebuild in case of drive failure

This is NOT an InfiniBox administrator's guide. Please visit www.infinidat.com for additional InfiniBox documentation.

Technical Information and Best Practices

InfiniBox offers several management interfaces, all accessible from a web browser:

InfiniBox Management Console

HTML5-based GUI

InfiniShell

Command-line interface

InfiniAPI

RESTful API

Supported volume size

From 1 GB to 2 PB (system size)

Exports

Up to 100,000 volumes are supported.

RAID protection

All InfiniBox volumes have a "double-parity InfiniRAID™" protection.

Target Ports

The InfiniBox system has 24 FC ports in total (8 x 8 Gbit/s ports per node, across three nodes).

LUN Numbering

The volume LUN number is assigned automatically but, if needed, it can be set manually.
All of the system's FC ports present the same LUNs.
There are no special considerations for Logical Unit numbering. However, it is recommended not to map volumes to LUN 0.

LUN Identification

InfiniBox identifies exported Logical Units through SCSI Identification Descriptor type 3.
The NAA IEEE Registered Extended Identifier (NAA=6) for the Logical Unit is in the form 6-OUI-VSID.
InfiniBox WWNN and WWPN settings look like this:

5742B0FSSSSSSSPP

where:
742B0F is the IEEE OUI (Organizationally Unique Identifier) assigned to INFINIDAT.
SSSSSSS is the unique vendor-supplied serial number for the device.
PP is the vendor-specific code used to identify a specific node and port.

For example:
WWNN 5742B0F000042900
WWPN 5742B0F000042928
WWPN 5742B0F000042938
WWPN 5742B0F000042918
WWPN 5742B0F000042917
WWPN 5742B0F000042927
WWPN 5742B0F000042937
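
As an illustration, here is a minimal sketch (a hypothetical helper, not a product tool) that splits a WWN of this form into its fields:

# Minimal sketch: decompose an InfiniBox WWN of the form 5-OUI-SSSSSSS-PP
# into its fields, per the description above. Hypothetical helper only.
def parse_infinibox_wwn(wwn: str) -> dict:
    assert len(wwn) == 16, "expected 16 hex digits (64 bits)"
    return {
        "naa": wwn[0],        # '5' (NAA IEEE Registered format)
        "oui": wwn[1:7],      # 742B0F, the INFINIDAT OUI
        "serial": wwn[7:14],  # vendor-supplied system serial number
        "port": wwn[14:16],   # node/port code ('00' for the WWNN)
    }

print(parse_infinibox_wwn("5742B0F000042928"))
# {'naa': '5', 'oui': '742B0F', 'serial': '0000429', 'port': '28'}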

LU access model

All three nodes are Active/Active/Active. In all conditions, it is recommended to cross-connect the ports across FC switches to avoid an outage due to node failure.
All InfiniBox nodes are equal in priority, so there is no benefit to using an exclusive set of ports for a specific LU.

LU preferred access port

There are no preferred access ports on the InfiniBox for a given LUN. All LUNs are presented as Active/Active/Active across the three nodes.

Detecting ownership

Detecting Ownership does not apply to InfiniBox.

Switch zoning limitations for InfiniBox

There are no zoning limitations for InfiniBox.
The InfiniBox system presents itself to a SAN Volume Controller cluster as a single WWNN with an associated WWPN for each FC port connection.
For example, if one of these storage systems has six ports zoned to the SAN Volume Controller, it appears as one controller with six WWPNs. A given logical unit (LU) is mapped to the SAN Volume Controller through all controller ports zoned to the SAN Volume Controller, using the same logical unit number (LUN).

Fabric zoning

Each fabric that is zoned to all the SVC backend ports should contain at least one InfiniBox storage system port from each node. Example: switch port of controller (name server)

N 020600;
3;57:42:b0:f0:00:04:29:22;57:42:b0:f0:00:04:29:00; na FC4s: FCP [NFINIDATInfiniBox 0h ]
NodeSymb: [35] "QLE2564 FW:v7.02.00 DVR:v8.02.01-k4"
Fabric Port Name: 20:14:00:05:1e:e6:ba:15
Permanent Port Name: 57:42:b0:f0:00:04:29:22
Port Index: 20
Share Area: No
Device Shared in Other AD: No
Redirect: No Partial: No

Target port sharing

The InfiniBox storage system does not support port-level LUN masking. LUNs are masked to host WWNs, allowing flexible zoning and target port sharing.

Host splitting

Host splitting can be done if the multipathing drivers are compatible.

Controller splitting

The controllers (nodes) may be shared with other hosts, provided SVC has sole access to the LUNs presented to it as mdisks.

Configuration settings for InfiniBox storage system

InfiniBox is designed for simple configuration. There are few configurable settings.

Quorum disks on InfiniBox

The SAN Volume Controller cluster selects disks that are presented by the InfiniBox storage system as quorum disks.
To maintain availability for the cluster, ideally each quorum disk should reside on a separate disk subsystem.

Clearing SCSI reservations and registrations

This option is not available on the InfiniBox Management Console. It should never be needed: LUNs presented to SVC must be exclusive to SVC.

Availability of copy functions for InfiniBox storage system

The InfiniBox replication and snapshot features are not supported under SVC.

Thin provisioning

The use of the InfiniBox thin provisioning feature must be accompanied by meticulous monitoring of the available storage capacity of the array and of the storage pool used for SVC.
Thin provisioning requires a small number of large, equally sized LUNs for each SVC mdiskgroup.
Refer to the IBM SVC Information Center and Redbooks for best practices.
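
As an illustration of such monitoring, the sketch below (illustrative only; the capacity figures are made up and would in practice come from the InfiniBox capacity indicators) flags a pool that is approaching physical exhaustion:

# Illustrative sketch: warn when a thin-provisioned pool approaches
# physical exhaustion, which would take the related SVC mdiskgroups
# offline. The capacity figures are hypothetical.
def check_pool(physical_total_tb: float, physical_used_tb: float,
               alert_ratio: float = 0.8) -> None:
    used = physical_used_tb / physical_total_tb
    if used >= alert_ratio:
        print(f"WARNING: pool {used:.0%} full - add capacity or migrate data")
    else:
        print(f"pool OK: {used:.0%} of physical capacity used")

check_pool(physical_total_tb=1000, physical_used_tb=850)
# WARNING: pool 85% full - add capacity or migrate data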

Configuring InfiniBox to Work with SVC

There are several steps that must be performed to set up InfiniBox with SVC:

  1. Set up zoning between the InfiniBox and SVC
  2. Create SVC nodes as hosts on InfiniBox
  3. Create an SVC cluster on InfiniBox
  4. Create a pool (optional)
  5. Create volumes for SVC to use as mdisks
  6. Map the volumes to the SVC host/cluster

General:

Settings are default unless specified.

The instructions apply to the InfiniBox GUI.

1 - Zoning:

  1. Zone SVC to the InfiniBox system, and verify that the SVC can see all the expected InfiniBox ports as controllers (svctask detectmdisk, svcinfo lscontroller).
  2. When zoning is set, SVC detects the InfiniBox storage array automatically.

For redundancy and availability, it is recommended that you use two fabrics with at least 2 FC ports per connected InfiniBox node (one port to each fabric from each node - a minimum of 6 FC ports across the 3 nodes).
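
To make this rule concrete, here is a minimal sketch (illustrative only; the fabric, node, and WWPN assignments are example values) that checks whether a proposed layout gives each fabric at least one port from every node:

# Illustrative sketch: verify that every fabric sees at least one port
# from each InfiniBox node. The (fabric, node, WWPN) entries are example
# values; the node-to-port mapping shown is illustrative.
zoning = [
    ("A", 1, "5742B0F000042917"), ("B", 1, "5742B0F000042918"),
    ("A", 2, "5742B0F000042927"), ("B", 2, "5742B0F000042928"),
    ("A", 3, "5742B0F000042937"), ("B", 3, "5742B0F000042938"),
]
nodes = {1, 2, 3}
for fabric in sorted({f for f, _, _ in zoning}):
    covered = {n for f, n, _ in zoning if f == fabric}
    status = "OK" if covered == nodes else f"missing nodes {nodes - covered}"
    print(f"fabric {fabric}: {status}")
# fabric A: OK
# fabric B: OK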


2 -  Defining SVC Nodes on InfiniBox

All the SVC nodes have to be defined on the InfiniBox system.

  1. Select Hosts from the left-hand side of the main GUI.
    The Hosts screen opens.

  2. Click Create. The Host Create dialog screen opens.

    Leave the Host option selected.
    Name the host and manually provide a WWPN. Click Add More Ports to set additional FC ports to the SVC host.

    Note: SVC ports are generally of the form: 5005076801ppxxxx,
    where pp is the port (10, 20, 30, 40…) and xxxx is a node-specific identifier (see the sketch after these steps).

  3. Repeat the above steps to define all the SVC nodes as hosts on the InfiniBox system.
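
For example, this minimal sketch (illustrative only; the node-specific identifier is made up) enumerates the port WWPNs you would register for one SVC node:

# Illustrative sketch: enumerate the port WWPNs of one SVC node using
# the 5005076801ppxxxx pattern from the note above. The node-specific
# identifier is hypothetical.
node_id = "12AB"                       # hypothetical xxxx value
port_codes = ["10", "20", "30", "40"]  # pp values for a 4-port node
wwpns = [f"5005076801{pp}{node_id}" for pp in port_codes]
print(wwpns)
# ['50050768011012AB', '50050768012012AB', '50050768013012AB', '50050768014012AB']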


3 -  Defining SVC Cluster on InfiniBox

All SVC nodes that belong to the same SVC cluster have to be grouped as a cluster on the InfiniBox system. This is not mandatory, but it prevents LUN mapping mismatches.

To create a cluster:

  1. Select Hosts from the left-hand side of the main GUI.
    The Hosts screen opens.
  2. Click Create. The Host Create dialog screen opens.
    Select the Cluster option-button.
    Name the cluster.
    Click Create.
    The created cluster is displayed on screen.

  3. Add all the SVC nodes to the cluster by clicking the Add Host button and choosing all the SVC nodes from the list.


4 - Creating a Pool 

Define a storage pool for the SVC volumes.
NB: If a pool that can be used for SVC is already defined on the system, you can skip this step.

  1. Select Pools from the left-hand side of the main GUI.
    The Pools screen opens.
  2. Click Create. The Pool Create dialog screen opens.

    The screen provides an indication of how much free space there is on InfiniBox.
    Enter the pool’s details: pool name, physical and virtual capacity.
    Click the link icon to allow over-provisioning for this pool.
    Optionally, click Advanced to set the emergency buffer, thresholds and pool admins.
    Click Create.
    The pool is created and can be viewed from the Pools screen.

    Note: If over-allocating the storage capacity, care should be taken to monitor the system's remaining capacity to avoid taking related SVC mdiskgroups offline (see the monitoring sketch in the Thin provisioning section above).

5 - Creating a Volume (or Volumes) for SVC to Use

Creating a Volume for SVC

  1. Select Volumes from the left-hand side of the main GUI.
    The Volumes screen opens.
  2. Click Create. The Volume Create dialog screen opens.
    Set the volume's name, size and whether it is thin provisioned.
    Select a pool from the Pool drop-down list. Consult the physical and virtual capacity indicators.
    Optionally, click Advanced.
    Optionally, enhance volume performance by using SSD cache – checking Enable SSD allows the volume to use SSD cache (only available on systems that have SSD drives).
    Optionally, create multiple volumes by entering a numerical value in the Series field; that number of volumes will be created.
    Click Create.
    The volume is created and can be viewed from the Volumes screen. 

6 - Mapping the Volumes to SVC

  1. Select Hosts from the left-hand side of the main GUI.
    The Hosts screen opens.
  2. Click the SVC cluster to open its page.
  3. Click Map Volume. The volumes that are available for mapping are displayed on screen.
  4. Select volumes and click Map.
    Optionally: manually assign LUN IDs.
    Note: LUNs are exported through all the zoned InfiniBox ports.

  5. Go to SVC and verify that all volumes are displayed as expected and no errors are reported (svctask detectmdisk, svcinfo lsdiscoverystatus).

Example:

# lscontroller
id  controller_name  ctrl_s/n  vendor_id  product_id_low  product_id_high
1   INFINIDAT                  NFINIDAT   InfiniBo        x

InfiniBox Best Practices with SVC

Due to the nature of InfiniBox, there are very few best practices that need to be followed.

  • If managing multiple InfiniBox arrays beneath a single SVC cluster, consider using an SVC extent size larger than the default (1 GB), since a 1 GB extent size limits the managed storage capacity to 4 PB. An extent size of 2 GB increases this to 8 PB, equivalent to four InfiniBox arrays using 6 TB drives or eight arrays using 3 TB drives (see the cross-check sketch at the end of this section).
  • Because InfiniBox virtualizes and protects data internally, the usual approach of one RAID array = one LUN does not apply. To calculate the number and size of the volumes that should be mapped to SVC, use the following formula:
    M = ((P x C) / N) / Q, where:
    • M is the number of volumes that should be created.
    • P is the number of InfiniBox node ports that are zoned to SVC.
    • C is the maximum queue depth SVC uses for each InfiniBox port (this depth is set internally in the SVC code to 1000).
    • N is the number of nodes in the SVC cluster.
    • Q is the maximum queue depth for each MDisk (this depth is set internally in SVC code to 60).

For example, a two-node SVC cluster and an InfiniBox system with 12 host ports zoned to SVC gives the following calculation:

M = ((12 x 1000) / 2) / 60 = 100

Therefore, create 100 volumes on the InfiniBox array, with their size determined by the array's capacity, e.g. 13.8 TB each for an array using 4 TB drives.
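
As a quick cross-check, the sketch below (illustrative only) reproduces this calculation together with the extent-size capacity limit mentioned above; the 2^22 extent ceiling is the per-cluster limit implied by the 1 GB/4 PB figures:

# Illustrative cross-check of the two calculations above.
MAX_EXTENTS = 2**22               # per-cluster extent limit implied by
                                  # the 1 GB -> 4 PB figure above
for ext_gb in (1, 2):
    capacity_pb = MAX_EXTENTS * ext_gb / 2**20
    print(f"extent size {ext_gb} GB -> max managed capacity {capacity_pb:.0f} PB")

# Recommended volume count: M = ((P x C) / N) / Q
P, C, N, Q = 12, 1000, 2, 60      # zoned ports, per-port queue depth,
                                  # SVC nodes, per-mdisk queue depth
M = ((P * C) / N) / Q
print(f"create {M:.0f} volumes")
# extent size 1 GB -> max managed capacity 4 PB
# extent size 2 GB -> max managed capacity 8 PB
# create 100 volumes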

Example – Identify InfiniBox System and LUNs from SVC

See the InfiniBox target ports from SVC:

# lscontroller 2 
id 2 
controller_name controller1 
WWNN 5742B0F000042900 
mdisk_link_count 2 
max_mdisk_link_count 2 
degraded no 
vendor_id INFINIDAT 
product_id_low InfiniBo 
product_id_high x 
product_revision 0h 
ctrl_s/n 
allow_quorum yes 
fabric_type fc 
site_id 
site_name 
WWPN 5742B0F000042928 
path_count 12 
max_path_count 12 
WWPN 5742B0F000042938 
path_count 12 
max_path_count 12 
WWPN 5742B0F000042918 
path_count 12 
max_path_count 12 
WWPN 5742B0F000042917 
path_count 12 
max_path_count 12 
WWPN 5742B0F000042927 
path_count 12 
max_path_count 12 
WWPN 5742B0F000042937 
path_count 12 
max_path_count 12


See LUNs provisioned from InfiniBox to the SVC:

# lsmdisk 5
id 5
name mdisk5
status online
mode unmanaged
mdisk_grp_id
mdisk_grp_name
capacity 3.0TB
quorum_index
block_size 512
controller_name controller1
ctrl_type 4
ctrl_WWNN 5742B0F000042900
controller_id 2
path_count 36
max_path_count 36
ctrl_LUN_# 000000000000000B
UID 6742b0f000000429000000000000007700000000000000000000000000000000
preferred_WWPN 5742B0F000042937
active_WWPN many
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier enterprise
slow_write_priority
fabric_type fc
site_id
site_name
easy_tier_load high
encrypt no