Purpose
This guide provides instructions for configuring IBM System Storage SAN Volume Controller Software to work with Infinidat InfiniBox.
Supported InfiniBox Models
InfiniBox F-series models are supported for use with the IBM SVC.
Supported Firmware Levels
InfiniBox versions | 1.5.x, 1.7.x, 2.x, 3.x, 4.x, 5.x |
SVC version | 7.4.0.3 and above |
For the latest details on supported versions, consult:
- http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss
- http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003658
Related Documentation
Review the information contained within this guide in conjunction with the available documentation:
Additional SVC Documentation
- IBM System Storage Interoperation Center (SSIC)
- Support Information for SAN Volume Controller
- V7.4.x Supported Hardware List, Device Driver, Firmware and Recommended Software Levels for SAN Volume Controller
Visit the INFINIDAT web page for the latest information about InfiniBox versions, features, and best practices.
Note: In this guide, SVC also denotes IBM Storwize controllers.
InfiniBox Storage Overview
Infinidat features high-end, hyperscale storage with highly-dense capacity, high performance, and infinite use cases. Infinidat offers you an environment-friendly system with low power consumption, low CapEx & OpEx and unmatched TCO.
With more than 90 patent applications (many of which are approved), Infinidat is a global leader in the areas of data protection, performance, automation, energy savings, ease of management, and scaling.
InfiniBox is designed to meet the following requirements:
- Cost - Low CapEx and OpEx.
- Reliability - Multiple data protection layers.
- Performance – Load is balanced across all hardware components to achieve optimal utilization.
- Redundancy - Three nodes in Active-Active-Active configuration, each node has constant access to all of the drives.
- Availability:
- No single point of failure
- Protection against double disk failure
- Fast rebuild in case of drive failure
This is NOT an InfiniBox Administrator's Guide. Please visit www.infinidat.com for additional InfiniBox documentation.
Technical Information and Best Practices
InfiniBox offers several management interfaces, all accessible from a web browser.
InfiniBox Management Console | HTML5-based GUI |
InfiniShell | Command-line interface |
InfiniAPI | RESTful API |
Supported volume size | From 1GB to 2PB (system-size) |
Exports | Up to 100,000 volumes are supported. |
RAID protection | All InfiniBox volumes have a "double-parity InfiniRAID™" protection. |
Target Ports | An InfiniBox system has 24 FC ports in total (8 x 8 Gbit/s ports per node). |
LUN Numbering | The volume LUN number is assigned automatically but, if needed, it can be set manually. |
LUN Identification | InfiniBox identifies exported logical units through SCSI Identification Descriptor type 3. The identifier takes the form 5742B0FSSSSSSSPP, where 5742B0F is a fixed Infinidat prefix, SSSSSSS is the system serial number, and PP is the port identifier. |
LU access model | All three nodes are Active/Active/Active. In all conditions, it is recommended to cross connect the ports across FC switches to avoid an outage due to node failure. |
LU preferred access port | There are no preferred access ports on the InfiniBox for a given LUN. All LUNs are presented as Active/Active/Active across the three nodes. |
Detecting ownership | Detecting Ownership does not apply to InfiniBox. |
Switch zoning limitations for InfiniBox | There are no zoning limitations for InfiniBox. |
Fabric zoning | Each fabric that is zoned to all the SVC backend ports should contain at least one InfiniBox storage system port from each node. |
Target port sharing | The InfiniBox storage system does not support port-level LUN masking. LUNs are masked to host WWNs, allowing flexible zoning and target port sharing. |
Host splitting | This can be done if the Multipathing Drivers are compatible. |
Controller splitting | The controllers (nodes) may be shared with other hosts, provided SVC has sole access to the LUNs presented to it as mdisks. |
Configuration settings for InfiniBox storage system | InfiniBox is designed for simple configuration. There are few configurable settings. |
Quorum disks on InfiniBox | The SAN Volume Controller cluster selects disks that are presented by the InfiniBox storage system as quorum disks. |
Clearing SCSI reservations and registrations | This option is not available on the InfiniBox Management Console, and it should never be done: LUNs presented to SVC must remain exclusive to SVC. |
Availability of copy functions for InfiniBox storage system | The InfiniBox replication and snapshot features are not supported under SVC. |
Thin provisioning | Use of the InfiniBox thin provisioning feature must be accompanied by meticulous monitoring of the available storage capacity of both the array and the storage pool used for SVC. Thin provisioning requires a small number of large, equally sized LUNs for each SVC mdiskgroup. Refer to the IBM SVC Information Center and Redbooks for best practices. |
Performance measurements |
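The LU identification scheme in the table above lends itself to scripting. The following is a minimal sketch, assuming the 5742B0FSSSSSSSPP layout described there; the function name is ours, not Infinidat's:

```python
def parse_infinibox_wwpn(wwpn: str) -> dict:
    """Split an InfiniBox WWPN of the assumed form 5742B0FSSSSSSSPP.

    5742B0F is the fixed Infinidat prefix, SSSSSSS the system serial
    number, and PP the port identifier.
    """
    wwpn = wwpn.replace(":", "").upper()
    if len(wwpn) != 16 or not wwpn.startswith("5742B0F"):
        raise ValueError(f"not an InfiniBox WWPN: {wwpn}")
    return {
        "prefix": wwpn[:7],    # fixed Infinidat prefix
        "serial": wwpn[7:14],  # system serial number
        "port": wwpn[14:16],   # port identifier
    }

# A WWPN taken from the lscontroller example later in this guide:
parts = parse_infinibox_wwpn("5742B0F000042928")
print(parts["serial"], parts["port"])  # → 0000429 28
```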
Configuring InfiniBox to Work with SVC
There are several steps that must be performed to set up InfiniBox with SVC:
- Set up zoning between the InfiniBox and SVC
- Create SVC as a host on InfiniBox
- Create SVC as a cluster on InfiniBox
- Create a pool (optional)
- Create volumes for SVC to use as mdisks
- Map the volumes to the SVC host/cluster
General:
Settings are default unless specified.
The instructions apply to the InfiniBox GUI.
1 - Zoning:
- Zone SVC to the InfiniBox system, and verify that the SVC can see all the expected InfiniBox ports as controllers (svctask detectmdisk, svcinfo lscontroller).
- When zoning is set, SVC detects the InfiniBox storage array automatically.
For redundancy and availability, it is recommended that you use two fabrics with at least 2 FC ports per connected InfiniBox node (one port from each node to each fabric - a minimum of 6 FC ports across the 3 nodes).
2 - Defining SVC Nodes on InfiniBox
All the SVC nodes have to be defined on the InfiniBox system.
Select Hosts from the left-hand side of the main GUI.
The Hosts screen opens. Click Create. The Host Create dialog screen opens.
Leave the Host option selected.
Name the host and manually provide a WWPN. Click Add More Ports to add additional FC ports to the SVC host. Note: SVC ports are generally of the form:
5005076801ppxxxx
where pp is the port (10, 20, 30, 40, ...) and xxxx is a node-specific identifier.
- Repeat the above steps to define all the SVC nodes as hosts on the InfiniBox system.
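Given that pattern, the expected port WWPNs for each SVC node can be generated and checked off against the ports you add in the dialog. A small sketch under the stated assumption (pp in 10, 20, 30, 40 and a 4-hex-digit node identifier; the example node id 01E7 is made up):

```python
def svc_port_wwpns(node_id: str, ports=("10", "20", "30", "40")) -> list:
    """Build the expected SVC port WWPNs 5005076801ppxxxx for one node.

    node_id is the node-specific identifier xxxx (4 hex digits);
    the value used below is a hypothetical example.
    """
    if len(node_id) != 4:
        raise ValueError("node identifier must be 4 hex digits")
    return [f"5005076801{pp}{node_id}".upper() for pp in ports]

print(svc_port_wwpns("01e7"))
# → ['50050768011001E7', '50050768012001E7', '50050768013001E7', '50050768014001E7']
```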
3 - Defining SVC Cluster on InfiniBox
All SVC nodes that belong to the same SVC cluster have to be grouped as a cluster on the InfiniBox system. This is not mandatory, but it prevents LUN mapping mismatches.
To create a cluster:
- Select Hosts from left hand side of the main GUI.
The Hosts screen opens. Click Create. The Host Create dialog screen opens.
Select the Cluster option-button.
Name the cluster.
Click Create.
The created cluster is displayed on screen. Add all the SVC nodes to the cluster by clicking the Add Host button and choosing all the SVC nodes from the list.
4 - Creating a Pool
Define a storage pool for the SVC volumes.
Note: If a pool that can be used for SVC is already defined on the system, you can skip this step.
- Select Pools from the left hand side of the main GUI.
The Pools screen opens. Click Create. The Pool Create dialog screen opens.
The screen provides an indication of how much free space there is on InfiniBox.
Note: If over-allocating the storage capacity, take care to monitor the system's remaining capacity, to avoid taking the related SVC mdiskgroups offline.
Enter the pool’s details: pool name, physical and virtual capacity.
Click the link icon to allow over provisioning for this pool.
Optionally, click Advanced to set the emergency buffer, thresholds and pool admins.
Click Create.
The pool is created and can be viewed from the Pools screen.
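The over-allocation caveat above is easy to automate. Below is a minimal threshold-check sketch; the function name, the sample numbers, and the 80% threshold are illustrative assumptions, and the real values would come from whatever monitoring source you use:

```python
def pool_needs_attention(physical_used_tb: float,
                         physical_capacity_tb: float,
                         threshold: float = 0.8) -> bool:
    """Return True when an over-provisioned pool is approaching physical
    exhaustion, which would take the related SVC mdiskgroups offline."""
    if physical_capacity_tb <= 0:
        raise ValueError("capacity must be positive")
    return physical_used_tb / physical_capacity_tb >= threshold

print(pool_needs_attention(850, 1000))  # → True
print(pool_needs_attention(100, 1000))  # → False
```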
5 - Creating a Volume (or Volumes) for SVC to Use
- Select Volumes from the left hand side of the main GUI.
The Volumes screen opens. Click Create. The Volume Create dialog screen opens.
Set the volume's name, size and whether it is thin provisioned.
Select a pool from the Pool drop-down list. Consult the physical and virtual capacity indicators.
Optionally, click Advanced.
Optionally, enhance volume performance by checking Enable SSD, which allows the volume to use SSD cache (only available on systems that have SSD drives).
Optionally, create multiple volumes by entering a numerical value in the Series field; that number of volumes will be created.
Click Create.
The volume is created and can be viewed from the Volumes screen.
6 - Mapping the Volumes to SVC
- Select Hosts from the left-hand side of the main GUI.
The Hosts screen opens. Click the SVC cluster to open its page.
- Click Map Volume. The volumes that are available for mapping are displayed on screen.
Select volumes and click Map.
Optionally: manually assign LUN IDs.
Note: LUNs are exported through all the zoned InfiniBox ports.
Go to SVC and verify that all volumes are displayed as expected and no errors are reported (svctask detectmdisk, svcinfo lsdiscoverystatus).
Example:
# lscontroller
id  controller_name  ctrl_s/n  vendor_id  product_id_low  product_id_high
1                              INFINIDAT  InfiniBo        x
InfiniBox Best Practices with SVC
Due to the nature of InfiniBox there are very few best practices that need to be followed.
- If managing multiple InfiniBox arrays beneath a single SVC cluster, consider using an SVC extent size larger than the default of 1GB, which limits the managed storage capacity to 4PB. An extent size of 2GB increases this to 8PB, equivalent to four InfiniBox arrays using 6TB drives or eight arrays using 3TB drives.
- Because InfiniBox virtualizes and protects data internally, the usual approach of one RAID array = one LUN does not apply. To calculate the number and size of the volumes that should be mapped to SVC, use the following formula:
M = ((P x C) / N) / Q, where:
- M is the number of volumes that should be created.
- P is the number of InfiniBox node ports that are zoned to SVC.
- C is the maximum queue depth SVC uses for each InfiniBox port (this depth is set internally in the SVC code to 1000).
- N is the number of nodes in the SVC cluster.
- Q is the maximum queue depth for each MDisk (this depth is set internally in SVC code to 60).
For example, a two-node SVC cluster and an InfiniBox system with 12 host ports zoned to SVC gives:
M = ((12 x 1000) / 2) / 60 = 100
Therefore, create 100 volumes on the InfiniBox array; their size is determined by the array capacity, e.g. 13.8TB each for an array using 4TB drives.
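Both rules of thumb above are simple arithmetic. The sketch below codifies them; the function names are ours, and the constants come from the text (a 4,194,304-extent limit implied by the 1GB → 4PB figure, and the fixed SVC queue depths):

```python
# Maximum extent count implied by the text (1 GB extents -> 4 PB managed capacity).
MAX_EXTENTS = 4 * 1024 * 1024

def manageable_capacity_pb(extent_size_gb: int) -> float:
    """Maximum managed capacity (PB) for a given SVC extent size."""
    return extent_size_gb * MAX_EXTENTS / (1024 * 1024)

def svc_mdisk_count(ports_zoned: int, svc_nodes: int,
                    port_queue_depth: int = 1000,
                    mdisk_queue_depth: int = 60) -> int:
    """M = ((P x C) / N) / Q from the formula above.

    The queue-depth defaults are the values the guide states are fixed
    in the SVC code (1000 per InfiniBox port, 60 per MDisk).
    """
    return round((ports_zoned * port_queue_depth) / svc_nodes / mdisk_queue_depth)

print(manageable_capacity_pb(2))                     # → 8.0
print(svc_mdisk_count(ports_zoned=12, svc_nodes=2))  # → 100
```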
Example – Identify InfiniBox System and LUNs from SVC
See the InfiniBox target ports from SVC:
# lscontroller 2
id 2
controller_name controller1
WWNN 5742B0F000042900
mdisk_link_count 2
max_mdisk_link_count 2
degraded no
vendor_id INFINIDAT
product_id_low InfiniBo
product_id_high x
product_revision 0h
ctrl_s/n
allow_quorum yes
fabric_type fc
site_id
site_name
WWPN 5742B0F000042928
path_count 12
max_path_count 12
WWPN 5742B0F000042938
path_count 12
max_path_count 12
WWPN 5742B0F000042918
path_count 12
max_path_count 12
WWPN 5742B0F000042917
path_count 12
max_path_count 12
WWPN 5742B0F000042927
path_count 12
max_path_count 12
WWPN 5742B0F000042937
path_count 12
max_path_count 12
See LUNs provisioned from InfiniBox to the SVC:
# lsmdisk 5
id 5
name mdisk5
status online
mode unmanaged
mdisk_grp_id
mdisk_grp_name
capacity 3.0TB
quorum_index
block_size 512
controller_name controller1
ctrl_type 4
ctrl_WWNN 5742B0F000042900
controller_id 2
path_count 36
max_path_count 36
ctrl_LUN_# 000000000000000B
UID 6742b0f000000429000000000000007700000000000000000000000000000000
preferred_WWPN 5742B0F000042937
active_WWPN many
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier enterprise
slow_write_priority
fabric_type fc
site_id
site_name
easy_tier_load high
encrypt no
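SVC detailed views print one key/value pair per line, which makes output like the above easy to script against. A minimal parsing sketch (the sample string is abbreviated from the listing; repeated keys such as WWPN keep only the last value in this simple version):

```python
def parse_svc_kv(output: str) -> dict:
    """Parse SVC CLI detailed output (one 'key value' pair per line)."""
    result = {}
    for line in output.strip().splitlines():
        parts = line.split(None, 1)
        if parts:
            # keys with no value (e.g. ctrl_s/n) map to an empty string
            result[parts[0]] = parts[1] if len(parts) > 1 else ""
    return result

sample = """id 5
name mdisk5
status online
capacity 3.0TB
controller_name controller1"""

info = parse_svc_kv(sample)
print(info["status"], info["capacity"])  # → online 3.0TB
```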
Last edited: 2022-08-06 08:09:52 UTC