Introduction

This Best Practices Guide provides process-oriented guidelines that will prepare a Windows™ server (host) to work with an Infinidat storage array in SAN (Fibre Channel or iSCSI) environments.
This guide also includes configuration information for optimizing performance and connectivity.

Infinidat provides customers with a fully featured configuration and management tool called Host PowerTools, free of charge. The instructions in this guide are for customers who do not use Host PowerTools, with the exception of the Queue Depth settings, which Host PowerTools does not set on Windows. For more information, see Host PowerTools.

Host PowerTools:

  • Automates the configuration and preparation of the Windows server per InfiniBox best practices
  • Allows simple configuration of the connectivity to the InfiniBox systems
  • Simplifies provisioning of volumes for use by the Windows server
  • Allows taking snapshots of InfiniBox volumes mounted on the Windows server, and more

Host Connectivity

The best performance from InfiniBox and the highest availability for hosts in a Fibre Channel environment can be achieved by zoning each host to all three storage array nodes.

Infinidat strongly recommends this method for ensuring optimal balanced utilization of all resources in the storage array.

The following guidelines should be followed when creating Fibre Channel zones:

  • Each physical host should be zoned to all 3 storage nodes via at least 2 independent HBA initiator ports on two independent SAN fabrics.
  • A maximum 1-to-3 fan-out from host (initiator) to storage (target) ports should normally be used. This means that a host with 2 HBA ports will have 2 x 3 = 6 paths per storage Logical Unit; a host with 4 HBA ports will have 4 x 3 = 12 paths per Logical Unit.
    • It is advisable to monitor port usage so as not to overload the channel capacity.

Special Considerations for Windows Clusters

  • Every Windows cluster should be configured into its own Fibre Channel zone.
  • Storage Logical Units should be accessible to all hosts in the cluster and isolated from other hosts to prevent data corruption.
  • It is advisable that all hosts in the cluster have the same hardware and software components, including Host Bus Adapters, firmware levels, and device driver versions.

A basic layout for host - storage connectivity 

Windows Server Settings

Multipath Installation and Configuration

Infinidat supports native multipathing using the Microsoft DSM (Device Specific Module), which enables high availability while letting you choose the desired multipath policy.

To configure multipathing manually, use PowerShell and perform the following steps:

  1. Install the feature:

    Install-WindowsFeature Multipath-IO
  2. Enable the feature:

    dism /online /enable-feature:MultipathIO

    For a GUI installation process, see TechNet: Installing and Configuring MPIO.

  3. A reboot is required for these changes to take effect.
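
After the reboot, you can optionally confirm that the feature is installed; a minimal check from PowerShell:

# Confirm that the Multipath-IO feature is installed
Get-WindowsFeature -Name Multipath-IO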

Multipath Tunable Settings

Check that the MPIO tunable parameters in the Windows Registry are set to the following values.

Registry Key                                              Value Name                       Value
HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters    PDORemovePeriod                  20
HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters    UseCustomPathRecoveryInterval    1
HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters    PathRecoveryInterval             10
HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters    PathVerifyEnabled                1
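
If preferred, these values can also be applied with the Set-MPIOSetting cmdlet from the MPIO PowerShell module instead of editing the registry directly; a minimal sketch:

# Apply the recommended MPIO tunables (equivalent to the registry values above)
Set-MPIOSetting -NewPDORemovePeriod 20 -CustomPathRecovery Enabled `
    -NewPathRecoveryInterval 10 -NewPathVerificationState Enabled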

A reboot is required for these changes to take effect.

To verify the MPIO Settings, run the following PowerShell command:

Get-MPIOSetting

Example:

PS C:\Users\Administrator> Get-MPIOSetting 

PathVerificationState     : Enabled
PathVerificationPeriod    : 30
PDORemovePeriod           : 20
RetryCount                : 3
RetryInterval             : 1
UseCustomPathRecoveryTime : Enabled
CustomPathRecoveryTime    : 10
DiskTimeoutValue          : 30

Disk Timeout

Check the Windows Registry to make sure that the disk timeout is set to 30 seconds.

Registry Key                                   Value Name      Value
HKLM\System\CurrentControlSet\Services\Disk    TimeoutValue    30
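
The same value can be checked and set from PowerShell as well; a minimal sketch:

# Read the current disk timeout directly from the registry
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Disk" -Name TimeoutValue

# Set the timeout to 30 seconds (reported as DiskTimeoutValue by Get-MPIOSetting)
Set-MPIOSetting -NewDiskTimeout 30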

Load balancing policy

The Multipath-IO policy defines how the host distributes IO operations across the available paths to the storage.

The following are Windows options for the MPIO policy:

Parameter Value    Description
FOO                Fail Over Only
RR                 Round Robin
LQD                Least Queue Depth
LB                 Least Blocks
None               Clears any currently-configured default load balance policy

Infinidat recommends using the Least Queue Depth policy, which provides better OS resilience along with optimal performance.
If you encounter performance problems after changing the policy from Round Robin, please work with an Infinidat Support Engineer to check your environment.

Setting MPIO Policy using PowerShell

Set the policy with the command:

Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD 

Verify the policy with the command:

Get-MSDSMGlobalDefaultLoadBalancePolicy

Additionally, the mpclaim command or the MPIO tab in Disk Management can be used to perform the same functions.
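
For example, a minimal mpclaim sketch that sets the same MSDSM-wide default; the numeric policy code (4 for Least Queue Depth) is assumed here, so verify it against mpclaim's built-in help before use:

# Set the MSDSM-wide default load balance policy to Least Queue Depth
mpclaim -L -M 4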

To set the required policy on a specific disk:

  1. Open Disk Management.
  2. Right-click the desired drive.
  3. Select Properties, click the MPIO tab, and select the desired policy.

For example:

SAN Policy

This policy, also referred to as "Disk Policy", determines whether a newly detected disk will be auto-mounted and go online.

Infinidat recommends leaving the default Windows SAN Policy, which is:

  • Automount Enabled
  • OfflineShared, which specifies that all newly discovered disks that do not reside on a shared bus (such as SCSI and iSCSI) are brought online and made read-write. Disks that are left offline are read-only by default.

Automount is controlled through the diskpart utility. After entering diskpart, you can check whether automount is enabled or disabled, and also view the SAN policy.

DISKPART> automount
Automatic mounting of new volumes enabled.
DISKPART> SAN
SAN Policy : Offline Shared 

To test the current configuration on the host, run the PowerShell command:

Get-StorageSetting | Select-Object NewDiskPolicy 

To change the policy, run the following PowerShell command:

Set-StorageSetting -NewDiskPolicy OfflineShared

Queue depth setting

Queue depth is the number of I/O requests (SCSI commands) that can be queued at one time on a storage controller. Each I/O request from the host's initiator HBA to the storage controller's target adapter consumes a queue entry. Typically, a higher queue depth equates to better performance. However, if the storage controller's maximum queue depth is reached, that storage controller rejects incoming commands by returning a QFULL response to them. If a large number of hosts are accessing a storage controller, you should plan carefully to avoid QFULL conditions, which significantly degrade system performance and can lead to errors on some systems.

The maximum queue depth should be chosen carefully; in most cases, the default values set by each operating system can be used. Modern operating systems handle queue depth well, but if you decide to change it, test the actual configuration carefully.

The default setting for a standard host is 32. Hosts that move large chunks of data (such as SQL Server hosts) can have the queue depth set to 128.

QLogic HBA

For a QLogic host bus adapter, this setting can be changed using the Windows registry:

Select HKEY_LOCAL_MACHINE and follow the tree structure down to the QLogic driver (Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ql2300i\Parameters\Device), as shown in the following figure, and double-click DriverParameter.

The value is set between 32 and 128 (decimal). The registry stores the value in hexadecimal, with a range between 20 and 80 hex. In the example above, the default value is set to 20 (hex).

After you click OK to set the value and exit the registry, you need to reboot the server in order to load the parameter.
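
The current DriverParameter can also be read from PowerShell instead of regedit; a minimal sketch using the registry path named above:

# Read the QLogic driver parameter value for inspection
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\ql2300i\Parameters\Device" |
    Select-Object -Property DriverParameter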

This setting is not to be confused with Execution Throttle, which is a QLogic HBA parameter changed using tools such as QConvergeConsole or SANsurfer.

Emulex HBA

When using an Emulex Host Bus Adapter, you need to install a set of drivers, which can be downloaded from the Broadcom support site. This includes a tool called OneCommand, from which you can change driver parameters and set the Queue Depth value, as shown below:

If you still have performance problems after changing the queue depth on a host, please work with an Infinidat Support Engineer to find the root cause.

iSCSI Configuration

To begin, enable the MSiSCSI service if it is not already enabled.

Start the service and set its startup type to Automatic.
This can be done using the GUI: select Server Manager > Tools > Services.

Alternatively, this can be done using the following PowerShell commands:

Start-Service msiscsi
Set-Service msiscsi -startuptype "automatic"
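
To confirm the service state afterwards, for example:

# Verify that the iSCSI initiator service is running and starts automatically
Get-Service msiscsi | Select-Object -Property Status, StartType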

Setting iSCSI Connectivity Between the Client and Storage

This is a description of a manual process which configures both the Client and the Storage. It is advisable to do this using Host PowerTools if possible, using the infinihost iscsi connect command. 

Recommendations & Guidelines

  • Enable Jumbo Frames (MTU 9000) on the network interface (see the figure and the PowerShell sketch after this list).
    This task can be performed from the Advanced Properties of the network interface. Jumbo frames should improve performance, but must be supported through the entire stack; this means that the switches and the target should all use the same MTU size.
  • It is recommended to separate iSCSI traffic onto a dedicated VLAN, and to use a separate network interface for connectivity between the host and the storage.
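
The MTU can also be checked and set from PowerShell. This is a minimal sketch; the *JumboPacket keyword and the value 9014 are driver-dependent, and the adapter name is an example, so verify the supported values for your NIC:

# Show the current jumbo-frame setting of all adapters
Get-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*JumboPacket"

# Enable jumbo frames on the iSCSI-facing interface (adapter name is an example)
Set-NetAdapterAdvancedProperty -Name "iSCSI-NIC1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014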

1. Before connecting the client

Before connecting the client, make sure that the iSCSI service is running on InfiniBox.
To verify and discover the target IP addresses, run the following commands from InfiniShell:

config.net_space.query service=ISCSI
config.net_space.ip.query net_space=<Network Space name>

The output will list all iSCSI target IP addresses:

Network Space    IP Address      Enabled    Node    Network Interface    Type
iSCSI            172.20.42.48    yes        1       IF01                 iSCSI
iSCSI            172.20.42.49    yes        2       IF01                 iSCSI
iSCSI            172.20.42.50    yes        3       IF01                 iSCSI
iSCSI            172.20.42.52    yes        1       IF01                 iSCSI
iSCSI            172.20.42.56    yes        2       IF01                 iSCSI
iSCSI            172.20.42.57    yes        3       IF01                 iSCSI
iSCSI            172.20.42.59    yes        1       IF01                 iSCSI
iSCSI            172.20.42.61    yes        2       IF01                 iSCSI
iSCSI            172.20.42.64    yes        3       IF01                 iSCSI
iSCSI            172.20.42.66    yes        1       IF01                 iSCSI
iSCSI            172.20.42.68    yes        2       IF01                 iSCSI
iSCSI            172.20.42.70    yes        3       IF01                 iSCSI

2. Create an iSCSI host on InfiniBox

To get the Windows server initiator name (IQN), run the PowerShell command on the host:

Get-InitiatorPort | Select-Object -Property NodeAddress,ConnectionType | Format-Table -AutoSize

For example:

Record the NodeAddress (iqn.1991-05.com.microsoft:ikatzir-win2019) - it is required during InfiniBox host creation (see the third line of the code below).

Run the following InfiniShell commands to create a host:

host.create name=host1-win2019
Host "host1-win2019" created
host.add_port host=host1-win2019 port=iqn.1991-05.com.microsoft:host1-win2019
iSCSI port "iqn.1991-05.com.microsoft:host1-win2019" added to host "host1-win2019"
1 ports added to host "host1-win2019" 

3. Connect iSCSI Host to InfiniBox

Both iSCSI initiators need to be connected to all 3 nodes in a highly available manner, similar to what was done for Fibre Channel connectivity (see the Host Connectivity section above).

Open the iSCSI initiator properties from the Server Manager interface, or run iscsicpl.

Click the Targets tab, then select Connect > Advanced, select the Initiator Interface and the Target Portal IP that corresponds to one of the nodes' IP addresses (according to the information in step 1), and click OK.

Be sure to select Enable Multi-Path and Add this connection to the list of Favorites.
Repeat this process for all desired paths.
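
Alternatively, the connections can be made from PowerShell using the iSCSI module. A minimal sketch for a single portal (the IP address is one of the examples from step 1; repeat per path, as in the GUI procedure):

# Register one of the InfiniBox iSCSI portal IP addresses
New-IscsiTargetPortal -TargetPortalAddress 172.20.42.48

# Connect to the discovered InfiniBox target with multipath enabled, persistently
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true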


4. Map and Rescan

Map a volume to the host and rescan the disks from the Disk Manager utility.

Make sure that newly discovered disks are multipathed and appear only once in the Disk Manager.
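
The rescan can also be triggered from PowerShell, for example:

# Rescan for newly mapped disks (equivalent to Rescan Disks in Disk Management)
Update-HostStorageCache

# List the disks presented to the host
Get-Disk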

Verify the MPIO policy for each path:

  1. Open Disk Management.
  2. Right-click the disk mapped from the storage to the host.
  3. Click the MPIO tab.
  4. Set the policy to "Round Robin with Subset".

Each path state should be Active/Optimized.

This can also be checked from the Windows command line, using the mpclaim command.
In the example below, the disk is disk 2:

C:\Users\Administrator>mpclaim -s -d 2

The output should look like this:

MPIO Disk2: 06 Paths, Round Robin with Subset, Symmetric Access
Controlling DSM: Microsoft DSM
SN: 6742B0F000004E2F00000000006DF8C7
Supported Load Balance Policies: FOO RR RRWS LQD WP LB 
Path ID State SCSI Address Weight
---------------------------------------------------------------------------
0000000077030005 Active/Optimized 003|000|005|001 0
TPG_State : Active/Optimized , TPG_Id: 32777, : 32777 
0000000077030004 Active/Optimized 003|000|004|001 0
TPG_State : Active/Optimized , TPG_Id: 32770, : 32770 
0000000077030003 Active/Optimized 003|000|003|001 0
TPG_State : Active/Optimized , TPG_Id: 32771, : 32771 
0000000077030002 Active/Optimized 003|000|002|001 0
TPG_State : Active/Optimized , TPG_Id: 32779, : 32779 
0000000077030001 Active/Optimized 003|000|001|001 0
TPG_State : Active/Optimized , TPG_Id: 32776, : 32776 
0000000077030000 Active/Optimized 003|000|000|001 0
TPG_State : Active/Optimized , TPG_Id: 32773, : 32773 

The same information can be found in the GUI (see the figure below):

5. CHAP Configuration

Open the iSCSI Initiator properties from the Server Manager interface, or run iscsicpl.

To use an authentication method such as CHAP/Mutual CHAP, select the Targets tab > Connect > Advanced.
