

1. Introduction

This Best Practices Guide provides guidelines that will prepare a Windows Server host to work with an Infinidat storage array using SAN protocols (Fibre Channel or iSCSI). It includes configuration information for Hyper-V operating in a local or stretch cluster setup using InfiniBox Active-Active replication available in InfiniBox 5.0 and later, as well as concurrent 3-site replication available with InfiniBox 5.5 and later.
Advanced InfiniBox replication services protect application data and availability, and allow extremely fast recovery even in the event of full site failures.

1.1. Target Audience

This document is intended for storage, system, and Hyper-V administrators who plan to deploy or manage InfiniBox Active-Active replication with Microsoft Hyper-V cluster configurations.

2. Solution Overview

2.1. Microsoft Hyper-V

Hyper-V clusters allow multiple business applications to share physical server resources, enabling greater flexibility, operational efficiency, and cost savings. Two types of Hyper-V cluster deployments are supported with InfiniBox:

  • Local cluster - using at least 2 Hyper-V hosts and 1 InfiniBox array.
  • Stretch cluster - using at least 2 Hyper-V hosts and 2 InfiniBox arrays configured with Active-Active replication.

2.2. InfiniBox

InfiniBox enterprise storage array delivers faster-than-all-flash performance, guaranteed 100% availability, and multi-petabyte scale for mixed application workloads. Zero-impact snapshots and advanced replication services dramatically improve business agility, while FIPS-validated data-at-rest encryption eliminates the need to securely erase decommissioned arrays. With InfiniBox, enterprise IT organizations and cloud service providers can exceed their service level objectives while lowering the cost and complexity of their on-premises petabyte-scale storage operations.

2.3. Host PowerTools

Infinidat provides customers with a fully-featured configuration and management tool called Host PowerTools (HPT) at no additional charge. Infinidat highly recommends using HPT, which provides the following benefits:

  • Automates the configuration and preparation of the Windows server per InfiniBox best practices
  • Allows simple configuration of the connectivity to the InfiniBox systems
  • Simplifies provisioning of volumes for use by the Windows server
  • Allows taking snapshots of InfiniBox volumes mounted on the Windows server 

Manual instructions for customers who do not use Host PowerTools, as well as Queue Depth settings which are not set by HPT in Windows, are available in the InfiniBox Host Configuration Best Practices for Windows Server.

InfiniBox Active-Active replication provides zero RPO and zero RTO for systems within metro distances, enabling mission-critical business services to keep operating even through a complete site failure. It is a symmetric, synchronous active-active replication solution enabling applications to be geographically clustered. Active-Active replication is fully integrated into InfiniBox as a native feature, without any additional hardware or software required besides a witness VM in a separate failure domain.

2.4. InfiniBox Concurrent Active-Active and Async Replication

InfiniBox software version 5.5 extends Active-Active configurations with the ability to add a third site for disaster recovery. In that case, a Hyper-V stretch cluster with InfiniBox Active-Active replication provides zero RPO and zero RTO for the primary sites. Asynchronous replication to a third site across any distance provides protection in a disaster scenario where both primary sites are down, and allows a very tight RPO on the third site, as low as 4 seconds.

3. Host Connectivity

Infinidat highly recommends using Host PowerTools (HPT) to prepare all hosts to work with InfiniBox. For information on registering and preparing hosts to work with InfiniBox, refer to the HPT User Guide.

To manually configure the Hyper-V host according to InfiniBox best practices please refer to InfiniBox Host Configuration Best Practices for Windows Server.

InfiniBox allows hosts to access block devices over both iSCSI and FC protocols. Different protocols can be used to access the same volumes or hosts if needed.

The best performance from InfiniBox and the highest availability for hosts in a Fibre Channel environment can be achieved by zoning each host to all three storage array nodes.

Infinidat strongly recommends this method to ensure optimal balanced utilization of all resources in the storage array.

The following guidelines should be followed when creating Fibre Channel zones:

  • Each physical host should be zoned to all 3 storage nodes via at least 2 independent HBA initiator ports on two independent SAN fabrics.
  • A maximum 1-to-3 fan-out from host (initiator) to storage (target) ports should normally be used. This means that for a host with 2 HBA ports, there will be 2 x 3 = 6 paths per storage Logical Unit; for a host with 4 HBA ports, there will be 4 x 3 = 12 paths per Logical Unit. 
    • It is advisable to monitor port usage to avoid overloading the channel capacity.

For more details about FC zoning recommendations, please refer to InfiniBox Best Practices Guide for Setting Up Fibre Channel.

For iSCSI connectivity best practices, please refer to Setting Up iSCSI Hosts via Host PowerTools.

3.1. Special Considerations for Windows and Hyper-V Clusters

  • Every Windows cluster should be configured into its own Fibre Channel zone.
  • Volumes should be accessible to all hosts in the cluster and isolated from other hosts to prevent data corruption.
  • It is advisable that all hosts in the cluster have the same hardware and software components, host bus adapters (HBAs), firmware levels, and device driver versions.
  • Ensure that all hardware and driver versions are listed as supported in the relevant Windows Server Catalog compatibility list.

4. Setting up a cluster-shared volume using InfiniBox SAN

A cluster-shared volume (CSV) is a type of Windows disk volume with the special CSVFS file system, which is designed for simultaneous access from several hosts.

1. Using HPT or the InfiniBox UI, create and map a volume to both nodes of the cluster.

2. DO NOT create an NTFS file system in advance, in order to avoid volume errors due to multi-host access.

3. From the second Hyper-V node, map the same volume by selecting "Include Mapped Volumes".

4. Ensure the volume is visible on BOTH hosts in Windows Disk Management (diskmgmt.msc) with MPIO correctly set up (showing only one new volume per host).

5. From the first Hyper-V node, select "Initialize Disk". A drive letter is not needed.

6. Assign a name to the new volume.

7. Add the new disk to the cluster: go to 'Failover Cluster Manager → Storage → Disks' and click 'Add Disk'. This adds the disk as 'Available Storage', which means only a single node can use the disk as VM storage.

8. Convert it to a clustered disk by right-clicking the disk and selecting 'Add to Cluster Shared Volumes'. The volume's file system becomes CSVFS and its mount point moves to C:\ClusterStorage\Volume***.
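Assuming the volume has already been mapped to both nodes and the FailoverClusters PowerShell module is installed, the disk-preparation and cluster steps above can also be sketched in PowerShell. The disk number and volume label below are illustrative; verify the disk number with Get-Disk before running.

```powershell
# Run on the first Hyper-V node once the mapped volume is visible on BOTH hosts.
# Disk number and label are examples - adjust to your environment.
$diskNumber = 2

# Bring the disk online and initialize it with a GPT partition table
Set-Disk -Number $diskNumber -IsOffline $false
Initialize-Disk -Number $diskNumber -PartitionStyle GPT

# Create an NTFS volume without assigning a drive letter
New-Partition -DiskNumber $diskNumber -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "CSV01"

# Add the disk to the cluster as 'Available Storage'...
$clusterDisk = Get-ClusterAvailableDisk | Add-ClusterDisk

# ...and convert it to a Cluster Shared Volume (mounted under C:\ClusterStorage)
Add-ClusterSharedVolume -Name $clusterDisk.Name
```

When the disk is added to Cluster Shared Volumes, Windows converts the NTFS volume to CSVFS automatically, so no separate format step is needed.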


5. Stretch Cluster Configuration using InfiniBox Active-Active Replication

5.1. Introduction to Hyper-V stretch cluster

A Hyper-V stretch cluster configuration is a specific storage configuration that combines replication with array-based clustering. These solutions are typically deployed in environments where the distance between data centers is limited, often metropolitan or campus environments.

The primary benefit of a stretched cluster model is that it enables fully-active and workload-balanced data centers to be used to their full potential. It also allows for extremely fast recovery in the event of a host or even full site failure.

Microsoft Hyper-V servers can form a single cluster that spreads across sites (separate datacenters or geographic areas).

5.2. Uniform / Non-uniform Host Access Types

In an active/active replication relationship, host access in the Hyper-V cluster can be classified into two distinct types, based on a fundamental difference in how the hosts access the storage systems:

  • Uniform host access - This method is applicable where Fibre Channel or Ethernet connectivity from hosts to storage exists between the sites, which allows the Hyper-V hosts on both sites to be connected to the storage systems across sites. LUN paths presented to Hyper-V hosts are stretched across the sites.

  • Non-uniform host access - This method is applicable where Fibre Channel or Ethernet connectivity from hosts to storage only exists locally for each site, with no host-to-storage cross-connections across the two sites. Ethernet connectivity between the InfiniBox arrays must still exist between the sites. In this case, Hyper-V hosts at each site can be connected only to the local storage system at the same site. LUN paths presented to Hyper-V hosts from storage nodes are limited to the local site.

InfiniBox Active-Active replication is supported with both uniform and non-uniform host access types. 

5.3. Hyper-V Hosts and Active-Active Volumes Relationships

Hyper-V hosts identify both peers as the same storage device. 

With the stretch cluster storage architecture, there are two InfiniBox storage arrays linked in an Active-Active relationship, one on site A and the other on site B. The hosts on both sites can be connected to both or one of the InfiniBox systems (uniform / non-uniform). When an Active-Active volume is provisioned to the hosts, the hosts identify both peers of the Active-Active volume as the same device. It is possible to read and write simultaneously using both of the peers, while all writes are synchronously replicated between the InfiniBox systems.

Host PowerTools requires the controller LUN to be LUN 0 (zero). This happens by default if volume mapping is done using automatic LUN selection.

When mapping volumes manually, please make sure to always use a LUN ID different from 0, so that the controller LUN can remain 0.

6. Preparing the Hyper-V hosts and InfiniBox Environment

Hyper-V hosts should be installed and configured as a cluster.

6.1. Active-Active link between the InfiniBox systems

The InfiniBox systems should be configured with an Active-Active replication link. For more information on how to configure an Active-Active link, see InfiniBox Best Practices Guide for Setting Up a Replication Service

Once a link has been set between both InfiniBox arrays, the host access type can be set.

6.2. Uniform Access

When configuring uniform host access, the hosts can access the same drive through both the local and remote InfiniBox systems. In this case, FC/iSCSI storage connectivity between the sites is required.

Typically, the drive paths to the remote system are less optimized than the paths to the drive on the local system, due to the added latency of inter-site travel. The InfiniBox system can intelligently indicate to the hosts which paths are optimal for serving I/O. This is further discussed later in this document.

In uniform access:

  • Each host should use two initiators connected to two separate fabrics/networks.
  • Each initiator should be connected to all 3 nodes on both InfiniBox systems, 6 nodes in total.
  • This gives a total of 12 paths from each host to every LUN.
  • The hosts can access both InfiniBox systems and will be able to see paths to the drive from both systems.

6.3. Setting the optimized InfiniBox system for each host

The InfiniBox system can intelligently indicate the optimal paths to serve I/O for Hyper-V hosts using Asymmetric Logical Unit Access (ALUA). ALUA is a standard for path prioritization between storage and hosts, and enables the initiators to query the target about path attributes, such as the path's ALUA state.

This setting is controlled by an InfiniBox Host object option, which sets the host's "Optimized / Non-Optimized" setting. By default, host objects are created as "Optimized". The InfiniBox system informs the Hyper-V hosts by setting the ALUA state of the hosts' mapped volume paths to "Optimized / Non-Optimized".

Setting the optimized InfiniBox system properly is crucial when configuring a Uniform host access, as the Hyper-V hosts are presented with drive paths from both the local and the remote InfiniBox system. Remote paths are typically less optimal. This is not required for a Non-uniform configuration.

6.3.1. Configuring the InfiniBox host objects

Configure the InfiniBox Host objects as follows:

  • On InfiniBox - Site A:
    • Ensure that hosts located on Site A are set to "Optimized".
    • The hosts located on Site B should be set to "Non-Optimized".
  • On InfiniBox - Site B:
    • Ensure that hosts located on Site B are set to "Optimized".
    • The hosts located on Site A should be set to "Non-Optimized".

To set a host to "Optimized / Non-Optimized":

  1. Log in to the InfiniBox system using the Management Console.
  2. Click the "Host & Cluster" icon on the left bar.
  3. Right-click a Host object.
  4. Select "Modify Host".
  5. Set the Path ALUA state to the appropriate option.


6.4. Non-Uniform Access

When configuring non-uniform host access, the Hyper-V hosts on each site can access the storage only through the local InfiniBox system, i.e. the system at the same site.

  • Each host should use two initiators connected to two separate fabrics/networks.
  • Each initiator should be connected to all 3 nodes on its InfiniBox system.
  • This gives a total of 6 paths from each host to every LUN.


7. Provisioning Active-Active Drives

If virtual machines are designed to run simultaneously on both sites, it is advised to provision at least two Active-Active drives.

When virtual machines reside on an Active-Active drive, it is recommended to set the preferred system of the Active-Active replica to the local InfiniBox system, i.e. the system closest to the running virtual machines.
  • Log in to one of the InfiniBox systems using the Management Console.
  • Create two new volumes.
  • Configure Active-Active replication to the remote system on one of the previously created volumes.
    • Keep the Preferred system option as Local.
    • Upon success, an Active-Active replication is set and a volume peer is created on the remote system.



Configure Active-Active replication to the remote system on the other volume as well.

  • This time set the Preferred system option to Remote.


HPT should recognize the new Active-Active drives as follows:


Please follow the instructions listed in section 4 (setting up a cluster-shared volume) to complete the Hyper-V cluster setup.

After the volumes have been mapped, in a uniform access configuration you will notice that half of the paths are marked as Active and the other half as Active/Unoptimized.

In a non-uniform access configuration, all paths should be marked as Active.

8. VM High Availability

VMs should be configured as highly available in order for failover to occur. This can be accomplished by using Failover Cluster Manager, System Center Virtual Machine Manager (SCVMM), or Windows PowerShell.

In Failover Cluster Manager, select Roles and then Configure Role.

In Windows PowerShell, use the Add-ClusterVirtualMachineRole cmdlet:

Add-ClusterVirtualMachineRole -VMName <VM_Name>


The Virtual Machine Load Balancing feature can optimize the utilization of nodes in a Failover Cluster. If it is not enabled, VM distribution can become unbalanced.

It is also recommended to set preferred owners for each VM to create a manual balance. To configure this, select Properties on a virtual machine, then select Preferred Owners.
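Assuming the FailoverClusters PowerShell module is available, both the load-balancing mode and preferred owners can be sketched from PowerShell. The VM role name and node names below are illustrative, not from this environment.

```powershell
# Enable automatic VM load balancing on the cluster
# (0 = disabled, 1 = balance on node join, 2 = balance on node join and periodically)
(Get-Cluster).AutoBalancerMode = 2

# Set preferred owners for a VM role to create a manual balance
# ("App-VM1", "HV-Node1" and "HV-Node2" are example names)
Set-ClusterOwnerNode -Group "App-VM1" -Owners "HV-Node1","HV-Node2"

# Verify the preferred owner list
Get-ClusterOwnerNode -Group "App-VM1"
```

This is equivalent to the Failover Cluster Manager steps above; the preferred-owner order determines which node the VM fails back to first.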

9. Failure scenarios

The following table describes InfiniBox storage accessibility in different failure scenarios when using Active-Active replication:


| Scenario                 | InfiniBox System-A | InfiniBox System-B | Replication Link | Witness | Active-Active Volumes Access                       |
|--------------------------|--------------------|--------------------|------------------|---------|----------------------------------------------------|
| Optimal                  | UP                 | UP                 | UP               | UP      | Volumes are available through both systems         |
| Witness is down          | UP                 | UP                 | UP               | Down    | Volumes are available through both systems         |
| Replication link is down | UP                 | UP                 | Down             | UP      | Volumes are available through the preferred system |
| System-A is down         | Down               | UP                 | UP               | UP      | Volumes are available through System-B             |
| Both systems are down    | Down               | Down               | N/A              | N/A     | Volumes are not available                          |


10. Backing-up Hyper-V virtual machines

The operation scheme of backing up virtual machines on Hyper-V consists generally of the following:

  • The backup tool instructs the Hyper-V host to create a checkpoint.
  • After receiving the command, the hypervisor creates new delta files; the VM keeps working and starts saving changes to those files.
  • The backup tool copies the original VM files (no changes are being written to them) to the backup media and then deletes the checkpoint.
  • When the checkpoint is deleted, Hyper-V consolidates (merges) the original and delta files while the VM continues working.
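The checkpoint flow above can be sketched with the Hyper-V PowerShell module; the VM and checkpoint names below are illustrative.

```powershell
# Create a checkpoint; Hyper-V starts writing changes to delta (.avhdx) files
Checkpoint-VM -Name "App-VM1" -SnapshotName "Backup-$(Get-Date -Format yyyyMMdd)"

# ...at this point the backup tool copies the now-static original VM files...

# Delete the checkpoint; Hyper-V merges the delta files back while the VM runs
Get-VMSnapshot -VMName "App-VM1" -Name "Backup-*" | Remove-VMSnapshot
```

Backup products drive this sequence automatically; the commands are shown only to illustrate the mechanism described above.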

By using InfiniBox you can utilize its efficient snapshots and create quicker point-in-time backups.

InfiniBox is supported by all the leading backup software vendors.   




Last edited: 2022-05-29 15:36:16 UTC
