Overview

Starting with release 5.5.10, InfiniBox supports Active-Active replication when hosts use either FC (Fibre Channel) or iSCSI to access the replicated volumes.

Until release 5.5.10, only FC was supported, and InfiniBox prevented administrators from creating an Active-Active replica if an iSCSI network space was defined:

admin@ibox2812> replica.create system=ibox2833  source=v1 replication_type=ACTIVE_ACTIVE remote_pool=gil-pool
SYSTEM_CANNOT_SUPPORT_ISCSI_AA: Active-Active Replication and ISCSI network space cannot exist on same system. 
Please contact INFINIDAT support for further information.

To support Active-Active with iSCSI, the ALUA parameters reported by the InfiniBox iSCSI service had to be adjusted so that the hosts' multipath drivers can use the two InfiniBox systems correctly. This change in behavior takes effect only when a new iSCSI network space is created.

New systems

New InfiniBox systems that are installed with release 5.5.10 or above support Active-Active automatically: all the iSCSI network spaces report the correct ALUA parameters, and the above error message will not occur.

Note: systems installed with release 5.5.0 and then upgraded to 5.5.10 will also support Active-Active automatically.

Upgraded systems 

InfiniBox systems that were upgraded to 5.5.10 from 5.0.x or an earlier release, and that had existing iSCSI network spaces, require a procedure to replace the existing network spaces with new ones that support Active-Active. Until that procedure is completed, creating Active-Active replicas will result in the above error.

The purpose of this document is to explain the procedure.

Technical Details

New iSCSI network spaces defined in InfiniBox 5.5.10 report a TPG (Target Port Group, an ALUA parameter) value that allows the host multipath driver to correctly group together paths from each InfiniBox system (for each mapped volume).

In addition, the iSCSI target (the IQN) on InfiniBox 5.5.10 is identical on all iSCSI network spaces, per InfiniBox system. Previously, the iSCSI target was separate for each network space.

Both of these changes require a coordinated procedure involving changes to the InfiniBox network spaces and to the iSCSI hosts.

When is this process required?

  • Do you intend to use Active-Active replication for iSCSI hosts? 
    • If not - exit (no need for this process)
  • Do you have a new system installed with 5.5.0 or above?
    • If yes - exit (no need for this process)
  • Did you have iSCSI network spaces before the system was upgraded to 5.5.10?
    • If not - exit (no need for this process)
  • If you got this far, you have upgraded a system that had existing iSCSI network spaces to 5.5.10 and intend to use Active-Active replication for iSCSI hosts
    • Follow the procedure below (the same decision flow is sketched as a small helper after this list)
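
For readers who prefer it in code form, here is a purely illustrative Python sketch of the decision flow above; the function and parameter names are hypothetical:

# Illustrative only: encodes the decision list above. All names are hypothetical.
def network_space_replacement_required(uses_iscsi_active_active: bool,
                                        installed_with_5_5_0_or_later: bool,
                                        had_iscsi_spaces_before_upgrade: bool) -> bool:
    """Return True if the iSCSI network space replacement procedure is needed."""
    if not uses_iscsi_active_active:
        return False   # Active-Active replication is not planned for iSCSI hosts
    if installed_with_5_5_0_or_later:
        return False   # new system: iSCSI network spaces already report the correct ALUA parameters
    if not had_iscsi_spaces_before_upgrade:
        return False   # no pre-upgrade iSCSI network spaces exist, nothing to replace
    return True        # upgraded system with existing iSCSI network spaces: follow the procedure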

Process

Background

We assume there is an InfiniBox system with iSCSI hosts accessing volumes, and that an additional InfiniBox system is being added as an Active-Active replication peer. If the second InfiniBox system is also being used for iSCSI, repeat the following procedure on both systems.

We assume there are two iSCSI network spaces defined on the InfiniBox system. Follow the procedure below for each iSCSI network space separately, from start to finish, so that there is no loss of access during the process.

Verify that all iSCSI host connections are highly available

During the process, the iSCSI hosts will lose half of their paths to the InfiniBox system, and their connection will no longer be highly available. It is important to verify that all iSCSI hosts are connected to two iSCSI network spaces before starting.

This information is available only via the REST API, using the /api/rest/initiators endpoint (a scripted check is sketched after the examples below).

Here's an example output of a host that is connected to two iSCSI network spaces:

{
  result: [
    {
      type: "ISCSI",
      address: "iqn.1994-05.com.redhat:fd6d6536731",
      host_id: 127,
      port_key: 6917529027641082000,
      targets: [
        {
          address: "iqn.2009-11.com.infinidat:storage:infinibox-sn-36009-324",
          node_id: 1,
          session_id: 9613344769
        },
        {
          address: "iqn.2009-11.com.infinidat:storage:infinibox-sn-36009-325",
          node_id: 1,
          session_id: 9613344770
        },
        {
          address: "iqn.2009-11.com.infinidat:storage:infinibox-sn-36009-324",
          node_id: 2,
          session_id: 9613344771
        },
        {
          address: "iqn.2009-11.com.infinidat:storage:infinibox-sn-36009-325",
          node_id: 2,
          session_id: 9613344772
        },
        {
          address: "iqn.2009-11.com.infinidat:storage:infinibox-sn-36009-324",
          node_id: 3,
          session_id: 9613344773
        },
        {
          address: "iqn.2009-11.com.infinidat:storage:infinibox-sn-36009-325",
          node_id: 3,
          session_id: 9613344774
        }
      ]
    }, ...

The host has connections to 6 targets: 3 on each target address (one per node), so it is connected to both iSCSI network spaces.

Here's an example output of a host that is connected to a single iSCSI network space:

{
  result: [
    {
      type: "ISCSI",
      address: "iqn.1994-05.com.redhat:fd6d653673234",
      host_id: 128,
      port_key: 6917529027641082005,
      targets: [
        {
          address: "iqn.2009-11.com.infinidat:storage:infinibox-sn-36009-324",
          node_id: 1,
          session_id: 9613344769
        },
        {
          address: "iqn.2009-11.com.infinidat:storage:infinibox-sn-36009-324",
          node_id: 1,
          session_id: 9613344770
        },
        {
          address: "iqn.2009-11.com.infinidat:storage:infinibox-sn-36009-324",
          node_id: 2,
          session_id: 9613344771
        },
        {
          address: "iqn.2009-11.com.infinidat:storage:infinibox-sn-36009-324",
          node_id: 2,
          session_id: 9613344772
        },
        {
          address: "iqn.2009-11.com.infinidat:storage:infinibox-sn-36009-324",
          node_id: 3,
          session_id: 9613344773
        },
        {
          address: "iqn.2009-11.com.infinidat:storage:infinibox-sn-36009-324",
          node_id: 3,
          session_id: 9613344774
        }
      ]
    }, ...

The host has connections to 6 targets (could be fewer), but they are all on the same target address (iqn.2009-11.com.infinidat:storage:infinibox-sn-36009-324), so it is connected to only one iSCSI network space and is not highly available.
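
The check can also be scripted. Below is a minimal sketch in Python (using the requests library) that queries /api/rest/initiators and flags any iSCSI host whose sessions all terminate on a single target IQN. The management address and credentials are placeholders, and certificate verification is disabled for brevity only:

# Minimal sketch: flag iSCSI initiators that are connected to only one
# InfiniBox target IQN (i.e. to a single iSCSI network space).
# The management address and credentials below are placeholders.
import requests

IBOX = "https://ibox-management.example.com"   # placeholder management address
AUTH = ("admin", "password")                   # placeholder credentials

resp = requests.get(f"{IBOX}/api/rest/initiators", auth=AUTH, verify=False)
resp.raise_for_status()

for initiator in resp.json()["result"]:
    if initiator.get("type") != "ISCSI":
        continue
    target_iqns = {t["address"] for t in initiator.get("targets", [])}
    status = "OK" if len(target_iqns) >= 2 else "NOT highly available"
    print(f"host_id={initiator['host_id']} "
          f"initiator={initiator['address']}: "
          f"{len(target_iqns)} target IQN(s) - {status}")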

Document the current setup

Retrieve the current iSCSI network space setup, and store it for later use:

admin@ibox2812> config.net_space.query net_space=iscsi1
NAME   SERVICE  NETWORK         MTU   RATE LIMIT  GATEWAY  IPS  INTERFACES
iscsi1 ISCSI    172.20.32.0/19  1500           -  -        3    pg_data1

admin@ibox2812> config.net_space.params.query net_space=iscsi1
NAME   IQN                                                         TCP PORT  ISNS SERVERS
iscsi1 iqn.2009-11.com.infinidat:storage:infinibox-sn-2812-270369  3260      -

admin@ibox2812> config.net_space.ip.query net_space=iscsi1
NETWORK SPACE      IP ADDRESS     ENABLED  NODE  NETWORK INTERFACE  TYPE
iscsi1             172.20.62.250  yes      1     pg_data1           iSCSI
iscsi1             172.20.63.10   yes      2     pg_data1           iSCSI
iscsi1             172.20.63.7    yes      3     pg_data1           iSCSI
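
If you prefer a machine-readable copy of this information, the same data can be saved via the REST API. The sketch below assumes the network space endpoint is /api/rest/network/spaces and that the service field contains the string "ISCSI"; verify both against your system's REST API documentation before use:

# Sketch: save the current iSCSI network space definitions to a JSON file
# for later reference. The endpoint name, the service field value and the
# credentials are assumptions - check them against your InfiniBox REST API.
import json
import requests

IBOX = "https://ibox-management.example.com"   # placeholder management address
AUTH = ("admin", "password")                   # placeholder credentials

resp = requests.get(f"{IBOX}/api/rest/network/spaces", auth=AUTH, verify=False)
resp.raise_for_status()

iscsi_spaces = [space for space in resp.json()["result"]
                if "ISCSI" in str(space.get("service", ""))]

with open("iscsi_network_spaces_before_replacement.json", "w") as f:
    json.dump(iscsi_spaces, f, indent=2)
print(f"Saved {len(iscsi_spaces)} iSCSI network space definition(s)")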

Replace the iSCSI network space

Remove the network space:

admin@ibox2812> config.net_space.ip.disable net_space=iscsi1 ip_address=172.20.62.250,172.20.63.7,172.20.63.10 -y
IP addresses disabled in network space "iscsi1": 172.20.62.250, 172.20.63.7, 172.20.63.10

admin@ibox2812> config.net_space.ip.delete net_space=iscsi1 ip_address=172.20.62.250,172.20.63.7,172.20.63.10 -y
IP addresses deleted in network space "iscsi1": 172.20.62.250, 172.20.63.7, 172.20.63.10

admin@ibox2812> config.net_space.delete net_space=iscsi1 -y
Network space "iscsi1" deleted

Define a new network space with the same IP addresses:

admin@ibox2812> config.net_space.create name=iscsi1 service=ISCSI interface=pg_data1 network=172.20.32.0/19
Network space "iscsi1" created

admin@ibox2812> config.net_space.ip.create net_space=iscsi1 ip_address=172.20.62.250,172.20.63.7,172.20.63.10
IP addresses created in network space "iscsi1": 172.20.62.250, 172.20.63.7, 172.20.63.10

Verify the network space details:

admin@ibox2812> config.net_space.query net_space=iscsi1
NAME   SERVICE  NETWORK         MTU   RATE LIMIT  GATEWAY  IPS  INTERFACES
iscsi1 ISCSI    172.20.32.0/19  1500           -  -        3    pg_data1

admin@ibox2812> config.net_space.params.query  net_space=iscsi1
NAME   IQN                                                  TCP PORT  ISNS SERVERS
iscsi1 iqn.2009-11.com.infinidat:storage:infinibox-sn-2812  3260      -

admin@ibox2812> config.net_space.ip.query net_space=iscsi1
NETWORK SPACE  IP ADDRESS     ENABLED  NODE  NETWORK INTERFACE  TYPE
iscsi1         172.20.62.250  yes      1     pg_data1           iSCSI
iscsi1         172.20.63.10   yes      2     pg_data1           iSCSI
iscsi1         172.20.63.7    yes      3     pg_data1           iSCSI

Note: the iSCSI target (IQN) of the new network space is different from the previous one; on 5.5.10 the IQN is identical for all iSCSI network spaces on the system.
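
Optionally, you can list the IQN reported by each iSCSI network space, to confirm that replaced network spaces report the new per-system IQN. A hedged sketch, again assuming the /api/rest/network/spaces endpoint; the properties/iscsi_iqn field names are assumptions to verify on your system:

# Sketch: list the IQN reported by each iSCSI network space, so that replaced
# network spaces can be confirmed to report the new per-system IQN.
# The endpoint name and the "properties"/"iscsi_iqn" field names are assumptions.
import requests

IBOX = "https://ibox-management.example.com"   # placeholder management address
AUTH = ("admin", "password")                   # placeholder credentials

resp = requests.get(f"{IBOX}/api/rest/network/spaces", auth=AUTH, verify=False)
resp.raise_for_status()

for space in resp.json()["result"]:
    if "ISCSI" not in str(space.get("service", "")):
        continue
    iqn = space.get("properties", {}).get("iscsi_iqn", "<unknown>")  # assumed field name
    print(f"{space.get('name')}: {iqn}")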

Connect hosts to the new network space

Log in to each iSCSI host you identified earlier, and add the iSCSI nodes that identify the new connections.

For each vSphere host, go to the Configure tab, select the Storage Adapters sub-tab, and then select the iSCSI Software Adapter in the table; the iSCSI details are shown at the bottom.

In most cases, one of the iSCSI target IP addresses will have been defined under Dynamic Discovery.

Select Static Discovery to show the current (disconnected) targets.

Click Rescan Adapter from the menu, and the connections will automatically be replaced with new ones.

Repeat this for every vSphere host.
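
With many vSphere hosts, the per-host rescan can be scripted instead of clicked. Below is a hedged sketch using pyVmomi; the vCenter address and credentials are placeholders, and disabling certificate verification is for lab use only:

# Sketch: rescan all HBAs (the API equivalent of "Rescan Adapter") on every
# ESXi host in the vCenter inventory. vCenter address and credentials are
# placeholders; adapt the host selection to your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; use proper certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    for host in view.view:
        storage = host.configManager.storageSystem
        storage.RescanAllHba()   # re-establishes iSCSI sessions to the new targets
        storage.RescanVmfs()     # refresh datastores after the paths come back
        print(f"Rescanned {host.name}")
finally:
    Disconnect(si)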

Verify that all iSCSI host connections are highly available

Repeat the verification before continuing to replace the second iSCSI network space.

Make sure all iSCSI host connections are still highly available.
