Reference architecture for installing and configuring Infinidat InfiniBox storage arrays with SAP HANA using the NFS protocol.

SAP HANA and InfiniBox

SAP HANA enables organizations to gain a competitive advantage by providing a platform to analyze large amounts of data in real time. This document provides end users with the best practices for implementing SAP HANA with the Infinidat InfiniBox™ storage array. By using tailored data center integration (TDI), InfiniBox can achieve the highest performance at the greatest scale and availability.



Doc version


Sept. 8, 2019

  • Added support for /hana/shared on InfiniBox 4.x.

  • Minor editing changes.

July 30, 2019

  • Dedicated guide for NFS deployments.

  • Added support for InfiniBox 5.x.

  • Added support for /hana/shared on InfiniBox 5.x or later.

  • Comprehensive updates reflecting re-certification and Infinidat re-branding.

Earlier versions

  • Added support for InfiniBox 3.x, 4.x.

  • Added information on InfiniBox F4xxx models.

  • Improved the description of "SAP HANA and External Storage".

  • Added the section "Setting up a /hana/shared device".

  • Added more information on InfiniBox-side operations.

  • Fixed the doc version numbering scheme.

  • Initial release.

Executive summary

SAP HANA is an in-memory database platform designed to provide real-time data analytics and real-time data processing side by side, helping customers drive a competitive advantage. SAP HANA can be deployed on premises or in the cloud.
Customers who can process as much data as possible as quickly as possible while minimizing expenses will be the most competitive. The SAP HANA TDI (Tailored Datacenter Integration) model combines SAP software components that are optimized on certified hardware from SAP partners. The SAP HANA TDI model is a more open and flexible model for enterprise customers. SAP HANA servers must still meet the SAP HANA requirements and be certified to run HANA. However, the storage can be a shared component of the SAP HANA environment.
Shared storage allows customers greater flexibility and the ability to take advantage of existing storage capacity they may have in their enterprise arrays. In addition, it allows customers to integrate the SAP HANA solution into their existing data center operations, including data protection, monitoring, and data management. This helps to improve the time to value for an SAP HANA implementation as well as reduce risk and costs.

Storage arrays used in SAP HANA TDI deployments must be pre-certified by SAP to ensure they meet all SAP HANA performance and functional requirements. Infinidat tested SAP HANA configuration and performance against all InfiniBox F-series enterprise-proven storage arrays.
Infinidat believes that the InfiniBox provides the following benefits over other storage arrays in the market to help SAP HANA customers achieve significant advantages:

  • Superior performance for processing data

  • Maximum scale to process as much data as possible

  • 99.99999% reliability

  • Low cost

  • Integration into existing data center infrastructure


This white paper describes how to deploy Infinidat InfiniBox storage array with SAP HANA, reducing capital and operational costs, decreasing risk, and increasing data center flexibility.
All configuration recommendations in this document are based on SAP requirements for high availability and the performance tests and results that are needed to meet the key performance indicators (KPIs) for SAP HANA TDI.
This whitepaper provides best practices for deploying the SAP HANA database on the InfiniBox storage array and provides the following information:

  • Introduction and overview of the solution technologies

  • Description of the configuration requirements for SAP HANA on InfiniBox

  • Method of access to InfiniBox from the SAP HANA nodes

SAP HANA and external storage

SAP HANA is an in-memory database. The data that is being processed is kept in the RAM of one or multiple SAP HANA worker hosts. Segments of the data are cached in RAM and the remaining part of the data resides on disk. This is very different from traditional databases. All SAP HANA activities such as reads, inserts, updates, or deletes are performed in the main memory of the host and not on a storage device.
Scalability for SAP HANA TDI is defined by the number of production HANA worker hosts that can be connected to enterprise storage arrays and still meet the key SAP performance metrics for enterprise storage. Because enterprise storage arrays can provide more capacity than required for HANA, scalability depends on a number of factors including:

  • Array cache size

  • Array performance

  • Array bandwidth, throughput, and latency

  • HANA host connectivity to the array

  • Storage configuration for the HANA persistence

SAP HANA uses external disk storage to maintain a copy of the data that is in memory, both to prevent data loss due to a power failure and to enable hosts to fail over, with the standby SAP HANA host taking over processing.
The connectivity to the external storage can be either FC-based or NFS-based. InfiniBox supports both block and NFS for /hana/data, /hana/log, and /hana/shared.

  • In this guide, we used SUSE Linux Enterprise Server (SLES) as the operating system running the SAP HANA database.

The Infinidat InfiniBox storage array

Infinidat believes that the companies that acquire, store, and analyze the most data gain the greatest competitive advantage. Infinidat's patented storage architecture leverages industry-standard hardware to deliver InfiniBox, a storage array that yields 2M IOPS, 99.99999% reliability, and over 8 PB of capacity in a single rack. Automated provisioning, management, and application integration provide a system that is incredibly efficient to manage and simple to deploy. Infinidat is changing the paradigm of enterprise storage while reducing capital requirements, operational overhead, and complexity.
The uniqueness of the Infinidat solution is a storage architecture that includes over 100 patented innovations. The architecture provides a software-driven set of enterprise storage capabilities residing on industry-standard, commodity hardware. As new hardware and storage technologies become available, Infinidat can take advantage of them. By shipping the software with a highly integrated and tested hardware reference platform, Infinidat is able to deliver a high-performing, highly resilient, scalable software-defined storage solution.
Infinidat's level of integration and testing minimizes the time and risk of developing a solution like this in-house, and can deliver it at a much lower cost. In addition, all of the storage software for automated provisioning, management, and application integration enables fewer administrators to manage more storage, keeping OpEx low.
Today, InfiniBox offers its unified storage arrays in several models, ranging from 150TB to 4.149PB of usable capacity. The models differ in cache size, number of SSDs, and the number and size of HDDs in the system.
From an SAP HANA configuration perspective, the number of HANA nodes per system and maximum node configurations are described in the table below:




Recommended HANA nodes per InfiniBox model:

  • Up to 92 nodes

  • Up to 74 nodes

  • Up to 24 nodes

SAP HANA I/O workloads require specific consideration for the configuration of the data and log volumes on the InfiniBox storage arrays. InfiniBox delivers the high performance needed for the persistent storage of an SAP HANA database as well as the log volumes.
The SAP HANA storage certification of the InfiniBox array applies to both block- and file-attached HANA workloads. This white paper discusses how to use InfiniBox in block and file environments. One of the key value propositions of the InfiniBox system is that there are no complex configuration schemas to follow when it comes to system disk configuration.
All software, including pre-configured RAID and tiering (which places the most active data on the fastest-performing drives), is included and preconfigured. The system is designed to deliver the highest IOPS at all times.
For further implementation details, please refer to the HANA nodes scale-out section of this document.

Setting up NFS for HANA datastore

InfiniBox supports NFSv3 as a standard protocol. This enables direct use of InfiniBox file storage services for the data and log partitions.

NFS file systems are mounted via the OS mount: in the event of a node failover, the standby host has immediate access to the same shared storage where the data and log volumes are located. This is called a shared-everything architecture, because all the data and log partitions are mounted on all nodes.

Connectivity example

Each HANA node has redundant connectivity to the InfiniBox, preferably over 10G interfaces. The figure below shows two separate network segments: the "Private Network" for inter-node communication and the "Public Network" for client connectivity. The "Storage Network" connects the hosts to storage via the NFS protocol and, per SAP recommendations, must be redundant with high throughput.

Setting up a /hana/shared device

The SAP HANA cluster requires a location shared between all HANA nodes. This is a filesystem that stores the cluster configuration and logs.

This shared location can reside on an NFSv3 service that meets the SAP HANA requirements, combined with a connector (STONITH) supplied by the storage vendor, which shuts down a failing node to prevent data corruption. See an example of the STONITH method in Appendix B.

Another option is OCFS2 clustered filesystem on a block device. 

  • The creation and configuration of an OCFS2 device is out of scope for this document.
  • With InfiniBox version 4 and above, /hana/shared can be placed on an InfiniBox filesystem export.
  • Creating a filesystem for /hana/shared is described later in this guide, in "Create the filesystem and export on the InfiniBox array".

The size of the shared filesystem

• The size of the /hana/shared file system must be greater than or equal to the total main memory of all SAP HANA nodes.

• The file system must support expanding its size whenever a new node is added to the HANA cluster.

• To shrink a file system on a block device, or to shrink the block device itself, in most cases you must delete and re-create the file system or block device.
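As a quick sketch of the first rule above, the minimum /hana/shared size can be computed as the sum of RAM across all nodes. The node count and RAM figure below are illustrative values, not from this setup:

```shell
#!/bin/sh
# Sizing sketch: /hana/shared must be >= the combined main memory of all
# SAP HANA nodes. NODES and RAM_PER_NODE_GB are example values.
NODES=4
RAM_PER_NODE_GB=512
MIN_SHARED_GB=$((NODES * RAM_PER_NODE_GB))
echo "Minimum /hana/shared size: ${MIN_SHARED_GB} GB"   # 2048 GB for this example
```

Re-run the calculation after adding a node and expand the filesystem accordingly.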

Setting up a cluster file system on RHEL

Red Hat does not provide support for OCFS2. An alternative is to set up GFS2 in a cluster environment.

For a guide to GFS2 on RHEL 7, see the Red Hat documentation.

HANA node access to NFS storage

To enhance scalability and load balancing between the nodes, there are two options for connecting the HANA nodes to the storage nodes. One option is a round-robin DNS configuration. The second option is a direct mount to an IP of an InfiniBox node port, as described here. When a node is deactivated, its IP address is transferred to another node in a manner that is seamless to the client. The system uses LACP trunking to load-balance IP throughput across all Ethernet ports.

Step 1: Create a NAS service

On the InfiniBox, create the following entities:

• Interface

• Network Space

Create an interface

The interface is created from InfiniBox Ethernet ports:

config.ethernet.interface.create name=pg1_data1 type=PORT_GROUP ports=ETH1,ETH2,ETH3,ETH4 repeat_on_all_nodes=yes

Create a network space and the NAS service

The NAS service uses a network space, an InfiniBox configuration that assures failover in case of a port failure.

To create the network space, use the config.net_space.create command

config.net_space.create name=NAS interface=pg1_data1,pg2_data1,pg3_data1 default_gateway= service=NAS network=

Assign an IP address to a network space

config.net_space.ip.create net_space=NAS config.net_space.ip.create net_space=,,,,,

Step 2: Create the filesystem and export on the InfiniBox array

To provision the filesystem, run the following commands from the InfiniShell CLI. You can also put all commands in a file and run 'infinishell -f /file/location'.

This example is for SID=H04

Create a pool:

pool.create name=nas-sap physical_capacity=3t

When using NFS, the data and log volume sizes depend on the internal memory size (RAM) installed on each HANA node and the number of nodes in the cluster.

Create a filesystem and an export for each mount point that represents a filesystem partition. In this example we use SID=H04.

fs.create name=h04_data_mnt00001 size=512G pool=nas-sap
fs.export.create export_path=/H04_data_mnt00001 fs=h04_data_mnt00001
fs.create name=h04_log_mnt00001 size=256G pool=nas-sap
fs.export.create export_path=/H04_log_mnt00001 fs=h04_log_mnt00001
fs.create name=h04_data_mnt00002 size=512G pool=nas-sap
fs.export.create export_path=/H04_data_mnt00002 fs=h04_data_mnt00002
fs.create name=h04_log_mnt00002 size=256G pool=nas-sap
fs.export.create export_path=/H04_log_mnt00002 fs=h04_log_mnt00002
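Since InfiniShell accepts a command file ('infinishell -f'), the provisioning commands above can be collected into a batch file and replayed. A minimal sketch; the file path and the subset of commands are illustrative:

```shell
#!/bin/sh
# Collect a few of the InfiniShell provisioning commands from this guide
# into a batch file; the path /tmp/hana_provision.cli is an example.
cat > /tmp/hana_provision.cli <<'EOF'
pool.create name=nas-sap physical_capacity=3t
fs.create name=h04_data_mnt00001 size=512G pool=nas-sap
fs.export.create export_path=/H04_data_mnt00001 fs=h04_data_mnt00001
EOF
echo "wrote $(wc -l < /tmp/hana_provision.cli) commands"
# Replay against the array (requires InfiniShell connectivity):
# infinishell -f /tmp/hana_provision.cli
```

This keeps the provisioning repeatable across systems and SIDs.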

Step 3: Create a filesystem and an export for /hana/shared

fs.create name=hana-shared size=512G pool=nas-sap
fs.export.create export_path=/hana-shared fs=hana-shared

Check the permissions on the export to make sure they are configured with 'No root squash'; SAP requires this for /hana/shared as well as for the data and log exports.

Use the following command to verify correct access and root-squash permissions:

fs.export.permission.query fs=hana-shared
EXPORT PATH                                                        CLIENT                                                             ACCESS TYPE  NO ROOT SQUASH
/hana-shared                                                       *                                                                  RW           yes

fs.export.permission.query export_path=/h04_log_mnt01,/h04_log_mnt02,/h04_data_mnt01,/h04_data_mnt02
EXPORT PATH                                                        CLIENT                                                             ACCESS TYPE  NO ROOT SQUASH
/h04_log_mnt01                                                     *                                                                  RW           yes
/h04_log_mnt02                                                     *                                                                  RW           yes
/h04_data_mnt01                                                    *                                                                  RW           yes
/h04_data_mnt02                                                    *                                                                  RW           yes

If the permissions are wrong, change them with the fs.export.permission.modify command.
You can restrict access to specific hosts via the client parameter, which accepts a full wildcard (*), a single IP address, or an IP range:

fs.export.permission.modify export_path=/hana-shared client=

Step 4: Mount the filesystems on the hosts

The InfiniBox consists of 3 nodes, which are the targets of the mount points; each node can be configured with up to 4 Ethernet interfaces (12 ports in total).

In this configuration, each SAP HANA host has two 10G interfaces which are connected to redundant switching infrastructure.

To spread the load, we mount the partitions against different nodes and interfaces. To get the IP address of each node, run the following command:

config.net_space.ip.query net_space=NAS
NAS    yes      1     pg1_data1          NAS
NAS     yes      2     pg2_data1          NAS
NAS     yes      2     pg2_data1          NAS
NAS     yes      1     pg1_data1          NAS
NAS     yes      2     pg2_data1          NAS
NAS     yes      1     pg1_data1          NAS

Next, create the relevant directories on the hosts and mount them through /etc/fstab.

On each host, run these commands to create the directories and change the permissions before mounting (in this example, SID=H04):

mkdir -p /hana/data/H04/mnt00001
mkdir -p /hana/log/H04/mnt00001
mkdir -p /hana/data/H04/mnt00002
mkdir -p /hana/log/H04/mnt00002
mkdir -p /hana/shared

chmod -R 777 /hana/log/H04
chmod -R 777 /hana/data/H04
chmod 777 /hana/shared
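The same mounts are then defined in /etc/fstab on every host. As a sketch, the fstab lines can be generated by pairing each partition with an InfiniBox node IP to spread the load. The 10.0.0.x addresses below are placeholders; use the addresses returned by config.net_space.ip.query:

```shell
#!/bin/sh
# Generate /etc/fstab entries for SID H04, spreading partitions across
# InfiniBox node IPs. The 10.0.0.x addresses are placeholders.
OPTS="rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,actimeo=0,nolock"
for entry in \
    "10.0.0.1:/H04_data_mnt00001 /hana/data/H04/mnt00001" \
    "10.0.0.2:/H04_log_mnt00001 /hana/log/H04/mnt00001" \
    "10.0.0.3:/H04_data_mnt00002 /hana/data/H04/mnt00002" \
    "10.0.0.1:/H04_log_mnt00002 /hana/log/H04/mnt00002" \
    "10.0.0.2:/hana-shared /hana/shared"
do
    # $entry word-splits into "<source> <mountpoint>"
    printf '%s %s nfs %s 0 0\n' $entry "$OPTS"
done
```

Append the generated lines to /etc/fstab and run 'mount -a' on each host.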

Below is the list of mounts and options that should exist on all of the nodes; after mounting, each HANA node should show these partitions. For each mount, record the file system name, its export path on the InfiniBox, the mount point on the server, and the mount options (which must be identical on all hosts).

Example of /etc/fstab on one SAP HANA node:

interop014:~ # cat /etc/fstab
/dev/system/swap swap swap defaults 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a / btrfs defaults 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /boot/grub2/i386-pc btrfs subvol=@/boot/grub2/i386-pc 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /boot/grub2/x86_64-efi btrfs subvol=@/boot/grub2/x86_64-efi 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /home btrfs subvol=@/home 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /opt btrfs subvol=@/opt 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /srv btrfs subvol=@/srv 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /tmp btrfs subvol=@/tmp 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /usr/local btrfs subvol=@/usr/local 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/cache btrfs subvol=@/var/cache 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/crash btrfs subvol=@/var/crash 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/lib/libvirt/images btrfs subvol=@/var/lib/libvirt/images 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/lib/machines btrfs subvol=@/var/lib/machines 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/lib/mailman btrfs subvol=@/var/lib/mailman 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/lib/mariadb btrfs subvol=@/var/lib/mariadb 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/lib/mysql btrfs subvol=@/var/lib/mysql 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/lib/named btrfs subvol=@/var/lib/named 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/lib/pgsql btrfs subvol=@/var/lib/pgsql 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/log btrfs subvol=@/var/log 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/opt btrfs subvol=@/var/opt 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/spool btrfs subvol=@/var/spool 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/tmp btrfs subvol=@/var/tmp 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /.snapshots btrfs subvol=@/.snapshots 0 0
 /hana/data/H04/mnt00001 nfs rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,actimeo=0,nolock 0 0
 /hana/data/H04/mnt00002 nfs rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,actimeo=0,nolock 0 0
 /hana/log/H04/mnt00001 nfs rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,actimeo=0,nolock 0 0
 /hana/log/H04/mnt00002 nfs rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,actimeo=0,nolock 0 0
 /hana/shared nfs rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,actimeo=0,nolock 0 0

Now the storage and hosts are ready for SAP HANA installation.

HA/DR provider - implementing STONITH

A STONITH implementation is required only in multi-node environments that use the NFSv3 protocol, because NFSv3 does not provide a file-locking mechanism like SCSI-3 persistent reservations.

The SAP HANA Storage Connector API together with a specific Storage Connector Script allows usage of different types of storage and network architecture to ensure proper I/O fencing.

A specific storage connector, provided by a certified storage vendor, implements a STONITH (shoot the other node in the head) call to reboot a failed host, which isolates the failed node and protects the shared resource.

Procedure for creating an HA/DR provider

This implementation of STONITH is based on the IPMI tool and is performed after the HANA system is already installed.

When implementing STONITH, SAP HANA addresses the target using its host name. Therefore, it is a best practice to configure the IPMI hosts with a naming convention based on the host name. In our example, m-<hostname> is the target for IPMI actions.

1. Create a new directory in the shared location where the HA/DR provider will be placed, e.g.:


2. Change the access permissions to the directory so that the <SID>adm user can access it.

3. Change to the <SID>adm user and use the demo script located at the following location:


Copy the script to the folder you created in step 1 and give it a meaningful name that represents the Python class. In our example we used the file shown in Appendix B.

4. Edit the global.ini file located at: /hana/shared/<SID>/global/hdb/custom/config/

Add the following section that defines the DR provider:

provider = InfiSTONITH
path = /hana/shared/inficonnector
execution_order = 1

5. Test that the <SID>adm user can run ipmitool. Switch to that user and run the following command:

ipmitool -H <hostname-ipmi> -U admin -P <password> -I lanplus power status
h04adm@interop014:/usr/sap/H04/HDB04> ipmitool -H m-interop015 -U root -P UNDISCLOSED -I lanplus power status
Chassis Power is on

6. Verify the configuration:

  • Check that all `ipmi-hostname` nodes are resolvable, since ipmitool is invoked with the host name. You can add them to /etc/hosts.
  • When you start the HANA system, you should see that the STONITH provider is also started; search the nameserver trace files, for example:
[3685]{-1}[-1/-1] 2019-07-14 17:44:50.679028 i ha_dr_provider   HADRProviderManager.cpp(00073) : loading HA/DR Provider 'InfiSTONITH' from /hana/shared/inficonnector
[3710]{-1}[-1/-1] 2019-07-14 17:44:50.932652 d ha_dr_InfiSTONIT : tracer 'ha_dr_InfiSTONITH' initialized
  • Perform a failover test. When you shut down a worker node, the service should be migrated to a standby host.
    Check that the trace logs show the STONITH behavior, for example:

    [3946]{-1}[-1/-1] 2019-07-14 15:44:14.533533 e NameServer       TREXNameServer.cpp(09870) : nameserver@interop016:30401 not responding. retry in 5 sec
    [3942]{-1}[-1/-1] 2019-07-14 15:44:15.093681 i assign           MasterFileChecker.cpp(00124) : master lock file check OK
    [3942]{-1}[-1/-1] 2019-07-14 15:44:15.093705 i failover         DistributedWatchDog.cpp(00219) : Checking master lock succeeded: master is inactive
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.093804 w failover         DistributedWatchDog.cpp(00139) : master nameserver 'interop016:30401' is inactive -> electing new master
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.093862 i failover         DistributedWatchDog.cpp(00147) : daemon process not running on host interop016 -> start masterize
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.095526 i failover         TREXNameServer.cpp(02475) : master failover from interop016 to interop015 started (check masterlock: no)
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.095543 i assign           TREXNameServer.cpp(02570) : assign to volume 1 started
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.095987 i Backup           Backup_Recover.cpp(00243) : :::: RECOVERY looking for request ::::
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.096437 i Backup           BackupTracerImpl.cpp(00219) : Initializing backup tracer... housekeeping disabled
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.096744 i Backup           BackupTracerImpl.cpp(00219) : Initializing backup tracer... housekeeping disabled
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.097538 i Backup           BackupMgr_Manager.cpp(04995) : Entering isDataRecoveryPending requestedVolume: 1
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.097554 i Backup           BackupMgr_Manager.cpp(05020) : wait done m_RecoverRequestBarrier ( 0 )
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.097564 i Backup           BackupMgr_Manager.cpp(04995) : Entering isDataRecoveryPending requestedVolume: 1
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.097570 i Backup           BackupMgr_Manager.cpp(05020) : wait done m_RecoverRequestBarrier ( 0 )
    [5473]{-1}[-1/-1] 2019-07-14 15:44:15.100210 i failover         DistributedWatchDog.cpp(00351) : detected activate standby nameserver@interop014:30401 with obsolete topology
    [5473]{-1}[-1/-1] 2019-07-14 15:44:15.100220 e NameServer       TREXNameServer.cpp(03748) : setActive from DistributedWatchdog on non master nameserver
    [5475]{-1}[-1/-1] 2019-07-14 15:44:15.101545 i ha_dr_provider   PythonProxyImpl.cpp(00953) : calling HA/DR provider InfiSTONITH.stonith(failing_host=interop016)
    [5475]{-1}[-1/-1] 2019-07-14 15:44:15.102185 d ha_dr_InfiSTONIT : enter stonith hook; {'failingHost': 'interop016', 'self': <InfiSTONITH.InfiSTONITH object at 0x7f288f327b50>, 'kwargs': {}}
    [5475]{-1}[-1/-1] 2019-07-14 15:44:15.102257 d ha_dr_InfiSTONIT : {'execution_order': '1', 'path': '/hana/shared/inficonnector', 'provider': 'InfiSTONITH'}
    [5475]{-1}[-1/-1] 2019-07-14 15:44:15.102308 i ha_dr_InfiSTONIT : stonith - power cycling host interop016
    [5475]{-1}[-1/-1] 2019-07-14 15:44:15.102358 i ha_dr_InfiSTONIT : ipmitool -H m-interop016 -U root -P xxxxxx -I lanplus power  off
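The resolvability check from the verification step above can be scripted. A small sketch using the example host names from this guide; it assumes getent is available on the host:

```shell
#!/bin/sh
# Verify that each m-<hostname> IPMI alias resolves before relying on the
# STONITH hook; the host names are the examples used in this guide.
for h in m-interop014 m-interop015 m-interop016; do
    if getent hosts "$h" >/dev/null 2>&1; then
        echo "$h resolves"
    else
        echo "$h does NOT resolve - add it to /etc/hosts"
    fi
done
```

Run this on every HANA node, since the stonith hook may fire on any of them.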

I/O testing with HWCCT

The process of SAP HANA hardware certification includes running I/O utilization testing to ensure that the performance of the HANA installation is not influenced by competing input/output (I/O) operations of other workloads.

Multiple HANA nodes connected to the storage must fulfill the KPIs for each server, even when running in parallel.

The testing is done using the HWCCT tool, which can be run with specific parameters. These parameters are set in the fsperf script, by hdbparam, or by using the -param option. For HWCCT details see SAP Note 1943937 (login required).

These settings apply to both NAS and SAN connectivity, on HANA version 1.0 only.

The recommended settings for InfiniBox are specified below:


Starting with SAP HANA 2.0, hdbparam is deprecated and the parameters have been moved to global.ini. The parameters can also be set using SQL commands or SAP HANA Studio.

See Appendix A for the parameters in the [fileio] section of global.ini.

See SAP Note 2399079 (Elimination of hdbparam in HANA 2) for more details.

Further tips and recommendations

See SAP Note 2382421 for optimizing network configuration when using a large number of HANA nodes.

See SAP Note 2205917 for recommended OS settings for SLES 12.


Taking advantage of the Infinidat InfiniBox enterprise-proven storage array with SAP HANA provides clients with a number of key benefits. Clients can reduce the amount of physical hardware required to run SAP HANA workloads, reducing CapEx and OpEx by taking advantage of existing storage management and the best practices of the solution.

Clients can run their SAP HANA workloads on InfiniBox with the peace of mind that they will achieve and even exceed their competitive business objectives. InfiniBox delivers 99.99999% availability with maximum performance, and can scale to meet any SAP HANA need.

This solution provides clients the following benefits:

• Easy integration of SAP HANA into existing data center infrastructure.

• Existing data center best practices for data management and protection

• Highest reliability at 99.99999% uptime

• Over 2M IOPS of performance

• Over 8PB of usable storage in a single rack, before data reduction

• Best overall storage TCO including power, cooling and floor space

Appendix A - global.ini example (for NFS protocol)

internal_network =
listeninterface = .internal

[internal_hostname_resolution]
 = interop016
 = interop015
 = interop014

mode = multidb
database_isolation = low
singletenant = yes

basepath_datavolumes = /hana/data/H04
basepath_logvolumes = /hana/log/H04
basepath_shared = yes

usage = test

provider = InfiSTONITH
path = /hana/shared/inficonnector
execution_order = 1

max_parallel_io_requests[DATA] = 128
async_write_submit_active= on
async_read_submit= on
max_parallel_io_requests[LOG] = 128

ha_dr_InfiSTONITH = debug

Appendix B - STONITH method example

h04adm@interop014:/usr/sap/H04/HDB04> cat /hana/shared/inficonnector/
Sample for a HA/DR hook provider.

When using your own code in here, please copy this file to location on /hana/shared outside the HANA installation.
This file will be overwritten with each hdbupd call! To configure your own changed version of this file, please add
to your global.ini lines similar to this:

    provider = <className>
    path = /hana/shared/haHook
    execution_order = 1

For all hooks, 0 must be returned in case of success.

from hdb_ha_dr.client import HADRBase, Helper
import os, time

class InfiSTONITH(HADRBase):

    def __init__(self, *args, **kwargs):
        # delegate construction to base class
        super(InfiSTONITH, self).__init__(*args, **kwargs)

    def about(self):
        return {"provider_company" :        "INFINIDAT",
                "provider_name" :          "InfiSTONITH", # provider name = class name
                "provider_description" :    "HANA SPS04 IPMI Stonith",
                "provider_version" :        "2.0"}

    def startup(self, hostname, storage_partition, system_replication_mode, **kwargs):
        self.tracer.debug("enter startup hook; %s" % locals())
        self.tracer.debug(self.config.toString())
        self.tracer.info("leave startup hook")
        return 0

    def shutdown(self, hostname, storage_partition, system_replication_mode, **kwargs):
        self.tracer.debug("enter shutdown hook; %s" % locals())
        self.tracer.debug(self.config.toString())
        self.tracer.info("leave shutdown hook")
        return 0

    def failover(self, hostname, storage_partition, system_replication_mode, **kwargs):
        self.tracer.debug("enter failover hook; %s" % locals())
        self.tracer.debug(self.config.toString())
        self.tracer.info("leave failover hook")
        return 0

    def stonith(self, failingHost, **kwargs):
        self.tracer.debug("enter stonith hook; %s" % locals())

        # e.g. stonith of params["failed_host"]
        # e.g. set vIP active
        self.tracer.info("stonith - power cycling host %s" % failingHost)
        ipmi_host = "m-%s" % failingHost
        ipmi_call = "ipmitool -H %s -U root -P undisclosed -I lanplus power " % ipmi_host
        ipmi_call_off = "%s off" % ipmi_call
        ipmi_call_status = "%s status" % ipmi_call
        ipmi_call_on = "%s on" % ipmi_call
        self.tracer.info("%s" % ipmi_call_off)
        print("%s" % ipmi_call_off)
        retries = 5
        retry_nr = 1

        while True:
            print("Trying to call ipmitool: %d" % retry_nr)
            # If we fail to call ipmi we need to stop here and leave the system unmounted!
            (code, output) = Helper._runOsCommand(ipmi_call_off)
            if code != 0:
                ret = 1  # means failure
                print(output)
            (code, output) = Helper._runOsCommand(ipmi_call_status)
            if 'is off' in output:
                msg = "successful power off %s" % failingHost
                print(msg)
                ret = 0
                break
            if retry_nr >= retries:
                msg = "giving up powering off %s - NEED HELP" % failingHost
                print(msg)
                break
            retry_nr += 1

        self.tracer.info("leave stonith hook")
        return 0

    def preTakeover(self, isForce, **kwargs):
        """Pre takeover hook."""
        self.tracer.info("%s.preTakeover method called with isForce=%s" % (self.__class__.__name__, isForce))

        if not isForce:
            # run pre takeover code
            # run pre-check, return != 0 in case of error => will abort takeover
            return 0
        else:
            # possible force-takeover only code
            # usually nothing to do here
            return 0

    def postTakeover(self, rc, **kwargs):
        """Post takeover hook.""""%s.postTakeover method called with rc=%s" % (self.__class__.__name__, rc))

        if rc == 0:
            # normal takeover succeeded
            return 0
        elif rc == 1:
            # waiting for force takeover
            return 0
        elif rc == 2:
            # error, something went wrong
            return 0

    def srConnectionChanged(self, parameters, **kwargs):
        self.tracer.debug("enter srConnectionChanged hook; %s" % locals())

        # Access to parameters dictionary
        hostname = parameters['hostname']
        port = parameters['port']
        volume = parameters['volume']
        serviceName = parameters['service_name']
        database = parameters['database']
        status = parameters['status']
        databaseStatus = parameters['database_status']
        systemStatus = parameters['system_status']
        timestamp = parameters['timestamp']
        isInSync = parameters['is_in_sync']
        reason = parameters['reason']
        siteName = parameters['siteName']
        self.tracer.info("leave srConnectionChanged hook")
        return 0

    def srReadAccessInitialized(self, parameters, **kwargs):
        self.tracer.debug("enter srReadAccessInitialized hook; %s" % locals())

        # Access to parameters dictionary
        database = parameters['last_initialized_database']
        databasesNoReadAccess = parameters['databases_without_read_access_initialized']
        databasesReadAccess = parameters['databases_with_read_access_initialized']
        timestamp = parameters['timestamp']
        allDatabasesInitialized = parameters['all_databases_initialized']
        self.tracer.info("leave srReadAccessInitialized hook")
        return 0

    def srServiceStateChanged(self, parameters, **kwargs):
        self.tracer.debug("enter srServiceStateChanged hook; %s" % locals())

        # Access to parameters dictionary
        hostname = parameters['hostname']
        service = parameters['service_name']
        port = parameters['service_port']
        status = parameters['service_status']
        previousStatus = parameters['service_previous_status']
        timestamp = parameters['timestamp']
        daemonStatus = parameters['daemon_status']
        databaseId = parameters['database_id']
        databaseName = parameters['database_name']
        databaseStatus = parameters['database_status']
        self.tracer.info("leave srServiceStateChanged hook")
        return 0

For more information

Infinidat offers experienced storage consultants with proven methodologies who can assist with implementing InfiniBox with your applications. For more information, visit the Infinidat website or ask your local Infinidat sales representative.

Legal Notice

© Copyright Infinidat 2020.

This document is current as of the date of publication and may be changed by Infinidat at any time. Not all offerings are available in every country in which Infinidat operates.


Infinidat products are warranted according to the terms and conditions of the agreements under which they are provided. 

Infinidat, the Infinidat logo, InfiniBox, InfiniRAID, InfiniSnap, InfiniMetrics, and any other applicable product trademarks are registered trademarks or trademarks of Infinidat LTD in the United States and other countries. Other product and service names might be

trademarks of Infinidat or other companies. A current list of Infinidat trademarks is available online at INFINIDAT-Trademarks.pdf.


Last edited: 2022-08-06 08:10:13 UTC