Reference architecture for installing and configuring Infinidat InfiniBox storage arrays with SAP HANA using Fibre Channel protocol.

SAP HANA and InfiniBox

SAP HANA enables organizations to gain a competitive advantage by providing a platform to analyze large amounts of data in real time. This document provides end users with the best practices for implementing SAP HANA with the Infinidat InfiniBox™ storage array. By using tailored data center integration (TDI), InfiniBox can achieve the highest performance at the greatest scale and availability.



Doc version

2.0.4 (May 23, 2022): Updated out-of-date technical data and information
2.0.3 (May 22, 2022): Updated performance statistics
2.0.2 (Sept. 9, 2019): Added support for /hana/shared on InfiniBox
June 14, 2019:
  • Dedicated guide for FC deployments.
  • Added support for InfiniBox 5.x.
  • Added support for /hana/shared on InfiniBox 5.x or later.
  • Comprehensive updates reflecting re-certification and Infinidat re-branding.
Earlier versions:
  • Added support for InfiniBox 3.x, 4.x.
  • Added information on InfiniBox F4xxx models.
  • Improved description of SAP HANA and external storage.
  • Added section: Setting up a /hana/shared device.
  • Added more information on InfiniBox-side operations; fixed the doc version numbering scheme.
  • Initial release.

Executive summary

SAP HANA is an in-memory database platform that is designed to provide real-time data analytics and real-time data processing, side by side, to customers in order to help drive a competitive advantage. SAP HANA can be deployed on premises or in the cloud.

Customers who can process as much data as possible as quickly as possible while minimizing expenses will be the most competitive. The SAP HANA TDI (Tailored Datacenter Integration) model combines SAP software components that are optimized on certified hardware from SAP partners. The SAP HANA TDI model is a more open and flexible model for enterprise customers. SAP HANA servers must still meet the SAP HANA requirements and be certified to run HANA. However, the storage can be a shared component of the SAP HANA environment.

Shared storage allows customers greater flexibility and the ability to take advantage of existing storage capacity they may have in their enterprise arrays. In addition, it allows customers to integrate the SAP HANA solution into their existing data center operations including data protection, monitoring and data management. This helps to improve the time to value for a SAP HANA implementation as well as reduce risk and costs.

Storage arrays used in SAP HANA TDI deployments must be pre-certified by SAP to ensure they meet all SAP HANA performance and functional requirements. Infinidat tested SAP HANA configuration and performance against all InfiniBox F-series enterprise-proven storage arrays.

Infinidat believes that the InfiniBox provides the following benefits over other storage arrays in the market to help SAP HANA customers achieve significant advantages:

  • Superior performance for processing data
  • Maximum scale to process as much data as possible
  • 99.99999% reliability
  • Low cost
  • Integration into existing data center infrastructure


This white paper describes how to deploy the Infinidat InfiniBox storage array with SAP HANA, reducing capital and operational costs, decreasing risk, and increasing data center flexibility.

All configuration recommendations in this document are based on SAP requirements for high availability and the performance tests and results that are needed to meet the key performance indicators (KPIs) for SAP HANA TDI.

This whitepaper provides best practices for deploying the SAP HANA database on the InfiniBox storage array and provides the following information:

  • Introduction and overview of the solution technologies
  • Description of the configuration requirements for SAP HANA on InfiniBox
  • Method of access to InfiniBox from the SAP HANA nodes

SAP HANA and external storage

SAP HANA is an in-memory database. The data that is being processed is kept in the RAM of one or multiple SAP HANA worker hosts. Segments of the data are cached in RAM and the remaining part of the data resides on disk. This is very different from traditional databases. All SAP HANA activities such as reads, inserts, updates, or deletes are performed in the main memory of the host and not on a storage device.

Scalability for SAP HANA TDI is defined by the number of production HANA worker hosts that can be connected to enterprise storage arrays and still meet the key SAP performance metrics for enterprise storage. Because enterprise storage arrays can provide more capacity than required for HANA, scalability depends on a number of factors including:

  • Array cache size
  • Array performance
  • Array bandwidth, throughput, and latency
  • HANA host connectivity to the array
  • Storage configuration for the HANA persistence

SAP HANA uses external disk storage to maintain a copy of the data that is in memory to prevent data loss due to a power failure as well as to enable hosts to failover and have the standby SAP HANA host take over processing.

Connectivity to the external storage can be either FC-based or NFS-based. InfiniBox supports both block and NFS for /hana/data, /hana/log, and /hana/shared.

In this guide, we used SUSE Linux Enterprise Server (SLES) as the operating system running the SAP HANA database. More information on the best practices for configuring the OS can be found in  Appendix A: Installing and Setting-up the Linux OS.

The Infinidat InfiniBox storage array

Infinidat believes that the companies that acquire, store, and analyze the most data gain the greatest competitive advantage. Infinidat's patented storage architecture leverages industry-standard hardware to deliver InfiniBox, a storage array that yields 2M IOPS, 99.99999% reliability, and over 8 PB of capacity in a single rack. Automated provisioning, management, and application integration provide a system that is incredibly efficient to manage and simple to deploy. Infinidat is changing the paradigm of enterprise storage while reducing capital requirements, operational overhead, and complexity.

The uniqueness of the Infinidat solution is a storage architecture that includes over 100 patented innovations.  The architecture provides a software driven set of enterprise storage capabilities residing on industry standard, commodity hardware.  As new hardware and storage technologies become available, Infinidat can take advantage of them.  Shipping the software with a highly integrated and tested hardware reference platform, Infinidat is able to deliver a high performing, highly resilient, scalable software defined storage solution.  

Infinidat's level of integration and testing minimizes the time and risk of developing such a solution in-house, and delivers it at a much lower cost. In addition, the storage software for automated provisioning, management, and application integration enables fewer administrators to manage more storage, keeping OpEx low.

Today, InfiniBox offers its unified storage arrays in several models, ranging from 100TB to 4.149PB of usable capacity. The models differ in cache size, the number of SSDs, and the number and size of HDDs in the system.

From an SAP HANA configuration perspective, the recommended maximum number of HANA nodes per InfiniBox model is as follows: up to 96, 92, 92, 74, or 24 nodes, depending on the model.

SAP HANA I/O workloads require specific consideration for the configuration of the data and log volumes on the InfiniBox storage arrays. InfiniBox delivers the high performance needed for the persistent storage of an SAP HANA database as well as the log volumes. When running other workloads on the same InfiniBox system or adding more production nodes, consider using the performance test tool provided by SAP, HWCCT (Hardware Configuration Check Tool), to verify that the KPIs defined by SAP are met.

The SAP HANA storage certification of the InfiniBox array applies to both block and file attached HANA workloads; this white paper discusses how to use InfiniBox in a block environment. One of the key value propositions of the InfiniBox system is that there are no complex configuration schemas to follow for system disk configuration.

All software, including pre-configured RAID and tiering (placing the most active data on the fastest performing drives), is included and preconfigured. There is no need to set up RAID configurations or determine which drives you need. The system is designed to deliver the highest IOPS at all times.

For further implementation details, please refer to the HANA nodes scale-out section of this document.

Infinidat's performance acceleration and workload optimization

Infinidat's primary storage family is the InfiniBox F-series. The series' non-SSA models are flash-optimized arrays, using a combination of DRAM, flash media (SSD), and high-capacity NL-SAS disks to write, read, and store data. The series' SSA models are designed to meet the demands of workloads that require consistent microsecond latency for every I/O from the storage subsystem, with latency as low as 35µs (microseconds). All models in the series use the Infinidat Neural Cache learning algorithms for data placement optimization, allowing most I/O to be serviced at DRAM speed. Coupled with an optimized path to solid-state storage, this ensures end-to-end performance. Enterprise-class storage with unmatched features and ease of use is included, with the added benefit of Infinidat's 100% availability guarantee.

InfiniBox delivers high levels of performance across mixed and consolidated workloads. The system is built from the ground up to adapt to different workloads within a single environment.

Write acceleration

InfiniBox accepts all writes without any pre-processing into its DRAM, and makes a second copy of the write in another node's DRAM over low-latency InfiniBand before sending the acknowledgment to the host. Accepting the write into DRAM (directly attached to the CPU), instead of an external flash device, allows InfiniBox to complete writes at the lowest possible latency. InfiniBox uses a single, large memory pool to accept writes. This allows larger write bursts to be sustained, lets frequently changing data be overwritten at DRAM latency, and gives Neural Cache time to make smart decisions, prioritizing which data blocks will benefit from DRAM speeds and which should be destaged to SSDs and HDDs. By keeping data longer in the write cache, Neural Cache avoids unnecessary workload on the CPU and backend persistency layers.

Read acceleration

InfiniBox uses its innovative Neural Cache, which aims to place all of the hot data in DRAM. Neural Cache allows most reads to complete at DRAM speed, which is 1000 times faster than flash.

Since Neural Cache is a learning algorithm, it optimizes performance over time. InfiniBox leverages a thick SSD flash layer, which serves as a "cushion" for DRAM-misses. As Neural Cache learns the I/O patterns and optimizes DRAM data placement, the flash layer changes its function from handling DRAM-misses to handling changes in I/O patterns, which the algorithm may not be able to predict (e.g. periodic audit that requires data not in DRAM).


The InfiniBox system features a virtualized storage service that is shared by all hosts and applications. The QoS feature aligns system performance with varying business needs.
Consider using QoS to restrict IOPS and throughput by limiting the resources allocated to a 'noisy neighbor' entity.

Block device setup for HANA datastore

In a production environment it is essential to have dual connections for each of the network and FC channels. In the figure below, all three InfiniBox storage nodes are shown (the disk enclosures are omitted for simplicity).

* Note - the 3 nodes shown at the bottom of this drawing are from the same InfiniBox array.

One of the benefits of working with the InfiniBox system is the simple method of provisioning storage for hosts. This paper describes a 2+1 setup, where two active HANA hosts each use their own set of data and log volumes and a third host acts as a standby server. The HANA system can be scaled with multiple hosts, which can be added later.

The following steps describe the basic process of implementation, zoning, creating hosts and volumes on the InfiniBox, mapping the LUNs, and configuring them from the host side.

Step 1: Host zoning

  1. In this setup, all three hosts have access to all volumes. This connectivity is essential for mounting the volumes in a failover scenario.

    See the section on the SAP HANA Storage Connector API client below for more details.
  2. Each host has two HBA ports:

    • p1 – connected to Fabric A

    • p2 – connected to Fabric B, to retain high availability during failures or maintenance

  3. The InfiniBox has 3 nodes, each with one port connected to Fabric A and a second port connected to Fabric B.
    The active zones are grouped on each fabric accordingly.


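The zoning layout described above can be sketched programmatically. The following Python snippet is illustrative only: the host names, port labels, and zone-naming scheme are assumptions, not Infinidat or SAP requirements.

```python
# Illustrative zone layout: one zone per host HBA port per fabric, each
# containing that initiator plus the InfiniBox node ports on the same
# fabric. All names below are hypothetical examples.

def build_zones(hosts, fabrics):
    """Return {fabric: [(zone_name, members), ...]}."""
    zones = {}
    for fabric, cfg in fabrics.items():
        zones[fabric] = [
            (f"z_{host}_{cfg['host_port']}_fab{fabric}",
             [f"{host}-{cfg['host_port']}"] + cfg["array_ports"])
            for host in hosts
        ]
    return zones

hosts = ["sap01", "sap02", "sap03"]
fabrics = {
    "A": {"host_port": "p1",
          "array_ports": ["ibox-n1-fcA", "ibox-n2-fcA", "ibox-n3-fcA"]},
    "B": {"host_port": "p2",
          "array_ports": ["ibox-n1-fcB", "ibox-n2-fcB", "ibox-n3-fcB"]},
}

for fabric, zone_list in build_zones(hosts, fabrics).items():
    for name, members in zone_list:
        print(fabric, name, members)
```

This yields six zones (three per fabric), each containing one initiator and three storage ports, matching the 3-host, 3-node setup above.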
Step 2: Configure the SAP HANA hosts to work with InfiniBox

The easiest way to connect the host to the storage is to use Host PowerTools (HPT), a free software package from Infinidat that is installed on each host and configures multipath settings and other deployment aspects according to best practices. It is supported on various operating systems and can be downloaded from the Infinidat repository site.

  1. Download and install Host PowerTools.
  2. After the installation of HPT, it is recommended to run the command:
    sudo infinihost settings check --auto-fix
    This will check the compatibility of the Host and set the needed configurations (mainly multipathing).
  3. Register the system:
    sudo infinihost system register

Step 3: Create a host cluster

  1. Create the cluster using InfiniShell.
    cluster.create name=sap_hana
  2. Add the hosts that were created by Host PowerTools to the cluster. 
    1. Run host.query to see the hosts:
      sap01 - 2 0 2016-01-07 08:00:00
      sap02 - 2 0 2016-01-07 08:00:00
      sap03 - 2 0 2016-01-07 08:00:00 
    2. Use the cluster.add_hosts command:
      cluster.add_host name=sap_hana host=sap01
      Host "sap01" added to cluster "sap_hana"
      cluster.add_host name=sap_hana host=sap02
      Host "sap02" added to cluster "sap_hana"
      cluster.add_host name=sap_hana host=sap03
      Host "sap03" added to cluster "sap_hana" 
    3. Query the cluster to verify that the hosts belong to the cluster:
      sap_hana sap01
      sap_hana sap02
      sap_hana sap03
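The Step 3 command sequence can be generated for any number of hosts. This short Python sketch simply emits the InfiniShell commands shown above; the cluster and host names are the examples used in this guide.

```python
# Emit the InfiniShell commands from Step 3 for an arbitrary host list.

def cluster_commands(cluster, hosts):
    cmds = [f"cluster.create name={cluster}"]
    cmds += [f"cluster.add_host name={cluster} host={h}" for h in hosts]
    return cmds

for cmd in cluster_commands("sap_hana", ["sap01", "sap02", "sap03"]):
    print(cmd)
```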

Step 4: Provision volumes

Provision the volumes on the InfiniBox and map them to the hosts.

  1. Create the volumes on the InfiniBox system:
    If needed, create a pool: a physical capacity container that holds the volumes. You should have the HANA cluster sizing requirements at hand in order to set the total capacity.
    Run the following InfiniShell command:
    pool.create name=sap-hana physical_capacity=8t ssd_cache=yes
    Run pool.query to see the details of the newly created pool.
  2. Creating the volumes.
    Note: The specific size of each volume is usually determined by the requester and derived from the RAM size of the HANA node; detailed sizing guidance is out of scope for this document.
    In the following example we create two sets of volumes, one for each of the two active nodes (the third node is on stand-by).
    We use the InfiniShell CLI with the vol.create command as follows:
    vol.create name=sap01-data size=2t pool=sap-hana
    vol.create name=sap01-log size=2t pool=sap-hana
    vol.create name=sap03-data size=2t pool=sap-hana
    vol.create name=sap03-log size=2t pool=sap-hana
    Another option is to use the GUI or a script.
    Run vol.query to see the created volumes:
    vol.query name=sap01-data,sap01-log,sap03-data,sap03-log --columns=name,size,pool,ssd_cache
    NAME        SIZE     POOL      SSD CACHE
    sap01-data  2.00 TB  sap-hana  yes
    sap01-log   2.00 TB  sap-hana  yes
    sap03-data  2.00 TB  sap-hana  yes
    sap03-log   2.00 TB  sap-hana  yes
  3. Map the volumes to the cluster. 
    In this architecture we map each LUN to all hosts, using the vol.map CLI command:
    vol.map name=sap01-data cluster=sap_hana
    Volume "sap01-data" mapped to LUN 11 in cluster "sap_hana"
    vol.map name=sap01-log cluster=sap_hana
    Volume "sap01-log" mapped to LUN 12 in cluster "sap_hana"
    vol.map name=sap03-data cluster=sap_hana
    Volume "sap03-data" mapped to LUN 13 in cluster "sap_hana"
    vol.map name=sap03-log cluster=sap_hana
    Volume "sap03-log" mapped to LUN 14 in cluster "sap_hana"
    Query the results:
    vol.map_query name=sap01-data,sap01-log,sap03-data,sap03-log
    sap01-data CLUSTER sap_hana 11
    sap01-log CLUSTER sap_hana 12
    sap03-data CLUSTER sap_hana 13
    sap03-log CLUSTER sap_hana 14
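The 2 TB volumes above are examples; in practice, volume sizes are derived from the RAM of the HANA host. The sketch below encodes commonly cited rule-of-thumb formulas from SAP's HANA storage sizing guidance (data roughly 1.2x RAM; log 0.5x RAM, capped at 512 GB for larger hosts). These formulas are an assumption for illustration; verify them against the current SAP HANA storage requirements document before sizing production volumes.

```python
# Rule-of-thumb SAP HANA persistence sizing (verify against the current
# SAP HANA storage requirements guidance before use).

def hana_volume_sizes_gb(ram_gb):
    """Suggested /hana/data and /hana/log sizes in GB for a host with
    `ram_gb` of RAM:
      data ~= 1.2 x RAM
      log  =  0.5 x RAM for hosts up to 512 GB RAM, else 512 GB
    """
    data_gb = 1.2 * ram_gb
    log_gb = 0.5 * ram_gb if ram_gb <= 512 else 512
    return {"data_gb": data_gb, "log_gb": log_gb}

# Example: a HANA host with 2 TB of RAM
print(hana_volume_sizes_gb(2048))
```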

Step 5: SAP Storage Connector API Fibre Channel client

The Fibre Channel Storage Connector is a ready-to-use implementation of the SAP HANA Storage Connector API.

This API provides hooks for database startup and for failing-over nodes.

Storage Connector clients implement the functions defined in the Storage Connector API. The fcClient implementation is responsible for mounting the SAP HANA volumes. It also implements a proper fencing mechanism during a failover by means of SCSI-3 persistent reservations.

The configuration of the SAP storage connector API is contained within the global.ini file. The location of the file is specified during installation and managed by the cluster at the following location – /hana/shared/<SID>/global/hdb/custom/config/, where SID is the HANA system ID.

To find the WWIDs of the data and log volumes, look in the /dev/mapper directory. A sample global.ini file is provided in Appendix A.

For more information, see SAP Note 1900823 - SAP HANA Storage Connector API.
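Since each partition's data and log WWIDs from /dev/mapper end up as entries in global.ini, the mapping can be rendered mechanically. The sketch below produces a [storage] section in the format shown in Appendix A; the WWID strings are placeholders, not real device IDs.

```python
# Sketch: render the [storage] section of global.ini from a mapping of
# partition number -> (data WWID, log WWID), mirroring the Appendix A
# format. WWIDs below are placeholders.

def render_storage_section(partitions, ha_provider="hdb_ha.fcClient"):
    lines = [
        "[storage]",
        f"ha_provider = {ha_provider}",
        "partition_*_*__prType = 5",
    ]
    for num, (data_wwid, log_wwid) in sorted(partitions.items()):
        lines.append(f"partition_{num}_data__wwid = {data_wwid}")
        lines.append(f"partition_{num}_log__wwid = {log_wwid}")
    return "\n".join(lines)

print(render_storage_section({
    1: ("placeholder-data-wwid-1", "placeholder-log-wwid-1"),
    2: ("placeholder-data-wwid-2", "placeholder-log-wwid-2"),
}))
```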

Setting up a /hana/shared filesystem

The SAP HANA cluster requires a location shared between all HANA nodes. This is a filesystem that stores the cluster configuration and logs.

This shared location can reside on NFS shared storage that supports the SAP HANA requirements. The /hana/shared filesystem can be placed on an InfiniBox NFS share.

I/O testing with HCMT

The process of SAP HANA hardware certification includes I/O utilization testing to ensure that the performance of the HANA installation is not influenced by competing I/O from other workloads. Multiple HANA nodes connected to the storage array must fulfill the KPIs for each server, even when running in parallel.

The testing is done using the HCMT tool, which can be run with specific parameters. These parameters are set in the fsperf script, by hdbparm, or by using the –param option.

For HCMT details, see SAP Note 2493172 - SAP HANA Hardware and Cloud Measurement Tools (login required).


Conclusion

Taking advantage of the Infinidat InfiniBox enterprise-proven storage array with SAP HANA provides clients with a number of key benefits. Clients can reduce the amount of physical hardware required to run SAP HANA workloads, reducing CapEx and OpEx by taking advantage of existing storage management and best practices.

Clients can run their SAP HANA workloads on InfiniBox and have peace of mind that they will achieve and even exceed their competitive business objectives. InfiniBox delivers 99.99999% availability, with maximum performance and can scale to meet any of the SAP HANA needs.

This solution provides clients the following benefits:

  • Easy integration of SAP HANA into existing data center infrastructure.
  • Existing data center best practices for data management and protection
  • Highest reliability at 99.99999% uptime
  • Over 2M IOPS of performance
  • Over 8PB of usable storage in a single rack, before data reduction
  • Best overall storage TCO including power, cooling and floor space

Appendix A: global.ini example

[communication]
listeninterface = .global

[multidb]
mode = multidb
database_isolation = low
singletenant = yes

[internal_hostname_resolution]
 = internode-sap01
 = internode-sap02
 = internode-sap03

[persistence]
basepath_datavolumes = /hana/data/H01
basepath_logvolumes = /hana/log/H01
use_mountpoints = yes

[system_information]
usage = test

[storage]
ha_provider = hdb_ha.fcClient
partition_*_*__prType = 5
partition_*_data__mountOptions = -o relatime,inode64
partition_*_log__mountOptions = -o relatime,inode64
partition_1_data__wwid = 36742b0f00000047e0000000000005546
partition_1_log__wwid = 36742b0f00000047e000000000000554a
partition_2_data__wwid = 36742b0f00000047e0000000000002452
partition_2_log__wwid = 36742b0f00000047e0000000000002454

[fileio]
max_parallel_io_requests[DATA] = 128
max_parallel_io_requests[LOG] = 128
async_write_submit_active = on
async_read_submit = on

[trace]
ha_fcclient = info

Appendix B: /etc/multipath.conf example

defaults {
	force_sync no
	rr_min_io 1000
	features "0"
	prio "const"
	reassign_maps "no"
	rr_min_io_rq 1
	path_grouping_policy "failover"
	log_checker_err always
	path_selector "service-time 0"
	multipath_dir "/lib64/multipath"
	fast_io_fail_tmo 5
	bindings_file "/etc/multipath/bindings"
	alias_prefix "mpath"
	prio_args ""
	path_checker "directio"
	flush_on_last_del "no"
	polling_interval 5
	max_fds 8192
	detect_prio no
	failback "manual"
	retain_attached_hw_handler no
	rr_weight "uniform"
	verbosity 2
	wwids_file /etc/multipath/wwids
	user_friendly_names no
	max_polling_interval 20
	queue_without_daemon no
}

devices {
	device {
		vendor "NFINIDAT"
		product "InfiniBox.*"
		rr_min_io 1
		features "0"
		prio "alua"
		rr_min_io_rq 1
		path_grouping_policy "group_by_prio"
		dev_loss_tmo 30
		path_selector "round-robin 0"
		path_checker "tur"
		flush_on_last_del "yes"
		failback 30
		rr_weight "priorities"
		no_path_retry 0
	}
}
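A quick sanity check that a host's multipath configuration contains the Infinidat device stanza can be scripted. The snippet below is a string-level check against text in the multipath.conf format above, not a full multipath.conf parser, and the sample text is illustrative.

```python
import re

# Sketch: verify that a multipath.conf-style snippet contains a device
# stanza with the Infinidat vendor/product strings. This matches the
# innermost device { ... } blocks only; it is not a complete parser.

def has_infinidat_stanza(conf_text):
    device_blocks = re.findall(r"device\s*\{(.*?)\}", conf_text, re.DOTALL)
    return any('vendor "NFINIDAT"' in blk and 'product "InfiniBox.*"' in blk
               for blk in device_blocks)

sample = '''
devices {
    device {
        vendor "NFINIDAT"
        product "InfiniBox.*"
    }
}
'''
print(has_infinidat_stanza(sample))
```

In practice, Host PowerTools (`infinihost settings check --auto-fix`, shown in Step 2) performs this kind of validation for you.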

For more information

Infinidat offers experienced storage consultants with proven methodologies who can assist with implementing InfiniBox with your applications. For more information, see the Infinidat website or ask your local Infinidat sales representative.

© Copyright Infinidat 2022.

This document is current as of the date of publication and may be changed by Infinidat at any time. Not all offerings are available in every country in which Infinidat operates.

The data discussed herein is presented as derived under specific operating conditions. Actual results may vary. THE INFORMATION IN THIS DOCUMENT IS PROVIDED "AS IS" WITHOUT ANY WARRANTY, EXPRESSED OR IMPLIED, INCLUDING WITHOUT ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT.  Infinidat  products are warranted according to the terms and conditions of the agreements under which they are provided.

Infinidat, the Infinidat logo, InfiniBox, InfiniRAID, InfiniSnap, InfiniMetrics, and any other applicable product trademarks are registered trademarks or trademarks of Infinidat Ltd. in the United States and other countries. Other product and service names might be trademarks of Infinidat or other companies. A current list of Infinidat trademarks is available online at


Last edited: 2022-08-06 08:10:08 UTC