
Introduction

Reference Architecture for Installing and Configuring INFINIDAT InfiniBox Storage Arrays with SAP HANA Using NFS Protocol.

SAP HANA and InfiniBox

SAP HANA enables organizations to gain a competitive advantage by providing a platform to analyze large amounts of data in real time. This document provides end users with the best practices for implementing SAP HANA with the Infinidat InfiniBox™ storage array. By using tailored data center integration (TDI), InfiniBox can achieve the highest performance at the greatest scale and availability.

Revision

Date               Doc version  Content
July 30, 2019      2.0.1        • Dedicated guide for NFS deployments.
                                • Added support for InfiniBox 5.x.
                                • Added support for /hana/shared on InfiniBox 5.x or later.
                                • Comprehensive updates reflecting re-certification and Infinidat re-branding.
July 11, 2017      1.0.5        Added support for InfiniBox 3.x and 4.x.
October 13, 2016   1.0.4        Added information on InfiniBox F4xxx models.
March 16, 2016     1.0.3        Improved the description of SAP HANA and External Storage.
February 18, 2016  1.0.2        Added section.
January 10, 2016   1.0.1        • Added more information on InfiniBox-side operations.
                                • Fixed the doc version numbering scheme.
December 17, 2015  1.0          Initial release.


Executive summary

SAP HANA is an in-memory database platform that is designed to provide real-time data analytics and real-time data processing, side by side, to customers in order to help drive a competitive advantage. SAP HANA can be deployed on premises or in the cloud.
Customers who can process as much data as possible as quickly as possible while minimizing expenses will be the most competitive. The SAP HANA TDI (Tailored Datacenter Integration) model combines SAP software components that are optimized on certified hardware from SAP partners. The SAP HANA TDI model is a more open and flexible model for enterprise customers. SAP HANA servers must still meet the SAP HANA requirements and be certified to run HANA. However, the storage can be a shared component of the SAP HANA environment.
Shared storage allows customers greater flexibility and the ability to take advantage of existing storage capacity they may have in their enterprise arrays. In addition, it allows customers to integrate the SAP HANA solution into their existing data center operations including data protection, monitoring and data management. This helps to improve the time to value for a SAP HANA implementation as well as reduce risk and costs.
Storage arrays used in SAP HANA TDI deployments must be pre-certified by SAP to ensure they meet all SAP HANA performance and functional requirements. Infinidat tested SAP HANA configuration and performance on all InfiniBox F-series enterprise storage arrays.
Infinidat believes that the InfiniBox provides the following benefits over other storage arrays in the market to help SAP HANA customers achieve significant advantages:

  • Superior performance for processing data
  • Maximum scale to process as much data as possible
  • 99.99999% reliability
  • Low cost
  • Integration into existing data center infrastructure

Scope

This white paper describes how to deploy the Infinidat InfiniBox storage array with SAP HANA, reducing capital and operational costs, decreasing risk, and increasing data center flexibility.
All configuration recommendations in this document are based on SAP requirements for high availability and on the performance tests and results needed to meet the key performance indicators (KPIs) for SAP HANA TDI.
This white paper provides best practices for deploying the SAP HANA database on the InfiniBox storage array and includes the following information:

  • Introduction and overview of the solution technologies
  • Description of the configuration requirements for SAP HANA on InfiniBox
  • Method of access to InfiniBox from the SAP HANA nodes

SAP HANA and External Storage

SAP HANA is an in-memory database. The data that is being processed is kept in the RAM of one or multiple SAP HANA worker hosts. This is very different from traditional databases, where only segments of the data are cached in RAM and the remaining data resides on disk. All SAP HANA activities such as reads, inserts, updates, or deletes are performed in the main memory of the host and not on a storage device.
Scalability for SAP HANA TDI is defined by the number of production HANA worker hosts that can be connected to enterprise storage arrays and still meet the key SAP performance metrics for enterprise storage. Because enterprise storage arrays can provide more capacity than required for HANA, scalability depends on a number of factors including:

  • Array cache size
  • Array performance
  • Array bandwidth, throughput, and latency
  • HANA host connectivity to the array
  • Storage configuration for the HANA persistence

SAP HANA uses external disk storage to maintain a copy of the data that is in memory to prevent data loss due to a power failure as well as to enable hosts to failover and have the standby SAP HANA host take over processing.
The connectivity to the external storage can be either FC-based or NFS-based. InfiniBox supports both block and NFS access for /hana/data, /hana/log, and /hana/shared.
InfiniBox version 5.0.x and above supports NLM (Network Lock Manager), so /hana/shared can be placed on an InfiniBox filesystem.
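As an optional sanity check from a HANA host, the standard NFSv3 sideband services can be queried to confirm that the array's NAS service exposes NFSv3 and the NLM lock service. The IP address below is one of the NAS service addresses used in the examples later in this document; substitute one of your own:

showmount -e 172.20.43.93                          # list the exports published by the NAS service
rpcinfo -p 172.20.43.93 | grep -E 'nfs|nlockmgr'   # confirm NFSv3 and the NLM (nlockmgr) lock service are registered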
In this guide, we used SUSE Linux Enterprise Server (SLES) as the operating system running the SAP HANA database.

The Infinidat InfiniBox Storage Array

Infinidat believes that companies that acquire, store, and analyze the largest possible amounts of data gain the greatest competitive advantage. Infinidat's patented storage architecture leverages industry-standard hardware to deliver InfiniBox, a storage array that yields 2M IOPS, 99.99999% reliability, and over 8PB of capacity in a single rack. Automated provisioning, management, and application integration provide a system that is incredibly efficient to manage and simple to deploy. Infinidat is changing the paradigm of enterprise storage while reducing capital requirements, operational overhead, and complexity.
The uniqueness of the Infinidat solution is a storage architecture that includes over 100 patented innovations.  The architecture provides a software driven set of enterprise storage capabilities residing on industry standard, commodity hardware. As new hardware and storage technologies become available, Infinidat can take advantage of them.  Shipping the software with a highly integrated and tested hardware reference platform, Infinidat is able to deliver a high performing, highly resilient, scalable software defined storage solution.  
Infinidat's level of integration and testing minimizes the time and risk of developing a solution like this in-house, and can deliver it at a much lower cost. In addition, all of the storage software for automated provisioning, management, and application integration enables fewer administrators to manage more storage, keeping OpEx low.
Today, InfiniBox offers its unified storage arrays in several models, ranging from 150TB to 4.149PB of usable capacity. The models differ in cache size, the number of SSDs, and the number and size of the HDDs in the system.
From an SAP HANA configuration perspective, the number of HANA nodes per system and maximum node configurations are described in the table below:

 

                                      F6xxx          F4xxx          F2xxx
Net capacity (before data reduction)  Up to 4.149PB  Up to 2.050PB  Up to 499TB
DRAM                                  Up to 3TB      Up to 2.3TB    Up to 1.1TB
Flash Cache                           Up to 207TB    Up to 207TB    Up to 103TB
IOPS                                  2.0M           1.4M           980K
Throughput                            25.2 GB/sec    20.2 GB/sec    14 GB/sec
FC Ports                              24x16Gb        24x16Gb        24x16Gb
Ethernet Ports                        12x10Gb        12x10Gb        12x10Gb
Recommended HANA Nodes                Up to 92       Up to 74       Up to 24


SAP HANA I/O workloads require specific consideration for the configuration of the data and log volumes on the InfiniBox storage arrays. InfiniBox delivers the high performance needed for the persistent storage of an SAP HANA database as well as the log volumes.
The SAP HANA storage certification of the InfiniBox array applies to both block-attached and file-attached HANA workloads. This white paper discusses how to use InfiniBox in block and file environments. One of the key value propositions of the InfiniBox system is that there are no complex configuration schemas to follow when it comes to system disk configuration.
All software, including the preconfigured RAID and tiering software (which places the most active data on the fastest-performing drives), is included and preconfigured. The system is designed to deliver the highest IOPS at all times.
For further implementation details, refer to the following sections of this document.

Setting up NFS for the HANA datastore

InfiniBox supports NFS v3 as a standard protocol. This enables direct use of InfiniBox file storage services for the data and log partitions as well as for /hana/shared (starting with InfiniBox 5.0.x, NLM is supported).

NFS filesystems are mounted via the OS mount mechanism: in the event of a node failover, the standby host has immediate access to the same shared storage where the data and log volumes are located. This is called a shared-everything architecture, because all data and log partitions are mounted on all nodes.

Connectivity example

Each HANA node has redundant connectivity to the InfiniBox, preferably over 10GbE interfaces. The figure below shows the separate network segments: the “Private Network” for inter-node communication, the “Public Network” for client connectivity, and the “Storage Network”, which connects the hosts to storage via the NFS protocol and needs to be redundant with high throughput, according to SAP recommendations.

HANA node access to NFS storage

To enhance scalability and load balancing between the nodes, there are two options for connecting the HANA nodes to the storage nodes. One option is to use a round-robin DNS configuration; the second option is to mount directly against an IP address of an InfiniBox node port, as described below. When an InfiniBox node is deactivated, its IP addresses are transferred to another node in a manner that is seamless to the client. The system uses LACP trunking to load-balance IP throughput across all Ethernet ports.
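As an illustration of the two options, the sketch below uses the NFS mount options recommended later in this document; the DNS name ibox-nas.example.com is a placeholder for a round-robin record that resolves to several NAS service IP addresses, and the direct IP address is taken from the example in Step 3:

# Option 1: mount via a round-robin DNS name (placeholder name)
mount -t nfs -o rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,actimeo=0,nolock \
    ibox-nas.example.com:/H04_data_mnt00001 /hana/data/H04/mnt00001

# Option 2: mount directly against a specific InfiniBox node IP
mount -t nfs -o rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,actimeo=0,nolock \
    172.20.43.93:/H04_data_mnt00001 /hana/data/H04/mnt00001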

Step 1: Creating a NAS service

On the InfiniBox, create the following entities:

• Interface

• Network Space

CREATING AN INTERFACE

The interface is created from InfiniBox Ethernet ports:

config.ethernet.interface.create name=pg1_data1 type=PORT_GROUP ports=ETH1,ETH2,ETH3,ETH4 repeat_on_all_nodes=yes

CREATING A NETWORK SPACE AND THE NAS SERVICE

The NAS service uses a network space, an InfiniBox configuration entity that ensures failover in case of a port failure.

To create the network space and the NAS service, use the config.net_space.create command:

config.net_space.create name=NAS interface=pg1_data1,pg2_data1,pg3_data1 default_gateway=172.20.63.254 service=NAS network=172.20.32.0/19

ASSIGNING IP ADDRESSES TO A NETWORK SPACE

Assign each of the NAS service IP addresses to the network space, one address per command:

config.net_space.ip.create net_space=NAS ip_address=172.20.43.93

Repeat for the remaining addresses: 172.20.43.94, 172.20.43.96, 172.20.43.98, 172.20.43.99, and 172.20.43.106.
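To verify the assignment, query the network space; the same command is used again in Step 3 to list the node IP addresses:

config.net_space.ip.query net_space=NAS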


Step 2: Creating the Filesystem and Export on the InfiniBox array

To provision the filesystems, run the following commands from the InfiniShell CLI. You can also put all the commands in a file and run 'infinishell -f /file/location'.

This example is for SID=H04

Create a Pool:

pool.create name=nas-sap physical_capacity=3t

When using NFS, the data and log volume sizes depend on the internal memory size (RAM) installed on each HANA node and on the number of nodes in the cluster.
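As a rough illustration only, based on commonly cited SAP sizing rules of thumb (always confirm against the current SAP HANA Storage Requirements document and your SAP sizing report), the per-node sizes could be estimated as follows:

# Illustrative sizing sketch, not an SAP or Infinidat requirement.
RAM_GB=512                        # RAM of one HANA worker node (example value)
DATA_GB=$(( RAM_GB * 12 / 10 ))   # /hana/data is often sized at about 1.2 x RAM per node
if [ "$RAM_GB" -le 512 ]; then
    LOG_GB=$(( RAM_GB / 2 ))      # /hana/log: about 0.5 x RAM for nodes with up to 512 GB RAM
else
    LOG_GB=512                    # /hana/log: commonly capped at 512 GB for larger nodes
fi
echo "per-node sizing: data=${DATA_GB}G log=${LOG_GB}G"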

Create a filesystem and an export for each mount point that represents a filesystem partition. In this example, SID = H04.

fs.create name=h04_data_mnt00001 size=512G pool=nas-sap
fs.export.create export_path=/H04_data_mnt00001 fs=h04_data_mnt00001
fs.create name=h04_log_mnt00001 size=256G pool=nas-sap
fs.export.create export_path=/H04_log_mnt00001 fs=h04_log_mnt00001
fs.create name=h04_data_mnt00002 size=512G pool=nas-sap
fs.export.create export_path=/H04_data_mnt00002 fs=h04_data_mnt00002
fs.create name=h04_log_mnt00002 size=256G pool=nas-sap
fs.export.create export_path=/H04_log_mnt00002 fs=h04_log_mnt00002

Create a filesystem and an export for /hana/shared.

fs.create name=hana-shared size=512G pool=nas-sap
fs.export.create export_path=/hana-shared fs=hana-shared

Check the permissions on the export to make sure they are configured with 'no root squash', as this is an SAP requirement for /hana/shared. Use the following command:

fs.export.permission.query fs=hana-shared
EXPORT PATH                                                        CLIENT                                                             ACCESS TYPE  NO ROOT SQUASH
/hana-shared                                                       *                                                                  RW           yes

If this is not the case, change the permissions using the fs.export.permission.modify command. You can restrict access to specific hosts by specifying a full wildcard (*), an IP address, or an IP range (e.g. 10.0.0.1-10.0.0.10) for the client parameter:

fs.export.permission.modify export_path=/hana-shared client=<wildcard, IP address, or IP range>

Step 3: Mount the filesystems on the hosts

The InfiniBox consists of three nodes, which are the targets of the mount points. Each node can be configured with up to four Ethernet interfaces (12 ports in total).

In this configuration, each SAP HANA host has two 10GbE interfaces, connected to a redundant switching infrastructure.

To spread the load, we mount the partitions from different nodes and interfaces. To get the IP addresses of each node, run the following command:

config.net_space.ip.query net_space=NAS
NETWORK SPACE  IP ADDRESS     ENABLED  NODE  NETWORK INTERFACE  TYPE
NAS            172.20.43.106  yes      1     pg1_data1          NAS
NAS            172.20.43.93   yes      2     pg2_data1          NAS
NAS            172.20.43.94   yes      2     pg2_data1          NAS
NAS            172.20.43.96   yes      1     pg1_data1          NAS
NAS            172.20.43.98   yes      2     pg2_data1          NAS
NAS            172.20.43.99   yes      1     pg1_data1          NAS

Next, create the relevant directories on the hosts and mount the filesystems through /etc/fstab.

On each host, run these commands to create the directories and change the permissions before mounting:

mkdir -p /hana/data/H04/mnt00001
mkdir -p /hana/log/H04/mnt00001
mkdir -p /hana/data/H04/mnt00002
mkdir -p /hana/log/H04/mnt00002
mkdir -p /hana/shared

chmod -R 777 /hana/log/H04
chmod -R 777 /hana/data/H04
chmod 777 /hana/shared

The table below lists the mounts and the options that should exist on all of the nodes. After mounting, the following partitions should be visible on each HANA node.

File System name    Export Path on the InfiniBox   Mount options (same on all hosts)
h04_data_mnt00001   /H04_data_mnt00001             rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,actimeo=0,nolock
h04_data_mnt00002   /H04_data_mnt00002             rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,actimeo=0,nolock
h04_log_mnt00001    /H04_log_mnt00001              rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,actimeo=0,nolock
h04_log_mnt00002    /H04_log_mnt00002              rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,actimeo=0,nolock
hana-shared         /hana-shared                   rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,actimeo=0,nolock

Example of /etc/fstab on one HANA node:

interop014:~ # cat /etc/fstab
/dev/system/swap swap swap defaults 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a / btrfs defaults 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /boot/grub2/i386-pc btrfs subvol=@/boot/grub2/i386-pc 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /boot/grub2/x86_64-efi btrfs subvol=@/boot/grub2/x86_64-efi 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /home btrfs subvol=@/home 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /opt btrfs subvol=@/opt 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /srv btrfs subvol=@/srv 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /tmp btrfs subvol=@/tmp 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /usr/local btrfs subvol=@/usr/local 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/cache btrfs subvol=@/var/cache 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/crash btrfs subvol=@/var/crash 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/lib/libvirt/images btrfs subvol=@/var/lib/libvirt/images 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/lib/machines btrfs subvol=@/var/lib/machines 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/lib/mailman btrfs subvol=@/var/lib/mailman 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/lib/mariadb btrfs subvol=@/var/lib/mariadb 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/lib/mysql btrfs subvol=@/var/lib/mysql 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/lib/named btrfs subvol=@/var/lib/named 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/lib/pgsql btrfs subvol=@/var/lib/pgsql 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/log btrfs subvol=@/var/log 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/opt btrfs subvol=@/var/opt 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/spool btrfs subvol=@/var/spool 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /var/tmp btrfs subvol=@/var/tmp 0 0
UUID=53807cd0-25e3-42af-aa2a-57589a3c0a4a /.snapshots btrfs subvol=@/.snapshots 0 0
172.20.43.93:/H04_data_mnt00001 /hana/data/H04/mnt00001     nfs rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,actimeo=0,nolock  0 0
172.20.43.94:/H04_data_mnt00002 /hana/data/H04/mnt00002     nfs rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,actimeo=0,nolock  0 0
172.20.43.96:/H04_log_mnt00001 /hana/log/H04/mnt00001     nfs rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,actimeo=0,nolock  0 0
172.20.43.98:/H04_log_mnt00002 /hana/log/H04/mnt00002     nfs rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,actimeo=0,nolock  0 0
172.20.43.99:/hana-shared    /hana/shared nfs rw,vers=3,hard,timeo=600,rsize=262144,wsize=262144,actimeo=0,nolock    0 0
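After updating /etc/fstab, the mounts can be activated and verified with standard Linux commands, for example:

mount -a                      # mount everything listed in /etc/fstab
df -h /hana/data/H04/mnt00001 /hana/log/H04/mnt00001 /hana/shared   # confirm sources and sizes
nfsstat -m                    # show the NFS mount options actually in effect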

HA/DR Provider - Implementing STONITH

The SAP HANA Storage Connector API, together with a specific storage connector script, allows the use of different types of storage and network architectures while ensuring proper I/O fencing.

NFSv3 does not support a locking mechanism comparable to SCSI-3 persistent reservations. A specific storage connector, provided by the certified storage vendor, implements a STONITH (shoot the other node in the head) call that reboots a failed host; this isolates the failed node and protects the shared resources.

This implementation of STONITH is based on the IPMI tool and is performed after the HANA system has already been installed.

Procedure for creating an HA/DR Provider

1. Create a new directory in the shared location where the HA/DR provider will be placed, for example:

/hana/shared/inficonnector/

2. Change the access permissions on the directory so that the <SID>adm user can access it, for example as shown below.
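A minimal sketch covering steps 1 and 2, assuming SID H04 (OS user h04adm, group sapsys; adjust for your SID):

mkdir -p /hana/shared/inficonnector              # step 1: create the connector directory on the shared filesystem
chown h04adm:sapsys /hana/shared/inficonnector   # step 2: make it accessible to the <SID>adm user
chmod 755 /hana/shared/inficonnector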

3. Switch to the <SID>adm user and use the demo script located at:

exe/python_support/hdb_ha_dr/HADRDummy.py

Copy the script to the directory you created in step 1 and give it a meaningful name that represents the Python class. In our example we used the file InfiSTONITH.py, which is shown in Appendix B.

4. Edit the global.ini file located at /hana/shared/<SID>/global/hdb/custom/config/

Add the following section that defines the DR provider:

[ha_dr_provider_InfiSTONITH]
provider = InfiSTONITH
path = /hana/shared/inficonnector
execution_order = 1

5. Test that the <SID>adm user can run ipmitool. Switch to the user and run the following command:

ipmitool -H <hostname-ipmi> -U admin -P <password> -I lanplus power status
h04adm@interop014:/usr/sap/H04/HDB04> ipmitool -H m-interop015 -U root -P UNDISCLOSED -I lanplus power status
Chassis Power is on

6. Verify the configuration:

  • Check that all IPMI hostnames are resolvable, since ipmitool is invoked with the hostname. You can add them to /etc/hosts.
  • When you start the HANA system, you should see that the STONITH provider has also started. Look in the nameserver trace files, for example:
[3685]{-1}[-1/-1] 2019-07-14 17:44:50.679028 i ha_dr_provider   HADRProviderManager.cpp(00073) : loading HA/DR Provider 'InfiSTONITH' from /hana/shared/inficonnector
[3710]{-1}[-1/-1] 2019-07-14 17:44:50.932652 d ha_dr_InfiSTONIT client.py(00119) : tracer 'ha_dr_InfiSTONITH' initialized
  • Perform a failover test. When you shut down a worker node, its services should be migrated to a standby host.
    Check that the trace logs show the STONITH behavior, for example:

    [3946]{-1}[-1/-1] 2019-07-14 15:44:14.533533 e NameServer       TREXNameServer.cpp(09870) : nameserver@interop016:30401 not responding. retry in 5 sec
    [3942]{-1}[-1/-1] 2019-07-14 15:44:15.093681 i assign           MasterFileChecker.cpp(00124) : master lock file check OK
    [3942]{-1}[-1/-1] 2019-07-14 15:44:15.093705 i failover         DistributedWatchDog.cpp(00219) : Checking master lock succeeded: master is inactive
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.093804 w failover         DistributedWatchDog.cpp(00139) : master nameserver 'interop016:30401' is inactive -> electing new master
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.093862 i failover         DistributedWatchDog.cpp(00147) : daemon process not running on host interop016 -> start masterize
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.095526 i failover         TREXNameServer.cpp(02475) : master failover from interop016 to interop015 started (check masterlock: no)
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.095543 i assign           TREXNameServer.cpp(02570) : assign to volume 1 started
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.095987 i Backup           Backup_Recover.cpp(00243) : :::: RECOVERY looking for request ::::
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.096437 i Backup           BackupTracerImpl.cpp(00219) : Initializing backup tracer... housekeeping disabled
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.096744 i Backup           BackupTracerImpl.cpp(00219) : Initializing backup tracer... housekeeping disabled
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.097538 i Backup           BackupMgr_Manager.cpp(04995) : Entering isDataRecoveryPending requestedVolume: 1
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.097554 i Backup           BackupMgr_Manager.cpp(05020) : wait done m_RecoverRequestBarrier ( 0 )
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.097564 i Backup           BackupMgr_Manager.cpp(04995) : Entering isDataRecoveryPending requestedVolume: 1
    [3950]{-1}[-1/-1] 2019-07-14 15:44:15.097570 i Backup           BackupMgr_Manager.cpp(05020) : wait done m_RecoverRequestBarrier ( 0 )
    [5473]{-1}[-1/-1] 2019-07-14 15:44:15.100210 i failover         DistributedWatchDog.cpp(00351) : detected activate standby nameserver@interop014:30401 with obsolete topology
    [5473]{-1}[-1/-1] 2019-07-14 15:44:15.100220 e NameServer       TREXNameServer.cpp(03748) : setActive from DistributedWatchdog on non master nameserver
    [5475]{-1}[-1/-1] 2019-07-14 15:44:15.101545 i ha_dr_provider   PythonProxyImpl.cpp(00953) : calling HA/DR provider InfiSTONITH.stonith(failing_host=interop016)
    [5475]{-1}[-1/-1] 2019-07-14 15:44:15.102185 d ha_dr_InfiSTONIT InfiSTONITH.py(00056) : enter stonith hook; {'failingHost': 'interop016', 'self': <InfiSTONITH.InfiSTONITH object at 0x7f288f327b50>, 'kwargs': {}}
    [5475]{-1}[-1/-1] 2019-07-14 15:44:15.102257 d ha_dr_InfiSTONIT InfiSTONITH.py(00057) : {'execution_order': '1', 'path': '/hana/shared/inficonnector', 'provider': 'InfiSTONITH'}
    [5475]{-1}[-1/-1] 2019-07-14 15:44:15.102308 i ha_dr_InfiSTONIT InfiSTONITH.py(00061) : stonith - power cycling host interop016
    [5475]{-1}[-1/-1] 2019-07-14 15:44:15.102358 i ha_dr_InfiSTONIT InfiSTONITH.py(00068) : ipmitool -H m-interop016 -U root -P xxxxxx -I lanplus power  off

Setting up a /hana/shared device

The SAP HANA cluster requires a location shared between all HANA nodes. This is a filesystem that stores the cluster configuration and logs.

This shared location can reside on any NFS service that supports a locking mechanism, or on an OCFS2 clustered filesystem on a block device. The creation and configuration of an OCFS2 device is out of scope for this document.

To create the filesystem for /hana/shared, see the previous section, Creating the Filesystem and Export on the InfiniBox array.

The size of the shared filesystem

• The size of the /hana/shared filesystem must be greater than or equal to the combined main memory of all SAP HANA nodes.

• The filesystem must be able to grow whenever a new node is added to the HANA cluster.

• To shrink a filesystem on a block device, or to shrink the block device itself, in most cases you need to delete and re-create the filesystem or block device.

SETTING UP A CLUSTER FILE SYSTEM ON RHEL

Red Hat does not provide support for OCFS2. An alternative is to set up GFS2 in a cluster environment.

For a guide to configuring GFS2 filesystems on RHEL, see: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_gfs2_file_systems/index

I/O Testing with HWCCT

The process of SAP HANA Hardware Certification includes running I/O utilization testing to ensure that the performance of the HANA installation is not influenced by competing input or output operations (I/O) of other workloads.

Multiple HANA nodes connected to the storage array have to fulfill the KPIs for each server, even when running in parallel.

The testing is done using the HWCCT tool, which can be run with specific parameters. These parameters are set in the fsperf script, by hdbparam, or by using the -param option. For HWCCT details, see SAP Note 1943937 (login required).

These settings apply to both NAS and SAN connectivity, on HANA version 1.0 only.

The recommended settings for InfiniBox are specified below:

async_write_submit_active=on
async_read_submit=on
max_parallel_io_requests=128

Starting with SAP HANA 2.0, hdbparam is deprecated and the parameters have been moved to global.ini. The parameters can also be set using SQL commands or SAP HANA Studio.
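For example, the parameters can be set at the SYSTEM layer with SQL similar to the following sketch (verify the exact parameter keys against SAP Note 2399079 and your HANA revision):

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
    SET ('fileio', 'async_write_submit_active') = 'on',
        ('fileio', 'async_read_submit') = 'on',
        ('fileio', 'max_parallel_io_requests[DATA]') = '128',
        ('fileio', 'max_parallel_io_requests[LOG]') = '128'
    WITH RECONFIGURE;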

See Appendix A for the parameters in the [fileio] section of global.ini.

See SAP Note 2399079 (Elimination of hdbparam in HANA 2) for more details.

Further Tips and Recommendations

See SAP Note 2382421 for optimizing the network configuration when using a large number of HANA nodes.

See SAP Note 2205917 for recommended OS settings for SLES 12.

Summary

Taking advantage of the Infinidat InfiniBox enterprise-proven storage array with SAP HANA provides clients with a number of key benefits. Clients can reduce the amount of physical hardware required to run SAP HANA workloads, reducing CapEx and OpEx by taking advantage of existing storage management and best practices.

Clients can run their SAP HANA workloads on InfiniBox and have peace of mind that they will achieve and even exceed their competitive business objectives. InfiniBox delivers 99.99999% availability with maximum performance, and can scale to meet any SAP HANA need.

This solution provides clients the following benefits:

• Easy integration of SAP HANA into existing data center infrastructure.

• Existing data center best practices for data management and protection

• Highest reliability at 99.99999% uptime

• Over 2M IOPS of performance

• Over 8PB of usable storage in a single rack, before data reduction

• Best overall storage TCO including power, cooling and floor space

Appendix A - global.ini example (for NFS protocol)


[communication]
internal_network = 1.1.1.0/24
listeninterface = .internal

[internal_hostname_resolution]
1.1.1.16 = interop016
1.1.1.15 = interop015
1.1.1.14 = interop014

[multidb]
mode = multidb
database_isolation = low
singletenant = yes

[persistence]
basepath_datavolumes = /hana/data/H04
basepath_logvolumes = /hana/log/H04
basepath_shared = yes

[system_information]
usage = test

[ha_dr_provider_InfiSTONITH]
provider = InfiSTONITH
path = /hana/shared/inficonnector
execution_order = 1

[fileio]
max_parallel_io_requests[DATA] = 128
async_write_submit_active= on
async_read_submit= on
max_parallel_io_requests[LOG] = 128

[trace]
ha_dr_InfiSTONITH = debug


Appendix B - STONITH method example

h04adm@interop014:/usr/sap/H04/HDB04> cat /hana/shared/inficonnector/InfiSTONITH.py
"""
Sample for a HA/DR hook provider.

When using your own code in here, please copy this file to location on /hana/shared outside the HANA installation.
This file will be overwritten with each hdbupd call! To configure your own changed version of this file, please add
to your global.ini lines similar to this:

    [ha_dr_provider_<className>]
    provider = <className>
    path = /hana/shared/haHook
    execution_order = 1


For all hooks, 0 must be returned in case of success.
"""

from hdb_ha_dr.client import HADRBase, Helper
import os, time


class InfiSTONITH(HADRBase):

    def __init__(self, *args, **kwargs):
        # delegate construction to base class
        super(InfiSTONITH, self).__init__(*args, **kwargs)

    def about(self):
        return {"provider_company" :        "INFINIDAT",
                "provider_name" :          "InfiSTONITH", # provider name = class name
                "provider_description" :    "HANA SPS04 IPMI Stonith",
                "provider_version" :        "2.0"}


    def startup(self, hostname, storage_partition, system_replication_mode, **kwargs):
        self.tracer.debug("enter startup hook; %s" % locals())
        self.tracer.debug(self.config.toString())

        self.tracer.info("leave startup hook")
        return 0

    def shutdown(self, hostname, storage_partition, system_replication_mode, **kwargs):
        self.tracer.debug("enter shutdown hook; %s" % locals())
        self.tracer.debug(self.config.toString())

        self.tracer.info("leave shutdown hook")
        return 0

    def failover(self, hostname, storage_partition, system_replication_mode, **kwargs):
        self.tracer.debug("enter failover hook; %s" % locals())
        self.tracer.debug(self.config.toString())

        self.tracer.info("leave failover hook")
        return 0

    def stonith(self, failingHost, **kwargs):
        self.tracer.debug("enter stonith hook; %s" % locals())
        self.tracer.debug(self.config.toString())

        self.tracer.info("stonith - power cycling host %s" % failingHost)
        # The IPMI/BMC hostname of a HANA host is its hostname prefixed with "m-"
        ipmi_host = "m-%s" % failingHost
        ipmi_call = "ipmitool -H %s -U root -P undisclosed -I lanplus power " % ipmi_host
        ipmi_call_off = "%s off" % ipmi_call
        ipmi_call_status = "%s status" % ipmi_call
        ipmi_call_on = "%s on" % ipmi_call

        self.tracer.info("%s" % ipmi_call_off)
        print("%s" % ipmi_call_off)
        retries = 5
        retry_nr = 1
        ret = 1  # assume failure until the power-off is confirmed

        while True:
            print("Trying to call ipmitool: %d" % retry_nr)
            # If we fail to call ipmitool we need to stop here and leave the system unmounted!
            (code, output) = Helper._runOsCommand(ipmi_call_off)
            if code != 0:
                ret = 1  # means failure
                print(output)
                self.tracer.error(output)
            time.sleep(3)
            (code, output) = Helper._runOsCommand(ipmi_call_status)
            if 'is off' in output:
                msg = "successful power off %s" % failingHost
                self.tracer.info(msg)
                print(msg)
                ret = 0
                break
            if retry_nr >= retries:
                msg = "giving up powering off %s - NEED HELP" % failingHost
                self.tracer.error(msg)
                print(msg)
                break
            retry_nr += 1

        self.tracer.info("leave stonith hook")
        return 0

    def preTakeover(self, isForce, **kwargs):
        """Pre takeover hook."""
        self.tracer.info("%s.preTakeover method called with isForce=%s" % (self.__class__.__name__, isForce))

        if not isForce:
            # run pre takeover code
            # run pre-check, return != 0 in case of error => will abort takeover
            return 0
        else:
            # possible force-takeover only code
            # usually nothing to do here
            return 0


    def postTakeover(self, rc, **kwargs):
        """Post takeover hook."""
        self.tracer.info("%s.postTakeover method called with rc=%s" % (self.__class__.__name__, rc))

        if rc == 0:
            # normal takeover succeeded
            return 0
        elif rc == 1:
            # waiting for force takeover
            return 0
        elif rc == 2:
            # error, something went wrong
            return 0

    def srConnectionChanged(self, parameters, **kwargs):
        self.tracer.debug("enter srConnectionChanged hook; %s" % locals())

        # Access to parameters dictionary
        hostname = parameters['hostname']
        port = parameters['port']
        volume = parameters['volume']
        serviceName = parameters['service_name']
        database = parameters['database']
        status = parameters['status']
        databaseStatus = parameters['database_status']
        systemStatus = parameters['system_status']
        timestamp = parameters['timestamp']
        isInSync = parameters['is_in_sync']
        reason = parameters['reason']
        siteName = parameters['siteName']

        self.tracer.info("leave srConnectionChanged hook")
        return 0

    def srReadAccessInitialized(self, parameters, **kwargs):
        self.tracer.debug("enter srReadAccessInitialized hook; %s" % locals())

        # Access to parameters dictionary
        database = parameters['last_initialized_database']
        databasesNoReadAccess = parameters['databases_without_read_access_initialized']
        databasesReadAccess = parameters['databases_with_read_access_initialized']
        timestamp = parameters['timestamp']
        allDatabasesInitialized = parameters['all_databases_initialized']

        self.tracer.info("leave srReadAccessInitialized hook")
        return 0

    def srServiceStateChanged(self, parameters, **kwargs):
        self.tracer.debug("enter srServiceStateChanged hook; %s" % locals())

        # Access to parameters dictionary
        hostname = parameters['hostname']
        service = parameters['service_name']
        port = parameters['service_port']
        status = parameters['service_status']
        previousStatus = parameters['service_previous_status']
        timestamp = parameters['timestamp']
        daemonStatus = parameters['daemon_status']
        databaseId = parameters['database_id']
        databaseName = parameters['database_name']
        databaseStatus = parameters['database_status']

        self.tracer.info("leave srServiceStateChanged hook")
        return 0


For more information

Infinidat offers experienced storage consultants with proven methodologies who are able to assist with implementing InfiniBox with your applications. For more information, see the Infinidat website (https://infinidat.com) or ask your local Infinidat sales representative.


Legal Notice

© Copyright Infinidat 2019.

This document is current as of the date of publication and may be changed by Infinidat at any time. Not all offerings are available in every country in which Infinidat operates.

The data discussed herein is presented as derived under specific operating conditions. Actual results may vary. THE INFORMATION IN THIS DOCUMENT IS PROVIDED “AS IS” WITHOUT ANY WARRANTY, EXPRESSED OR IMPLIED, INCLUDING WITHOUT ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT.

Infinidat products are warranted according to the terms and conditions of the agreements under which they are provided. 

Infinidat, the Infinidat logo, InfiniBox, InfiniRAID, InfiniSnap, InfiniMetrics, and any other applicable product trademarks are registered trademarks or trademarks of Infinidat LTD in the United States and other countries. Other product and service names might be trademarks of Infinidat or other companies. A current list of Infinidat trademarks is available online at https://www.infinidat.com/sites/default/files/resourcepdfs/INFINIDAT-Trademarks.pdf.


