
Introduction

Reference Architecture for Installing and Configuring INFINIDAT InfiniBox Storage Arrays with SAP HANA Using FC Protocol.

SAP HANA and InfiniBox

SAP HANA enables organizations to gain a competitive advantage by providing a platform to analyze large amounts of data in real time. This document provides end users with best practices for implementing SAP HANA with the Infinidat InfiniBox™ storage array. Using SAP Tailored Data Center Integration (TDI), InfiniBox can achieve the highest performance at the greatest scale and availability.

Revision

Date            Doc version   Content
Sept. 9, 2019   2.0.2         Added support for /hana/shared on InfiniBox 4.0.x
June 14, 2019   2.0.1         Dedicated guide for FC deployments; added support for InfiniBox 5.x; added support for /hana/shared on InfiniBox 5.x or later; comprehensive updates reflecting re-certification and Infinidat re-branding
Jul. 11, 2017   1.0.5         Added support for InfiniBox 3.x and 4.x
Oct. 13, 2016   1.0.4         Added information on InfiniBox F4xxx models
Mar. 16, 2016   1.0.3         Improved description of SAP HANA and External Storage
Feb. 18, 2016   1.0.2         Added section: Setting up a /hana/shared device
Jan. 10, 2016   1.0.1         Added more information on InfiniBox-side operations; fixed the doc version numbering scheme
Dec. 17, 2015   1.0           Initial release

Executive summary

SAP HANA is an in-memory database platform that is designed to provide real-time data analytics and real-time data processing, side by side, to customers in order to help drive a competitive advantage. SAP HANA can be deployed on premises or in the cloud.

Customers who can process as much data as possible as quickly as possible while minimizing expenses will be the most competitive. The SAP HANA TDI (Tailored Datacenter Integration) model combines SAP software components that are optimized on certified hardware from SAP partners. The SAP HANA TDI model is a more open and flexible model for enterprise customers. SAP HANA servers must still meet the SAP HANA requirements and be certified to run HANA. However, the storage can be a shared component of the SAP HANA environment.

Shared storage allows customers greater flexibility and the ability to take advantage of existing storage capacity they may have in their enterprise arrays. In addition, it allows customers to integrate the SAP HANA solution into their existing data center operations including data protection, monitoring and data management. This helps to improve the time to value for a SAP HANA implementation as well as reduce risk and costs.

Storage arrays used in SAP HANA TDI deployments must be pre-certified by SAP to ensure they meet all SAP HANA performance and functional requirements. Infinidat has tested SAP HANA configuration and performance on all enterprise-proven InfiniBox F-series storage arrays.

Infinidat believes that the InfiniBox provides the following benefits over other storage arrays in the market to help SAP HANA customers achieve significant advantages:

  • Superior performance for processing data
  • Maximum scale to process as much data as possible
  • 99.99999% reliability
  • Low cost
  • Integration into existing data center infrastructure

Scope

This white paper describes how to deploy the Infinidat InfiniBox storage array with SAP HANA, reducing capital and operational costs, decreasing risk, and increasing data center flexibility.

All configuration recommendations in this document are based on SAP requirements for high availability and the performance tests and results that are needed to meet the key performance indicators (KPIs) for SAP HANA TDI.

This whitepaper provides best practices for deploying the SAP HANA database on the InfiniBox storage array and provides the following information:

  • Introduction and overview of the solution technologies
  • Description of the configuration requirements for SAP HANA on InfiniBox
  • Method of access to InfiniBox from the SAP HANA nodes

SAP HANA and External Storage

SAP HANA is an in-memory database: the data being processed is kept in the RAM of one or more SAP HANA worker hosts. This is very different from traditional databases, where only segments of the data are cached in RAM while the remaining data resides on disk. All SAP HANA activities such as reads, inserts, updates, and deletes are performed in the main memory of the host, not on a storage device.

Scalability for SAP HANA TDI is defined by the number of production HANA worker hosts that can be connected to enterprise storage arrays and still meet the key SAP performance metrics for enterprise storage. Because enterprise storage arrays can provide more capacity than required for HANA, scalability depends on a number of factors including:

  • Array cache size
  • Array performance
  • Array bandwidth, throughput, and latency
  • HANA host connectivity to the array
  • Storage configuration for the HANA persistence

SAP HANA uses external disk storage to maintain a copy of the data that is in memory to prevent data loss due to a power failure as well as to enable hosts to failover and have the standby SAP HANA host take over processing.

The connectivity to the external storage can be either FC-based or NFS-based. InfiniBox supports both block and NFS access for /hana/data, /hana/log, and /hana/shared.

In this guide, we used SUSE Linux Enterprise Server (SLES) as the operating system running the SAP HANA database. More information on the best practices for configuring the OS can be found in Appendix A: Installing and Setting-up the Linux OS.

The Infinidat InfiniBox Storage Array

Infinidat believes that the companies that acquire, store, and analyze the most data gain the greatest competitive advantage. Infinidat's patented storage architecture leverages industry-standard hardware to deliver InfiniBox, a storage array that yields 2M IOPS, 99.99999% reliability, and over 8 PB of capacity in a single rack. Automated provisioning, management, and application integration provide a system that is incredibly efficient to manage and simple to deploy. Infinidat is changing the paradigm of enterprise storage while reducing capital requirements, operational overhead, and complexity.

The uniqueness of the Infinidat solution is a storage architecture that includes over 100 patented innovations. The architecture provides a software-driven set of enterprise storage capabilities residing on industry-standard, commodity hardware. As new hardware and storage technologies become available, Infinidat can take advantage of them. By shipping the software with a highly integrated and tested hardware reference platform, Infinidat is able to deliver a high-performing, highly resilient, scalable software-defined storage solution.

Infinidat's level of integration and testing minimizes the time and risk of developing such a solution in house, and delivers it at a much lower cost. In addition, the storage software for automated provisioning, management, and application integration enables fewer administrators to manage more storage, keeping OpEx low.

Today, InfiniBox unified storage arrays are offered in several models, ranging from 100 TB to 4.149 PB of usable capacity. The models differ in cache size, the number of SSDs, and the number and size of the HDDs in the system.

From an SAP HANA configuration perspective, the number of HANA nodes per system and maximum node configurations are described in the table below:

 

Model                    F6xxx          F4xxx          F2xxx
Recommended HANA Nodes   Up to 92       Up to 74       Up to 24

SAP HANA I/O workloads require specific consideration for the configuration of the data and log volumes on the InfiniBox storage arrays. InfiniBox delivers the high performance needed for the persistent storage of an SAP HANA database as well as for the log volumes. When running other workloads on the same InfiniBox system or adding more production nodes, consider using the performance test tool provided by SAP, HWCCT (Hardware Configuration Check Tool), to verify that the KPIs defined by SAP are met.

The SAP HANA storage certification of the InfiniBox array applies to both block- and file-attached HANA workloads; this white paper discusses how to use InfiniBox in a block environment. One of the key value propositions of the InfiniBox system is that there are no complex configuration schemas to follow for system disk configuration.

All software, including our pre-configured RAID and tiering software (which puts the most active data on the fastest-performing drives), is included and preconfigured. There is no need to set up RAID configurations or determine which drives you need. The system is designed to deliver the highest IOPS at all times.

For further implementation details, please refer to the HANA nodes scale-out section in this document.

Infinidat's Performance Acceleration and Workload Optimization

InfiniBox is a flash-optimized array, using a combination of DRAM, flash media (SSD), and high-capacity NL-SAS disks to write, read, and store data. The algorithm used for data placement optimization is called Neural Cache.

InfiniBox delivers sustained, high levels of performance across mixed and consolidated workloads. The system is built from the ground up to adapt to different workloads within a single environment.

Write Acceleration

InfiniBox accepts all writes into its DRAM without any pre-processing, and makes a second copy of the write in another node's DRAM over low-latency InfiniBand before sending the acknowledgment to the host. Acknowledging the write from DRAM (directly attached to the CPU), instead of from an external flash device, allows InfiniBox to complete writes at the lowest possible latency. InfiniBox uses a single, large memory pool to accept writes. This allows larger write bursts to be sustained, lets frequently changing data be overwritten at DRAM latency, and gives Neural Cache time to make smart decisions, prioritizing which data blocks will benefit from DRAM speeds and which should be destaged to SSDs and HDDs. By keeping data longer in the write cache, Neural Cache avoids unnecessary workload on the CPU and the backend persistency layers.

Read Acceleration

InfiniBox uses its innovative Neural Cache, which aims to place all of the hot data in DRAM. Neural Cache allows most reads to complete at DRAM speed, which is 1000 times faster than flash.

Since Neural Cache is a learning algorithm, it optimizes performance over time. InfiniBox leverages a thick SSD flash layer, which serves as a "cushion" for DRAM-misses. As Neural Cache learns the I/O patterns and optimizes DRAM data placement, the flash layer changes its function from handling DRAM-misses to handling changes in I/O patterns, which the algorithm may not be able to predict (e.g. periodic audit that requires data not in DRAM).

QoS

The InfiniBox system features a virtualized storage service that is shared by all hosts and applications. The QoS feature aligns system performance with varying business needs.
The user may consider using QoS to restrict IOPS and throughput by limiting the resources available to a 'noisy neighbor'.

Block device setup for HANA datastore

In a production environment it is essential to have dual connections for each of the network and FC channels. In the figure below, all three InfiniBox storage nodes are shown (the disk enclosures are omitted for simplicity).

One of the benefits of working with the InfiniBox system is the simple method of provisioning storage for hosts. This paper describes a 2+1 setup, in which two active nodes each have their own set of data and log volumes. The HANA system can be scaled with multiple hosts, which can be added later.

The following steps describe the basic process of implementation, zoning, creating hosts and volumes on the InfiniBox, mapping the LUNs, and configuring them from the host side.

Step 1: Host zoning

  1. In this setup, we have three hosts, each with access to all volumes. This connectivity is essential for being able to mount these volumes in a failover scenario.

    See the section about the SAP HANA Storage Connector API client later in this document.
  2. Each host has two HBA ports:

    • p1 – connected to Fabric A

    • p2 – connected to Fabric B, to retain high availability during failures or maintenance

  3. Each InfiniBox has 3 nodes; each node has one port connected to Fabric A and a second port connected to Fabric B.
    The active zones are grouped on each fabric as follows:

    Zone name   Fabric  Hosts                         Targets
    ibox_a_sap  A       sap01_p1, sap02_p1, sap03_p1  ibox_n1_p1, ibox_n2_p1, ibox_n3_p1
    ibox_b_sap  B       sap01_p2, sap02_p2, sap03_p2  ibox_n1_p2, ibox_n2_p2, ibox_n3_p2
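The per-fabric grouping in the table follows directly from the port naming convention, so the zone membership can be derived programmatically. The following is an illustrative sketch (the build_zones helper and the naming scheme are ours, not an Infinidat tool):

```python
# Build the two per-fabric zones from the naming convention used above:
# host ports sapNN_p1 belong to Fabric A, sapNN_p2 to Fabric B, and each
# of the three InfiniBox nodes contributes one port per fabric.

def build_zones(hosts, node_count=3):
    zones = {}
    for fabric, port in (("a", "p1"), ("b", "p2")):
        members = ["%s_%s" % (h, port) for h in hosts]
        targets = ["ibox_n%d_%s" % (n, port) for n in range(1, node_count + 1)]
        zones["ibox_%s_sap" % fabric] = {"hosts": members, "targets": targets}
    return zones

if __name__ == "__main__":
    for name, zone in build_zones(["sap01", "sap02", "sap03"]).items():
        print(name, zone["hosts"], zone["targets"])
```

Generating the membership lists this way keeps the zoning consistent when hosts are added later.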

Step 2: Configure the SAP HANA hosts to work with InfiniBox

The easiest way to connect the host to the storage is with Host PowerTools (HPT), free software from Infinidat that is installed on each host and configures multipath settings and other deployment aspects according to best practices. It is supported on various operating systems and can be downloaded from the Infinidat repository site.

  1. Download and install Host PowerTools.
  2. After installing HPT, it is recommended to run:
    sudo infinihost settings check --auto-fix
    This checks the compatibility of the host and applies the needed configuration (mainly multipathing).
  3. Register the system:
    sudo infinihost system register

Step 3: Create a host cluster

  1. Create the cluster using InfiniShell.
    cluster.create name=sap_hana
  2. Add the hosts that were created by Host PowerTools to the cluster. 
    1. Run host.query to see the hosts:
      NAME   CLUSTER  LUNS  FC PORTS  CREATED AT
      sap01  -        0     2         2016-01-07 08:00:00
      sap02  -        0     2         2016-01-07 08:00:00
      sap03  -        0     2         2016-01-07 08:00:00
    2. Use the cluster.add_host command:
      cluster.add_host name=sap_hana host=sap01
      Host "sap01" added to cluster "sap_hana"
      cluster.add_host name=sap_hana host=sap02
      Host "sap02" added to cluster "sap_hana"
      cluster.add_host name=sap_hana host=sap03
      Host "sap03" added to cluster "sap_hana" 
    3. Query the cluster to verify that the hosts belong to the cluster:
      cluster.host_query
      NAME      HOST
      sap_hana  sap01
      sap_hana  sap02
      sap_hana  sap03

Step 4: Provision volumes

Provision the volumes on the InfiniBox and map them to the hosts.

  1. Create the volumes on the InfiniBox system:
    If needed, create a pool: the physical capacity that holds the volumes. You should have the HANA cluster sizing requirements at hand in order to set the total capacity.
    Run the following InfiniShell command:
    pool.create name=sap-hana physical_capacity=8t ssd_cache=yes
    Run pool.query to see the details of the newly created pool.
  2. Creating the volumes.
    Note: The specific size of the volumes is usually determined by the requestor, and derived from the RAM size of the HANA node.
    In the following example we create two sets of volumes, one for each of the two active nodes (the third node is on stand-by).
    We use the InfiniShell CLI with the vol.create command as follows:

    vol.create name=sap01-data size=2t pool=sap-hana
    vol.create name=sap01-log size=2t pool=sap-hana
    vol.create name=sap03-data size=2t pool=sap-hana
    vol.create name=sap03-log size=2t pool=sap-hana
    Another option is to use the GUI or a script.
    Run vol.query to see the created volumes:
    vol.query name=sap01-data,sap01-log,sap03-data,sap03-log --columns=name,size,pool,ssd_cache
    NAME        SIZE     POOL      SSD CACHE
    sap01-data  2.00 TB  sap-hana  yes
    sap01-log   2.00 TB  sap-hana  yes
    sap03-data  2.00 TB  sap-hana  yes
    sap03-log   2.00 TB  sap-hana  yes
  3. Map the volumes to the cluster. 
    In this architecture we will map each LUN to all hosts, with the CLI vol.map command:
    vol.map name=sap01-data cluster=sap_hana
    Volume "sap01-data" mapped to LUN 11 in cluster "sap_hana"
    vol.map name=sap01-log cluster=sap_hana
    Volume "sap01-log" mapped to LUN 12 in cluster "sap_hana"
    vol.map name=sap03-data cluster=sap_hana
    Volume "sap03-data" mapped to LUN 13 in cluster "sap_hana"
    vol.map name=sap03-log cluster=sap_hana
    Volume "sap03-log" mapped to LUN 14 in cluster "sap_hana"
    Query the results:
    vol.map_query name=sap01-data,sap01-log,sap03-data,sap03-log
    NAME        TARGET TYPE  TARGET NAME  LUN ID
    sap01-data  CLUSTER      sap_hana     11
    sap01-log   CLUSTER      sap_hana     12
    sap03-data  CLUSTER      sap_hana     13
    sap03-log   CLUSTER      sap_hana     14
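The note in step 2 ties volume sizes to the RAM of the HANA node. As a rough sketch of commonly cited SAP TDI sizing guidance (data of roughly 1.2x RAM; log of 0.5x RAM, capped at 512 GB; verify these rules against the SAP sizing documentation for your HANA release), the per-node sizes can be estimated as:

```python
# Rough per-node persistence sizing from RAM, following commonly cited
# SAP HANA TDI guidance: data ~= 1.2 x RAM, log = min(0.5 x RAM, 512 GB).
# These rules are an assumption; confirm them for your HANA release.

def hana_volume_sizes_gb(ram_gb):
    data_gb = int(round(1.2 * ram_gb))
    log_gb = min(int(round(0.5 * ram_gb)), 512)
    return {"data": data_gb, "log": log_gb}

if __name__ == "__main__":
    for ram in (512, 1024, 2048):
        print(ram, hana_volume_sizes_gb(ram))
```

The estimate is a starting point for the pool and volume sizes used in the vol.create commands above, not a substitute for a proper SAP sizing exercise.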

Step 5: SAP Storage Connector API Fibre Channel Client

The Fibre Channel Storage Connector is a ready-to-use implementation of the SAP HANA Storage Connector API.

This API provides hooks for database startup and for failing-over nodes.

Storage Connector clients implement the functions defined in the Storage Connector API. The fcClient implementation is responsible for mounting the SAP HANA volumes. It also implements a proper fencing mechanism during a failover by means of SCSI-3 persistent reservations.

The configuration of the SAP storage connector API is contained within the global.ini file. The location of the file is specified during installation and managed by the cluster at the following location – /hana/shared/<SID>/global/hdb/custom/config/, where SID is the HANA system ID.

To find the WWIDs of the data and log volumes, look in the /dev/mapper directory. A sample global.ini file is provided in Appendix A.

For more information, see the SAP HANA Fibre Channel Storage Connector Admin Guide.
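As a small helper for the WWID lookup described above, the following sketch (the find_wwids function is ours) filters a /dev/mapper listing down to names that look like multipath WWIDs, i.e. "3" followed by 32 hex digits, matching the examples in Appendix A:

```python
import re

# Multipath names for NAA-6 SCSI identifiers are "3" plus 32 hex digits,
# like the 36742b0f... WWIDs shown in the global.ini sample in Appendix A.
_WWID = re.compile(r"^3[0-9a-f]{32}$")

def find_wwids(mapper_entries):
    """Filter a /dev/mapper directory listing down to multipath WWID names."""
    return [name for name in mapper_entries if _WWID.match(name)]

if __name__ == "__main__":
    # On a live host, pass os.listdir("/dev/mapper") instead of this sample.
    sample = ["control", "36742b0f00000047e0000000000005546", "vg_root-lv_swap"]
    print(find_wwids(sample))
```

The matching names can then be pasted into the partition_*_data__wwid and partition_*_log__wwid entries of global.ini.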

Setting up a /hana/shared device

The SAP HANA cluster requires a location shared between all HANA nodes. This is a filesystem that stores the cluster configuration and logs.

This shared location can reside on an NFSv3 service that supports the SAP HANA requirements, with an implementation of a connector (STONITH) supplied by the storage vendor, which shuts down a failing node to prevent data corruption. See an example of the STONITH method in Appendix C.

Another option is an OCFS2 clustered filesystem on a block device.

  • The creation and configuration of an OCFS2 device is out of scope for this document.
  • With InfiniBox version 4 and above, /hana/shared can be placed on an InfiniBox file system export.
  • Creating a file system for /hana/shared is described later in this guide, in Creating the Filesystem and Export on the InfiniBox array.

Storing /hana/shared on an OCFS2 device

Oracle Cluster File System 2 (OCFS2) is a general-purpose journaling filesystem that is fully integrated into the Linux kernel (2.6 and later). OCFS2 allows you to store application binary files, data files, and logs on SAN devices. All of the cluster nodes have concurrent read and write access to the file system, and a distributed lock manager helps prevent file access conflicts.

Setting-up an OCFS2 volume on SLES for /hana/shared 

Follow these instructions to set up an OCFS2 filesystem for /hana/shared.

  1. Create a volume on InfiniBox and map it to the cluster. This task was described in Block device setup for HANA datastore, specifically Step 3: Create a host cluster and Step 4: Provision volumes.
    To verify, run vol.map_query. The expected result is:
    NAME      TARGET TYPE  TARGET NAME  LUN ID
    sap-ocfs  CLUSTER      sap_hana     <lun number>

  2. Configure the cluster service nodes.
    Install the following packages to support OCFS2:
    ocfs2-tools-*, ocfs2-kmp-*, and ocfs2-tools-o2cb-* (mandatory), and ocfs2console-* (optional, for the OCFS2 GUI console).
  3. Enable the cluster service.
    The O2CB cluster service is a set of modules and in-memory file systems that are required to manage OCFS2 services and volumes.
    1. Log in to the server as root, and run the following command on each node. This will enable the services:
      chkconfig --add o2cb
      chkconfig --add ocfs2
    2. Configure the o2cb driver to load on boot, and set its parameters. Run the following command on each node and answer the prompts:
      /etc/init.d/o2cb configure
      For example:
      Load O2CB driver on boot (y/n) [y]:
      Cluster stack backing O2CB [o2cb]:
      Cluster to start on boot (Enter "none" to clear) [sapocfs]: none
      Specify heartbeat dead threshold (>=7) [31]:
      Specify network idle timeout in ms (>=5000) [30000]:
      Specify network keepalive delay in ms (>=1000) [2000]:
      Specify network reconnect delay in ms (>=2000) [2000]:
      Writing O2CB configuration: OK
      Cluster not known
    3. Configure the cluster name and node settings.
      Cluster configuration can be done with the OCFS2CONSOLE GUI or with the o2cb utility, which registers the configuration in the /etc/ocfs2/cluster.conf file. When done, copy the file to the other nodes.
      o2cb add-cluster sapocfs
      o2cb add-node --ip 172.16.76.60 --port 7777 --number 1 sapocfs sap01
      o2cb add-node --ip 172.16.91.92 --port 7777 --number 2 sapocfs sap02
      o2cb add-node --ip 172.16.77.111 --port 7777 --number 3 sapocfs sap03
      o2cb register-cluster sapocfs
      o2cb start-heartbeat sapocfs
      After changes are made to the cluster configuration, restart the service by running:
      /etc/init.d/o2cb stop
      /etc/init.d/o2cb start
  4. Bring the cluster online by running:
    /etc/init.d/o2cb online sapocfs
    The response indicates that the cluster status is good:
    Setting cluster stack "o2cb": OK
    Registering O2CB cluster "sapocfs": OK
    Setting O2CB cluster timeouts : OK
  5. Create the OCFS2 volume.
    The OCFS2 cluster must be online, because the format operation must first ensure that the volume is not mounted on any node in the cluster.
    The creation of the volume could be done via the ocfs2console or by using the mkfs.ocfs2 command. For example:
    mkfs.ocfs2 -L ocfs2hanashared --cluster-name=sapocfs --cluster-stack=o2cb --fs-feature-level=max-features -N 3 /dev/mapper/36742b0f00000047e000000000036fc39
  6. Mount the volume.
    1. Create the mount point:
      mkdir -p /hana/shared
    2. Add the mount entry to /etc/fstab, so the volume is mounted during the boot process.
    3. Run mount -a and check that the mount succeeds.
      Mounting OCFS2 volumes can take a few seconds because of the interaction with the cluster service.
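For step 6.2, an /etc/fstab entry might look like the following illustrative sketch. The label matches the mkfs.ocfs2 example above; the _netdev option (commonly used for cluster filesystems so mounting waits for the network and cluster stack) is an assumption to validate for your distribution:

```
LABEL=ocfs2hanashared  /hana/shared  ocfs2  _netdev,defaults  0 0
```

Mounting by label rather than by device path keeps the entry stable if the multipath device name changes.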


For more information on the OCFS2 shared file system, please refer to the OCFS2 documentation from Oracle. Note that Red Hat does not provide support for OCFS2.

I/O Testing with HWCCT

The process of SAP HANA hardware certification includes running I/O utilization testing to ensure that the performance of the HANA installation is not influenced by competing input/output (I/O) operations of other workloads. Multiple HANA nodes connected to the storage units must fulfill the KPIs for each server, even when running in parallel.

The testing is done using the HWCCT tool, which can be run with specific parameters. These parameters are set in the fsperf script, by hdbparam, or by using the -param option.

For hwcct details see SAP Note at http://service.sap.com/sap/support/notes/1943937 (Login required).

These settings apply to both NAS and SAN connectivity, on HANA version 1.0 only.

The recommended settings for InfiniBox are specified below:

async_write_submit_active=on
async_read_submit=on
max_parallel_io_requests=128

Starting with SAP HANA 2.0, hdbparam is deprecated and the parameters have been moved to global.ini. The parameters can also be set using SQL commands or SAP HANA Studio. See the example in Appendix A for the parameters in the [fileio] section of global.ini.

See SAP Note 2399079 - Elimination of hdbparam in HANA 2 for details.
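The recommended values can be sanity-checked against a global.ini file with a short script. This is a sketch using Python's configparser (the check_fileio helper is ours; layered HANA configuration files may need to be merged before checking):

```python
import configparser

# Recommended InfiniBox settings, kept in global.ini's [fileio] section
# for HANA 2.0 (hdbparam is deprecated there).
RECOMMENDED = {
    "async_write_submit_active": "on",
    "async_read_submit": "on",
    "max_parallel_io_requests[DATA]": "128",
    "max_parallel_io_requests[LOG]": "128",
}

def check_fileio(ini_text):
    """Return the recommended [fileio] keys that are missing or differ."""
    cp = configparser.ConfigParser(strict=False)
    cp.optionxform = str  # keep key case and the [DATA]/[LOG] suffixes intact
    cp.read_string(ini_text)
    fileio = cp["fileio"] if cp.has_section("fileio") else {}
    return {k: v for k, v in RECOMMENDED.items() if fileio.get(k, "") != v}

if __name__ == "__main__":
    sample = "[fileio]\nasync_read_submit = on\n"
    print(check_fileio(sample))  # reports the keys still to be set
```

Running this against the cluster's active global.ini gives a quick pre-HWCCT check that the I/O parameters match the recommendations above.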

Summary

Taking advantage of Infinidat InfiniBox enterprise proven storage array with SAP HANA provides clients with a number of key benefits. Clients can reduce the amount of physical hardware required to run SAP HANA workloads, reducing CapEx and OpEx by taking advantage of existing storage management and best practices of the solution.

Clients can run their SAP HANA workloads on InfiniBox and have peace of mind that they will achieve and even exceed their competitive business objectives. InfiniBox delivers 99.99999% availability, with maximum performance and can scale to meet any of the SAP HANA needs.

This solution provides clients the following benefits:

  • Easy integration of SAP HANA into existing data center infrastructure.
  • Existing data center best practices for data management and protection
  • Highest reliability at 99.99999% uptime
  • Over 2M IOPS of performance
  • Over 8PB of usable storage in a single rack, before data reduction
  • Best overall storage TCO including power, cooling and floor space

Appendix A: global.ini example


[communication]
listeninterface = .global

[multidb]
mode = multidb
database_isolation = low
singletenant = yes

[internal_hostname_resolution]
1.1.1.1 = internode-sap01
1.1.1.2 = internode-sap02
1.1.1.3 = internode-sap03

[persistence]
basepath_datavolumes = /hana/data/H01
basepath_logvolumes = /hana/log/H01
use_mountpoints = yes


[system_information]
usage = test

[storage]
ha_provider = hdb_ha.fcClient
partition_*_*__prType = 5
partition_*_data__mountOptions = -o relatime,inode64
partition_*_log__mountOptions = -o relatime,inode64 
partition_1_data__wwid = 36742b0f00000047e0000000000005546
partition_1_log__wwid = 36742b0f00000047e000000000000554a
partition_2_data__wwid = 36742b0f00000047e0000000000002452
partition_2_log__wwid = 36742b0f00000047e0000000000002454


[fileio]
max_parallel_io_requests[DATA] = 128
async_write_submit_active= on
async_read_submit= on
max_parallel_io_requests[LOG] = 128

[trace]
ha_fcclient = info

Appendix B: /etc/multipath.conf example


defaults {
	force_sync no
	rr_min_io 1000
	features "0"
	prio "const"
	reassign_maps "no"
	rr_min_io_rq 1
	path_grouping_policy "failover"
	log_checker_err always
	path_selector "service-time 0"
	multipath_dir "/lib64/multipath"
	fast_io_fail_tmo 5
	bindings_file "/etc/multipath/bindings"
	alias_prefix "mpath"
	prio_args ""
	path_checker "directio"
	flush_on_last_del "no"
	polling_interval 5
	max_fds 8192
	detect_prio no
	failback "manual"
	retain_attached_hw_handler no
	rr_weight "uniform"
	verbosity 2
	wwids_file /etc/multipath/wwids
	user_friendly_names no
	max_polling_interval 20
	queue_without_daemon no
}
device {
	rr_min_io 1
	features "0"
	prio "alua"
	rr_min_io_rq 1
	path_grouping_policy "group_by_prio"
	dev_loss_tmo 30
	path_selector "round-robin 0"
	path_checker "tur"
	product "InfiniBox.*"
	vendor "NFINIDAT"
	flush_on_last_del "yes"
	failback 30
	rr_weight "priorities"
	no_path_retry 0
}

Appendix C: STONITH method example


h04adm@interop014:/usr/sap/H04/HDB04> cat /hana/shared/inficonnector/InfiSTONITH.py
"""
Sample for a HA/DR hook provider.
 
When using your own code in here, please copy this file to location on /hana/shared outside the HANA installation.
This file will be overwritten with each hdbupd call! To configure your own changed version of this file, please add
to your global.ini lines similar to this:
 
    [ha_dr_provider_<className>]
    provider = <className>
    path = /hana/shared/haHook
    execution_order = 1
 
 
For all hooks, 0 must be returned in case of success.
"""
 
from hdb_ha_dr.client import HADRBase, Helper
import os, time
 
 
class InfiSTONITH(HADRBase):
 
    def __init__(self, *args, **kwargs):
        # delegate construction to base class
        super(InfiSTONITH, self).__init__(*args, **kwargs)
 
    def about(self):
        return {"provider_company" :        "INFINIDAT",
                "provider_name" :          "InfiSTONITH", # provider name = class name
                "provider_description" :    "HANA SPS04 IPMI Stonith",
                "provider_version" :        "2.0"}
 
 
    def startup(self, hostname, storage_partition, system_replication_mode, **kwargs):
        self.tracer.debug("enter startup hook; %s" % locals())
        self.tracer.debug(self.config.toString())
 
        self.tracer.info("leave startup hook")
        return 0
 
    def shutdown(self, hostname, storage_partition, system_replication_mode, **kwargs):
        self.tracer.debug("enter shutdown hook; %s" % locals())
        self.tracer.debug(self.config.toString())
 
        self.tracer.info("leave shutdown hook")
        return 0
 
    def failover(self, hostname, storage_partition, system_replication_mode, **kwargs):
        self.tracer.debug("enter failover hook; %s" % locals())
        self.tracer.debug(self.config.toString())
 
        self.tracer.info("leave failover hook")
        return 0
 
    def stonith(self, failingHost, **kwargs):
        self.tracer.debug("enter stonith hook; %s" % locals())
        self.tracer.debug(self.config.toString())

        # e.g. stonith of params["failed_host"]
        # e.g. set vIP active
        self.tracer.info("stonith - power cycling host %s" % failingHost)
        ipmi_host = "m-%s" % failingHost
        ipmi_call = "ipmitool -H %s -U root -P undisclosed -I lanplus power " % ipmi_host
        ipmi_call_off = "%s off" % ipmi_call
        ipmi_call_status = "%s status" % ipmi_call
        ipmi_call_on = "%s on" % ipmi_call

        self.tracer.info("%s" % ipmi_call_off)
        print(ipmi_call_off)
        retries = 5
        retry_nr = 1

        while True:
            print("Trying to call ipmitool: %d" % retry_nr)
            # If we fail to call ipmi we need to stop here and leave the system unmounted!
            (code, output) = Helper._runOsCommand(ipmi_call_off)
            if code != 0:
                # non-zero return code means the power-off call failed
                print(output)
                self.tracer.error(output)
            time.sleep(3)
            (code, output) = Helper._runOsCommand(ipmi_call_status)
            if 'is off' in output:
                msg = "successful power off %s" % failingHost
                self.tracer.info(msg)
                print(msg)
                break
            if retry_nr >= retries:
                msg = "giving up powering off %s - NEED HELP" % failingHost
                self.tracer.error(msg)
                print(msg)
                break
            retry_nr += 1

        self.tracer.info("leave stonith hook")
        return 0
 
    def preTakeover(self, isForce, **kwargs):
        """Pre takeover hook."""
        self.tracer.info("%s.preTakeover method called with isForce=%s" % (self.__class__.__name__, isForce))
 
        if not isForce:
            # run pre takeover code
            # run pre-check, return != 0 in case of error => will abort takeover
            return 0
        else:
            # possible force-takeover only code
            # usually nothing to do here
            return 0
 
 
    def postTakeover(self, rc, **kwargs):
        """Post takeover hook."""
        self.tracer.info("%s.postTakeover method called with rc=%s" % (self.__class__.__name__, rc))
 
        if rc == 0:
            # normal takeover succeeded
            return 0
        elif rc == 1:
            # waiting for force takeover
            return 0
        elif rc == 2:
            # error, something went wrong
            return 0
 
    def srConnectionChanged(self, parameters, **kwargs):
        self.tracer.debug("enter srConnectionChanged hook; %s" % locals())
 
        # Access to parameters dictionary
        hostname = parameters['hostname']
        port = parameters['port']
        volume = parameters['volume']
        serviceName = parameters['service_name']
        database = parameters['database']
        status = parameters['status']
        databaseStatus = parameters['database_status']
        systemStatus = parameters['system_status']
        timestamp = parameters['timestamp']
        isInSync = parameters['is_in_sync']
        reason = parameters['reason']
        siteName = parameters['siteName']
 
        self.tracer.info("leave srConnectionChanged hook")
        return 0
 
    def srReadAccessInitialized(self, parameters, **kwargs):
        self.tracer.debug("enter srReadAccessInitialized hook; %s" % locals())
 
        # Access to parameters dictionary
        database = parameters['last_initialized_database']
        databasesNoReadAccess = parameters['databases_without_read_access_initialized']
        databasesReadAccess = parameters['databases_with_read_access_initialized']
        timestamp = parameters['timestamp']
        allDatabasesInitialized = parameters['all_databases_initialized']
 
        self.tracer.info("leave srReadAccessInitialized hook")
        return 0
 
    def srServiceStateChanged(self, parameters, **kwargs):
        self.tracer.debug("enter srServiceStateChanged hook; %s" % locals())
 
        # Access to parameters dictionary
        hostname = parameters['hostname']
        service = parameters['service_name']
        port = parameters['service_port']
        status = parameters['service_status']
        previousStatus = parameters['service_previous_status']
        timestamp = parameters['timestamp']
        daemonStatus = parameters['daemon_status']
        databaseId = parameters['database_id']
        databaseName = parameters['database_name']
        databaseStatus = parameters['database_status']
 
        self.tracer.info("leave srServiceStateChanged hook")
        return 0


For more information

Infinidat offers experienced storage consultants with proven methodologies who are able to assist with implementing InfiniBox with your applications. For more information, see the Infinidat website (https://infinidat.com) or ask your local Infinidat sales representative. 




© Copyright Infinidat 2019.

This document is current as of its publication date and may be changed by Infinidat at any time. Not all offerings are available in every country in which Infinidat operates.

The data discussed herein is presented as derived under specific operating conditions. Actual results may vary. THE INFORMATION IN THIS DOCUMENT IS PROVIDED "AS IS" WITHOUT ANY WARRANTY, EXPRESSED OR IMPLIED, INCLUDING WITHOUT ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT. Infinidat products are warranted according to the terms and conditions of the agreements under which they are provided.

Infinidat, the Infinidat logo, InfiniBox, InfiniRAID, InfiniSnap, InfiniMetrics, and any other applicable product trademarks are registered trademarks or trademarks of Infinidat LTD in the United States and other countries. Other product and service names might be trademarks of Infinidat or other companies. A current list of Infinidat trademarks is available online at https://www.infinidat.com/sites/default/files/resource-pdfs/INFINIDAT-Trademarks.pdf.

Please Recycle
