HPE XP7 All Flash storage and HPE Integrity Superdome X for SAP HANA Tailored Datacenter Integration Technical white paper


Contents

• Executive summary
• Target audience
• HPE Integrity Superdome X for SAP HANA TDI
• HPE XP7 All Flash array overview
• HPE XP7 All Flash storage for SAP HANA TDI
• Reference architecture solution components
  – HPE XP7 All Flash array
  – HPE XP7 Service Processor
  – SAP HANA N+1 scale-out database
  – SAP HANA shared file system
  – Brocade 16Gb fabric
  – Host bus adapters (HBAs)
  – Operating system
  – Network configuration
• Configuration and setup recommendations
  – HPE XP7 Virtual Volume definition
  – Creating a file system
  – SAP HANA shared file system
  – Installing the SAP HANA scale-out database
  – Using the SAP HANA-HWC-ES-1.1 (fsperf) test tool
  – SAP Connector API
  – SAP HANA Host Auto-Failover high availability
  – Multipath implementation
  – Udev tuning
• Performance observations and considerations
  – Using Thin Provisioned versus Fully Provisioned volumes
  – Impact of RAID level on KPI performance metrics
  – Impact of cache size
  – Using storage compression
  – Sharing SAP HANA and non-HANA workloads
  – Configuring multipath
  – Tuning Udev
  – Tuning queue depth
  – HPE XP7 port configuration
• Scalability and sizing guidelines
  – RAID 5
  – RAID 6
  – RAID 1
• HPE XP7 All Flash for SAP HANA TDI in multitenant environments
• HPE high availability and business continuity solutions for SAP HANA TDI
• HPE Data Protection solutions for SAP HANA TDI
• Consulting and support services from HPE Pointnext
• Conclusion
• References


Executive summary
To meet customers’ needs for SAP HANA®, Hewlett Packard Enterprise offers both complete appliance and Tailored Datacenter Integration (TDI) deployment options.

HPE Integrity Superdome X architecture for SAP HANA TDI offers the performance, manageability, and reliability needed to handle the challenges of a combined transaction and analytics platform. HPE Integrity Superdome X offers a scale-up and scale-out solution for SAP HANA TDI for data centers with growing mission-critical workloads. HPE Superdome X is SAP® certified for up to 16 TB of memory in a scale-up configuration and up to 192 TB of memory in a scale-out configuration. Through our unique hard-partitioning technology, HPE nPars, HPE Superdome X adds agility and delivers 20X greater reliability than platforms relying on software partitioning alone.1

HPE XP7 storage provides top performance, extreme availability, and easy consolidation capabilities. Designed for applications requiring 100% data availability, the HPE XP7 combines a seven-nines (99.99999% availability) platform of fully online-scalable, redundant hardware with ultra-high performance and advanced data replication, disaster recovery (DR), and online data migration capabilities. HPE XP7 storage supports clustering solutions that allow remote mirroring to be integrated with a high-availability server cluster to provide multisite disaster recovery. Flash inline hardware compression increases capacity efficiency in addition to what SAP HANA provides.

SAP HANA TDI solutions from Hewlett Packard Enterprise include compute blocks and certified storage. HPE Compute Blocks for SAP HANA TDI are available for a number of Intel® Xeon® architectures and platforms. Certified SAP HANA TDI storage from Hewlett Packard Enterprise includes a wide range of HPE MSA, Nimble, HPE 3PAR StoreServ, and HPE XP7 platforms.

This technical white paper provides information on how to integrate HPE XP7 All Flash storage systems with SAP HANA TDI using SAP specifications for enterprise storage. This paper describes key solution components, configuration, and implementation details and provides information on the unique architectural advantages of HPE XP7 All Flash storage for SAP HANA TDI.

Target audience
The intended audience includes IT professionals seeking to design and implement an SAP HANA TDI environment. Readers of this technical white paper should have a functional understanding of SAP HANA concepts and technologies.

HPE Integrity Superdome X for SAP HANA TDI
HPE Superdome X TDI configurations for SAP HANA are based on the Intel Xeon E7-8890 v4 or E7-8894 v4 architectures. HPE Integrity Superdome X combines the best of x86 architectures and the SX3000 chipset to provide a performance-scalable server with the necessary RAS features to operate in mission-critical environments. Each enclosure consists of up to eight HPE BL920s Gen9 server blades, one upper midplane that provides support for the four SX3000 Xbar Fabric modules, and one lower midplane that interfaces to I/O interconnect modules that plug into bays in the rear of the enclosure. Each enclosure also includes a shared DVD module and two Global Position Service Modules (GPSMs) that are used for server management and global clock sourcing.

HPE Superdome X is certified for SAP HANA TDI use cases with up to 16 TB of memory in a scale-up configuration and up to 192 TB of memory in a scale-out configuration.

HPE XP7 All Flash array overview
HPE XP7 storage is an enterprise-class data storage platform for disaster-proof storage in mission-critical environments. Designed for organizations that simply cannot afford any downtime, HPE XP7 combines an ultra-high-performance, online-scalable, fully redundant hardware platform with unique data replication capabilities integrated with clustering solutions for complete business continuity and data protection. HPE XP7 can adapt to changing business conditions in real time while increasing data center capacity and lifespan and providing solutions that decrease risk and costs.

HPE XP7 All Flash storage for SAP HANA TDI
HPE XP7 supports Flash Module Devices (FMDs), which provide solid-state, nonvolatile, high-performance data capacity. Flash Module capacity can be configured for use in the array in the same way as any other HDD or SSD. The number of Flash Modules that can be installed in an HPE XP7 is flexible; FMDs must be added in groups of four or more. Additional capacity can be installed over time as capacity needs grow. Spare Flash Modules are automatically used in the event of a Flash Module Device failure.

The Gen2 Flash Modules achieve better performance without compromising endurance. They use capacitors rather than batteries for power backup, so there is no periodic battery charging requirement.

1 Based on Hewlett Packard Labs availability analysis and actual measured availability results, June 2015


Up to six 48-slot Flash Module Chassis may be configured to each DKC. The Flash Module Chassis (FMC) uses a dual-ported SAS interface. Each Flash Module Chassis is connected to both blades of the redundant SAS controller by separate connections.

HPE XP7 All Flash arrays are the second generation HPE XP7 arrays certified for SAP HANA TDI. The combined performance, efficiencies, availability, and data protection make it an extremely compelling solution for TDI environments.

All-flash TDI solution

• Single rack, small footprint

• Low power consumption

• Highest scalability

Mission-critical TDI storage

• Seven nines architecture

• Active-Active HA replication option

• 3DC disaster tolerance

• 18 years proven technology

• Multitenancy resource partitioning software

Reference architecture solution components
HPE Superdome X
The HPE Superdome X configuration for SAP HANA TDI is set up in a scale-out configuration, with CPUs, memory, and operating systems assigned to individual hardware partitions. A 16-socket, 24-cores-per-socket, 4 TB HPE Superdome X x86 system has been divided into eight individual two-socket blades, each with 512 GB of memory and 48 cores.

Figure 1 shows the configuration per blade.

Figure 1. HPE Superdome X Onboard Administrator


HPE XP7 All Flash array
In an SAP HANA TDI environment, the storage requirements given in the SAP HANA TDI - Storage requirements document need to be fulfilled. All internal disks can be removed from the configuration, as the log and data volumes reside on the enterprise storage array.

The HPE XP7 array used for testing has 1 Disk Controller Chassis (DKC), 1 Channel Adapter pair (CHA), 2 Disk Adapters (DKAs), and 2 Multi-Processor Blade pairs (MPBs). The array runs firmware version 80-05-02.

The pool named SAP HANA consists of 12 RAID 5 (3D+1P) groups of FMDs using slots 1-1 through 1-6 and 2-1 through 2-6, for a total of 48 FMD devices. The pool volumes use IDs 00:00 through 00:2B. Pool v-vols 01:00 through 01:07 have been used to create SAP HANA data volumes of 1.5 TB each, and pool v-vols 01:10 through 01:17 have been used to create SAP HANA log volumes of 1 TB each. A total of 16 ports have been connected: four each on CHA-1PC, CHA-1PD, CHA-2PC, and CHA-2PD.

Figure 2 shows the HPE XP7 storage system overview.

Figure 2. HPE XP7 Remote Web Console Storage Systems overview

HPE XP7 Service Processor
The Service Processor (SVP) manages the HPE XP7 configuration, gathers statistical information, and is used for maintenance activities.

HPE XP7 storage does not require a functioning SVP in order to make capacity available for reading and writing. However, as external management functions have become dependent on the availability of the SVP in HPE XP7, some customers may desire to have fast recovery from an SVP failure by having a standby SVP. If the primary SVP fails, the hot standby SVP is switched into operation automatically within approximately six minutes. HPE XP7 Continuous Track remote support functions require connection to HPE Insight Remote Support via the internet.

HPE XP7 is managed via the Remote Web Console for any storage operations.

SAP HANA N+1 scale-out database
Four blades have been used to set up a scale-out SAP HANA database: three active nodes and one SAP HANA local Host Auto-Failover node. The SAP HANA performance KPI tests pass all requirements using this 3+1 scale-out configuration. All eight HPE Superdome X blades have been used for further scalability testing.

SAP HANA shared file system
For scale-out scenarios, a shared NFS service has to be available to store the SAP HANA configuration, log, and trace information. In the HPE scale-out configuration, HPE 3PAR StoreServ File Persona has been implemented to fulfill this need. The 1 Gbit RCIP port has been cabled to the network switch to enable HPE 3PAR StoreServ File Persona communication across the scale-out SAP HANA nodes.


Brocade 16Gb fabric
The SAN switches used for the SAP HANA scale-out validation are Brocade 16Gb/28c PP+ Embedded SAN switches running Fabric OS v7.4.1a.

Host bus adapters (HBAs)
The HBAs used are the embedded HPE QMH2672 16Gb FC HBAs for BladeSystem c-Class, running the latest firmware revision.

Operating system
The SUSE operating system version supported with HPE XP7, matching the versions supported for SAP HANA, is SUSE Linux Enterprise Server 12 with Service Pack 1 or above.

Network configuration
HPE Superdome X configurations for SAP HANA TDI use a secure VLAN setup. In our setup, the following private networks have been used:

Bond  VLAN ID  Description             MTU   IP Address    Netmask
0     6        HANA Client Access      1500  172.31.16.10  255.255.255.0
1     7        HANA Data Provisioning  1500  172.31.17.10  255.255.255.0
0     8        HANA Replication        1500  172.31.18.10  255.255.255.0
1     9        HANA Backup             1500  172.31.19.10  255.255.255.0
0     10       HANA Shared NFS         9000  172.31.20.10  255.255.255.0
1     11       HANA Internal           9000  172.31.21.10  255.255.255.0
0     12       HANA Administration     1500  172.31.22.10  255.255.255.0
1     13       ServiceGuard Quorum     1500  172.31.23.10  255.255.255.0
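As a hedged illustration, on SLES 12 one of these tagged VLANs (here the HANA Client Access network, VLAN 6) could be defined on top of a bond with an ifcfg file similar to the following; the bond name bond0 and the file path are assumptions and must match the actual bonding configuration in your environment:

```
# /etc/sysconfig/network/ifcfg-vlan6  (hypothetical SLES 12 example)
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='172.31.16.10/24'
ETHERDEVICE='bond0'    # VLAN 6 is tagged on top of bond 0
VLAN_ID='6'
```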

Configuration and setup recommendations
HPE XP7 Virtual Volume definition
For each SAP HANA server, a data and a log Virtual Volume have to be defined and exported to all SAP HANA servers or server blades. Figure 3 provides an example of the four Hewlett Packard Enterprise scale-out configurations as seen in the HPE XP7 Management Console software. The RAID level of the LUNs is shown as well. The size of the LUNs is determined by the amount of memory in the four HPE scale-out servers.

Figure 3. HPE XP7 Remote Web Console Virtual Volume details


The SAP HANA data and log volumes on HPE XP7 have been created using RAID 5 and thin volumes with compression enabled. Compression on HPE XP7 FMD drives is performed by the ASIC on each FMD on a per-drive basis and is always on.

Creating a file system
After scanning from the operating system to make the exported Virtual Volumes visible as Linux® devices, a file system must be created on top of these devices. HPE uses the XFS file system for the SAP HANA data and log files.

The following is a sample command to create a file system on one of the disk volumes:

# mkfs -t xfs -f /dev/mapper/360060e800727cf00003027cf00000100

This has to be done on all devices dedicated to SAP HANA data and log files.
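Because every data and log device is formatted the same way, this step can be scripted. The following sketch uses hypothetical WWIDs (substitute the multipath WWIDs of your own HANA data and log LUNs) and only echoes the commands so it can be reviewed first; remove the echo to actually create the file systems:

```shell
# Hypothetical WWIDs -- substitute the multipath WWIDs of your data/log LUNs.
for wwid in 360060e800727cf00003027cf00000100 \
            360060e800727cf00003027cf00000101; do
    # Print the command first; remove 'echo' to format the device for real.
    echo mkfs -t xfs -f "/dev/mapper/${wwid}"
done
```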

SAP HANA shared file system
With the HPE 3PAR File Persona Suite, Hewlett Packard Enterprise provides native file and object access capabilities within an HPE 3PAR StoreServ All Flash storage array. For SAP HANA TDI, this allows HPE XP7 block storage to be used for both data and log storage and HPE 3PAR NFS access for shared volumes such as SAP HANA binaries and configuration files.

HPE 3PAR StoreServ File Persona enables read and write access to File Shares in a cross-protocol environment over SMB, NFS, HTTP, and FTP. If applications or users use different protocols to access a common data set on a File Share, the security modes and file locks need to be translated and unified to allow proper security enforcement and data integrity. HPE File Persona allows users to access data from more than one protocol with read/write access using a mechanism known as cross-protocol locking. It ensures NFS clients can access files opened by SMB clients through share mode locks. Following are some of the benefits achieved when using cross-protocol locking:

• Configurable security modes per file store to provide near-native user experience for the preferred protocol

• Consistent default permissions specific to the preferred protocol

• Prevents fidelity loss by restricting permission changes from nonpreferred clients

If not already done, initialize the HPE 3PAR File Services by using the startfs command:

NH5_BOT_8400 cli% startfs 0:3:1 1:3:1 2:3:1 3:3:1
File Services Initialization Monitor
Task 17947 in progress, started 2017-01-30 04:57:47 PST
Monitoring Install Tasks: 17943 17944 17945 17946
Task 17943 in progress, started 2017-01-30 04:57:47 PST
Task 17944 in progress, started 2017-01-30 04:57:47 PST
Task 17945 in progress, started 2017-01-30 04:57:47 PST
Task 17946 in progress, started 2017-01-30 04:57:47 PST
Task 17947 done
File Services initialization complete.

Create a Common Provisioning Group (CPG), File Provisioning Group (FPG), Virtual File Server (VFS), and File Share (FS) for your SAP HANA instance, for instance:

NH5_BOT_8400 cli% createfpg HANA-SHARED HANASHARED 4T
NH5_BOT_8400 cli% createvfs -fpg HANASHARED -vlan 10 172.31.20.7 255.255.255.0 HANA2
NH5_BOT_8400 cli% createfshare nfs -fpg HANASHARED HANA2 HANANFS
NH5_BOT_8400 cli% setfshare nfs -options rw,no_root_squash -fstore HANANFS HANA2 HANANFS
NH5_BOT_8400 cli% showfshare nfs -d
Share Name              : HANANFS
File Provisioning Group : HANASHARED
Virtual File Server     : HANA2
File Store              : HANANFS
Share Directory         : ---


Full Directory Path     : /HANASHARED/HANA2/HANANFS/
State                   : normal
Clients                 : *
Options                 : rw, wdelay, secure, no_root_squash, crossmnt, sec=sys, hide, no_all_squash, sync, auth_nlm, subtree_check
Comment                 : ---

The NFS share will be mounted on the SAP HANA nodes to store binaries and configuration files, and also mounted as a read-only SMB share in a cross-protocol environment on the Windows® management server to install or update the SAP HANA Studio software. To make the mount persistent, add an entry for the share to the /etc/fstab configuration file on the SAP HANA server. To mount the previously created File Share on the SAP HANA system as NFSv3 or NFSv4, use one of the following commands:

hana8S-p1:/etc # mount -t nfs -o nfsvers=3 172.31.20.7:/HANASHARED/HANA2/HANANFS /hana/shared

hana8S-p1:/etc # mount -t nfs 172.31.20.7:/HANASHARED/HANA2/HANANFS /hana/shared
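The persistent /etc/fstab entry for this share could look like the following; the mount options shown are illustrative and should be aligned with your chosen NFS version and SAP recommendations:

```
# Illustrative /etc/fstab entry for the HANA shared file system
172.31.20.7:/HANASHARED/HANA2/HANANFS  /hana/shared  nfs  nfsvers=3,rw,hard  0 0
```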

When using LDAP or local users instead of Active Directory to authenticate to the share from a Windows system, authenticate from the Windows management server using the format LOCAL_CLUSTER\admin.

Installing the SAP HANA scale-out database
In a distributed SAP HANA scale-out environment, make sure all systems run the same patch level and are configured with the same time zone, the same hwclock settings, and the same local system date and time on all scale-out nodes.
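One hedged way to spot-check this consistency is a loop over the scale-out hostnames (blade1 through blade4 are the node names used in this paper); the sketch only prints the ssh commands so they can be reviewed before running:

```shell
# Print the per-node time checks; remove 'echo' to execute them over ssh.
for node in blade1 blade2 blade3 blade4; do
    echo ssh "$node" timedatectl
done
```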

To employ the fcClient during the SAP HANA database installation, multipath.conf and global.ini must be prepared beforehand. The standard fcClient comes with the installation package and can be activated using the parameter --storage_cfg=/some/path, with /some/path pointing to the directory that contains the global.ini.
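For the multipath.conf side, a fragment along the following lines could serve as a starting point; the vendor/product strings and path policy here are assumptions for XP7 OPEN-V LUNs and must be verified against the HPE XP7 multipath documentation for your firmware and OS level:

```
# Hypothetical /etc/multipath.conf fragment for HPE XP7 OPEN-V LUNs
defaults {
    user_friendly_names no       # keep WWID-based names, as used in global.ini
}
devices {
    device {
        vendor               "HP"
        product              "OPEN-V"
        path_grouping_policy multibus
        path_selector        "round-robin 0"
        no_path_retry        5
    }
}
```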

Create the global.ini file, containing the LUN WWN for every data and log volume that is used by the SAP HANA active worker nodes:

hana8S-p1:/hana/shared/SAP_HANA_DATABASE # more global.ini
[persistence]
basepath_datavolumes=/hana/data
basepath_logvolumes=/hana/log
use_mountpoints=yes
[storage]
ha_provider=hdb_ha.fcClient
partition_*_*__prtype=5
partition_1_data__wwid=360002ac0000000000000000e00018f3a
partition_1_log__wwid=360002ac0000000000000001000018f3a
partition_2_data__wwid=360002ac0000000000000000a00018f3a
partition_2_log__wwid=360002ac0000000000000000c00018f3a
partition_3_data__wwid=360002ac0000000000000000b00018f3a
partition_3_log__wwid=360002ac0000000000000000d00018f3a

Start the SAP HANA database installation using the following command:

hana8S-p1:/tmp/SAP_HANA_DATABASE # ./hdblcm --ignore=check_signature_file --component_dirs=/tmp/HANA/SPS12_122.06/extracted/SAP_HANA_DATABASE/server,/tmp/HANA/SPS12_122.06/extracted/SAP_HANA_CLIENT/client

To specify the scale-out node roles during initial installation, add the following:

--addhosts=blade1:role=worker,blade2:role=worker,blade3:role=worker,blade4:role=standby

Alternatively, when the installation wizard asks whether to add additional hosts, answer yes and define their roles:

Select roles for host 'blade4':

Index | Host Role                | Description
-------------------------------------------------------------------
1     | worker                   | Database Worker
2     | standby                  | Database Standby
3     | extended_storage_worker  | Dynamic Tiering Worker
4     | extended_storage_standby | Dynamic Tiering Standby
5     | streaming                | Smart Data Streaming
6     | rdsync                   | Remote Data Sync
7     | ets_worker               | Accelerator for SAP ASE Worker
8     | ets_standby              | Accelerator for SAP ASE Standby
9     | xs_worker                | XS Advanced Runtime Worker
10    | xs_standby               | XS Advanced Runtime Standby

Import the database in SAP HANA Studio or SAP HANA Cockpit to verify the setup, for instance:

Figure 4. SAP HANA Studio management interface

Using the SAP HANA-HWC-ES-1.1 (fsperf) test tool

The SAP HANA-HWC-ES-1.1 (fsperf) test tool, introduced with SPS 10, requires new storage parameters. These parameters must be set when the new test tool is in use and when storage performance is tested with the SAP hardware validation (HWVAL) tool. The following parameters should be set in the SAP HWVAL JSON configuration file when using SAP HANA 1.0 SPS 12:

"parameter": {
    "async_read_submit": "on",
    "async_write_submit_active": "auto",
    "async_write_submit_blocks": "all",
    "max_parallel_io_requests": "64",
    "size_kernel_io_queue": "512"
},

Alternatively, for SAP HANA 1.0 installations, the parameters can be configured after the initial HANA installation by using the HANA hdbparam command as <sid>adm:

sh1adm@blade1:/usr/sap/SH1/HDB00> hdbparam --paramset fileio.async_read_submit=on
sh1adm@blade1:/usr/sap/SH1/HDB00> hdbparam --paramset fileio.async_write_submit_active=auto
sh1adm@blade1:/usr/sap/SH1/HDB00> hdbparam --paramset fileio.async_write_submit_blocks=all
sh1adm@blade1:/usr/sap/SH1/HDB00> hdbparam --paramset fileio.max_parallel_io_requests=64
sh1adm@blade1:/usr/sap/SH1/HDB00> hdbparam --paramset fileio.size_kernel_io_queue=512

For SAP HANA 2.0 installations, the parameters are set in the global.ini file after the initial HANA installation is complete. See SAP Note 2399079 (Elimination of hdbparam in HANA 2) for more information on how to configure these parameter settings.

SAP Connector API

The SAP HANA implementation with HPE XP7 arrays uses the SAP Connector API to access the SAP HANA persistence of a server. This SAP HANA built-in FC client is not a shared NFS-based file system implementation for the SAP HANA data and log area; it is a direct FC access method that gives the SAP HANA nodes the required high-throughput, low-latency access to the HANA database.

SAP HANA Host Auto-Failover high availability

The SAP FC client used with the HPE XP7 All Flash storage configuration provides a highly available SAP HANA deployment. If one SAP HANA node fails, the defined standby node requests access to the data and log devices of the failed node and automatically recovers its SAP HANA persistence so that SAP HANA operations can continue.

To use block storage together with SAP HANA Host Auto-Failover, appropriate remounting and I/O fencing mechanisms must be set up. SAP HANA offers a ready-to-use Storage Connector: if an SAP HANA host fails in a distributed system, the standby host takes over the persistence of the failing host by remounting the associated LUNs, together with proper fencing.

There are two fundamentally different storage configurations in the multiple-host system concept: shared storage devices versus separate storage devices with failover reassignment. Using a local SAP HANA Host Auto-Failover node requires setting up shared storage. The SAP HANA Storage API connector manages the LUNs; therefore, no LUNs are mounted into the file system by the OS.

The SAP HANA nameserver provides the Host Auto-Failover and System Replication takeover process. One of the most important uses of the failover hooks is moving around a virtual IP address (in conjunction with STONITH).

SAP HANA Host Auto-Failover protects against the following scenarios:

• Failure of an active worker node.

• Failure of the DB instance on an active master node.

• Double failover when the master node fails with no standby node being a master candidate: The second master candidate should become the new master node and the standby node should take over the worker node.

• Internal network bond failure for the SAP HANA active master node: In a SAN, either a switchover or a split-brain situation will occur.

The following example shows a takeover process handled by the Storage API connector:

[46975]{-1}[-1/-1] 2017-02-21 17:28:03.409440 i ha_provider HaProviderManager.cpp(00057) : starting Storage Connector init with ha_provider = hdb_ha.fcClient
[46975]{-1}[-1/-1] 2017-02-21 17:28:03.897849 i ha_provider HaProviderManager.cpp(00088) : Storage Connector API version = 2
[46991]{-1}[-1/-1] 2017-02-21 17:28:53.953459 i assign TREXNameServer.cpp(02168) : assign as standby nameserver. master nameserver is hana8s-p6:30001
[46991]{-1}[-1/-1] 2017-02-21 17:28:53.991848 i assign TREXNameServer.cpp(02297) : assign with host role=standby, subpath=2
[46991]{-1}[-1/-1] 2017-02-21 17:29:54.075726 i ha_provider HaProviderManager.cpp(00196) : attach storage partition 2 for role(s) worker
[47135]{-1}[-1/-1] 2017-02-21 17:29:54.077062 i ha_fcClient fcClient.py(00161) : fcClient.attach method called
[47135]{-1}[-1/-1] 2017-02-21 17:29:54.077232 i ha_fcClient fcClient.py(00172) : trying to attach for partition 2, usage type DATA on path /hana/data/mnt00002
[47135]{-1}[-1/-1] 2017-02-21 17:29:54.081046 i ha_fcClient fcClient.py(00198) : using --prout-type=5 for persistent reservations
[47135]{-1}[-1/-1] 2017-02-21 17:29:55.391515 i ha_fcClient fcClient.py(00237) : unmounting obsolete mount point '/hana/data/mnt00002' from previous failovers
[47135]{-1}[-1/-1] 2017-02-21 17:29:55.417426 i ha_fcClient fcClient.py(00072) : found 'dm-0' as internal multipath device name for wwid '360060e800727cf00003027cf00000104'
[47135]{-1}[-1/-1] 2017-02-21 17:29:55.686957 i ha_fcClient fcClient.py(00291) : a reservation for this host is already active, re-write reservation
[47135]{-1}[-1/-1] 2017-02-21 17:29:56.837345 i ha_fcClient fcClient.py(00354) : attached device '/dev/mapper/360060e800727cf00003027cf00000104' to path '/hana/data/mnt00002'
[47135]{-1}[-1/-1] 2017-02-21 17:29:56.837439 i ha_fcClient fcClient.py(00172) : trying to attach for partition 2, usage type LOG on path /hana/log/mnt00002
[47135]{-1}[-1/-1] 2017-02-21 17:29:56.840732 i ha_fcClient fcClient.py(00198) : using --prout-type=5 for persistent reservations
[47135]{-1}[-1/-1] 2017-02-21 17:29:56.965816 i ha_fcClient fcClient.py(00237) : unmounting obsolete mount point '/hana/log/mnt00002' from previous failovers
[47135]{-1}[-1/-1] 2017-02-21 17:29:56.992098 i ha_fcClient fcClient.py(00072) : found 'dm-6' as internal multipath device name for wwid '360060e800727cf00003027cf00000204'
[47135]{-1}[-1/-1] 2017-02-21 17:29:57.264432 i ha_fcClient fcClient.py(00291) : a reservation for this host is already active, re-write reservation
[47135]{-1}[-1/-1] 2017-02-21 17:29:58.422420 i ha_fcClient fcClient.py(00354) : attached device '/dev/mapper/360060e800727cf00003027cf00000204' to path '/hana/log/mnt00002'
[46991]{-1}[-1/-1] 2017-02-21 17:29:58.457932 i assign TREXNameServer.cpp(02497) : assign as standby finished

Figure 5. SAP HANA Studio Host Auto-Failover configuration

Multipath implementation

To access a block device from an SAP HANA physical Linux server, multipathing must be installed and configured. Edit the multipathing configuration in the /etc/multipath.conf file as follows:

hana8S-p1:/etc # more multipath.conf
defaults {
    polling_interval 10
    user_friendly_names "no"
}
devices {
    device {
        vendor "HP"
        product "OPEN-*"
        hardware_handler "0"
        path_selector "round-robin 0"
        path_grouping_policy "multibus"
        uid_attribute "ID_SERIAL"
        path_checker "tur"
        failback "immediate"
        rr_min_io_rq 1024
        rr_weight "uniform"
        no_path_retry 0
        features "0"
        detect_prio "no"
    }
}

To enable multipathing on SLES 12 SP 1, run the following:

hana8S-p8:/etc # systemctl enable multipathd
hana8S-p8:/etc # systemctl start multipathd
hana8S-p8:/etc # systemctl status multipathd
multipathd.service - Device-Mapper Multipath Device Controller
   Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled)
   Active: active (running) since Wed 2017-02-01 07:56:17 UTC; 30min ago
 Main PID: 1054 (multipathd)
   Status: "running"
   CGroup: /system.slice/multipathd.service
           1054 /sbin/multipathd -d -s

Udev tuning

To set the correct Linux I/O scheduler parameters so that they are parsed before the defaults, create a file called /etc/udev/rules.d/10-xp7.rules with the following entry:

ACTION=="add|change", KERNEL=="dm-*", \
    ATTR{queue/nr_requests}="4096", ATTR{queue/scheduler}="noop"

Performance observations and considerations

Using Thin Provisioned versus Fully Provisioned volumes

The introduction of SSDs has added complications for administrators when sizing for performance against the total cost of storage. System administrators typically provision much more storage than the various applications need because they plan for growth.

Using traditional provisioning techniques, the administrator may have to dedicate an excess number of SSDs to applications in order to satisfy performance and capacity service-level agreements (SLAs). The inefficiencies of traditional storage provisioning can negatively affect capital costs and storage administration resources. The most obvious issue is the amount of storage that remains unused and therefore increases the total cost of ownership. Also, since this allocated but unused storage capacity typically cannot be reclaimed for other applications, customers have to buy more storage capacity as their environments grow, increasing costs even further. At some point, customers may actually be required to buy a completely new storage system in addition to the one they have in place.

Thin Provisioning addresses the need to allocate storage based on the capacity actually used, virtualizing storage to reduce costs and gain efficiency. Thin Provisioning is a technology that presents large virtual volumes (v-vols) to the host, backed by a pool of physical storage. A pool that has less capacity than the total provisioned v-vol capacity is considered thin.
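As a rough illustration of that definition (just the arithmetic, not HPE XP7 behavior), a pool's subscription state can be sketched like this:

```python
# Sketch of the thin-pool definition above: a pool is "thin" when the
# capacity provisioned to hosts as v-vols exceeds the physical pool
# capacity. The numbers used below are illustrative, not HPE XP7 limits.

def subscription_ratio(provisioned_tb: float, pool_tb: float) -> float:
    """Provisioned v-vol capacity divided by physical pool capacity."""
    return provisioned_tb / pool_tb

def is_thin(provisioned_tb: float, pool_tb: float) -> bool:
    """Per the definition above: the pool is smaller than the provisioned total."""
    return pool_tb < provisioned_tb

print(subscription_ratio(100.0, 40.0))  # 2.5 (oversubscribed)
print(is_thin(100.0, 40.0))             # True
```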

HPE XP7 FMD drives are used for thick LUNs as well as Thin Provisioning (THP) thin LUNs. When using THP, you can expect read performance to be equal to thick LUN performance and approximately 5% less for write performance. The majority of customers will choose to use FMD drives with THP to avoid stranding valuable storage space.

Hewlett Packard Enterprise recommends using Thin Provisioned storage volumes for SAP HANA TDI.

Impact of RAID level on KPI performance metrics

Using the HANA-HWC-ES-1.1 tool, depending on the RAID level, 0.5 to 2 FMDs are required per HANA node, and the solution scales to a maximum of 128 HANA nodes.

For maximum availability, RAID 5 or RAID 6 is recommended. For maximum performance, RAID 1 is recommended. Hewlett Packard Enterprise recommends the following RAID configurations:

• For a RAID 5 (3D + 1P) configuration, the solution scales to 96 HANA nodes with 1 or 2 FMDs per HANA node.
• For a RAID 6 (6D + 2P) configuration, the solution scales to 64 HANA nodes with 2 FMDs per HANA node.
• For a RAID 1 (2D + 2D) configuration, the solution scales to 128 HANA nodes with 0.5 FMDs per HANA node.
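The scaling rules above can be sketched as a small lookup. The per-node FMD counts and node limits are taken from the bullets; the function itself is only an illustration, not an HPE sizing tool:

```python
import math

# Illustrative sketch of the FMD scaling rules listed above:
# FMDs per HANA node and maximum node count per RAID level.
# For RAID 5 the bullet allows 1 or 2 FMDs per node; the lower
# bound is used here.
RAID_RULES = {
    "RAID5_3D+1P": {"fmds_per_node": 1.0, "max_nodes": 96},
    "RAID6_6D+2P": {"fmds_per_node": 2.0, "max_nodes": 64},
    "RAID1_2D+2D": {"fmds_per_node": 0.5, "max_nodes": 128},
}

def fmds_required(raid: str, nodes: int) -> int:
    """Minimum FMD count for a node count, per the rules above."""
    rule = RAID_RULES[raid]
    if nodes > rule["max_nodes"]:
        raise ValueError(f"{raid} scales to at most {rule['max_nodes']} HANA nodes")
    return math.ceil(nodes * rule["fmds_per_node"])

print(fmds_required("RAID1_2D+2D", 128))  # 64
```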

Impact of cache size

As an example, for a RAID 6 configuration, assigning 14 GB of cache per SAP HANA node allows the HANA KPIs to pass for a production workload. Increasing the cache size to 20 GB per SAP HANA node provides a better safety margin for the 16 KB block size overwrite performance.

Using storage compression

Hewlett Packard Enterprise introduced software compression and deduplication with the V05 firmware, capable of post-process data reduction. This has not been used in our testing; therefore, it is not supported for SAP HANA TDI.

HPE XP7 inline HW compression is an always-on sliding window LZ77 derivative that delivers real-time compression. During lab testing, HPE XP7 inline HW compression ratios observed for SAP HANA were between 1.7:1 and 2:1.² This is based on a 4 TB scale-up and scale-out test setup, generating a database load with a 10% daily change rate, using an HPE internal load generation tool. This results in physical storage capacity savings of around 68%. Although a compression ratio of 1.7:1 was observed during our testing, these values may vary between different customer use cases. In our sizing rules, inline compression ratios are not factored in.

The Gen2 FMDs with compression are provisioned and managed through the HPE CVAE GUI or the Remote Web Console. HPE CVAE provides logical, physical, and host management capabilities for HPE XP7 storage; provisioning and storage pooling for both internal and external storage; capacity analysis; and multiple levels of security for storage administrators.

² Based on HPE internal testing, 2017

Sharing SAP HANA and non-HANA workloads

If more than one HANA system is configured per HPE XP7 array, the HPE XP7 is capable by design of serving multiple tenants, with logical and physical segregation as desired, by utilizing cache partitioning, multiple Thin Provisioned pools, resource partitioning, and virtual DKCs. These features ensure a level of quality of service and prevent negative performance impact between different SAP HANA and non-SAP HANA systems.

Physical separation of pools and ports will guarantee zero performance impact to the SAP HANA workload.³ There are two ways to achieve this configuration:

• Exclusive Multi-Processor Blade pairs (MPBs): no dedicated Cache Partition (CLPR) needs to be defined.

• Shared MPBs: Dedicated CLPR (Cache Partition) for SAP HANA should be defined.

The workload-sharing configuration provides scalability because it is possible to add further systems without affecting existing system performance.

The following figure shows the workload sharing options, using an 8-node SAP HANA scale-out configuration, mixed with a non-HANA workload:

Figure 6. Using exclusive MPBs, no dedicated CLPR Cache Partitions defined for SAP HANA

Figure 7. Using shared MPBs, dedicated CLPR Cache Partitions defined for SAP HANA

³ Based on HPE internal testing, 2017

Example setup configurations:

RAID 5 example:

• Up to three nodes may share an FC port pair.
• Up to six nodes may share one MPB blade.
• All nodes share the same CLPR.
• All nodes may share the same THP pool.
• One or two FMCs (or more) per node.

RAID 6 example:

• Up to two nodes may share an FC port pair.
• Up to four nodes may share one MPB blade.
• All nodes share the same CLPR.
• All nodes may share the same THP pool.
• Two FMCs (or more) per node.

The following figure shows the performance result of a mixed SAP HANA and non-HANA workload using physically separated ports and pools:

Figure 8. Performance graph for mixed SAP HANA (green) and non-HANA (blue) workloads

Configuring multipath

For I/O path failover, open a command line interface and create or edit the /etc/multipath.conf file with the appropriate array configuration parameters. Hewlett Packard Enterprise recommends using the Linux Device Mapper configuration file and multipathing parameter settings published on HPE.com. Use only the array-specific settings, not the multipath.conf file bundled into the device mapper kit.

For HPE XP7 LUNs, the following path policies are recommended:

hana8S-p1:/etc # more multipath.conf
defaults {
    polling_interval 10
    user_friendly_names "no"
}
devices {
    device {
        vendor "HP"
        product "OPEN-*"
        hardware_handler "0"
        path_selector "round-robin 0"
        path_grouping_policy "multibus"
        uid_attribute "ID_SERIAL"
        path_checker "tur"
        failback "immediate"
        rr_min_io_rq 1024
        rr_weight "uniform"
        no_path_retry 0
        features "0"
        detect_prio "no"
    }
}

Enter the following command to scan the LUNs that are connected to the arrays:

hana8S-p8:/usr/bin # ./rescan-scsi-bus.sh

Tuning Udev

To set the correct Linux I/O scheduler parameters so that they are parsed before the defaults, create a file called /etc/udev/rules.d/10-xp7.rules with the following entry:

ACTION=="add|change", KERNEL=="dm-*", \
    ATTR{queue/nr_requests}="4096", ATTR{queue/scheduler}="noop"

Tuning queue depth

The ql2xmaxqdepth parameter defines the maximum queue depth reported to the SCSI midlayer per device. The queue depth setting specifies the number of outstanding requests per LUN. The default is 32, which is sufficient with Flash Module Devices.

The queue depth can be adjusted by creating a dedicated file for qla2xxx in the /etc/modprobe.d/ directory with the following line:

options qla2xxx ql2xmaxqdepth=32

HPE XP7 port configuration

HPE XP7 is set up using dual redundant paths.

Hewlett Packard Enterprise recommends the following FC port connections:

• For a RAID 5 (3D + 1P) configuration, the solution scales to 96 HANA nodes with 0.5 16 Gb FC ports per HANA node.
• For a RAID 6 (6D + 2P) configuration, the solution scales to 64 HANA nodes with 0.5 16 Gb FC ports per HANA node.
• For a RAID 1 (2D + 2D) configuration, the solution scales to 128 HANA nodes with 0.5 16 Gb FC ports per HANA node.

Scalability and sizing guidelines

HPE XP7 storage capacity sizing for SAP HANA TDI should be based on the sizing rules given in the SAP HANA TDI—Storage Requirements white paper from SAP. These sizing rules should be applied based on the memory needed for the SAP HANA database.
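As an assumption for illustration, the commonly cited formulas from the SAP HANA TDI—Storage Requirements white paper can be sketched as follows, using RAM as a proxy for net data size; verify the exact rules against the current version of that SAP document before sizing:

```python
# Hypothetical sketch of the SAP TDI storage sizing rules (an assumption,
# not taken from this paper): data volume ~ 1.2 x RAM, log volume = RAM/2
# capped at 512 GB, and /hana/shared = 1 x RAM capped at 1 TB (per group
# of 4 worker nodes in scale-out systems).

def hana_storage_sizing_gb(ram_gb: float) -> dict:
    data_gb = 1.2 * ram_gb              # data volume ~ 1.2 x RAM
    log_gb = min(0.5 * ram_gb, 512.0)   # log volume: RAM/2, capped at 512 GB
    shared_gb = min(ram_gb, 1024.0)     # /hana/shared: 1 x RAM, capped at 1 TB
    return {"data": data_gb, "log": log_gb, "shared": shared_gb}

# A 2 TB HANA node: log capped at 512 GB, shared capped at 1 TB.
print(hana_storage_sizing_gb(2048))
```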

Performance-related RAID recommendations:

• RAID 3+1 is the recommended RAID 5 type for FMDs, as RAID 7+1 would expose the system to potential shelf-failure outages. For small-capacity FMDs, RAID 3+1 is fine to use. For FMDs greater than 3.5 TB, RAID 6 should be considered for availability reasons.

• RAID 6+2 is the recommended RAID 6 type for FMDs for the same reason (shelf-failure resilience, as opposed to RAID 14+2). Any FMD capacity can be used with RAID 6+2. For best availability, RAID 6 is the preferred RAID level, but it requires more array controller resources and leads to higher system costs.

• RAID 2+2 is the recommended RAID 1 type for FMDs when the highest performance is required. RAID 1 scales to the highest node count.

RAID 5

HPE XP7 storage performance RAID 5 sizing is based on the following table, assuming a 1:1 compression ratio:

Table 1. RAID 5 sizing HPE XP7 All Flash solution for SAP HANA

RAID 5 (3D+1P)             | Single DKC           | Dual DKC             | Twin DKC
Max. nodes per HPE XP7     | 32                   | 64                   | 96
FC ports (16 Gb)           | 0.5 per node         | 0.5 per node         | 0.5 per node
FMD                        | 1 per node           | 1 per node           | 2 per node
Max. nodes per MPB blade   | 4                    | 4                    | 6
Thin Provisioned           | Yes                  | Yes                  | Yes
Cache (GiB/HANA)           | 14 per HANA instance | 14 per HANA instance | 20 per HANA instance
Mixed workloads            | Yes, up to 24 nodes  | Yes, up to 48 nodes  | Yes, up to 48 nodes
HANA capacity (3.5 TB FMC) | 116 TB               | 232 TB               | 2.8 PB

The following configuration rules need to be taken into account:

• HPE XP7 has to be connected to the SAN with a minimum of one port per FC fabric and a minimum of two ports.

• Up to three nodes share a FC port pair.

• Up to six nodes share one MPB blade.

• All nodes share the same CLPR.

• All nodes share the same THP pool.

• One FMD (or more) per node (RAID 5).
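The RAID 5 configuration rules above translate into simple arithmetic. The following sketch (illustrative only) derives the minimum port, MPB, and FMD counts for a given node count:

```python
import math

# Sketch of the RAID 5 sharing rules above: up to three nodes per FC port
# pair, up to six nodes per MPB blade, at least one FMD per node, and a
# SAN minimum of one port per fabric (two ports total).

def raid5_fabric_layout(nodes: int) -> dict:
    if not 1 <= nodes <= 96:
        raise ValueError("the RAID 5 rules here cover 1 to 96 HANA nodes")
    port_pairs = math.ceil(nodes / 3)    # up to 3 nodes share an FC port pair
    fc_ports = max(2 * port_pairs, 2)    # minimum of two ports, one per fabric
    mpb_blades = math.ceil(nodes / 6)    # up to 6 nodes share one MPB blade
    fmds_min = nodes                     # one FMD (or more) per node
    return {"port_pairs": port_pairs, "fc_ports": fc_ports,
            "mpb_blades": mpb_blades, "fmds_min": fmds_min}

print(raid5_fabric_layout(96))  # 32 port pairs, 64 ports, 16 MPBs, 96 FMDs
```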

RAID 6

HPE XP7 storage performance RAID 6 sizing is based on the following table, assuming a 1:1 compression ratio:

Table 2. RAID 6 sizing HPE XP7 All Flash solution for SAP HANA

RAID 6 (6D+2P)             | Single DKC           | Dual DKC             | Twin DKC
Max. nodes per HPE XP7     | 32                   | 40                   | 64
FC ports (16 Gb)           | 0.5 per node         | 0.5 per node         | 0.5 per node
FMD                        | 2 per node           | 2 per node           | 2 per node
Max. nodes per MPB blade   | 4                    | 4                    | 4
Thin Provisioned           | Yes                  | Yes                  | Yes
Cache (GiB/HANA)           | 20 per HANA instance | 20 per HANA instance | 30 per HANA instance
Mixed workloads            | Yes, up to 16 nodes  | Yes, up to 32 nodes  | Yes, up to 32 nodes
HANA capacity (3.5 TB FMC) | 422 TB               | 844 TB               | 5.7 PB

The following configuration rules need to be taken into account:

• The dual DKC solution is effectively limited to 40 nodes because of 80 FMD slots in dual DKC layout.

• Up to two nodes share a FC port pair.

• Up to four nodes share one MPB blade.

• All nodes share the same CLPR.

• All nodes share the same THP pool.

• Two FMDs (or more) per node (RAID 6).

RAID 1

HPE XP7 storage performance RAID 1 sizing is based on the following table, assuming a 1:1 compression ratio:

Table 3. RAID 1 sizing HPE XP7 All Flash solution for SAP HANA

RAID 1 (2D+2D)             | Single DKC          | Dual DKC            | Twin DKC
Max. nodes per HPE XP7     | 64                  | 128                 | 128
FC ports (16 Gb)           | 0.5 per node        | 0.5 per node        | 0.5 per node
FMD                        | 0.5 per node        | 0.5 per node        | 0.5 per node
Max. nodes per MPB blade   | 16                  | 16                  | 16
Thin Provisioned           | Yes                 | Yes                 | Yes
Cache (GiB/HANA)           | 8 per HANA instance | 8 per HANA instance | 8 per HANA instance
Mixed workloads            | Yes, up to 24 nodes | Yes, up to 48 nodes | Yes, up to 48 nodes
HANA capacity (3.5 TB FMC) | 422 TB              | 844 TB              | 5.7 PB

HPE XP7 All Flash for SAP HANA TDI in multitenant environments

The HPE XP7 array can be deployed in mixed-workload and multitenant environments. Its storage architecture allows transaction-intensive and throughput-intensive workloads to run on the same storage resources without contention, supporting massive consolidation and multitenancy. For example, the system can easily handle an OLTP application and an extremely bandwidth-consuming data warehousing application concurrently. This capability is made possible by physically isolating different I/O-intensive workloads, so that transaction-intensive workloads are not held up behind throughput-intensive workloads. As a result, the HPE XP7 storage array delivers consistently excellent performance, even in mixed-workload scenarios.

HPE high availability and business continuity solutions for SAP HANA TDI

The recovery-point objective (RPO) is the maximal permissible amount of operational data that may be lost without the ability to recover. The recovery-time objective (RTO) is the maximal permissible time it takes to recover the system so that its operations can be resumed.

The HPE XP7 All Flash solution for SAP HANA supports the standard SAP HANA host-based replication options:

• SAP HANA Host Auto-Failover (HA)

– Separate dedicated standby hosts are used for failover in case of a failure of the primary active host or hosts with the SAP HANA Host Auto-Failover (HA) functionality.

• SAP HANA System Replication (HSR)

– SAP HANA System Replication (HSR) provides continuous updates of secondary systems by a primary system, including in-memory table loading.

The HPE XP7 All Flash solution for SAP HANA also supports a synchronous true Active-Active storage-based replication solution, an asynchronous storage-based replication, or a combination of synchronous and asynchronous storage-based replication across multiple sites. The following business continuity options are available:

• HPE XP7 synchronous Storage Replication with HA

– Active-Active Read-Write on both sites: Using the same WWN name on both sides, the LUNs to the host will look and feel the same. A host can be connected to both physical arrays and load balance using MPIO across both arrays. This solution provides zero RPO and zero RTO.

– In addition, HPE XP7 Active-Active HA can also operate in ALUA mode in order to keep I/O traffic data center local. The nearby (local) array identifies its host path as "optimized" while the distant (remote) array identifies as "nonoptimized." The MPIO driver automatically adapts to these array settings. The Linux Device Mapper configuration setting for ALUA detection would look like this:

device {
    vendor "HP"
    product "OPEN-.*"
    path_grouping_policy group_by_prio
    uid_attribute "ID_SERIAL"
    path_selector "round-robin 0"
    path_checker tur
    detect_prio yes
    prio "alua"
    hardware_handler "0"
    failback immediate
    rr_weight uniform
    rr_min_io_rq 1
    no_path_retry 18
}

• HPE XP7 synchronous + asynchronous Storage Replication with HA+3DC

– Active-Active HA across two arrays, and asynchronous replication to a third site: This solution provides zero RPO and zero RTO across each HA pair and near-zero RPO and RTO to the third site.

• HPE XP7 asynchronous Storage Replication 2DC+CAJ

– Using asynchronous replication does not add any performance penalty to the SAP HANA workload. This solution provides near-zero RPO and RTO. It is managed as normal replication failover.

• HPE XP7 (synchronous + asynchronous) or (asynchronous + asynchronous) Storage Replication 3DC/NxN

– NxN refers to three or more data centers, for example, 5DC, or cascade and multitarget configurations: The synchronous replication links will provide zero RTO but may have a performance penalty, depending on the link or bandwidth provided. The asynchronous replication links will provide near-zero RPO and RTO with no performance penalty. It is managed as normal replication failover.
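The trade-offs above can be summarized as a small decision sketch. The option names are shorthand for the bullets, and the logic is illustrative rather than an HPE-prescribed selector:

```python
# Illustrative decision sketch for the storage replication options above:
# synchronous replication gives zero RPO/RTO but may carry a performance
# penalty with distance, while asynchronous replication gives near-zero
# RPO/RTO with no performance penalty to the SAP HANA workload.

def pick_replication(zero_rpo_required: bool, third_site: bool) -> str:
    if zero_rpo_required and third_site:
        return "sync Active-Active HA + async to 3rd site (HA+3DC)"
    if zero_rpo_required:
        return "sync Storage Replication with HA"
    if third_site:
        return "3DC/NxN (sync+async or async+async)"
    return "async Storage Replication (2DC+CAJ)"

print(pick_replication(True, False))  # -> "sync Storage Replication with HA"
```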

The HPE XP7 All Flash solution for SAP HANA also supports a storage-based replication solution in a VMware® virtual HANA environment:

• HPE XP7 Storage Replication with HA integration and VMware vSphere® High Availability & VMware® vCenter™ Site Recovery Manager™ to manage 3-site DR with HA+CAJ

The following table provides an overview of the various replication options indicating the impact to RPO and RTO:

Table 4. High availability and business continuity levels of HPE XP7 All Flash solution for SAP HANA TDI

Solution | Used for | RPO | RTO
SAP HANA 3rd-party Online Backup and Recovery integration | HA & DR | High | High
SAP HANA 3rd-party Snapshot Backup and Recovery integration | HA & DR | Medium | Medium
HPE SGeSAP | HA & DR | 0 with synchronous replication | 0 with synchronous replication
SAP HANA Host Auto-Failover (HA) | HA | 0 | 0
SAP HANA sync/async System Replication (HSR) | HA & DR | 0 with synchronous replication | 0 with synchronous replication
HPE XP7 sync Storage Replication with HA | HA & DR | 0 | 0
HPE XP7 sync+async Storage Replication with HA+3DC | HA & DR | Low | Low
HPE XP7 async Storage Replication with 2DC+CAJ | HA & DR | Low | Low
HPE XP7 sync+async/async+async Storage Replication with 3DC/NxN | HA & DR | Low | Low
VMware HA/STONITH | HA | 0 | 0
HPE XP7 sync Storage Replication with HA and VMware HA & SRM with HA+3DC+CAJ | HA & DR | Low | Low

HPE Data Protection solutions for SAP HANA TDI

Data protection is a critical component of a robust SAP HANA environment. While persistent storage is in place to mitigate memory failures or server power loss, storage failures, corruption, or even natural disasters can still occur. A robust data protection strategy is required to mitigate these risks. A solid plan for backup and recovery of data, log files, operating system data, and configuration files must be in place.

SAP HANA itself has two types of backups, which are both needed to recover the database to a specific point in time:

1. Data backups can be triggered manually or scheduled in the SAP HANA Studio, DBA Cockpit, or by SQL commands. A data backup effectively replicates a database savepoint to the backup destination.

2. Log backups occur automatically when a log segment (a file on disk with a fixed size) fills up or a configurable time threshold is exceeded. The log segment is then copied to the backup destination. This may happen in parallel to a data backup.
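The log backup trigger described in point 2 can be sketched as follows; the parameter names and the 64 MiB segment size in the example are illustrative assumptions:

```python
# Minimal sketch of the log backup trigger described above: a log segment
# is backed up when the fixed-size segment file fills up, or when a
# configurable time threshold is exceeded, whichever comes first.

def should_backup_log(segment_used_bytes: int, segment_size_bytes: int,
                      seconds_since_last_backup: float,
                      interval_seconds: float) -> bool:
    segment_full = segment_used_bytes >= segment_size_bytes
    interval_elapsed = seconds_since_last_backup >= interval_seconds
    return segment_full or interval_elapsed

# A full 64 MiB segment triggers a backup regardless of the timer:
print(should_backup_log(64 * 2**20, 64 * 2**20, 10.0, 900.0))  # True
```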

SAP HANA offers three options to back up the database:

1. File system: An external shared file system can be used as the backup target for all nodes of an SAP HANA installation and is the easiest way to implement a backup solution within an SAP HANA environment. It is recommended by SAP that the shared file system for the backup should not use the same storage as the database. HPE StoreOnce Data Protection can be used for log and data backups.

2. Backint: An Enterprise Backup Solution (EBS) can be used via Backint for SAP HANA. This EBS needs to be certified for SAP HANA; a list of all certified solutions is available from SAP. To search the list, enter the search term SAP HANA-BRINT to see a list of partners. Click a partner name, for example, Hewlett Packard Enterprise, and then click SAP Certified Solutions to get a list of all HPE SAP certified solutions. All backups are transferred to the third-party provider software, which transports the data and log backups to the backup storage device, for example, a tape or virtual tape library. The benefit of the Backint approach is that all backup jobs are monitored in the EBS. The benefit of an HPE StoreOnce product is the deduplication of data, which can be either source or target based.

3. Storage snapshot: Since SPS 07, SAP HANA can use storage snapshots for backup. Using integrated scripting solutions, the SAP HANA database puts itself into backup mode before the storage snapshot is taken. A snapshot has the benefit that it can be created with minimal impact on the SAP HANA system, and a restore from a snapshot is faster than a recovery via file or Backint. Snapshots apply to data backups only; log backups must still be written via file or Backint. Because snapshots are fast, they are considered a good method to speed up the overall backup process for SAP HANA. Note that additional storage capacity is required to implement a snapshot backup solution.
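The snapshot-based flow described above can be sketched as a three-step sequence. The `CREATE SNAPSHOT`/`CLOSE SNAPSHOT` SQL follows SAP HANA documentation (SPS 07 or later); the backup ID and external snapshot ID are placeholders, and the array-side snapshot itself (on the XP7, for example) is taken by storage tooling between the two statements:

```python
# Sketch of the storage-snapshot backup flow. backup_id is the ID SAP HANA
# assigns to the prepared snapshot; external_id is a caller-chosen label
# for the array-side snapshot. Both values here are placeholders.
def snapshot_backup_steps(backup_id: int, external_id: str) -> list:
    return [
        "BACKUP DATA CREATE SNAPSHOT",               # 1. HANA enters snapshot mode
        "-- take the storage snapshot here --",      # 2. array-side snapshot
        f"BACKUP DATA CLOSE SNAPSHOT BACKUP_ID {backup_id} "
        f"SUCCESSFUL '{external_id}'",               # 3. confirm in the backup catalog
    ]

for step in snapshot_backup_steps(1234567890, 'xp7-snap-001'):
    print(step)
```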



The HPE StoreOnce Plug-in for SAP HANA can be downloaded from the web for free. The plug-in leverages Backint to enable optimized protection for SAP HANA databases: the database administrator (DBA) can back up and restore directly to and from a Catalyst store on an HPE StoreOnce Backup appliance. The result is flexible, high-performance protection, managed by the DBA, that can be configured to meet the protection needs of specific databases independently of, or in addition to, organization-wide data protection processes.

Because the plug-in is integrated with Backint for SAP HANA, backups are efficiently transferred from the SAP HANA database to the HPE StoreOnce Backup target. Once the backup target is created, backup, restore, and other data protection tasks can be executed through SAP HANA Studio and/or the SAP HANA CLI. The HPE StoreOnce Plug-in for SAP HANA must be installed as the SAP HANA operating system user and offers guided installation and configuration for fast setup.

The plug-in itself has a zero-cost license, but a Catalyst license must be purchased and installed for all appliances that host the target HPE StoreOnce store or stores. It is supported for use with all HPE StoreOnce appliances running the required HPE StoreOnce software version. Consult the following white paper for more information: HPE Reference Architecture 2.0 for SAP HANA backup and recovery using the StoreOnce Catalyst Plug-in for SAP HANA 1.0.1.

Consulting and support services from HPE Pointnext

HPE Pointnext is built on three types of services:

1. Our Advisory and Transformation Services are at the forefront, designing customers' transformation journeys and building road maps tailored to their unique challenges.

2. Our Professional Services specializes in flawless and on-time implementation, on-budget execution, and creative configurations for software and hardware.

3. Our Operational Services will offer new ways to deliver IT by managing and optimizing on-premises and cloud workloads, resources, and capacity.

HPE Pointnext is a services organization built for the future to help customers optimize and leverage the ideal technologies, partners, and operational foundations needed to accelerate their digital journey, all while providing a seamless customer experience. Building on our heritage of services leadership, HPE Pointnext will invest and focus on the following areas:

1. Optimizing infrastructure: Having the right infrastructure in place is a critical first step on the digital transformation journey. For decades, HPE's core strength has been rooted in infrastructure, specifically in how to design, integrate, and support solutions that perform and scale to meet the unique demands of the apps and data that drive businesses.

2. Curating a best-in-class partner ecosystem—Many businesses struggle to determine which technologies and vendors can best solve their unique problem and how to bring them together. A key ingredient to create successful solutions for our customers is our ability to collaborate with the right partners. We’ve built a strong ecosystem of industry leaders like SAP and Microsoft®, innovative startups like Docker and Mesosphere, and strategic integrators such as CSC/ES, Accenture, PwC, and Deloitte.

3. Removing complexity across all areas of the business—HPE Pointnext experts help customers go beyond the technology problem of digital transformation to address culture, measurement, skills, change management as well as new approaches to funding and IT consumption options. This helps leaders across business and IT focus on innovation and create value, rather than on operations.

4. Building for speed—Our scalable approach is designed to deliver faster time to value for our customers, focusing on helping them build solid foundations in technology, process, and people to enable them to learn quickly and continuously improve.

Each time HPE Pointnext experts work with a customer, that customer is not just receiving the knowledge and expertise of the team but our decades of experience. We have developed industry-leading IP and an extensive library of enterprise-class designs and blueprints from over 11,000 successful implementations. We know what works and what doesn't because we've done it many times, both for our own infrastructure and for thousands of customers.

For Hewlett Packard Enterprise, HPE Pointnext is a redefined and future-focused organization with a new approach to services. It’s a way we can make a difference in our customers’ businesses, beyond providing the software-defined infrastructure they depend on. With HPE Pointnext, we will not only offer the necessary technology infrastructure and tools but we will also join our customers on their journeys to digital transformation.

To learn more about HPE Pointnext, visit hpe.com/pointnext.

Conclusion

Several storage solutions for SAP HANA TDI are available from HPE, ranging from entry-level MSA and Nimble Storage to midrange and enterprise HPE 3PAR and XP7 solutions, increasing hardware vendor flexibility. Each provides its own level of availability and resilience, reducing hardware and operational costs, lowering risk, and improving availability and performance.

HPE XP7 eliminates downtime with its proven 100% data availability on a single system and "zero" seconds of combined installed-base downtime since its inception. It also maximizes performance for write-intensive SAP HANA workloads with Flash Module Devices (FMDs), which deliver significantly better write performance than SSDs. As such, it is an ideal platform for a mission-critical database that requires no downtime and little to no impact from mixed workloads, thanks to hardware resource partitioning.

Inline compression data reduction technologies bring unmatched storage efficiency and affordability to mission-critical applications.

For a high availability, business continuity, and/or disaster recovery data protection solution, Hewlett Packard Enterprise offers storage-based replication and snapshots. HPE Serviceguard offers the industry's only fully automated and unattended high availability and disaster tolerance solution for SAP HANA.

References

SAP HANA Storage Requirements, v2.10, February 2017: assets.cdn.sap.com/sapcom/docs/2015/03/74cdb554-5a7c-0010-82c7-eda71af511fa.pdf

Learn more at
hpe.com/storage/xp7
hpe.com/info/sap/hana

© Copyright 2017 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Intel Xeon is a trademark of Intel Corporation in the U.S. and other countries. Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. SAP and SAP HANA are trademarks or registered trademarks of SAP SE in Germany and in several other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. VMware, VMware vSphere High Availability, and VMware vCenter Site Recovery Manager are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other third-party trademark(s) is/are property of their respective owner(s).

a00020660ENW, August 2017