Anuj Sharma Sajid Iqubal
DEPLOYING X86 AND SAP HANA ENVIRONMENTS ON VBLOCK
2015 EMC Proven Professional Knowledge Sharing 2
Table of Contents
Introduction ................................................................................................................................ 3
Design Principles for the Architecture of Virtualized SAP HANA Environment on Vblock ........... 4
Design Principles for Architecture of Virtualized x86 Environment on Vblock ............................18
Design Principles for the 3-Site Enterprise Architecture ............................................................26
Migration Strategy for Migrating Physical x86 Environments to Vblock Virtualized Environment
.................................................................................................................................................28
Backup Strategy and best practices for backup of Vblock Converged Infrastructure Private
Cloud Tenants...........................................................................................................................30
References ...............................................................................................................................34
Disclaimer: The views, processes or methodologies published in this article are those of the
authors. They do not necessarily reflect EMC Corporation’s views, processes or methodologies.
Introduction
More and more Enterprises are adopting SAP HANA for deploying SAP Business Suite and
ERP to leverage the advantages of SAP HANA's in-memory computing technology. The SAP
HANA database is a hybrid in-memory database that combines row-based, column-based, and
object-based database technology. It is optimized to exploit the parallel processing capabilities
of modern multi-core CPU architectures and is 3,600 times faster than standard disks allowing
massive amounts of data to be queried and analyzed faster. Due to very strict hardware
requirements of SAP HANA, Enterprises are forced to segregate their SAP HANA environments
from x86 environments. This leads to siloed environments in the data center. This approach
contradicts the IT Transformation Vision that enterprises are aiming to achieve by deploying
infrastructure that can serve as a robust foundation for private cloud environments. This article
will discuss design principles for deploying x86 and SAP environments on Vblock using the
SAP-certified Tailored DataCenter Integration (TDI) approach. Using the SAP TDI approach,
enterprises can deploy SAP HANA and X86 on private cloud powered by Vblock Converged
Infrastructure Systems. x86 and SAP HANA environments are deployed as tenants of private
cloud infrastructure. This approach accelerates the journey to the cloud for Enterprises as the
cloud infrastructure comprises industry-leading technologies. In this article, we will discuss:
Design Principles for the Architecture of Virtualized x86 Environment on Vblock.
Design Principles for the Architecture of Virtualized SAP HANA Environment to comply
with SAP HANA Tailored DataCenter Approach on Vblock.
Design Principles for deploying 3-Site Enterprise Architecture.
Migrating strategy of existing x86 Environment to Vblock Converged Infrastructure.
Backup strategy and best practices for backup of Vblock Converged Infrastructure
Private Cloud Tenants.
Design Principles for the Architecture of Virtualized SAP HANA Environment on Vblock
Due to the in-memory nature of SAP HANA databases, SAP has very strict rules for
compute, network, and storage. SAP certifies certain CPU’s, Server Models, and
Storage Models for running SAP HANA workloads. This certification approach led to additional
SAP HANA environment silos in the data center, which contradicts the IT Cloud
Transformation roadmap that Enterprises are planning. It raised
serious concerns with SAP customers, which led SAP to introduce the SAP HANA TDI
architecture that allows Enterprises to build a SAP HANA environment as a tenant on
private cloud infrastructures while complying with SAP HANA design rules. Vblock satisfies the
stringent SAP HANA rules and allows SAP HANA to run as a tenant on the Vblock
Converged Infrastructure. We will discuss how we can build the SAP HANA Tenant on
Vblock satisfying SAP HANA requirements and best practices.
As per Figure 1, Vblock satisfies the requirements of all the zones for SAP HANA deployment
i.e. Client Zone, Internal Zone, and Storage Zone by isolating workloads in different domains.
Figure 1
2.1 SAP HANA Compute Design Principles: Scale-Out Solution
We can have a bare-metal SAP HANA deployment or a SAP HANA virtualized
deployment on VMware. I recommend the VMware deployment as it gives more flexibility in
terms of deploying SAP CONS, SAP PROD, SAP DEV, and SAP QAS environments
and optimizing compute resources. Some limitations currently apply that we expect
to be relaxed with the release of vSphere 6.0. Below are the rules that should
be followed for both types of deployment.
2.1.a Bare-Metal Implementation
a) Use UCS Blades that use Intel E7-4870 (10-core) or E7-4880/4890 v2 (15-core)
processors.
b) UCS Blades can have 2, 4, or 8 processors which would mean:
a. 4870 – 20-cores (40-threads), 40-cores (80-threads), or 80-cores (160-threads)
b. 4880/4890 v2 – 30-cores (60-threads), 60-cores (120-threads), or 120-cores
(240-threads)
c) For the 4880/4890 v2, the rules are 256GB/CPU for BW on HANA and 768GB/CPU for
Suite on HANA.
d) The current generation of servers for Suite on HANA-certified CPU and Memory
configurations for UCS are :
a. 2 x 4880/4890 v2 (30-cores, 60-threads) with 128GB
b. 2 x 4880/4890 v2 (30-cores, 60-threads) with 256GB
c. 2 x 4880/4890 v2 (30-cores, 60-threads) with 512GB
d. 2 x 4880/4890 v2 (30-cores, 60-threads) with 768GB
e. 2 x 4880/4890 v2 (30-cores, 60-threads) with 1536GB
e) The challenge is that these servers cannot be shipped with only 1 CPU, so 2 is the
minimum. With anything less than 1536GB memory, you have far more CPU power than
you require.
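As an illustration, the memory-per-CPU rules quoted above for the E7-4880/4890 v2 can be encoded in a small validation helper. This is a hedged sketch: the constants come from the rules above, but the function names and structure are illustrative, not an official SAP or Cisco sizing tool.

```python
# Sketch: validate a candidate UCS blade configuration against the
# SAP memory-per-CPU rules for E7-4880/4890 v2 processors noted above.
# The dict keys and function names are illustrative assumptions.

GB_PER_CPU = {
    "BW on HANA": 256,     # 256GB per CPU socket for BW on HANA
    "Suite on HANA": 768,  # 768GB per CPU socket for Suite on HANA
}

def max_memory_gb(cpu_sockets: int, workload: str) -> int:
    """Maximum supported memory for a blade with the given socket count."""
    return cpu_sockets * GB_PER_CPU[workload]

def config_is_valid(cpu_sockets: int, memory_gb: int, workload: str) -> bool:
    return memory_gb <= max_memory_gb(cpu_sockets, workload)

# A 2-socket blade with 1536GB is valid for Suite on HANA (2 x 768GB)...
print(config_is_valid(2, 1536, "Suite on HANA"))  # True
# ...but exceeds the BW on HANA limit of 2 x 256GB = 512GB.
print(config_is_valid(2, 1536, "BW on HANA"))     # False
```

This also makes the point in e) concrete: a 2-socket minimum with less than 1536GB leaves CPU capacity idle relative to the Suite on HANA memory rule.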
2.1.b Virtualized Deployment
Production Environment rules
i. Support for 1 VM per physical server, which we expect to be relaxed with later
releases.
ii. Support for multiple VMs per physical server is in controlled availability, but the
rule is no more than one VM per physical CPU. Thus, a server with 2 CPUs can
have 2 VMs; one with 4 CPUs can have 2 x 2 CPUs, 1 x 2 CPUs + 2 x 1 CPU, or
4 x 1 CPU.
iii. 8 CPU servers are not supported for virtualization currently.
Non-Production Environment rules
i. The rules are further relaxed.
ii. Memory must not be overcommitted. For example, a server which physically has
1024GB memory can nominally handle 8 x 128GB VMs.
iii. However, a bit of memory is required for VMware itself, so we shrink them slightly
to 8 x 127GB.
iv. CPU must not be overcommitted.
v. A server with 2 CPUs has 60 threads (which equates to 60 virtual CPUs with
VMware vSphere)
vi. This gives us 7.5 vCPU per 128GB, but again we need to hold back a little CPU, so
we allocate 7 vCPU for every 127GB of memory.
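The non-production packing rules above can be sketched as a small calculator. Assumptions, per the rules above: uniform 127GB/7-vCPU VMs, no memory or CPU overcommit, and 30 threads per E7-4880/4890 v2 socket; the function name is my own.

```python
# Sketch of the non-production VM packing rules: memory and CPU must not
# be overcommitted, and a small slice of each is reserved for VMware.
# Constants and names are illustrative assumptions, not an official tool.

VCPU_PER_VM = 7      # 7 vCPU allocated per VM (7.5 available per 128GB slice)
MEM_PER_VM_GB = 127  # nominal 128GB slice minus ~1GB for the hypervisor

def max_nonprod_vms(physical_mem_gb: int, cpu_sockets: int,
                    threads_per_socket: int = 30) -> int:
    """How many uniform 127GB / 7-vCPU VMs fit without overcommitting."""
    vcpus = cpu_sockets * threads_per_socket  # e.g. 2 CPUs -> 60 threads
    by_memory = physical_mem_gb // 128        # nominal 128GB slices
    by_cpu = vcpus // VCPU_PER_VM
    return min(by_memory, by_cpu)

# A 2-socket, 1024GB server: 8 slices by memory, 60 // 7 = 8 by CPU -> 8 VMs,
# matching the 8 x 127GB example above.
print(max_nonprod_vms(1024, 2))  # 8
```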
The rules above should be followed strictly while choosing the UCS Blades in the Vblock. The
supported operating systems for SAP HANA are SLES or the specialized packaged distribution
SLES for SAP Applications. The exact version and service pack release are documented by
SAP in the SAP HANA PAM.
Below are some best practices that should be followed while provisioning the UCS Blades in
the Vblock.
Choose Equipment > Global Policies, and for Chassis/FEX Discovery Policy, select
None for Link Grouping Preference to use the pinning mode. Pinning is recommended.
Figure 2 shows the pinning policy screenshot, where the Port Channel column shows as
blank.
Figure 2
Power Policy
To run Cisco UCS with two independent power distribution units, Redundancy must be
configured as Grid.
Figure 3
Power Control Policy
The Cisco UCS power-capping feature is designed to save power in traditional data
center use cases. This feature does not fit with the high-performance behavior of SAP
HANA. If power capping is configured globally on Cisco UCS, a power control policy for
the SAP HANA nodes ensures that the power capping does not apply to those nodes. The
Power Capping feature should be set to No Cap as shown in Figure 4.
Figure 4
BIOS Policy
For best performance for SAP HANA, you must configure the server BIOS accordingly.
On the Main tab, change the quiet mode setting to disabled, as shown in Figure 5.
Figure 5
For SAP HANA, disable all Processor C states as shown in Figure 6. This configuration
will force the CPU to stay on the maximum frequency and allow SAP HANA to run with
the best performance.
Figure 6
On the RAS Memory tab, choose maximum-performance and enable NUMA.
Figure 7
Maintenance Policies
Cisco recommends defining a maintenance policy with Reboot Policy set to User Ack for
the SAP HANA servers. This policy helps ensure that a configuration change in Cisco
UCS does not automatically force all SAP HANA servers to reboot. The administrator
has to acknowledge the reboot for the servers changed in Cisco UCS; otherwise, the
configuration change will take effect when the server reboots through an OS command.
Figure 8
Adapter Policy Configuration
Figure 9 shows a newly created Ethernet adapter policy, Linux-B440, with Receive-Side
Scaling (RSS), Receive Queues, and Interrupts values defined. This policy must be used
for the SAP HANA internal network to provide the best network performance with SLES
11.
Figure 9
2.2 SAP HANA Network Design Principles
The core requirements from SAP for SAP HANA are met by Cisco UCS defaults. Cisco UCS is
based on 10 Gigabit Ethernet and provides redundancy through the dual-fabric concept.
Figure 10
The design principles below should be followed while configuring the various network
elements in the Vblock:
As the Vblock Network Layer consists of Nexus 1000v, Management Catalyst Switches,
and Nexus Aggregate Switches, it becomes a mandatory practice that all port
descriptions are updated properly in the respective switches.
As we will be running multiple tenants on the Vblock, we should create multiple Ethernet
uplink profiles in Nexus 1000v, for example, one for SAP and another for
OpenSystems.
MAC Pinning should be used on Ethernet profiles on Nexus 1000v so that vNICs are
pinned to one uplink port at a time.
Nexus 1000v and Nexus Aggregate Switch uplink ports from the Fabric Interconnect
should be configured as PortFast to avoid Spanning Tree re-initialization in case
of link failure, which can cause packet drops.
Control-vlan and Packet-vlan on Nexus 1000-V should also be pinned.
System VLAN’s should be defined on Nexus 1000v. System VLANs instruct all the
Virtual Ethernet Modules (VEM) within your environment that a particular VLAN must
ALWAYS be in a forwarding state. Any VLAN that is not configured as a system VLAN
will not be allowed to start forwarding traffic until the VEM module has established initial
communication with the Virtual Supervisor Module. With that said, any management
communication from an ESXi host needs to be defined as a system VLAN to permit the
required communication between the ESXi host and vCenter and VEM to VSM. Without
properly configuring system VLANs, any reboot of ESXi environment can cause each
host to become isolated from the network infrastructure. Stick to defining system VLANs
as only what is absolutely required to get the vSphere environment up and running. This
means ESXi Management, N1K Management/ Control/ Packet, and VNX IP Storage
related VLANs—nothing else.
If you have jumbo frames enabled, ensure your uplink system port profile has the MTU
value set accordingly. During a reboot of the ESXi host, MTU mismatches can occur,
causing communication issues between VEM and VSM. Below is an example
configuration of Ethernet UPLINK.
port-profile type ethernet DATA-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 1203,1405,1506
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 10,11
  pinning control-vlan 0
  pinning packet-vlan 0
  state enabled
copy run start
Virtual Port Channels should be configured between the following components for High
Availability as shown in Figure 10.
Fabric Interconnect and Nexus Aggregate Switches
Catalyst Switches and Nexus Aggregate Switches
VNX Uplinks and Nexus Aggregate Switches
Nexus Aggregate Switches and Customer Core Network
Also, VLANs should be configured to segregate Management, Production, ESX,
vMotion, and Nexus 1000v traffic as shown in Figure 10.
2.3 SAP HANA Storage Design Principles
Vblock 720 with VMAX 40K as back-end storage allows complete isolation of the SAP HANA
workload from other workloads running on the private cloud, thereby satisfying SAP HANA
performance requirements. Engines can be dedicated to the SAP HANA Tenant as per the SAP
HANA Tenant sizing. SAP HANA TDI uses VMAX Engines as building blocks. One Engine can
scale up to a total of 12 connected SAP HANA Worker nodes and requires at least 128 GB
cache. In Figure 11, we can see the workload isolation.
Engines 3 and 6 are dedicated to the SAP workload.
Engines 4 and 5 are dedicated to the Open Systems workload.
Figure 11
Each SAP HANA ESX host should have at least 2 HBAs connected through separate
fabrics to 4 VMAX FA ports.
On the VMAX Engines, all port 0s should be used, leaving port 1 of each module unused,
as ports 0 and 1 share the same CPU on each module.
It is recommended to create different disk groups for Data and Log volumes for SAP
HANA VMs as per the table below, with datastore LUNs created accordingly.
The table below shows the recommended disk group configuration for different SAP
HANA VM configurations, but the storage administrator can tune them across the whole
array to match the performance required for the SAP HANA nodes.
All the TDEVs created for SAP HANA datastores should be fully pre-allocated.
All the virtual disks created from the datastores should be eager-zeroed.
The sizing guidelines below can be followed for SAP HANA server memory and
LUN/VMDK sizing. As per the latest SAP guidelines, the memory requirements are
reduced; however, to play it safe, the sizing below can be followed.
Server Memory = 2 X Database Size
Data VMDK/LUN Size = 3 X Physical Memory
Log VMDK/LUN Size = 1 X Physical Memory
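The conservative 2x/3x/1x rules above can be expressed as a small helper. This is a sketch of those stated rules only; in practice, server memory is rounded up to an available server model configuration, and the function name is illustrative.

```python
# Sketch of the conservative SAP HANA sizing rules above:
#   Server Memory = 2 x Database Size
#   Data VMDK/LUN = 3 x Physical Memory
#   Log VMDK/LUN  = 1 x Physical Memory
# The function name is an illustrative assumption.

def hana_sizing(db_size_gb: int) -> dict:
    """Return server memory and LUN sizes (GB) for a given HANA DB size."""
    server_memory = 2 * db_size_gb
    return {
        "server_memory_gb": server_memory,
        "data_lun_gb": 3 * server_memory,  # Data VMDK/LUN size
        "log_lun_gb": 1 * server_memory,   # Log VMDK/LUN size
    }

# A 128GB database: 256GB memory, 768GB data LUN, 256GB log LUN,
# matching the B260 M4 row in the table below.
print(hana_sizing(128))
```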
The table below shows some sizing calculations for different UCS server models.
HANA DB Size (GB)   HANA Server Model   Physical Memory (GB)   HANA Data Size (GB)   HANA Log Size (GB)
128                 B260 M4             256                    768                   256
256                 B260 M4             256                    768                   256
256                 B260 M4             256                    768                   256
128                 B260 M4             256                    768                   256
256                 B260 M4             256                    768                   256
256                 B260 M4             256                    768                   256
256                 B260 M4             256                    768                   256
1536                B460 M4             2048                   6144                  2048
512                 B260 M4             512                    1536                  512
Design Principles for Architecture of Virtualized x86 Environment on Vblock
Enterprises can leverage VPLEX Integration with VMware and Vblock Converged Infrastructure
to run Active-Active workloads across two data centers for High Availability, Application
Workload Distribution, and optimization of data center resources. Customers can take
advantage of VMware vMotion which enables live migration of running virtual machines from
one host to another with zero downtime, continuous service availability, and complete
transaction integrity. EMC VPLEX, in combination with Vblock Converged Infrastructure,
enables vSphere vMotion over distance, a unique capability that lets customers
transparently move and relocate virtual machines and their corresponding applications and data
over distance without requiring the use of Storage vMotion.
Figure 12 Active-Active Datacenter Architecture powered by VPLEX and VBLOCK.
In the high-level design diagram above, we can see that the two data centers have been
merged using VPLEX Metro at the storage layer, a VMware Stretched Cluster at the
virtualization layer, and OTV at the network layer.
VPLEX uses a unique clustering architecture to help customers break the boundaries of the
data center and allow servers at multiple data centers to have concurrent read and write access
to shared block storage devices. A VPLEX Cluster can scale up through the addition of more
engines, and scale out by connecting multiple clusters to form a VPLEX Metro configuration. A
VPLEX Metro supports up to two clusters, which can be in the same data center or at two
different sites within synchronous distances (less than 5 ms round trip time). VPLEX Metro
configurations help users transparently move and share workloads, consolidate data centers,
and optimize resource utilization across data centers. In addition, VPLEX Clusters provide
non-disruptive data mobility, heterogeneous storage management, and improved application
availability.
In this configuration, both data centers can be running active workloads and serving customer
applications. Data from the primary data center can be immediately available and synchronized
with the secondary site. EMC VPLEX and VMware combine to offer this more comprehensive
solution so that businesses experience little to no downtime. Together, they provide all of the
automatic benefits (non-decision based) of VMware FT, HA, and DRS, but allow them to be
implemented across data centers instead of being confined within the data center walls.
3.1 Basic Design Principles of VPLEX Metro
For easy management, make sure to use the same device numbers on both
VMAX arrays when creating the Distributed Device.
For WAN COM zoning, a dedicated VSAN should be created.
For front-end and back-end zoning of VPLEX, dedicated VSANs should be created.
It's preferable to use 1:1 device mapping when creating a Distributed Device.
VPLEX Metro systems contain two clusters where each cluster can have one, two, or
four engines. Clusters in a VPLEX Metro do not necessarily need to have the same
number of engines. For example, a VPLEX Metro system could be composed of one
cluster with two engines and the other with four.
The clusters must be deployed within synchronous communication distance (RTT of
5 ms or less).
VPLEX Metro systems are usually deployed between two data centers. However,
they can also be deployed within the same data center for applications requiring
high local availability.
With VPLEX Metro, virtual volumes can be mirrored between the VPLEX clusters,
allowing a host to have access to the data through either cluster. This provides
added resiliency in the case of an entire cluster failure.
HBAs with dual ports should be configured for high availability at the front end for
host connections.
Host HBAs should be zoned with the VPLEX ports provided by the directors.
There should be a minimum of 2 HBA paths connected to the fabric and a minimum
of 2 paths from the hosts to the clusters.
Each VPLEX BE director should have redundant paths to every storage array.
A redundant SAN setup should be connected to the BE and FE of the VPLEX Metro. The
BE and FE ports of each director must have connectivity to each fabric for high
availability; connecting to a single fabric is not recommended.
Combining one cluster with another enhances availability, as it allows the
hardware to exist in distinct geographical locations, leaving no single point of
failure.
Fabric zoning should consist of a set of zones, each with a single initiator and up to
16 targets.
Each VPLEX director is capable of connecting both FE and BE I/O modules to both
fabrics with multiple ports. The SAN ports connected to it should be on
different blades or switches so that a single blade or switch failure won't cause loss
of access on that fabric overall. A good design will group VPLEX BE ports with array
ports that will be provisioning groups of devices to those VPLEX BE ports in such a
way as to minimize traffic across blades.
Avoid incorrect FC port speeds between the fabric and VPLEX. Use the highest
possible bandwidth to match the VPLEX maximum port speed and use dedicated port
speeds, i.e. do not use oversubscribed ports on SAN switches.
Use four "active" backend paths per director per storage volume; it is also a
requirement not to have more than 4 active paths per director per storage volume.
For the VMAX, 4096 devices can be mapped per processor. For more flexibility, it's
better to create multiple masking views for VPLEX with different port groups
so as not to end up consuming 4096 addresses on all the director slices.
High quantities of storage volumes or entire arrays provisioned to VPLEX should be
divided into appropriately sized groups (i.e. masking views, storage groups, and port
groups) and presented from the array to VPLEX via groups of four array ports per
VPLEX director so as not to exceed the four-active-paths-per-VPLEX-director
limitation. At the same time, this avoids hitting the 4096 mapped device
limitation of the VMAX ports.
Direct array connectivity to VPLEX directors is not recommended but is supported;
direct connect applies only to backend connectivity. Front-end direct connect is
not supported.
Single WAN COM port connectivity is not supported.
Two "active" backend paths per director per storage volume is the minimum
requirement for NDU, which dictates that two VPLEX director backend ports be
connected to two array ports per storage volume. Depending on the workload, size of
the environment, and array type, four "active" path configurations have proven to
alleviate performance issues and are therefore recommended over the minimum of two
active paths per director per storage volume. Try to avoid two-path-only connectivity
in production environments.
3.2 Basic Design Principles of VMware Stretched Cluster
Dedicated ESX clusters should be created for Production, Non-Production, and DMZ
environments.
Dedicated VPLEX port groups should be used for each ESX cluster to isolate the
workloads as per the diagram below.
We should follow T-shirt sizing for VM creation to maintain uniformity across the
environment. For example:
VM Size   vCPU   Memory (GB)
S         2      4
M         4      8
L         8      16
XL        16     32
XXL       32     64
Monster   32     128
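To keep provisioning uniform, the T-shirt sizing table above might be encoded as a simple lookup, for example in a provisioning script. The names here are illustrative assumptions, not part of any Vblock tooling.

```python
# The T-shirt sizing table as a lookup, as one might encode it in a
# provisioning script. Names are illustrative assumptions.

T_SHIRT_SIZES = {
    #          (vCPU, memory GB)
    "S":       (2,    4),
    "M":       (4,    8),
    "L":       (8,   16),
    "XL":      (16,  32),
    "XXL":     (32,  64),
    "Monster": (32, 128),
}

def vm_spec(size: str) -> dict:
    """Return the vCPU and memory allocation for a T-shirt size."""
    vcpu, mem = T_SHIRT_SIZES[size]
    return {"vcpu": vcpu, "memory_gb": mem}

print(vm_spec("XL"))  # {'vcpu': 16, 'memory_gb': 32}
```

Centralizing the sizes in one table keeps every VM request consistent with the standard, which is the point of T-shirt sizing.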
For datastore sizing, we should try to follow a uniform size of 2TB datastores, each
consisting of 2 x 1TB LUNs.
VMs belonging to the same service should reside on the same datastore to make
management and failover easier. The reason is that if multiple
services reside on the same datastore, then to fail over a single service we need to
fail over all the services on that datastore.
For database VMs, Data and Log VMDKs should reside on different datastores.
Allocate VM VMDKs in multiples of 32GB to avoid storage fragmentation.
20% of storage should be reserved on each datastore for VMware Backup Appliance
backup snapshots.
Set up redundancy for the management port (either using a separate vmnic or a
separate uplink) and an alternate isolation response gateway address (if appropriate) for
more reliability in HA isolation detection.
If jumbo frames are enabled, verify that jumbo frame support is enabled on all
intermediate devices and that there is no MTU mismatch.
Configure networks so that there is separation of traffic (physical or logical using
VLANs). Separate the following traffic:
o Management
o VMkernel for IP storage
o VMkernel for vSphere vMotion
o VMkernel for vSphere FT
o Virtual machine network traffic
Traffic separation improves performance, prevents bottlenecks, and increases security.
NFS and iSCSI storage traffic should be separated physically (for performance) and
logically (for security).
Verify that VMware Tools is installed, running, and up to date for running virtual
machines.
Use automatic mode for vSphere DRS, if possible, for optimal load balancing.
Maintain compatible virtual hardware versions for virtual machines to support vMotion.
Distribute vmnics for a port group across different PCI buses for greater redundancy.
Change port group security default settings for Forged Transmits, Promiscuous Mode,
and MAC Address Changes to Reject unless the application requires the defaults.
Enable lockdown mode on ESXi Servers to prevent unauthorized access.
Select the correct guest operating system type in the virtual machine configuration to
match the guest operating system.
Use vCenter Server roles, groups, and permissions to provide appropriate access and
authorization to the VMware virtual infrastructure. Avoid using Windows built-in groups
(Administrators).
To facilitate vSphere vMotion operations of virtual machines between hosts, the
following requirements must be met:
o The source and destination hosts must use shared storage and the disks of all
virtual machines must be available on both source and target hosts.
o The virtual machine should not be connected to internal networks.
o The port group names must be the same on the source and destination hosts
(easier with vSphere Distributed Switch).
o All vMotion vmknics on a host should share a single vSwitch. The port group
on each vmknic should be configured to leverage a different physical NIC as
its active vmnic. In addition, all vSphere vMotion vmknics should be on the
same vSphere vMotion network while using multiple network adaptors.
o vSphere vMotion requires a 1Gb network. However, using a 10 GbE network
in place of a 1 GbE network will result in significant improvements in vSphere
vMotion performance. When using very large virtual machines (for example,
64 GB or more), consider using multiple 10 GbE network adaptors.
o CPU compatibility - source and destination hosts must have compatible CPUs
(relaxed for EVC - Enhanced vMotion Compatibility).
o No devices that prevent vSphere vMotion (CD-ROM, floppy, serial/parallel
devices) are attached.
o When creating a distributed port group, do not use dynamic binding.
For best performance, use the VMXNET3 paravirtualized network adapter for operating
systems for which it is supported. This requires that the virtual machine use virtual
hardware version 7 and that VMware Tools be installed in the guest operating system. If
VMXNET3 is not supported by the guest OS, use Enhanced VMXNET (VMXNET2). Both
VMXNET3 and Enhanced VMXNET support jumbo frames. If Enhanced VMXNET is not
supported in the guest operating system, then use the Flexible device type, which
automatically converts each vlance network device to a VMXNET device when VMware
Tools is installed.
Remove devices, such as floppy drives, that have no significance for the VM.
Design Principles for the 3-Site Enterprise Architecture
Figure 13
A 3-site DR-ready architecture can be deployed as per Figure 13 for SAP HANA and
OpenSystems. Below are the design principles to follow:
OpenSystems run as active-active across Site A and Site B using VPLEX Metro and
Stretched vSphere Cluster.
For VPLEX, WAN COM dedicated switches should be used.
VPLEX Witness should be hosted in a third failure domain, preferably with a Cloud
Service Provider.
As SAP HANA doesn't support active-active, SRDF/S can be used between Site A
and Site B with VMware SRM for automated failover.
MetroPoint will be used for OpenSystems replication to DR Site C.
With MetroPoint, we can have some Consistency Groups replicating from Site A to Site C
and some from Site B to Site C for optimum utilization of bandwidth. A standby link will
exist in case of an outage of the active replication link.
For SAP HANA, SRDF/A can be used for replication to DR Site C with VMware SRM for
automated failover.
Migration Strategy for Migrating Physical x86 Environments to Vblock Virtualized Environment
As per Figure 14, we go through Discovery & Analysis, Planning, and Execution phases as a
strategy for migrating environments. Project Management Institute practices are aligned with
the Discovery & Analysis, Planning, and Execution phases for successful completion of the
project. Let's discuss each area in detail.
Program Management
The Project Manager creates a Project Charter that contains the scope of the work, for
example, the number of servers and applications in scope for the migration. The Project
Manager tracks the project from initiation to project closure, keeps all stakeholders
updated on project progress with weekly meetings, and continually updates the issue
register while monitoring and controlling the project during its various phases. The
Project Management team coordinates with the customer team for Change Management
and for drafting the communication plan.
Figure 14
Discovery and Analysis Phase
During the Discovery and Analysis phase, tools like VMware Capacity Planner and
Application Dependency Planner are used for gathering the application information in the
environment along with the application dependencies. A physical inventory should be
carried out. The Discovery and Analysis tools should be run for at least a month to gather
all the data. A workbook is created after bundling and normalizing the data collected from
the D&A phase.
Planning Phase
During the Planning phase, the Migration and Test plans for every application are
drafted. Also, workshops and tabletop exercises are conducted with Application Owners.
Migration runbooks are created for the different applications and a schedule is decided.
Execution Phase
Test Plans are carried out in closed environments by the application owners. The
Migration Plan is executed after the application owners successfully execute the test
plan by testing the migrated applications in a closed test environment. Below are the
high-level steps for the P2V migration.
o VMware Converter agent is pushed on the physical server.
o Migration is started to the Destination ESXi Host.
o Cutover Time is decided.
o Physical Server is shut down.
o Virtual Machine is powered on.
Backup Strategy and best practices for backup of Vblock Converged Infrastructure Private Cloud Tenants
A VMware Backup Appliance should be deployed for taking image-level and VMDK-level
backups.
CBT should be enabled on VMs to take CBT-enabled backups for forever-incremental
backups.
In an Active-Active data center, a VBA appliance is deployed in both Site A and Site B.
Proxies should be deployed in each ESX Cluster to support hotadd backups.
A VMware Backup Appliance checkpoint to Data Domain should be configured for
disaster recovery of the VBA appliance.
DNS records should be updated for vCenter Server, ESX Servers, VBA Appliance,
Proxy Appliances, and Data Domain for both forward and reverse lookup; otherwise,
backups will fail.
It is not recommended to update the VBA hosts file manually.
The DDBoost user should be given admin privileges for integration with Data Domain.
For better performance, the VBA appliance should be installed on a dedicated datastore.
Datastores should have 20% free storage for the successful creation of snapshots during
backups.
A dedicated user should be created in vCenter for VBA backups for easier troubleshooting.
Five external proxies per VMware Backup Appliance are recommended for concurrent
backups. Fifty concurrent backup sessions are recommended on a vCenter Server.
Deploy the VMware Backup appliance on shared VMFS5 or higher to avoid block size
limitations.
Avoid deploying VMs with IDE virtual disks; using IDE virtual disks degrades backup
performance.
Create a default ifgroup in Data Domain and add all the 10 GigE interfaces to leverage
the benefit of the 10 GigE interfaces. Data Domain should be added to DNS, as the
DDBoost client connects to the Data Domain using the hostname and then selects one of
the 10 GigE interfaces from the ifgroup. If the interfaces in the ifgroup are not reachable,
backups are sent over the management interface. Ensure that backups are running on a
10 GigE interface by running the show ddboost connections command on the CLI.
Use hotadd transport mode for faster backups and restores and less exposure to
network routing, firewall, and SSL certificate issues. When planning for backups, ensure
that NetWorker VMware Protection supports the disk types. Currently, NetWorker
VMware Protection does not support the following disk types:
o Independent
o RDM Independent - Virtual Compatibility Mode
o RDM Physical Compatibility Mode
In order to support CBT:
o Ensure that all VMs run VMware hardware version 7 or higher.
o If you add a disk or dynamically expand a disk on a VM, you must take a new full
backup for CBT to function.
o Install VMware Tools on each VM that you want to back up using the EMC
Backup and Recovery user interface in the vSphere Web Client. VMware Tools
adds backup capability that quiesces certain processes on the guest OS prior to
backup. VMware Tools is also required for some features used in File Level
Restore.
o Each VM backup to a Data Domain system consumes one session on the Data
Domain device. If you exceed the maximum limit of 60 sessions, EMC
recommends that you configure additional devices.
For database backups such as SAP HANA and SQL, deploy guest-based backups.
Image-based backups don't guarantee consistent backups for databases.
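The 60-session limit noted above implies a simple rule of thumb for how many Data Domain devices to configure for a given backup concurrency. This is a hedged sketch of that arithmetic; the function name and constant are my own, not an EMC tool.

```python
import math

# Sketch of the session-planning note above: each VM backup consumes one
# session on a Data Domain device, up to a maximum of 60 sessions per
# device, so the device count follows from the peak concurrent backups.

MAX_SESSIONS_PER_DEVICE = 60  # limit quoted in the guidance above

def devices_needed(concurrent_vm_backups: int) -> int:
    """Minimum number of Data Domain devices for a backup concurrency."""
    return math.ceil(concurrent_vm_backups / MAX_SESSIONS_PER_DEVICE)

print(devices_needed(50))   # 1 -- within a single device's 60-session limit
print(devices_needed(150))  # 3
```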
Below are the port requirements for VBA Backups.
References
www.sap.com
www.emc.com

EMC believes the information in this publication is accurate as of its publication date. The
information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION
MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO
THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an
applicable software license.