A Dell Technical White Paper
Dell U2L Platform Migration
Services
By Mahesh Pakala
Global Infrastructure Consulting Services
May 2012
Customer Success Story: Migrating a Traditional Physical Server
Configuration to a Dell SAP High Availability (HA) Tiered Private Cloud Architecture with Minimal Downtime
SAP, Redwood CPS, Enqueue Splitting, Oracle RAC 11gR2, Oracle ASM, Workload Management, Compression, VMware HA/FT & Cross-Platform Migration
1 Introduction
2 Architecture & Design Considerations
  2.1 Oracle RAC Database
  2.2 SAP Application Components
  2.3 VMware Components
  2.4 Bolt-on Third-Party Software Architecture
  2.5 Redwood CPS Servers
3 Platform Migration Strategy
  3.1.1 Migration Approach
  3.1.2 Migration Steps on the Source System
  3.1.3 Migration Steps on the Target System
4 Implementation Steps (VMware & Oracle)
  4.1 System Administration Tasks (OS, Storage, VMware)
    4.1.1 VMware vSphere Two-Node Cluster
  4.2 Oracle Clusterware Installation & Database Creation
    4.2.1 Oracle Clusterware Installation
    4.2.2 ASMCA Tool for ASM Disk Groups
    4.2.3 Oracle Database Software Installation Only
    4.2.4 Oracle RAC Database Creation
5 Implementation Steps (Applications)
  5.1 SAP Export/Installation Process
    5.1.1 Target System CI (Interim CI before ASCS Split) Installation Steps
    5.1.2 Target System DB Installation Steps
    5.1.3 ABAP SAP Central Services (ASCS) Split Steps
    5.1.4 Target System Dialog Instance Installation Steps
  5.2 Create RFC Destinations Using Logon Groups
  5.3 Set Up Topcall Fax Server RFC Destination
  5.4 Set Up TMS in HA Environments
6 Dynamic Workload Management
7 BRBACKUP of ASM DG Using EMC BCV
8 Reference
9 Authors
1 Introduction

This paper describes the solution, forward-looking technologies, and best practices used by Dell Global Infrastructure Consulting Services (GICS) to successfully transform a highly transactional and complex SAP implementation in the Oil & Gas industry into a cloud architecture.

The solution was designed as a tiered architecture running VMware vSphere and Oracle 11gR2 Real Application Clusters (RAC) with Automatic Storage Management (ASM). Using the VMware Fault Tolerance (FT) and High Availability (HA) features, we delivered ironclad, zero-downtime protection for the customer's critical SAP application services. The solution was delivered to a Fortune 100 company supporting a 10 TB SAP database running SAP's mission-critical R/3 applications.
A consistent challenge faced by IT organizations today is the need for cost containment while dealing with increasing demands for computing capacity and performance. The rapidly increasing capability of Intel x86_64-based servers enables a new model for creating agile, low-cost infrastructures based on the Linux operating system. This model's use of lower-cost x86_64 commodity servers running Linux, combined with Oracle's load-balancing, fault-tolerant RAC database, provides a resilient, scalable platform for enterprise solutions. SAP application software can use this platform to deploy large mission-critical systems that are fault tolerant, scalable, and high performing.
Dell's, SAP's, and Oracle's close engineering partnerships with major Linux distributors, such as Red Hat, result in enhancements to the Linux solution stack. Together they provide a joint support model that makes SAP and Oracle RAC on Linux an ideal platform choice.
SAP was the first software vendor to recognize the applicability of Linux for mission-critical enterprise applications, as early as 1996. Today, Linux is one of two reference platforms for SAP development. SAP business solutions running on Linux conform to the same high standards as on every other operating system on which SAP enterprise systems run. SAP solutions on Linux go through stringent quality assurance and certification processes performed by SAP and its partner Dell, making them a real option for the enterprise.
Some highlights of Oracle's and Dell's commitment and success on Linux include:

- Release of the first commercial database for Linux, in 1998
- Availability of all Oracle software products on Dell/Linux
- Contributions to the Linux kernel in the form of stability and scalability enhancements, including the Oracle Cluster File System
- Integrated, code-level support for all Oracle software and the Linux OS
- Widespread internal use of Linux for mission-critical systems, such as Oracle Outsourcing, Oracle/Dell e-mail systems, Oracle/Dell web sites, and Oracle/Dell data centers
Freedom and flexibility - Running SAP enterprise applications on Linux puts you in control of your IT infrastructure. Because you're not constrained by technology, you can make decisions based on your business objectives. Linux decouples the operating system from the underlying hardware, thereby increasing your choice of infrastructure vendors.
Savings in IT operating costs - Dell customers report that SAP applications on Linux deliver significant direct and indirect savings in their IT operations, with lower administration and operating costs compared to non-open-source systems. The platform is extremely reliable, so less system redundancy and maintenance are required than for most traditional installations. Customers save on expensive license fees and can use commodity server hardware, which performs well under Linux at a fraction of the cost.
The power to innovate - By streamlining your IT operations, you can reallocate resources, accelerate business process innovation, address new business requirements, or simply improve
your bottom line.
The migration of database applications from legacy platforms to Linux on x86_64 is a process many customers are undertaking in order to realize its cost and performance benefits. Moving from a legacy UNIX system to another platform has migration costs, but with proper planning, education, tools, and services the time and costs can be significantly reduced.
The core strategy was to implement a PRIVATE CLOUD/GRID as an integrated structure of components providing application services and integrating a variety of resources such as applications, databases, servers, storage systems, and networks. The prime objective was to implement the integration in such a way that resources can be managed as a single system image and provisioned on demand, based on the service-level requirements of each application.
All resources are physically connected in the form of a large pool of servers and a storage network. Logically, the machines function as a pool of application servers running VM-based app servers, and database servers clustered with Oracle Database 11gR2 RAC. The storage network provides access to shared file system storage for the application and database server software and for data storage. The storage used by the different databases is managed by a single ASM cluster, which maintains disk and recovery groups for each database.
The implementation of a PRIVATE CLOUD/GRID provides the following advantages:
Increases the total computing power available to applications
Exploits low-cost, high performance technologies as building blocks in flexible infrastructure
Benefits from easy deployment and integration of different types of storage systems
Can grow the total computing power and storage infrastructure in small increments
Provides on-demand adjustments of computing and storage resources based on the short-term or long-term needs of each application
Allows easy and flexible migration and movement between clusters, databases or storage classes
Covers availability objectives for planned and unplanned outages by rebalancing workloads
Can be managed using a single set of controls
This document highlights the steps in the process adopted at a large Oil & Gas company to address the RAMP (Reliability, Availability, Manageability & Performance) requirements for its SAP implementation on x86_64 platforms.
2 Architecture & Design Considerations

SAP provides a proven, scalable, fault-tolerant, multi-tier architecture. The individual components can be protected either by horizontal scalability (the use of multiple components that tolerate the failure of individual components) or by cluster and switchover solutions. All SAP partners provide their own proven solutions, which enable SAP applications to achieve high availability using additional software and hardware.

The layers below the business applications are generally transparent to these applications. However, in the event of errors they can affect the availability of the business applications, and you must therefore protect them. Partners offer a number of proven solutions for this purpose. The most important mechanisms are described briefly below, in the context of the requirement to migrate the customer's legacy systems to Dell hardware.
Source system (HP):

- Processor: PA-RISC 8900
- DBMS: Oracle 9.2.0.7 with HP ServiceGuard
- File system: Veritas VxFS
- SAP kernel release: 6.40
- Application server platform: HP-UX

Target system (Dell):

- Processor: Intel Xeon X7560
- DBMS: Oracle 11.2.0.2 with Oracle RAC (Real Application Clusters)
- File system: Oracle Cloud File System (ASM/ACFS)
- SAP kernel release: 6.40 EX2/VMware
- Application server platform: Red Hat Enterprise Linux AS5U6 in VMware ESXi 4.1

VMware configuration (Dell):

- Specification: 2 VMs with 8 vCPUs/48 GB each, 1 VM with 1 vCPU/4 GB
- SAP application servers: dell00748 (ABAP SAP Central Services), dell00746 (dialog instance), dell00747 (dialog instance)
- CPU reservation: none
- Memory reservation: 48 GB each
The high-availability architecture for this customer's SAP solutions was designed to run on vSphere and Oracle 11gR2 RAC with ASM. The VMware Fault Tolerance (FT) feature provides further protection to critical SAP application services with zero downtime, enhancing the high availability provided by the VMware High Availability (HA) feature introduced in earlier releases.
The SAP architecture takes advantage of Fault Tolerance because the SAP Central Services are mission critical, requiring maximum uptime and protection against single points of failure.

The architecture addresses four SAP single points of failure (SPOFs) to provide high-availability solutions with different degrees of fault tolerance and uptime. The following single points of failure are identified in the SAP architecture and addressed:
2.1. Oracle RAC Database:
Oracle Real Application Clusters allows Oracle databases to run SAP or custom applications unchanged across a set of clustered nodes. This capability provides high availability for node failures, instance failures, and most planned maintenance activities, while providing flexible scalability. If a clustered node fails, the Oracle database continues running on the surviving nodes. When more processing power is needed, another node can be added without interrupting user access to data.
Oracle Clusterware is a cluster manager that is designed specifically for Oracle databases. In an
Oracle RAC environment, Oracle Clusterware monitors all Oracle resources (such as database instances and listeners). If a failure occurs, then Oracle Clusterware automatically attempts to
restart the failed resource. During outages, Oracle Clusterware relocates the processing performed
by the inoperative resource to a backup resource. For example, if a node fails, then Oracle Clusterware relocates the database services being used by the application to a surviving node in
the cluster.
Oracle Real Application Clusters includes a highly available (HA) application framework that
provides the necessary service and integration points between RAC and SAP applications. One of the main principles of a highly available SAP application is for it to be able to receive fast
notification when something happens to critical system components (both inside and outside the cluster). This allows the SAP work processes to execute event-handling programs. The timely execution of such programs minimizes the impact of cluster component failures by avoiding costly connection and application time-outs and by reacting promptly to cluster resource reorganizations, in both planned and unplanned scenarios. The features of Oracle Real Application Clusters 11g are used by SAP applications to provide the best possible service levels, maximizing throughput and response time while minimizing the impact of cluster component failures.
Each ABAP work process initiates a private connection to the database via a service. If the connection is interrupted by an instance failure, the work process fails over to a surviving RAC node: it sets up a new connection and changes to the "database connect" state on the surviving database instance. User sessions running SELECTs continue; users in the middle of DML activity receive SQL error messages, but their logged-on sessions are preserved on the application server.
Traditionally, an Oracle database provided a single service and all users connected to it. A database always has this default service, whose name is the database name; it cannot be modified and always allows you to connect to the database. With Oracle Database 11g, a database can have many services (up to a maximum of 115 per database). The services hide the complexity of the cluster from the client by providing a single system image for managing work. SAP applications and mid-tier connection pools select a service by using the service name in their connection data.
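For illustration, a client can reference such a service through a tnsnames.ora entry like the following sketch; the service name and SCAN host shown here are hypothetical, not taken from this project:

LS6_ONLINE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = LS6_ONLINE)
    )
  )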
Oracle Data Guard
Oracle Data Guard provides the management, monitoring, and automation software infrastructure
to create and maintain one or more standby databases to protect Oracle data from failures, disasters, errors, and data corruptions. SAP supports Oracle Data Guard physical and logical
standby databases, Oracle Active Data Guard, and snapshot standby databases.
See the following documents for complete information:
Oracle Data Guard Concepts and Administration for complete details about Oracle Data Guard and standby databases
Oracle Data Guard Broker for information about broker management and fast-start failover
A related Dell paper: Implementation of SAP Applications Using Oracle Maximum Availability Architecture Best Practices
Oracle Flashback
Oracle Flashback quickly rewinds an Oracle database, table, or transaction to a previous time to correct problems caused by logical data corruption or user error. It is like using a "rewind button" for your database. Oracle Flashback can also quickly return a former primary database to standby operation after a Data Guard failover, eliminating the need to recopy or re-instantiate the entire database from a backup. See also Oracle Flashback Technology.
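As a minimal sketch (assuming flashback logging and a fast recovery area are already configured; the one-hour rewind window is illustrative), a flashback rewind looks like this in SQL*Plus:

-- rewind the whole database by one hour; requires MOUNT mode and FLASHBACK ON
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO TIMESTAMP (SYSDATE - 1/24);
ALTER DATABASE OPEN RESETLOGS;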
Oracle Automatic Storage Management (Oracle ASM)
Oracle Automatic Storage Management provides a vertically integrated file system and volume
manager directly in the Oracle kernel, resulting in:
Significantly less work to provision database storage
Higher levels of availability
Elimination of the expense, installation, and maintenance of specialized storage products
Unique capabilities for database applications
For optimal performance, Oracle ASM spreads files across all available storage. To protect against data loss, Oracle ASM extends the concept of SAME (stripe and mirror everything) and adds flexibility, in that it can mirror at the database file level rather than at the entire disk level.
2.2. SAP Application Components
An SAP application consists of one or more application server instances. Each instance can run on a separate server, but it is also possible to operate multiple instances on one host. An SAP
instance can provide different service types. The standard SAP services that can be configured on all instances of the SAP component are dialog, batch, update, and spool. The failure of an SAP
instance on which these standard services are configured causes all the transactions processed on it to be terminated and rolled back. Database consistency is guaranteed at all times. Terminated
transactions can be repeated on one of the instances still available.
The key components that need to be protected are:
SAP Message Service:
The SAP Message Service is used to exchange and regulate messages between SAP instances in an SAP network. It manages functions such as determining the instance a user logs on to during client connect, and scheduling batch jobs on instances configured for batch. VMware HA and VMware FT protect the SAP Central Services (ASCS, including the message server) by failing them over to the surviving node.
SAP Enqueue Service:
The enqueue service is a critical component of the SAP system. It administers locks on objects within SAP transactions, which applications can request to ensure consistency within the SAP system. The enqueue service component can be protected with either of the two options available for Oracle: Oracle Clusterware-based SAPCTL, or VMware Fault Tolerance (FT). VMware FT was chosen for this implementation for two main reasons: the SAPCTL product for 11gR2 was not certified for the older SAP version in use, and VMware FT was easy to implement right out of the box.
The enqueue service manages locking of business objects at the SAP transaction level. Locks are set in a lock table stored in the shared memory of the host on which the enqueue service runs. Failure of this service has a considerable effect on the system, because all transactions that hold locks would have to be rolled back and any SAP updates being processed would fail. This critical service is protected by VMware HA and VMware FT. VMware FT provides continuous availability in the event of server failures by creating a live shadow instance of a virtual machine that runs in virtual lockstep with the primary instance. By allowing instantaneous failover between the two instances in the event of hardware failure, FT enables zero downtime for the application.
In the SAP application server architecture, the message and enqueue processes have been separated from the Central Instance (CI) and grouped into a standalone service. The ABAP variant is called ABAP SAP Central Services (ASCS). Isolating the message and enqueue services from the CI helps address the high-availability requirements of these SPOFs. The central services component is "lighter" than the CI and hence much quicker to start up after a failure. In our case, FT ensures that it does not fail, and one vCPU is sufficient to address the requirements.
Logon Load Balancing
The assignment of user logons to application servers occurs during logon. Application servers can be combined into logon groups, and logon assignment depends on the current load. In the event of an application server failure, the user can log on to another application server in the logon group. However, the data from the user session on the original application server is lost.
The same procedure is also possible for linking systems by remote function call (RFC), where
application servers can be combined in RFC groups.
2.3. VMware Components
VMware FT relies on deterministic record/replay technology. When VMware FT is enabled for a virtual machine (the primary), a second instance of the virtual machine (the secondary) is created by live-migrating the memory contents of the primary using VMware vMotion. Once live, the secondary virtual machine runs in lockstep and effectively mirrors the guest instruction execution of the primary.
The hypervisor running on the primary host captures external inputs to the virtual machine and
transfers them asynchronously to the secondary host. The hypervisor running on the secondary host receives these inputs and injects them into the replaying virtual machine at the appropriate
execution point. The primary and the secondary virtual machines share the same virtual disk on shared storage, but all I/O operations are performed only on the primary host. While the
hypervisor does not issue I/O produced by the secondary, it posts all I/O completion events to the
secondary virtual machine at the same execution point as they occurred on the primary.
When a host running the primary virtual machine fails, a transparent failover occurs to the corresponding secondary virtual machine. During this failover, there is no data loss or noticeable
service interruption. In addition, VMware HA automatically restores redundancy by restarting a
new secondary virtual machine on another host. Similarly, if the host running the secondary virtual machine fails, VMware HA starts a new secondary virtual machine on a different host. In either
case there is no noticeable outage by an end user.
The actual logging time delay between the primary and secondary fault-tolerant virtual machines depends on the network latency between them. vLockstep executes the same instructions on the primary and secondary, but because this happens on different hosts, there can be a small latency, typically less than 1 ms, with no loss of state. Fault Tolerance includes synchronization to ensure that the primary and secondary remain in step.
The failover from the primary to the secondary virtual machine is dynamic, with the secondary continuing execution from the exact point where the primary left off. It happens automatically with no data loss, no downtime, and little delay; clients see no interruption. After the dynamic failover, the secondary becomes the new primary virtual machine, and a new secondary virtual machine is spawned automatically.
2.4. Bolt-on Third-Party Software Architecture
Various bolt-on applications were migrated, and each component's new architecture supports HA and FT. The third-party bolt-on applications migrated along with the SAP applications were:

- Redwood CPS: job scheduling
- Connect:Direct: secure FTP, Telnet
- TIBCO servers: integration
- Topcall: fax services
- Printing services
- FTP
- Scripts
The TIBCO servers have their own independent servers for processing. They interface with the SAP ASCS over RFC and via flat files mounted using NFS.
2.5. Redwood CPS Servers
Redwood HA is supported by a two-tier architecture, with the repository database as the back end and the Redwood process servers at the front end.

- The Redwood repository is protected by Oracle RAC, which makes it highly available and scalable.
- The Redwood process servers run on VMs protected by HA, and the schedulers can be restarted on the failed-over VM. Database agents are distributed round-robin across all RAC instances. At least one database agent is started against every instance, even if that number is higher than the SERVERS setting. If an instance has one database agent and that agent becomes the dispatcher agent, an extra database agent is started. It is therefore imperative that the process server can be started on all instances, even if you normally connect users (and the process server) to only one of the nodes.
3 Platform Migration Strategy

A platform migration is a heterogeneous system copy: a copy of an existing R/3 system is created on a platform where the operating system, the database, or both differ from those of the source system.

The whole migration process consists of five main steps:

1. Preparation steps on the source system
2. Export of the source system data into a database-independent format, or into the source database's native format
3. Transfer of the data produced during the export
4. New system installation together with the data import
5. Post-processing steps within the target system
3.1.1 Migration approach

The main tools used for the migration are R3SETUP or SAPinst, depending on the kernel release. These tools call and execute installation templates (command files), each with the extension R3S (R3SETUP) or XML (SAPinst), depending on the tool used.

For the kernel tools used during a migration, some rules should be followed (a version-check sketch follows this list):

a) Tools used for the export on the source system must have the same version as on the target system.
b) All tools used must have the same kernel version (do not mix kernel tools of different releases).
c) Tools must have the same kernel release as the system being migrated.
d) The Java system copy tools do not depend on the kernel version; you can always use the latest version of these tools.
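A quick way to verify rules (a) through (c) is to compare the tool and kernel versions on both sides; a sketch (paths assumed):

/sapmnt/LS6/exe/R3load -version       # must report the same version on source and target
/sapmnt/LS6/exe/disp+work -version    # reports the kernel release and patch level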
The SAP-supported approach for moving SAP systems to a different operating system is to perform a heterogeneous system copy. For moving an Oracle database used by SAP applications to a different OS, SAP supports a few different approaches; each is listed below with its pros and cons:
Option 1: R3load

Pros:
- Very reliable
- Easy to upgrade to newer DB versions
- Easy to implement when used with the standard SAP installation tools
- Compresses data
- Allows Unicode conversion if needed; Oracle 11g Advanced Compression in the target database is possible

Cons:
- May require longer downtime; full support at kernel 7.0 or higher
- Time-consuming if used without complex tuning techniques
- Multiple test runs are usually required with multi-terabyte databases when complex tuning techniques are required
- Downtime: >32 hrs (minimum downtime is heavily dependent on the available CPU, memory, and I/O capability of the DB server)

Option 2: RMAN DUPLICATE/RESTORE/RECOVER - Mixed Platform Support [ID 1079563.1]

- Solaris-x64 to Linux-x64: not applicable here; RMAN backup does not support a cross-endianness configuration.

Option 3: Data Guard Support for Heterogeneous Primary and Logical Standbys in Same Data Guard Configuration [ID 1085687.1]

- Solaris-x64 to Linux-x64: not applicable here; Data Guard does not support a cross-endianness configuration.

Option 4: Data Guard Support for Heterogeneous Primary and Physical Standbys in Same Data Guard Configuration [ID 413484.1]

- Solaris-x64 to Linux-x64: not applicable here; Data Guard does not support a cross-endianness configuration.

Option 5: Cross-Platform Transportable Tablespaces

Pros:
- Reliable process
- Fast, based on data move speeds, compared to R3load
- Free; no additional license required
- Easy process

Cons:
- Source database has to be > 10.2.0.4
- Needs a few test runs
- Downtime: >12 hrs (minimum downtime is dependent on the available CPU, memory, and I/O capability of the DB server, and on the DB size)
- No failback
- Cannot compress the database

Option 6: Oracle GoldenGate (or OOO)

Pros:
- Reliable process
- Zero downtime
- Failback possible
- Both target and source can be active

Cons:
- Not easy to implement
- Very expensive; license costs
Standard R3load-based system copy:

The data is exported from the source database to a staging area and imported into the target database using SAP's R3load tool. This is the standard approach for installing new SAP systems and the SAP-recommended approach for platform and Unicode migrations.
Oracle 10g/11g feature: cross-platform transportable tablespaces:

This approach is available for databases running on Oracle 10g and higher. The data files of the user tablespaces are copied to the target database server using mechanisms such as NFS or FTP, and then adapted to the target environment using Oracle tools.
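The Oracle-side mechanics look roughly like the following sketch (the tablespace name, staging path, and Data Pump directory are assumptions; a cross-endian move additionally requires the RMAN CONVERT step shown):

-- on the source: make the tablespace read-only, then export its metadata
ALTER TABLESPACE PSAPSR3 READ ONLY;

expdp system directory=DATA_PUMP_DIR dumpfile=tts_meta.dmp transport_tablespaces=PSAPSR3

-- convert the data files to the target platform's endian format
RMAN> CONVERT TABLESPACE PSAPSR3
        TO PLATFORM 'Linux x86 64-bit'
        FORMAT '/stage/%U';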
Recommended Approach
For migrating the current SAP R/3 4.7 system, we recommend the R3load-based approach for the following reasons:

- The current system is running on Oracle 9.2. Data can be exported from Oracle 9.2 and imported directly into Oracle 11.2, thereby completing the database upgrade without changing the source environment.
- Using the R3load method to migrate the database reorganizes it in the process; this removes any existing fragmentation and can reduce the space used. Additionally, advanced features such as compression are implemented in the process.
3.1.2 Migration Steps on the Source System
Step 1 Preparation
Perform database consistency checks, verify the relevant SAP configurations, and correct any inconsistencies. Data reduction methods such as SAP data archiving can be used during this phase to reduce the amount of data to be exported. The SAP installation CDs, Java SDKs, and migration tools should be downloaded. Analyze the database to determine the best optimization strategy to reduce downtime.

Local storage space should be allocated on all participating application servers. On the target database server, at least the same amount of space as on the source database server should be allocated.
An NFS share should be created on the source database server as the central directory for
Distribution Monitor and shared by the participating servers and the target database server.
Due to the large amount of network traffic during the migration, jumbo frames should be enabled.
During the production cutover weekend, workload on the SAN storage should also be minimized if possible in order to meet the high IO requirements during the downtime. This would require
advanced planning and scheduling.
To prepare for the migration of the productive system, the SAP OS/DB Migration Check should be
ordered in advance. The service can be paid for by using the free service credit that is included as part of the SAP Standard Support or SAP Enterprise Support contract.
Step 2 Generate Control Files for R3load
The control files for R3load can be generated using either SAPINST or R3ldctl (R3load control). These files contain the definitions of the database objects in the source database.
Step 3 Split large tables and distribute packages to participating application servers
The largest tables in the database can be divided into smaller chunks and assigned to the participating application servers using Distribution Monitor and other SAP tools. This allows multiple R3load processes to run in parallel on multiple application servers, taking advantage of all the available processing power and reducing the runtime of the migration process.
Step 4 Optimize database parameters for the export
Create a temporary Oracle SPFILE or PFILE specifically for the export process. The parameters should be adjusted based on the most up-to-date SAP recommendations and calculated from the number of CPUs and the amount of memory. These parameters are used only during the export and should be reverted after the migration; a hypothetical fragment is sketched below.
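An illustrative fragment of such a temporary pfile; the values are assumptions for this sketch, not SAP's official recommendations (derive the real values from the current SAP notes):

# initLS6.ora.export - temporary settings for the unload only
parallel_max_servers = 32      # scale with the available CPUs
pga_aggregate_target = 8G      # generous sort memory for the export
db_writer_processes = 4
# revert to the regular spfile after the migration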
Step 5 Perform parallel export using Distribution Monitor
Start the export processes using Distribution Monitor on all of the participating servers. Carefully monitor the export process and resolve any problems during the export.
3.1.3 Migration Steps on the Target System

The migration steps on the target system can be executed once the R3load control files and the database size file generated by R3szchk become available on disk.
Step 1 Get migration key
The migration key can be generated well in advance at the SAP Service Marketplace, using a customer S-user with the proper authorizations. It is required by the import process when changing the operating system, the database, or both.
Step 2 Install SAP and database software
SAP binary files and Oracle database software should be installed on the target database server by
using SAPINST and Oracle Universal Installer.
Step 3 Create database and empty user tablespaces
SAPINST creates the target database and the user tablespaces. To speed up the process, data files can be added in parallel using SQL*Plus, as sketched below. At least the same amount of space as in the original database should be allocated.
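For example, several SQL*Plus sessions can add data files concurrently; a hypothetical sketch (tablespace, disk group, and sizes assumed):

-- run similar statements from multiple sessions in parallel
ALTER TABLESPACE PSAPSR3 ADD DATAFILE '+DATA_DG1' SIZE 10G
  AUTOEXTEND ON NEXT 1G;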
Step 4 Optimize database parameters for the import
Create a temporary Oracle SPFILE or PFILE specifically for the import process. The parameters should be adjusted based on the most up-to-date SAP recommendations and calculated from the number of CPUs and the amount of memory. These parameters are used only during the import and should be reverted after the migration.
Step 5 Perform parallel import using Distribution Monitor
Start the import processes using Distribution Monitor on all of the participating servers.
Carefully monitor the import process and resolve any problems during the import. The import and export processes can be started at the same time: packages that have been exported completely are picked up automatically by the available import processes.
Step 6 Perform post import technical steps
Continue SAPINST to complete the post-import verification. SAPINST also executes the required batch jobs and creates a temporary SAP license for the target system; the permanent license has to be installed before the temporary license expires. Database statistics can be created in parallel using multiple CPUs (see the BRCONNECT sketch below). The follow-up steps in the SAP System Copy Guide should be executed at this time.
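A sketch of parallel statistics collection with BRCONNECT (the thread count is an assumption; check the BR*Tools documentation for the exact options in your release):

# gather optimizer statistics for all tables using 8 parallel threads
brconnect -u / -c -f stats -t all -p 8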
ABAP loads for the new platform should be generated now (transaction SGEN) using all application servers, to avoid compilation while end users are on the system.
Step 7 Post-migration test activities
After the first successful migration, functional validation should be performed in the migrated
system. The performance of the most time-critical transactions should be tested and optimized for
the new platform. SAP, database, and operating system parameters should be adjusted accordingly to ensure optimal performance. Interfaces and custom programs should also be
tested. Any issues discovered should be resolved before the final test run and production cutover.
After the production cutover, a small set of validation tasks should be executed to validate the
most business critical functions before Go-live.
Four weeks after the production cutover, SAP will perform a Migration Check Verification session
remotely and provide additional recommendations for the new platform.
A separate Go-live cutover will be executed for each of the following environments:
Technical Sandbox
Functional Sandbox
Development
Quality Assurance
Pre-Production
Production
To support ongoing projects in the SAP systems being migrated, different transport groups will be set up to support the synchronization process between the two data centers, so that interruption to current projects is kept to a minimum.
4 Implementation Steps (VMware & Oracle)
This section is an example of the installation and configuration steps performed at the customer site for one of their environments. It assumes that you have performed VMware, Oracle RAC, and SAP installations before.
The implementation steps, at a high level, include the following stages:

- Install the operating system on all the servers (not covered)
- Install VMware on the ESX servers (not covered)
- Configure VMware
- Create the ASM LUNs
- Install Oracle Clusterware
- Create the ASM disk groups
- Install the Oracle software (shared ACFS disk)
- Create the database
- Install the SAP software
- Perform the DB migration of SAP & Redwood CPS
4.1. System Administration Tasks (OS, Storage, VMware)
4.1.1 VMware vSphere Two-Node Cluster

Two-node ESX clusters were created to support the application servers. Connectivity for each vSphere ESX 4.1 node consists of 8 NICs partitioned with virtual LANs, plus 1 NIC for out-of-band management (Dell iDRAC).

To achieve load balancing and fault tolerance, the standard VMware vSwitch capabilities are used. The table below depicts the network configuration of each node:
vSwitch | Description | vmnic | Physical NIC | Connection Type
vSwitch0 | global vSwitch | vmnic0 | embedded port 1 | Switched 1000Base-T, switchport trunk 802.1q mode
vSwitch0 | global vSwitch | vmnic1 | embedded port 2 | Switched 1000Base-T, switchport trunk 802.1q mode
vSwitch0 | global vSwitch | vmnic2 | embedded port 3 | Switched 1000Base-T, switchport trunk 802.1q mode
vSwitch0 | global vSwitch | vmnic3 | embedded port 4 | Switched 1000Base-T, switchport trunk 802.1q mode
vSwitch0 | global vSwitch | vmnic4 | slot 5, port 1 | Switched 1000Base-T, switchport trunk 802.1q mode
vSwitch0 | global vSwitch | vmnic5 | slot 5, port 2 | Switched 1000Base-T, switchport trunk 802.1q mode
vSwitch0 | global vSwitch | vmnic6 | slot 3, port 1 | Switched 1000Base-T, switchport trunk 802.1q mode
vSwitch0 | global vSwitch | vmnic7 | slot 3, port 2 | Switched 1000Base-T, switchport trunk 802.1q mode
To support this configuration, each switch port connected to ESX must be configured with 802.1q encapsulation so that the VLAN tag is included in the Ethernet frame.

[Figure: virtual switch configuration - not reproduced]
Port group configuration and balancing logic:

Port Group | Description | Active vmnic | Standby vmnic | Balancing Type | VLAN ID
SAP | SAP application connectivity | 0,1,2,4,6 | 7,5,3 | port ID | 96
Management | Management interface | 3 | 0,1,2,4,7,5,6 | port ID | 96
vMotion | vMotion traffic | 5 | 0,1,2,4,7,3,6 | port ID | 12
FT | Fault Tolerance traffic | 7 | 0,1,2,4,6,3,5 | port ID | 12
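On an ESX 4.1 host, the equivalent port group setup can also be scripted from the service console; a sketch (the names and VLAN IDs mirror the tables above):

esxcfg-vswitch -L vmnic4 vSwitch0          # attach an uplink to the vSwitch
esxcfg-vswitch -A vMotion vSwitch0         # create the vMotion port group
esxcfg-vswitch -v 12 -p vMotion vSwitch0   # tag the port group with VLAN 12
esxcfg-vswitch -l                          # verify the resulting layout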
vSwitch0 configuration:

Attribute | Specification
Network Failover Detection | Beacon Probing
Notify Switch | Yes
Failback | Yes
Oracle node storage allocations
Local storage
Local disks are configured in RAID 1 (mirror) protection.
OS filesystem:
mount point Size (GB)
/tmp 5
/home 5
/ 13
/usr 10
/var 5
Swap 48
/u01 25
/local/Oracle 25
SAN storage:

DB | # of LUNs | LUN Type | LUN Size (GB) | Initial Go-Live Storage Allocated (GB) | ASM Disk Group | Comments
LP3 | 5 | Fiber | 2 | 10 | OCR | Voting & OCR
LP3 | 2 | Fiber | 100 | 200 | ACFS | ORACLE_HOME
LP3 | 30 | Fiber | 400 | 12000 | DATA_DG1 | Redo, control files (copy 1), system, temp, undo, and application data
LP3 | 4 | Fiber | 400 | 1600 | LP3_ARCH | Control file (copy 2), archived redo logs
LP3 | 4 | Fiber | 100 | 400 | LP3_OLOG | Redo log files (copy 1)
LP3 | 4 | Fiber | 100 | 400 | LP3_MLOG | Redo log files (copy 2)
LP3 | 10 | Fiber | 400 | 4000 | LP3_RECO | Control file (copy 3), RMAN, FRA
ESX cluster configuration

HA parameters:

Attribute | Specification
Enable Host Monitoring | Yes
Admission Control | Enabled
Percentage of cluster resources reserved for failover | 50%
VM options: restart priority | Low
VM options: host isolation response | Shut Down
VM Monitoring: monitoring status | VM Monitoring only
VM Monitoring: monitoring sensitivity | High
Tiebreaker #1 | das.usedefaultisolationaddress=True
Tiebreaker #2 | das.isolationaddress1=dell00755.DELL.priv
Tiebreaker #3 | das.isolationaddress2=dell00756.DELL.priv
Advanced Param #1 | das.allowvMotionNetworks=True
Advanced Param #2 | das.failuredetectiontime=50000
VMs HA restart priority: [table not reproduced]
DRS parameters:

Attribute | Specification
Automation Level | Fully automated
Migration Threshold | Level 3
Power monitoring | Off
VM options: restart priority | Medium, all VMs use default
DRS rules
Rule name Action VMs
LP3 Dialog 1 Separate Virtual Machines dell00782, dell00783
LP3 Dialog 2 Separate Virtual Machines dell00760, dell00761
VMware EVC
Attribute Specification
EVC enabled Yes
EVC baseline Intel Xeon Core i7
VM virtual hardware configuration

SAP VMs in terms of RAM and CPU:

VM Name | vCPU | RAM (MB) | RAM Overhead (MB) | Reserved RAM (MB) | Preferred Host | HA
dell00760 | 8 | 49152 | 2457.6 | 49152 | ESX01 | HA
dell00761 | 8 | 49152 | 2457.6 | 49152 | ESX01 | HA
dell00782 | 8 | 49152 | 2457.6 | 49152 | ESX02 | HA
dell00783 | 8 | 49152 | 2457.6 | 49152 | ESX02 | HA
dell00762 | 1 | 4096 | 165.98 | 4096 | ESX01 (FT mirror on ESX02) | HA-FT
dell00784 | 1 | 4096 | 165.98 | None | ESX01 | HA
Totals | 34 | 204800 | 10162.36 | |

Total RAM allocated for VMs (MB): 214962.36
Total RAM used by VMkernel (MB): 1024
RAM free (MB): 46157.64
CPU oversubscription: 15%
RAM oversubscription: 0%
Virtual machine VMFS placement:

Size (GB) | Dir | Description | LUN ID
49 | ./dell00762 | CI LP3 | VMAX_LUNID2
87 | ./dell00760 | DI LP3 | VMAX_LUNID2
87 | ./dell00761 | DI LP3 | VMAX_LUNID3
87 | ./dell00782 | DI LP3 | VMAX_LUNID3
87 | ./dell00783 | DI LP3 | VMAX_LUNID4
87 | ./SAPDI_TEMPLATE | DI template | VMAX_LUNID1
327.8 | ./dell00784 | FS LP3 | VMAX_LUNID5
49 | ./SAPCI_TEMPLATE | CI template | VMAX_LUNID1
Virtual machine vDisks:

All the vDisks of a single VM are placed in the same directory.

DI LP3 vDisks (dell00746, dell00747):

vDisk | Mount Point | LUN (MB) | vDisk Type
1 | OS | 73728 | Thick, eager-zeroed
2 | /usr/sap/LS6 | 16384 | Thick, eager-zeroed
Total | | 90112 |

CI LP3 vDisks (dell00748):

vDisk | Mount Point | LUN (MB) | vDisk Type
1 | OS | 32768 | Thick, eager-zeroed
2 | /usr/sap/LY6 | 16384 | Thick, eager-zeroed
Total | | 49152 |

NFS server for LP3 vDisks (dell00763):

vDisk | Mount Point | LUN (MB) | vDisk Type
1 | OS | 32768 | Thick, lazy-zeroed
2 | /IFR | 53760 | Thick, lazy-zeroed
3 | /conversioni | 102400 | Thick, lazy-zeroed
4 | /storico | 102400 | Thick, lazy-zeroed
5 | /sapmntLP3 | 16384 | Thick, lazy-zeroed
4.2. Oracle Clusterware Installation & Database Creation

The Oracle Clusterware installation is independent of the SAP software installation. The following tasks must be completed prior to installing Oracle Grid Infrastructure:
- Installation of Red Hat Enterprise Linux AS5U6 with all required packages, per Dell best practices:
  o Kernel parameters
  o IP address setup
  o User equivalence
  o Multipathing of HBAs
  o Teaming of NICs
- Creation of LUNs
- ASMLIB installation (see the sketch below)
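A sketch of the ASMLIB provisioning on RHEL 5 (the disk labels and device names are illustrative; the real LUN layout follows the SAN storage table above):

/etc/init.d/oracleasm configure                        # one-time interactive setup
/etc/init.d/oracleasm createdisk OCR1 /dev/emcpowera1  # label a LUN for ASM
/etc/init.d/oracleasm createdisk DATA1 /dev/emcpowere1
/etc/init.d/oracleasm scandisks                        # pick up the labels on the other nodes
/etc/init.d/oracleasm listdisks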
4.2.1 Oracle Clusterware Installation

Once the above tasks are completed, initiate the installation process of Oracle Clusterware.

**PLEASE NOTE: MANY SCREENSHOTS HAVE BEEN REMOVED FROM THE STEP SEQUENCE**

./runInstaller
4.2.2 ASMCA Tool for ASM Disk Groups

It is recommended to perform all ASM-related tasks using the Oracle-supplied ASMCA tool. After Oracle Clusterware has been installed, create the required ASM disk groups and the shared file system that will hold the Oracle software and home directory (ORACLE_HOME).

cd $ORACLE_HOME/bin
./asmca

Create an ACFS file system for an Oracle Home that is shared across the cluster; a command-line sketch of the equivalent steps follows.
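The same result can be reached from the command line; a sketch (the disk group name, volume name, and size are assumptions, and the /dev/asm device suffix is generated by ADVM):

asmcmd volcreate -G ACFS -s 100G ohome    # create the ADVM volume
asmcmd volinfo -G ACFS ohome              # note the reported volume device path
mkfs -t acfs /dev/asm/ohome-<id>          # format the volume with ACFS
mkdir -p /oracle/LS6/112_64
mount -t acfs /dev/asm/ohome-<id> /oracle/LS6/112_64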
4.2.3 Oracle Database Software Installation Only

The Oracle software is installed at this point only to enable SAPINST to create the SAP seed database.
4.2.4 Oracle RAC Database Creation

Create the Oracle RAC database using DBCA, with the Oracle Home set to the 112_64 home. This creates all the necessary Oracle database services.
Shut down the database:

srvctl stop database -d LS6

Then convert it to a single instance for the SAPINST load. The conversion is a simple exercise of starting the database with a new pfile in which a couple of init.ora parameters are modified:

CLUSTER_DATABASE=FALSE

Start the database using the modified pfile:

sqlplus / as sysdba
SQL> startup pfile=$ORACLE_HOME/dbs/initLS6.ora.load

After the load and all SAPINST activities are completed, shut down the database and start it in RAC mode:

srvctl start database -d LS6
5 Implementation Steps (Applications)

5.1. SAP Export/Installation Process

Check tool versions:

Migration Monitor:

- Download the file migmon.sar from the SAP Service Marketplace, from the following path: http://service.sap.com/swdc -> Download -> SAP Support Packages and Patches -> Entry by Application Group -> Additional Components -> SYSTEM COPY TOOLS -> SYSTEM COPY TOOLS 7.00 -> #OS independent -> MIGMON_3-20001410.SAR
- Download the latest version of R3load
Export steps:

- Check the SAP system and execute the pre-export steps as described in the System Copy Guide
- Wait for SAP shutdown
- Ensure the application servers are shut down permanently, except for contingency scenarios
- Log on as ls6adm to the source system database server (dell00t6)
- Set the JAVA_HOME variable: setenv JAVA_HOME /opt/java1.4
- Generate the R3load control files:
  /sapmnt/LS6/exe/R3ldctl -l ./R3ldctlExport.log -p /dell/export/DATA/
- Create separate packages for the tables:
  str_splitter.sh -strDirs /dell/export/DATA -extDirs /dell/export/DB/ORA -packageLimit 1000 -tableFile /dell/migmon/pkgsplit.txt
- Configure Migration Monitor for the export (reference: Migration Monitor User's Guide; an illustrative properties fragment follows)
- Start Migration Monitor in the background:
  ./export_monitor.sh bg
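An illustrative fragment of export_monitor_cmd.properties; the key names follow the Migration Monitor User's Guide, but the values here are assumptions for this sketch:

# where the export dump files are written
exportDirs=/dell/export/DATA
# Migration Monitor working directory
installDir=/dell/migmon
# unload the largest packages first
orderBy=size
# number of parallel R3load processes
jobNum=8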
5.1.1 Target System CI (*Interim CI before ASCS split) Installation Steps:

- Log in as root on dell00746
- mkdir /saptmp
- export JAVA_HOME=/opt/IBMJava2-amd64-142
- export PATH=$JAVA_HOME/bin:$PATH
- export TMP=/saptmp
- export LD_LIBRARY_PATH=/oracle/LS6/112_64/lib/:/sapmnt/LS6/exe/
- cd /sapcd/InstMst_6.40_SR1_Ora11g/IM_LINUX_X86_64/SAPINST/UNIX/LINUXX86_64
- ./sapinst
**PLEASE NOTE MANY SCREENSHOTS HAVE BEEN REMOVED IN THE STEP SEQUENCE **
Click Start to start the installation.
5.1.2 Target System DB Installation Steps:

- Log in as root on dell00626
- export JAVA_HOME=/opt/IBMJava2-amd64-142
- export PATH=$JAVA_HOME/bin:$PATH
- export TMP=/saptmp
- export LD_LIBRARY_PATH=/sapmnt/LS6/exe:/oracle/LS6/112_64/lib
- cd /sapcd/InstMst_6.40_SR1_Ora11g/IM_LINUX_X86_64/SAPINST/UNIX/LINUXX86_64
- ./sapinst
*Database Schema will be SAPSR3 for cutovers.
Click Start to start the installation.
Click OK as Oracle software has already been preinstalled.
Page | 51
Stop the installation at this point to rebuild the database.

Configure Migration Monitor for the import on the target system (reference: Migration Monitor User's Guide).

Start Migration Monitor using ./import_monitor.sh bg

After Migration Monitor finishes, click OK to continue.

Start the Central Instance on dell00746 and click OK.

Enter the correct DDIC password. Reset the DDIC password in client 000 if needed.
Installation completed successfully.

Follow the SAP System Copy Guide for Web AS 6.20-based systems to execute the post-installation steps.
5.1.3 ABAP SAP Central Services (ASCS) Split Steps:

Preparation:

1. Shut down ALL SAP application servers, including the SAP OS Collector and IGS.
2. Create user ls6adm and group sapsys on the ASCS host (if not already created).
3. Fix port numbers on the ASCS host by removing conflicting ports in /etc/services.

(A) Steps on the CI host:

Log on to the CI host as ls6adm and create:

- work directory: /sapmnt/LS6/twork25
- generated data: /sapmnt/LS6/tdata25
- backup copies of the original profiles: /sapmnt/LS6/profile-save20

Unpack CREATESCS.SAR to /sapmnt/LS6/twork25:

createscs --- the configuration program
createscs.cfg-scs --- sample configuration for creating an SCS
createscs.cfg-ci --- sample configuration for restoring the CI

chmod 755 createscs
chmod 644 createscs.cfg-scs
chmod 644 createscs.cfg-ci
Copy the kernel executables and SAPCAR to /sapmnt/LS6/twork25

cd /sapmnt/LS6/twork25

dell00746:ls6adm 76> ./createscs --cimode -f createscs.cfg-scs
LOG: createscs finished successfully on dell00746!

Log in as root on dell00746

[root@dell00746 ~]# cd /sapmnt/LS6/tdata25
[root@dell00746 tdata25]# ./root_dell00746.pl --doit
LOG: root_dell00746.pl finished successfully!
(B) Steps on SCS host: Login as root
[root@dell00748 ~]# cd /sapmnt/LS6/tdata25
[root@dell00748 tdata25]# ./root_pre_dell00748.pl --doit
LOG: root_pre_dell00748.pl finished successfully!
[root@dell00748 ~]# cat /sapmnt/LS6/tdata25/services.sap_dell00748 >> /etc/services
Switch to ls6adm
[ls6adm@dell00748 ~]$ cp /sapmnt/LS6/tdata25/createscs_dell00748.cfg /sapmnt/LS6/twork25/
Verify configuration file and make any necessary or desired modifications:
[ls6adm@dell00748 ~]$ cd /sapmnt/LS6/twork25
[ls6adm@dell00748 work25]$ ../tdata25/createscs_dell00748 --scsmode
LOG: createscs_dell00748 finished successfully on dell00748!
Switch back to root:
[root@dell00748 ~]# cd /sapmnt/LS6/tdata25
[root@dell00748 tdata25]# ./root_dell00748.pl --doit
root_dell00748.pl finished with return code 0
Enable the automatic restart feature per SAP Note 768727:

vi /sapmnt/LS6/profile/START_ASCS25_dell00748 and change the Start_Program entries to Restart_Program, as sketched below.
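The change amounts to renaming the Start_Program entries in the ASCS start profile; a sketch (the program index and arguments here are illustrative):

# START_ASCS25_dell00748 - before:
Start_Program_01 = local $(_EN) pf=$(_PF)
# after - the enqueue server is restarted automatically on failure:
Restart_Program_01 = local $(_EN) pf=$(_PF)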
(C) Test the Standalone Enqueue Service

Log on to dell00746 as a user with administrator privileges.

Start transaction SM12 and run Diagnosis.
Result: no errors found.

Run the Update Test.
Result: no errors.
5.1.4 Target System Dialog Instance Installation Steps:

Log in as root on dell00747
export JAVA_HOME=/opt/IBMJava2-amd64-142
export PATH=$JAVA_HOME/bin:$PATH
export TMP=/saptmp
export JCE_POLICY_ZIP=/sapcd/unrestricted.zip
export LD_LIBRARY_PATH=/sapmnt/LS6/exe
Adapt the profile parameters for RAC.

Start SAP and perform further tuning as needed.

The SAP installation is now complete.
5.2. Create RFC Destinations Using Logon Groups
In transaction SM59, click on Create
Enter the RFC destination name (for example, LP3).

Enter a description (for example, LP3 RFC connection) and press Enter.

Select the Load distrib. = Yes radio button.

Enter the target host (dell00762.DELL.priv or 172.22.89.13) and press Enter.
Make sure the configuration is correct and save.
Run a connection test and make sure the connection works.
For LY6
Target system: LY6
Msg. server: dell00751.DELL.priv or 172.22.68.16
Group: PROD_SRG
For LS6
Target system: LS6
Msg. server: dell00748.DELL.priv or 172.22.68.13
Group: PROD_SRG
5.3. Set Up Topcall Fax Server RFC Destination
Reference configuration document: topcall_manual.pdf
Original configuration of Fax Server Topcall:
New configuration:
Gateway Host: dell00783
Gateway service: sapgw19
Save
5.4. Set Up TMS in HA Environments

Reference: SAP Note 943334 (TMS setup in high availability systems) for technical details.

Instructions:

In transaction SE38, execute report TMS_MGR_LOADBALANCING:
Enter the system(s) or domain(s) to be enabled or disabled for load balancing using logon groups.

Check the LOADBLON box to enable logon balancing; uncheck it to disable it.

Execute.
Double-click the system to be updated (for example, LP3).

Click Change.

Update the target host and system number for the new configuration.

Save and distribute the configuration.
6 Dynamic Workload Management

Oracle RAC enables enterprise grids. Enterprise grids are built from standardized, commodity-priced processing, storage, and network components. The enterprise grid lets you consolidate applications into a cluster, with Oracle Clusterware allocating resources to the server pools that run the components of the application, such as the SAP application servers and the Oracle RAC database.

In the example below, the workload is managed across the four cluster nodes using database services. Services represent groups of applications with common attributes, service-level thresholds, and priorities. Application functions are divided into workloads identified by services, either for batch jobs or OLTP transactions, based on requirements.
Oracle RAC provides database services to perform workload management. Workload management relies on Services, a feature of the Oracle database that hides the complexity of a RAC database by providing a single system image for managing work. Services allow applications to benefit from the reliability of a cluster.

An SAP application consists of one or more application server instances. Each instance can run on a separate server, but it is also possible to operate multiple instances on one host. An SAP instance can provide different service types. The standard SAP services that can be configured on all instances of the SAP component are dialog, batch, and update. The failure of an SAP instance on which only these standard services are configured causes all the transactions processed on it to be terminated and rolled back; database consistency is guaranteed at all times. Long-running SQL SELECT queries, however, are transparently failed over to the surviving nodes. Terminated transactions can be repeated on one of the instances still available.

The use of Services is required to take advantage of the load balancing advisory and runtime connection load balancing features of Oracle RAC. When a service is created, a service-level goal is defined.
CLB_GOAL (connection load balancing goal):

LONG (2): Use the LONG method for applications with long-lived connections, which is typical for connection pools and SQL*Forms sessions. Whether GOAL is set does not matter for this case, because the point of this setting is to balance on the number of sessions. LONG is the default connection load balancing goal.

SHORT (1): Use the SHORT method for applications with short-lived connections. The database first uses the GOAL setting to have PMON tell the listener which node to prefer.

GOAL (runtime load balancing goal):

NONE (0): Disables the ONS notifications to the client about the load of the various nodes.

SERVICE_TIME (1): Attempts to direct work requests to instances according to response time. If one node takes longer to do the same work, the client is informed of this load difference and can direct further work to the node taking less time. Load balancing advisory data is based on the elapsed time for work done in the service plus the available bandwidth to the service.

THROUGHPUT (2): Attempts to direct work requests according to throughput. The load balancing advisory is based on the rate at which work is completed in the service plus the available bandwidth to the service. Instead of measuring how long a piece of work takes, it is the frequency of completed work that is used. If node one can handle 10 transactions while node two can handle 12 in the same amount of time, the client is told to go to node two; even if node two would take longer for a specific job, it can handle more jobs at a time than node one.
srvctl add service -d LP3 -s LP3_D01 -r LP31 -a LP32 -P BASIC -e SELECT -m BASIC -j LONG -B SERVICE_TIME -z 10 -w 2

srvctl add service -d LP3 -s LP3_D02 -r LP32 -a LP31 -P BASIC -e SELECT -m BASIC -j LONG -B SERVICE_TIME -z 10 -w 2
srvctl status service -s LP3_D01 -d LP3
Service LP3_D01 is not running.
srvctl start service -s LP3_D01 -d LP3
srvctl status service -s LP3_D01 -d LP3
Service LP3_D01 is running on instance(s) LP31
srvctl start service -s LP3_D02 -d LP3
srvctl status service -s LP3_D02 -d LP3
Service LP3_D02 is running on instance(s) LP32
[oracle@dell00755 ~]$ srvctl config service -d LP3
Service name: LP3_D01
Service is enabled
Server pool: LP3_LP3_D01
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SELECT
Failover method: BASIC
TAF failover retries: 10
TAF failover delay: 2
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: SERVICE_TIME
TAF policy specification: BASIC
Edition:
Preferred instances: LP31
Available instances: LP32
Service name: LP3_D02
Service is enabled
Server pool: LP3_LP3_D02
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SELECT
Failover method: BASIC
TAF failover retries: 10
TAF failover delay: 2
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: SERVICE_TIME
TAF policy specification: BASIC
Edition:
Preferred instances: LP32
Available instances: LP31
[oracle@dell00755 ~]$
col name format a15
col FAILOVER_METHOD format a10
col FAILOVER_TYPE format a10
select SERVICE_ID,NAME,FAILOVER_METHOD,FAILOVER_TYPE,FAILOVER_RETRIES,
GOAL,CLB_GOAL,ENABLED from dba_services;
SERVICE_ID NAME            FAILOVER_M FAILOVER_T GOAL         CLB_G ENA
---------- --------------- ---------- ---------- ------------ ----- ---
         1 SYS$BACKGROUND                        NONE         SHORT NO
         2 SYS$USERS                             NONE         SHORT NO
         3 LP3_D01         BASIC      SELECT     NONE         LONG  NO
         4 LP31                                               LONG  NO
         5 LP3XDB                                             LONG  NO
         6 LP3                                                LONG  NO
         7 LP3.WORLD                                          LONG  NO
         8 LP3_D02         BASIC      SELECT     NONE         LONG  NO
         9 LP32                                               LONG  NO

9 rows selected.
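For reference, a client-side tnsnames.ora entry for one of these services could take the following shape; the SCAN host name is a placeholder, not taken from this installation:

LP3_D01 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = lp3-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = LP3_D01)
    )
  )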
7 BRBACKUP of ASM DG using EMC BCV
Please set up the environment as per SAP Note 1598594 (BR*Tools configuration for Oracle installation under the "oracle" user).
Create control file aliases:
- alter diskgroup OLOG add alias '+OLOG/lp3/cntrlLP3.dbf' for '+OLOG/lp3/controlfile/current.261.760059081'
- alter diskgroup MLOG add alias '+MLOG/lp3/cntrlLP3.dbf' for '+MLOG/lp3/controlfile/current.261.760059081'
This procedure combines storage (EMC) actions and database administration actions. All storage-related actions are highlighted to differentiate them.
On production, each step is covered by a set of scripts that are run from the enterprise scheduler.
Steps in Brief:
1. Synchronize the device groups (establish)
2. On ASM, set rebalance power = 0
3. Archive the current log on both instances
4. Place the database in backup mode
5. Perform a consistent split for disk group DATADG
6. End backup mode on the database
7. Create control file backups on disk group ARCHDG
8. Perform a consistent split for disk group ARCHDG, which contains the archived logs and control file copies
9. Optionally back up the snapshot site to tape
Snapshot Procedure:
1. Prepare the remote host Oracle Homes (optional)
You may configure a remote site to mount the backup and check its consistency. This step is only required at implementation; afterwards it needs no maintenance unless there are structural changes.
a) Copy the ASM init.ora parameter file init+ASM.ora to the ASM home dbs directory
b) Copy the ASM password file orapw+ASM to the ASM home dbs directory
c) Copy the database init.ora parameter file initTSTASM1.ora to the Oracle home dbs directory
d) Copy the database password file orapwTSTASM1 to the Oracle home dbs directory
e) Update the hostname references in the parameter files to reflect the target hostname
f) Create dump directories for the database and ASM instances
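A minimal sketch of steps (a) through (f), assuming passwordless ssh as the oracle user and a backup host named backuphost (the host name and the $GRID_HOME/$ORACLE_HOME variables are assumptions):

# Run as the oracle user on the production host
scp $GRID_HOME/dbs/init+ASM.ora backuphost:$GRID_HOME/dbs/
scp $GRID_HOME/dbs/orapw+ASM backuphost:$GRID_HOME/dbs/
scp $ORACLE_HOME/dbs/initTSTASM1.ora backuphost:$ORACLE_HOME/dbs/
scp $ORACLE_HOME/dbs/orapwTSTASM1 backuphost:$ORACLE_HOME/dbs/
# Then, on backuphost: update the hostname references in the copied
# parameter files and create the dump directories for both instances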
2. Identify the LUNs in the ASM disk groups (DATADG and ARCHDG):
This information must be passed to the storage manager in order to configure the Symmetrix device groups. It is required only at the implementation stage; afterwards it needs to be updated only if there are structural changes.
GROUP_NUMBER PATH            NAME
------------ --------------- -------
           1 /dev/emcpowera  ARCHDG
           3 /dev/emcpowere  DATADG
           3 /dev/emcpowerg  DATADG
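The mapping above can be regenerated at any time with a query along these lines against the ASM instance (standard v$asm_* views; connect as SYSASM):

select g.group_number, d.path, g.name
  from v$asm_disk d, v$asm_diskgroup g
 where d.group_number = g.group_number
 order by g.group_number, d.path;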
3. Create Symmetrix Device Groups and Associate BCV devices:
Two groups are required because the group containing the redo logs must be split after the group containing the data.
4. Establish (synchronize) The Device Groups:
Make sure the device groups are established.
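With EMC TimeFinder/Mirror this is typically done with symmir; the device group name below is illustrative, not taken from this installation:

# Full establish of the standard/BCV pairs
symmir -g datadg_dg establish -full
# Poll until all devices report Synchronized
symmir -g datadg_dg query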
5. Online Backup on Production Host:
Turn rebalance power to 0 on the ASM instance:
a) ALTER DISKGROUP data REBALANCE POWER 0 WAIT;
Force a log switch using archive log current:
b) ALTER SYSTEM ARCHIVE LOG CURRENT;
Place the database in backup mode:
c) ALTER DATABASE BEGIN BACKUP;
Verify with: select file#, status from v$backup;
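As noted above, each of these steps is scripted on production and driven by the enterprise scheduler. A minimal sketch of the database-side portion follows; the database SID and the DATADG disk group name come from this document, while the ASM SID and local OS authentication are assumptions:

#!/bin/sh
# Step (a): stop ASM rebalancing so the split captures a quiescent disk group
# (run with the Grid Infrastructure environment set; SID +ASM1 is assumed)
export ORACLE_SID=+ASM1
sqlplus -s "/ as sysasm" <<'EOF'
ALTER DISKGROUP DATADG REBALANCE POWER 0 WAIT;
EOF

# Steps (b) and (c): archive the current log and enter backup mode
# (run with the database environment set)
export ORACLE_SID=LP31
sqlplus -s "/ as sysdba" <<'EOF'
ALTER SYSTEM ARCHIVE LOG CURRENT;
ALTER DATABASE BEGIN BACKUP;
-- every datafile should now report STATUS = ACTIVE
SELECT file#, status FROM v$backup;
EOF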
6. Perform a Consistent Split Snapshot for Database Files:
Perform a point-in-time snapshot of DATADG.
7. End Backup Mode:
d) Take the database out of backup mode
e) Force a log switch
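In SQL these two steps are the standard commands (shown here for completeness):

ALTER DATABASE END BACKUP;
ALTER SYSTEM ARCHIVE LOG CURRENT;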
8. Create Backup of control files:
f) Create two control file backups, "control_start" and "control_backup", on ARCHDG
g) Prepare an init.ora pointing the control file to control_start on ARCHDG, if not already prepared in step 1
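Step (f) can be performed with standard control file backups; the ASM file names below are illustrative, following the cntrlLP3.dbf alias convention used earlier in this section:

ALTER DATABASE BACKUP CONTROLFILE TO '+ARCHDG/lp3/control_start.dbf' REUSE;
ALTER DATABASE BACKUP CONTROLFILE TO '+ARCHDG/lp3/control_backup.dbf' REUSE;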
9. Perform a Consistent Split Snapshot of the Recovery Area to Capture the Archived Logs:
h) Perform a snapshot of the volumes containing the archived logs (ARCHDG)
10. Start the ASM Instance on the Backup Host and Perform Checks (Optional):
If you configured a backup server to check the R2 copy, follow these steps:
i) Start the ASM instance on the backup host
j) Check that all disk groups are successfully mounted
k) Start and mount the database using the parameter file prepared earlier, which points to control_start
l) Recover and open the database read-only:
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
alter database open read only;
Optional Steps to Back Up the Backup:
m) Optionally back up the EMC snapshots of the R2 copy
n) Optionally make EMC snapshots of the R2 to be used as reporting instances or development/test environments
o) Optionally back up the R2 to tape using RMAN
8 References
Oracle Paper: Configuration of SAP NetWeaver for Oracle Grid Infrastructure 11.2.0.2 and Oracle Real Application Clusters 11g Release 2: A Best Practices Guide
Oracle Paper: SAP Databases on Oracle Automatic Storage Management 11g Release 2
Dell Paper: Implementation of SAP Applications Using Oracle Maximum Availability Architecture Best Practices
Homogeneous and Heterogeneous System Copy for SAP Systems Based on SAP Web Application Server 6.20, Document Version 1.25
Migration Monitor User's Guide
Package Splitter User's Guide
SAP Notes:
SAP Note Number    Title
355771 Oracle: Explanation of the new tablespace layout
527843 Oracle RAC support in the SAP environment
784118 System Copy Java Tools
971646 6.40 Patch Collection Hom./Het.System Copy ABAP
936441 Oracle settings for R3load based system copy
954268 Optimization of export: Unsorted unloading
1043380 Efficient Table Splitting for Oracle Databases
1045847 ORACLE Direct Path Load Support in R3load
1048303 Red Hat Enterprise Linux 5.x: Installation and upgrade
1172419 Linux: Supported Java versions on the x86_64 platform
1122387 Linux: SAP Support in virtualized environments
1122388 Linux: VMware ESX 3.x or vSphere configuration guidelines
1398634 Oracle database 11g: Integration in SAP environment
1478351 Oracle 11g Support on UNIX 6.40-Based SAP Systems
1431800 Oracle 11.2.0 Central Technical Note
1524205 Oracle 11.2.0: Database Software Installation
1550133 Oracle Automatic Storage Management (ASM)
And many more.
9 Authors
Customer Success Story: Migrating a Traditional Physical Server Configuration to a Dell SAP High Availability (HA) Tiered Private Cloud Architecture
May 2012
Author: Mahesh Pakala
Mahesh Pakala is a highly experienced information technology and media industry professional based in Silicon Valley. He has held various senior management roles in the industry and serves on the advisory panels of companies and venture firms as a technology and business consultant.
He works in the GICS (Global Infrastructure Consulting Services) group of Dell Inc. and assists large enterprise customers with HA architectures for SaaS/cloud implementations. He has extensive experience in engineering, media, and technology with companies such as Oracle Corporation (System Performance Group & RDBMS Kernel Escalations), Ingres (Computer Associates), Fujitsu, and startups. He has been a speaker on high availability and system architecture at various conferences.
For more information, contact: [email protected]
Many thanks to my esteemed colleagues, whose in-depth knowledge, contributions, and involvement made this project possible: Mr. JungFeng Chen (SAP wizard), Matteo Mazzari (VMware wizard), Alebardi Luca (PM wizard), Antonio Apollonio (Services wizard), and Emanuele Riva (fearless leader), who assisted with the SAP platform migration implementation on Linux/Dell for a major mission-critical application.
Special thanks to our customer for their confidence and trust in our team's end-to-end delivery capabilities!
Reviewed by: Jan Klokkers (Oracle Corp), Ron Pewitz, Morten Loderup, Nathan Saunders, Thorsten Staerk, and Scott Lee
Copyright 2012, Dell Inc. All rights reserved.
This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.
SAP, Oracle, VMware, Redwood, Tibco, TOPCALL, and Dell are registered trademarks of SAP, Oracle Corporation, VMware, Redwood, Tibco, and Dell and/or their affiliates. Other names may be trademarks of their respective owners.
This document is intended to address migrating a database only. Regardless of the method chosen to migrate to a new platform, there are additional areas that must be considered to ensure a successful transition to the new platform, such as understanding platform-specific features and changes in the Oracle, SAP and Redwood software. Refer to the platform-specific installation guides, release notes, and READMEs for details.