© 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.

White Paper

SAP NetWeaver Deployment Using Oracle Database on Cisco UCS

July 2015


Contents

Executive Summary
    Purpose of This Document
    Benefits of the Configuration

Solution Overview
    Cisco Unified Computing System
    Cisco Nexus 5548UP Switch
    Oracle Database 12c
    SAP NetWeaver 7.0
    SUSE Linux Enterprise Server 12 for SAP Applications
    Network File System Storage

Design Topology
    Hardware and Software Used for This Solution
    Cisco UCS Networking and NFS Storage Topology

High-Level Cisco UCS Manager Configuration
    Configure Fabric Interconnects for Blade Discovery
    Recommended Configuration for Oracle Database, VLANs, and vNICs
    Configure the LAN and SAN
    Configure Jumbo Frames in the Cisco UCS Fabric
    Create QoS Policies in the Cisco UCS Fabric
    Configure Ethernet Uplink PortChannels
    Create Local Disk Configuration Policy (Optional)
    Create FCoE Boot Policies
    Create Service Profiles and Associate Them with Blade Servers
    Set Global Configurations to Enable Jumbo Frames, Create QoS, and Create vPCs on Cisco Nexus 5548UP
    Configure Cisco UCS Servers and Stateless Computing Using FCoE Boot

SUSE Linux Server Configuration and Recommendations
    Recommendations
    Configure the Directory Structure and NFS Volumes for SAP Systems

SAP NetWeaver Installation
    Install the SAP NetWeaver Application

Direct NFS Client for Oracle Database Configuration
    Configure Direct NFS Client

Destructive and Hardware Failover Testing

Conclusion

For More Information


Executive Summary

This Cisco® reference architecture describes how to use the Cisco Unified Computing System™ (Cisco UCS®) in

conjunction with network-attached storage (NAS) to implement SAP applications (SAP NetWeaver) on Oracle

Database. Cisco UCS provides the computing, networking, and storage-access components of the cluster,

deployed as a single cohesive system. The result is an implementation that addresses many of the challenges that

system administrators and their IT departments face today, including needs for a simplified deployment and

operating model, high performance for SAP applications on Oracle Database, and lower total cost of ownership

(TCO). This document introduces the Cisco UCS and SAP NetWeaver on Oracle Database solution and provides

instructions for implementing it.

Historically, enterprise SAP systems have run on costly symmetric multiprocessing servers that use a vertical

scaling, or scale-up, model. However, as the cost of 1-to-4-socket x86-architecture servers continues to drop while

their processing power increases, a new model has emerged. SAP NetWeaver uses a horizontal scaling, or scale-

out, model, in which the active-active application servers uses multiple servers, each contributing its processing

power to the SAP application, increasing performance, scalability, and availability. The active-active SAP

application servers balance the workload across the servers and can provide continuous availability in the event of

a failure.

One approach used by storage, system, and application administrators to meet the I/O performance needs of

applications is to deploy high-performance drives with faster CPUs. This solution may be effective in environments

with a small number of application users and little movement in hot data sets. However, as the number of application users increases, demanding more computing power, and frequently accessed data sets change constantly,

it becomes increasingly difficult to identify data based on access frequency and redistribute it to the correct storage

media.

As global technology leaders, Cisco and SAP are uniquely positioned to provide high-quality, innovative products

to customers. Together, Cisco and SAP offer differentiated, scalable, highly secure end-to-end solutions. With SAP

applications on Cisco UCS, you can reduce deployment risk, complexity, and TCO. With this solution, you can

transform the way people connect, communicate, and collaborate.

The Cisco UCS server platform, introduced in 2009, provides a new model for data center efficiency and agility.

Cisco UCS is designed with the performance and reliability needed to power memory-intensive, mission-critical

applications, as well as virtualized workloads. With SAP applications on Cisco UCS, you can help ensure that all

these benefits are delivered to your organization.

Purpose of This Document

This document provides design guidance for implementing SAP applications (SAP NetWeaver) on Cisco UCS,

Cisco Nexus® Family switches, and external storage. This guidance can help field engineers and customers make

decisions when implementing SAP applications on Oracle Database using Cisco UCS. The document describes

the virtual LAN (VLAN), virtual network interface card (vNIC), virtual SAN (VSAN), virtual host bus adapter (vHBA),

PortChannel, and quality-of-service (QoS) requirements and configurations needed to help ensure the stability,

performance, and resiliency demanded by mission-critical data center deployments.

Benefits of the Configuration

The history of enterprise computing has been marked by compromises between scalability and simplicity. As

systems increased in scale, they also increased in complexity. And as complexity increased, so did the expense of

deployment and ongoing management.


Today, more than 70 percent of the IT budget is spent simply to maintain and manage existing infrastructure. IT

organizations must continually increase resources to maintain a growing, complex, and inflexible infrastructure

instead of using their resources to rapidly and effectively respond to business needs.

IT organizations are working with their business counterparts to identify ways to substantially decrease cost of

ownership while increasing IT business value. Cisco UCS helps address these challenges by simplifying data

center resources, scaling service delivery, and radically reducing the number of devices requiring setup,

management, power and cooling, and cabling.

Cisco UCS can deliver these benefits through:

● Reducing TCO at the platform, site, and organizational levels

● Increasing IT staff productivity and business agility through just-in-time provisioning and mobility support for

both virtualized and nonvirtualized environments

● Enabling scalability through a design for up to 320 discrete servers and thousands of virtual machines in a

single highly available management domain

● Using industry standards supported by a partner ecosystem of innovative, trusted industry leaders

The benefits of using SAP applications with Oracle Database on Cisco UCS include the following:

● High availability for the SAP application stack

● High availability for Oracle Database

● Stateless computing and easy deployment model

● Prioritization of network bandwidth using QoS policy

Solution Overview

This solution provides a high-level architecture with Cisco UCS, SAP, and Oracle technologies that demonstrate

the implementation of SAP applications with Oracle Database on Cisco UCS using Network File System (NFS)

storage.

This solution uses the following infrastructure and software components:

● Cisco Unified Computing System*

● Cisco Nexus 5548UP Switches

● NFS storage components

● SAP NetWeaver

● Oracle Database

● SUSE Linux

*Cisco Unified Computing System includes all the hardware and software components required for this deployment solution.

Figure 1 shows the architecture and the connectivity layout for this deployment model.


Figure 1. Solution Architecture

[Topology diagram: Cisco UCS 6248 Fabric Interconnects A and B, each uplinked over 2 x 10-Gbps Ethernet to Cisco Nexus 5548UP Switches A and B; vPCs carry public, private, storage, and backup-and-restore traffic to NFS storage 1 and 2; two Cisco UCS 5108 chassis with Cisco UCS 2204XP IOMs house Cisco UCS B200 M3 blades running the Oracle DB server, the Oracle DB-DR server, and SAP application servers 1 and 2.]

This section describes the individual components that define this architecture.


Cisco Unified Computing System

Cisco UCS is a third-generation data center platform that unites computing, networking, storage access, and

virtualization resources into a cohesive system designed to reduce TCO and increase business agility (Figure 2).

Figure 2. Cisco UCS Third-Generation Computing: The Power of Unification

[Diagram: Cisco UCS VIC 1340 and 1380 adapters with Cisco UCS 6248 Fabric Interconnects.]


The system integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-

architecture servers. The system is an integrated, scalable, multichassis platform in which all resources participate

in a unified management domain that is controlled and managed centrally (Figure 3).

Figure 3. Cisco UCS Unites Computing, Networking, Storage Access, and Virtualization Resources into a Cohesive System

The main components of Cisco UCS (Figure 4) are summarized here:

● Computing: The system is based on an entirely new class of computing system that incorporates blade

servers based on Intel® Xeon® processor E5-2600 series CPUs. Cisco UCS B-Series Blade Servers work

with virtualized and nonvirtualized applications to increase performance, energy efficiency, flexibility, and

productivity.

● Networking: The system is integrated onto a low-latency, lossless, 80-Gbps unified network fabric. This

network foundation consolidates LANs, SANs, and high-performance computing (HPC) networks, which in

traditional systems are separate networks. The unified fabric lowers costs by reducing the number of

network adapters, switches, and cables and by decreasing the power and cooling requirements.

● Storage access: The system provides consolidated access to both SAN and NAS over the unified fabric.

By unifying storage access, Cisco UCS can access storage over Ethernet, Fibre Channel, Fibre Channel

over Ethernet (FCoE), and Small Computer System Interface over IP (iSCSI). This capability gives

customers options for setting storage access and provides investment protection. Additionally, server

administrators can reassign storage-access policies for system connectivity to storage resources, thereby

simplifying storage connectivity and management for increased productivity.

● Management: The system uniquely integrates all system components, which enables the entire solution to

be managed as a single entity by Cisco UCS Manager. The manager has an intuitive GUI, a command-line

interface (CLI), and a robust API for managing all system configuration and operations.


Figure 4. Cisco UCS Components

Cisco UCS is designed to deliver:

● Reduced TCO, increased return on investment (ROI), and increased business agility

● Increased IT staff productivity through just-in-time provisioning and mobility support

● A cohesive, integrated system that unifies the technology in the data center, with the system managed,

serviced, and tested as a whole

● Scalability through a design for hundreds of discrete servers and thousands of virtual machines and the

capability to scale I/O bandwidth to match demand

● Industry standards supported by a partner ecosystem of industry leaders


Cisco UCS Blade Chassis

The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of Cisco UCS, delivering a scalable

and flexible blade server chassis.

The Cisco UCS 5108 Blade Server Chassis is six rack units (6RU) high and can mount in an industry-standard 19-

inch rack. A single chassis can house up to eight half-width Cisco UCS B-Series Blade Servers and can

accommodate both half-width and full-width blade form factors (Figure 5).

Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power

supplies are 92 percent efficient and can be configured to support nonredundant, N+1 redundant, and grid-

redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors (one

per power supply), and two I/O bays for Cisco UCS 2204XP Fabric Extenders.

A passive midplane provides up to 40 Gbps of I/O bandwidth per server slot and up to 80 Gbps of I/O bandwidth

for two slots. The chassis is capable of supporting future 80 Gigabit Ethernet standards.

Figure 5. Cisco Blade Server Chassis (Front, Rear, and Populated with Blade Servers)

[Photos: Cisco UCS 5108 front view; Cisco UCS 5108 rear view with fabric extenders and fans; Cisco UCS 5108 populated with B200 M3 and B200 M2 blade servers.]


Cisco UCS B200 M3 Blade Server

The Cisco UCS B200 M3 Blade Server (Figure 6) is a half-width, 2-socket blade server. The system uses two Intel

Xeon processor E5-2600 series CPUs, up to 384 GB of DDR3 memory, two optional hot-swappable small form-

factor (SFF) serial-attached SCSI (SAS) disk drives, and two virtual interface card (VIC) adapters, providing up to

80 Gbps of I/O throughput. The server balances simplicity, performance, and density for production-level

virtualization and other mainstream data center workloads.

Figure 6. Cisco UCS B200 M3 Blade Server


Cisco UCS Virtual Interface Card 1340

The Cisco UCS VIC 1340 (Figure 7) is a 2-port, 40 Gigabit Ethernet, FCoE-capable modular LAN on motherboard

(mLOM) mezzanine adapter. This innovation is designed exclusively for the B200 M3 and M4 generation of Cisco

UCS B-Series Blade Servers.

Figure 7. Cisco UCS VIC 1340

The VIC provides the capability to define, create, and use interfaces on demand, providing a stateless and agile

server infrastructure. The personality of the card is determined dynamically at boot time using the service profile

associated with the server. The service profile is used to determine the number of PCI Express (PCIe) interfaces,

their type (vNIC or vHBA), identity (MAC address) and worldwide name (WWN), failover policy, bandwidth, and

QoS.


The VIC also offers next-generation data center features. The hardware classification engine provides support for

advanced data center requirements, including:

● Stateless network offloads for Virtual Extensible LAN (VXLAN) and Network Virtualization Using Generic

Routing Encapsulation (NVGRE)

● Low-latency features of the Cisco user-space NIC (usNIC)

● High-bandwidth protocol Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE)

● Performance-optimization applications such as Virtual Machine Queue (VMQ), Intel Data Plane

Development Kit (DPDK), and NetFlow

Cisco UCS 6248UP Fabric Interconnect

The fabric interconnects provide a single point for connectivity and management for the entire system. Typically

deployed as an active-active pair, the system’s fabric interconnects integrate all components into a single, highly

available management domain controlled by Cisco UCS Manager. The fabric interconnects manage all I/O

efficiently and securely at a single point, resulting in deterministic I/O latency regardless of a server or virtual

machine’s topological location in the system.

Cisco UCS 6200 Series Fabric Interconnects support the system’s 80-Gbps unified fabric with low-latency,

lossless, cut-through switching that supports IP, storage, and management traffic using a single set of cables. The

fabric interconnects provide virtual interfaces that terminate both physical and virtual connections equivalently,

establishing a virtualization-aware environment in which blade servers, rack servers, and virtual machines are

interconnected using the same mechanisms. The Cisco UCS 6248UP (Figure 8) is a 1RU fabric interconnect with

up to 48 universal ports that can support 1 and 10 Gigabit Ethernet, FCoE, and native Fibre Channel connectivity.

Figure 8. Cisco UCS 6248UP Fabric Interconnect (Front and Rear Views)

Cisco UCS Manager

Cisco UCS Manager is an embedded, unified manager that provides a single point of management for Cisco UCS.

The manager can be accessed through an intuitive GUI, a CLI, or the comprehensive open XML API. It manages

the physical assets of the servers as well as storage and LAN connectivity. It is designed to simplify the management of

virtual network connections through integration with several major hypervisor vendors. It provides IT departments

with the flexibility to allow people to manage the system as a whole, or to assign specific management functions to

individuals based on their roles as managers of server, storage, or network hardware assets. It simplifies

operations by automatically discovering all the components available in the system and enabling a stateless model

for resource use.
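
As a brief illustration, the XML API can be exercised with any HTTP client. The sketch below (the host name and credentials are placeholders) opens a management session by posting an aaaLogin request to the Cisco UCS Manager endpoint:

    $ curl -d '<aaaLogin inName="admin" inPassword="password" />' http://ucsm.example.com/nuxml

The response contains an outCookie value that subsequent query and configuration requests present for authentication.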


Some of the main elements managed by Cisco UCS Manager are:

● Cisco UCS Integrated Management Controller (IMC) firmware

● RAID controller firmware and settings

● BIOS firmware and settings, including server universal user ID (UUID) and boot order

● Converged network adapter (CNA) firmware and settings, including MAC addresses and WWNs and SAN

boot settings

● Virtual port groups used by virtual machines, using Cisco Data Center Virtual Machine Fabric Extender

(VM-FEX) technology

● Interconnect configuration, including uplink and downlink definitions, MAC address and WWN pinning,

VLANs, VSANs, QoS, bandwidth allocations, Data Center VM-FEX settings, and EtherChannels to

upstream LAN switches

Cisco UCS is designed from the foundation to be programmable and self-integrating. A server’s entire hardware

stack, ranging from server firmware and settings to network profiles, is configured through model-based

management. With Cisco VICs, even the number and type of I/O interfaces are programmed dynamically, making

every server ready to power any workload at any time.

With model-based management, administrators manipulate a desired system configuration and associate a

model’s policy-based service profiles with hardware resources, and the system configures itself to match the

requirements. This automation accelerates provisioning and workload migration with accurate and rapid scalability.

The result is increased IT staff productivity, improved compliance, and reduced risk of failure due to inconsistent

configurations. This approach represents a radical simplification compared to traditional systems, reducing capital

expenditures (CapEx) and operating expenses (OpEx) while increasing business agility, simplifying and

accelerating deployment, and improving performance.

Cisco UCS Service Profiles

A server’s identity is made up of many properties such as UUID, boot order, IPMI settings, BIOS firmware, BIOS

settings, RAID settings, disk scrub settings, number of NICs, NIC speed, NIC firmware, MAC and IP addresses,

number of HBAs, HBA WWNs, HBA firmware, Fibre Channel fabric assignments, QoS settings, VLAN

assignments, and remote keyboard, video, and mouse (KVM) settings. This entire long list of configuration points needs

to be configured to give the server its identity and make it unique compared to every other server in your data

center. Some of these parameters are stored in the hardware of the server itself (BIOS firmware version, BIOS

settings, boot order, Fibre Channel boot settings, etc.), and some settings are stored on your network and storage

switches (VLAN assignments, Fibre Channel fabric assignments, QoS settings, access control lists [ACLs], etc.). In

a traditional configuration process (Figure 9), this complexity results in the following server deployment challenges:

● Long deployment cycles

◦ Coordination among server, storage, and network teams required for every deployment

◦ Need to verify correct firmware and settings for hardware components

◦ Need for appropriate LAN and SAN connectivity

● Long response time to address business needs

◦ Tedious deployment processes

◦ Manual, error-prone processes that are difficult to automate


◦ High OpEx and outages caused by human errors

● Limited OS and application mobility

◦ Storage and network settings tied to physical ports and adapter identities

◦ Static infrastructure that leads to overprovisioning and higher OpEx

Figure 9. Traditional Approach to Provisioning

Cisco UCS uniquely addresses these challenges with the introduction of service profiles that enable integrated,

policy-based infrastructure management. Service profiles contain the details for nearly all configurable parameters

required to set up a physical server. A set of user-defined policies (rules) allow quick, consistent, repeatable, and

secure deployment of Cisco UCS servers (Figure 10).


Figure 10. Cisco UCS Service Profiles Provide Integrated Policy-Based Infrastructure Management

Cisco UCS service profiles contain values for a server's property settings, including vNICs, MAC addresses, boot

policies, firmware policies, fabric connectivity, external management, and high-availability information. These

settings are abstracted from the physical server to a service profile, and the service profile can then be deployed to

any physical computing hardware within the Cisco UCS domain. Furthermore, service profiles can, at any time, be

migrated from one physical server to another. This logical abstraction of the server personality eliminates the

dependency on the hardware type or model and is a result of Cisco’s unified fabric (rather than a fabric with

software tools overlaid on top).

This innovation is still unique in the industry despite competitors’ claims of offering similar capabilities. In most

cases, these vendors rely on several different methods and interfaces to configure these server settings.

Furthermore, Cisco is the only hardware provider to offer a truly unified management platform, with Cisco UCS

service profiles and hardware abstraction capabilities extending to both blade and rack servers.

Some of the main features and benefits of service profiles are discussed in the following sections.

Service Profiles and Templates

A service profile contains configuration information about the server hardware, interfaces, fabric connectivity, and

server and network identity. Cisco UCS Manager provisions servers using service profiles. The manager

implements role-based and policy-based management focused on service profiles and templates. A service profile

can be applied to any blade server to provision it with the characteristics required to support a specific software

stack. A service profile allows server and network definitions to move within the management domain, enabling

flexibility in the use of system resources.

Service profile templates are stored in the Cisco UCS 6200 Series Fabric Interconnects for reuse by server,

network, and storage administrators. Service profile templates consist of server requirements and the associated

LAN and SAN connectivity. Service profile templates allow different classes of resources to be defined and applied

to a number of resources, each with its own unique identities assigned from predetermined pools.


Cisco UCS Manager can deploy the service profile on any physical server at any time. When a service profile is

deployed to a server, the manager automatically configures the server, adapters, fabric extenders, and fabric

interconnects to match the configuration specified in the service profile. A service profile template parameterizes

the UIDs that differentiate server instances.

This automation of device configuration reduces the number of manual steps required to configure servers, NICs,

HBAs, and LAN and SAN switches.

Programmatically Deployed Server Resources

Cisco UCS Manager provides centralized management capabilities, creates a unified management domain, and

serves as the central nervous system of Cisco UCS. Cisco UCS Manager is embedded device management

software that manages the entire system as a single logical entity through an intuitive GUI, CLI, or XML API. The

manager implements role- and policy-based management using service profiles and templates. This construct

improves IT productivity and business agility. Now infrastructure can be provisioned in minutes instead of days,

shifting IT’s focus from maintenance to strategic initiatives.

Dynamic Provisioning

Cisco UCS resources are abstract in the sense that their identity, I/O configuration, MAC addresses and WWNs,

firmware versions, BIOS boot order, and network attributes (including QoS settings, ACLs, pin groups, and

threshold policies) all are programmable using a just-in-time deployment model. A service profile can be applied to

any blade server to provision it with the characteristics required to support a specific software stack. A service

profile allows server and network definitions to move within the management domain, enabling flexibility in the use

of system resources. Service profile templates allow different classes of resources to be defined and applied to a

number of resources, each with its own unique identities assigned from predetermined pools.

Cisco Nexus 5548UP Switch

The Cisco Nexus 5548UP (Figure 11) is a 1RU 1 and 10 Gigabit Ethernet switch offering throughput of up to 960

gigabits per second and scaling up to 48 ports. It offers thirty-two 1 and 10 Gigabit Ethernet fixed enhanced Small

Form-Factor Pluggable (SFP+) Ethernet and FCoE or 1-, 2-, 4-, and 8-Gbps native Fibre Channel unified ports and

one expansion slot. The expansion slot can be configured with a combination of Ethernet and FCoE ports and native Fibre Channel ports.

Figure 11. Cisco Nexus 5548UP Switch

The Cisco Nexus 5548UP delivers innovative architectural flexibility, infrastructure simplicity, and business agility,

with support for networking standards. For traditional, virtualized, unified, and HPC environments, it offers a long

list of IT and business advantages, including the following:

● Architectural flexibility

◦ Unified ports that support traditional Ethernet, Fibre Channel, and FCoE

◦ Synchronized system clocks with accuracy to less than one microsecond, based on IEEE 1588

◦ Support for secure encryption and authentication between two network devices, based on Cisco

TrustSec® security and IEEE 802.1AE


◦ Converged fabric extensibility, based on emerging standard IEEE 802.1BR, with Cisco Fabric Extender

Technology (FEX Technology) portfolio, including:

− Cisco Nexus 2000 Series Fabric Extenders

− Cisco Adapter FEX

− Cisco Data Center VM-FEX

● Infrastructure simplicity

◦ Common high-density, high-performance, data-center-class, fixed-form-factor platform

◦ Consolidation of LAN and storage

◦ Support for any transport over an Ethernet-based fabric, including Layer 2 and Layer 3 traffic

◦ Support for storage traffic, including iSCSI, NAS, Fibre Channel, RoE, and IBoE

◦ Reduced number of management points with FEX Technology

● Business agility

◦ Support for diverse data center deployments on one platform

◦ Rapid migration and transition for traditional and evolving technologies

◦ Performance and scalability to meet growing business needs

● Specifications at a glance

◦ 1RU 1 and 10 Gigabit Ethernet switch

◦ 32 fixed unified ports on the base chassis plus one expansion slot, totaling 48 ports

◦ Expansion slot that can support any of three module types: unified ports; 1-, 2-, 4-, and 8-Gbps native Fibre Channel; and Ethernet and FCoE

◦ Throughput of up to 960 Gbps

Oracle Database 12c

Oracle Database 12c introduces a new multitenant architecture that makes it easy to consolidate many databases

quickly and manage them as a cloud service. Database 12c also includes in-memory data processing capabilities,

delivering outstanding analytical performance. Additional database innovations deliver new levels of efficiency,

performance, security, and availability. Database 12c comes in three editions to fit your business needs and

budget: Enterprise Edition, Standard Edition, and Standard Edition One.

Oracle Multitenant is a new option for Database 12c Enterprise Edition that helps customers reduce IT costs by

simplifying consolidation, provisioning, upgrades, and more. It is supported by a new architecture that allows a

container database to hold many pluggable databases. And it fully complements other options, including Oracle

Real Application Clusters (RAC) and Oracle Active Data Guard. An existing database can be simply adopted, with

no change, as a pluggable database; no changes are needed in the other tiers of the application. The benefits of

Oracle Multitenant include the following:

● High consolidation density: The many pluggable databases in a single container database share its memory

and background processes, letting you operate many more pluggable databases on a particular platform

than you can run as separate databases under the old architecture. This is the same benefit that schema-based

consolidation offers. But there are significant barriers to the adoption of schema-based consolidation, and

this model causes ongoing operating problems. The new architecture eliminates these adoption barriers

and operating problems.


● Rapid provisioning and cloning using SQL: A pluggable database can be unplugged from one container

database and plugged into another. Alternatively, you can clone one within the same container database or

from one container database to another. These operations, together with creation of a pluggable database,

are performed with new SQL commands and take just seconds. When the underlying file system supports

thin provisioning, many terabytes can be cloned almost instantaneously simply by using the keyword

snapshot in the SQL command (see the SQL sketch after this list).

● New models for rapid patching and upgrades: The time and effort invested to patch one container database

patches all of that container's pluggable databases. To patch a single pluggable database, you simply unplug it and plug it in to a container database running a different Oracle Database software version.

● Manage many databases as one: By consolidating existing databases as pluggable databases,

administrators can manage many databases as one. For example, tasks like backup and disaster recovery

are performed at the container database level.

● Dynamic pluggable database resource management: Oracle Database 12c Resource Manager is extended

with specific functions to instantly control the competition between the pluggable databases within the

container database.
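
To make these operations concrete, the following SQL sketch (the PDB names and staging path are illustrative) clones a pluggable database and unplugs one for relocation:

    SQL> CREATE PLUGGABLE DATABASE pdb_clone FROM pdb_source SNAPSHOT COPY;
    SQL> ALTER PLUGGABLE DATABASE pdb_source CLOSE IMMEDIATE;
    SQL> ALTER PLUGGABLE DATABASE pdb_source UNPLUG INTO '/stage/pdb_source.xml';

The SNAPSHOT COPY clause delivers the near-instantaneous cloning described above only when the underlying file system supports thin provisioning, and the unplugged manifest file can then be plugged in elsewhere with CREATE PLUGGABLE DATABASE ... USING.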

Oracle Database 12c Direct NFS Client

Direct NFS Client is a client developed, integrated, and optimized by Oracle. It runs in the user space rather than

within the operating system kernel. This architecture provides enhanced scalability and performance compared to

traditional NFS Version 3 (NFSv3) clients. Unlike traditional NFS implementations, Direct NFS supports

asynchronous I/O across all operating system environments. In addition, performance and scalability are

dramatically improved with its automatic link aggregation feature. This feature allows the client to scale across up

to four individual network pathways with the added benefit of improved resiliency when network connectivity is

occasionally compromised. It also allows Direct NFS Client to achieve near-block-level performance.
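
On Linux, Direct NFS Client ships with the database software and is enabled by relinking the Oracle binary; a minimal sketch (run as the Oracle software owner, with the database instance shut down) is:

    $ cd $ORACLE_HOME/rdbms/lib
    $ make -f ins_rdbms.mk dnfs_on

The corresponding dnfs_off target reverts the database to the operating system's kernel NFS client.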

SAP NetWeaver 7.0

SAP NetWeaver is SAP’s primary technology computing platform and the technical foundation for many SAP

applications. It is a solution stack of SAP's technology products. The SAP Web Application Server (sometimes

referred to as WebAS) is the runtime environment for the SAP applications, and all the mySAP Business Suite

solutions (Supplier Relationship Management [SRM], Customer Relationship Management [CRM], Supply Chain

Management [SCM], Product Lifecycle Management [PLM], and Enterprise Resource Planning [ERP]) run on SAP

WebAS.

The NetWeaver technology platform provides the shared technology foundation for SAP business applications. The

foundation components that are part of NetWeaver provide infrastructure support for the creation, extension,

deployment, and management of SAP applications across the development lifecycle. The components also enable

the extension of SAP applications into new solution areas through a large partner ecosystem of experienced

developers.


NetWeaver offers:

● Reliable and scalable application server infrastructure for industry-leading business applications

● Proven robustness through around 100,000 productive installations worldwide

● Design support for team development in the Advanced Business Application Programming (ABAP) and

Java programming languages based on open standards

● Lifecycle management and operations functions to reduce cost of ownership

● Vast partner ecosystem providing software enhancements and support for best practices

SAP NetWeaver Application Server ABAP is one of the major foundation components of SAP’s technology platform

powering the vast majority of SAP business applications.

The product is marketed as a service-oriented application and integration platform. It can be used for custom

development and integration with other applications and systems and is built primarily using the ABAP

programming language, but also uses C (programming language), C++, and Java EE. It can also be extended with,

and interoperate with, technologies such as Microsoft .NET, Java EE, and IBM WebSphere.

SUSE Linux Enterprise Server 12 for SAP Applications

SUSE Linux Enterprise Server (SLES) 12 for SAP Applications is based on SUSE Linux Enterprise 12, a versatile

server operating system for deploying highly available enterprise-class IT services in mixed IT environments with

best-in-class performance and reduced risk.

SLES is a highly reliable, scalable, and secure server operating system, built to power mission-critical workloads in

physical, virtual, and cloud environments.

With this affordable, interoperable, and manageable open-source foundation, enterprises can cost-effectively

deliver core business services, enable secure networks, and easily manage their heterogeneous IT resources,

increasing efficiency and value. It is also SAP’s own development platform for Linux, and SAP and SUSE have a

close joint testing and development relationship that starts at the SAP Linux Lab in Germany. The result: for SAP

workloads on Linux, there is no smarter choice.

SLES 12 for SAP Applications offers the following benefits to enterprise customers running mission-critical SAP

workloads:

● Full operating system rollback: The Full System Rollback feature gives SAP customers better resiliency by

allowing them to take snapshots of the system, including the kernel files, and roll back if needed. Because

the bootloader is now integrated into the rollback process, system administrators can boot from a snapshot,

enhancing fault recovery, change tracking and comparison, and data safety.

● Ready for SUSE Linux Enterprise Live Patching: SLES 12 includes the infrastructure for a live kernel patching

technology delivered through SUSE Linux Enterprise Live Patching. With it, SAP customers can update security

patches without rebooting their machines and without waiting for the next service window. This feature improves

the availability of mission-critical SAP workloads and virtual hosts, which increases business uptime.

● Extensions for clustering: The SUSE Linux Enterprise High Availability Extension offers an industry-leading,

mature, high-availability solution that enhances business continuity through easier monitoring capabilities; a

mature, fifth-generation Pacemaker-based high-availability solution; and an update to the latest Relax and Recover

(ReaR) version.


● Hardware enablement: SLES 12 for SAP Applications uses the hardware enablement provided by SUSE Linux

Enterprise 12 to allow customers to run SAP solutions on new hardware platforms. This feature makes SLES 12 for

SAP Applications an excellent platform for SAP in-memory products such as SAP HANA.

Network File System Storage

NFS is a distributed file system protocol originally developed by Sun Microsystems. It allows a user on a client

computer to access files over a network much like local storage is accessed. NFS, like many other protocols, builds

on the Open Network Computing Remote Procedure Call (ONC RPC) system. NFS is an open standard defined in

RFCs, allowing anyone to implement the protocol.

Network File System Version 3

NFSv3 offers the following features:

● Support for 64-bit file sizes and offsets, to handle files larger than 2 GB

● Support for asynchronous write operations on the server, to improve write performance

● Additional file attributes in many replies, to avoid the need to refetch them

● READDIRPLUS operation, to get file handles and attributes along with file names when scanning a

directory

● Assorted other improvements

For detailed information about the NFS parameters to access NFS volumes for the Oracle database used for SAP

applications, see https://docs.oracle.com/database/121/CWLIN/storage.htm#CWLIN278.
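
As an illustration, a typical /etc/fstab entry for an Oracle data file volume mounted over NFSv3 on Linux follows this pattern (the storage host name, export path, and mount point are placeholders; confirm the mount options against the Oracle documentation cited above):

    nfsstorage1:/vol/oradata  /oracle/OR1/oradata  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0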

Design Topology

This section presents physical and logical high-level design considerations for Cisco UCS networking and

computing using NFS storage for SAP NetWeaver with Oracle Database deployments.

Hardware and Software Used for This Solution

Table 1 lists the software and hardware used for SAP NetWeaver with Oracle Database deployments.

Table 1. Software and Hardware for SAP NetWeaver with Oracle Database Deployments

Vendor | Name | Version or Model | Description

Cisco | Cisco UCS 6248UP | Cisco UCS Manager 2.2(3e) | Cisco UCS 6200 Series unified port fabric interconnect

Cisco | Cisco UCS chassis | Cisco UCS 5108 Blade Server Chassis | Chassis

Cisco | Cisco UCS I/O module (IOM) | Cisco UCS 2204XP Fabric Extender | IOM

Cisco | Cisco Nexus 5548UP | Cisco NX-OS Software | Cisco Nexus 5500 platform unified port switch

Cisco | Cisco UCS blade server | Cisco UCS B200 M3 Blade Server | Half-width blade server (database server)

Cisco | Cisco UCS VIC adapter | Cisco UCS VIC 1340 | mLOM VIC

SAP | SAP NetWeaver | SAP NetWeaver 7.0 | SAP ERP applications

Oracle | Oracle 12c Database | Oracle Database 12.1.0.2.0 | Database software

SUSE | SUSE Linux Enterprise 12 | SLES 12 | Operating system


Cisco UCS Networking and NFS Storage Topology

This section explains Cisco UCS networking and computing design considerations when deploying SAP

NetWeaver with Oracle Database in an NFS storage design. In this design, the NFS traffic is isolated from the

regular management and application data network using the same Cisco UCS infrastructure by defining logical

VLAN networks and QoS and tagging the vNIC with the appropriate class of service (CoS) to provide better data

security. Figure 12 presents a detailed view of the physical topology and some of the main components of Cisco

UCS in an NFS network design.

Figure 12. Cisco UCS Networking and NFS Storage Network Topology

[Diagram: the Oracle DB server, Oracle DB-DR server, and SAP application servers 1 and 2 on Cisco UCS blades, connected through Cisco UCS 2204XP IOMs to the fabric interconnects, the Cisco Nexus 5548UP Switches, and NFS storage.]

As shown in Figure 12, a pair of Cisco UCS 6248UP Fabric Interconnects carries both storage and network traffic

from the blades with the help of the Cisco Nexus 5548UP Switch. The 10-Gbps FCoE traffic leaves the Cisco UCS

fabrics through Cisco Nexus 5548UP Switches to NFS storage. As larger enterprises adopt virtualization, I/O

requirements become much higher. To effectively handle the higher I/O requirements, FCoE boot is a better

solution.


Both the fabric interconnects and the Cisco Nexus 5548UP Switches are clustered, with a peer link between them, to

provide high availability. Two virtual PortChannels (vPCs) are configured to provide public network, private

network, and storage access paths for the blades to northbound switches. Each vPC has VLANs created for

application network data, NFS storage data, and management data paths. For more information about vPC

configuration on the Cisco Nexus 5548UP Switch, see

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/configuration_guide_c07-543563.html.
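
As a sketch of this setup on Cisco Nexus 5548UP Switch A (the peer-keepalive address and peer-link PortChannel number are placeholders; vPC ID 33 and the VLAN IDs correspond to the vPC details in Table 2):

    nexus5548-A(config)# feature vpc
    nexus5548-A(config)# vpc domain 1
    nexus5548-A(config-vpc-domain)# peer-keepalive destination 10.29.135.2
    nexus5548-A(config)# interface port-channel 10
    nexus5548-A(config-if)# switchport mode trunk
    nexus5548-A(config-if)# vpc peer-link
    nexus5548-A(config)# interface port-channel 33
    nexus5548-A(config-if)# switchport mode trunk
    nexus5548-A(config-if)# switchport trunk allowed vlan 120,121,191,192,760
    nexus5548-A(config-if)# vpc 33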

As illustrated in Figure 12 and detailed in Table 2, eight links (four per chassis) go to fabric interconnect A (ports 1

through 8), and eight links go to fabric interconnect B. Fabric interconnect A links are used for public network and

NFS storage network traffic. Fabric interconnect B links are used for Oracle Database and SAP application backup

and restore network traffic and NFS storage network traffic.

Table 2. vPC Details

Network | vPC ID | VLAN IDs

Fabric A | 33 | 760, 191, 192, 120, and 121

Fabric B | 34 | 760, 191, 192, 120, and 121

NFS Storage 1 | 3 | 101, 120, and 760

NFS Storage 2 | 4 | 102, 121, and 760

The advantage that sets Cisco VICs apart from other network adapters is their powerful QoS capabilities. A VIC can provide fair sharing of bandwidth for all its virtual adapters, and the policy that defines the way bandwidth is shared on the VIC is conveniently defined centrally in the systemwide Cisco UCS Manager. Figure

13 shows how the Cisco VIC schedules bandwidth for its virtual adapters.


Figure 13. Cisco VIC QoS Capabilities

[Diagram: a Cisco VIC presents six vNICs and two vHBAs to the host. Each virtual adapter maps to one of eight CoS-based queues (the vHBA queues are no-drop), and a bandwidth scheduler on each 10GE path to FEX A (fabric A) and FEX B (fabric B) guarantees 40 percent of bandwidth to each vHBA and storage vNIC and 10 percent to each remaining vNIC.]

The example in Figure 13 uses six vNICs and two vHBAs, which the Cisco VIC presents to the host as real

adapters. The VIC has eight traffic queues, each based on a CoS value. This value, when used, typically is marked

on the Ethernet packets as they traverse the network to indicate special treatment at each hop. In this example,

each virtual adapter is associated with a specific CoS value (which was centrally configured using Cisco UCS

Manager). Therefore, any traffic received from the server by a virtual adapter will be assigned to and marked with a

specific CoS value. It will then be placed in its associated queue for transmission scheduling (if there is

congestion); otherwise, the traffic will be marked and immediately sent.

Another policy centrally defined in Cisco UCS Manager is the minimum percentage of bandwidth to which each

CoS queue will always be entitled (if there is congestion). For example, vNIC3 and vNIC6 are assigned to CoS 5,

for which the VIC will always provide 40 percent of the 10 Gigabit Ethernet link during periods of congestion.

Similarly, each vHBA has 40 percent of the bandwidth guaranteed, and vNIC1, vNIC2, vNIC4, and vNIC5 each

have 10 percent of the bandwidth guaranteed.

With intelligent QoS, the sum of the minimum bandwidth values is 100 percent. This approach differs greatly from

other, less intelligent approaches that use rate limiting, sometimes also called traffic shaping, in which the sum of

the maximum bandwidth values must not exceed 100 percent.


Table 3 shows the vNICs and their requirements for this configuration. However, vNICs are not limited to the

number shown here. You can always add more vNICs according to your throughput requirements and your

needs for isolation of network traffic.

Table 3. vNIC Details

vNIC | VLAN | Purpose

vNIC1 | 760 | Public network

vNIC2 | 192 | Backup and restore

vNIC3 | 120 | Data, log, and storage access

vNIC4 | 191 | Private network

vNIC5 | 192 | Backup and restore

vNIC6 | 121 | Data, log, and storage access

Table 4 shows the vHBAs and their requirements for this configuration.

Table 4. vHBA Details

vHBA | VSAN | Purpose

vHBA1 | 101 | FCoE and SAN boot for fabric A

vHBA2 | 102 | FCoE and SAN boot for fabric B


High-Level Cisco UCS Manager Configuration

This document provides only high-level configuration details. For detailed configuration guidance, you need to

follow a Cisco Validated Design.

Here are the high-level steps for Cisco UCS configuration:

1. Configure fabric interconnects for chassis and blade discovery.

a. Configure global policies.

b. Configure server ports.

2. Configure the LAN and SAN on Cisco UCS Manager.

a. Configure and enable Ethernet LAN uplink ports.

b. Configure and enable Fibre Channel SAN uplink ports.

c. Configure VLANs.

d. Configure VSANs.

3. Configure UUID, MAC address, worldwide node name (WWNN), and worldwide port name (WWPN) pools.

a. Create UUID pool.

b. Create IP address pool and MAC address pool.

c. Create WWNN pool and WWPN pool.

4. Configure vNIC and vHBA templates.

a. Create vNIC templates.

b. Create public vNIC template.

c. Create private vNIC template.

d. Create storage vNIC template.

e. Create HBA templates.

5. Configure Ethernet uplink PortChannels.

6. Create server boot policy for SAN boot.

Configure Fabric Interconnects for Blade Discovery

Cisco UCS 6248UP Fabric Interconnects are configured for redundancy to provide resiliency in the event of failures.

The first step is to establish connectivity between the blades and fabric interconnects.

Configure and Enable Ethernet LAN Uplink Ports

1. Choose Equipment > Fabric Interconnects > Fabric Interconnect A > Fixed Module > Ethernet Ports, select the

desired number of ports, and right-click and choose Configure as Uplink Port (Figure 14).


Figure 14. Configure Ethernet LAN and FCoE Uplink Ports

In Figure 14, ports 31 and 32 on fabric interconnect A are selected and configured as Ethernet uplink ports.

2. Repeat the same process on fabric interconnect B to configure ports 31 and 32 as Ethernet uplink ports.

3. Select ports 29 and 30 on both fabrics and configure them as FCoE uplink ports for FCoE boot.

You will use these ports to create PortChannels in a later step.

Recommended Configuration for Oracle Database, VLANs, and vNICs

For Direct NFS Client instances running on Linux, best practices recommend that you always use multiple paths in

separate subnets. If multiple paths are configured in the same subnet, the operating system invariably picks the

first available path from the routing table. All traffic flows through this path, and the load balancing and scaling do

not work as expected. Refer to Oracle metalink note 822481.1 for more details.
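
With Direct NFS Client, the multiple paths are declared in the oranfstab file. The sketch below (the server name, addresses, export, and mount point are placeholders) shows two paths in separate subnets, corresponding to the two storage VLANs used in this design:

    server: nfsstorage1
    local: 192.168.120.10
    path: 192.168.120.101
    local: 192.168.121.10
    path: 192.168.121.101
    export: /vol/oradata mount: /oracle/OR1/oradata

Because each local and path pair sits in its own subnet, Direct NFS Client can load-balance and fail over across both paths as intended.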

Configure the LAN and SAN

For this configuration, you create VLANs 120 and 121 for storage access, and VSANs 101 and 102 for FCoE boot.

Configure VLANs

1. In Cisco UCS Manager, choose LAN > LAN Cloud > VLAN. Right-click and choose Create VLANs.

In this example, you need to create five VLANs: one for private traffic (VLAN 191), one for public network traffic

(VLAN 760), two for storage traffic (VLANs 120 and 121), and one for backup and restore traffic (VLAN 192).

These five VLANs will be used in the vNIC templates that are discussed later. Figure 15 shows VLAN 760 created

for the public network.


Figure 15. Create VLAN for Public Network

Note: Be sure to create all VLANs as global across both fabric interconnects. This way, VLAN identity is

maintained across the fabric interconnects in the event of NIC failover.

2. Create VLANs for public, private, backup and restore, and storage traffic.

Here is the summary of the VLANs after you complete VLAN creation:

● VLAN ID 760 for public interfaces

● VLAN ID 191 for private interfaces

● VLAN IDs 120 and 121 for storage access

● VLAN ID 192 for backup and restore operations
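
The same VLANs can also be created from the Cisco UCS Manager CLI; a sketch (the VLAN names are illustrative) follows. Creating them under the eth-uplink scope, rather than under a single fabric, keeps them global across both fabric interconnects, as the note above recommends:

    UCS-A# scope eth-uplink
    UCS-A /eth-uplink # create vlan Public 760
    UCS-A /eth-uplink/vlan # exit
    UCS-A /eth-uplink # create vlan Private 191
    UCS-A /eth-uplink/vlan # exit
    UCS-A /eth-uplink # create vlan Storage120 120
    UCS-A /eth-uplink/vlan # exit
    UCS-A /eth-uplink # create vlan Storage121 121
    UCS-A /eth-uplink/vlan # exit
    UCS-A /eth-uplink # create vlan Backup 192
    UCS-A /eth-uplink/vlan # commit-buffer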


Configure VSANs

1. In Cisco UCS Manager, choose SAN > SAN Cloud > VSANs. Right-click and choose Create VSAN

(Figure 16).

Figure 16. Configuring VSANs in Cisco UCS Manager

2. In this example, create VSANs 101 and 102 for SAN boot.

You have created a VSAN on both fabrics. For the VSAN on fabric A, the VSAN ID is 101, and the FCoE VLAN ID

is 101. On fabric B, the VSAN ID is 102, and the FCoE VLAN ID is 102.

Note: Even if you do not use FCoE for SAN storage traffic, you must specify a VLAN ID.
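
A CLI sketch of the same VSAN creation follows; the final argument is the FCoE VLAN ID, which, as noted above, must be supplied in any case:

    UCS-A# scope fc-uplink
    UCS-A /fc-uplink # scope fabric a
    UCS-A /fc-uplink/fabric # create vsan vsan101 101 101
    UCS-A /fc-uplink/fabric/vsan # commit-buffer

Repeat under scope fabric b with create vsan vsan102 102 102.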

Configure Jumbo Frames in the Cisco UCS Fabric

To configure jumbo frames and enable quality of service in the Cisco UCS fabric, follow these steps:

1. In Cisco UCS Manager, click the LAN tab in the navigation pane.

2. Choose LAN > LAN Cloud > QoS System Class.

3. In the right pane, click the General tab.

4. In the Gold, Silver, Bronze, and Best Effort rows, enter 9216 as the maximum transmission unit (MTU) in the MTU column (Figure 17).

5. Click Save Changes.

6. Click OK.


Figure 17. Setting Jumbo Frames
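
Jumbo frames must be enabled end to end, so the upstream switches need a matching MTU. On the Cisco Nexus 5500 platform this is configured through a network-qos policy; a minimal sketch is:

    nexus5548-A(config)# policy-map type network-qos jumbo
    nexus5548-A(config-pmap-nqos)# class type network-qos class-default
    nexus5548-A(config-pmap-nqos-c)# mtu 9216
    nexus5548-A(config-pmap-nqos-c)# system qos
    nexus5548-A(config-sys-qos)# service-policy type network-qos jumbo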

Create QoS Policies in the Cisco UCS Fabric

To create QoS policies in the Cisco UCS fabric, follow these steps:

1. In Cisco UCS Manager, click the LAN tab in the navigation pane.

2. Choose LAN > Policies > QoS Policies.

3. Right-click QoS Policies and choose Create QoS Policy.

4. Enter a name for the QoS policy and select the appropriate priority from the Priority drop-down menu.

5. Click OK to complete the creation of the QoS policy.

6. Repeat these steps to create three policies: Gold, Silver, and Bronze.


Configure Ethernet Uplink PortChannels

This example uses two uplink ports from each fabric interconnect to the Cisco Nexus 5000 Series Switch for this

configuration. However, you can have more than two uplink ports, depending on your bandwidth requirements. The recommended approach is to configure one PortChannel in each fabric interconnect for throughput sharing, unless you have a business reason to create multiple PortChannels in each fabric interconnect.

1. To configure PortChannels, choose LAN > LAN Cloud > Fabric A > Port Channels. Right-click and choose

Create Port-Channel. Select the desired Ethernet uplink ports configured earlier.

2. Repeat the same steps to create a PortChannel on fabric B.

In the setup here, ports 31 and 32 on fabric A are configured as PortChannel 33. Ports 31 and 32 on fabric B are

configured as PortChannel 34. Figures 18, 19, and 20 show the details.

Figure 18. Configuring PortChannels


Figure 19. Fabric A Ethernet PortChannel Details

Figure 20. Configured PortChannels on Fabrics A and B
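
On the Cisco Nexus 5548UP side, the matching uplinks are typically bundled into a vPC. The following is a minimal sketch for PortChannel 33 on switch A; the Ethernet interface numbers are assumptions and must match your cabling, and a vPC domain and peer link are assumed to be configured already:

interface port-channel 33
  description vPC 33 to UCS fabric interconnect A
  switchport mode trunk
  switchport trunk allowed vlan 120-121,191,192,760
  vpc 33

interface ethernet 1/17-18
  description Uplinks from UCS fabric interconnect A
  switchport mode trunk
  switchport trunk allowed vlan 120-121,191,192,760
  channel-group 33 mode active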


Create Local Disk Configuration Policy (Optional)

You need to create a local disk configuration policy for the Cisco UCS environment if the servers in the

environment do not have a local disk.

Note: This policy should not be used on servers that contain local disks.

To create a local disk configuration policy, follow these steps:

1. In Cisco UCS Manager, click the Servers tab in the navigation pane.

2. Choose Policies > root.

3. Right-click Local Disk Config Policies.

4. Choose Create Local Disk Configuration Policy.

5. Enter SAN-Boot as the local disk configuration policy name.

6. Change the mode to No Local Storage.

7. Click OK to create the local disk configuration policy.

8. Click OK.
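
A minimal Cisco UCS Manager CLI sketch of the same policy, assuming it is created in the root organization:

UCS-A# scope org /
UCS-A /org # create local-disk-config-policy SAN-Boot
UCS-A /org/local-disk-config-policy # set mode no-local-storage
UCS-A /org/local-disk-config-policy # commit-buffer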

Create FCoE Boot Policies

This procedure applies to a Cisco UCS environment in which the storage FCoE ports are configured in the

following ways:

● The FCoE ports 5a on NFS storage 1 and 2 are connected to Cisco Nexus 5548UP Switch A.

● The FCoE ports 5b on NFS storage 1 and 2 are connected to Cisco Nexus 5548UP Switch B.

Two boot policies are configured in this procedure:

● The first configures the primary target to be FCoE port 5a on NFS storage 1.

● The second configures the primary target to be FCoE port 5b on NFS storage 1.

To create boot policies for the Cisco UCS environment, follow these steps:

1. In Cisco UCS Manager, click the Servers tab in the navigation pane.

2. Choose Policies > root.

3. Right-click Boot Policies.

4. Choose Create Boot Policy.

5. Enter Boot-FCoE-A as the name of the boot policy.

6. (Optional) Enter a description for the boot policy.

7. Keep the Reboot on Boot Order Change check box unselected.

8. Expand the Local Devices drop-down menu and choose Add CD-ROM.

9. Expand the vHBAs drop-down menu and choose Add SAN Boot.

10. In the Add SAN Boot dialog box, enter Fabric-A in the vHBA field.

11. Make sure that the Primary radio button is selected as the SAN boot type.

12. Click OK to add the SAN boot initiator.


13. From the vHBA drop-down menu, choose Add SAN Boot Target.

14. Keep 0 as the value for Boot Target LUN.

15. Enter the WWPN for FCoE port 5a on NFS storage 1.

16. Keep the Primary radio button selected as the SAN boot target type.

17. Click OK to add the SAN boot target.

18. From the vHBA drop-down menu, choose Add SAN Boot Target.

19. Keep 0 as the value for Boot Target LUN.

20. Enter the WWPN for FCoE port 5a on NFS storage 2.

21. Click OK to add the SAN boot target.

22. From the vHBA drop-down menu, choose Add SAN Boot.

23. In the Add SAN Boot dialog box, enter Fabric-B in the vHBA box.

24. Verify that the SAN boot type is automatically set to Secondary, and that the Type option is unavailable.

25. Click OK to add the SAN boot initiator.

26. From the vHBA drop-down menu, choose Add SAN Boot Target.

27. Keep 0 as the value for Boot Target LUN.

28. Enter the WWPN for FCoE port 5b on NFS storage 1.

29. Keep Primary as the SAN boot target type.

30. Click OK to add the SAN boot target.

31. From the vHBA drop-down menu, choose Add SAN Boot Target.

32. Keep 0 as the value for Boot Target LUN.

33. Enter the WWPN for FCoE port 5b on NFS storage 2.

34. Click OK to add the SAN boot target.

35. Click OK and then click OK again to create the boot policy.

36. Right-click Boot Policies again.

37. Choose Create Boot Policy.

38. Enter Boot-FCoE-B as the name of the boot policy.

39. (Optional) Enter a description of the boot policy.

40. Keep the Reboot on Boot Order Change check box unchecked.

41. From the Local Devices drop-down menu, choose Add CD-ROM.

42. From the vHBA drop-down menu, choose Add SAN Boot.

43. In the Add SAN Boot dialog box, enter Fabric-B in the vHBA box.

44. Make sure that the Primary radio button is selected as the SAN boot type.

45. Click OK to add the SAN boot initiator.

46. From the vHBA drop-down menu, choose Add SAN Boot Target.

47. Keep 0 as the value for Boot Target LUN.

48. Enter the WWPN for FCoE port 5b on NFS storage 1.

49. Keep Primary as the SAN boot target type.


50. Click OK to add the SAN boot target.

51. From the vHBA drop-down menu, choose Add SAN Boot Target.

52. Keep 0 as the value for Boot Target LUN.

53. Enter the WWPN for FCoE port 5b on NFS storage 2.

54. Click OK to add the SAN boot target.

55. From the vHBA menu, choose Add SAN Boot.

56. In the Add SAN Boot dialog box, enter Fabric-A in the vHBA box.

57. Verify that the SAN boot type is automatically set to Secondary, and that the Type option is unavailable.

58. Click OK to add the SAN boot initiator.

59. From the vHBA menu, choose Add SAN Boot Target.

60. Keep 0 as the value for Boot Target LUN.

61. Enter the WWPN for FCoE port 5a on NFS storage 1.

62. Keep Primary as the SAN boot target type.

63. Click OK to add the SAN boot target.

64. From the vHBA drop-down menu, choose Add SAN Boot Target.

65. Keep 0 as the value for Boot Target LUN.

66. Enter the WWPN for FCoE port 5a on NFS storage 2.

67. Click OK to add the SAN boot target.

68. Click OK and then click OK again to create the boot policy.

69. After creating the FCoE boot policies for fabric A and fabric B, you can view the boot order in the Cisco UCS

Manager GUI. To view the boot order, navigate to Servers > Policies > Boot Policies. Click Boot Policy Boot-

FCoE-A to view the boot order for fabric A in the right pane of the manager. Similarly, click Boot Policy Boot-

FCoE-B to view the boot order for fabric B in the right pane of the manager.

Create Service Profiles and Associate Them with Blade Servers

Service profile templates enable policy-based server management that helps ensure consistent server resource

provisioning to meet predefined workload needs.

Create a Service Profile Template

1. In Cisco UCS Manager, choose Servers > Service Profile Templates > root. Right-click and choose Create

Service Profile Template.

2. Enter a template name, select the UUID pool that was created earlier, and select the Updating Template radio

button. Click Next to continue.

3. In the Networking window, select the dynamic vNIC connection policy that was created earlier, and move to the next window.

4. On the Networking page, create one vNIC on each fabric and associate the vNICs with the VLAN policies

created earlier. Select Expert Mode, and click Add to add one or more vNICs that the server should use to

connect to the LAN.

5. On the Create vNIC page, select “Use vNIC template” and the adapter policy created earlier. Enter the vNIC

name.


6. Similarly, create all the required vNICs for fabrics A and B with an appropriate vNIC template mapping for each

vNIC.

7. After the vNICs are created, you need to create the vHBAs. On the Storage page, select Expert Mode, choose

the WWNN pool created earlier, and click Add to create vHBAs.

8. For this example, create two vHBAs:

● Fabric A using template SAP-Oracle-vHBA-A

● Fabric B using template SAP-Oracle-vHBA-B

9. This configuration uses Cisco Nexus 5548UP for zoning, so skip the Zoning section and use the default vNIC

and vHBA placement.

Configure Server Boot Policy

1. On the Server Boot Order page, choose the boot policy you created previously for SAN boot and click Next.

2. For the configuration here, leave the rest of the maintenance and assignment policies at their default settings.

However, these settings will vary from site to site depending on your workloads, best practices, and policies.

Create Service Profiles from Service Profile Templates

1. In Cisco UCS Manager, choose Servers > Service Profile Templates, right-click the template created earlier,

and choose Create Service Profiles From Template.

2. For this example, create the four service profiles listed here. Two of the service profiles are used for SAP

application servers, and two are used for database servers.

● SAP-APPS-1

● SAP-APPS-2

● SAP-Oracle-DB-1

● SAP-Oracle-DB-2


Set QoS Policy on the vNIC Templates

If you are using a vNIC template, set the appropriate QoS policy in the vNIC template properties, as shown in the

following figure.


If you are not using a vNIC template, you can instead set the QoS policy on an individual vNIC through the vNIC

properties, as shown in the following figure.


Set Global Configurations to Enable Jumbo Frames, Create QoS and Create vPCs on

Cisco Nexus 5548UP

To set global configurations, jumbo frames, and QoS, follow these steps on both Cisco Nexus 5548UP A and B:

1. Log in as the admin user.

2. Run the following commands to set global configurations and enable jumbo frames:

conf t

spanning-tree port type network default

spanning-tree port type edge bpduguard default

port-channel load-balance ethernet source-dest-port

policy-map type network-qos jumbo

class type network-qos class-default

mtu 9216

exit

class type network-qos class-fcoe

pause no-drop

mtu 2158

exit

exit

system qos

service-policy type network-qos jumbo

exit

copy run start

3. Run the following commands to create the QoS class maps:

configure terminal

class-map type qos class-gold

match cos 4

exit

class-map type qos class-silver

match cos 2

exit

class-map type qos class-bronze

match cos 1

exit

copy run start

4. Configure the Cisco Nexus 5548UP Switches for VLANs, VSANs, vPC, virtual Fibre Channel (vFC), and Fibre

Channel and FCoE zoning.

5. When configuring the switches with vPCs, be sure that the status for all vPCs is Up for connected Ethernet

ports by running the commands shown in Figure 21 from the CLI on the Cisco Nexus 5548UP Switch.


Figure 21. PortChannel Status on Cisco Nexus 5548UP
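
Commands such as the following, run on each Cisco Nexus 5548UP, are commonly used for this verification (the output format varies by NX-OS release):

show vpc brief
show port-channel summary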

Configure Cisco UCS Servers and Stateless Computing Using FCoE Boot

Boot from FCoE is another feature that helps organizations move toward stateless computing, in which there is no

static binding between a physical server and the OS and applications it is tasked to run. The OS is installed on a

SAN logical unit number (LUN), and a boot-from-FCoE policy is applied to the service profile template or the

service profile. If the service profile is moved to another server, the WWPNs of the HBAs and the boot-from-SAN

(BFS) policy move along with it. The new server then assumes the exact same identity as the old server,

demonstrating the truly stateless nature of the Cisco UCS blade server.

Main Benefits of Boot from FCoE

The main benefits of booting from the network include the following:

● Reduced server footprints: With boot from FCoE, each server does not need to have its own direct-

attached disk, eliminating internal disks as a potential point of failure. Thin diskless servers also take up less

facility space, require less power, and are generally less expensive because they have fewer hardware

components.

● Faster disaster and server failure recovery: All the boot information and production data stored on a local

SAN can be replicated to a SAN at a remote disaster recovery site. If a disaster destroys the functions of

the servers at the primary site, the remote site can take over with little downtime. Recovery from server

failures is simplified in a SAN environment. With the help of snapshots, mirrors of a failed server can be

recovered quickly by booting from the original copy of the server image. As a result, boot from SAN can

greatly reduce the time required for server recovery.

● High availability: A typical data center is highly redundant, with redundant paths, redundant disks, and

redundant storage controllers. Storing operating system images on disks in the SAN supports high

availability and eliminates the potential for mechanical failure of a local disk.

● Rapid redeployment: Businesses that experience temporary high production workloads can take

advantage of SAN technologies to clone the boot image and distribute the image to multiple servers for

rapid deployment. Such servers may need to be in production for only hours or days and can be readily

removed when the production need has been met. Highly efficient deployment of boot images makes

temporary server use a cost-effective endeavor.


With boot from SAN, the image resides on a SAN LUN, and the server communicates with the SAN through an

HBA. The HBA’s BIOS contains the instructions that enable the server to find the boot disk. All Fibre Channel–

capable CNA cards supported by Cisco UCS B-Series Blade Servers support boot from SAN.

After the power-on self-test (POST), the server hardware fetches the device that is designated as the boot device

in the BIOS settings. After the hardware detects the boot device, it follows the regular boot process.

Summary of Boot from SAN Configuration

At this time, you have completed the following steps that are essential for boot-from-SAN configuration:

● SAN zoning configuration on the Cisco Nexus 5548UP Switches

● NFS storage array configuration for the boot LUN

● Cisco UCS configuration of boot from SAN policy in the service profile

You are now ready to install the OS. This document does not cover the steps to install the OS in an FCoE boot

configuration. Refer to the following URL for details:

http://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/UCS_CVDs/flexpod_esxi55u1.pdf

SUSE Linux Server Configuration and Recommendations

For this solution, four servers were configured: two servers for Oracle Database (one for the primary database and

one for disaster recovery) and two servers for the SAP applications. Four Cisco B200 M3 servers used boot from

SAN to enable stateless computing in the event that a server needs to be replaced or swapped, using the unique

Cisco UCS service profile capabilities. The OS boot uses FCoE, and the Oracle Database and SAP NetWeaver

components are configured to use the NFS protocol on the NFS storage. SUSE Linux 12 is installed on each

server.

Table 5 summarizes the hardware and software configuration details.

Table 5. Host Hardware and Software Configuration

Component | Details | Description

Server | 4 Cisco UCS B200 M3 servers | 2 sockets with 12 cores each

Memory | 128 GB | Physical memory

Static vNIC 1 | Public access | Management and public access, with an MTU size of 1500

Static vNIC 2 | Backup and restore | Database backup and restore from NFS storage, with an MTU size of 9000

Static vNIC 3 | NFS storage access | Database access through NFS storage 1, with an MTU size of 9000

Static vNIC 4 | Private | SAP system communication, with an MTU size of 9000

Static vNIC 5 | Backup and restore | Database backup and restore from NFS storage, with an MTU size of 9000

Static vNIC 6 | NFS storage access | Database access through NFS storage 2, with an MTU size of 9000
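
On SLES, the MTU for each vNIC is typically set in its interface configuration file. The following is a minimal sketch for one of the 9000-byte storage vNICs; the interface name eth2 and the IP address are assumptions for illustration:

# /etc/sysconfig/network/ifcfg-eth2 (assumed NFS storage access interface)
BOOTPROTO='static'
IPADDR='192.168.120.21/24'
MTU='9000'
STARTMODE='auto'

Apply the change with wicked ifreload eth2 (SLES 12) and verify the MTU with ip link show eth2.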


Recommendations

To receive full support for your SAP system, you must meet certain requirements, including the following:

● To help ensure support for problems that may occur with the operating system, you must have a valid

support contract for SLES. You can obtain a support contract directly with Novell or with a support partner

who is authorized to redirect any level-3 support queries to Novell. For more information about SUSE

priority support for SAP applications, refer to SAP note 1056161.

● You must use hardware that is certified by your hardware vendor for use with SAP on Linux. Refer to SAP

note 171356 for a list of the corresponding notes of hardware partners.

Follow these general installation instructions:

● This SUSE version requires updating certain SAP components (such as the SAP kernel) or configuring a

Linux kernel 3.12 compatibility environment. For details, refer to the SAP NetWeaver compatibility

documentation for the Linux 3.0.0 kernel shipped previously.

● Select English as the installation language and system language.

● As a starting point for package selections, use the default pattern selection presented by YaST in the

Software Selection submenu and make the following additional selections:

◦ In the Software Selection dialog box, manually select the pattern SAP Application Server Base. This

selection helps ensure the installation of the sapconf RPM Package Manager (RPM) package (formerly

sapinit). For more information, see SAP note 1275776.

◦ In the Software Selection dialog box, manually select the pattern C/C++ Compiler and Tools.

You should then have the following software selections:

● SLES 12

◦ Base System

◦ Help and Support Documentation

◦ Minimal System (Appliances)

◦ X Window System (only as an option, but you need at least X11-server and X11-libs to run the

graphical installer)

◦ Print Server

◦ SAP Application Server Base

◦ Web-Based Enterprise Management

◦ C/C++ Compiler and Tools

● Optional software selections

◦ Gnome Desktop Environment

Note: If you intend to install Oracle and SAP on the same system, do not install the Oracle Server Base pattern

(the orarun RPM package) on the system. Doing so will prevent sapinst from correctly installing the SAP system

components.


If your SAP component requires Java Development Kit (JDK) 1.4.2, you need to install the JDK from the additional

software development kit (SDK) media.

For more information about OS configuration and system requirements, see SAP note 01310037.

For this configuration, NFS volumes were shared with all four servers for SAPCD (downloaded SAP NetWeaver

software) and SAPINST (installation of the SAP application and Oracle Database).

Configure the Directory Structure and NFS Volumes for SAP Systems

1. Configure the local mount point and NFS share for SAPINST and SAPCD on all four servers.

nfs_1:/software /sapcd nfs defaults 0 0

nfs_2:/sapmnt /sapmnt nfs defaults 0 0
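
A sketch of the corresponding host-side commands, assuming the mount points do not yet exist:

mkdir -p /sapcd /sapmnt
mount -a       # mount all entries defined in /etc/fstab
df -h /sapcd /sapmnt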

2. Configure the local mount point and NFS share for the primary database server.

nfs_1:/sapdata/sapdata1 /oracle/C4S/sapdata1 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=65536,wsize=65536 0 0

nfs_1:/sapdata/sapdata2 /oracle/C4S/sapdata2 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=65536,wsize=65536 0 0

nfs_1:/sapdata/sapdata3 /oracle/C4S/sapdata3 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=65536,wsize=65536 0 0

nfs_1:/sapdata/sapdata4 /oracle/C4S/sapdata4 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=65536,wsize=65536 0 0

nfs_2:/saplog/origlogA /oracle/C4S/origlogA nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=65536,wsize=65536 0 0

nfs_2:/saplog/origlogB /oracle/C4S/origlogB nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=65536,wsize=65536 0 0

nfs_1:/mirrlog/mirrlogA /oracle/C4S/mirrlogA nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=65536,wsize=65536 0 0

nfs_1:/mirrlog/mirrlogB /oracle/C4S/mirrlogB nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=65536,wsize=65536 0 0

nfs_2:/oraarch /oracle/C4S/oraarch nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=65536,wsize=65536 0 0

3. Configure the local mount point and NFS share for the disaster-recovery database server.

nfs_2:/sapdrdata/sapdata1 /oracle/C4S/sapdata1 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=65536,wsize=65536 0 0

nfs_2:/sapdrdata/sapdata2 /oracle/C4S/sapdata2 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=65536,wsize=65536 0 0

nfs_2:/sapdrdata/sapdata3 /oracle/C4S/sapdata3 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=65536,wsize=65536 0 0

nfs_2:/sapdrdata/sapdata4 /oracle/C4S/sapdata4 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=65536,wsize=65536 0 0

nfs_1:/sapdrlog/origlogA /oracle/C4S/origlogA nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=65536,wsize=65536 0 0

nfs_1:/sapdrlog/origlogB /oracle/C4S/origlogB nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=65536,wsize=65536 0 0

nfs_2:/mirrdrlog/mirrlogA /oracle/C4S/mirrlogA nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=65536,wsize=65536 0 0

nfs_2:/mirrdrlog/mirrlogB /oracle/C4S/mirrlogB nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=65536,wsize=65536 0 0

nfs_1:/droraarch /oracle/C4S/oraarch nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=65536,wsize=65536 0 0

4. Prepare all four servers and install SUSE Linux 12 on all the servers.

Table 6 summarizes the servers.

Table 6. SLES Servers

Server Purpose

Server 1 Primary database server

Server 2 Primary application server instance

Server 3 Disaster-recovery database server

Server 4 Additional application server instance


SAP NetWeaver Installation

Follow the detailed SAP guidelines for installing NetWeaver for the ABAP stack.

Install the SAP NetWeaver Application

The following steps show the high-level process for installing the SAP application for the ABAP stack.

1. Download SAP NetWeaver software from the SAP marketplace and extract the downloaded files using sapcar.

(This example uses /sapcd to store all extracted binaries, such as software provisioning, export1, export2,

kernel, Oracle 12c, and Oracle 12c client files).

2. Start the VNC server on SAP application server 1 (the primary application server) to access the GUI for the

application installation.

3. Change the directory to /sapcd/sw_provisioning on the SAP application server (server 2).

4. Start the installation from SAP application server 1 as the root user using the following command:

# cd /sapcd/sw_provisioning

# ./sapinst SAPINST_USE_HOSTNAME=<hostname of application server>

5. Open a web browser and type the VNC address to access the GUI of the application server: for example,

enter vnc://[email protected]:5901.

6. Open the xterminal of the application server and type the command shown in Figure 22 to start the GUI-based

SAP application installation.

Figure 22. Starting the GUI-Based SAP Application Installation


7. Select Prerequisites Check to verify that the server meets all the requirements for installing the SAP

application. Click Next to continue with the prerequisites check (Figure 23).

Figure 23. Performing the Prerequisites Check


8. After prerequisite results have been evaluated on server 2, proceed with the application stack installation.

Install the Central Services Instance on the primary application server (server 2; Figure 24).

Figure 24. Installing the Central Services Instance

9. To verify the processes and services after you’ve installed the central services instance, enter the ps -ef

command on the primary application server. Figure 25 shows the results.

Figure 25. Verifying the Central Services Installation


10. Install the database instance on the primary database server (server 1; Figure 26).

Figure 26. Installing the Database Instance


11. After the database instance has been installed successfully, install the primary application server instance on

the primary application server (server 2; Figure 27).

Figure 27. Installing the Primary Application Server Instance


12. Install the additional application server instance on server 4 (Figure 28).

Figure 28. Installing the Additional Application Server Instance

13. After the additional application server instance has been installed on server 4, configure the server to connect

to the primary database instance.

14. Configure server 3 for disaster recovery for the primary database server. Use the following documentation to

configure the disaster-recovery database:

● Use the software provisioning guidance from SAP to install the database binary on the disaster-recovery

database server.

● Use the document provided by Oracle to configure the disaster-recovery database server.


Direct NFS Client for Oracle Database Configuration

For improved NFS performance, Oracle recommends that you use the Direct NFS Client shipped with Oracle 12c.

Direct NFS Client uses either the configuration file $ORACLE_HOME/dbs/oranfstab or the operating system mount

tab file /etc/mtab to find out what mount points are available. If oranfstab is not present, then by default Direct NFS

Client serves mount entries found in /etc/mtab. No other configuration is required. You can use oranfstab to

specify additional options specific to Oracle Database for Direct NFS Client. For example, you can use oranfstab to

specify additional paths for a mount point. You can add a new oranfstab file specifically for Oracle Database, either

in the path /etc or $ORACLE_HOME/dbs. When oranfstab is placed in $ORACLE_HOME/dbs, its entries are

specific to a single database. However, when oranfstab is placed in /etc, then it is global to all Oracle databases

and can contain mount points for all Oracle databases.

Direct NFS Client determines mount-point settings for NFS storage devices based on the configurations in

/etc/mtab. Direct NFS Client searches for the mount entries in the following order:

● $ORACLE_HOME/dbs/oranfstab

● /etc/oranfstab

● /etc/mtab

Direct NFS Client uses the first matched entry as the mount point. Oracle Database requires that mount points be

mounted by the kernel NFS system even when served through Direct NFS Client.

Configure Direct NFS Client

The implementation steps are as follows:

1. Set up the directory structure (mount points) to mount the shares on the host.

2. Update a file to mount the exported shares on the appropriate mount points. For SUSE Linux, update

/etc/fstab to mount the shares exported from the NFS storage to the appropriate mount points.

3. Create an oranfstab file with the following attributes for each NFS server to be accessed using Direct NFS

Client:

● server: Specifies the NFS server name

● path: Specifies up to four network paths to the NFS server, specified by IP address or by name, as

displayed using the ifconfig command on the filer

● local: Specifies up to four local paths on the database host, specified by IP address or by name, as

displayed using the ifconfig command run on the database host

● export: Specifies the exported path from the NFS server

● mount: Specifies the corresponding local mount point for the exported volume

● dontroute: Specifies that outgoing messages should not be routed by the operating system, but sent using

the IP address to which they are bound (Note that this POSIX option sometimes does not work on Linux

systems with multiple paths in the same subnet.)

● mnt_timeout: Specifies (in seconds) the amount of time that Direct NFS Client should wait for a successful

mount before timing out (This parameter is optional, and the default timeout value is 10 minutes [600].)


● nfs_version: Specifies the NFS protocol version that Direct NFS Client uses (Possible values are NFSv3,

NFSv4, and NFSv4.1. The default version is NFSv3. If you want to specify NFSv4.0, then you must set the

nfs_version parameter accordingly in the oranfstab file.)

● management: Enables Direct NFS Client to use the management interface for Simple Network

Management Protocol (SNMP) queries (You can use this parameter if SNMP is running on separate

management interfaces on the NFS server. The default setting is the server parameter value.)

● community: Specifies the community string for use in SNMP queries (The default value is public.)

The following example shows an oranfstab file with two NFS server entries:

server: MyDataServer1

local: 192.0.2.0

path: 192.0.2.1

local: 192.0.100.0

path: 192.0.100.1

nfs_version: nfsv3

dontroute

export: /vol/oradata1 mount: /mnt/oradata1

server: MyDataServer2

local: LocalPath1

path: NfsPath1

local: LocalPath2

path: NfsPath2

local: LocalPath3

path: NfsPath3

local: LocalPath4

path: NfsPath4

nfs_version: nfsv4

dontroute

export: /vol/oradata2 mount: /mnt/oradata2

export: /vol/oradata3 mount: /mnt/oradata3

export: /vol/oradata4 mount: /mnt/oradata4

export: /vol/oradata5 mount: /mnt/oradata5

management: MgmtPath1

community: private

4. By default, Direct NFS Client is installed in a disabled state with single-instance Oracle Database installations.

To enable Direct NFS Client, complete the following steps:

a. Change the directory to $ORACLE_HOME/rdbms/lib.

b. Enter the following command:

make -f ins_rdbms.mk dnfs_on
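
After you restart the database, you can confirm that Direct NFS Client is active. The v$dnfs_servers and v$dnfs_files views are standard dynamic views in Oracle 12c, and the database alert log also records a message indicating that the Oracle Direct NFS ODM library is in use (the exact wording varies by release):

SQL> SELECT svrname, dirname FROM v$dnfs_servers;
SQL> SELECT filename FROM v$dnfs_files;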


Destructive and Hardware Failover Testing

The goal of destructive and hardware failover tests is to help ensure that the reference architecture withstands

common failures, such as unexpected crashes, hardware failures, and human errors. The testing performed in this

solution used many hardware, software (process kills), and OS failures that simulated real-world scenarios under

stress conditions. The destructive testing also demonstrated the unique failover capabilities of the Cisco UCS VIC

1340 adapter.

Table 7 summarizes some of these test cases.

Table 7. Failure Test Scenarios

Scenario: Test 1: Chassis 1 IOM 2 link failure

Test: Run the system at full workload. Disconnect IOM 2 from the first chassis and reconnect the links after 5 minutes.

Status: Network traffic from IOM 2 fails over to IOM 1 without any disruption.

Scenario: Test 2: Cisco UCS 6248UP fabric B failure

Test: Run the system at full workload. Reboot fabric B and let it rejoin the cluster; then reboot fabric A.

Status: Fabric failover does not cause any disruption to network or storage traffic.

Scenario: Test 3: Cisco Nexus 5548UP fabric A failure

Test: Run the system at full workload. Reboot the Cisco Nexus 5548UP fabric A switch, wait 5 minutes, and reconnect it; repeat for the Cisco Nexus 5548UP fabric B switch.

Status: Neither network nor storage traffic is disrupted.

Conclusion

Cisco UCS is built on leading computing, networking, and infrastructure software components and supports access

to storage. With a Cisco UCS solution, customers can use a secure, integrated, and optimized stack that includes

computing, networking, and storage resources that are sized, configured, and deployed as a fully tested unit

running industry-standard applications such as SAP NetWeaver on Oracle Database over Direct NFS Client.

The combination of Cisco UCS with NFS storage creates a powerful environment for SAP NetWeaver running on

Oracle Database:

● The Cisco UCS stateless computing architecture provided by service profiles allows fast, nondisruptive

workload changes to be implemented simply and transparently across the integrated Cisco UCS

infrastructure and Cisco x86 servers.

● Cisco UCS with a highly scalable NAS platform from a variety of storage providers is an excellent

foundation for SAP NetWeaver running on Oracle Database, with Oracle's unique, scalable, and highly

available NFS technology.

● The foundation for this infrastructure is Cisco Unified Fabric, with its focus on secure IP networks as the

standard interconnects for the server and data management solutions.

● Cisco UCS Manager provides centralized, simplified management of infrastructure resources and end-to-

end automation.

● Unique Cisco UCS QoS and CoS features prioritize network throughput and reduce network latency.


As a result, customers can achieve dramatic cost savings when using Ethernet-based products and deploy any

application on a scalable shared IT infrastructure built on Cisco UCS technologies. By using a flexible infrastructure

platform composed of presized computing, networking, and server components, you can more easily transform IT

and address operational challenges with greater efficiency and less risk.

For More Information

For additional information, see:

● Cisco UCS platform: http://www.cisco.com/en/US/netsol/ns944/index.html

● Cisco Nexus Family: http://www.cisco.com/en/US/products/ps9441/Products_Sub_Category_Home.html

● Cisco Nexus 5000 Series and NX-OS Software configuration guide:

http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli/CLIConfigurationGuide.html

● SUSE Linux best practices for SAP NetWeaver:

https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices.html

● SAP NetWeaver installation guide: http://help.sap.com/nw74/

● SAP NetWeaver software: http://help.sap.com/nw_platform

● Oracle Database disaster recovery: http://docs.oracle.com/database/121/SBYDB/concepts.htm

● FCoE boot with FlexPod Data ONTAP operating in 7-Mode:

http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/esxi51_ucsm2_7modedeploy.html#wp517210

Printed in USA C11-735623-00 08/15