© Copyright IBM Corporation 2009
PowerVM Virtualization plain and simple
IBM System p
Goals with Virtualization
• Lower costs and improve resource utilization
- Data center floor space reduction or…
- Increase processing capacity in the same space
- Environmental (cooling and energy challenges)
- Consolidation of servers
- Lower overall solution costs: less hardware, fewer software licenses
• Increase business flexibility
- Meet ever-changing business needs with faster provisioning
• Improve application availability
- Flexibility in moving applications between servers
The virtualization elevator pitch
• The basic elements of PowerVM
- Micro-partitioning – makes one CPU look like up to 10
- Dynamic LPARs – moving resources between partitions
- Virtual I/O Server – lets partitions share physical adapters
- Live partition mobility – using POWER6
- Live application mobility – using AIX 6.1
First there were servers
• One physical server for one operating system
• Additional physical servers added as business grows
[Diagram: physical view vs. user's view – one physical server per operating system]
Then there were logical partitions
• One physical server was divided into logical partitions
• Each partition is assigned a whole number of physical CPUs (or cores)
• One physical server now looks like multiple individual servers to the user
[Diagram: an 8-CPU physical server divided into logical partitions of 1, 3, 2, and 2 CPUs; each partition appears to users as a separate server]
Then came dynamic logical partitions
• Whole CPUs can be moved from one partition to another partition
• These CPUs can be added and removed from partitions without shutting the partition down
• Memory can also be dynamically added and removed from partitions
[Diagram: the same 8-CPU server; whole CPUs are reassigned between the 1-, 3-, 2-, and 2-CPU partitions while they run]
Dynamic LPAR
• Standard on all POWER5 and POWER6 systems
• Move resources between live partitions
[Diagram: an HMC managing four partitions on the hypervisor – Part#1 Production (AIX 5L), Part#2 Legacy Apps (AIX 5L), Part#3 Test/Dev (Linux), and Part#4 File/Print (AIX 5L)]
Now there is micro-partitioning
• A logical partition can now have a fraction of a full CPU
• Each physical CPU (core) can be spread across 10 logical partitions
• A physical CPU can be in a pool of CPUs that are shared by multiple logical partitions
• One physical server can now look like many more servers to the user
• Can also dynamically move CPU resources between logical partitions
[Diagram: the 8-CPU server now hosting partitions of 0.2, 2.3, 1.2, 1, 0.3, 1.5, and 0.9 CPUs]
Micro-partitioning terminology
• Logical partitions (LPARs) can be defined with dedicated or shared processors
• Processors not dedicated to an LPAR are part of the shared processor pool
• Processing capacity for a shared LPAR is specified in processing units, with as little as 1/10 of a processor
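The entitlement rule above can be sketched in a few lines of Python. This is an illustrative model only (not IBM tooling); the function name and pool size are assumptions for the example.

```python
# Illustrative sketch (not IBM code): checking a shared LPAR's capacity,
# given the 1/10-processor minimum granularity described above.

def valid_entitlement(processing_units: float, pool_size: int) -> bool:
    """A shared LPAR's entitlement must be at least 0.1 processing
    units and cannot exceed the physical processors in the pool."""
    return 0.1 <= processing_units <= pool_size

print(valid_entitlement(0.1, 8))   # smallest allowed share of a processor
print(valid_entitlement(0.05, 8))  # below the 1/10 minimum
print(valid_entitlement(9.0, 8))   # more than the pool holds
```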
Micro-partitioning – more details
Let's take a deeper look at micro-partitioning
Micro-partitioning terminology (details)
• A physical CPU is a single "core", also called a "processor"
• The use of micro-partitioning introduces the virtual CPU concept
- A virtual CPU can be a fraction of a physical CPU
- A virtual CPU cannot be more than a full physical CPU
• IBM's simultaneous multithreading (SMT) technology enables two threads to run on the same processor at the same time
• With SMT enabled, the operating system sees twice the number of processors
[Diagram: one physical CPU backing three virtual CPUs via micro-partitioning; with SMT, each virtual CPU presents two logical CPUs, and each logical CPU appears to the operating system as a full CPU]
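The physical → virtual → logical hierarchy can be sketched as follows. This is a simplified illustrative model, not an AIX interface; it assumes the two-way SMT described above.

```python
# Minimal sketch of the CPU hierarchy on this slide (assumed names,
# not an AIX API): SMT doubles what the operating system sees.

def logical_cpus(virtual_cpus: int, smt_enabled: bool) -> int:
    """With two-way SMT on, each virtual CPU presents two logical CPUs."""
    threads_per_cpu = 2 if smt_enabled else 1
    return virtual_cpus * threads_per_cpu

print(logical_cpus(3, smt_enabled=True))   # 6 logical CPUs
print(logical_cpus(3, smt_enabled=False))  # 3 logical CPUs
```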
Micro-partitioning terminology (details)
• The LPAR definition sets the options for processing capacity: minimum, desired, and maximum
• The processing capacity of an LPAR can be dynamically changed
- Changed by the administrator at the HMC
- Changed automatically by the hypervisor
• The LPAR definition sets the behavior when under load
- Capped: LPAR processing capacity is limited to the desired setting
- Uncapped: LPAR is allowed to use more than it was given
Basic terminology around logical partitions
[Diagram: installed physical processors split into deconfigured, inactive (CUoD), dedicated, and shared processors; the shared processor pool backs shared-processor partitions (SMT on or off) through virtual and logical (SMT) processors with an entitled capacity, alongside a dedicated-processor partition with SMT off]
Capped and uncapped partitions
• Capped partition
- Not allowed to exceed its entitlement
• Uncapped partition
- Allowed to exceed its entitlement
• Capacity weight
- Used for prioritizing uncapped partitions
- Value 0–255
- A value of 0 is referred to as a "soft cap"
Note: The CPU utilization metric has less relevance in an uncapped partition.
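One way to picture capacity weights: spare pool capacity is divided among uncapped partitions in proportion to their weights. The sketch below is a deliberate simplification of the hypervisor's actual dispatching, with made-up partition names.

```python
# Hedged sketch of sharing spare pool capacity among uncapped partitions
# by capacity weight (0-255); weight 0 acts as a "soft cap".

def share_spare_capacity(spare: float, weights: dict) -> dict:
    """Split spare processing units proportionally to each partition's
    uncapped weight; weight-0 partitions receive no extra capacity."""
    total = sum(weights.values())
    if total == 0:
        return {name: 0.0 for name in weights}
    return {name: spare * w / total for name, w in weights.items()}

extra = share_spare_capacity(2.0, {"web": 128, "batch": 64, "test": 0})
print(extra)  # web gets twice batch's share; test is soft-capped
```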
What about system I/O adapters
• Back in the "old" days, each partition had to have its own dedicated adapters
- One Ethernet adapter for a network connection
- One SCSI or HBA card to connect to local or external disk storage
• The number of partitions was limited by the number of available adapters
[Diagram: four logical partitions (1, 3, 2, and 2 CPUs), each with its own dedicated Ethernet and SCSI adapters]
Then came the Virtual I/O server (VIOS)
• The Virtual I/O Server allows partitions to share physical adapters
• One Ethernet adapter can now provide a network connection for multiple partitions
• Disks on one SCSI or HBA card can now be shared with multiple partitions
• The number of partitions is no longer limited by the number of available adapters
[Diagram: a Virtual I/O Server partition owning the physical Ethernet and SCSI adapters, serving client partitions of 0.5, 1.1, 0.3, 1.4, and 2.1 CPUs connected to the Ethernet network]
Virtual I/O server and SCSI disks
Integrated Virtual Ethernet
• VIOS setup is not required for sharing Ethernet adapters
[Diagram: Virtual I/O Shared Ethernet Adapter vs. Integrated Virtual Ethernet – on the left, a VIOS partition bridges a PCI Ethernet adapter to LPARs #1–#3 through a Shared Ethernet Adapter (SEA) and the Power Hypervisor's virtual Ethernet switch; on the right, the integrated virtual adapter serves the LPARs' Ethernet drivers directly through the hypervisor]
Let's see it in action
Now let's see this technology in action. This demo illustrates the topics just discussed.
Shared Processor pools
It is possible to have multiple shared processor pools
Let's dive in deeper
Multiple Shared Processor Pools
[Diagram: a physical shared pool hosting two virtual shared pools – VSP1 (Max Cap = 4) running AIX 5L with software X, Y, Z and DB2, and VSP2 (Max Cap = 2) running Linux with software A, B, C]
► Useful for multiple business units in a single company – resource allocation
► Only license the relevant software based on the VSP maximum
► Cap the total capacity used by a group of partitions
► Still allow other partitions to consume capacity not used by the partitions in the VSP
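The capping effect of a virtual shared pool's maximum can be sketched as below. This is an assumed simplification for illustration, not the hypervisor's algorithm; the demand figures are invented.

```python
# Illustrative sketch: a virtual shared pool's Max Cap bounds the
# combined consumption of its member partitions, which also bounds
# software licensing to the pool maximum.

def pool_consumption(demands: list, max_cap: float) -> float:
    """Total capacity a virtual shared pool may consume is the lesser
    of its partitions' combined demand and the pool's maximum."""
    return min(sum(demands), max_cap)

# VSP2 from the slide: Max Cap = 2 processors
print(pool_consumption([1.2, 1.5], max_cap=2.0))  # demand is capped
print(pool_consumption([0.4, 0.8], max_cap=2.0))  # demand fits under cap
```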
AIX 6.1 Introduces Workload Partitions
• Workload partitions (WPARs) are yet another way to create virtual systems
• WPARs are partitions within a partition
• Each WPAR is isolated from the others
• AIX 6.1 can run on POWER5 or POWER6 hardware
AIX 6 Workload Partitions (details)
• A WPAR appears to be a stand-alone AIX system
- Created entirely within a single AIX system image
- Created entirely in software (no hardware assist or configuration)
• Provides an isolated process environment
- Processes within a WPAR can only see other processes in the same partition
• Provides an isolated file system space
- A separate branch of the global file system space is created, and all of the WPAR's processes are chrooted to this branch
- Processes within a WPAR see files only in this branch
• Provides an isolated network environment
- Separate network addresses, hostnames, and domain names
- Other nodes on the network see the WPAR as a stand-alone system
• Provides WPAR resource controls
- The amount of system memory, CPU resources, and paging space allocated to each WPAR can be set
• Shared system resources: OS, I/O devices, shared libraries
[Diagram: workload partitions A through E inside a single AIX 6 image]
Inside a WPAR
Live Application Mobility
• The ability to move a workload partition from one server to another
• Provides outage avoidance and multi-system workload balancing
• Policy-based automation can provide more efficient resource usage
[Diagram: the Workload Partitions Manager applies policy to move a workload partition between AIX #1 (Application Server and Web workload partitions, plus a Dev application partition) and AIX #2 (Billing, QA, and Data Mining workload partitions), with both systems sharing NFS storage]
Live application mobility in action
Let's see this technology in action with another demo. We need to exit the presentation in order to run the demo.
POWER6 hardware introduced partition mobility
With POWER6 hardware, partitions can now be moved from one system to another without stopping the applications running on that partition.
Partition Mobility: Active and Inactive LPARs
• Active Partition Mobility
- Active partition migration is the actual movement of a running LPAR from one physical machine to another without disrupting* the operation of the OS and applications running in that LPAR
- Applicability: workload consolidation (e.g., many to one), workload balancing (e.g., move to a larger system), planned CEC outages for maintenance/upgrades, impending CEC outages (e.g., hardware warning received)
• Inactive Partition Mobility
- Inactive partition migration transfers a partition that is logically "powered off" (not running) from one system to another
Partition Mobility is supported on POWER6 with AIX 5.3, AIX 6.1, and Linux
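The distinction between active and inactive mobility can be modeled very simply: the LPAR changes hosts while its run state is preserved. The sketch below is a toy model (the system names and dictionary layout are invented), not the actual LPM implementation.

```python
# Simplified model of partition mobility: a running LPAR keeps running
# across the move (active migration); a stopped one stays stopped
# (inactive migration). Names like "p570-A" are invented examples.

def migrate(lpar: dict, target: str) -> dict:
    """Move an LPAR to another system without changing its run state."""
    moved = dict(lpar)
    moved["system"] = target
    return moved

prod = {"name": "prod1", "system": "p570-A", "state": "Running"}
after = migrate(prod, "p570-B")
print(after["system"], after["state"])  # new host, still running
```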
Live partition mobility demo
The following demo shows live partition mobility (LPM) in action
IBM System p Offers Best of Both Worlds in Virtualization
[Diagram: LPARs running AIX 5.3, Linux, and AIX 6 (with WPARs for an application server, web server, billing, test, and BI) on the Power Hypervisor; a Virtual I/O Server (VIOS) provides Ethernet and Fibre Channel adapter sharing, virtualized disks, and inter-partition communication across dedicated and shared I/O, with response-time and utilization-based workload and resource management]
• Logical Partitions (LPARs)
- Multiple OS images in LPARs, up to a maximum of 254
- Maximum flexibility: different OSes and OS versions in LPARs
- Maximum fault/security/resource isolation
• AIX 6 Workload Partitions (WPARs)
- Multiple workloads within a single OS image; minimum number of OS images: one
- Improved administrative efficiency: reduces the number of OS images to maintain
- Good fault/security/resource isolation
• AIX workload partitions can be used in LPARs
Virtualization Benefits
• Increase utilization
- Single-application servers often run at low average utilization levels
- Idle capacity cannot be used
- Virtualized servers run at high utilization levels
• Simplify workload sizing
- Sizing new workloads is difficult
- LPARs can be resized to match needs
- Capacity can be over-committed
- Scale up and scale out applications on the same hardware platform
[Chart: CPU utilization from 8:00 to 4:00 against purchased capacity, with peak and average utilization well below the purchased level]
Backup slides
Still more details for those interested…
Partition capacity entitlement
• Processing units
- 1.0 processing unit represents one physical processor
• Entitled processor capacity
- Commitment of capacity that is reserved for the partition
- Sets the upper limit of processor utilization for capped partitions
- Each virtual processor must be granted at least 1/10 of a processing unit of entitlement
• Shared processor capacity is always delivered in terms of whole physical processors
[Diagram: one physical processor = 1.0 processing units, split into entitlements of 0.5 and 0.4 processing units; the minimum requirement is 0.1 processing units]
Capped Shared Processor LPAR
[Chart: LPAR capacity utilization over time for a capped LPAR – utilized capacity varies between the minimum and entitled processor capacity but never exceeds the entitlement; ceded capacity returns to the pool, and pool idle capacity remains available up to the maximum processor capacity]
Uncapped Shared Processor LPAR
[Chart: LPAR capacity utilization over time for an uncapped LPAR – utilized capacity can rise above the entitled processor capacity up to the maximum processor capacity; ceded capacity below the entitlement returns to the pool, and pool idle capacity remains available]
Shared processor partitions
• Micro-partitioning allows multiple partitions to share one physical processor
• Up to 10 partitions per physical processor
• Up to 254 partitions active at the same time
• Partition's resource definition
- Minimum, desired, and maximum values for each resource
- Processor capacity
- Virtual processors
- Capped or uncapped (capacity weight)
- Dedicated memory: minimum of 128 MB, in 16 MB increments
- Physical or virtual I/O resources
[Diagram: six LPARs (LPAR 1–6) sharing a pool of four physical CPUs]
Understanding min/max/desired resource values
• The desired value for a resource is given to a partition if enough resource is available.
• If there is not enough resource to meet the desired value, then a lower amount is allocated.
• If there is not enough resource to meet the min value, the partition will not start.
• The maximum value is only used as an upper limit for dynamic partitioning operations.
Partition capacity entitlement example
• Shared pool has 2.0 processing units available
• LPARs activated in sequence
• Partition 1 activated
- Min = 1.0, max = 2.0, desired = 1.5
- Starts with 1.5 allocated processing units
• Partition 2 activated
- Min = 1.0, max = 2.0, desired = 1.0
- Does not start
• Partition 3 activated
- Min = 0.1, max = 1.0, desired = 0.8
- Starts with 0.5 allocated processing units
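The min/desired/max activation rules can be applied to this example in a short sketch (illustrative Python, not HMC logic; the LPAR names are invented):

```python
# Sketch of the activation rules: grant the desired entitlement if
# available, otherwise whatever is left (down to the minimum); below
# the minimum the LPAR won't start.

def activate(pool_free: float, minimum: float, desired: float):
    if pool_free >= desired:
        return desired
    if pool_free >= minimum:
        return pool_free
    return None  # activation fails

pool = 2.0
for name, mn, de in [("LPAR1", 1.0, 1.5), ("LPAR2", 1.0, 1.0), ("LPAR3", 0.1, 0.8)]:
    got = activate(pool, mn, de)
    if got is not None:
        pool -= got
    print(name, got)  # LPAR1 1.5, LPAR2 None, LPAR3 0.5
```

This reproduces the slide's sequence: after LPAR1 takes 1.5, only 0.5 units remain, which is below LPAR2's minimum but above LPAR3's.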
Shared Dedicated Capacity
• Today, unused capacity in dedicated partitions gets wasted
- Dedicated processor partitions often have excess capacity that could be utilized by uncapped micro-partitions
• With the new support, a dedicated partition donates its excess cycles to the uncapped partitions
- Results in increased resource utilization
- The dedicated processor partition maintains the performance characteristics and predictability of the dedicated environment under load
[Charts: utilization (0–200%) of a 1-way dedicated partition and two 0.5-entitlement uncapped partitions, before and after shared dedicated capacity – the dedicated partition's wasted cycles are reclaimed and the equivalent workload completes]
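The donation idea reduces to simple arithmetic: whatever fraction of the dedicated partition's capacity goes unused becomes available to uncapped micro-partitions. The sketch below uses invented numbers for illustration.

```python
# Rough sketch of shared dedicated capacity: a dedicated partition's
# unused cycles are donated to uncapped partitions instead of wasted.

def donated_cycles(dedicated_capacity: float, utilization: float) -> float:
    """Unused fraction of a dedicated partition's capacity that becomes
    available to uncapped micro-partitions."""
    return dedicated_capacity * (1.0 - utilization)

# A 1-way dedicated partition running at 60% donates 0.4 of a processor
print(donated_cycles(1.0, 0.60))
```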
WPAR Manager view of WPARs
Active Memory Sharing Overview
• The next step in resource virtualization, analogous to shared processor partitions that share the processor resources available in a pool of processors
• Supports over-commitment of physical memory, with overflow going to a paging device
- Users can define a partition with a logical memory size larger than the available physical memory
- Users can activate a set of partitions whose aggregate logical memory size exceeds the available physical memory
• Enables fine-grained sharing of physical memory and automated expansion and contraction of a partition's physical memory footprint based on workload demands
• Supports OS collaborative memory management (ballooning) to reduce hypervisor paging
• A pool of physical memory is dynamically allocated amongst multiple logical partitions as needed to optimize overall physical memory usage in the pool
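The over-commitment described above can be sketched as a simple balance: logical memory beyond the physical pool spills to the paging device. This is an illustrative model with invented sizes, not the hypervisor's allocation algorithm.

```python
# Simplified model of Active Memory Sharing: logical memory may exceed
# the physical pool, with the overflow backed by a paging device.

def memory_overflow(pool_gb: float, logical_sizes_gb: list) -> float:
    """Logical memory beyond the physical pool spills to paging space."""
    return max(0.0, sum(logical_sizes_gb) - pool_gb)

# Three partitions define 8 GB each against a 16 GB physical pool
print(memory_overflow(16.0, [8.0, 8.0, 8.0]))  # 8.0 GB backed by paging
print(memory_overflow(16.0, [4.0, 4.0]))       # fits; nothing paged
```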