VMware Virtual SAN 6.0: What's New Technical Walkthrough
Raiko Mesterheide
Systems Engineer
The Software-Defined Data Center
Transform storage by aligning it with app demands
Expand virtual compute to all applications
Virtualize the network for speed and efficiency
Management tools give way to automation
VMware Software-Defined Storage
[Diagram: Storage Policy-Based Management in vSphere delivers a virtual datastore either through Virtual SAN on server-side disks or through vSphere Virtual Volumes on SAN/NAS arrays]
Bringing the Efficient Operational Model of Virtualization to Storage and Availability
Storage Policy-Based Management:
vSphere Storage Policy-Based Mgmt
Virtual SAN
Capacity
Performance
Availability
2 Failures to tolerate
Reserve thick 10 GB
Flash Read Cache 10%
• Intelligent storage placement at scale
• Dynamic adjustments in real time
• Automated policy enforcement
App-centric Control Plane That Works Across Storage Tiers
Virtual SAN
Virtual SAN Puts the App in Charge
VM-centric Service Levels for Simpler and Automated Storage Management Through an App-centric Approach
1. Define storage policy
2. Apply policy at VM creation
Today:
✖ Hardware-centric, vendor-specific management
✖ Slow provisioning, rigid storage constructs (LUNs, volumes)
✖ Data services aligned to the storage container, not directly with VM needs
✖ Frequent data migrations
Virtual SAN:
✔ Fast, VM-centric provisioning; no need to manage LUNs or volumes
✔ Resources and data services are automatically provisioned and maintained
✔ Easy to change without data migration
[Diagram: today's LUN-based provisioning contrasted with a storage policy (capacity, availability, performance) applied per VM on the Virtual SAN datastore]
VMware Virtual SAN: Hybrid
vSphere + Virtual SAN
…
• Software-defined storage built into vSphere
• Runs on any standard x86 server
• Pools flash-based devices into a shared datastore
• Managed through per-VM storage policies
• Delivers high performance through flash acceleration
• 2x more IOPS than Virtual SAN 5.5 with the hybrid architecture
• Up to 40K IOPS/host
• Highly resilient: zero data loss in the event of hardware failures
• Deeply integrated with the VMware stack
Virtual SAN
[Diagram: SSDs and hard disks from each host pooled into the shared Virtual SAN datastore]
Radically Simple Hypervisor-Converged Storage Software
VMware Virtual SAN: All-Flash
vSphere + Virtual SAN
…
• Flash-based devices used for caching as well as persistence
• Cost-effective all-flash 2-tier model:
o Cache is 100% write: uses write-intensive, higher-grade flash-based devices
o Persistent storage: can leverage lower-cost, read-intensive flash-based devices
• Very high IOPS: up to 90K(1) IOPS/Host
• Consistent performance with sub-millisecond latencies
Virtual SAN All-Flash
Virtual SAN All-Flash Datastore
NEW in 6.0
(1) All performance numbers are subject to final benchmarking results. Please refer to guidance published at GA
Extremely High Performance with Predictability
Enterprise-Class Scale and Performance Enhancements in 6.0
                 Virtual SAN 5.5   Virtual SAN 6.0 Hybrid   Virtual SAN 6.0 All-Flash
Hosts / Cluster  32                64                       64
IOPS / Host      20K               40K                      90K
VMs / Host       100               200                      200
VMs / Cluster    3200              6400                     6400
Note: All performance numbers are subject to final benchmarking results. Please refer to guidance published at GA
Virtual SAN 6.0 Now Ready For Business-Critical Apps
VDI | DR | Test/Dev | Virtual Infrastructure
Best storage for VMs: optimized for virtual infrastructure
Enterprise-class: ready for business-critical apps
VMware Virtual SAN Hardware
Hardware Requirements
Any Server on the VMware Compatibility Guide
All flash-based devices and storage controllers MUST be listed on the VMware Compatibility Guide for VSAN
1Gb/10Gb NIC
SAS/SATA controllers (RAID controllers must work in "pass-through" or "RAID0" mode)
SAS/SATA/PCIe SSD
SAS/NL-SAS/SATA HDD
At least 1 of each (all-flash configurations use no magnetic disks)
4GB to 8GB USB or SD cards
Flash-Based Devices
In the Virtual SAN hybrid architecture, ALL read and write operations always go directly to the flash tier.
Flash-based devices serve two purposes in the hybrid architecture:
1. Non-volatile write buffer (30%)
– Writes are acknowledged when they enter the prepare stage on the flash-based devices
– Reduces latency for writes
2. Read cache (70%)
– Cache hits reduce read latency
– Cache misses retrieve data from the magnetic devices
Choice of hardware is the #1 performance differentiator between Virtual SAN configurations.
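The 70/30 split above lends itself to a quick sizing calculation. A minimal sketch (the function name is mine and the 70/30 defaults simply restate the split quoted in this deck; this is not a VMware tool):

```python
def hybrid_flash_split(flash_capacity_gb, read_pct=0.70, write_pct=0.30):
    """Split a hybrid Virtual SAN flash device into its two roles:
    read cache (70% by default) and non-volatile write buffer (30%)."""
    read_cache_gb = round(flash_capacity_gb * read_pct, 2)
    write_buffer_gb = round(flash_capacity_gb * write_pct, 2)
    return read_cache_gb, write_buffer_gb

# Example: a 400 GB SSD fronting a hybrid disk group
read_cache, write_buffer = hybrid_flash_split(400)
print(read_cache, write_buffer)  # 280.0 120.0
```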
Flash-Based Devices
In the Virtual SAN all-flash architecture, read and write operations always go directly to flash devices.
Flash-based devices serve two purposes in Virtual SAN all-flash:
1. Cache tier (write buffer)
– High-endurance flash devices
– Listed on the VCG
2. Capacity tier
– Lower-endurance flash devices
– Listed on the VCG
Network
• 1Gb / 10Gb supported for the hybrid architecture
– 10Gb shared with NetIOC for QoS will support most environments
– With 1Gb, dedicated links for Virtual SAN are recommended
• 10Gb only supported for the all-flash architecture
– 10Gb shared with NetIOC for QoS will support most environments
• Jumbo frames provide a nominal performance increase
– Enable for greenfield deployments
– Enable in large deployments to reduce CPU overhead
• Virtual SAN supports both VSS and VDS
– NetIOC requires VDS
• Network bandwidth has more impact on host evacuation and rebuild times than on workload performance
VMware Virtual SAN
High Density Direct Attached Storage
2015 & 2016
– Manage disks in enclosures – helps enable blade environment
– Flash acceleration provided on the server or in the subsystem
– Data services delivered via the VSAN Data Services and platform capabilities
– Supports a combination of direct-attached disks and high-density direct-attached disks (SSDs and HDDs) per disk group
– Supported HDDASs will be tightly controlled by the HCL (exact list TBD).
• Applies to HDDASs and controllers
• Also supported on Virtual SAN 5.5
[Diagram: blade servers attached to high density direct attached storage (HDDAS) enclosures containing SSDs and HDDs]
vSphere + Virtual SAN
VMware Virtual SAN – VCG and Ready Nodes
www.vmware.com/go/virtualsan-hcl
1. Virtual SAN Hardware Quick Reference Guide
• 5 Ready Node profile guidelines
• Sizing assumptions
• Design considerations
2. Virtual SAN Ready Nodes
• List components and quantity that make up each Ready Node
• Info on how to quote/order the Ready Node
3. Always use certified components!
• Drivers and firmware
• Supportability
• Increase customer satisfaction
• Reduce rework and time-to-market
VMware Virtual SAN: One Destination (SDDC & SDS), Three Paths
Component Based
…using the VMware Virtual SAN Compatibility Guide (VCG) (1)
Choose individual components …
SSD or PCIe
SAS/NL-SAS/ SATA HDDs
Any Server on vSphere Hardware Compatibility List
HBA/RAID Controller
Virtual SAN Ready Node
40+ OEM validated server configurations ready for Virtual SAN deployment (2)
Note:
1) Components must be chosen from the Virtual SAN HCL; using any other components is unsupported – see the Virtual SAN VMware Compatibility Guide page
2) VMware continues to update the list of available Ready Nodes; refer to the Virtual SAN VMware Compatibility Guide page for the latest list
3) Product availability varies by country; contact your local VMware partners for details, pricing and availability
Maximum Flexibility | Maximum Ease of Use
VMware EVO:RAIL
A Hyper-Converged Infrastructure Appliance
(HCIA) for the SDDC
Each EVO:RAIL HCIA is pre-built on a qualified and optimized
2U/4 Node server platform.
Sold via a single SKU by VMware Qualified EVO:RAIL Partners (QEPs) (3)
Software + Hardware Hyper-Converged Infrastructure
There are 5 VSAN Ready Node Profiles – Server Workload
Virtual SAN Hybrid - Server
• Server Low Profile: up to 30 VMs, up to 4K IOPS, 5TB raw capacity
• Server Medium Profile: up to 60 VMs, up to 24K IOPS, 8TB raw capacity
• Server High Profile: up to 120 VMs, up to 40K IOPS, 14.4TB raw capacity
Virtual SAN All-Flash - Server
• Server Medium Profile: up to 60 VMs, up to 60K IOPS, 8TB raw capacity (capacity 8x1TB SSD, caching 2x200GB SSD)
• Server High Profile: up to 120 VMs, up to 80K IOPS, 12TB raw capacity (capacity 12x1TB SSD, caching 2x400GB SSD)
For complete details on the sizing assumptions and design considerations of the Ready Node profiles, please refer to the “Virtual SAN Hardware Quick Reference Guide” on the Virtual SAN VMware Compatibility Guide Page
VMWARE FIELD & PARTNER USE ONLY - CONFIDENTIAL
There are 4 VSAN Ready Node Profiles – VDI Workload
Virtual SAN Hybrid - VDI
• VDI Linked Clones Profile: up to 100 desktops, up to 10K IOPS, 1.2TB raw capacity
• VDI Full Clones Profile: up to 100 desktops, up to 10K IOPS, 10.8TB raw capacity
Virtual SAN All-Flash - VDI
• VDI Linked Clones Profile: up to 200 desktops, 1.6TB raw capacity (capacity 4x400GB SSD, caching 1x400GB SSD)
• VDI Full Clones Profile: up to 200 desktops, 9.6TB raw capacity (capacity 12x800GB SSD, caching 2x400GB SSD)
For complete details on the sizing assumptions and design considerations of the Ready Node profiles, please refer to the "Virtual SAN Hardware Quick Reference Guide" on the Virtual SAN VMware Compatibility Guide page
2x VMs per host | 62TB virtual disks | Snapshots and clones
• Greater capacity allocations per VMDK
– VMDKs >2TB are supported
• Larger supported number of snapshots and clones per VM
– 32 per virtual machine
• Larger consolidation ratios
– Due to the increase of supported components per host
– 9,000 components per host
Virtual SAN Performance and Scale Improvements
Host Scalability
• Cluster support raised to match vSphere: up to 64 nodes per cluster
Disk Format Upgrade (VSAN 5.5 to 6.0) and Disk Serviceability Functions
• In-Place modular rolling upgrade
• Seamless In-place Upgrade
• Seamless Upgrade Rollback Supported
• Upgrade performed from RVC CLI
• PowerCLI integration for automation and management
• Ability to manage flash-based and magnetic devices.
• Storage consumption models for policy definition
• Default Storage Policies
• Resync Status dashboard in UI
• VM capacity consumption per VMDK
• Disk/Disk group evacuation
• New On-Disk Format
• New delta-disk type vsanSparse
• Performance Based snapshots and clones
Virtual SAN 6.0 New Features
VSAN Platform
• New Caching Architecture for all-flash VSAN
• Virtual SAN Health Services
• Proactive Rebalance
• Fault domains support
• High Density Storage Systems with Direct Attached Storage
• File Services via 3rd party
• Limited support for hardware encryption and checksum
[On-disk format: VMFS-L (5.5) replaced by VSAN FS (6.0)]
Virtual SAN 6.0 Enables Both Hybrid and All-Flash Architectures
                  Hybrid                          All-Flash (New!)
IOPS / Host       30K                             90K, predictable sub-millisecond latency
Caching           SSD, PCIe, Ultra DIMM etc.      SSD, PCIe, Ultra DIMM etc.
                  (read cache / write buffer)     (write-only buffer)
Data Persistence  Magnetic disks                  Flash devices
Virtual SAN Flash Caching Architectures
Hybrid:
– Cache tier (read cache + write buffer): 10% of projected used capacity; high-endurance devices (2 to 3 TBW per day)
– Capacity tier: magnetic devices sized for the remainder of capacity; price on best $/GB
All-Flash:
– Cache tier (write buffer): 10% of projected used capacity; high-endurance devices (2 to 3 TBW per day)
– Capacity tier: flash devices sized for the remainder of capacity; lower endurance required (0.2 TBW per day sufficient); price on best $/GB
All-Flash Cache Tier Sizing
Cache tier should have 10% of the anticipated consumed storage capacity
Cache is entirely write buffer in the all-flash architecture
Cache devices should be high-write-endurance models: choose 2+ TBW/day, or 3650+ TBW over 5 years
Total cache capacity percentage should be based on use case requirements:
– For general recommendations, visit the VMware Compatibility Guide
– For write-intensive workloads, configure a higher amount
– Increase cache size if expecting heavy use of snapshots
Measurement Requirements                   Values
Projected VM space usage                   20GB
Projected number of VMs                    1000
Total projected space consumption          20GB x 1000 = 20,000 GB = 20 TB
Target flash cache capacity percentage     10%
Total flash cache capacity required        20TB x .10 = 2 TB
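The table's arithmetic can be reproduced with a short helper. A sketch only, assuming the 10% rule of thumb quoted above (the function name is mine, not a VMware API):

```python
def flash_cache_required_tb(vm_space_gb, vm_count, cache_pct=0.10):
    """Total flash cache (TB) = projected space consumption x target cache percentage."""
    total_space_gb = vm_space_gb * vm_count            # e.g. 20 GB x 1000 VMs = 20,000 GB
    return round(total_space_gb * cache_pct / 1000, 3)  # convert GB -> TB

print(flash_cache_required_tb(20, 1000))  # 2.0, matching the worked example
```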
VMware Virtual SAN 6.0 Usability Improvements
• vCenter can manage multiple vsanDatastores with different sets of requirements.
• Each vsanDatastore can have a different default profile assigned.
Default Storage Policies
[Diagram: one vCenter Server managing two vSphere + Virtual SAN clusters, each with its own default policy (VSAN default policy, BCA default policy)]
Virtual Machine Usability Improvements
• Virtual SAN 6.0 adds functionality to visualize Virtual SAN datastore resource utilization when a VM Storage Policy is created or edited.
• Virtual SAN's free disk space is raw capacity
– With replication, actual usable space is less
• New UI shows real usage on:
– Flash devices
– Magnetic disks
• Displayed in the vSphere Web Client and RVC
Virtual Machine >2TB VMDKs
• In VSAN 5.5, the max size of a VMDK was limited to 2TB
– Max size of a VSAN component is 255GB
– Max number of stripes per object was 12
• In VSAN 6.0 the limit has been increased to allow VMDKs up to 62TB
– Objects are still striped at 255GB
• The 62TB limit is the same as VMFS and NFS, so large VMDKs can be used consistently across datastore types
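Because objects are still striped into 255GB components, the minimum component count behind a large VMDK is easy to estimate. A sketch under the numbers above (it ignores stripe width and FTT replicas, which multiply the count):

```python
import math

def min_components_per_replica(vmdk_size_gb, component_size_gb=255):
    """Minimum number of 255GB components backing one replica of a VMDK."""
    return math.ceil(vmdk_size_gb / component_size_gb)

# A maximum-size 62TB VMDK (62 x 1024 GB) needs about 249 components per replica,
# comfortably inside the 9,000 components/host limit mentioned earlier in the deck.
print(min_components_per_replica(62 * 1024))  # 249
```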
Resynchronization Status
• Virtual SAN might need to move data around in the background: change policy, host failure, long term/permanent component loss, user triggered reconfig, maintenance mode, etc.
• UI Resync Dashboard shows the VMs and objects that are resyncing and remaining bytes to sync.
Proactive Rebalance
• Proactive rebalance is a new feature introduced in 6.0 to address two typical use cases:
– Adding a new node to an existing VSAN cluster, or bringing a node out of decommission state
– Leveraging the new nodes even when existing disks are below 80% full
– Rebalancing is more effective when started before disks are nearly full
• Performed through RVC
– vsan.proactive_rebalance --start ~/computers/cluster
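As a CLI sketch, from an RVC session connected to vCenter (the inventory path is illustrative; `--start` is the flag shown above, while the matching `--stop` flag is an assumption to verify against your RVC version):

```
> vsan.proactive_rebalance --start ~/computers/cluster   # begin proactive rebalancing
> vsan.proactive_rebalance --stop ~/computers/cluster    # stop a running rebalance (assumed flag)
```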
VMware Virtual SAN Failure Scenarios
Fault Domains
[Diagram: a Virtual SAN cluster across Rack A, Rack B, and Rack C, each rack a fault domain; a RAID-1 object places its vmdk replicas and witness in different fault domains]
Virtual SAN 5.5 assumed that different hosts have independent failure behavior.
For FTT=n, VSAN creates (n+1) replicas on (n+1) unique hosts.
Failure protection example in Virtual SAN 5.5
Four racks with two hosts each
FTT=2 is needed to protect against one rack failure, requiring 3 replicas
Fault Domains
[Diagram: Virtual SAN cluster of esx-1 through esx-8 across Racks A-D, with replicas R1, R2, R3 placed on hosts in separate racks]
An example of Virtual SAN 6.0 utilizing the new fault domain feature, with four racks of two hosts each:
Four defined fault domains
FD1 = esx-1, esx-2
FD2 = esx-3, esx-4
FD3 = esx-5, esx-6
FD4 = esx-7, esx-8
With fault domains, FTT=1 protects against a full rack failure with only 2 replicas
[Diagram: with one fault domain per rack, the object places replicas R1 and R2 plus a witness W in three different racks]
vSphere admins can configure fault domains and their definitions from:
• vSphere Web Client
• ESXCLI
Fault Domain Configuration
Number of failures to tolerate (FTT) is applied based on fault domains, no longer on individual hosts
– Example: for FTT=n, (2n + 1) fault domains are required
– Provisioning failures can occur due to misconfigured hosts or an unsatisfiable number of fault domains
Fault domains are configurable through host profiles
– Host profile configurations are persistent across reboots
– Once reconfiguration begins, objects will be out of compliance for a period of time
– Once the objects are synchronized they will be back in compliance
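The replica and fault domain arithmetic above can be sketched directly (plain functions restating the deck's rules, not a VMware API):

```python
def replicas_required(ftt):
    """FTT=n stores n+1 replicas of the data."""
    return ftt + 1

def fault_domains_required(ftt):
    """FTT=n requires 2n+1 fault domains, so replicas plus witnesses
    can still form a majority after n domain failures."""
    return 2 * ftt + 1

# Tolerating one rack failure: 2 replicas plus a witness across 3 fault domains
print(replicas_required(1), fault_domains_required(1))  # 2 3
# Tolerating two failures: 3 replicas across 5 fault domains
print(replicas_required(2), fault_domains_required(2))  # 3 5
```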
Fault Domains
New RVC commands provide visibility and management capabilities to the failure domains configuration:
• vsan.fault_domains /Datacenter/Computer/VSAN
• vsan.fault_domains --help
Virtual SAN Objects, Components, Witness
• New quorum computation:
– Each fault domain must have an equal number of votes during quorum computation
– In VSAN 5.5, each component has only one vote, and VSAN may add additional witnesses to equalize votes
– In VSAN 6.0, each component initially has one vote, and VSAN may increase the number of votes for certain components to equalize votes
VMware Virtual SAN 6.0 Interoperability
File Services with NexentaConnect
• NexentaConnect complements VMware Virtual SAN simplified operating and storage consumption models by:
– Adding file services (SMB, NFS) on top of Virtual SAN
– Providing similar ease-of-management capabilities
– Leveraging Storage Policy Based Management (SPBM) and underlying storage technologies
• NexentaConnect is used for storing files while VSAN is for virtual machine storage
• Offers vSphere Administrators flexibility and benefits such as
– Abstracted pool of files services
– High performance NFS and SMB network shares
– Live monitoring capabilities
– Disaster Recovery planning capabilities
• This is a 3rd party solution and not developed by VMware
[Diagram: NexentaConnect file services layered on top of the Virtual SAN shared datastore]
vRealize Automation
• vRealize Automation Advanced complements VMware Virtual SAN simplified operating and storage consumption models by:
– Delivering a dynamic storage service level allocation on top of Virtual SAN.
– Leveraging Storage Policy Based Management (SPBM) and underlying Virtual SAN storage technologies.
vRealize Operations
• Day to Day Operations Management
– Enable Alerting & Notification for troubleshooting VSAN related failures and performance issues
– Provide a single pane of glass for simplified and automated operations management for VSAN by means of exploratory dashboards, heat maps etc
• Analytics and Future Capacity Planning
– Analyze Health, Risk and Efficiency of Virtual SAN cluster around performance, capacity and availability
– Enable use of advanced analytics, reporting and planning capabilities for physical infrastructure supporting Virtual SAN
PowerCLI
• PowerCLI 6.0 delivers a set of Virtual SAN related cmdlets (no longer a fling) for managing Virtual SAN.
– Some of the existing cmdlets were altered to work with Virtual SAN.
• Here are some of the new cmdlets:
– Export-SpbmStoragePolicy
– Get-SpbmCapability
– Get-SpbmCompatibleStorage
– Get-SpbmEntityConfiguration
– Get-SpbmStoragePolicy
– Get-VSANDisk
– Get-VsanDiskGroup
– Import-SpbmStoragePolicy
– New-SpbmRule
– New-SpbmRuleSet
– New-SpbmStoragePolicy
– New-VsanDisk
– New-VsanDiskGroup
– Remove-SpbmStoragePolicy
– Remove-VsanDisk
– Remove-VsanDiskGroup
– Set-SpbmEntityConfiguration
– Set-SpbmStoragePolicy
Disaster Recovery For The Software-Defined Data Center
• VM-centric, storage-independent replication simplifies protection
• Flexible storage topologies (External to Virtual SAN or vCloud Air)
vSphere Replication
[Diagram: production site and recovery site, each running vSphere; Site Recovery Manager orchestrates vSphere Replication between sites, and VDPA replicates backup data between backup datastores; storage can be Virtual SAN or external storage]
• Storage-efficient dedupe reduces storage investments
• WAN-efficient backup data replication enables basic DR
vSphere Data Protection Advanced
• Server side economics lower storage costs
• Hyper-convergence on x86 platform reduces DR footprint
Virtual SAN
• Centralized recovery plans enables DR scale for thousands of VMs
• DR workflow automation reduces OpEx on DR management
Site Recovery Manager
• DR as a Service to vCloud Air shifts DR investments from CapEx to OpEx
• Fully delivered and supported by VMware
vCloud Air Disaster Recovery
Site Recovery Manager
VMware Virtual SAN Monitoring & Troubleshooting
Ruby vSphere Console (RVC)
• New RVC commands for management and configurations purposes have been added
• Here is the list of the new commands:
– vsan.v2_ondisk_upgrade
– vsan.proactive_rebalance
– vsan.purge_inaccessible_vswp_objects
– vsan.enable_capacity_flash
– vsan.host_claim_disks_differently
– vsan.host_wipe_non_vsan_disk
– vsan.host_evacuate_data
– vsan.host_exit_evacuation
– vsan.scrubber_info
– basic.screenlog
Virtual SAN Health
Virtual SAN Health Services is designed to deliver troubleshooting and health reports to vSphere Administrators about Virtual SAN 6.0 subsystems and their dependencies, such as:
– Cluster health
– Network health
– Data health
– Limits health
– Physical disk health
THANK YOU