© 2014 IBM Corporation
Storwize Family V7.3 Technical Update
May 8, 2014
Byron Grossnickle, Consulting I/T Specialist, NA Storage Specialty Team, Storage Virtualization
Bill Wiegand, Consulting I/T Specialist, NA Storage Specialty Team, Storage Virtualization
Agenda
Next Generation Storwize V7000 Hardware
What’s new in V7.3 Software
Storwize V7000 Hardware Refresh: 2076-524 Control Enclosure
Control enclosure is 2U, the same physical size as the previous model
Front view looks identical to the V5000; the control enclosure comes only in a 24-drive SFF configuration, while expansion enclosures come in both SFF and LFF configurations
Back layout is very different, to make room for the more powerful canisters

[Rear view: Canister 1, Canister 2, PSU 1, PSU 2]
Storwize V7000 Hardware Refresh: Rear View
[Rear view: dual controller/node canisters, each with host interface slots, a compression accelerator slot, SAS expansion ports, 1GbE ports and the Technician port; two PSUs]
Storwize V7000 Hardware Refresh: Exploded View
[Exploded view: drive cage, drives, midplane, enclosure chassis, fan cage, PSUs, canisters]
Storwize V7000 Hardware Refresh: Block Diagram of Node Canister

[Block diagram: 1.9GHz Ivy Bridge E5-2628L-V2 CPU with 4x 16GB DIMMs; 128GB SSD boot drive, USB, TPM and quad 1GbE attached via DMI; one standard Coleto Creek compression accelerator plus an optional second Compression Acceleration card, each behind a PLX PCIe switch (PCIe V3, 1GB full duplex, 8 lanes); 12Gb/phy SAS: chain 0 (2x 4 phys) to the control enclosure drives, chains 1 and 2 (4 phys each) to expansion enclosure drives; high-speed cross-card communications to the partner canister; HBA slots for 8Gb FC or 10GbE]
Storwize V7000 Hardware Refresh: Built-in Ports per Node Canister

There are four 1Gb Ethernet ports, numbered as shown in the picture
The T port is the Technician port used for initial configuration of the system
There are two external 12Gb SAS ports for expansion
– SAS host attachment and SAS virtualization are not supported
There are two USB ports
There are three slots for expansion cards
Storwize V7000 Hardware Refresh: Expansion Card Options
There are three expansion slots, numbered 1-3 left to right when viewed from the rear
Ports on a particular card are numbered top to bottom starting with 1
Supported expansion cards:
– Slot 1: Compression pass-through, Compression Acceleration card
– Slot 2: None, 8Gb FC*, 10GbE**
– Slot 3: None, 8Gb FC*, 10GbE**
– Compression pass-through comes standard with the system to enable the on-board compression engine

* Statement of Direction for 16Gb FC announced
** Only one 10GbE card supported per node canister
Storwize V7000 Hardware Refresh: 8Gb FC Card
Same adapter as used in current Storwize V7000 models
– PMC-Sierra Tachyon QE8
– SW SFPs included
– LW SFPs optional
Up to two can be installed in each node canister for a total of 16 FC ports in the control enclosure
16Gb FC Statement of Direction announced
Storwize V7000 Hardware Refresh: 10GbE Card
The new 4-port 10GbE adapter supports both FCoE and iSCSI
– Can be used for IP replication too
In V7.3.0 we will only support one 10GbE adapter in each node canister of the 2076-524
Support for IBM 10Gb optical SFP+ only
Each adapter port has an amber and a green LED to indicate port status
– The fault (amber) LED is not used in V7.3
– Green LED on: link established; green LED off: no link
FCoE frame routing (FCF) is performed by a CEE switch or passed through to an FC switch
– No direct attach of hosts or storage to these ports
Software allows using the FCoE and iSCSI protocols simultaneously, as well as IP replication, on the same port
– Best practice is to separate these protocols onto different ports on the card
Storwize V7000 Hardware Refresh: Compression Accelerator Card
New Storwize V7000 model has one on-board compression accelerator standard and supports volume compression without any additional adapter installed
– This configuration will have a pass-through adapter in slot 1 to allow the on-board compression hardware to be utilized
One additional Compression Accelerator card (see picture) can optionally be installed in slot 1, replacing the pass-through adapter, for a total of two Compression Accelerator cards per node canister
Storwize V7000 Hardware Refresh: Memory/CPU Core Allocation – RtC
For this initial release there are fixed memory sizes assigned for RtC use, based on how much memory is installed in each node canister
An additional 32GB of memory can be installed in each node canister
– Currently can only be used by the RtC code
– Statement of Direction announced to allow use of this extra memory in non-RtC environments

Memory allocation when RtC is enabled:
Installed RAM    RtC Allocation
32 GB            6 GB
64 GB            6 GB + optional 32 GB upgrade

CPU core allocation when RtC is enabled:
Compression Disabled    Compression Enabled
SVC: 8, RtC: 0          SVC: 4, RtC: 4

This gives a balanced configuration between SVC and RtC performance
– Recommendation for serious RtC use is to add the extra 32GB of memory per node canister
– A second Compression Accelerator is also recommended and requires the extra 32GB of memory
Storwize V7000 Hardware Refresh: Max Performance (One I/O Group)
Uncompressed        Previous Storwize V7000    New Storwize V7000
Read Hit IOPS       850,000                    1,300,000
Read Miss IOPS      125,000                    238,000
Write Miss IOPS     25,000                     50,000
"DB-like"           52,000                     100,000

Compressed          Previous Storwize V7000    New Storwize V7000
Read Miss IOPS      2,000-44,000               39,000-149,000
Write Miss IOPS     1,100-17,000               22,500-78,000
"DB-like"           1,500-32,700               41,000-115,000

Compressed performance shows a range depending on I/O distribution
Compressed performance is better than uncompressed in some cases because of fewer I/Os to drives and additional cache benefits

Preliminary data: Subject to change before GA
Storwize V7000 Hardware Refresh: Fan Module
Each control enclosure contains two fan modules for cooling, one per node canister
Each fan module contains 8 individual fans in 4 banks of 2
The fan module as a whole is a replaceable component, but the individual fans are not
There is a new CLI view, lsenclosurefanmodule:

IBM_Storwize:FAB1_OOB:superuser>svcinfo lsenclosurefanmodule
enclosure_id fan_module_id status
1            1             online
1            2             online
Storwize V7000 Hardware Refresh: Internal Battery (1)
The battery is located within the node canister rather than the PSU in the new model
– Provides independent protection for each node canister
A 5-second AC power loss ride-through is provided
– After this period, if power is not restored, we initiate a graceful shutdown
– If power is restored during the ride-through period, the node will revert back to main power and the battery will revert to the 'armed' state
– If power is restored during the graceful shutdown, the system will revert back to main power and the node canisters will shut down and automatically reboot
A one-second full-power test is performed at boot before the node canister comes online
A periodic test of the battery (one at a time) is performed within the node canister, only if both nodes are online and redundant, to check whether the battery is functioning properly
Storwize V7000 Hardware Refresh: Internal Battery (2)
Power failure
– If power to a node canister fails, the node canister uses battery power to write cache and state data to its boot drive
– When power is restored to the node canister, the system restarts without operator intervention
– How quickly it restarts depends on whether there is a history of previous power failures
– The system restarts only when the battery has sufficient charge to power the node canister for the duration of saving the cache and state data again
– If the node canister has experienced multiple power failures, and the battery does not have sufficient charge to save the critical data, the system starts in service state and does not permit I/O operations to be restarted until the battery has sufficient charge
Reconditioning
– Reconditioning ensures that the system can accurately determine the charge in the battery. As a battery ages, it loses capacity. When a battery no longer has the capacity to protect against two power loss events, it reports the battery end-of-life event and should be replaced.
– A reconditioning cycle is automatically scheduled to occur approximately once every three months, but reconditioning is rescheduled or cancelled if the system loses redundancy. In addition, a two-day delay is imposed between the recondition cycles of the two batteries in one enclosure.
Storwize V7000 Hardware Refresh: 2076-24/12F Expansion Enclosure
The expansion enclosure front looks just like the V5000 enclosures
The expansion enclosure back looks pretty much like the V5000 enclosures too
Storwize V7000 Hardware Refresh: 2076-24/12F Expansion Enclosure
Available in 2.5-inch and 3.5-inch drive models
– 2076 models 24F and 12F respectively
Attach to the new control enclosure using 12Gbps SAS
Mix drive classes within an enclosure, including different drive SAS interface speeds
Mix new enclosure models in a system, even on the same SAS chain
All drives are dual-ported and hot swappable
Storwize V7000 Hardware Refresh: Expansion Enclosure Cabling
Storwize V7000 Hardware Refresh: SAS Chain Layout
Each control enclosure supports two expansion chains, and each can connect up to 10 enclosures
Unlike the previous Storwize V7000, the control enclosure drives are not on either of these two SAS chains
– There is a double-width high-speed link to the control enclosure, and SSDs should be installed in the control enclosure
– There is as much SAS bandwidth dedicated to these 24 slots as there is to the other two chains combined
– The control enclosure internal drives are shown as being on 'port 0' where this matters
SSDs can also go in other enclosures if more than 24 are required for capacity reasons
HDDs can go in the control enclosure if desired
A mix of SSDs and HDDs is fine too
[Diagram: node canister SAS adapter with SAS port 1 (chain 1) and SAS port 2 (chain 2), each chain supporting up to 10 expansion enclosures; internal SAS links ('SAS port 0', chain 0) to the control enclosure drives]
Clustered System Example – 2 IOGs and Max of 40 SFF Expansion Enclosures
[Diagram: two I/O groups, each with a control enclosure (internal SAS chain 0) and expansion enclosures on SAS chains 1 and 2]
Clustered System Example – 4 IOGs and Max of 40 SFF Expansion Enclosures
[Diagram: four I/O groups; I/O groups 0 and 1 each have a control enclosure (SAS chain 0) with expansion enclosures on SAS chains 1 and 2; I/O groups 2 and 3 have control enclosures (SAS chain 0) only]
Clustered System Example – 4 IOGs and Max of 40 SFF Expansion Enclosures
[Diagram: four I/O groups, each with a control enclosure (SAS chain 0) and expansion enclosures on SAS chains 1 and 2]
Clustered System Example – 4 IOGs and Max of 80 LFF Expansion Enclosures
[Diagram: four I/O groups, each with a control enclosure (SAS chain 0) and LFF expansion enclosures on its expansion chains]
Technician Port
The Technician port is used for the initial configuration of the system in lieu of a USB stick
– The Technician port is marked with a T (Ethernet port 4)
– As soon as the system is installed and the user connects a laptop Ethernet cable to the Technician port, the welcome panel will appear (same as on the SVC DH8)
– The init tool will not be displayed if there is a problem which prevents the system from clustering
• E.g. a node canister is in service state because of an error, or there is a stored system ID because the system was set up before and the user forgot to remove the ID (chenclosurevpd -resetclusterid)
– If there is a problem, the Service Assistant GUI will be shown, where the customer can log on and check the node canister's status

* If the user's laptop has DHCP configured (nearly all do), it will automatically configure itself and bring up the initialization screen
* If they do not have DHCP, they will need to set the IP of their Ethernet adapter to 192.168.0.2 – 192.168.0.20
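If initialization is blocked by a stored system ID, a minimal sketch of clearing it might look like the following; the satask prefix is an assumption and the exact invocation should be verified against the V7.3 service CLI documentation:

# Run against the node canister that still holds the old system ID (satask prefix assumed)
satask chenclosurevpd -resetclusterid
# Reconnect to the Technician port afterwards to restart initialization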
Hardware Compatibility within the Storwize family
Expansion enclosures
– The V7000 Gen2 expansion enclosures can only be used with a V7000 Gen2 control enclosure
– The V7000 Gen1 expansion enclosures can only be used with a V7000 Gen1 control enclosure
– The V3x00/V5000/SVC-DH8 and Flex System V7000 expansion enclosures cannot be used with a V7000 Gen2 control enclosure, and drives cannot be swapped between models either
• Note that Flex System V7000 will not support V7.3
Control enclosures
– V7000 Gen2 control enclosures can cluster with V7000 Gen1 control enclosures
– Allows for non-disruptive migration from Gen1 to Gen2, or long-term system growth
– No clustering between V7000 Gen2 and V3x00/V5000 or Flex System V7000
Remote copy
– No remote-copy restrictions, as we can replicate amongst any of the SVC/Storwize models
Virtualization
– Fibre Channel and FCoE external storage virtualization with the appropriate HBAs
– No SAS host support or SAS storage support with the 2076-524
File modules
– V7000 Unified will support V7000 Gen2 control enclosures when IFS 1.5 GAs
Agenda
Next Generation SAN Volume Controller Hardware
Next Generation Storwize V7000 Hardware
What’s new in V7.3 Software
Storwize Family Software Version 7.3
Required for the new Storwize V7000 model and the new SVC node model
– Existing Storwize V3700/5000/7000 and SVC nodes are supported too
Supports additional expansion for Storwize V3700 and Storwize V5000
– Both systems now support up to 9 expansion enclosures
Improved licensing model for Storwize V7000 and Storwize V5000
– SVC and Storwize V3700 licensing is unchanged
New cache design
Easy Tier v3
Storage Pool Balancing
Cache Re-Architecture
Why re-architect?
More scalable for the future
– Required for supporting beyond 8K volumes
– Required for supporting beyond 8-node clusters
– Required for 64-bit user addressing beyond 28 GB
• SVC code only uses 28 GB max today
– Required for larger memory sizes in nodes/canisters
– Required for more CPU cores
Reduces the number of I/Os copy services do directly to the back-end storage
Required for flush-less FlashCopy prepare
– Allows near-CDP-like capability
RtC benefits from the cache underneath
Algorithmic independence
– Allows changes to pre-fetch and destage algorithms without touching the rest of the cache
Improved debugging capability
Cache Architecture pre-V7.3.x
[Layered diagram: host I/O enters the front end; remote copy sits above the single cache; FlashCopy, volume mirror and thin provisioning/RtC sit below the cache, above virtualization, RAID 1/5/6/10 and the back end; forwarding layers (FWL) connect the stacks]
Cache Architecture V7.3.x
[Layered diagram: host I/O enters the front end; remote copy sits above the upper cache; FlashCopy, volume mirror and thin provisioning/RtC sit between the upper cache and the lower caches, which sit above virtualization, RAID 1/5/6/10 and the back end; forwarding layers (FWL) connect the stacks]
Upper Cache
Simple 2-way write cache between the node pair of the I/O group
– This is its primary function
• Receives the write
• Transfers it to the secondary node of the I/O group
• Destages to the lower cache
Very limited read cache
– Read caching is mainly provided by the lower cache
Same sub-millisecond response time
Partitioned the same way as the original cache
Lower Cache
Advanced 2-way write cache between the node pair of an I/O group
– Primary read cache
– Write caching for host I/O as well as advanced function I/O
Read/write caching sits beneath the copy services, vastly improving performance for FlashCopy, Thin Provisioning, RtC and Volume Mirroring
SVC Stretch Cluster – Old Cache Design
[Diagram: write data arrives at the preferred node at site 1 and is mirrored over the ISL to the cache of the non-preferred node at site 2; the cache then destages through volume mirroring to copy 1 (storage at site 1) and copy 2 (storage at site 2); data is replicated twice over the ISL]
SVC Enhanced Stretch Cluster – New Cache Design
[Diagram: write data with location arrives at the upper cache (UC) of the preferred node at site 1 and is mirrored once over the ISL to the non-preferred node at site 2, which replies with location; a token write data message with location passes to the lower caches (LC_1/LC_2), and each node destages its local copy: copy 1 (preferred) to storage at site 1, copy 2 (non-preferred) to storage at site 2; data is replicated once across the ISL]
Stretch Cluster – Old Cache with Compression at Both Sites
[Diagram: uncompressed write data is mirrored over the ISL between the caches (CA) of the preferred node at site 1 and the non-preferred node at site 2; compression (Cmp) sits below the cache, so each node destages MDisk-forwarded compressed write data to storage; data is replicated twice over the ISL: once uncompressed (cache mirror) and once compressed (destage to the remote site)]
Enhanced Stretch Cluster with Compression at Both Sites
[Diagram: uncompressed write data is mirrored over the ISL between the upper caches (UCA); RtC compresses below the upper cache, which changes the buffer location and invalidates the UCA location, so compressed write data for copy 1 and copy 2 is exchanged between the lower caches (LCA1/LCA2) before each node destages its local copy; data is replicated three times over the ISL: once uncompressed and twice compressed]
Other features/benefits of cache re-design
Read-only cache mode
– In addition to the read/write or none modes available today
Redesigned volume statistics that are backward compatible with TPC
Per-volume-copy statistics
– Enables drill-down on each of the 2 copies a mirrored volume has
Switch the preferred node of a volume easily and non-disruptively, with a simple command
– Prior to 7.3 you had to use NDVM to try to change the preferred node non-disruptively
– You had to change the I/O group and back again
– Available from the command line only (as of this writing)
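As a hedged illustration of the new read-only mode, the cache mode of a volume is changed with chvdisk; the volume name here is illustrative and the exact set of accepted values (readwrite, readonly, none) should be verified against the V7.3 CLI reference:

# Set the volume cache mode to the new read-only option (value name assumed)
svctask chvdisk -cache readonly vdisk0
# Confirm by checking the cache field of the volume
svcinfo lsvdisk vdisk0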
Changing preferred node in 7.3
In 7.3 the movevdisk command can be used to change the preferred node within the I/O group
– If no new I/O group is specified, the volume will stay in the same I/O group but will change to the specified preferred node
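A minimal sketch of the command, with an illustrative node and volume name (check the V7.3 CLI reference for the full parameter list):

# Change the preferred node of vdisk0 to node2; the volume stays in its I/O group
svctask movevdisk -node node2 vdisk0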
Upper Cache Allocation - Fixed
4GB V3700 – 128MB
All other platforms – 256MB
The rest of the cache is designated to the lower cache
Lower Cache Allocation - BFN
Attempts to use all cache left after the upper cache and other components have been initialized
32 GB – RtC not supported
– 28 GB for SVC (the extra 4 GB is used for the Linux kernel, etc.)
• 12 GB used for write cache, the rest for read
64 GB
– 28 GB for SVC (26 GB if compression is on)
• 12 GB for write cache, the remainder for read
– 36 GB for compression
Lower Cache Allocation – Next Gen V7000
Attempts to use all cache left after the upper cache and other components have been initialized
32 GB
– 4 GB for compression
– 28 GB for SVC
• 12 GB used for write cache, the rest for read
64 GB
– 28 GB for SVC
• 12 GB for write cache, the remainder for read
– 36 GB for compression
Software Upgrade to 7.3
Upgrade from 6.4.0 and onwards only
All volumes (VDisks) are cache-disabled from the beginning of the upgrade to the upgrade commit
– Not a big issue on the SVC, since the back-end arrays have cache
– More of a challenge on the V7000, V5000 and V3700, since all reads and writes will be going directly to the back end
• Choose a time of lower activity to upgrade
– Manual upgrade is supported
• Must use applysoftware -prepare
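A hedged sketch of the prepare step; the package file name is hypothetical and the exact flags should be confirmed in the V7.3 CLI reference:

# Stage the upgrade package and put the system into the prepared state
svctask applysoftware -prepare -file IBM2076_INSTALL_7.3.0.0   # hypothetical file name
# Monitor the upgrade state
svcinfo lssoftwareupgradestatus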
Easy Tier v3 and Automated Storage Pool Balancing
Easy Tier v3: Support for up to 3 Tiers
Support any combination of 1-3 tiers
Flash/SSD is always Tier 0, and only Flash/SSD can be Tier 0
Note that ENT is always Tier 1, but NL can be Tier 1 or Tier 2
– ENT is Enterprise 15K/10K SAS or FC; NL is NL-SAS 7.2K or SATA

Supported tier combinations:
Tier 0    Tier 1    Tier 2
SSD       ENT       NL
SSD       ENT       NONE
SSD       NL        NONE
NONE      ENT       NL
SSD       NONE      NONE
NONE      ENT       NONE
NONE      NONE      NL
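For externally virtualized MDisks, the tier can be assigned manually so Easy Tier places them correctly; a hedged sketch, assuming the V7.3 tier value names (ssd, enterprise, nearline) and an illustrative MDisk name:

# Tag an external MDisk as Enterprise so it lands in Tier 1 (tier value names assumed)
svctask chmdisk -tier enterprise mdisk5
# Verify the tier field
svcinfo lsmdisk mdisk5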
Easy Tier v3: Planning
Deploy flash and enterprise disk for performance
Grow capacity with low-cost disk
Moves data automatically between tiers
New volumes will use extents from Tier 1 initially
– If there is no free Tier 1 capacity, then Tier 2 will be used if available; otherwise capacity comes from Tier 0
Best to keep some free extents in the pool, and Easy Tier will attempt to keep some free per tier; a CLI check follows below
– Plan for one extent times the number of MDisks in Tiers 0 and 2, and sixteen extents times the number of MDisks in Tier 1
– E.g. 2 Tier-0, 10 Tier-1 and 20 Tier-2 MDisks gives 182 free extents in the pool
• (2*1) + (10*16) + (20*1) = 182
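The free-extent headroom can be sanity-checked from the CLI; a minimal sketch with an illustrative pool name (free_capacity divided by the pool extent size gives the free extent count):

# Inspect free_capacity and extent_size for the pool
svcinfo lsmdiskgrp Pool0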
[Diagram: active data migrates up to flash arrays; less active data migrates down to HDD arrays]
Easy Tier v3: Automated Storage Pool Balancing
Any storage medium has a performance threshold
– Performance threshold means that once IOPS on an MDisk exceed this threshold, I/O response time will increase significantly
Knowing the performance threshold we can:
– Avoid overloading MDisks by migrating extents
– Protect an upper tier's performance by demoting extents when the upper tier's MDisks are overloaded
– Balance workload within tiers based on utilization
– Use an XML file to record each MDisk's threshold and make intelligent migration decisions automatically
Easy Tier v3: Automated Storage Pool Balancing
XML files have stanzas for various drive classes, RAID types/widths and workload characteristics to determine MDisk thresholds
– For internal drives on Storwize systems, we know the drive types, so there are more specific stanzas for them
– For externally virtualized LUNs, we don't know what is behind them, so thresholds are based on the controller
Easy Tier v3: Automated Storage Pool Balancing

Configuration:
Drives: 24 x 300GB 15K RPM drives
MDisks: 3 x RAID-5 arrays (total MDisk size 5.44TB)
Volumes: Vol_0, Vol_1, Vol_2, Vol_3, each 32GB (total volume size 128GB)
Comments: all volumes are created on MDisk0 initially

Performance is improved by balancing the workload across all 3 MDisks

Provided as basic storage functionality, no requirement for an Easy Tier license
Easy Tier v3: STAT Tool
Provides recommendations on adding additional tier capacity and the performance impact
– Tier 0: Flash
– Tier 1: "Enterprise" disk (15K and 10K)
– Tier 2: Near-line disk (7.2K)
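A hedged sketch of running the tool; the heat-file name, its /dumps location on the config node, and the STAT.exe invocation are assumptions about the usual STAT workflow, not details from this deck:

# Copy the Easy Tier heat file off the config node, then feed it to STAT
pscp -unsafe superuser@cluster_ip:/dumps/dpa_heat.node1.data .
STAT.exe dpa_heat.node1.data   # hypothetical file name; produces the report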
Easy Tier v3: Workload Skew Curve

Generate the skew report of the workload
The workload skew report can be read directly by Disk Magic
Easy Tier v3: Workload Categorization

[Chart: number of extents per pool and tier, categorized as Active, ActiveLG (active large), Low, Inactive and Unallocated]
Easy Tier v3: Data Movement Daily Report

Generate a daily (24-hour) CSV-formatted report of Easy Tier data movements
Miscellaneous Enhancements
User Controlled GUI for Advanced Storage Pool Settings
With the introduction of the new GUI for Storwize V7000 in 2010, we hid the "Advanced Pool Settings" from users to simplify things, while presenting this option in the SVC GUI in V6.1
These settings allow the choice of the extent size for the pool and the capacity warning threshold for the pool
– The goal was for Storwize users not to have to understand extent sizes
– SVC users were used to these options and continued to see them via the "Advanced Pool Settings"
In V7.1 we introduced 4TB NL-SAS drives and had to change the default extent size from 256MB to 1GB, to address a limitation on the number of extents required to build a default RAID-6 10+P+Q array in the Storwize family of systems
– This change was a concern for some customers who wanted to keep the pool extent size consistent at 256MB for volume migration, etc., which in turn caused a late change in V7.2 that provides the ability in the GUI to configure the extent size for a storage pool at creation
This V7.3 change allows GUI users on all products to set extent sizes if they so desire
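For reference, the extent size has always been settable at pool creation from the CLI; a minimal sketch with an illustrative pool name:

# Create a pool with a 256MB extent size instead of the 1GB default
svctask mkmdiskgrp -name Pool0 -ext 256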
Modify RAID Sparing Behaviour
The chain balancing rules are changed in V7.3
– Many insufficient-spare errors and "unbalanced" errors will autofix on upgrade, as a wrong chain no longer unbalances the configuration
The chain-balanced presets are now more easily trackable and will continue to demand spares on the correct chain
The member goal view will highlight when a chain-balanced array has a member on the wrong chain
Restrictions and Limitations Update
Storwize V3500/3700 with V7.3 installed will support up to nine expansion enclosures per control enclosure
– V3700: Single SAS chain includes the control enclosure and up to nine expansion enclosures
– V3x00: Drive limit is 240 SFF or 120 LFF disks
Storwize V5000 with V7.3 installed will support up to nine expansion enclosures per control enclosure
– V5000: Dual SAS chains with the control enclosure and up to four expansion enclosures on one and up to five expansion enclosures on the other (same as today's V7000)
– V5000: Up to 20 expansion enclosures per clustered system
– V5000: Drive limit is 240 SFF or 120 LFF disks per I/O group, or 480 SFF or 240 LFF disks for a clustered system
For Real-time Compression pre-V7.3, SVC and Storwize V7000 systems have a limit of 200 compressed volumes per I/O group
– SVC DH8 with a second CPU, the extra 32GB of memory and both compression accelerator cards installed in each node will support 512 compressed volumes per I/O group
– The jury is still out on whether the new Storwize V7000 with the extra 32GB of memory and the second compression accelerator card in each node canister will allow for more than 200 compressed volumes per I/O group
• We won't know until testing is completed and V7.3 and the new hardware GA on June 6th
• Info on the status of this change will be posted on the support matrix under "Restrictions and Limitations", where we list the maximums for various functions of the system