VSI Professional Services Alliance: XDelta
Moving to VSI OpenVMS V8.4-2 and beyond
Colin Butcher CEng FBCS CITP, Technical director, XDelta Limited (www.xdelta.co.uk)
Presented 16 June 2016


  • Copyright © XDelta Limited, 2016

    VSI Professional Services Alliance: XDelta

    Moving to VSI OpenVMS V8.4-2 and beyond

    Colin Butcher CEng FBCS CITP

    Technical director, XDelta Limited www.xdelta.co.uk

    Welcome

  • Copyright © XDelta Limited, 2016

    XDelta – who we are

    • VSI Professional Services Alliance member

    • Independent consultants since 1996:
      • UK based with international reach
      • Over 30 years' experience with OpenVMS
      • Business-critical systems
      • Cross-sector experience
      • Engineering background

    • Gartner (2009) identified XDelta as one of the few companies world-wide capable of OpenVMS migration-related projects

  • Copyright © XDelta Limited, 2016

    • Why move to VSI OpenVMS ?

    Part 1

  • Copyright © XDelta Limited, 2016

    • Need supported systems to run business

    • Need minimal risk of disruption to business

    • Complexity and difficulty of moving applications and data can make it almost impossible to move elsewhere

    • Cost of redesign or replacement can be prohibitive

    • Want to reduce long-term cost of ownership

    VSI OpenVMS – evolution not revolution

  • Copyright © XDelta Limited, 2016

    • Most OpenVMS systems are bespoke:
      • Tight integration with operating system and infrastructure

    • Well engineered operating system:
      • Well structured and documented
      • Nothing better for multi-site mission-critical capabilities
      • Scales very well from small to large implementations

    • Distinct culture:
      • People who like to understand things
      • People who like to do things properly

    VSI OpenVMS – mission-critical platform

  • Copyright © XDelta Limited, 2016

    • Multi-site disaster tolerance

    • Scale-up with big blade hardware

    • Scale-out with “shared everything” OpenVMS clusters

    • Low IO latency, high IO throughput

    • Secure object protection, alerting and auditing

    Availability, performance, security

  • Copyright © XDelta Limited, 2016

    • HP Integrity Servers:
      • Selected earlier generation servers (rx2660, etc.)
      • Previous generation -i2 “Tukwila” family
      • Current generation -i4 “Poulson” family
      • Next generation “Kittson” family in progress

    • x86-64 port in progress

    • HP Alpha:

    • “evaluation kit” recent release (April 2016)

    VSI OpenVMS – supported platforms

  • Copyright © XDelta Limited, 2016

    • Imagine 96 bl890c-i4 servers in a cluster:
      • 6,144 (96 * 64) cores all running at 2.53 GHz
      • 144 TB (96 * 1.5 TB) physical memory
      • All 10GigE networking
      • All 8GigFC fibrechannel storage
      • All flash storage arrays

    • When OpenVMS on x86-64 arrives, what will the hardware be capable of ?

    VSI OpenVMS – scale-up and scale-out
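The aggregate arithmetic on that hypothetical cluster can be checked with DCL integer symbols (a sketch; the node count and per-node figures are the slide's own):

```
$! Sanity-check the aggregate figures for 96 bl890c-i4 nodes
$ nodes = 96
$ cores = nodes * 64                  ! 64 cores per bl890c-i4 -> 6144
$ memory_tb = (nodes * 3) / 2         ! 1.5 TB per node -> 144 TB
$ WRITE SYS$OUTPUT "Cores: ''cores', memory: ''memory_tb' TB"
```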

  • Copyright © XDelta Limited, 2016

    Platform migration – why move ?

    [Chart: platform type / scale plotted against migration urgency, covering small VAX (eg: microVAX), mid-range VAX (eg: VAX 4000), large VAX (eg: VAX 7000), small Alpha (eg: DS25), mid-range Alpha (eg: ES45), large Alpha (eg: GS1280), small IA64 (eg: rx2660, rx2800-i2), mid-range IA64 (eg: BL870c-i2), large IA64 (eg: BL890c-i2) and HP VM]

  • Copyright © XDelta Limited, 2016

    • VSI OpenVMS V8.4-2 on Integrity Servers

    Part 2

  • Copyright © XDelta Limited, 2016

    • “Poulson” 2.53 GHz 8 core processor with shared L3 cache

    • Re-architected CPU design:
      • Around 30 % per core greater throughput
      • Reduced NUMA effects for same core count
      • Better memory latency and bandwidth
      • Improved floating point and integer performance
      • Improved hyperthreading – less stall time

    • bl870c-i4 (32 cores) about 1.3x better than bl890c-i2

    HPE Integrity -i4 servers – highlights

  • Copyright © XDelta Limited, 2016

    • bl860c-i4: single blade, 16 cores, 384 GB, 4x 10GigE, 3x mezz, 1C2D SAS

    • bl870c-i4: dual blade, 32 cores, 768 GB, 8x 10GigE, 6x mezz, 2C4D SAS

    • bl890c-i4: quad blade, 64 cores, 1.5 TB, 16x 10GigE, 12x mezz, 4C8D SAS

    • rx2800-i4: 2U rack, 16 cores, 384 GB, 4x 1GigE, 6x PCIe gen2, 1C8D SAS

    HPE Integrity -i4 servers – hardware

  • Copyright © XDelta Limited, 2016

    • Re-architected CPU, not just an updated design:
      • Higher clock rate (now 2.53 GHz, was 1.73 GHz)
      • “Out of order” instruction execution
      • Minimal stall time between co-threads with hyperthreading
      • Double the core count (8 cores)
      • Greater memory capacity
      • Reduced memory latency
      • Shared on-chip cache

    • New 10GigE LoM - LAN only, not FCoE

    • Same 8GigFC mezzanine cards

    Server hardware differences (-i2 to -i4)

  • Copyright © XDelta Limited, 2016

    • Virtual Connect (1GigE, 10GigE, 8GigFC):
      • Flex10
      • LAN side of FlexFabric

    • 10GigE chassis based switching
    • 10GigE passthrough
    • 1GigE passthrough

    • 8GigFC chassis based switching
    • 4GigFC passthrough

    Chassis hardware – c7000 / c3000

  • Copyright © XDelta Limited, 2016

    • 3PAR storage arrays at 8GigFC

    • SSD devices for local storage and 3PAR storage arrays

    • 8GigFC SAN - HPE / Brocade, Cisco

    • 10GigE networking - HPE Procurve, Cisco, etc.

    Infrastructure hardware

  • Copyright © XDelta Limited, 2016

    • Complete build of base system from sources (as V8.4-1H1)

    • -i4 server support (64 cores supported, threads off)

    • New LAN driver for LoM support

    • VSI branding

    • Patch kits and OS support available via HPE, or from VSI

    VSI OpenVMS V8.4-2 on -i4 servers

  • Copyright © XDelta Limited, 2016

    CPU architecture – Intel 9500 – “Poulson”

  • Copyright © XDelta Limited, 2016

    System architecture – rx2800-i4

  • Copyright © XDelta Limited, 2016

    Scalable blade architecture – bl8x0c-i4

  • Copyright © XDelta Limited, 2016

    QPI fabric – bl870c-i4 and bl890c-i4

  • Copyright © XDelta Limited, 2016

    • CPU 00 is the primary CPU - try to reduce its workload

    • Fastpath CPU selection - be aware of physical layout

    • CPU choice for dedicated lock manager

    • CPU choice for TCPIP (BG device)

    • CPU choice for PEdriver

    • Consider physical layout - RADs and NUMA

    High core count
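A sketch of how some of those placement choices are expressed in DCL and SYSGEN. The device name (FGA0) and CPU ids here are hypothetical, and parameter semantics should be checked with HELP SYS_PARAMETERS before use:

```
$! Move Fast Path work for a fibrechannel port off the primary CPU.
$! FGA0 and the CPU ids are hypothetical - choose devices and CPUs
$! to match the physical (RAD / NUMA) layout of your own system.
$ SET DEVICE FGA0 /PREFERRED_CPUS=2
$ SHOW DEVICE FGA0 /FULL              ! confirm the preferred CPU
$!
$! Dedicated lock manager CPU - set via SYSGEN, then reboot (better:
$! put the values in MODPARAMS.DAT and run AUTOGEN).
$ MCR SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SET LCKMGR_MODE 1             ! see HELP SYS_PARAMETERS LCKMGR_MODE
SYSGEN> SET LCKMGR_CPUID 4            ! bind the lock manager to CPU 4
SYSGEN> WRITE CURRENT
```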

  • Copyright © XDelta Limited, 2016

    • Hyperthreading is extremely workload dependent

    • In general the OpenVMS scheduler does a better job

    • Enable / disable hyperthreading with “cpuconfig” at EFI shell (requires reboot)

    • “CPU” count will appear to double when enabled
      • V8.4-2 supports 64 “scheduling units”
      • V8.4-1H1 supports 32 “scheduling units”

    Hyperthreading
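Toggling co-threads is done from the EFI shell before boot; a sketch (check the exact option spelling with “help cpuconfig” on your console):

```
Shell> cpuconfig threads            # show the current co-thread state
Shell> cpuconfig threads on         # enable hyperthreading (or "off")
Shell> reset                        # change takes effect after reboot
```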

  • Copyright © XDelta Limited, 2016

    • OpenVMS uses large shared memory regions:
      • XFC (50 % available memory by default)
      • RMS global buffers
      • Global sections (especially database caches)
      • Memory disc driver (MD devices)

    • Set memory interleave behaviour with “memconfig” at EFI shell (requires reboot)

    • Useful starting point for OpenVMS is “mostly UMA”

    NUMA (non-uniform memory access)
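Interleave behaviour is likewise set from the EFI shell; a sketch in which the option name is an assumption - check “help memconfig” on your console for the exact spelling of the “mostly UMA” setting:

```
Shell> memconfig                    # show the current memory interleave mode
Shell> memconfig -mi mostly_uma     # assumed option name for "mostly UMA"
Shell> reset                        # reboot required for the change to apply
```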

  • Copyright © XDelta Limited, 2016

    Memory architecture – bl890c-i4

  • Preliminary Performance Results: Integrity -i2 vs. -i4

    • The following slides contain preliminary data on performance differences between selected i2 and i4 servers running OpenVMS V8.4-1H1.

    • The data was generated from VSI-written programs used to measure certain aspects of system performance.

    • The results shown here should not be used as a general characterization of overall system performance or as an indication of how any specific application may perform.

  • [Chart: i2 vs. i4 Memory Bandwidth (MB/sec, more is good) for BL860c i2 (1.47 GHz), BL860c i4 (2.4 GHz), RX2800 i2 (1.73 GHz) and RX2800 i4 (2.67 GHz); the i2 results were roughly 2070 and 2968 MB/sec, the i4 results roughly 4832 and 5045 MB/sec]

    i2 vs. i4 Memory Bandwidth

  • [Chart: i2 vs. i4 Memory Latency (ns, less is good) for the same four servers; the i2 results were roughly 237 and 243 ns, the i4 results roughly 108 and 196 ns]

    i2 vs. i4 Memory Latency

  • i2 vs. i4 Integer Performance

    [Chart: i2 vs. i4 Integer Performance (relative, more is good); with the i2 servers normalised to 1.00, the i4 results fell in the 1.17 to 1.49 range]

  • [Chart: i2 vs. i4 Floating Point Performance (relative, more is good); with the i2 servers normalised to 1.00, the i4 results fell in the 1.14 to 1.57 range]

    i2 vs. i4 Floating Point Performance

  • Copyright © XDelta Limited, 2016

    • Without good data you cannot do good performance work

    • Avoid guesswork - run T4 all the time
      • If needed, use T4 “expert mode” and SDA extensions

    • Don’t make it go faster, stop it going slower:
      • The fastest IO is the IO you don’t do
      • The fastest code is the code you don’t execute
      • The idle loop is anything but idle
      • A faster machine just waits more quickly!

    Performance engineering – use T4

  • Copyright © XDelta Limited, 2016

    • Disable devices you don’t use:
      SYSMAN IO SET EXCLUDE=(EWC,EWD,…)

    • Experiment with memory interleave setting (memconfig)

    • Use memory reservations

    • Fastpath settings for device types

    • Dedicated CPU for TCPIP + LCKMGR

    • Experiment with hyperthreading (cpuconfig)

    Tips – VSI OpenVMS on –i2 and -i4 servers

  • Copyright © XDelta Limited, 2016

    • VSI OpenVMS V8.4-2 on Alpha (evaluation kit, April 2016)

    Part 3

  • Copyright © XDelta Limited, 2016

    “Now that we have released two new versions of VSI OpenVMS on the HPE Integrity hardware platform, we want to bring similar upgrades to Alpha hardware users,” said Duane P. Harris, CEO of VMS Software. “This evaluation kit will solicit feedback from Alpha users, and help us plan for a future Alpha production release.”

    The Alpha Evaluation Kit is meant to reach a wide user audience. Their participation and input will be one of the key factors in deciding if VSI pursues OpenVMS releases for the Alpha platform. If VSI does proceed with future development, the Alpha release will follow the traditional Field Tests and release cycles.

    VSI announce Alpha Evaluation Kit

  • Copyright © XDelta Limited, 2016

    • Released April 2016, valid to September 2016
    • Not for production use
    • No support
    • Base OS and network layers only - no layered products
    • Only by upgrade from existing system

    • Is there enough interest ?
    • How might it be put to use ?
    • What else would be needed ?
    • Please get in touch - we need your input

    VSI OpenVMS Alpha V8.4-2 Evaluation Kit

  • Copyright © XDelta Limited, 2016

    • Extended support past HP deadline for organisations still reliant on Alpha, except VSI are not permitted to support non-VSI earlier versions (as with OpenVMS for Integrity)

    • Supports both emulated hardware and physical hardware

    • Emulated systems can be used in a virtual environment in limited circumstances

    • Get current on Alpha before migrating to Integrity

    • Buy time to complete migrations to Integrity

    • Continue with Alpha until OpenVMS on x86-64 is available as a “production ready” system

    • “Rights to new versions” is part of VSI licensing strategy

    Potential ways forward with Alpha

  • Copyright © XDelta Limited, 2016

    • Becomes easy to put off moving to Integrity, which creates a bigger challenge later when moving to x86-64

    • Emulators are becoming more capable, but big system performance will be hard to deliver from an emulator

    • Hardware support (especially storage) becomes more difficult and is likely to be expensive

    • Limited hardware capabilities of Alpha platforms:
      • 1Gbps or 2Gbps fibrechannel (KGPSA)
      • 1Gbps ethernet (DEGPA)

    • Could new storage hardware be supported ? Could it be supported as a boot device ?

    Some issues around staying with Alpha

  • Copyright © XDelta Limited, 2016

    • If Alpha is of interest, get in touch and try the evaluation kit in a safe offline environment

    • Understand the support issues and regulatory / legal implications if you choose to stay with old hardware

    • Consider the surrounding infrastructure as well, especially storage subsystems

    • Migrating to Integrity now will help you migrate to x86-64 when it becomes “production ready”

    Alpha – next steps

  • Copyright © XDelta Limited, 2016

    • Benefits of moving to VSI OpenVMS

    Part 4

  • Copyright © XDelta Limited, 2016

    • Significant step up from HP Integrity -i2 servers:
      • rx2800-i4 delivers more than 2x rx2800-i2 in same space
      • bl860c-i4 delivers more than bl870c-i2 in half space
      • bl870c-i4 delivers more than bl890c-i2 in half space
      • bl890c-i4 delivers more than 2x bl890c-i2 in same space

    • Per-core costs reduced

    • Significant step up from AlphaServer GS1280, with modern storage and network infrastructure

    VSI OpenVMS V8.4-2 on Integrity

  • Copyright © XDelta Limited, 2016

    • bl870c-i4 and bl890c-i4 are good for GS1280 migration

    • Multi-core processors, NUMA, hyperthreading

    • 10GigE network, 8GigFC SAN

    • Smaller footprint, lower power, less cooling

    • Blade chassis connectivity for bl8x0c-i4

    • EVA to 3PAR storage migration

    Migrating from Alpha to Integrity

  • Copyright © XDelta Limited, 2016

    • Future OpenVMS releases from VSI will bring new features and prepare the way for transition to OpenVMS on x86-64

    • Security patches, bugfix patches, patch rollup updates, new versions of OpenVMS and layered products

    • Support by VSI for VSI releases of OpenVMS

    • Support beyond end 2020

    VSI OpenVMS V8.4-2 and beyond

  • Copyright © XDelta Limited, 2016

    • Evolution is lower risk than revolution

    • Code compatibility and data portability

    • Retain best of existing investment

    • Phased approach over time, not “big bang” all at once

    • Stay within support window

    Reduced risk and lower cost

  • Copyright © XDelta Limited, 2016

    • Buy with HPE part numbers via HPE, or from VSI

    • Available through HPE channel partners / resellers

    • No need for split procurement of hardware and software

    • Support via HPE, or from VSI

    It’s easy – the choice is yours

  • Copyright © XDelta Limited, 2016

    VSI Professional Services Alliance: XDelta

    Moving to VSI OpenVMS V8.4-2 and beyond

    Colin Butcher CEng FBCS CITP

    Technical director, XDelta Limited www.xdelta.co.uk

    Thank you
