F5 Acopia ARX Product Demonstration Troy Alexander Field Systems Engineer


Page 1: F5 Acopia  ARX Product Demonstration

F5 Acopia ARX Product Demonstration

Troy Alexander, Field Systems Engineer

Page 2: F5 Acopia  ARX Product Demonstration


Agenda

Acopia Technology Introduction

Product Demonstration

Page 3: F5 Acopia  ARX Product Demonstration


Introducing F5’s Acopia Technology

Page 4: F5 Acopia  ARX Product Demonstration

The Constraints of Today’s Infrastructure

– Complex: mixed vendors, platforms, file systems
– Inflexible: access is tightly coupled to file location; moving data is disruptive
– Inefficient: resources are both under- and over-utilized
– Growing rapidly: 70% annually (80% of it files)

The cost of managing storage is five to ten times the acquisition cost.

Page 5: F5 Acopia  ARX Product Demonstration

Virtualization Breaks the Constraints

– Simplified access: consolidated, persistent access points
– Flexibility: data location is not bound to physical resources
– Optimized utilization: balances load across shared resources
– Leverages technology: freedom to choose the most appropriate file storage

“File virtualization is the hottest new storage technology in plan today…” (TheInfoPro)

Page 6: F5 Acopia  ARX Product Demonstration

Where Does Acopia Fit?

– Plugs into existing IP / Ethernet switches
– Virtualizes heterogeneous file storage devices that present file systems via NFS and/or CIFS (NAS, file servers, gateways)
– Does not connect directly to SAN; can manage SAN data presented through a gateway or server
– No changes to existing infrastructure: the ARX appears as a normal NAS device to clients, and as a normal CIFS or NFS client to storage
– SAN virtualization manages blocks, Acopia manages files: data management vs. storage management

[Diagram: users and applications on the LAN; Acopia file virtualization in front of NAS and file servers (IBM, HP, EMC, HDS, NetApp); SAN virtualization managing blocks on the SAN.]

Page 7: F5 Acopia  ARX Product Demonstration

What does F5’s Acopia do?

• Automates common storage management tasks
– Migration
– Storage tiering
– Load balancing
• These tasks now take place without affecting access to the file data or requiring client re-configuration

During the demo, the F5 Acopia ARX will virtualize a multi-protocol (NFS & CIFS) environment. F5 Acopia provides the same functionality for NFS, CIFS and multi-protocol.

Page 8: F5 Acopia  ARX Product Demonstration

What are F5 Acopia’s differentiators?

– Purpose-built to meet the challenges of global file management: separate data (I/O) and control planes with dedicated resources; enterprise scale (>2B files, 24 Gbps in a single switch)
– Real-time management of live data: unique dynamic load balancing; unique in-line file placement (files are not placed on the wrong share and then migrated after the fact); no reliance on stubs or redirection
– Superior reliability, availability and supportability: integrity of in-flight operations ensured with redundant NVRAM; enterprise server-class redundancy; comprehensive logging, reporting, SNMP, call-home, port mirroring
– Proven in Fortune 1000 enterprises: Merrill Lynch, Bear Stearns, Yahoo, Warner Music, Toshiba, United Rentals, Novartis, Raytheon, The Hartford, Dreamworks, etc.

Page 9: F5 Acopia  ARX Product Demonstration

What is the F5 Acopia architecture?

– A patented tiered architecture separates the data and control paths: the data path handles non-metadata operations at wire speed; the control path handles operations that affect metadata and migration
– Each path has dedicated processing and memory resources and can scale independently, giving unique scale and availability
– PC-based appliances are inadequate: a single PCI bus, processor and shared memory

[Diagram: clients and NAS / file servers connected through the Adaptive Resource Switch, with a data path (fast path), a control path, and local plus remote (mirrored) transaction logs.]

Page 10: F5 Acopia  ARX Product Demonstration

How does F5 Acopia virtualization work?

A virtual IP fronts the “Virtual Volume Manager”, which routes each virtual path to its physical path. Applications and users see only the virtual volume; behind it sit the NetApp, EMC and NAS volumes.

Virtual path: arx:/eng/project1/spec.doc
Physical path: na:/vol/vol2/project1/spec.doc
(ILM operation 1)
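The routing idea on this slide can be sketched as a lookup table that maps a stable virtual path to a movable physical location. This is a minimal illustrative sketch, not Acopia's implementation; the class and method names are invented.

```python
# Minimal sketch of a "virtual volume manager": clients use the virtual
# path, while ILM operations are free to change the physical location.
class VirtualVolumeManager:
    def __init__(self):
        # virtual path -> (backend, physical path)
        self._table = {}

    def place(self, virtual_path, backend, physical_path):
        self._table[virtual_path] = (backend, physical_path)

    def resolve(self, virtual_path):
        # Route a client request to the current physical location.
        return self._table[virtual_path]

    def migrate(self, virtual_path, new_backend, new_physical_path):
        # An ILM operation moves the file; the virtual path never changes.
        self._table[virtual_path] = (new_backend, new_physical_path)

vvm = VirtualVolumeManager()
vvm.place("arx:/eng/project1/spec.doc", "na", "/vol/vol2/project1/spec.doc")
print(vvm.resolve("arx:/eng/project1/spec.doc"))
# ('na', '/vol/vol2/project1/spec.doc')
vvm.migrate("arx:/eng/project1/spec.doc", "emc", "/vol/vol2/project1/spec.doc")
print(vvm.resolve("arx:/eng/project1/spec.doc"))
# ('emc', '/vol/vol2/project1/spec.doc') -- client-visible path unchanged
```

This is the property the next two slides illustrate: each ILM operation rewrites only the right-hand side of the mapping.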

Page 11: F5 Acopia  ARX Product Demonstration

How does F5 Acopia virtualization work? (continued)

After ILM operation 1 moves the file, the virtual path does not change:

Virtual path: arx:/eng/project1/spec.doc
New physical path: emc:/vol/vol2/project1/spec.doc

Page 12: F5 Acopia  ARX Product Demonstration

How does F5 Acopia virtualization work? (continued)

After ILM operation 2, the virtual path still does not change:

Virtual path: arx:/eng/project1/spec.doc
New physical path: nas:/vol/vol1/project1/spec.doc

Page 13: F5 Acopia  ARX Product Demonstration

F5’s Acopia virtualization layers

[Diagram: users and application servers mount the Presentation Namespace (PNS) exported by the Adaptive Resource Switch (ARX); the Virtual Volume Manager maps the namespace to attach points on the back-end NAS and file servers.]

Page 14: F5 Acopia  ARX Product Demonstration

F5’s Acopia architecture

[Diagram: the control plane (ASM, SCM) and data plane (NSM, fast path), plus the management interface, NVRAM, RAID drives, and temperature / fan / power sensors.]

Data plane (fast path):
• Wire speed, low latency
• Non-metadata operations, e.g. file read / write
• In-line policy enforcement

Control plane:
• High-performance SMP architecture
• Metadata services
• Policy

Page 15: F5 Acopia  ARX Product Demonstration

ARX Architecture Differentiators

– A network switch purpose-built to meet the challenges of global file management: real-time management of live data
– A three-tiered architecture provides superior scale and reliability: separate I/O, control and management planes; dedicated resources for each plane; each plane can be scaled independently
– PC-based appliances are inadequate: a single PCI bus, processor and shared memory is a bottleneck; file I/O, policy and management all share the same resources; single points of failure
– Data integrity and reliability: integrity of in-flight operations ensured with redundant NVRAM; external metadata and the ability to repair and reconstruct metadata

Page 16: F5 Acopia  ARX Product Demonstration

How does Acopia work?

– The ARX acts as a proxy for all file servers / NAS devices: it resides logically in-line and uses virtual IP addresses to proxy the back-end devices
– Proxies NFS and CIFS traffic
– Provides virtual-to-physical mapping of the file systems: managed volumes are configured and imported; presentation volumes are configured

Page 17: F5 Acopia  ARX Product Demonstration

What is File Routing or Metadata?

– Metadata is stored on a highly available filer
– No proprietary data is stored in the metadata; the metadata can be completely rebuilt (100% disposable)
– The ARX ensures the integrity of in-flight operations: if the ARX loses power or is reset, the NVRAM holds a list of the outstanding transactions; when the ARX boots back up, before it services any user requests, it validates all pending transactions in the NVRAM and takes the appropriate action, ensuring transactions are performed in the correct order
– The ARX provides tools to detect / repair / rebuild metadata
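The recovery behavior described above follows the classic write-ahead-log pattern. The sketch below is a hypothetical file-backed model of that idea (not the ARX's NVRAM format): operations are journaled before execution, and on restart anything journaled but unacknowledged is surfaced, in order, before clients are served.

```python
# Hypothetical write-ahead log: journal each operation, ack on completion,
# and on restart replay the log to find outstanding (un-acked) work.
import json
import os
import tempfile

class TransactionLog:
    def __init__(self, path):
        self.path = path

    def append(self, txn_id, op):
        # Journal the operation BEFORE performing it.
        with open(self.path, "a") as f:
            f.write(json.dumps({"id": txn_id, "op": op}) + "\n")

    def ack(self, txn_id):
        # Mark a transaction complete with an ack record.
        with open(self.path, "a") as f:
            f.write(json.dumps({"id": txn_id, "ack": True}) + "\n")

    def pending(self):
        # Scan the log in order; anything without an ack is outstanding
        # and must be validated/replayed before serving user requests.
        acked, ops = set(), {}
        if os.path.exists(self.path):
            with open(self.path) as f:
                for line in f:
                    rec = json.loads(line)
                    if rec.get("ack"):
                        acked.add(rec["id"])
                    else:
                        ops[rec["id"]] = rec["op"]
        return [ops[i] for i in sorted(ops) if i not in acked]

log = TransactionLog(os.path.join(tempfile.mkdtemp(), "nvram.log"))
log.append(1, "migrate spec.doc")
log.append(2, "rename old dir")
log.ack(1)
print(log.pending())  # ['rename old dir'] -- replayed before serving clients
```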

Page 18: F5 Acopia  ARX Product Demonstration

Filesets and Placement Policy Rules

– Placement rules migrate files
– Rules use filesets as sources: filesets supply the matching criteria for policy rules, and can match files based on age, size or “name”
  • Age: groups files by last-accessed or last-modified date / time
  • “Name”: matches any portion of a file’s name using simple criteria (similar to DOS/Unix, e.g. *.ppt) or POSIX-compliant regular expressions (e.g. [a-z]*\.txt)
– Filesets can be combined to form unions or intersections
– Placement rules can target a specific share or a share farm
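The fileset concept above can be illustrated with a few predicate functions: match by wildcard name or regular expression, match by age, and combine matchers as unions or intersections. This is an illustrative sketch; the function names and the 365-day threshold are invented, not ARX configuration syntax.

```python
# Sketch of filesets as composable predicates over file records.
import fnmatch
import re
import time

def name_fileset(pattern, regex=False):
    # DOS/Unix-style wildcard (e.g. "*.ppt") or a POSIX-style regex.
    if regex:
        return lambda f: re.fullmatch(pattern, f["name"]) is not None
    return lambda f: fnmatch.fnmatch(f["name"], pattern)

def age_fileset(older_than_days, now=None):
    # Groups files by last-modified time.
    now = now or time.time()
    return lambda f: (now - f["mtime"]) > older_than_days * 86400

def union(*sets):
    return lambda f: any(s(f) for s in sets)

def intersection(*sets):
    return lambda f: all(s(f) for s in sets)

now = time.time()
files = [
    {"name": "deck.ppt",  "mtime": now - 400 * 86400},   # old presentation
    {"name": "notes.txt", "mtime": now - 10 * 86400},    # recent text file
]
# Fileset: presentations not modified in the last year.
old_ppt = intersection(name_fileset("*.ppt"), age_fileset(365, now))
print([f["name"] for f in files if old_ppt(f)])  # ['deck.ppt']
```

A placement rule would then take such a fileset as its source and a share or share farm as its target.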

Page 19: F5 Acopia  ARX Product Demonstration

What are Acopia’s Policy Differentiators?

– Load balancing is unique to Acopia: no other virtualization device can do in-line placement and real-time load balancing
– Multi-protocol, multi-vendor migration is unique to Acopia
– The ability to tier storage without requiring stubs is unique to Acopia
– In-line policy enforcement is unique to Acopia: competitive solutions require expensive “treewalks” to determine what to move / replicate
– The flexibility and scale of the migration / replication capability is unique to Acopia: from an individual file / fileset to an entire virtual volume

Page 20: F5 Acopia  ARX Product Demonstration

High Availability Overview

– ARXs are typically deployed in a redundant pair
– The primary ARX keeps synchronized state with the secondary ARX: in-flight transactions (NVRAM), global configuration, Network Lock Manager (NLM) clients, Duplicate Request Cache (DRC)
– The ARX monitors resources to determine failover criteria: the operator can optionally define certain resources as “critical” to be considered in the failover decision, e.g. default gateway, a critical share, etc.
– The ARX does not store any user data on the switches

Page 21: F5 Acopia  ARX Product Demonstration

How to Deploy Acopia in the Network

– The ARX HA subsystem uses a three-way voting system to avoid “split brain” scenarios: “split brain” is a situation where a loss of communication causes both devices in an HA pair to service traffic requests, which can result in data corruption
– Heartbeats are exchanged between the primary switch, the standby switch, and the quorum disk
  • The quorum disk is a share on a server/filer

[Diagram: client traffic flows through workgroup switches, distribution switches and core routers / layer-3 switches to the Acopia HA pair (Switch A and Switch B), which fronts the NAS and file servers and the quorum disk.]
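The three-way vote can be modeled as a simple majority rule over the three parties (primary, standby, quorum disk). This toy model is my own illustration of why a third vote prevents split brain; it is not ARX code.

```python
# Toy quorum model: a node may go/stay active only if it can reach a
# majority of the voting parties (counting itself). With 3 votes the
# majority is 2, so at most one side of a partition can ever be active.
def may_serve(reachable_votes, total_votes=3):
    return reachable_votes >= total_votes // 2 + 1

# Healthy pair: the primary sees itself, the standby, and the quorum disk.
print(may_serve(3))  # True
# Partition: the standby is cut off from both its peer and the quorum
# disk, so it holds only its own vote.
print(may_serve(1))  # False -> stays passive, no split brain
# Peer down but quorum disk reachable: 2 of 3 votes, keep serving.
print(may_serve(2))  # True
```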

Page 22: F5 Acopia  ARX Product Demonstration


The Demo Topology

Page 23: F5 Acopia  ARX Product Demonstration


Acopia Demo – What Will be Shown

Data Migration

Tiering

Load Balancing (Share Farm)

Inline policy and file placement by name

Shadow Volume Replication

Page 24: F5 Acopia  ARX Product Demonstration


Data Migration

Page 25: F5 Acopia  ARX Product Demonstration

Usage Scenario: Data Migration

Movement of files between heterogeneous file servers.

Drivers: lease rollover, vendor switch, platform upgrades, NAS consolidation.

Benefits:
– Reduce outages and business disruption
– Faster migrations
– Lower operational overhead: no client reconfiguration, automated

Page 26: F5 Acopia  ARX Product Demonstration

Data Migration with Acopia

Solution:
– Transparent migration at any time
– Paths and embedded links are preserved
– File-level granularity, without links or stubs
– NFS and CIFS support across multiple vendors
– Scheduled policies to automate data migration
– CIFS Local Group translation
– CIFS share replication
– Optional data retention on source
– IBM uses ARX for its data migration services

Benefits:
– Reduce outages and business disruption
– Lower operational overhead: no client reconfiguration; decommissioning without disruption; automation

Page 27: F5 Acopia  ARX Product Demonstration

Data Migration (One for One)

[Diagram: the client view of “home” on “server1” (U:), fronted by the F5 Acopia ARX, migrating from NAS-1 to NAS-2.]

– Transparent migrations occur at the file-system level via standard CIFS / NFS protocols
– A file system is migrated in its entirety to a single target file system
– All names, paths, and embedded links are preserved
– Multiple file systems can be migrated in parallel
– Inline policy steers all new file creations to the target filer: no need to go back and re-scan as with Robocopy or rsync, and no need to quiesce clients to pick up final changes
– File systems are probed by the ARX to ensure compatibility before merging
– The ARX uses no linking or stub technology, so it is easily backed out

Page 28: F5 Acopia  ARX Product Demonstration

Data Migration (One for One), continued

– The file and directory structure is identical on the source and target file systems
– All CIFS and NFS file-system security is preserved: if CIFS local groups are in use, SIDs are translated by the ARX
– File modification and access times are not altered during the migration; the ARX preserves the create time (depending on the filer)
– The ARX can perform a true multi-protocol migration in which both CIFS and NFS attributes/permissions are transferred (Robocopy does CIFS only, rsync NFS only)
– The ARX can optionally replicate CIFS shares and the associated share permissions to the target filer

Page 29: F5 Acopia  ARX Product Demonstration

Data Migration: Fan Out

[Diagram: the client view of “home” on “server1” (U:), fronted by the Acopia ARX, fanning out from NAS-1 to NAS-2, NAS-3 and NAS-4.]

– Fan-out migration lets the admin correct a sub-optimal data layout caused by reactive data management policies
– Structure can be re-introduced into the environment via fileset-based policies that allow migrations using:
  • anything in the file name or extension
  • file path
  • file age (last modify or last access)
  • file size
  • include or exclude (all files except for)
  • any combination (union or intersection)
  • regular expression matching, for the more advanced user
– Rules operate “in-line”: any new files are automatically created on the target storage, with no need to re-scan the source

Page 30: F5 Acopia  ARX Product Demonstration

Data Migration: Fan In

[Diagram: the client view of “home” on “server1” (U:), fronted by the Acopia ARX, fanning in from NAS-2, NAS-3 and NAS-4 to NAS-1.]

– Fan-in migration lets the admin take advantage of the larger / more flexible file systems on new NAS platforms
– Separate file systems can be merged and migrated into a single file system
– The ARX can perform a detailed file-system collision analysis before merging: a collision report is generated for each file system, and the admin can choose to remove collisions manually or let the ARX rename the offending files and directories
– Like directories are merged: clients see the aggregated directory
– Where a directory name is the same but has different permissions, the ARX can synchronize the directory attributes
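The collision analysis above boils down to finding relative paths that exist on more than one source file system before merging. A minimal sketch of that check, with invented server names and paths:

```python
# Sketch of a fan-in collision report: given the file listings of several
# source trees, report every relative path present in more than one source.
from collections import defaultdict

def collision_report(trees):
    # trees: {"NAS-2": ["home/a.txt", ...], "NAS-3": [...], ...}
    seen = defaultdict(list)
    for source, paths in trees.items():
        for p in paths:
            seen[p].append(source)
    # Only paths claimed by two or more sources collide.
    return {p: srcs for p, srcs in seen.items() if len(srcs) > 1}

report = collision_report({
    "NAS-2": ["home/alice/plan.doc", "home/bob/todo.txt"],
    "NAS-3": ["home/alice/plan.doc", "home/carol/q3.xls"],
})
print(report)  # {'home/alice/plan.doc': ['NAS-2', 'NAS-3']}
```

A merge tool would then either ask the admin to resolve each entry or rename the colliding files automatically.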

Page 31: F5 Acopia  ARX Product Demonstration

Data Migration: Name Preservation

[Diagram: the client view of “home” on \\server1 (U:), fronted by the Acopia ARX, with \\Server1 renamed to \\Server1-old.]

– Acopia uses a name-based takeover method for migrations in most cases: no 3rd-party namespace technology is required, though DFS can be layered over the ARX
– All client presentations (names, shares, exports, mount security) are preserved
– The source filer’s CIFS name is renamed and the original name is transferred to the ARX, allowing transparent insertion of the ARX solution; this helps avoid issues with embedded links in MS Office documents
– The ARX joins Active Directory with the original source filer’s CIFS name
– If WINS was enabled on the source filer it is disabled, and the ARX assumes the advertisement of any WINS aliases
– For NFS, the source filer’s DNS entry is updated to point to the ARX, or auto-mounter / DFS maps can be updated
– The ARX can assume the filer’s IP address if needed

Page 32: F5 Acopia  ARX Product Demonstration

Case Study: NAS Consolidation

“Acopia's products allow us to consolidate our back-end storage resources while providing data access to our users without disruption.” (Chief Technology Architect)

One of the world's leading financial services companies, with a global presence.

Environment: Windows file servers, NAS
Critical issue: large-scale file server to NAS consolidation; 24x7 environment
Reasons: cost savings in rack space, power, cooling and operations
Requirements: move the data without disrupting the business
Solution: ARX6000 clusters
Result: >80 file servers migrated to NAS without disruption; migrations completed faster, with less intervention

Page 33: F5 Acopia  ARX Product Demonstration


Migration Demonstration

Page 34: F5 Acopia  ARX Product Demonstration

Acopia Demo Topology

[Diagram: a Windows client connects through a layer-2 switch to a virtual server on the ARX500; the virtual volume (V:) presents virtual file systems backed by the physical file systems J: and L:.]

Page 35: F5 Acopia  ARX Product Demonstration

Storage Tiering: Information Lifecycle Management

Page 36: F5 Acopia  ARX Product Demonstration

Usage Scenario: Tiering / ILM

Match the cost of storage to the business value of the data: files are automatically moved between tiers based on flexible criteria such as age, type, size, etc.

Drivers: storage cost savings, backup efficiencies, compliance.

Benefits:
– Reduced CAPEX
– Reduced backup windows and infrastructure costs

Page 37: F5 Acopia  ARX Product Demonstration

Storage Tiering with F5 Acopia

Solution:
– Automated, non-disruptive data placement of flexibly defined filesets
– Multi-vendor, multi-platform
– Clean (no stubs or links)
– File movement can be scheduled

Benefits:
– Reduced CAPEX: leverage cost-effective storage
– Reduced OPEX
– Reduced backup windows and infrastructure costs

Page 38: F5 Acopia  ARX Product Demonstration

Storage Tiering with F5 Acopia, continued

– Can be applied to all data or to a subset via filesets
– Operates on either last-access or last-modify time
– The ARX can run tentative “what if” reports to allow proper provisioning of the lower tiers
– Files accessed or modified on lower tiers can be brought back up to tier 1 dynamically
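A tiering "what if" report is essentially a dry run of the age rule: count the files (and bytes) that would move to the lower tier without actually moving them. The sketch below is my own illustration of that idea with an invented 90-day cutoff, not ARX report output.

```python
# Sketch of a tiering dry run: given file records with last-access times,
# report how many files and how many bytes a given age cutoff would move.
import time

def what_if_report(files, cutoff_days, now=None):
    now = now or time.time()
    movers = [f for f in files if (now - f["atime"]) > cutoff_days * 86400]
    return {
        "files_to_move": len(movers),
        "bytes_to_move": sum(f["size"] for f in movers),
    }

now = time.time()
files = [
    {"name": "old.iso",   "size": 4_000_000_000, "atime": now - 200 * 86400},
    {"name": "fresh.doc", "size": 1_000_000,     "atime": now - 2 * 86400},
]
print(what_if_report(files, cutoff_days=90, now=now))
# {'files_to_move': 1, 'bytes_to_move': 4000000000}
```

Running this before enabling the rule tells the admin how much tier-2 capacity to provision.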

Page 39: F5 Acopia  ARX Product Demonstration

Storage Tiering Case Study

“Based upon these savings, we estimate that we will enjoy a return on our Acopia investment in well under a year.” (Reinhard Frumm, Director Distributed IS, Messe Dusseldorf)

International trade-show company.

[Diagram: users and applications in front of an Acopia ARX1000, with a NetApp 3020 as tier 1 and a NetApp 940c as tier 2.]

Challenges: move less business-critical data to less expensive storage, non-disruptively to users.
Solution: ARX1000 cluster.
Benefits:
– 50% reduction in disk spend
– Dramatic reduction in backup windows (from ~14 hours to ~3 hours) and backup infrastructure costs

Page 40: F5 Acopia  ARX Product Demonstration


Tiering Demonstration

Page 41: F5 Acopia  ARX Product Demonstration

Acopia Demo Topology

[Diagram: a Windows client connects through a layer-2 switch to a virtual server on the ARX500; the virtual volume (V:) presents virtual file systems backed by the physical file systems L: (tier 1) and N: (tier 2).]

Page 42: F5 Acopia  ARX Product Demonstration


Load Balancing

Page 43: F5 Acopia  ARX Product Demonstration

Load Balancing with Acopia

Solution:
– Automatically balances new file placement across file servers
– Flexible dynamic load-balancing algorithms
– Uses existing file storage devices

Benefits:
– Increased application performance
– Improved capacity utilization
– Reduced outages associated with data management

Page 44: F5 Acopia  ARX Product Demonstration

Load Balancing

A common problem for our customers is applications that require lots of space:
– Administrators are reluctant to provision a large file system, because if it ever needs to be recovered it will take too long
– They tend to provision smaller file systems and force the application to deal with adding new storage locations
– This typically requires application downtime and adds complexity to the application

The ARX decouples the application from the physical storage, so the application only needs to know a single storage location and no longer has to deal with multiple locations.

The storage administrator can now keep file systems small and dynamically add new storage without disruption: no more downtime when capacity thresholds are reached.

[Diagram: one application spread across six 2 TB file systems.]

Page 45: F5 Acopia  ARX Product Demonstration

Load Balancing, continued

– One or more file systems can be aggregated into a share farm
– Within the share farm, the ARX can load-balance new file creates using the following algorithms: round robin, weighted round robin, latency, capacity
– The ARX load-balances with file-level granularity, but constraints can be added to keep files and/or directories together
– The ARX can also maintain free-space thresholds for each file system in the share farm: when a file system crosses the threshold, it is removed from the new-file placement algorithm
– The ARX can also be set up to automatically migrate files off a file system if a certain free-space threshold is not maintained
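One of the algorithms listed above, capacity-based placement with a free-space threshold, can be sketched in a few lines. This is an illustrative model (share names and thresholds invented), not the ARX's algorithm implementation.

```python
# Sketch of capacity-based new-file placement over a share farm:
# shares below the free-space threshold are excluded from placement,
# and among the eligible shares the one with the most free space wins.
def pick_share(shares, min_free_bytes):
    # shares: {"nas1": free_bytes, ...}
    eligible = {s: free for s, free in shares.items() if free > min_free_bytes}
    if not eligible:
        raise RuntimeError("share farm full")
    # "Capacity" algorithm: place on the share with the most free space.
    return max(eligible, key=eligible.get)

farm = {
    "nas1": 50 * 2**30,  # 50 GiB free
    "nas2": 10 * 2**30,  # 10 GiB free
    "nas3": 1 * 2**30,   # 1 GiB free -- below threshold, excluded
}
print(pick_share(farm, min_free_bytes=5 * 2**30))  # nas1
```

Round robin or weighted round robin would replace the `max` selection with a rotating (or weight-proportional) pointer over the same eligible set.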

Page 46: F5 Acopia  ARX Product Demonstration

Load Balancing Case Study

“Acopia’s products increased our business workflow by 560%.” (Mike Streb, VP Infrastructure, WMG)

[Diagram: compute nodes in front of an Acopia ARX6000, backed by NetApp 3050 filers.]

Challenges:
– Infrastructure was a bottleneck to the production of digital content
– Difficult to provision new storage

Solution: ARX6000 cluster.

Benefits:
– Ability to digitize >500% more music
– 20% reduction in OPEX costs associated with managing storage
– Reduction in disk spend due to more efficient utilization of existing NAS

Page 47: F5 Acopia  ARX Product Demonstration

Acopia Demo Topology

[Diagram: a Windows client connects through a layer-2 switch to a virtual server on the ARX500; the virtual volume (V:) presents virtual file systems backed by a tier-1 share farm (L:, N:) and a tier-2 share (M:).]

Page 48: F5 Acopia  ARX Product Demonstration

Inline Policy Enforcement / Place by Name

Page 49: F5 Acopia  ARX Product Demonstration

Inline Policy Enforcement / Place by Name

Classification and placement of data based on name or path.

Drivers: tiered storage, business policies, SLAs for applications or projects, migration based on file type or path.

Benefits:
– File-level granularity
– Can migrate existing file systems to comply with current policy
– Operates inline, for real-time policy enforcement on new data creation

Page 50: F5 Acopia  ARX Product Demonstration

File Based Placement

– Filesets: a group of files based on name, type, extension, string, path, or size; unions, intersections, include and exclude are supported
– Storage tiers: an arbitrary definition chosen by the enterprise; a tier can consist of a single share or a share farm with capacity balancing
– File placement: the namespace is walked only once for the initial placement of files; in-line policy enforcement then places files on the proper tier in real time
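The in-line part of place-by-name amounts to consulting a rule table at file-create time, so new data lands on the right tier up front with no later treewalk. A minimal sketch; the patterns, tier names, and first-match-wins ordering are illustrative assumptions.

```python
# Sketch of in-line place-by-name: a new file's name is matched against
# an ordered rule table at create time, and the first match picks the tier.
import fnmatch

RULES = [                # first match wins
    ("*.mp3", "tier2"),  # bulky media goes to the lower tier
    ("*.tmp", "tier2"),  # scratch files too
    ("*",     "tier1"),  # default: everything else on tier 1
]

def place(filename):
    for pattern, tier in RULES:
        if fnmatch.fnmatch(filename, pattern):
            return tier
    return "tier1"

print(place("song.mp3"))    # tier2
print(place("report.doc"))  # tier1
```

Because the decision happens on the create path, existing files only need one initial walk; everything created afterwards is placed correctly as it arrives.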

Page 51: F5 Acopia  ARX Product Demonstration


Demonstration

Inline Policy Enforcement and File Placement by Name

Page 52: F5 Acopia  ARX Product Demonstration

Acopia Demo Topology

[Diagram: a Windows client connects through a layer-2 switch to a virtual server on the ARX500; the virtual volume (V:) presents virtual file systems backed by a tier-1 share farm (L:, N:) and a tier-2 share (M:).]

Page 53: F5 Acopia  ARX Product Demonstration


Shadow Volume Replication

Page 54: F5 Acopia  ARX Product Demonstration

Data Replication with Acopia

Technology:
– Fileset-based replication
– NFS & CIFS across multiple platforms
– Replicas may be viewed
– Supports multiple targets
– Change-based updates only (file deltas)

Benefits:
– The target is not required to be of like storage type
– WAN bandwidth preservation
– Can be used for centralized backup applications

[Diagram: applications and users at the primary site access the Acopia global namespace through an ARX cluster, which replicates over the IP network to the secondary site.]

Page 55: F5 Acopia  ARX Product Demonstration

Data Replication Case Study

“Acopia has reduced our total backup and replication times by about 70%.” (Bonnie Stiewing, Senior Systems Administrator, United Rentals)

World’s largest equipment rental company.

[Diagram: tier-1 and tier-2 storage at the primary site, replicated across the WAN to a replica at the disaster recovery site.]

Challenges:
– Upgrade the NAS platform
– Introduce lower-cost ATA disk
– File-based disaster recovery solution

Solution:
– ARX1000 cluster at the primary data center
– ARX1000 at the disaster recovery facility

Benefits:
– NAS upgrade with no impact to users
– 50% savings through use of ATA disk
– Cost-effective disaster recovery solution
– Dramatic reduction in backup and replication times

Page 56: F5 Acopia  ARX Product Demonstration


Demonstration

Shadow Volume Replication

Page 57: F5 Acopia  ARX Product Demonstration

Acopia Demo Topology

[Diagram: a Windows client connects through a layer-2 switch to a virtual server on the ARX500; virtual volume V: is backed by a tier-1 share farm (L:, N:) and tier 2 (O:), with shadow volume replication to a second virtual volume W: backed by M:.]

Page 58: F5 Acopia  ARX Product Demonstration
