Storage Fundamentals



    Storage Basics & EMC Storage


    Day 1

    Basics of Storage Technology

    Managing and Monitoring the Storage Devices

    Business Continuity

    Day 2

    Switches and Directors

    Host Integration with the Storage Devices

    Labs

    Agenda


    Types of Storage Connectivity

    Direct Attached Storage (DAS)

    Storage Area Networks (SAN)

    Network Attached Storage (NAS)

    SAN Foundation

    Multipath & Failover

    Basics of Storage Technology


    Options for connecting computers (Hosts) to storage.

    DAS (Direct Attached Storage): Storage is directly attached by a cable to

    the Host.

    SAN (Storage Area Network): Storage resides on a dedicated network,

    providing any-to-any connection between hosts and storage.

    NAS (Network Attached Storage): Storage is attached to a TCP/IP based

    network (LAN or WAN), and accessed using CIFS and NFS protocols for

    file access and file sharing. A NAS device is sometimes also called a file server. It receives requests over the network and has an internal processor that translates each request into the SCSI block I/O commands needed to access the appropriate device, which is visible only to the NAS product itself. (The sketch after this list contrasts file I/O with block I/O.)

    Types of Storage Connectivity
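    To make the distinction concrete, the sketch below contrasts a file-level request (what a NAS client issues) with a block-level request (what a host issues to DAS or SAN storage). It is a minimal illustration in Python; the paths, offsets and block size are hypothetical, and reading a raw device normally requires elevated privileges.

    ```python
    import os

    # File I/O (NAS-style): name a file, an offset and a byte count.
    # The storage system resolves which blocks on which disks hold the data.
    def file_read(path, offset, length):
        with open(path, "rb") as f:
            f.seek(offset)
            return f.read(length)

    # Block I/O (DAS/SAN-style): address a specific device and block (sector) range.
    # The host's file system or volume manager decides what those blocks mean.
    def block_read(device, block_number, block_size=512, count=1):
        fd = os.open(device, os.O_RDONLY)
        try:
            return os.pread(fd, block_size * count, block_number * block_size)
        finally:
            os.close(fd)

    # Illustrative calls only:
    # file_read("/mnt/nas_share/report.txt", offset=0, length=4096)
    # block_read("/dev/sdb", block_number=2048)
    ```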


    Direct Attach Storage


    Direct Attached Storage is restricted to access by a single host, or sometimes by two or more hosts in cluster (failover or failback) configurations.

    DAS may initially appear to be low cost from the point of view of each user or department. However, from the wider perspective of the entire organization, DAS costs may be higher than networked approaches because of the difficulty of sharing unused capacity with other hosts and the lack of a central point of management for multiple storage systems.

    Direct Attach Storage (Cont)


    Storage Area Network

    A SAN serves to interconnect storage-related resources that are connected to multiple servers. SANs are usually built using Fibre Channel technology, but the concept of a SAN is independent of the underlying type of network; iSCSI (IP-based SANs) can also be used in production environments.

    I/O requests to disk storage on a SAN are called block I/Os because, just as for direct-attached disk, the read and write I/O commands identify a specific device (disk drive or tape drive) and, in the case of disks, specific block (sector) locations on the disk.


    Access: Longer distance between the processors and storage, higher availability,

    improved performance. Fibre Channel is faster than most LAN media. A larger

    number of processors can be connected to the same storage device.

    Consolidation: Replacement of multiple independent storage devices by fewer

    devices that support capacity sharing. SANs provide the ultimate in scalability,

    because software can allow multiple SAN devices to appear as a single pool of

    storage accessible to all processors on the SAN. Storage on a SAN can be

    managed from a single point of control. Controls over which hosts can see which

    storage (called zoning and LUN masking) can be implemented.

    Storage Area Network (Cont)


    Network Attached Storage


    NAS resides on a network that may be shared with non-storage traffic. Today, the network is usually an Ethernet LAN, but it could be any network that supports IP-based protocols.

    In contrast to the block I/O used by DAS and SANs, NAS I/O requests are called file I/Os. A file I/O is a higher-level type of request that specifies the file to be accessed, an offset into the file, and the number of bytes to read or write beginning at that offset.

    Unlike block I/O, there is no awareness of a disk volume or disk

    sectors in a file I/O request.

    Network Attached Storage (Cont)


    Ease of installation: NAS is generally easier to install and manage than a SAN. A NAS appliance can usually be installed on an existing LAN/WAN network, and NAS manufacturers often cite up-and-running times of 30 minutes or less. Hosts can potentially start to access NAS storage quickly, without needing disk volume definitions or special device drivers. In contrast, SANs take more planning, including design of a Fibre Channel network and selection and installation of SAN management software.

    Backup: NAS appliances include a snapshot backup facility to make backup copies of data onto tape while minimizing application downtime. For SANs, such facilities are available on selected disk systems or in selected storage management packages.

    Resource pooling: NAS allows capacity within the appliance to be pooled. That is, the NAS device is configured as one or more file systems, each residing on a specified set of disk volumes. All users accessing the same file system are assigned space within it on demand.

    Network Attached Storage (Cont)


    Define Storage Area Network (SAN)

    Features and benefits of implementing a SAN

    Overview of the underlying protocols used within a SAN

    SAN Foundation


    Fabric: a logically defined space used by FC nodes to communicate with each other; one switch or a group of switches connected together that routes traffic between attached devices.

    Component identifiers:

    Domain ID: a unique identifier for an FC switch within a fabric.

    World Wide Name (WWN): a unique 64-bit identifier for an FC port (either a host port or a storage port).

    Basic Structure of SAN

    [Diagram: a host (application, O/S, file system) attached to a switch that provides the fabric login and name services, connecting to a storage array]


    SAN addresses two storage connectivity

    problems:

    Host-to-storage connectivity: so a host

    computer can access and use storage

    provisioned to it

    Storage-to-storage connectivity: for data

    replication between storage arrays

    SAN technology uses block-level I/O protocols, whereas NAS uses file-level I/O protocols.

    The host is presented with raw storage devices, just as in traditional direct-attached storage.

    Basic Structure of SAN (Cont)



    SAN Connectivity Methods


    There are three basic methods of communication using Fibre Channel infrastructure:

    Point to point (P-to-P): a direct connection between two devices.

    Fibre Channel Arbitrated Loop (FC-AL): a daisy chain connecting two or more devices.

    Fabric connect (FC-SW): multiple devices connected via switching technologies.

    SAN Connectivity Methods (Cont)


    RAID stands for Redundant Array of Independent Disks.

    Conceptually, RAID is the use of two or more physical disks to create one logical disk, where the physical disks operate in tandem to provide greater size and more bandwidth.

    RAID has become an indispensable part of any storage system today and is the foundation for storage technologies. The use of RAID technology has redefined the design methods used for building storage systems.

    RAID


    RAID can and will provide excellent I/O performance when implemented with the same care that database administrators have historically taken in designing simple disk solutions, e.g., separating tables from their corresponding indexes if they are accessed in tandem. On the other hand, it can wreak havoc when implemented in a haphazard fashion.

    The two main technical reasons for making the jump to RAID are scalability and high availability in the context of I/O and system performance.

    RAID (Cont)


    Striping: the process of breaking data down into pieces and distributing it across the multiple disks that make up a logical volume (divide, conquer, and rule).

    Mirroring: the process of simultaneously writing the same data to another member of the same volume.

    Parity: the term for error checking. Some levels of RAID perform parity calculations when reading and writing data.

    Concepts In RAID


    Striping results in a logical volume that is larger and has greater I/O bandwidth than a single disk. It is based purely on the linear power of incrementally adding disks to a volume to increase the size and I/O bandwidth of the logical volume. The increase in bandwidth is a result of how read/write operations are done on a striped volume.

    A given disk can process a specific number of I/O operations per second; anything more than that and the requests start to queue up. By creating a single volume from pieces of data on several disks, we can increase the capacity to handle I/O requests in a linear fashion, combining each disk's I/O bandwidth. When multiple I/O requests for a file on a striped volume are processed, they can be serviced by multiple drives in the volume, as the requests are sub-divided across several disks. This way, all drives in the striped volume can engage and service multiple I/O requests more efficiently (see the sketch below).

    Striping
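    As a rough illustration of the striping idea, the sketch below maps logical-volume offsets onto member disks assuming a fixed stripe-unit size and round-robin placement (both assumptions; the 64 KB value is only an example). A 1 MB read then touches every member of a four-disk stripe.

    ```python
    STRIPE_UNIT = 64 * 1024  # 64 KB stripe unit (illustrative)

    def stripe_location(logical_offset, num_disks, stripe_unit=STRIPE_UNIT):
        """Map a logical-volume offset to (disk index, offset within that disk)."""
        chunk = logical_offset // stripe_unit      # which stripe unit overall
        disk = chunk % num_disks                   # round-robin across members
        stripe_row = chunk // num_disks            # full rows preceding this unit
        return disk, stripe_row * stripe_unit + logical_offset % stripe_unit

    # A 1 MB read on a 4-disk striped volume engages all four members:
    disks_touched = {stripe_location(off, 4)[0] for off in range(0, 1024 * 1024, STRIPE_UNIT)}
    print(sorted(disks_touched))   # [0, 1, 2, 3]
    ```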


    Mirroring provides protection for data by writing exactly the same information to every member in the volume. Additionally, mirroring can enhance read operations, because read requests can be serviced from either member of the volume.

    Mirroring


    Some levels of RAID perform parity calculations when reading and writing data. The calculations are primarily done on write operations. However, if one or more disks in a volume are unavailable, then depending on the level of RAID, even read operations require parity operations to rebuild the pieces on the failed disks. Parity is used to determine the write location and validity of each stripe that is written in a striped volume. Parity is implemented on those levels of RAID that do not support mirroring.

    Parity algorithms provide Error Correction Code (ECC) capabilities, calculating parity for a given stripe or chunk of data within a RAID volume. The size of a chunk is operating system (OS) and hardware specific. The codes generated by the parity algorithm are used to recreate data in the event of disk failure(s): because the algorithm can reverse the parity calculation, it can rebuild data lost as a result of disk failures (see the sketch below).

    Parity
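    The error-correction idea behind parity can be shown with the simplest scheme, bytewise XOR, as used by single-parity RAID levels. This is a toy sketch, not controller code: the parity chunk is the XOR of the data chunks, and XOR-ing the parity with the surviving chunks regenerates a lost chunk.

    ```python
    def xor_parity(chunks):
        """Compute a parity chunk as the bytewise XOR of equal-sized data chunks."""
        parity = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, byte in enumerate(chunk):
                parity[i] ^= byte
        return bytes(parity)

    def rebuild(surviving_chunks, parity):
        """Regenerate the single missing chunk from the survivors plus parity."""
        return xor_parity(surviving_chunks + [parity])

    stripe = [b"AAAA", b"BBBB", b"CCCC"]        # one stripe across three data disks
    parity = xor_parity(stripe)                 # contents of the parity disk
    assert rebuild([stripe[0], stripe[2]], parity) == stripe[1]   # disk 1 lost, rebuilt
    ```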


    Striping yields better I/O performance.

    Mirroring provides data protection.

    Parity (when applicable) is a way to check the work.

    With these 3 aspects of RAID, we can achieve scalable, protected,

    highly available I/O performance.

    Putting It All Together


    RAID can be implemented as software-based RAID, where the control software is usually either bundled with the OS or provided as an add-on. This type of RAID is also known as host-based RAID. This implementation imposes a small overhead, as it consumes memory, I/O bandwidth and CPU on the host where it is implemented.

    RAID can also be implemented in hardware, in the form of micro-code present in dedicated disk controller modules that connect to the host. These controllers are internal to the host where RAID is implemented.

    The Types Of RAID


    RAID can also be implemented using controllers that are external to the host where it is implemented. This is a bridge-based implementation and is not preferred, as it incurs longer service times for I/O requests due to the longer I/O paths from the disks to the host. This type of implementation is usually typical of I/O sub-systems that are half Fibre Channel and half SCSI. It is also common to see this implementation on storage systems that support multiple hosts running multiple operating systems.

    Hardware-based RAID should be preferred over software-based (host-based) RAID, which in turn should be preferred over bridge-based RAID.

    The Types Of RAID (Cont)


    RAID levels usually range from 0 to 7. The differences between the various levels are based on varying I/O patterns across the disks. These I/O patterns, by their inherent nature, offer different levels and types of protection and performance characteristics.

    The Levels Of RAID


    RAID 0:

    This level of RAID is a normal file system with striping, in which data loss is imminent with any disk failure(s). In simple words, it is data striped across a bunch of disks. This level provides good read/write performance but no recoverability.

    The Levels Of RAID (Cont)


    RAID 1:

    In very simple terms, this level of RAID provides mirroring and thus full data redundancy. This is often called a mirrored disk. In most cases, the volume that the operating system sees is made up of two or more disks; however, it is presented to an application or a database as a single volume. As the system writes to this volume, it writes an exact copy of the data to all members in the volume.

    This level of RAID requires twice the amount of disk storage compared to RAID 0. Additionally, some performance gains can be reaped from parallel reading of the two mirror members: RAID 1 doubles the capacity for processing read requests from the volume compared to having no mirrored members. There are no parity calculations involved in this level of RAID.

    The Levels Of RAID (Cont)


    RAID 0+1

    Stripe first, then mirror what you just striped. This level of RAID combines levels 0 and 1 (striping and mirroring). It provides good write and read performance and redundancy without the overhead of parity calculations. On disk failure(s), no reconstruction of data is required, as the data is read from the surviving mirror.

    This level of RAID is the most common implementation for write-intensive applications and is very widely used. The most common complaint is the cost, since it requires twice as much space; to justify this cost, you will have to spend some time understanding the performance requirements and availability needs of your systems.

    It must be noted that the loss of one disk of a mirrored member does reduce the I/O servicing capacity of the volume by 50%.

    The Levels Of RAID (Cont)


    RAID 1+0

    Mirror first, then stripe over what you just mirrored. This level of RAID has the same functionality as RAID 0+1 but is better suited for high availability, because the loss of one disk in a mirror pair does not make an entire member of the mirrored volume unavailable.

    It must be noted that the loss of one disk of a mirrored member does not reduce the I/O servicing capacity of the volume by 50%. This should be the preferred method for configurations that combine striping and mirroring, subject to hardware limitations (see the sketch below comparing 0+1 and 1+0 under double disk failures).

    The Levels Of RAID (Cont)
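    The availability difference between 0+1 and 1+0 can be checked by brute force. The sketch below assumes an eight-disk volume (an arbitrary example) and ignores rebuild windows: it counts, for every possible pair of disk failures, whether data survives. In 1+0 only the mirror partner of the first failed disk is fatal; in 0+1 any disk on the other stripe is.

    ```python
    from itertools import combinations

    N = 8  # total disks in the volume (illustrative)

    def survives_raid10(failed):
        # RAID 1+0: disks (0,1), (2,3), ... form mirror pairs; data is lost only
        # if both disks of some pair have failed.
        pairs = [(i, i + 1) for i in range(0, N, 2)]
        return not any(a in failed and b in failed for a, b in pairs)

    def survives_raid01(failed):
        # RAID 0+1: disks 0..N/2-1 form stripe A, the rest stripe B (mirrored).
        # Data is lost once at least one disk has failed on each stripe.
        side_a = any(d < N // 2 for d in failed)
        side_b = any(d >= N // 2 for d in failed)
        return not (side_a and side_b)

    double_failures = [set(f) for f in combinations(range(N), 2)]
    print(sum(map(survives_raid10, double_failures)), "of", len(double_failures))  # 24 of 28
    print(sum(map(survives_raid01, double_failures)), "of", len(double_failures))  # 12 of 28
    ```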


    RAID 2

    This level of RAID incorporates striping, and the redundancy/protection is provided through parity. This method requires less disk space compared to RAID 1, but the need to calculate and write parity makes writes slower.

    This level of RAID was one of the early implementations of striping with parity, using the Hamming code technique, but was later replaced by RAID 3, 5 and 7. This level of RAID is very rarely implemented.

    The Levels Of RAID (Cont)


    RAID 3

    In this level of RAID, the ECC algorithm calculates parity to provide data redundancy as in RAID 2, but all of the parity is stored on one disk. The parity for this level of RAID is stored at the bit/byte level, as opposed to the block/chunk level.

    RAID 3 is slowly gaining popularity but is still not very widely used. It is best suited for data mart/data warehouse applications that support a few users but require sequential bulk I/O performance (data-transfer intensive). When full table scans and/or index range scans are the norm for a given application and the user population is small, RAID 3 may be just the ticket.

    The Levels Of RAID (Cont)


    RAID 4

    This level of RAID is the same as RAID 3 but with block-level parity, and is very rarely implemented.

    The Levels Of RAID (Cont)


    RAID 5

    This is by far one of the most common RAID implementations today. In this level of RAID, data redundancy is provided via parity calculations, and the parity is distributed across the drives configured in the volume. It results in minimal loss of disk space to parity values, and it provides good performance on random read operations and light write operations. RAID 5 caters better to I/Os per second (IOPS) with its support for concurrently servicing many I/O requests.

    It should not be implemented for write-intensive applications, since the continuous process of reading a stripe, calculating the new parity and writing the stripe back to disk (with the new parity) makes writes significantly slower (see the sketch below).

    The Levels Of RAID (Cont)
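    The write penalty can be quantified with the standard rule of thumb that a small (sub-stripe) RAID 5 write costs four disk I/Os: read old data, read old parity, write new data, write new parity. The back-of-the-envelope sketch below assumes every drive delivers the same IOPS; the drive count and 180 IOPS figure are only examples.

    ```python
    def raid5_small_write_iops(disks, iops_per_disk):
        """RAID 5 usable small-write IOPS: each host write costs 4 disk I/Os."""
        return disks * iops_per_disk / 4

    def raid10_small_write_iops(disks, iops_per_disk):
        """RAID 1+0 usable small-write IOPS: each host write costs 2 disk I/Os."""
        return disks * iops_per_disk / 2

    # Eight drives at 180 IOPS each (illustrative numbers):
    print(raid5_small_write_iops(8, 180))    # 360.0 host writes per second
    print(raid10_small_write_iops(8, 180))   # 720.0 host writes per second
    ```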



    An exception to this rule that requires consideration is when the I/O

    sub-system has significant amounts of write cache and the

    additional overhead imposed by the ECC algorithms is measured

    and confirmed by analysis to be minimal. The definition of

    significant is left to the discretion of the reader, but in general a

    write cache sized in many gigabytes can be considered significant.

    On many systems, however, the performance penalty for write

    operations can be expensive even with a significant write cache

    depending on the number of writes and the size of each write.

    RAID 5 is best suited to read-only applications. Like RAID 3, it is

    best suited for data mart/data warehouse applications, but it can

    support many application users performing random I/O instead of

    sequential I/O.

    The Levels Of RAID (Cont)



    RAID 6

    In this level of RAID, parity is calculated using a more complex algorithm, and redundancy is provided using an advanced multi-dimensional parity method. RAID 6 stores two sets of parity for each block of data and thus makes writes even slower than RAID 5.

    However, on disk failures, RAID 6 facilitates quicker availability of the drives in the volume (after a disk failure), without incurring the negative performance impact of re-syncing the drives in the volume. This level of RAID is very rarely implemented.

    The Levels Of RAID (Cont)



    RAID-S

    If you are using EMC storage arrays, then this is your version of RAID 3/5. It is well suited to data mart/data warehouse applications.

    This level of RAID should be avoided for write-intensive or high-volume transactional applications, for the same reasons as any RAID 5 implementation. EMC storage solutions are usually configured with large write caches but, generally speaking, these write caches are not large enough to overcome the additional overhead of the parity calculations during writes.

    The Levels Of RAID (Cont)



    Auto RAID

    With Auto RAID (implemented by HP), the controller, along with the intelligence built into the I/O sub-system, dynamically modifies the level of RAID on a given disk block to either RAID 0+1 or RAID 5, depending on the recent history of I/O requests on that block.

    The recent history of I/O patterns on the disk block is maintained using the concept of a working set (a set of disk blocks). For obvious reasons, there is one working set each for reads and writes, and blocks keep migrating back and forth between the two sets based on the type of activity. A disk block in this context is 64 KB in size.

    The Levels Of RAID (Cont)


    Level of RAID: Functionality

    RAID 0: Striping, no recoverability; for workloads that require read/write performance without recoverability.

    RAID 1: Mirroring, recoverability; for workloads that require write performance.

    RAID 0+1 / 1+0: Combination of 0 and 1, recoverability; for workloads that require read and write performance; very widely used; 1+0 is better than 0+1 for availability.

    RAID 2: Early implementation of striping with parity; uses the Hamming code technique for parity calculations; was replaced by RAID 3, RAID 5, and RAID 7; very rarely implemented.

    RAID 3: Striping with bit/byte-level parity, dedicated parity disk, recoverability; for workloads that require read performance for bulk sequential reads and data transfer over IOPS; not widely used but gaining popularity.

    RAID 4: Striping with block-level parity, dedicated parity disk, recoverability; very rarely implemented.

    The Levels Of RAID (Cont)


    RAID 5: Striping with block-level parity, parity distributed across the disks in the volume, recoverability; for workloads that require read performance for small random reads and IOPS over data transfer; very widely used.

    RAID 6: Striping with block-level multi-dimensional parity, recoverability; slower writes than RAID 5; very rarely implemented.

    RAID 7: Same as RAID 3, but with better asynchronous capability for reads and writes; significantly better overall I/O performance than RAID 3; significantly more expensive than RAID 3.

    RAID-S: EMC's implementation of RAID 3/5.

    Auto RAID: Hewlett-Packard's (HP) automatic RAID technology that auto-configures the I/O system based on the nature and type of I/O performed on the disk blocks within the RAID array.

    The Levels Of RAID (Cont)


    Continuous access to information is a must for the smooth functioning of business operations, as the cost of business disruption could be huge. Business continuity is therefore an integrated, enterprise-wide process that includes all activities that a business must perform to mitigate the impact of planned and unplanned downtime.

    Business continuity entails preparing for, responding to, and recovering from a system outage that adversely affects business operations. It involves proactive measures, such as business impact analysis, risk assessment, data protection and security, and reactive countermeasures, such as disaster recovery and restart, to be invoked in the event of a failure.

    Business Continuity


    Business continuity planning involves analyzing the business impact of an outage and designing appropriate solutions to recover from a failure. One or more copies of the original data are maintained using any of the following strategies, so that data can be recovered and business operations can be restarted using an alternate copy.

    Backup and recovery: Backup to tape is the predominant method of ensuring data availability. These days, low-cost, high-capacity disks are also used for backup, which considerably speeds up the backup and recovery process. The frequency of backup is determined based on the frequency of data changes.

    Storage array-based replication (local): Data is replicated within the same storage array. The replica is used independently for BC operations. Replicas can also be used for restore operations if data corruption occurs.

    Business Continuity Planning



    Snapview Snapshots

    Provide support for consistent on-line backup

    Offload backup processing from production hosts

    Snapshots can be used for testing and decision-support scenarios

    A successful recovery requires that consistent data be written to the backup media

    Storage array-based replication (local):


    Overall highest service level for backup and recovery

    Full sync on the first copy; faster incremental syncs on subsequent copies

    Fastest restore from Clone

    Removes performance impact on production volume

    De-coupled from production volume

    100% copy of all production data on separate volume

    Backup operations scheduled anytime

    Offers multiple recovery points

    Up to eight Clones against a single source volume

    Selectable recovery points in time

    Accelerates application recovery

    Instantly restore from Clone, no more waiting for tape restore

    Snap view Clones


    SnapView Clones and SnapView Snapshots

    Each SnapView Clone is a full copy of the source: creating the initial Clone requires a full sync, with incremental syncs thereafter.

    Clones may have performance advantages over snapshots in certain situations: there is no copy-on-first-write mechanism (see the sketch below), and there is less potential disk contention, depending on write activity. Each Clone requires 1x additional disk space.

    Limits:
    Elements per Source: Snapshots 8; Clones 8
    Sources per storage system: 100 Sources*; 50 Clone Groups*
    Elements per storage system: 800 sessions*, 300 snapshots*; 100 total images*

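    The copy-on-first-write mechanism used by snapshots (and avoided by clones) can be sketched as follows: before a source chunk is overwritten for the first time after the snapshot, the original chunk is copied into a save area, so the snapshot can always be read as the point-in-time image. This is a simplified model, not the SnapView implementation, and the chunk-keyed dictionaries are purely illustrative.

    ```python
    class CofwSnapshot:
        """Toy copy-on-first-write snapshot of a chunked source volume."""
        def __init__(self, source):
            self.source = source          # dict: chunk number -> bytes
            self.save_area = {}           # originals preserved on first write

        def write(self, chunk, data):
            # First write to this chunk since the snapshot? Save the original,
            # then let the production write proceed. This extra copy is the
            # contention that clones avoid.
            if chunk not in self.save_area:
                self.save_area[chunk] = self.source.get(chunk)
            self.source[chunk] = data

        def read_snapshot(self, chunk):
            # Snapshot view: saved original if the chunk changed, else live data.
            if chunk in self.save_area:
                return self.save_area[chunk]
            return self.source.get(chunk)

    volume = {0: b"jan", 1: b"feb"}
    snap = CofwSnapshot(volume)
    snap.write(1, b"feb-updated")
    print(volume[1], snap.read_snapshot(1))   # b'feb-updated' b'feb'
    ```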


    Adding Clones

    Must be exactly equal in size to the Source LUN

    Remove Clones

    Cannot be in active sync or reverse-sync process

    Termination of Clone Relationship

    Renders Source and Clone as independent LUNs

    Does not affect data

    Source and Clone Relationship



    MirrorView

    Independent of server, operating system, network, applications, and database. Centralized, simplified management via EMC Navisphere.

    MirrorView software must be loaded on both the Primary and Secondary arrays.

    The Secondary LUN must be the same size as the Primary LUN, but need not be the same RAID type as the Primary.

    The Secondary LUN is not accessible to hosts: the mirror must be removed, or the Secondary promoted to Primary, for a host to have access.

    Bi-directional mirroring is fully supported.

    Remote Replication (Mirrorview / Sancopy)



    Switch attach
    FC/IP router . . . . . . . . . . >60 km
    DWDM . . . . . . . . . . . . . . 200 km
    Optical extender . . . . . . . 40 km
    Long wave GBIC . . . . . . . 10 km
    Shortwave GBIC . . . . . . . 500 m

    Direct attach
    CLARiiON to CLARiiON . . 300/500 m
    Optical extender . . . . . . . . 10 km

    MirrorView Connectivity, Flexibility, and Distances



    Primary: the CLARiiON that serves mirrored primary data to a production host.

    Secondary: the CLARiiON that contains a mirrored secondary copy of the primary data.

    Mirror Synchronization: the mechanism that copies data from the primary LUN to a secondary LUN. The mechanism may use a fracture log / write intent log to avoid a full data copy (see the sketch below).

    Mirror Fracture: the condition when a secondary is unreachable by the primary; it can also be invoked by administrative command.

    Remote Mirror Terms
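    A fracture log can be pictured as a coarse bitmap of primary-LUN regions written while the secondary was unreachable; on resynchronization only the flagged regions are copied rather than the whole LUN. The sketch below is that bookkeeping in miniature; the 1 MB region size and the class layout are assumptions, not the MirrorView on-disk format.

    ```python
    class FractureLog:
        """Track regions of the primary LUN modified while the mirror is fractured."""
        def __init__(self, region_size=1 << 20):   # 1 MB regions (assumed)
            self.region_size = region_size
            self.dirty = set()                     # indices of changed regions

        def record_write(self, offset, length):
            first = offset // self.region_size
            last = (offset + length - 1) // self.region_size
            self.dirty.update(range(first, last + 1))

        def regions_to_resync(self):
            # Incremental resync copies only these regions, not the full LUN.
            return sorted(self.dirty)

    log = FractureLog()
    log.record_write(offset=5 << 20, length=3 << 20)   # writes made while fractured
    print(log.regions_to_resync())                     # [5, 6, 7]
    ```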


    Remote Mirror Functionality


    High Availability

    Mirrors resilient to single SP failures

    Dual SP protection (primary & secondary copies)

    Host I/O allowed to mirror while mirror sync active

    Checkpoint of mirror sync progress

    Allows sync to continue from the last sync checkpoint (in case of primary failure)

    Quick recovery of single SP or full failure

    Write intent log feature removes full data sync requirement

    Mirror I/O can be multiplexed across multiple FC connections

    For HA and performance

    Remote Mirror Functionality


    How Does SAN Copy Work?


    The CLARiiON system acts as a Copy Manager. SAN Copy runs on CLARiiON CX400 through CX700, FC4700/FC4700-2 and later.

    It can achieve TBs/hour performance, depending on the network infrastructure.

    It performs block-level moving/copying of full LUNs. Simultaneous push and pull (bidirectional) data movement is supported, with 64 KB granularity for incremental copies (see the sketch below).

    It communicates via World Wide Names, over SAN, LAN or WAN (via FC/IP conversion).

    It can use the following devices as source data: a SnapView Snapshot (full copies only) or Clone, a TimeFinder BCV, or an idle production LUN.

    How Does SAN Copy Work?
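    The 64 KB incremental granularity suggests a changed-extent map: after the initial full copy, only 64 KB extents flagged as changed since the last session are moved. The sketch below models that idea in Python; it is not the SAN Copy on-array implementation, and the in-memory bytearrays simply stand in for source and target LUNs.

    ```python
    EXTENT = 64 * 1024   # 64 KB incremental-copy granularity

    def incremental_copy(source, target, changed_extents):
        """Copy only the 64 KB extents flagged as changed since the last session."""
        moved = 0
        for extent in sorted(changed_extents):
            offset = extent * EXTENT
            target[offset:offset + EXTENT] = source[offset:offset + EXTENT]
            moved += EXTENT
        return moved

    src = bytearray(10 * EXTENT)
    dst = bytearray(src)                        # state after the initial full copy
    src[3 * EXTENT:3 * EXTENT + 5] = b"delta"   # production writes touch extent 3
    print(incremental_copy(src, dst, {3}), "bytes moved")   # 65536 bytes moved
    assert dst == src
    ```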


    SanCopy Topology


    SanCopy Topology

    [Diagram: a Navisphere management station on the LAN directs the Copy Manager running on a CLARiiON, which moves LUN/volume data over the SAN between CLARiiON and Symmetrix arrays and attached hosts; the object copied is a LUN/volume]

    SAN Copy: Data Mobility for Business


    Enable better business decisions: pull data from remote locations to the data center, gathering daily sales records and inventory updates.

    Stop costly data errors: push data to distributed locations, such as applications, daily pricing updates, and inventory updates.

    Reduce operational costs: centralize data for easier management.

    SAN Copy: Data Mobility for Business

    [Diagram: remote sites (Delhi, Atlanta, Mumbai, Pune) exchanging data with a corporate data center]


    Managing the Storage


    Management Software

    Multipath and Failover

    Managing the Storage

    SAN Management Tools


    SAN Management Tools

    CLARiiON Hardware

    FLARE Operating Environment

    Navisphere

    EMC ControlCenter

    CLARiiON Based Applications


    CLARiiON Management Options


    There are two CLARiiON management interfaces:

    CLI (Command Line Interface): navicli commands can be entered from the command line and can perform all management functions.

    GUI (Graphical User Interface): Navisphere Manager is the graphical interface for all management functions on the CLARiiON array.

    CLARiiON Management Options


    Navisphere Manager


    Discover
    Discovers all managed CLARiiON systems

    Monitor
    Shows the status of storage systems, Storage Processors, disks, snapshots, remote mirrors, and other components
    Centralized alerting

    Apply and provision
    Configure volumes and assign storage to hosts
    Configure snapshots and remote mirrors
    Set system parameters
    Customize views via Navisphere Organizer

    Report
    Provides extensive performance statistics via Navisphere Analyzer

    Navisphere Manager

    Storage Configuration and Provisioning


    Provisioning starts with understanding application and server requirements and planning the configuration.

    A RAID Group is a collection of physical disks; a RAID protection level is assigned to all disks within the RAID Group.

    Binding LUNs is the creation of Logical Units from space within a RAID Group.

    Storage Groups are collections of LUNs that a host or group of hosts have access to.

    The steps are (see the sketch below):
    Step 0 - Planning
    Step 1 - Create RAID Groups
    Step 2 - Bind LUNs
    Step 3 - Create Storage Groups
    Step 4 - Add LUNs to Storage Groups
    Step 5 - Connect Hosts with Storage Groups
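    The sketch below strings the five steps together as a small Python model of the objects involved (RAID groups, bound LUNs, storage groups) and the order they must be created in. It only illustrates the relationships; on a real CLARiiON these actions are performed through Navisphere Manager or navicli, and every identifier here is made up.

    ```python
    class Array:
        """Toy model of CLARiiON provisioning objects and their ordering."""
        def __init__(self):
            self.raid_groups, self.luns, self.storage_groups = {}, {}, {}

        def create_raid_group(self, rg_id, disks, raid_level):        # Step 1
            self.raid_groups[rg_id] = {"disks": disks, "level": raid_level}

        def bind_lun(self, lun_id, rg_id, size_gb):                   # Step 2
            assert rg_id in self.raid_groups, "LUNs are bound from an existing RAID group"
            self.luns[lun_id] = {"rg": rg_id, "size_gb": size_gb}

        def create_storage_group(self, name):                         # Step 3
            self.storage_groups[name] = {"luns": [], "hosts": []}

        def add_lun(self, name, lun_id):                              # Step 4
            self.storage_groups[name]["luns"].append(lun_id)

        def connect_host(self, name, host):                           # Step 5
            self.storage_groups[name]["hosts"].append(host)

    array = Array()
    array.create_raid_group(0, disks=["0_0_0", "0_0_1", "0_0_2", "0_0_3", "0_0_4"], raid_level="RAID5")
    array.bind_lun(10, rg_id=0, size_gb=200)
    array.create_storage_group("oracle_sg")
    array.add_lun("oracle_sg", 10)
    array.connect_host("oracle_sg", "dbhost01")
    ```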

    Creating RAID Groups


    RAID protection levels are set through a RAID group.

    Physical disks are part of one RAID group only, and drive types cannot be mixed within a RAID Group. A RAID group may include disks from any enclosure, RAID types may be mixed in an array, and RAID groups may be expanded.

    Users do not access RAID groups directly.

    Creating RAID Groups

    [Diagram: a 5-disk RAID-5 group and a 4-disk RAID-1/0 group]


    Path Fault with PowerPath


    If a host adapter, cable, or channel director/Storage Processor fails, the device driver returns a timeout to PowerPath.

    PowerPath responds by taking the path offline and re-driving the I/O through an alternate path. Subsequent I/Os use the surviving path(s).

    The application is unaware of the failure (see the sketch below).

    [Diagram: host application(s) issuing I/O through PowerPath, the SCSI driver (SD) and multiple host bus adapters (HBAs), across the interconnect topology to the storage array]
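    A simplified model of the failover behaviour described above (not PowerPath's actual driver logic): an I/O that times out on one path marks that path dead and is immediately re-driven down a surviving path, so the application never sees the error. Path names and the simulated fault are hypothetical.

    ```python
    class PathTimeout(Exception):
        pass

    class MultipathDevice:
        """Toy path-failover model: retry a failed I/O on the surviving paths."""
        def __init__(self, paths):
            self.paths = {p: "alive" for p in paths}

        def issue(self, io, send):
            for path, state in self.paths.items():
                if state != "alive":
                    continue
                try:
                    return send(path, io)          # normal case: first alive path
                except PathTimeout:
                    self.paths[path] = "dead"      # take the path offline and
                    continue                       # re-drive on the next path
            raise IOError("all paths to the device have failed")

    def send(path, io):
        if path == "hba0":                         # simulate a cable/HBA fault
            raise PathTimeout(path)
        return f"{io} completed via {path}"

    device = MultipathDevice(["hba0", "hba1"])
    print(device.issue("write block 42", send))    # write block 42 completed via hba1
    ```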


    I/O with PowerPath Queues in Balance


    PowerPath dynamically balances the workload across all available paths.

    PowerPath provides the greatest performance improvement in environments where the workload is not balanced; workloads are seldom balanced and they change dynamically (see the sketch below).

    [Diagram: host application(s) whose queued requests are spread evenly by PowerPath across four HBAs and SCSI drivers, over the interconnect topology to storage]
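    Dynamic balancing can be approximated with a least-queue-depth policy: each new request goes to the path with the fewest outstanding I/Os, so uneven workloads spread out across HBAs. This is a sketch of that one policy, not PowerPath's actual algorithm, and the path names and request counts are illustrative.

    ```python
    import heapq

    def assign_requests(paths, requests):
        """Least-queue-depth dispatch: send each request to the least-busy path."""
        heap = [(0, p) for p in paths]             # (outstanding I/Os, path name)
        heapq.heapify(heap)
        placement = {p: [] for p in paths}
        for req in requests:
            depth, path = heapq.heappop(heap)
            placement[path].append(req)
            heapq.heappush(heap, (depth + 1, path))
        return placement

    result = assign_requests(["hba0", "hba1", "hba2", "hba3"], [f"io{i}" for i in range(8)])
    print({p: len(ios) for p, ios in result.items()})   # two requests per HBA
    ```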


    PowerPath Advantages


    Automatic
    Dynamic, intelligent load management
    Manages multiple I/O data paths to maximize performance and high availability
    Utilizes multiple data paths to provide the greatest efficiency

    Nondisruptive
    Path failover keeps your business in business
    Continuous access to information
    Online management and configuration

    Optimized
    Optimizes server and data path utilization by eliminating downtime
    Prioritizes bandwidth utilization
    Maximizes existing server investment

    Business Impact of PowerPath Features


    Automatic path failover and recovery: optimized performance and high availability, with no application disruption.

    Dynamic load balancing of I/O: consistent and improved service levels.

    Online configuration and management: improved manageability saves time and reduces maintenance cost.

    Policy-based management: optimized data management through user-selectable storage allocation policies.

    Automated server-to-storage I/O management: automated information utilization and optimized data movement.


    PowerPath Interoperability


    PowerPath Versus Other Products


    Veritas DMP

    Provides failover and limited load balancing capability

    SUN Alternate Pathing

    Failover only

    HP PVlinks

    Failover only

    Windows MPIO

    Failover only
