
7 Ways to Make Your Data Center Infrastructure More Efficient

Exploring Top Practices for Cutting Costs and Improving Productivity in Today's Data Center

    White Paper

by Kent Christensen, virtualization practice manager, Datalink, and Juan Orlandini, principal architect, Datalink

    August 2009

Economic pressures place a great emphasis on IT organizations to make wise infrastructure decisions that ensure the best ongoing use of resources. Striking the correct balance often comes down to developing strategies that not only solve the most pressing IT issues, but that also help drive greater efficiency in the data center infrastructure (including the servers, networks and underlying storage that fuel a company's core applications). This white paper outlines some of the top ways that organizations can achieve great efficiencies in their environments and drive benefits ranging from steep capital savings to streamlined operations. It also illustrates how Datalink customers have implemented these methods in their architectures.


Table of Contents

Overview: The drive toward efficiency
    Mastering the balancing act
    Why focus on efficiency
Efficiency Tip #1: Virtual backup with a snap
    Leveraging storage array-based snapshots
Efficiency Tip #2: Move away from one size fits all
Efficiency Tip #3: Add an extra D to your D to D
Efficiency Tip #4: Get more out of your primary storage
Efficiency Tip #5: Make smarter copies of your data
    Better manage your backup data
Efficiency Tip #6: Use smart storage for optimized replication and DR
Efficiency Tip #7: Simplify backup with global storage lifecycle technology
Conclusion
    Datalink can help


    Overview: The drive toward efficiency

    Mastering the balancing act

IT organizations face tough decisions. With many having already learned to go lean with their own staffing and budgetary needs, recent economic pressures now place an even greater emphasis on IT making wise infrastructure decisions to ensure the best ongoing use of resources.

Balancing these issues isn't easy, especially when pitted against extreme growing pains and the frequent IT demands typically associated with most companies' mission-critical applications. For many of Datalink's customers, striking the correct balance comes down to developing strategies that not only solve the most pressing IT issues but that also help achieve greater efficiency in their data center infrastructures.

As more and more companies embark on consolidation initiatives (including those surrounding server virtualization), it has become increasingly clear that there are often additional ways to reap huge returns. This has proven especially true for many Datalink customers that have evolved their virtual server environments to include efficiency gains in areas like data protection and remote disaster recovery.

This white paper shows some of the top ways Datalink customers have been able to achieve great efficiencies in their environments. From steep capital savings to streamlined operations, the results for these customers speak for themselves and will continue to pay off for years to come.

NOTE: For the purposes of this paper, our focus on efficiency in a data center infrastructure centers on the servers, networks and underlying storage that fuel a company's core applications. Data center infrastructure also encompasses the data protection and disaster recovery needs of this type of server-to-storage environment.

    Why focus on efficiency

What's so important about achieving greater efficiency? Think back a few years and recall when it used to take one IT staff member to manage a few terabytes of data. Today, with highly efficient technology and process innovations, it's common for the same staff member to manage a few hundred terabytes of data.

In a few years, imagine that same staff member managing 10 times the data they do now. Your IT budget and staff numbers have flat-lined and are likely to stay close to where they are now, yet your data volume has skyrocketed (see Figure 1).
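That ten-fold figure follows directly from compounding the assumed 60% annual growth rate behind Figure 1. A quick, purely illustrative arithmetic check in Python:

    # Compounding a 60% annual data growth rate (the paper's assumption)
    # reaches roughly ten-fold in about five years.
    capacity = 1.0
    for year in range(1, 6):
        capacity *= 1.60
        print(f"year {year}: {capacity:.1f}x starting capacity")
    # year 5 prints ~10.5x, which is where the ten-fold figure comes from.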

Another challenge that accompanies massive storage growth is that existing practices will not scale with the capacity growth, at least not within the projected budget. While the price of storage is dropping, it is not dropping as fast as capacity is growing. In addition, with capacities growing at such an alarming rate, supporting processes like backup and disaster recovery (DR) are strained or broken.

How will you close this gap? How will your infrastructure accommodate the storage and management of such a growing volume of data? More importantly, how can you lay the groundwork now to sustain the innovation you'll need to avoid massive data center expense later on?

    In essence, you have two choices:

    1. Learn how to slow or reduce your data growth

    2. Learn how to manage more with the same IT staff resources

In either case, you are required to look at ways that lead to greater efficiency in your data center infrastructure. Surprisingly enough, the decisions you make in this area don't always mean buying more. Often, much headway can be achieved by incorporating many efficiency-driving practices or features you may already have available in your current infrastructure.

Datalink works with clients not just to incorporate current best practices and efficient technologies, but also to help them strategically apply such technologies toward the bigger picture: meeting the increasingly demanding needs of the business without spending more or hiring more.

The by-products of efficiency

The recommendations outlined in this paper can help drive the following efficiency by-products:

• Delayed capital expenditures
• Significant reductions in TCO
• Steep reductions in risk associated with remote DR and local/remote data protection, along with associated cost savings


[Figure 1: Assuming organizations have a 60% data growth rate, their storage requirements will increase ten-fold. With IT headcount remaining flat, this creates a gap.]


• Lower maintenance charges
• Substantial savings on power and cooling
• Sizable decreases in server and storage footprints
• Streamlined management of data growth with no added IT headcount
• The ability to store more data within the existing infrastructure
• Dramatic boosts in productivity, service levels and end-user satisfaction

We encourage readers to explore the efficiency tips found here and contemplate how one or more of these methods could lead to positive results in your own environment. While still valid for a broader IT audience, many of the technologies mentioned throughout this paper become even more valuable for those in virtual server environments.

    Efficiency Tip #1: Virtual backup with a snap

Server virtualization brings with it incredible savings in consolidation. Consolidating as many as 10 or more physical servers onto one is an amazing efficiency breakthrough. Yet, one challenge with this type of consolidation is the potentially negative impact it can have on server backup processes. The table below illustrates a few of these challenges.


Approach: Backup in the old days
Description: With an abundance of physical servers, many organizations subscribed to common backup wisdom: put a backup agent on each server, manage backup jobs with one or more master backup servers, then send the finished backups to tape (or, increasingly, to some type of intermediate disk-based backup target like that found with virtual tape library/VTL technology). While sites may have had to deal with some network congestion, longer backup windows, tape inefficiencies and the need to schedule and stagger backup jobs accordingly, the risk of maxing out an individual server's CPU resources for backup wasn't usually a factor.

Approach: Backup in virtual server environments
Description: Virtual machine environments are a whole different story. Here, one physical server's CPU resources can be in high demand already as it supports the needs of several hosted virtual machines. Adding a backup agent inside each virtual machine can add extra strain on the physical server and negatively impact overall performance. Mission-critical application platforms support the offload of backup processes via APIs that interface with more efficient off-host data protection applications. For example, VMware Consolidated Backup (VCB) and its centralized VCB Proxy Server offload the data movement and leverage snapshot functionality to significantly reduce the backup impact on each ESX Server's CPU resources. For some IT environments, use of VCB with third-party data protection software can be a viable option. As VMs proliferate and the need for multiple proxy servers becomes apparent, however, backup times, backup queues and the complexity of backup jobs and data recovery may still increase. Virtual server performance can also be impacted as virtual machines and applications remain in state-stable mode until the VCB snapshot backup process is completed.


    Leveraging storage array-based snapshots

Both in and outside of virtual server environments, Datalink clients have seen considerable efficiency gains by offloading server-side processing onto their underlying network-based storage array. Backup environments are one example of the merits of this approach. For IT organizations looking to make a change to their backup environments, this first efficiency tip suggests the alternative use of storage array-based snapshot technology.

You can think of this technology as disk-based snapshots originating from within the virtualization software layer of the storage array itself. Many storage system vendors have evolved their array-based snapshot technology to integrate closely with VMware and other server virtualization vendors.

For VMware, this means storage-based snapshots work first with VMware APIs in order to create system state-stable snapshots of virtual machines. Some can also interface with the underlying application to create application-consistent as well as crash-consistent point-in-time copies of the data. The underlying copy/snapshot and backup processing leverages storage resources that have little or no impact on the virtual server environment or the hosted applications. Using this technology, each VM can often be backed up in just a few seconds. The process of backing up entire virtual server environments can also be completed in minutes, as opposed to the hours typically required for numerous backup jobs to be performed with VMware Consolidated Backup (VCB) and third-party data protection.

Benefits of this approach often include dramatic reductions in backup windows, significant improvements in recovery points and recovery times, improvements in virtual server performance, and an easier platform for off-site disaster recovery.
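To illustrate why this offload shrinks the backup window, here is a minimal sketch of the workflow. The hypervisor and array client objects are hypothetical stand-ins for a hypervisor's quiesce/snapshot API and a storage vendor's snapshot SDK; this is not any particular vendor's interface.

    # Minimal sketch of an off-host, array-based snapshot backup of virtual machines.
    # "hypervisor" and "array" are hypothetical clients, not real SDK objects.
    import time

    def backup_datastore(hypervisor, array, datastore, vm_names):
        """Quiesce each VM briefly, snapshot the underlying volume once, then resume."""
        # 1. Ask the hypervisor for a consistent state on each VM (flush guest I/O).
        for vm in vm_names:
            hypervisor.quiesce(vm)
        try:
            # 2. One array-side, point-in-time snapshot covers every VM on the volume.
            #    The copy work happens inside the storage controller, not on the host.
            snap_id = array.create_snapshot(volume=datastore,
                                            name=f"{datastore}-{int(time.time())}")
        finally:
            # 3. Release the VMs immediately; the host-side pause is seconds, not hours.
            for vm in vm_names:
                hypervisor.resume(vm)
        # 4. The snapshot can now be mounted to a backup server or replicated
        #    off-site without consuming host CPU.
        return snap_id

The key design point is that the host only pays for the brief quiesce step; the data movement stays inside the storage array.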

    Efficiency Tip #2: Move away from one size fits all

Another way Datalink clients have begun to get ahead of the typical costs associated with exponential data growth is by treating certain types (or classes) of data in different ways, especially in regard to:

• The type of reliability and performance needed
• The type of local and remote data protection required


Efficiency Gains from the Real World

A Datalink client in the natural energy field used array-based snapshot technology to protect over 160 virtual machines (along with existing, non-virtualized servers). Moving from its prior, ineffective tape-based backup processes, the client was able to subsequently:

• Decrease maintenance costs and avoid a $1M+ investment in a new tape backup system
• Eliminate tape and backup-related downtime, with significantly less IT management time required
• Drop backup/recovery times from its prior 1-2 days to just minutes (or seconds, in some cases)
• More easily initiate remote disaster recovery through use of related replication technology



For example, not every application needs to be supported by high-cost Fibre Channel disk drives. Likewise, not every type of data needs the same levels of data protection, replication, or recovery time objective (RTO) and recovery point objective (RPO) associated with disaster recovery. Great savings and efficiencies can be gained by identifying and classifying data sets into their associated levels of service. Certain technologies are helping IT organizations get there faster. Datalink also offers services that help organizations align their service level agreements (SLAs) and classes of data with their business requirements. As a result, decisions about the technical infrastructure required to support their priorities become much clearer. In the end, this type of tiered data model often gives organizations improved ability to identify and meet or exceed service level objectives.
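To make the classification idea concrete, here is a small, purely illustrative sketch of mapping data classes to service levels. The class names, tier labels and RPO/RTO values are example assumptions, not recommendations.

    # Illustrative only: data classes mapped to the service levels a tiered-storage
    # exercise might produce. Values are placeholders, not sizing guidance.
    from dataclasses import dataclass

    @dataclass
    class ServiceLevel:
        tier: str              # storage media backing this class
        rpo_minutes: int       # acceptable data loss
        rto_minutes: int       # acceptable recovery time
        replicate_offsite: bool

    SERVICE_LEVELS = {
        "mission_critical":   ServiceLevel("FC/SSD", 15, 60, True),
        "business_important": ServiceLevel("SATA", 240, 480, True),
        "archive":            ServiceLevel("near-line SATA", 1440, 2880, False),
    }

    def placement_for(data_class: str) -> ServiceLevel:
        """Look up where a data set belongs instead of defaulting everything to FC disk."""
        return SERVICE_LEVELS[data_class]

Once such a table exists, infrastructure decisions (which tier, which protection scheme) follow from the data class rather than from a one-size-fits-all default.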


Efficiency Gains from the Real World

Example #1: One large Datalink client in the healthcare field had serious data overload before it implemented a combined file virtualization/data tiering strategy that allowed it to analyze and classify its growing slate of unstructured file data (which had already grown to over 100 million files). Some of the client's prior issues included low storage utilization rates (50% per device), large backup windows, downtime associated with lengthy and involved data migrations, and data protection/management issues associated with its 60% annual data growth rate. After project implementation, the client:

• Delayed capital purchases of $1.2M and related annual maintenance costs of $240K per year
• Saved money implementing lower-cost SATA drives for certain types of data
• Increased utilization from 50% to over 80%
• Improved backup times to become 20 times more efficient
• Went from a disruptive, 6-month data migration of just 0.5TB to an automated, non-disruptive migration of 20TB of data in the same timeframe

Example #2: Another Datalink client incorporated near-line SATA storage to archive its production snapshots, along with the selective use of solid-state disk (SSD) technology. The move to near-line SATA storage alone yielded impressive results, which included:

• Four-fold savings in capital
• 50% reduction in power consumption

A further study by Datalink assessed the benefits that SSD storage could bring the client. The analysis showed the following benefits of moving the client's selected data sets to SSD storage:

• A ten-fold increase in performance, with no change in capital expenditure per terabyte
• A three-fold savings in electricity and cooling costs (as compared to the 15,000 RPM Fibre Channel drives that had previously hosted the data)


    Efficiency Tip #3: Add an extra D to your D to D

As the volume of primary storage continues to grow, it also impacts the volume and reliability of backup data. A disk-based backup solution, such as a virtual tape library (VTL) or disk-to-disk (D2D) copy, not only improves backup and restore performance, but also greatly increases reliability. However, as the popularity of disk-to-disk backup grows, the volume of storage required to retain data for recovery, archive or compliance can grow as fast as or faster than that for primary storage.

This often leads to ever-increasing requests to buy more disk shelves just to contain the backup data in your infrastructure. Thankfully, efficient technologies like deduplication now offer another alternative that extends the use of existing disk storage, often by many months.

In the case of disk-based backup, Datalink has incorporated deduplication technology into clients' existing backup environments in a number of ways. Depending on the customer environment, deduplication technology may be incorporated as an inline system or a post-processing function. It may be carried out at the source of the data or at the destination. And it may also be deployed as a component of a company's primary storage architecture or as a separate appliance within the organization's backup, archival or secondary storage infrastructure. In any case, deduplication added to D2D environments typically results in significant efficiency gains. For more information on this topic, see the Datalink white paper, "An In-Depth Look at Deduplication Technologies," at datalink.com.
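The space savings come from storing each unique chunk of data only once. The following toy sketch illustrates the principle with fixed-size chunks and SHA-256 fingerprints; real products use far more sophisticated chunking (variable-length, inline or post-process), but the saving mechanism is the same.

    # Toy illustration of deduplication: identical chunks are stored once and
    # referenced thereafter. Chunk size and data are illustrative assumptions.
    import hashlib, os

    class DedupStore:
        def __init__(self, chunk_size=4096):
            self.chunk_size = chunk_size
            self.chunks = {}          # fingerprint -> chunk bytes (stored once)

        def write(self, data: bytes):
            """Return the list of fingerprints that reconstruct 'data'."""
            recipe = []
            for i in range(0, len(data), self.chunk_size):
                chunk = data[i:i + self.chunk_size]
                fp = hashlib.sha256(chunk).hexdigest()
                self.chunks.setdefault(fp, chunk)   # duplicates cost nothing extra
                recipe.append(fp)
            return recipe

        def physical_bytes(self):
            return sum(len(c) for c in self.chunks.values())

    base = os.urandom(1_000_000)                      # yesterday's full backup
    changed = base[:900_000] + os.urandom(100_000)    # today's: ~10% changed
    store = DedupStore()
    store.write(base)
    store.write(changed)
    print(store.physical_bytes())   # roughly 1.1 MB stored for 2 MB written

Successive full backups of mostly unchanged data behave like the example above, which is why D2D plus deduplication stretches existing disk for months.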


Efficiency Gains from the Real World

Another Datalink client in the energy field had begun to seriously struggle in its efforts to back up several hundred servers to its already overwhelmed tape library. Juggling more than 700 backup tapes, backup slowdowns, multi-hour restores, and relatively high failure rates, the company's IT department knew it needed to make a change. It also needed to discover a better way to accommodate even more data from remote sites as part of a larger data center consolidation it had already begun. After assessing the situation, Datalink recommended that, in conjunction with its move to VMware to reduce server footprints, the organization employ D2D technology with deduplication. This decision proved key to greater savings, such as:

• Reduced backup storage capacity needed, from over 800TB to roughly 33TB
• Extended backup retention times by 300%
• Achieved a storage capacity savings ratio of 100:1 (required just 1/100 of the storage capacity previously used) for VMware backup data, with one application achieving a 385:1 capacity savings ratio
• Yielded an average capacity savings of 23:1 across all virtual and non-virtual applications through the combined use of compression and deduplication
• Reduced backup administration from one day a week to 10 minutes per day
• Cut recovery times by 50%
• Significantly reduced footprint and cooling costs and achieved a 45% reduction in power utilization compared to expanding the company's legacy tape infrastructure


    Efficiency Tip #4: Get more out of your primary storage

Similar to the previous efficiency tip, this guideline tries to move IT organizations away from the unending cycle of application data growth that then requires more and more disk trays to support it. While some growth in disk storage may be unavoidable, there are a few storage optimization technologies that can significantly slow disk demand. These allow you to make the most efficient use of the disk you already have in your data center environment. They can also help you significantly postpone the purchase of additional disk storage. The most significant technologies for optimizing the use of your primary storage include:

• Deduplication (of primary storage). Similar to deduplication in backup environments, some storage vendors now offer deduplication of primary storage as well. For growing VMware environments with lots of duplicate VM system and OS file data, this can mean the ability to reclaim as much as 60-80+% of the storage capacity previously allotted to virtual machines. Across multiple application data types, both in and outside of VMware environments, deduplication often frees over 50% of currently used storage space.

• Thin provisioning of storage volumes. Storage systems that use their own virtualization layer to offer a virtual pool of storage also often offer thin provisioning functionality. This allows you to configure two volume sizes to store application data: the first is often a larger-sized volume that the application server or server administrator sees as available for use. The second, underlying volume is the thin amount of disk space that the application has actually used within the larger storage pool. The difference between the two volume sizes can often be used by other applications and servers, extending the usable life of an organization's storage (a simplified sketch of this idea follows this list). Incorporating just this type of thin provisioning technology, Datalink clients in VMware environments often report the ability to extend their current storage capacity by 30-40%.
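Here is the promised sketch of the thin-provisioning idea: the volume a server "sees" is larger than the disk actually reserved, and only written blocks consume physical capacity from the shared pool. Pool sizes and volume names below are illustrative assumptions.

    # Hedged sketch of thin provisioning; numbers are illustrative only.
    class ThinPool:
        def __init__(self, physical_tb: float):
            self.physical_tb = physical_tb
            self.volumes = {}                     # name -> [presented_tb, used_tb]

        def create_volume(self, name: str, presented_tb: float):
            # The application server sees 'presented_tb'; nothing is consumed yet.
            self.volumes[name] = [presented_tb, 0.0]

        def write(self, name: str, tb: float):
            # Physical space is drawn from the pool only as data is actually written.
            self.volumes[name][1] += tb

        def pool_used_tb(self) -> float:
            return sum(used for _, used in self.volumes.values())

    pool = ThinPool(physical_tb=50)
    pool.create_volume("vmware_datastore", presented_tb=40)   # server sees 40 TB
    pool.write("vmware_datastore", 12)                        # only 12 TB consumed
    # The remaining headroom stays in the shared pool for other volumes to use.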


Efficiency Gains from the Real World

Example #1: Many Datalink customers have already begun using deduplication on their primary storage. Here are a few ways this technology has begun to drive efficiency:

• A manufacturing company reclaimed over 80% of its primary storage associated with VMware
• A medical device company reclaimed more than 65% of its primary storage
• An airline reclaimed 60% of its primary storage
• A Fortune 500 technology company reclaimed 24% of its primary storage

Example #2: Datalink recently worked with a law firm to implement thin provisioning. The results? The firm reclaimed 45 percent of its previously reserved storage capacity.


    Efficiency Tip #5: Make smarter copies of your data

    Better manage your backup data

Backup data sets are a prime example of the current proliferation of disk-based data. But they are not the only example of the multiple copies of primary application data that often accumulate throughout the data center infrastructure and drive storage costs significantly higher.

There may still be several other data copies in use for test, development, staging or even training, not to mention the data replicated to a local or remote site as part of a wider disaster recovery process. Fortunately, technologies now exist to dramatically reduce the disk capacity needs for multiple data copies. Including some elements mentioned previously, such technologies include:

• Array-based snapshots. In fact, innovation in storage system software can mean one vendor's array-based snapshots consume a much smaller footprint of storage capacity than another vendor's. This may come down to how efficiently the snapshot technology tracks and stores just the changes (or deltas) since the prior snapshot(s); a toy illustration of this delta tracking follows the list. It is also important to understand the performance impact of the different snapshot technologies.

• Array-based clones. Specifically in the areas of test and development, some storage systems offer a cloning mechanism based on prior data snapshots or copies of snapshots. Cloned data sets can often be made in a few minutes while consuming only a sliver of disk space. This is a significant change from the traditional lengthy provisioning, planning and 1:1 data copy requirements often needed for test and development environments. (Note: VMware offers a linked-clone capability and Lab Manager functionality that can also prove highly effective in customer environments.)

• Deduplication. To reduce the capacity needed for data copies, deduplication can be used on primary storage. If the data is then replicated to another site, it can mean less WAN bandwidth and less storage capacity on the other side. When used specifically on backup data, disk capacity needs also decrease dramatically.
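As promised above, here is a toy copy-on-write model showing why a snapshot's footprint is proportional to the blocks that change after it is taken, not to the volume size. It is a conceptual sketch, not any vendor's implementation.

    # Toy copy-on-write snapshot: a snapshot records only blocks that change
    # after it is taken, which is why snapshots and clones consume a "sliver".
    class Volume:
        def __init__(self):
            self.blocks = {}          # block number -> data
            self.snapshots = []       # each snapshot holds preserved old blocks

        def snapshot(self):
            self.snapshots.append({})         # empty until something changes
            return len(self.snapshots) - 1

        def write(self, block: int, data: bytes):
            # Preserve prior contents in every snapshot that hasn't seen this block.
            for snap in self.snapshots:
                if block not in snap:
                    snap[block] = self.blocks.get(block)
            self.blocks[block] = data

        def snapshot_overhead_blocks(self, snap_id: int) -> int:
            # Capacity a snapshot costs = number of changed blocks it preserves.
            return len(self.snapshots[snap_id])

    vol = Volume()
    for b in range(1000):
        vol.write(b, b"data")
    snap = vol.snapshot()
    vol.write(7, b"new")                       # only one block changes
    print(vol.snapshot_overhead_blocks(snap))  # 1, not 1000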



    Efficiency Tip #6: Use smart storage for optimized replication and DR

Datalink clients who have moved forward with server virtualization have since been able to make great efficiency strides in their disaster recovery initiatives. This is often the result of leveraging advances in server virtualization technology and the related integration of the storage systems with VMware and others.

When these technologies are used effectively, IT organizations benefit in many ways. One of those is an easier way to reach the high service level goals set for different areas of the business. These include painless yet significant improvements in disaster recovery-related RPOs and RTOs.

Frequently building on smart storage foundations like array-based snapshots, many organizations have begun to achieve more affordable off-site disaster recovery. This is accomplished through the combination of existing snapshot technology with related storage-based technologies for remote replication.

The smart storage copies can then be leveraged for an efficient failover and recovery process. For example, VMware vCenter Site Recovery Manager (SRM) leverages this architecture type to achieve a more automated, less time-intensive failover and recovery process. It can also significantly aid IT organizations in simplifying the periodic DR test activities they have underway.


Efficiency Gains from the Real World

Datalink recently worked with a medical technology company to make smarter copies of data. The company needed the ability to provide multiple copies (up to 30) of its ERP database for developers. Storing that many copies of the same data was extremely expensive. Additionally, making and continually resynchronizing so many copies was far too time consuming. The answer proved to be a solution utilizing array-based snapshot and cloning technology. This allowed for the creation of transparent, virtual copies of production data for use by developers. The organization is also using replication and WAN optimization technology for its DR strategy. Following are a few benefits of this environment:

• Production doesn't suffer. The organization makes a copy of the production data. The cloned volumes are created from the physical copy, so there is no impact on production.
• More efficient. The organization can provide (in just seconds) as many copies of the data as required by the developers.
• Greater flexibility. The developers can roll back to copies at almost any point in time, nearly instantaneously.
• More economical. The core data remains the same, so less storage is required. The only additional space required is for the changed blocks of data among the various data copies.
• Better ability to meet SLAs. The organization can meet its RPO of nearly zero and its RTO of less than four hours.
• Decreased storage requirements. With 20 TB of data on the ERP system, the storage requirements under the old method would have been approximately 400 TB (in order to have 20 development copies, which is common). The new scenario requires only 50-60 TB of storage (the primary data, the copy of that data, and the changed blocks that exist on the copies).
• Bandwidth cost savings. With the WAN optimization appliances located at a major data center and DR site, bandwidth requirements have decreased by about 50 percent.
• Fast. The WAN optimization technology has enabled the IT organization to go from a 36-day window to just a 3-day window for resynchronizing the data at the DR site.


The benefits of using this type of smart storage architecture are many:

• Reduced cost to deploy remote replication and off-site disaster recovery. Based on these types of innovative technologies, it's not uncommon for the cost of a remote, disk-based DR paradigm to be on par with (or even less than) investing in a legacy tape infrastructure. In contrast to first-generation replication technologies, new smart storage architectures also offer the flexibility to:
  - Replicate between different product models (within a vendor line) at the primary and secondary sites and, in some cases, between storage systems from different vendors.
  - Significantly reduce bandwidth requirements and telecom charges associated with remote replication and data transmission (a back-of-the-envelope bandwidth sketch follows this list).

• Reduced impact of any potential disaster. With much faster RPO and RTO goals now able to be affordably achieved, today's generation of storage-based DR architectures offer key efficiency gains just when you need them most.

Efficiency Gains from the Real World

Several Datalink clients have moved from tape-based backups alone to a more disk-based paradigm that capitalizes on smart storage snapshot and replication technology and WAN optimization. In one example, a Datalink client switched to using snapshot and replication technology for backup and remote replication. By combining this type of smart storage technology with a WAN optimization device and deduplication, the client:

• Maximized the use of its existing WAN pipe
• Achieved 60% reductions in the size of data replicated and transmitted for remote DR
• Reduced backup/recovery times significantly
• Automated its local and remote recovery processes for virtual machines

Efficiency Tip #7: Simplify backup with global storage lifecycle technology

Many IT environments still rely heavily on server-based data protection and the use of a master backup server combined with multiple media servers to protect a mounting group of non-virtualized application servers. Here, too, significant efficiency breakthroughs can be made by applying what some server-based data protection vendors call storage lifecycle technology to your existing backup environment.

Backup administrators who work in these growing environments are already familiar with the problematic and often excessive manual effort involved in:

• Load balancing clients and media servers to ensure backup job success
• Assigning and reassigning specific application servers to the proper backup media server




• Scheduling and juggling backup jobs with media server resources
• Troubleshooting backup issues
• Scaling out existing backup policies to new backup media servers and newer or expanding applications
• Setting service level goals around best attempts to back up and recover what is often perceived as the most critical systems, versus a wider, business-oriented policy approach to RTO/RPO goals and classes of service focused on each application's unique needs

Instead of relying on so much manual effort on the part of backup administrators, newer data protection software offers the ability to create a shared, load-balanced pool of media servers where automated storage lifecycle technology can then be more readily applied. Automated storage lifecycle policies derived from this technology then make it possible to more easily use, deploy or alter resources within this shared media server pool.

Using such new features gives administrators a much easier way to perform technology refreshes. For example, adding a newer, more powerful media server to the mix becomes a simple exercise of automatically adding another resource to the existing backup server pool.

Instead of requiring administrators to perform and manually re-engineer 1:1 mappings of application servers to their associated backup media servers, storage lifecycle technology now allows applications to be classified based on the kind of backup and recovery services they might require. This includes defining (at the data protection software level) business objectives for each class of data surrounding backup frequency, retention, type of backup and type of recovery.

Once a high-level backup policy is defined, carrying out the policy becomes an automated function, with the data protection software automatically performing much of the scheduling, load balancing, media server assignment and scale-out typically required to ensure effective data protection.
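To make the contrast with static 1:1 mappings concrete, here is a small, hypothetical sketch of how class-based policies and a shared media server pool might be modeled. The class names, policy values and least-loaded selection are illustrative assumptions, not any vendor's actual API.

    # Hedged sketch: objectives travel with the data class; the software, not the
    # administrator, picks a media server from the shared pool.
    from dataclasses import dataclass

    @dataclass
    class LifecyclePolicy:
        backup_frequency_hours: int
        retention_days: int
        backup_type: str          # e.g. "snapshot" or "full+incremental"
        offsite_copy: bool

    POLICIES = {
        "tier1_db":    LifecyclePolicy(4, 90, "snapshot", True),
        "tier2_files": LifecyclePolicy(24, 30, "full+incremental", True),
        "tier3_dev":   LifecyclePolicy(168, 14, "full", False),
    }

    def assign_media_server(client: str, data_class: str, pool_load: dict):
        """Pick the least-loaded media server; no hand-maintained client map."""
        policy = POLICIES[data_class]                # objectives come from the class
        server = min(pool_load, key=pool_load.get)   # simple load-balancing stand-in
        pool_load[server] += 1
        return server, policy

    pool = {"media01": 3, "media02": 1, "media03": 2}
    server, policy = assign_media_server("erp-db-01", "tier1_db", pool)
    # -> "media02"; adding "media04" to the pool is all a technology refresh needs.

The design point is that adding or retiring a media server only changes the pool, while the business-driven objectives stay attached to each data class.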


Efficiency Gains from the Real World

Datalink has helped clients achieve the following benefits by applying this type of storage lifecycle technology to their data protection processes:

• Significant operational efficiencies (backup administrators can now do more in less time, since much of their prior backup/recovery activity is now automated)
• Success at rapid, horizontal scaling of data protection architectures
• A strong business-level focus on data protection and recovery that goes from the administrator's best attempt to a more clearly defined set of business-driven RTO/RPO goals that focus on different data classes and different tiers of service
• A more focused, global policy approach with higher resource utilization, more streamlined management and a better ability to meet and exceed service levels


    Conclusion

    Datalink can help

IT decisions that build on efficient technologies and practices like those outlined here can go far toward helping you solve your IT organization's pressing challenges. They can also help set a foundation for a stronger service-oriented architecture that can ultimately lead you closer to a future cloud computing infrastructure that's highly efficient.

A paradigm shift is needed to keep IT budgets from spiraling out of control as organizations struggle with exponential application and data growth. Server virtualization technology represents the beginning of this shift and is leading to significant consolidation.

Now, optimization of data in these smaller physical footprints is the next step. Technologies that strive to let you do more with less will continue to be a key part of getting there.

As a leading information storage architect, Datalink analyzes, designs, implements, manages and supports information storage infrastructures and solutions in a variety of environments, including those with extensive server virtualization strategies.

Datalink is known for its balanced insights, field-tested best practices, and practical advice and support that help today's data center make the best use of technology to meet business and IT needs.

Not tied to one manufacturer or one suite of products, we use technologies from multiple industry-leading and competing innovators and tailor solutions to your needs. Inside access to manufacturers' research and development roadmaps, resources, and technologies provides us with a unique vantage point.

Our practice areas span solutions made up of hardware and software from multiple storage innovators, along with a comprehensive suite of professional and support services. We specialize in the areas of backup and recovery; consolidation and virtualization; business applications; and archive and compliance.

To learn more about how Datalink can help your organization gain greater efficiency in the data center, contact Datalink at (800) 448-6314 or visit www.datalink.com.
