
Tech OnTap Archive February 2008

HIGHLIGHTS

Exchange Archival Case Study
Real-World Storage Performance
Hospital Prescribes Oracle on NFS
Resources:
• Demystifying Dedupe Savings Ratios
• High-Performance Workloads Guide

From Months to Minutes: Rapid Apps Provisioning Case Study
Rich Angalet, Manager, Sprint
Dale Elmer, Director of IT Operations Management, Sprint

Want to mount up to 1TB of storage in just a few seconds? How about deploying a standard three-server/database/Web environment in minutes with only one person? The Sprint team shows you how.

More

“Controversy: NetApp Outperforms EMC in SAN Database Benchmark.” One key takeaway from this result is that turning on a simple feature like snapshots can radically change performance. Don’t let a bad experience with EMC’s snapshots scare you away from NetApp’s.

Dave's Blog

DRILL DOWN
High-Performance Random-Access Workloads Config and Tuning Guide
Read up on best practices NetApp has developed for improving the performance of its storage systems for demanding workloads.

iSCSI Software Boot for Windows
Step-by-step guide on how to prepare, configure, and boot servers using NetApp.

Demystifying Dedupe Savings Ratios
Confused about how to get the real dedupe savings ratios? Watch this video and learn how to get an apples-to-apples comparison between vendors.

Accelerate Application Development
Hear how Marvell Semiconductor Inc. reduced database cloning time by 83% to deploy applications faster. Plus, an online technical Q&A transcript from the Webcast.

TIPS FROM THE TRENCHES
Oracle on NFS: One Hospital’s Prescription for Maximum Flexibility
Jess Carruthers, Project Manager

Architecting a storage environment for Oracle® can be a tricky proposition. While there is no simple “one size fits all” solution, this article takes a closer look at one busy medical center that chose NFS to meet its Oracle storage needs.

More

Implementing Exchange Archival: Mailbox Management and Compliance
Shaun Mahoney, Consulting Systems Engineer, NetApp

E-mail archival often has two components: archival for mailbox management and journaling for regulatory compliance. Learn about the planning and implementation process that enabled a large financial company to:

• Migrate most users to Exchange Server 2007
• Add archival capability to get the volume of e-mail under control and eliminate the need for PST files
• Add journaling to meet regulatory compliance

More

ENGINEERING TALK

Real-World Storage Performance
Benchmarking NetApp Versus EMC
Stephen Daniel, Director, Database Performance, NetApp

The NetApp FAS3040 goes head to head with the EMC CLARiiON CX3 Model 40 with snapshots enabled. Get the full download on what makes SPC-1 a real-world benchmark and the results of the testing.

More

FEEDBACK

Tech OnTap February 2008 | Page 1


TECH ONTAP ARCHIVE

Rich Angalet
Manager, Sprint

With over 25 years of IT experience, Angalet has expertise in operations, hardware, operating systems, networks, data centers, facilities, and automation. He is currently responsible for the implementation of Sprint's 4S initiative. Rich attended Rutgers University and enjoys motorcycles, classic cars, and the outdoors.

Dale Elmer
Director of IT Operations Management, Sprint

Dale Elmer started in IT in 1976 at Centel, a small utilities company that merged with Sprint in 1993. Dale held various IT positions within Sprint. Prior to his current role, Dale was the director of Quality Assurance for Sprint's 4S initiative.

Simplicity, Speed, Standards, and Stability: Provisioning Model for Rapid App Deployment
By Rich Angalet and Dale Elmer

At Sprint, new applications undergo testing by our Test Environment Operations team before we release them into production for use by internal customer groups. This follows a familiar development/test/production cycle.

But, as the organization and IT infrastructure grew in both size and complexity, we started experiencing significant slowdowns in our ability to deploy test and production environments, sometimes stretching into months before the underlying infrastructure could be arranged.

With more and more solutions being developed and awaiting internal release to production, our team began looking for ways to support the more rapid innovation and application deployment cycles Sprint required.

This is when we began discussing the factors keeping us from achieving rapid deployment of our test environments, and when our initial "4S" services provisioning model was born. Having heard about the success of Australia's Telstra with its OmniPresence "Storage Everywhere" project, our Sprint team started to consider how a more flexible server-and-storage-farm infrastructure could help us more quickly provision, tear down, and restore our test and production environments. This services provisioning model ended up centering on the four S's we viewed as critical to achieving this goal:

Simplicity: We knew the current deployment process—involving multiple teams and multiple IT layers—was way too complex. Our new infrastructure would have to take fewer steps and require less support from various groups to deploy test environments.
Speed: We set out to achieve a zero-hour service level agreement for delivery of test environments to our customers. This was a high bar to achieve, since our typical environment delivery cycle could take weeks or even months.
Standards: We had so many variations in our environment. It was unbelievable. By standardizing on the key infrastructure components to be used in the server, storage, database, middleware, and application environments, we thought it would simplify our efforts and deliver what we needed a lot faster. We wanted to get to the point where we could quickly build 50 to 100 servers exactly alike, if we wanted.
Stability: We wanted our deployments to be stable enough to be built up, then torn down or rebuilt just as quickly, with the ability to quickly reprovision the freed capacity for another test or production environment.

RELATED INFORMATION

Telstra Delivers Storage Everywhere

Friday Institute: Enabling Software as a Service Through Virtualization

About Sprint Nextel

Sprint Nextel offers a comprehensive range of wireless and wireline communications services bringing the freedom of mobility to consumers, businesses, and government users. Sprint Nextel is widely recognized for developing, engineering, and deploying innovative technologies, including two robust wireless networks serving approximately 54 million customers at the end of 2007, industry-leading mobile data services, instant national and international push-to-talk capabilities, and a global Tier 1 Internet backbone.

Virtual Reality: Building an Architecture Capable of Evolving in Any Direction

Many types of organizations involve large numbers of geographically distributed locations. This can leave IT teams faced with supporting numerous remote sites—but with minimal remote staff and expertise.

To solve this dilemma, The Friday Institute developed an end-to-end architecture involving server virtualization, advanced cloning technology with NetApp FlexClone®, and end-to-end management to

Tech OnTap February 2008 | Page 2


Focusing on these four areas, we were able to develop a fast service provisioning model that now allows us to use just a few commands to rapidly roll out a server and install Oracle®, WebSphere, and any other application components. This model includes the automatic creation of storage volumes and the ability to allocate storage so rapidly that we've seen the system mount as much as 1TB of storage to a host in just a few seconds.

Project results:

Ability to provision 1TB to a host in a few seconds
Ability to automatically discover and apply protection policies to newly provisioned data sets
Reduction of provisioning time of database, application, and storage on a server down to just 15 minutes

Frustration and Delays: The Impetus for Change
When we received a customer request to set up a new test environment, the process used to involve coordinating with at least six teams. The server component might be provided by Facilities; the [storage] might be handled by our Systems Administration organization. Then, another group would install the middleware or database layer. Someone else would be responsible for the application side, or the installation of [BEA] WebLogic or [IBM] WebSphere, if those were required. For the most part, every piece of software was handled individually by other groups.

Since our deployment of environments depended so heavily on the availability of every team to do the work, it slowed the process down. By the time we had organized everyone's schedule, the time from request to deployment typically spanned from weeks to months.

We began to wonder how much faster we could deploy test environments if we minimized the reliance on other teams for execution of their component. What if, instead, we could just automate their portion of the implementation, based on their group's own preagreed policies and processes? If we could successfully separate the "execution" side of any environment's deployment from the "policy" side associated with each group, what efficiencies could we gain?

To help test this premise, we decided to perform an informal pilot last year with the support of an executive sponsor. Using a core team of just four to five people, we were able to create a prototype process that was still largely manual. What the prototype allowed us to do, however, was prove our ability to dramatically reduce delivery time of a test or production environment to customers. In our pilot, we were able to go from 80 hours of calendar time down to just four hours and one person. Doing this allowed us to separate the actual time and labor it took to perform tasks, which were relatively small, from the group coordination and scheduling functions.

One time-saving deployment strategy used during the pilot was to replace the organization's traditional middleware and custom software installation processes with an installed image of the middleware application, created with NetApp Snapshot™ software operating on one of our NetApp FAS storage systems. Having previously been stored and catalogued in its installed form, the Snapshot copy could then be rapidly deployed to a new testing environment with a quick mount of the Snapshot copy using NFS to a designated target server.
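To make this concrete, here is a minimal sketch of what such a deployment mount could look like from a target host; the storage system, volume, Snapshot, and mount-point names are hypothetical, not Sprint's actual configuration, and the image is mounted read-only because Snapshot copies cannot be modified:

# Mount a previously catalogued, pre-installed middleware image (captured
# as a Snapshot copy) onto a new test host over NFS. Add -t nfs or -F nfs
# depending on the host operating system.
mount -o ro,vers=3,proto=tcp \
    filer1:/vol/mw_images/.snapshot/websphere_installed \
    /opt/middleware/websphere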

With our initial theory proven, we embarked on a larger project leading to the development of our current 4S services provisioning model.

Moving to a Shared Services Delivery Platform
We set out to develop and test a new compute farm that would allow us to quickly provision and scale both server and storage resources to meet the needs of any host. After analyzing where we could gain the biggest win in deployment, we decided to focus first on automating the deployment of test environments consisting of Sun™ Solaris™ servers and an Oracle Database, BEA WebLogic, IBM WebSphere, or Sun iPlanet. Applications with one or more of these components represented roughly 45% of the total test environments being deployed. These would be our first test cases for our emerging services provisioning model.

We developed a server and storage farm, both running over a TCP/IP network and being managed by an infrastructure management software layer. The infrastructure management layer would be used to help automate and speed the delivery of resources from within the farm. A few key functions the infrastructure management

deliver software services on demand. The resulting framework is a virtual, open environment that offers flexibility unimaginable in traditional environments.

Learn more. Read the article.

Simplified Backup and Replication Management with NetApp Protection Manager

Protection Manager is an intuitive, policy-based management application for NetApp disk-based backup and replication technologies, including SnapMirror, SnapVault, and Open Systems SnapVault. This tool enables administrators to apply predefined policies to their data, thereby eliminating ambiguity and the potential for error inherent in manual management.

A new demo shows how Protection Manager solves three common management challenges:

How do you make sure that everything is protected when data is distributed everywhere?
How do you scale your data protection environment without spending all day on tedious manual tasks?
How do you rapidly roll out global changes across all sites and systems?

Watch the demo.
Read Three Backup and Replication Management Challenges Solved.

Tech OnTap February 2008 | Page 3


Figure 1) Sprint's 4S server and storage farm services delivery model.

layer needed to perform included:

Provisioning
Storage and backup management
OS deployments using Jumpstart

These functions, and the farm's conceptual design, are shown in Figure 1. We anticipated each line in the figure, going from point A to point B, might involve significant custom development and integration work before we could turn the delivery of test environments and related services into a more automated, push-button process. When it got to the storage and backup management component, however, we were surprised at how little integration was required with NetApp Protection Manager, the application we chose to manage the process.

For the server farm, we deployed Sun Solaris servers, starting at 50 servers with an ultimate goal to move to about 100. For storage, we centralized on 100TB of NetApp FAS3000 series storage and 100TB of NetApp NearStore near-line storage as our primary and secondary storage. Systems are connected using an IP SAN that is architected to scale out to every corner of our data center. The architecture itself is built to scale to support thousands of hosts.

For our larger infrastructure management and provisioning component, we chose IBM Tivoli Provisioning Manager to help us manage, catalog, and automate the provisioning of prior server/operating system environments for reuse. To help us more quickly manage, back up, and provision the data sets associated with each test environment build, we evaluated a few backup and recovery applications before settling on NetApp Protection Manager, in conjunction with other NetApp data protection tools like NetApp Snapshot, SnapMirror®, and SnapVault®.

Determining the best storage and backup management solution turned into one of the harder aspects of the model to implement.

Data Protection Tools Put to New Use: Fast Provisioning and Release of Storage Resources
When we began to architect the 4S model, we knew we wanted a very highly available and resilient backup capability so that we could avoid potential delays in the delivery of services. We also knew we wanted to do more management of the data sets than just backing up the data in the event of a local or broader system failure.

We needed a solution that would allow us to provision the storage and perform the subsequent teardown or release of that service so that the underlying storage assets could be reused. At the same time, we wanted to "checkpoint" the test environment so that we could stop it, yet resume it again at some point in the future, thereby freeing up our server capacity in the meantime for better utilization.

Host-Centric Compared to Storage-Centric View
When we compared NetApp Protection Manager against other host-based backup applications, we liked that it offered a storage-centric instead of a host-based view of backup data. If we wanted to provision storage services for a new test environment, mount the volume(s) to a host, then dismount them and remove them, we felt a host-based backup approach would cause some of the storage to be "orphaned" and without an associated host. This was a key reason we wanted to move the host out of the picture when it came to protecting and managing the data sets.

After testing Protection Manager, we liked the fact that it maintained a storage-focused oversight of the underlying physical storage, volumes, Snapshot copies, and data sets. More importantly, it had an important feature not available in the other solutions we looked at: the ability to autodiscover storage components and underlying storage volumes within our NetApp FAS and NearStore storage systems. This was huge for us, as it meant we would not have to build months' worth of custom scripts in order to allow the system to discover and report on or manage the current state of various data sets.

We also liked the fact that Protection Manager allowed us to group data sets or volumes with common protection requirements, then apply a predefined backup/restore policy to them. These policies turned into another type of provisioning policy within our model. The approach also fit with our original vision of reducing the various groups' involvement in executing the test environment builds, while still providing them

Tech OnTap February 2008 | Page 4


Figure 2) Data sets and backups managed by NetApp Protection Manager.

the oversight of policy surrounding what the builds should contain.

We set up our shared storage farm to be generic and simple, with protection policies to back up our software clones, separate protection policies to back up user data, and separate protection policies for root volumes and storage system files. This process is outlined in Figure 2.

Examples of Rapid Storage Provisioning (and Reprovisioning)
NetApp Protection Manager worked well out of the gate. After just a few days spent implementing this aspect of the model and providing just two pages of written instructions, we were able to easily deploy new storage and test environments.

Now, to deploy a standard three-tier server environment (including three servers, one each for the database, application, and Web tiers, plus terabytes of storage), we've reduced the provisioning time down to just 15 minutes and one person, who now just needs to type in a few commands. That process previously took hours or weeks to complete.

In another scenario, we had already provisioned resources for one test environment based on the customer's initial requirement for an Oracle10g™ database. After the environment had been provisioned, however, we were made aware of a new requirement for the environment to run on Oracle9i™ instead. This might have been a nightmare with our old provisioning style. With our new services model, we were able to enter just a few commands to disconnect resources from Oracle10g and reconnect them to Oracle9i.

Chasing Efficiency First, Cost Savings to Follow
One of the interesting things about this project was the fact that we didn't enter into it initially with cost savings uppermost in our minds. Sometimes, too much focus on cost savings can actually water down a project. Instead, we were given the leadership support to really tackle the issues correctly by focusing on the business issues first. Our number-one business issue was not cost savings; it was how we could speed up the deployment of applications and environments into production. We feel we accomplished that with our 4S model and are now beginning work on its wider acceptance throughout other facets of the organization. We are starting to see the benefits of this approach, with subsequent cost savings in the form of better storage utilization and faster time to production.

The important thing for other groups to recognize is that this type of services delivery infrastructure doesn't remove their responsibilities and ownership of their piece of the infrastructure. Instead, it repackages their pieces in a way that allows them to be more proactive in setting and refining policy decisions and standards as we move forward. Also, by proving the merits of the project first with the support of a few core members and an executive sponsor, we were able to demonstrate a big win early that we hope will help the organization more quickly embrace the benefits such an architecture now offers to other areas as well.

Tech OnTap February 2008 | Page 5


TECH ONTAP ARCHIVE

Jess Carruthers
Project Manager

For the past nine years Jess Carruthers has managed storage systems from NetApp and competitors supporting Oracle® ERP, financial databases, clinical databases, document, and PACS imaging environments. In 2005, as a beta site for Data ONTAP® 7G, Jess and his team consolidated 28TB of Oracle and CIFS storage across nine NetApp systems to a FAS960c and a NearStore® system running SnapVault®. Utilization jumped from 50% to 76%. The team is currently expanding its environment to include three FAS clusters, three NearStore systems, and a FAS3070 at a remote DR site.

Customer Case Study
Oracle on NFS for Maximum Flexibility
By Jess Carruthers

Architecting a storage environment for Oracle can be a tricky proposition. While there is no simple “one size fits all” solution, there is a wealth of knowledge available for anyone who wants to deploy Oracle with NetApp storage, and much of that information derives from direct customer experience. This article takes a closer look at one NetApp customer that chose NFS to meet its Oracle storage needs.

I work for a regional medical center with a total staff of nearly 15,000 people spread across three hospitals, a rehabilitation center, primary care clinics, extended-care centers, a research institute, and a hospice center. Medical staff includes 3,100 physicians representing more than 91 medical and surgical specialties.

We use Oracle as the back end for our clinical care and financial applications, so Oracle performance and availability are critical. A tight IT budget coupled with a growing demand for storage—for Oracle as well as other applications—has created significant challenges.

NetApp storage in conjunction with NFS has helped us meet our growing storage needs without breaking the budget. With our NetApp infrastructure, we can react to change more quickly than we can with other storage solutions. In this article I'm going to describe how we configure Oracle to run over NFS, discuss our data protection and disaster recovery strategy, and explain the advantages we get from this approach.

An Evolving Oracle Environment
Like many busy Oracle shops, ours is far from static. We're constantly evolving, changing, and adding to meet business demands. We run E-Business Suite as well as a diversity of applications for patient tracking, billing, and archiving. We have database instances running on Oracle Database 8i, 9i, and 10g.

Two years ago, all our Oracle servers were monolithic HP and Sun™ servers with host-based HA for availability. Today, we're transitioning to Oracle Real Application Clusters (RAC) and slowly replacing or supplementing our big iron servers with clusters of cost-effective Linux® servers.

Today, we have a total of about 24 database instances, and we're adding about 1TB of new storage a month for Oracle and other purposes. This is where NFS comes in. We're adding storage and changing our storage configurations all the time, and NFS and NetApp make the whole process much simpler than if we were using SAN storage.

Why Oracle on NFS?
The main reason we prefer NFS for Oracle is that it allows us to adapt more easily. We can quickly expand or contract an NFS file system to meet changing needs. I

RELATED INFORMATION

Oracle on NetApp: NFS

Oracle on NetApp: SAN

Why Protocol Should Be Irrelevant

SnapManager for Oracle

Choosing a Storage Protocol for Oracle

Your choice of protocol depends on a wide variety of factors that include existing infrastructure, processes, and skill sets, in addition to the relative capabilities of each technology.

NetApp offers a range of implementation options and tools for deploying and managing Oracle environments using NFS, Fibre Channel, or iSCSI, alone or in combination. Find out more specifics from previous Tech OnTap articles:

Oracle on NFS
Oracle on Fibre Channel
Why Protocol Should Be Irrelevant

NetApp Blogs Involving NFS and Databases

Updates, presentations, and observations on many of the topics discussed in this article are available from technical blogs maintained by Sanjay Gulabani and NetApp technical director Mike Eisler. In addition, NetApp founder Dave Hitz recently blogged about the origins of direct NFS.

Tech OnTap February 2008 | Page 6


Figure callout: To meet the 5-minute RPO, Oracle archive logs are mirrored with SnapMirror every 4 minutes.

might take a 1TB aggregate and give each of eight applications 100GB while holding 200GB in reserve. We can then expand (or contract) any of those NFS volumes in a matter of minutes without disruption to ongoing operations. We can make changes whenever we need to, and it's no big deal.
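As an illustration of how lightweight these resize operations are, a flexible volume in Data ONTAP 7G can be grown or shrunk with a single console command; the volume name and sizes here are hypothetical:

vol size ora_apps01 +100g    # grow the NFS volume by 100GB, nondisruptively
vol size ora_apps01 -50g     # shrink it again, returning space to the aggregate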

Like most IT operations, we also have storage from other vendors. With our SAN hardware, expanding a volume is a lot more work. Expansion takes at least eight hours, and there's no way to automatically shrink a volume. With SAN storage systems, you find yourself overprovisioning to avoid getting caught short and putting off operations that are so easy on NetApp.

Another benefit of NAS over SAN is less administrative overhead. TCP/IP cards just work right out of the box. We don't have to spend a lot of time updating drivers and firmware as we would with HBA adapters. Our DBAs really prefer the NFS environment because it gives them much more autonomy. You can grow Oracle table spaces and data files, create or restore Snapshot™ copies of NetApp volumes without involving a storage administrator, and get the data you need up on any host (or multiple hosts) with a simple NFS mount—a particular benefit with Oracle RAC.

Today we have a total of 12 NetApp storage systems with a raw capacity of 190TB. In addition to storage for our database/application needs, we recently added a NearStore R200 to support our radiology and mammography image archive (PACS), and we've deployed a NetApp FAS3070 for off-site vaulting with SnapVault, to support our objective of eliminating tape from our environment.

Configuring NFS for Oracle
In the beginning, everyone was afraid that performance would be an issue, but if you build your environment correctly, performance should not be a problem. When we configure a NAS environment for Oracle, we basically apply the same rules you would apply to a SAN. We create a private network and use redundant switches to create a redundant fabric. Since Fibre Channel SAN runs at 2Gb per second or faster, we meet or exceed that performance level by aggregating several Gigabit Ethernet connections together to give us bandwidth of anywhere from 2Gb to 6Gb per second, depending on the application. Essentially, we're creating a dedicated SAN; it just runs a different protocol.
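On the storage system side, that kind of link aggregation is typically a multimode virtual interface (vif); a rough sketch with hypothetical interface names and addresses follows, with the host side using whatever trunking or bonding feature its operating system provides:

vif create multi ora_vif e0a e0b e0c      # aggregate three Gigabit Ethernet links
ifconfig ora_vif 192.168.50.10 netmask 255.255.255.0 up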

To get the best possible performance, some tuning is required for the TCP/IP stack and for NFS. Fortunately, NetApp has some great resources that tell you exactly what to do. (Find out more from a recent Tech OnTap article.)
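The exact settings vary by host operating system and are spelled out in those NetApp resources, but a typical Linux NFS mount for Oracle data files looks roughly like the following, with hypothetical volume and mount-point names: hard mounts, NFSv3 over TCP, large transfer sizes, and attribute caching disabled.

mount -t nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,vers=3,proto=tcp,timeo=600,actimeo=0 \
    filer1:/vol/oradata /u02/oradata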

Data Protection and Disaster Recovery
Whether you choose NAS or SAN for Oracle, data protection and DR are going to be critical considerations. For our Oracle environment, we have an established recovery point objective (RPO) of five minutes and a recovery time objective (RTO) of four hours for any given database. We achieve these goals using a combination of NetApp SnapVault and NetApp SnapMirror®, providing both data protection and DR through a completely tapeless solution.

Every night at 1:00 a.m. we put all our Oracle databases into hot backup mode and create a Snapshot copy of each one using a customized script. This takes about 10 minutes. Then, starting at 1:15 a.m., SnapVault runs and vaults all those changes to our DR facility. We throttle the transfer to 20MB per second so that we don't impact production applications. This gives us the equivalent of a full backup of every database every night with off-site storage. We maintain 20 days' worth of these nightly backups online. For database applications, we don't need to go back years. We just need to protect ourselves against possible application errors that would require us to roll back.
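The customized script itself isn't published here, but the general shape of such a nightly job is straightforward. The following is a hedged sketch only, with hypothetical storage system, volume, and path names; on Oracle 8i and 9i each tablespace is placed in backup mode individually rather than with a single ALTER DATABASE command:

#!/bin/sh
# 1. Put the database into hot backup mode.
sqlplus -s "/ as sysdba" <<EOF
alter database begin backup;
EOF

# 2. Create a Snapshot copy of the data volume on the storage system.
rsh filer1 snap create oradata nightly_`date +%Y%m%d`

# 3. Take the database back out of hot backup mode.
sqlplus -s "/ as sysdba" <<EOF
alter database end backup;
EOF

# 4. SnapVault then transfers the changes to the DR system, throttled to
#    roughly 20MB/sec (20480KB/sec); verify the -k flag against the 7-mode docs.
# rsh drfiler snapvault update -k 20480 /vol/oradata_sv/oradata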

All our archive logs are stored in separate volumes, and we use NetApp SnapMirror to sync those volumes to the DR site every five minutes. The combination of the two allows us to meet our five-minute RPO and four-hour RTO.
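An update cycle like that is normally driven by a schedule entry in /etc/snapmirror.conf on the destination storage system; the system and volume names below are hypothetical:

# /etc/snapmirror.conf on the DR system
# source:volume      destination:volume       args   minute-list hour day-of-month day-of-week
prodfiler:oralogs    drfiler:oralogs_mirror   -      0,5,10,15,20,25,30,35,40,45,50,55 * * *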

To recover, we copy the SnapVault backup to a read/write volume and then replay the archive logs that have been stored using SnapMirror. We use SnapVault instead of SnapMirror to protect Oracle data volumes and department share data to reduce storage costs. Our primary volumes are on Fibre Channel disk, and our SnapVault

All three blogs include an opportunity to post comments and submit questions:

Gulabani's Databases on NetApp Storage Blog
Eisler's NFS Blog
Dave's Blog: Oracle Optimizes Its Database for NFS

Tech OnTap February 2008 | Page 7


volumes are on SATA disk. We keep five days of Snapshot copies on primary storage and 20 days of Snapshot copies at the DR site. We are also in the process of deploying NetApp deduplication on our vaulted data to further save on space.

Keeping Ahead of the Curve
My organization is very proactive when it comes to efficiency. We have business operations analysts on site to look at workflow and how things should interact. They do process redesign before we deploy an application to make sure we are making the most efficient use of new software and hardware.

Like everyone in IT, we face a constant struggle to stay ahead of demands and keep up with growth. We have three DBAs and three admins dedicated to UNIX® and Linux servers and storage. Because our NetApp storage systems configured for NAS are so easy to manage, it takes less than one full-time equivalent administrator to manage it all, freeing up resources for other work. Over the past few years, our total NetApp storage has grown by 80TB, but we haven't had to add additional staff to manage it. NetApp makes it possible for us to get more done in less time with less staff.

Tech OnTap February 2008 | Page 8


E-Mail Archival Solutions to Manage Increased Data Growth

E-Mail Archive and Compliance: Eliminate the Need for Separate Storage Silos Without Sacrificing Performance

TECH ONTAP ARCHIVE

Shaun Mahoney
Consulting Systems Engineer, NetApp

Based in New York City, Shaun Mahoney serves as a business solutions architect focusing on archive and compliance solutions. Before joining NetApp, Shaun spent more than 10 years working in the financial services community. During the time he was a NetApp customer, Shaun helped architect and develop the NetApp LockVault™ product.

Implementing Exchange Archival: A Case Study
By Shaun Mahoney

When someone refers to e-mail archival, they might be talking about:

Archival for mailbox management, or
Journaling for regulatory compliance

Although the same application can be used to provide both, the two functions generally have completely separate environments with different hardware and software configurations and unrelated end goals. The fact that these two functions are so different often leads to confusion in discussions of e-mail archival, making it difficult for companies to figure out exactly what they need.

I recently worked with a large financial services company that is doing a major refresh of its messaging environment, including:

Migrating most users to Exchange Server 2007
Adding archival capability to get the volume of e-mail under control and eliminate the need for PST files
Adding journaling to meet regulatory compliance

In this article, I'm going to look at this company as a case study to illustrate the differences between archival and journaling. Since journaling was the part I was most closely involved with, I'll focus particular attention on that, including the compliance planning process.

Customer Challenges and Goals
Migration
The company needed to reduce the cost and complexity of its messaging solutions, improve availability, and increase efficiency. Like many companies in the finance area, this one had grown through acquisition and was using multiple messaging systems as a result. The Exchange migration project will address this problem and get the entire company running on a single up-to-date solution.

Archiving
At the same time, the company wanted to make its messaging environment more resilient and efficient, reduce backup costs, and avoid the need for PST files. Using e-mail archiving to move messages out of the active Exchange repository addresses these issues. Because the primary Exchange repository decreases in size, backup is simplified, and PST files are no longer needed to retain older e-mail.

RELATED INFORMATION

Tips for Archive and Compliance Planning

Impact of Exchange 2007

NetApp: Viable for Exchange?

Tips for Archive and Compliance Planning

What would you do if someone slid a 5¼" disk across the table and asked you to open up the WordStar document on it? This may seem like an extreme case, but when you're planning on retaining information for years there's a lot more to think about than just squirreling away the data.

In a recent Tech OnTap article, NetApp systems engineer Mike Riley provided six tips for improving your archive and compliance planning:

1. Avoid storage silos
2. Avoid proprietary data formats and technologies
3. Don't consciously limit your options
4. Plan for performance
5. Utilize storage virtualization
6. Think carefully about encryption

Read the article to find out more.

Support Your Compliance Technology with the Right Processes

The latest and greatest technology (from any vendor) is not enough to ensure you are in compliance. There are important processes that must be

Tech OnTap February 2008 | Page 9


The goals for the archival project were:

Reduce costs
Save disk space
Limit impact to end users (ideally zero impact)

Compliance
For the financial company, compliance with SEC and other regulations is a major concern. The company had previously implemented a basic compliance strategy where it was journaling messages, but ultimately realized that it wasn't journaling in a manner that satisfied regulatory requirements. Fixing this problem was the driving force behind this part of the project. NetApp was brought in specifically to help address compliance needs, which is how I got involved.

The primary goals for the compliance project were:

Reduce risk
Ensure reliability
Keep the solution as simple as possible

Establishing Compliance Objectives
When it comes to compliance, the most difficult thing is often determining what to save and for how long. If you choose too short a duration, you risk being out of compliance. If you choose too long a duration, in contrast, you may tie up resources, locking down data that you no longer need to retain. (In a properly implemented compliance system, once data is locked down, it can't be deleted under any circumstances until the retention period expires.)

When NetApp first sat down with the compliance team for this company, the team had identified the users for whom journaling was required. There were several business units that had different requirements for retention based on their business function and various regulatory requirements. However, they weren't sure how many different retention time frames they would need. They were originally considering five separate retention programs: three, five, six, seven, and eight years. This would have been very complex to manage, and they didn't have a good methodology to identify which users belonged in which program. It would also have made the project more costly.

Our first step was to determine what regulations they had to meet and what the time frames actually were, and to sit down with them to simplify the retention plan. Based on those discussions, we were able to get it down to two categories: three years and eight years—much more manageable.

Next I began digging into the specifics of the eight-year requirement. This requirement was based on a regional regulation that ranged from four to eight years depending on the region. Only the three-year requirement mandated that the retention period actually be set for the specified time; the other regulation mandated only that the data be kept for that long. This allowed us to set everything for only three years (further simplifying the journaling environment) and then change the records policy to review the records before the three years were up and extend the retention of information as necessary to satisfy the other regulations. The eight-year regulation is currently under review to be shortened to four years across all regions, so this step may ultimately save the company from unnecessarily locking down data for an extra four years.

Project Implementation
Migration and Archival
Both the implementation and ongoing management for the Exchange and the archival projects were outsourced to a third party. As part of the request for proposal (RFP) for each project, the company specified that NetApp be used for all back-end storage. The company already had a small amount of NetApp storage in place and clearly saw the benefit of using NetApp hardware and associated software tools, including SnapManager® for Exchange.

Storage for each project is provided using clustered NetApp FAS3070 storage systems. The company has one FAS3070 in each data center, and a stretched Microsoft® cluster spans the two data centers. The Exchange environment uses high-performance Fibre Channel disk and the iSCSI protocol for block access.

The archive environment uses its own clustered NetApp storage with a mix of Fibre Channel and SATA drives. Symantec® Enterprise Vault™ software is used to implement archiving. Enterprise Vault uses Microsoft SQL Server™ as its underlying database engine. All SQL Server data and indexes are on Fibre Channel storage

part of any compliance solution. Overlook them, and the auditors may never bother to take a look at your cool technology.

Here are some key processes you should know about and implement:

All physical access and maintenance of the media or system must be audited and recorded and must have preapproval by a control source.
You should have a security log recording any configuration changes made to your compliance solution.
You need to be able to provide an auditable topology and workflow report of how data gets to your compliant storage.
There must be an audit trail for outage remediation. In other words, should you experience an outage, what steps would be taken in order to bring the system back into compliance, and how did you know when you had reached that point?
There must be a method to verify and audit the quality and accuracy of the data over time.

Tech OnTap February 2008 | Page 10


Figure 1) Enterprise Vault and NetApp storage configuration for the journaling project.

accessed using iSCSI, while the archive data itself is stored on CIFS shares on SATA disk for greater economy.

Journaling
Because it was for compliance, the journaling project was not outsourced. The company wanted to maintain direct ownership and control to ensure that it fully understood and minimized all potential risks. A preexisting dedicated team owns the project, with responsibility for the software, servers, and storage.

The physical infrastructure for the journaling environment is very similar in principle to those for the Exchange and archival environments. A clustered NetApp FAS3070 storage system was deployed in each of the company's two data centers. Under normal operation, each data center provides journaling for roughly half the company's messaging. NetApp SnapMirror® software is used to mirror all journaled data between the two data centers so that two copies are maintained at all times for disaster recovery.

All Microsoft SQL Server data is stored on Fibre Channel disk and accessed using iSCSI. NetApp SnapManager for SQL Server is installed to simplify SQL Server data management. Index data is stored on CIFS shares on Fibre Channel storage. Journal data resides on CIFS shares on SATA storage. To ensure compliance, the volumes used for journal data are NetApp SnapLock® Compliance volumes. NetApp SnapLock Compliance provides WORM capabilities with retention support for data stored by Symantec Enterprise Vault.
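Enterprise Vault drives the retention settings automatically, but the underlying SnapLock commit mechanism is simple: a file on a WORM volume is committed by setting its last-accessed time to the desired retention expiry and then removing write permission. An illustration over NFS follows, with a hypothetical file name and date; over CIFS the equivalent is setting the file's read-only attribute:

# Set the desired retention expiry date as the file's atime...
touch -a -t 201602150000 /mnt/journal_worm/msg_12345.dvs
# ...then remove write permission to commit the file to WORM state.
chmod -w /mnt/journal_worm/msg_12345.dvs
# The file can no longer be modified or deleted until the retention date passes.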

Clustered Microsoft SQL and Symantec Enterprise Vault servers have been configured to span the two data centers. Each location has an active SQL and Enterprise Vault server with passive counterparts located in the other location for immediate failover if necessary.

Journaling is supported by a feature of Microsoft Exchange. Exchange maintains a journal mailbox. Every message that goes in or out of the environment gets copied to that mailbox. Enterprise Vault makes a copy of each message in the journal mailbox on its compliance storage.

Enterprise Vault also creates index entries based on the message subject, who it's from and to, basic message content, and so on, and stores this information in a searchable index that supports e-discovery. The SQL database contains pointers between the data stored on disk and the metadata in the index. A master directory database contains pointers from all users to all messages and relevant index data. As you would expect, Enterprise Vault makes it possible to set a retention period for all messages in the journaling environment and can automatically delete those messages once the retention period has expired.

Solution Impact
At the time of this writing, both the Exchange and archive projects are installed, and users are moving to the new solutions. Compliance will go online after user migration is completed. It's not possible to determine the full impact of each project yet, but there are some clear benefits already. For instance, the advantage of having a single messaging environment across the company should be obvious.

We can also evaluate the archive and journaling projects against the stated objectives above. The goals of the archive project were reducing costs, saving disk space, and minimizing user impact. Because the archive is essentially transparent to end users, the user impact will be negligible. Costs are reduced by cutting the amount of expensive primary storage needed for Exchange and simplifying ongoing Exchange data management, particularly backup.

Costs are further reduced through the use of NetApp unified storage systems. The archive configuration uses both iSCSI and CIFS. Because NetApp storage supports multiple protocols, it is possible to provide all back-end storage with a single type of storage system. Solutions requiring separate storage for each protocol would be significantly more expensive. A single-vendor solution is naturally less complex than a multivendor one as well, so complexity is reduced, making management simpler.
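To illustrate what serving multiple protocols from one system means in practice, the same storage system can present an iSCSI LUN to SQL Server and a CIFS share for the archive partitions; the following is a minimal sketch with hypothetical names and sizes, not the company's actual configuration:

# Block storage for SQL Server over iSCSI
lun create -s 500g -t windows /vol/ev_sqldata/lun0
igroup create -i -t windows ev_sql_servers iqn.1991-05.com.microsoft:evsql01
lun map /vol/ev_sqldata/lun0 ev_sql_servers

# File storage for the Enterprise Vault archive partitions over CIFS
cifs shares -add EVARCHIVE /vol/ev_archive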

The journaling project obtains similar benefits from the use of NetApp storage. The company also is able to take advantage of NetApp technologies such as space-

Tech OnTap February 2008 | Page 11


efficient Snapshot™ copies and FlexClone® volumes to further reduce space consumption and streamline data management and disaster recovery testing for SQL databases and index metadata. Standardizing on one storage platform gives the company less to buy (reducing costs), requires less space in the data center, and ultimately gives the company a greener solution that requires less power and cooling while performing significantly faster than competitors.

If we evaluate the journaling environment against the overall objectives of reducing risk, ensuring reliability, and keeping things simple, the plan discussed earlier meets the primary objective of reducing risk. The plan is also as simple as we can make it, and that simplicity should contribute to the reliability of the final journaling implementation. The hardware and software implementation for the journaling project is not only simple, but also as robust as we can make it, offering full failover for disaster recovery to ensure reliability.

Conclusion
The company has been very happy with the results achieved from these projects so far. It recognized from the outset that NetApp didn't just offer great products, but also had the expertise to partner with the company and help make the compliance project a success by creating a simple, robust compliance plan that meets regulatory requirements without locking down more data than necessary for longer than necessary.

Tech OnTap February 2008 | Page 12


Highlights of Results: NetApp Versus EMC CLARiiON

Better performance with Snapshot enabled (97% versus 36% of baseline)
Higher performance with fewer disks

NetApp SPC-1 Firsts

Highest raw storage utilization
Lowest cost per tested GB
First ever result with RAID 6
Only active result not using RAID 1/0

TECH ONTAP ARCHIVE

Stephen Daniel
Director of Database Platforms and Performance Technology, NetApp

Steve Daniel has been at NetApp for seven years, focusing exclusively on database I/O and its impact on storage systems. Today, he focuses on database performance and reliability on both NAS and SAN. Before joining NetApp, Steve honed his database skills during 12 years at Data General.

Real-World Storage Performance
Benchmarking NetApp versus EMC CLARiiON
By Stephen Daniel

Determining the performance that SAN storage will deliver in the real world, running real applications, can be a challenge. Many vendors have only discussed informal or ad hoc benchmark results, making comparisons between storage systems difficult. What's more, benchmarks frequently use impractical system configurations (the smallest available disks, huge numbers of spindles), and they don't assess the performance impact of features that may be essential to your operations.

NetApp recently set out to run a benchmark comparison that would address some of these limitations with:

A standard workload representative of real-world applications
Configurations based on vendor best practices
The use of Snapshot™ copies during the benchmark

For this testing we chose a standard benchmark, SPC-1, to assess the performance of a NetApp FAS3040 system versus that of an EMC CLARiiON CX3-40. In this article I'm going to describe the benchmark, the system configurations, and the results achieved. For more detailed information you can also review the published, fully audited results on the Storage Performance Council Web site.

The SPC-1 Benchmark
The Storage Performance Council (SPC) is a vendor-neutral standards body with representatives from a wide range of storage vendors. The SPC has so far released two benchmarks:

SPC-1 generates a workload with characteristics of typical business applications such as database applications and serving e-mail, with random I/O, queries, and updates, and therefore seemed highly appropriate for our purposes.
SPC-2 is designed to simulate applications with large-scale, sequential data movement, so we didn't consider it a good representative workload for this study.

I consider SPC-1 to be the best benchmark available to model the way that databases stress storage systems. When SPC developed the benchmark, it studied how a variety of applications accessed storage and then modeled the workload based on those

RELATED INFORMATION

Storage Performance Council

Complete SPC-1 Results

This article is just a summary of the testing we performed. For more information go to the SPC benchmark results page or click the following links to go directly to the full disclosure reports (PDF):

NetApp FAS3040
FAS3040 with Snapshot
EMC CLARiiON CX3 Model 40
CX3 Model 40 with SnapView

Tech OnTap February 2008 | Page 13


measurements. The mix of operations is representative of a broad class of applications, roughly half of all commercial applications.

As you are probably aware, the most commonly used set of database benchmarks are the TPC benchmarks, but those are designed to test the database server, and therefore the measured results depend more on how the server is configured than they do on the storage.

SPC-1 has the advantage that it just measures the performance of the storage system. It doesn't depend on how you configure the server and database. The workload it sends to storage, however, is a great proxy for the kind of load that today's business applications generate. In addition, SPC has strict rules for how the benchmark must be run, so you can compare results between vendors; all results are audited by a certified, independent auditor. All published results include lengthy disclosures, including full pricing of the benchmarked system.

System Configurations
As I said in the introduction, we wanted to test system configurations similar to those that customers might be likely to deploy. In each case, we wanted to build a Taurus rather than a NASCAR, so we chose two midrange systems, the NetApp FAS3040 and the EMC CLARiiON CX3 Model 40, and configured them appropriately. We used the exact same server, identically configured, to run the benchmark against each storage system in all cases.

At this point you might be thinking, "If you're benchmarking a system from another vendor, how can I trust the results?" SPC actually takes this into account in its rules. In order for us to publish a benchmark of another vendor's gear, the SPC required us to certify that we made a good faith effort to demonstrate the true maximum performance of the equipment. To do that, we started with EMC's documents on how to tune the system for best performance, and we followed them closely. Then we spent a couple of months tuning and adjusting, trying to improve the performance. Everything that goes into the final result is documented in the full disclosure reports for each test.

Specifically, EMC docs were very clear that the best performance could only be obtained from the CLARiiON system using RAID 1/0 (mirroring and striping), so we configured the system that way rather than with RAID 5 or RAID 6 (double-parity RAID). The FAS3040 system was configured to use RAID-DP™, NetApp's double-parity RAID 6 implementation, which is the default on NetApp systems. On both systems we used 82% of the usable storage for the benchmark. This is a tunable option of the benchmark. The CLARiiON system was configured with 155 disks, while the NetApp system had 140. Both systems used 146GB, 15,000 RPM, 4Gb-per-second disks.

Other than that, we also made an effort to tune the memory management on each system for best results. EMC allows you to tune the read versus write cache. We tried all the variables until we got the best results. We found, for example, that by turning off the CLARiiON write cache for portions of the workload that did very little writing, we left more write cache for the write-intensive portion of the benchmark. This improved performance significantly.

On the NetApp system, we found we could improve performance by changing the memory management policy to reflect the fact that most SPC-1 data is not referenced repeatedly. This policy change can be implemented with the following priority settings with Data ONTAP® 7.3:

priority on
priority set enabled_components=cache
priority set volume <volume-name> cache=reuse

The net effect of these commands is to tell the memory system to reuse memory for newer items more aggressively than it would normally. (The enabled_components subcommand is new in Data ONTAP 7.3. If you are using Data ONTAP 7.2 you can skip that command.)

A couple of the things we tuned are still being refined, so they are enabled by the setflag command. In future versions of Data ONTAP either these flags will become options or they will disappear as the system becomes self-tuning for these features.

Tech OnTap February 2008 | Page 14


priv set diag
setflag wafl_downgrade_target 0
setflag wafl_optimize_write_once 0

The "downgrade_target" flag changes the priority of a process within Data ONTAP that handles incoming SCSI requests. This process is used by both FC SAN and iSCSI. If your system is not also running NAS workloads, then this priority shift improves response time.

We're explicitly calling out these settings because, based on our testing, we think theywill yield performance benefits for online business application workloads. If you areinterested, you can read more about them in a recent NetApp technical report.

Testing the Performance Impact of Managing Snapshot Copies
Most benchmarks ignore the performance impact of commonly used features such as snapshots. In the early days of NetApp, few other vendors offered this capability, but today, almost all storage vendors offer some form of snapshot technology. NetApp customers use NetApp Snapshot technology throughout the day to create backups or checkpoints from which applications can be restarted, and we assume that the SnapView™ capability provided by EMC is used similarly.

When a feature becomes a regular part of daily operation across a wide base of users, it makes sense to test the performance impact of that feature. So, in addition to testing the maximum performance of each system, we also ran the same tests again while periodic snapshots were being created. On the NetApp system, we set up a schedule that created a Snapshot copy every 15 minutes and retained the most recent three Snapshot copies. This schedule ensured that during a five-hour-long benchmark run we both created and deleted a reasonable number of Snapshot copies.
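The built-in snap sched command works on hourly, nightly, and weekly intervals, so a 15-minute cycle like this is normally scripted from an admin host. A hedged sketch of such a rotation, with hypothetical storage system, volume, and Snapshot names:

#!/bin/sh
# Every 15 minutes: drop the oldest copy, age the others, create a new one.
while true; do
    rsh fas3040 snap delete spc1vol spc1_snap.3 2>/dev/null
    rsh fas3040 snap rename spc1vol spc1_snap.2 spc1_snap.3 2>/dev/null
    rsh fas3040 snap rename spc1vol spc1_snap.1 spc1_snap.2 2>/dev/null
    rsh fas3040 snap create spc1vol spc1_snap.1
    sleep 900
done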

On the CLARiiON, we reduced the snapshot load. During the three-hour-long sustainability portion of the SPC-1 benchmark, we took one snapshot an hour into the test. An hour later we deleted it and took another. Just before the test finished we deleted the second snapshot.

Benchmark Results
The results of our testing are summarized in Figure 1. The baseline results show the maximum performance without snapshots. As you can see, the results are similar. The NetApp system delivers about 31,000 SPC-1 I/O operations per second (IOPS), while the CLARiiON delivers a maximum of about 25,000 SPC-1 IOPS. The NetApp system used for these tests has a list price of $421,730.49, while the EMC system has a list price of $517,851.02. This corresponds to $13.61 per IOP for NetApp versus $20.72 per IOP for the EMC configuration.
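The cost-per-IOP figures follow directly from the list prices and the measured throughput reported in Table 1 below, rounded to the nearest cent:

$421,730.49 / 30,985.90 SPC-1 IOPS ≈ $13.61 per IOP   (NetApp FAS3040)
$517,851.02 / 24,997.48 SPC-1 IOPS ≈ $20.72 per IOP   (EMC CLARiiON CX3-40)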

Figure 1) SPC-1 performance of NetApp versus EMC with and without snapshots.

An interesting result occurs when the snapshot feature is enabled on each platform. Performance on the NetApp system only drops to about 30,000 SPC-1 IOPS (97% of maximum). On the EMC system, performance drops to approximately 9,000 SPC-1 IOPS, or 36% of the performance level without snapshots enabled. The NetApp system used for the Snapshot tests has a list price of $446,210.49, while the EMC system has

Tech OnTap February 2008 | Page 15


a list price of $535,251.02. The NetApp system therefore costs $14.89 per IOP versus $59.49 per IOP for the EMC system.

                                   NetApp FAS3040    EMC CLARiiON CX3-40
Baseline IOPS                      30,985.90         24,997.48
Baseline List Price                $421,730.49       $517,851.02
Cost per IOP (without snapshots)   $13.61            $20.72
Snapshot IOPS                      29,958.60         8,997.17
Snapshot (% of Baseline)           97%               36%
Snapshot List Price                $446,210.49       $535,251.02
Cost per IOP (with snapshots)      $14.89            $59.49

Table 1) Comparison of NetApp versus EMC with and without snapshots.

Conclusion
While we were careful to ensure that we tested both systems under the same conditions, configured to achieve optimal performance in each case, these results demonstrate some significant advantages for NetApp technology. The NetApp Snapshot implementation is clearly more efficient than the EMC implementation and has a far smaller impact on performance. We believe this translates into direct benefits for busy production environments.

The result for the NetApp system is the only SPC-1 result published in the last five years not using RAID 1/0 and the first ever result with double-parity RAID 6. Despite that, compared to the EMC CLARiiON configuration we tested, the NetApp FAS3040 demonstrated higher performance with fewer disks (140 versus 155).

Because of the use of RAID-DP instead of mirroring, the NetApp configuration also demonstrated the highest storage utilization of any recent SPC-1 result. With the FAS3040, 61% of raw storage was utilized, versus 38% for the mirrored CLARiiON CX3 Model 40 configuration. (Note that both systems had volumes filled to 82%.) This high rate of utilization in turn leads to the lowest cost per benchmarked gigabyte (the gigabytes that are actually available for use) ever reported for an SPC-1 benchmark: $33.50 per benchmarked gigabyte.
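As a rough back-of-the-envelope check on that figure (the exact capacity accounting is in the full disclosure report):

140 disks x 146GB           ≈ 20,440GB raw capacity
20,440GB x ~61% utilized    ≈ 12,500GB of benchmarked capacity
$421,730.49 / ~12,500GB     ≈ $33-$34 per benchmarked gigabyte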

Tech OnTap February 2008 | Page 16