
EDITOR’S NOTE

DEAL WITH DATA CAPACITY CHALLENGES

FLASH-ONLY ARRAYS REPLACING HYBRIDS FOR PRIMARY STORAGE

SOFTWARE-DEFINED STORAGE MARKET: CUSTOM VS. COMMODITY HARDWARE

Best Bets for Modernizing Legacy Storage Systems

Storage still plays a key role in today’s data center, which requires the latest technology and creative ways to manage growing data under tight budgets. Flash storage and software-defined storage are the most recent trends in modernizing legacy storage, but IT still struggles with integration.


EDITOR’S NOTE

Modernizing Legacy Storage Systems Is Easier Than You Think

Data will never stop growing in volume, and your data capacity needs will keep growing in sync. Just because there are new demands on your data storage doesn’t mean that it makes sense to rip and replace your current storage infrastructure. But how do you manage to integrate that legacy storage system, which is likely based on disk drives—or maybe even tape—with modern all-flash or hybrid arrays?

Surprisingly, it isn’t impossible or even difficult—OK, not too difficult—to modernize your legacy storage systems without resorting to a wholesale technology swap. If you follow the guidelines discussed in the articles in this Drill Down, you can more easily navigate the sometimes dangerous waters of updating legacy storage—dangerous to your data, that is. That starts with learning how to handle the growing data capacity challenge with proper data handling techniques, so you don’t waste all that new bleeding-edge storage. It includes the debate between choosing all-flash arrays or hybrid arrays for your modernization plans, and even whether you should choose commodity tech from a vendor or consider going down the do-it-yourself path.

Read these articles and you will set off on the right foot on your journey to modernizing that legacy storage system.

Rodney Brown
Senior Site Editor, SearchStorage.com


CAPACITY CHALLENGES

Deal with Data Capacity Challenges

Capacity. It’s usually the first word that comes to mind when thinking about storage. Not performance, reliability, availability or even serviceability. Data capacity is almost always the top concern: How much is left, and how fast is it being consumed?

That is the odd quirk about storage. Unlike all other computing and networking technologies, only storage is consumed. All others are utilized over and over again. The IT predisposition to never dispose of any data—ever—demonstrates how storage is constantly consumed.

For many shops, a new storage system must have enough capacity to contain all of the data stored on the one it will replace, plus all of the projected data that will be created and stored during its lifetime. Data storage never shrinks. It just relentlessly gets bigger. Regardless of industry, organization size, level of virtualization or “software-defined” ecosystem, it is a constant stress-inducing challenge to stay ahead of the storage consumption rate. That challenge is not getting any easier.

More devices than ever are creating additional data to be stored at an accelerating rate. The internet of things promises to boost data storage growth into warp speed, and the vast majority of that data is unstructured. Unstructured data historically had nominal value as it aged, but clever technologists have changed that paradigm. That unstructured data today has huge potential business intelligence value that analytical tools can extract for competitive advantages. With more devices than ever creating data and more uses for that data, organizations are even more reluctant to deep-six anything.

Storage has become the biggest technology line item in many data centers. Data growth velocity has not slowed and is, in fact, accelerating. Budgets, on the other hand, are not increasing.


To make matters worse, media capacity density increases have slowed. Capacity growth for every media type has slowed as the underlying recording technologies approach physical limits. The rate of deceleration varies by technology (slower for NAND flash; faster for hard disk drives, tape and optical).

So, IT is forced to find cost-effective ways to cope with expanding data capacity requirements without breaking the bank.

USE HIGH-DENSITY SHELVES

One way to reduce rack space and floor space is to use high-density shelves. High-density shelves for 3.5-inch hard disk drives (HDDs) come in a range of drive counts and rack unit (U) sizes. The most popular are 4U shelves populated with 48, 60, 72, 84 or 98 drives. Filling a 98-drive shelf with the highest-capacity 10 TB 3.5-inch HDDs puts nearly a petabyte of storage in 4U. That’s a lot of density.
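To put that density claim in numbers, here is a quick back-of-the-envelope calculation (a Python sketch; the drive capacity and shelf sizes come from the figures above, and nothing else is assumed):

    # Back-of-the-envelope shelf density math, using the figures cited above.
    DRIVE_TB = 10                               # highest-capacity 3.5-inch HDD
    SHELF_DRIVE_COUNTS = [48, 60, 72, 84, 98]   # popular 4U shelf configurations
    RACK_UNITS = 4

    for drives in SHELF_DRIVE_COUNTS:
        raw_tb = drives * DRIVE_TB
        print(f"{drives} drives in {RACK_UNITS}U: {raw_tb} TB raw "
              f"({raw_tb / RACK_UNITS:.0f} TB per U)")
    # The 98-drive shelf yields 980 TB raw in 4U, just shy of a petabyte.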

There is a downside to these high-density shelves. Drives can only be accessed from above, typically requiring a ladder. And the weight of these shelves can easily exceed a few hundred pounds. Sliding shelf brackets are not set up to handle that amount of weight. Many vendors specify a “server lift” (powered or hydraulic) to support the shelf when it is pulled out. That adds cost and time.

There are also flash solid-state drive (SSD) high-density shelves. SanDisk puts up to 512 TB (raw) in 3U; Toshiba puts 192 TB (raw) in 2U; and HGST puts 136 TB (raw) in 1U. The SanDisk and Toshiba shelves also require the drives to be accessed from above, so a ladder will be required. But weight is not a problem.

High-capacity media and high-density drive shelves do not reduce data capacity consumption, but they do reduce the number of systems, the management burden and the supporting infrastructure required to meet that capacity consumption.

STOP STORING EVERYTHING FOREVER

It seems like a simple concept, but in reality very few IT organizations implement it. Not all data needs to be stored forever. IT teams need to set policies defining retention times for different types of data and enforce them. A lot of data has limited or no value over time.


Take the example of video surveillance. Video consumes a lot of storage. How long does surveillance video need to be saved? One week? Two weeks? A month? A year? There are smart IT organizations that have a policy of keeping their video data no more than a couple of weeks or, at most, a month. Obviously, if there is something of interest on a particular video, it’s kept longer.

Systematically enforced data retention policies will significantly slow the consumption of storage capacity. Making it happen requires time, cooperation (buy-in), effort and discipline to enforce. But keeping valueless data forever is simply not smart or financially sustainable.
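As an illustration of what “set policies and enforce them” can look like in practice, consider this minimal sketch of an age-based retention sweep. The data classes, retention periods, paths and delete action are all hypothetical; a real policy engine would also handle legal holds, exceptions and audit logging:

    import os
    import time

    # Hypothetical retention policy: days to keep each class of data.
    RETENTION_DAYS = {"surveillance_video": 30, "temp_exports": 14, "project_docs": 1095}

    def sweep(root, data_class, dry_run=True):
        """Report (or delete) files older than the class's retention period."""
        cutoff = time.time() - RETENTION_DAYS[data_class] * 86400
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.path.getmtime(path) < cutoff:
                    print("expired:", path)
                    if not dry_run:
                        os.remove(path)

    sweep("/mnt/cctv", "surveillance_video")   # dry run first; flip dry_run to enforce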

TAKE OUT THE GARBAGE

How much valueless data currently consumes an organization’s storage?

Email messages, Word documents, spreadsheets, presentations, proposals and more from employees long departed. Is there much or any value in this data? Why is it still there? How much of that consumed storage is taken up by personal MP3s, photos, videos and so on? There is a lot more than most IT managers realize.

How about multiple iterations, versions or drafts of files, documents, spreadsheets, presentations, price lists and so on that consume storage and are outdated or obsolete? Data tends to be sticky. These crimes of inefficient storage consumption are amplified when that storage is replaced, because that “garbage” data continues to consume capacity on the new system as well as every tech refresh after that. In other words, you may be buying more storage than you need.

But how do you find that garbage data? It’s not as if the garbage data sends out an alert claiming garbage status. The good news is that there are several software applications and services out there that analyze unstructured (file) data and provide that analysis (e.g., Caringo FileFly, Data Dynamics, NTP Software, Varonis Systems and so on).

These applications identify orphaned data; personal data such as MP3s, photos and videos; and old unassessed data. They can enable the data to be deleted or migrated to low-cost storage options such as LTFS tape, local or cloud object storage, optical storage, and cloud cold storage. The amount of prime real estate storage capacity reclaimed can be enormous and typically pays for the software application or service many times over.
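The products named above are commercial offerings, but the core idea of flagging candidate garbage is simple. Here is a minimal sketch under two assumed heuristics, personal-media file extensions and a staleness cutoff (both thresholds are invented for illustration):

    import os
    import time

    PERSONAL_EXTS = {".mp3", ".mp4", ".jpg", ".png", ".mov"}  # assumed personal media
    STALE_DAYS = 730                                          # assumed "old, unassessed" cutoff

    def flag_garbage(root):
        """Yield (reason, path, size) for files matching the garbage heuristics."""
        cutoff = time.time() - STALE_DAYS * 86400
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                info = os.stat(path)
                if os.path.splitext(name)[1].lower() in PERSONAL_EXTS:
                    yield "personal media", path, info.st_size
                elif info.st_atime < cutoff:
                    yield "stale", path, info.st_size

    reclaimable = sum(size for _, _, size in flag_garbage("/mnt/filer"))
    print(f"candidate reclaimable capacity: {reclaimable / 1e12:.2f} TB")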

MIGRATE DATA AS IT AGES

The first storage location where new data lands is where it likely stays until that storage is tech refreshed or upgraded. Even then, it will stay in the same relative location. That’s a tremendous consumption waste of the most expensive storage capacity.

Data’s value decreases as it ages. Data is generally most valuable and most frequently accessed within the first 72 hours after it is stored. Access declines precipitously from that point forward. The data is rarely accessed after 30 days and almost never after 90 days. And, yet, it frequently stays on high-priced storage months or years after its value has plummeted.

The main reason this occurs is that migrating data among different types of storage systems can be difficult and labor-intensive. In addition, moving the data often breaks the chain of ownership, making it difficult to retrieve the data if or when it’s required.

Hybrid storage systems have storage tiering within the array that enables movement of data from high-cost, high-performance storage tiers to lower-cost, lower-performing storage tiers and back again based on user-defined policies. Many only provide data movement between tiers within the array. Some can move data within the array and to external lower-cost, lower-performing storage as well. They may utilize cloud storage such as Amazon Simple Storage Service, cloud storage with S3 interfaces or LTFS tape as a much lower-cost tier.

A stub is left after the data is moved so that the chain of ownership and metadata remain intact. When a user or application seeks data that has been moved to a lower-cost and lower-performing tier, the stub retrieves that data, placing it back on its original tier transparently. It just takes a little longer to access it.
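To make the stub mechanism concrete, here is a toy sketch of policy-driven tiering. The two in-memory tiers, the stub bookkeeping and the demotion policy are all illustrative; real arrays do this at the file system or block layer:

    import time

    class TieredStore:
        """Toy two-tier store: demoted data moves to the cold tier and leaves
        a stub behind so reads can transparently recall it."""

        def __init__(self):
            self.hot, self.cold, self.stubs, self.mtime = {}, {}, {}, {}

        def write(self, key, data):
            self.hot[key] = data
            self.mtime[key] = time.time()

        def demote_older_than(self, days):
            cutoff = time.time() - days * 86400
            for key in [k for k, t in self.mtime.items()
                        if t < cutoff and k in self.hot]:
                self.cold[key] = self.hot.pop(key)
                self.stubs[key] = "cold"        # stub records where the data went

        def read(self, key):
            if key in self.hot:
                return self.hot[key]
            if self.stubs.get(key) == "cold":   # transparent recall via the stub
                self.hot[key] = self.cold.pop(key)
                del self.stubs[key]
                return self.hot[key]
            raise KeyError(key)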

Although this technology does not reduce the amount of capacity consumed, it does better align the data’s value with the storage costs. One more thing about hybrid storage systems: They do not just tier between flash and HDDs, object storage, cloud storage or LTFS tape. There are hybrids that use fast flash and slower, capacity-oriented flash as their storage tiers. And there are others that use high-performance small form factor (2.5-inch) HDDs and large form factor (3.5-inch) nearline HDDs as their storage tiers. All of them reduce the cost of storage capacity but not the total storage capacity consumed.

There are also third-party software products and services (e.g., Caringo FileFly, Data Dynamics, NTP Software and others) that will move data from a costlier storage tier to a lower-cost tier by policy, between systems, to object storage or to LTFS tape, using stubs to ensure that data can be accessed if necessary.

There are two key differences between third-party software and hybrid storage systems. The first is that hybrid storage systems mostly operate only within the system and, in a few cases, with S3 API object storage. Third-party software works between different vendors’ storage systems, S3 API object storage systems, LTFS tape and so on. The second is that third-party software allows the storage administrator to choose to delete garbage data based on policy. Therefore, unlike hybrid storage systems, third-party software can actually reduce the amount of data stored.

MAKE THE MOST OF DATA REDUCTION TECHNOLOGIES

Data reduction technologies have gained significant adoption in most storage systems, software-defined storage and even hyper-converged systems over the past few years. These technologies include thin provisioning, data deduplication and compression.

Thin provisioning does not actually reduce data storage consumption. It instead significantly reduces storage wasted by overprovisioning. Applications do not like running out of storage capacity. When it happens, they crash.


It is not a pretty situation, and one that causes urgent and serious IT problems. IT attempts to avoid this by overprovisioning storage capacity to applications—especially mission-critical applications. That overprovisioned capacity per application can’t be utilized by other applications. This creates a lot of unused and unusable storage capacity (often called orphaned storage).

Thin provisioning essentially virtualizes that overprovisioning so each application “thinks” it has its own unique overprovisioned storage capacity but, in reality, is sharing a single storage pool with every other application. Thin provisioning eliminates orphaned storage and significantly reduces the amount of storage capacity that must be purchased. That reduction has the same net effect as reducing the amount of data stored.
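A toy model of the idea, with invented numbers: each application is promised a generous virtual volume, but physical capacity is drawn from one shared pool only as data is actually written:

    class ThinPool:
        """Toy thin provisioning: capacity is promised up front but
        allocated from the shared pool only on write."""

        def __init__(self, physical_tb):
            self.physical_tb = physical_tb
            self.used_tb = 0.0
            self.volumes = {}                  # name -> [promised_tb, written_tb]

        def create_volume(self, name, promised_tb):
            self.volumes[name] = [promised_tb, 0.0]   # consumes no physical space

        def write(self, name, tb):
            promised, written = self.volumes[name]
            if written + tb > promised:
                raise ValueError("volume full")
            if self.used_tb + tb > self.physical_tb:
                raise RuntimeError("pool exhausted; time to add capacity")
            self.volumes[name][1] += tb
            self.used_tb += tb

    pool = ThinPool(physical_tb=100)
    for app in ("erp", "crm", "mail"):
        pool.create_volume(app, promised_tb=80)   # 240 TB promised vs. 100 TB physical
    pool.write("erp", 12.5)
    print(pool.used_tb, "of", pool.physical_tb, "TB physically consumed")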

Data deduplication first gained traction on purpose-built target storage systems for backup data (e.g., EMC Data Domain, ExaGrid, Hewlett Packard Enterprise StoreOnce, NEC HYDRAstor, Quantum DXi and others). Today, most backup software has data deduplication built in.

Data deduplication has also made its way into hybrid storage and all-flash arrays, as well as traditional legacy storage arrays. The rationale for moving data deduplication into the array is to decrease the cost of “effective” usable capacity. Effective usable capacity is the amount of capacity that would be required if no data deduplication took place. So, if the amount of capacity required without data deduplication is approximately 100 TB but only 20 TB with data deduplication, then the effective usable capacity of that 20 TB system is 100 TB.
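That arithmetic generalizes to a simple ratio. The sketch below just restates the 100 TB versus 20 TB example as a 5:1 reduction ratio:

    def effective_capacity(physical_tb, reduction_ratio):
        """Effective usable capacity = physical capacity x data reduction ratio."""
        return physical_tb * reduction_ratio

    print(effective_capacity(physical_tb=20, reduction_ratio=5))   # -> 100.0 TB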

There is generally not as much duplicate data in primary data as there is in backup data, which means data reduction ratios tend to be lower. Some workloads, such as virtual desktop infrastructure, create a lot of duplicate data.


Others, such as video data, have very little or none. In addition, compressed or encrypted data cannot be deduplicated.

It’s important to remember that there are performance tradeoffs with data deduplication. Inline data deduplication is the most prevalent form: It requires that every write be compared against stored data to identify unique data.

Unique data is stored, and the system creates a pointer for the duplicated data. That comparison adds latency to every write. As the amount of data stored on the system increases, so do the metadata and the latency. And every read requires that the data be “rehydrated,” or made whole, which adds latency to reads. That latency also increases with consumed data capacity, just as it does for writes. Primary application workloads have response-time limitations; too much latency and an application can time out.
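A minimal sketch of that inline mechanism, using content hashes over fixed-size chunks (the chunk size and hash choice are illustrative; production systems use far more sophisticated indexing and variable-length chunking):

    import hashlib

    CHUNK = 4096   # illustrative fixed chunk size

    class InlineDedupeStore:
        """Toy inline dedup: each unique chunk is stored once;
        objects keep pointers (digests) to their chunks."""

        def __init__(self):
            self.chunks = {}    # digest -> chunk bytes (unique data)
            self.objects = {}   # object name -> list of digests (pointers)

        def write(self, name, data):
            digests = []
            for i in range(0, len(data), CHUNK):
                chunk = data[i:i + CHUNK]
                digest = hashlib.sha256(chunk).hexdigest()   # per-write comparison
                self.chunks.setdefault(digest, chunk)        # store only if unique
                digests.append(digest)
            self.objects[name] = digests

        def read(self, name):
            # "Rehydrate" the object by reassembling its chunks.
            return b"".join(self.chunks[d] for d in self.objects[name])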

This has led to two different variations of inline deduplication: one for flash-based storage and one for HDD storage. The three orders of magnitude (1,000 times) lower latency of flash storage allows for more in-depth data deduplication, producing better deduplication results.

The other type of deduplication is post-processing. Post-processing data deduplication does not add latency on writes because it happens after the data has been written. That processing is pretty intensive and must be scheduled in an idle time window. It also requires more capacity to land the data and does nothing to reduce read latency.

Compression technologies operate similarly to data deduplication but are limited to working within a block, file or object. Results are usually the same as or smaller than deduplication’s, and latency concerns are similar.

The key thing to keep in mind about these data reduction technologies is that they are not mutually exclusive. They can and should be used together. Just remember: Deduplication must occur before compression, because compressed data cannot be deduplicated.

There is one caveat: Moving data from one storage system to another generally, but not always, requires the data to be rehydrated before it is moved.
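The ordering matters because deduplication relies on spotting identical chunks, and compressed streams rarely line up on chunk boundaries. A toy pipeline under those assumptions (zlib and fixed chunks are illustrative), deduplicating first and compressing only the unique chunks:

    import hashlib
    import zlib

    def reduce_data(chunks):
        """Dedupe first, then compress: one compressed copy per unique chunk."""
        unique = {}
        for chunk in chunks:
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in unique:
                unique[digest] = zlib.compress(chunk)   # compress unique data only
        return unique

    data = [b"A" * 4096, b"B" * 4096, b"A" * 4096]      # one duplicate chunk
    print(len(reduce_data(data)))                       # -> 2 stored chunks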

USE EFFICIENT DATA PROTECTION APPS

Data protection products historically created a lot of duplicate or copy data. But most modern data protection products feature native deduplication. Many IT pros perceive that it is an onerous process to change from legacy data protection to modern data protection. There are a couple of false assumptions that underpin that perception.

The first is that they have to migrate data protected under the old system to the new one. That’s not true. Old backups and other types of older data protection data are not archives and should never be used as archives, because they have to be recovered in order to be searched. The only reason to keep older backups is compliance. This doesn’t mean those backups have to be migrated to newer data protection systems. The old software can simply be stopped from creating any new backups. The old backup data stays static until it ages out past the compliance requirements, and then it can be destroyed.

The original software can still be used to recover older data for things such as eDiscovery.

The second false premise is that implementation of modern data protection is just as painful as legacy data protection. Things have changed considerably. Many modern data protection systems are relatively easy to implement.

To reduce secondary data capacity consumption appreciably, be sure that your data protection software is up to date.

MANAGE DATA COPIES

Dragon Slayer Consulting surveyed 376 IT organizations over two years and found a median of eight copies of the same data. Copies often resided on the same and on different systems. Copies are created and used for DevOps, test/dev, data warehouses, business intelligence, backups, business continuity, disaster recovery, active archives and more. This can have a huge amplification effect on storage consumption.

The key to controlling out-of-control copies is to utilize variations of redirect-on-write or thin-provisioned, copy-on-write snapshot technologies. That can take place within a storage system (most storage systems, software-defined storage and even hyper-converged systems) or separately, using a dedicated appliance or software (such as Actifio, Catalogic, Cohesity, IBM SVC, Rubrik and others) that utilizes lower-cost storage. These snapshots are fundamentally a virtual copy. They look and act like a real data copy. They can be written to and modified like a real copy. But they consume a very tiny fraction of the storage capacity.
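A minimal sketch of the idea behind such snapshots (a toy block map; real implementations work at the block or extent layer): the snapshot shares every block with its source until one side writes, so each “copy” costs almost nothing up front.

    class Volume:
        """Toy copy-on-write volume: a snapshot shares blocks with its
        source until either side overwrites them."""

        def __init__(self, blocks=None):
            self.blocks = dict(blocks or {})   # block number -> data (shared)

        def snapshot(self):
            return Volume(self.blocks)         # copies the map, not the data

        def write(self, block_no, data):
            self.blocks[block_no] = data       # new data lands in this copy only

    base = Volume({0: b"boot", 1: b"db-page-1"})
    clone = base.snapshot()                    # test/dev "copy": near-zero capacity
    clone.write(1, b"db-page-1-v2")
    print(base.blocks[1], clone.blocks[1])     # b'db-page-1' b'db-page-1-v2'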

Managing data copies is an essential capacity coping strategy with huge potential savings in data capacity requirements.—Marc Staimer


PRIMARY STORAGE

Flash-Only Arrays Replacing Hybrids for Primary Storage

Judging by the recent flood of flash-only storage launches, you can expect to soon see the primary storage hybrid array on the endangered species list—right alongside the black rhino and blue whale.

Almost all of the new primary storage launches these days are all-flash arrays. Since the start of 2016, EMC, Hitachi Data Systems, Pure Storage, Nimble Storage, Tegile, IBM and X-IO Technologies have launched flash-only primary storage arrays. That doesn’t count NetApp’s acquisition of SolidFire’s all-flash platform. Most of the large storage vendors have several all-flash products, and it’s become difficult to find a storage array vendor without any flash-only platforms.

Of course, storage and IT technologies die hard. Tape and mainframes are Exhibits A and B of that. But recent product launches make you think the entire world is going all-flash. And it is—for primary storage. The fastest hard disk drives are already disappearing from primary storage arrays. It will probably take at least a few years, but it’s more a question of when than if HDDs will disappear from arrays, except for bulk storage.

Flash proponents said that day is closer than you think.

“There is no longer any reason for customers to purchase disk solutions,” Scott Dietzen, CEO of all-flash specialist Pure Storage, said of the recent gush of all-flash arrays.

EMC declared 2016 the year of all-flash for primary storage, while launching two flash-only systems in February. That’s after selling $1 billion worth of all-flash XtremIO arrays in 2015. Other legacy vendors agree with EMC’s proclamation.

“Now that pretty much every vendor, large and small, has all-flash arrays in the market, that’s the end of the all-flash array as this special thing. It’s just primary storage now,” said Dave Wright, NetApp vice president and SolidFire founder and former CEO. “I don’t think disk is going away any time soon, but it’s more and more being relegated toward cold storage, secondary storage, backup, archive [and] object storage.”

As flash prices decline and users encounter more applications that require high performance, solid-state drives are replacing 15,000 RPM HDDs inside storage arrays. HDDs are being relegated to cloud providers and on-premises bulk storage.

USERS DRAWN TO FLASH

Performance-hungry users find less need for hard drives after they get a taste of SSDs.

“I haven’t talked to one end user [who] has deployed flash [and] doesn’t want more,” said Eric Burgener, storage research director for market research firm IDC. “Flash is better on performance and reliability, uses less power, and has better CPU utilization.”

Health Network Laboratories (HNL) in Allentown, Pa., is a prime example of that. HNL initially bought XtremIO specifically for virtual desktops. Less than a year later, CIO Harvey Guindi said he plans to purchase all-flash for his primary storage.

“We have already made the determination that all new storage growth is going to be flash,” Guindi said.

He said he will move high-performance Microsoft SQL Server database workloads that stress the performance of his EMC VNX hybrid arrays to XtremIO. “We think we’re just going to keep adding XtremIO for high performance and expand VNX with flash, but more for things that don’t demand as much performance,” he said.

Flash proponents claimed that long-term total cost of ownership (TCO) for all-SSD arrays is roughly the same as for hybrids. That TCO includes the price of power, maintenance, space and other factors.

“TCO is already compelling for flash,” IDC’s Burgener said. “You can buy 20 SSDs to do the work of 250 hard disk drives. But you couldn’t buy into a flash array for as low a price as you can buy a hard-disk array.”

Bill Evans, vice president of IT at Ferrellgas, based in Overland Park, Kan., said he didn’t consider all-flash feasible when he started looking for a new storage array in early 2015. He ended up buying four Violin Memory Flash Storage Platform 7300 all-flash arrays, which he said cost less than the hybrid options he looked at from EMC and IBM for the same performance. Ferrellgas is moving its SQL Server databases to the Violin systems, and Evans is in the flash-only camp for primary storage.

“When we started our evaluation, we had no idea we could afford an all-flash array,” Evans said. “Flash had been pretty darn expensive. We had a little bit before, and used it carefully. It was a challenge trying to manage that. Going to all-flash took away the extra management of putting data in the right places on our storage. And we’ve seen considerable performance improvements.”

Not everybody finds SSDs to be a bargain, though. If flash cost the same as disk drives, all-flash storage would already have taken over the world. Even with the cost of flash coming down thanks to advances such as TLC 3D NAND, the entry price remains an issue for many.

The raw cost of flash is still higher than that of HDDs, even if you figure in data reduction for flash. SSDs cost around $8 to $9 per GB, compared with about $0.35 per GB for the most expensive HDDs.
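Data reduction narrows that gap but does not close it. A quick worked comparison using the per-gigabyte figures above (the 5:1 reduction ratio is an assumption; primary-workload ratios vary widely):

    SSD_PER_GB = 8.50     # midpoint of the $8 to $9 figure above
    HDD_PER_GB = 0.35     # the most expensive (fastest) HDDs
    REDUCTION = 5         # assumed data reduction on flash; varies by workload

    effective_ssd = SSD_PER_GB / REDUCTION
    print(f"effective flash: ${effective_ssd:.2f}/GB vs. ${HDD_PER_GB:.2f}/GB HDD")
    # -> effective flash: $1.70/GB vs. $0.35/GB HDD, still roughly 5x before TCO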

According to TechTarget Research’s Q4 2015 post-purchase survey for all-flash arrays, the average selling price for a flash-only array was $1.63 million for 63 deals in the quarter.

Michael Scarpelli, IT director for the La Jolla Institute for Allergy & Immunology in La Jolla, Calif., uses hybrid arrays from Nimble Storage and Reduxio, and takes advantage of flash’s performance for virtual machine storage. But he said the idea of all-flash for all primary storage remains beyond his budget.

“The limiting factor for us is cost,” Scarpelli said. “If I had my druthers, sure, I’d go all-flash. Why not? But when we’re going for storage, we basically drop in what is going to be the cheapest and most reliable drives. When we’re dealing with [virtual machine]-related storage, we always try to get a little of that flash in there, because you want that fast. You want fast writes, fast reads, fast cache. We usually want some of it, but if a product did not have that, I don’t know, that would turn me off.”

HYBRID INFRASTRUCTURE WILL OUTLIVE HYBRID ARRAYS

Mark Peters, a senior analyst at Enterprise Strategy Group Inc., in Milford, Mass., said the recent flash-only flood “is testimony that we are in the midst of a media change across the whole market. As price drops and flash [becomes] more accessible, it’s logical that it will take over for disk.” Still, he said a role remains for HDDs, whether they are inside primary arrays or not.

“I am still a massive fan of hybrid, with the word infrastructure behind it rather than array,” Peters said. “The need for hybrid is absolute, unless you want to waste money. Why would you put cold data on flash? It’s not sensible.”

Even if all primary workloads go to flash, Peters said HDDs will play an important role in storage for a long time. “The TCO of flash is becoming attractive at the high end, because you were going to pay more anyway,” he said. “At that high end, I agree, flash makes sense. But for data you are going to use only occasionally, it makes sense to consider disk.” —Dave Raffo


SDS MARKET

Software-Defined Storage Market: Custom vs. Commodity Hardware

It seems the software-defined storage market is all about using commodity hardware to build out a data center. Yet most software-defined storage vendors tell you exactly what hardware to use. If the hardware were truly a commodity, then a simple minimum specification should be all that is required.

The reality is that x86 server hardware is not an undifferentiated commodity: Every model from each vendor has different components. The result is that each combination of components needs to be certified together for each hypervisor. Even standardized components like network and storage adapters may need to be certified with storage services.

The x86 servers that power modern data centers are often referred to as a commodity. The implication is that any server will do. In reality, this is not true. When a software-defined product vendor talks about commodity hardware, they are referring to an economic model rather than complete interchangeability.

Many older storage systems that relied on custom hardware are hardware-defined. The array was built from disk shelves and controllers designed specifically for the storage array. These storage systems had a huge amount of hardware design cost and saw slow change, because physical manufacturing of sheet metal and custom ASICs both take a lot of time. Software-defined storage runs on standard servers that could run Windows or a hypervisor; it’s the software that turns this hardware into a storage array.

CONTROL FOR PERFORMANCE

By tightly controlling both the hardware and the software, software-defined storage vendors can deliver an integrated experience to customers. Part of delivering this experience is limiting the cost of hardware certification. More combinations of hardware and software that must be tested means more cost and more time before a product can ship. Software-defined storage vendors commonly require the use of specific hardware that you buy from them. By restricting the hardware selection, a software-defined product vendor can control the cost of testing new software versions.

Maximizing storage performance is about aligning the performance of every step on the data path. That means SDS vendors that want to extract the best performance also tune their software for the exact hardware components they choose.

These vendors will choose specific network adapters, storage adapters and drives to align performance and ensure the array benefits from every component. This tight control is an essential part of high-performance storage.

BRING YOUR OWN SERVERS

Not all storage needs to be blazing fast. Many workloads suit storage with moderate performance and much lower cost. This segment of the software-defined storage market is much more amenable to customers using their choice of commodity hardware. SDS vendors targeting midrange performance and lower are likely to ship software-only products, rather than software wrapped in hardware. Today’s moderate performance is still far better than what was achievable five years ago. The rise of low-cost solid-state drives enables astounding performance in hybrid configurations.

Some software-defined storage providers want to deliver ultimate performance by tightly controlling every component. Other SDS vendors are able to deliver great performance using whatever hardware you choose. Both use commodity hardware to enable lower costs and speed innovation. —Alastair Cooke


ABOUT THE AUTHORS

ALASTAIR COOKE is a freelance trainer, consultant and blogger specializing in server and desktop virtualization.

DAVE RAFFO is associate editorial director with TechTarget’s Storage Media Group.

MARC STAIMER is the founder, senior analyst and chief dragon slayer of Dragon Slayer Consulting in Beaverton, Ore.

Best Bets for Modernizing Legacy Storage Systems is a SearchStorage.com e-publication.

Rich Castagna | VP Editorial

Ed Hannan | Senior Managing Editor

Cathy Gagne | VP Editorial

James Miller | Executive Editor

Rodney Brown | Senior Site Editor

Dave Raffo | Associate Editorial Director

Garry Kranz | Staff Writer

Linda Koury | Director of Online Design

Jillian Coffin | Publisher

TechTarget 275 Grove Street, Newton, MA 02466

www.techtarget.com

© 2016 TechTarget Inc. No part of this publication may be transmitted or reproduced in any form or by any means without written permission from the publisher. TechTarget reprints are available through The YGS Group.

About TechTarget: TechTarget publishes media for information technology professionals. More than 100 focused websites enable quick access to a deep store of news, advice and analysis about the technologies, products and processes crucial to your job. Our live and virtual events give you direct access to independent expert commentary and advice. At IT Knowledge Exchange, our social community, you can get advice and share solutions with peers and experts.

COVER: ISTOCK

STAY CONNECTED!

Follow @SearchStorageTT today