MK-95HPP000-03 Hitachi Protection Platform NetBackup OST Best Practices

Hitachi Protection Platform NetBackup OST Best Practices · 2019-04-23 · This document provides best practices for OST using NetBackup V7.7 with the HDS HPP OST-compatible devices



MK-95HPP000-03

Hitachi Protection Platform NetBackup OST Best Practices


© 2008-2017 Hitachi, Ltd. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose without the express written permission of Hitachi, Ltd. and Hitachi Data Systems Corporation (hereinafter referred to as “Hitachi”).

Hitachi, Ltd. and Hitachi Data Systems reserve the right to make changes to this document at any time without notice and assume no responsibility for its use. Hitachi, Ltd. and Hitachi Data Systems products and services can only be ordered under the terms and conditions of Hitachi Data Systems' applicable agreements.

All of the features described in this document may not be currently available. Refer to the most recent product announcement or contact your local Hitachi Data Systems sales office for information on feature and product availability.

Notice: Hitachi Data Systems products and services can be ordered only under the terms and conditions of Hitachi Data Systems’ applicable agreements. The use of Hitachi Data Systems products is governed by the terms of your agreements with Hitachi Data Systems.

Hitachi is a registered trademark of Hitachi, Ltd. in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of Hitachi in the United States and other countries. All other trademarks, service marks, and company names are properties of their respective owners.

Export authorization is required for Data At Rest Encryption. Import/Use regulations may restrict exports to certain countries:

• China – AMS2000 is eligible for import, but the License Key and SED may not be sent to China
• France – Import pending completion of registration formalities
• Hong Kong – Import pending completion of registration formalities
• Israel – Import pending completion of registration formalities
• Russia – Import pending completion of notification formalities
• Distribution Centers – IDC, EDC and ADC cleared for exports


Contents

Documentation Identifiers . . . 1
Introduction . . . 1
    Intended Audience . . . 1
    Overview . . . 1
    Scope . . . 1
    Hitachi Protection Platform . . . 2
OpenStorage Technology . . . 2
    Overview . . . 2
    OST Features . . . 3
    Optimized Duplication . . . 3
    Auto Image Replication (A.I.R.) . . . 4
    NetBackup Accelerator . . . 5
    Optimized Synthetics . . . 6
OST Best Practices for the S-Series . . . 6
    Version Compatibility . . . 7
    Naming Conventions . . . 7
    OST Plug-In Versions . . . 8
    Backup Policy Configuration . . . 8
    Balance Storage Lifecycle Policies (SLPs) Duplication Resources . . . 8
    Introduce new SLPs Gradually . . . 9
    Conserve Storage Lifecycle Policy (SLP) Numbers . . . 9
    Monitor SLP Progress and Backlog Growth . . . 9
    Checkpoints . . . 10
    Give Backups Priority Over Duplications . . . 10
    Multistreaming and Small Backup Sets . . . 11
    Use “Maximum I/O streams per volume” with Disk Pools . . . 11
    Make Space For Duplication . . . 12
    Managing OST Duplication Backlog . . . 12
    Secondary Operation Windows . . . 12
    Storage Server Naming . . . 13
    Hosts File . . . 13
    Disk Pool Capacity and Multiple Storage Pools . . . 14
    NetBackup Device Mappings . . . 14
    NetBackup Buffer Sizes . . . 15
    Disk Polling Service Timeout . . . 16
    Network Device Timeouts . . . 16
    Use Multiple Storage Servers . . . 16
    Use Multiple I/O Nodes . . . 17
    One-to-One Relationship between an OST Storage Server and a NetBackup Domain . . . 17
    Preferred OST IP Address Assignments . . . 17
    Retention-Based Disk Volumes Naming . . . 18


    Retention-Based Storage . . . 18
    Retention-Based Storage Organization . . . 19
    Disk Volumes for Accelerator . . . 20
    Increase Deduplication Database Cache . . . 21
    OST-over-Fibre-Channel Recommendations . . . 21
    Data Erasure . . . 23
    Measuring Performance Using GEN_DATA . . . 23
Terminology . . . 24
Related Documentation . . . 25


Documentation Identifiers

Introduction

This section describes the intended audience and provides a general overview of Hitachi Protection Platform (HPP) OST best practices.

Intended Audience

This document is intended for customers, authorized services providers, and Hitachi Data Systems personnel.

Overview

This document describes Veritas NetBackup OpenStorage Technology (OST) and how it works with Hitachi Data Systems OST-compatible devices. OST is an API that allows third-party storage providers to interact with Veritas backup and recovery products. By providing a means for the backup software and storage hardware to link together seamlessly, OST maximizes efficiencies and minimizes complexity. Enterprise-class solutions such as NetBackup and HDS hardware can achieve new levels of backup and recovery performance.

NetBackup and the HDS HPP are industry-leading technologies that together fulfill most backup and recovery requirements. Follow the best practices presented here to get the most from their functional capabilities. This document should provide a better understanding of the complexity of the technology, as well as how to implement it successfully.

Scope

This document provides best practices for OST using NetBackup V7.7 with the HDS HPP OST-compatible devices. It does not provide generic information about either NetBackup or HDS HPP products, capabilities, connectivity, and so on, unless directly relevant to the interaction between NetBackup and the Hitachi HPP devices using OST.

Number           Date           Description
MK-95HPP000-00   December 2014  First publication
MK-95HPP000-01   July 2015      Second publication
MK-95HPP000-02   July 2016      Third publication
MK-95HPP000-03   January 2017   Fourth publication


Hitachi Protection Platform

The HDS OST-compatible devices comprise legacy Sepaton data protection products, such as the S2100, and newer HDS products, such as the S2750.

OpenStorage Technology

Overview

Veritas OpenStorage Technology (OST) allows storage partners to deliver backup and recovery solutions that have a tighter integration with NetBackup. The OpenStorage initiative allows you to better use advanced disk-based storage solutions from HDS. Tighter integration between the backup software and storage allows you to increase the efficiency and performance of your existing assets using an easy-to-deploy, purpose-built backup appliance with contemporary features and functions.

Intelligent storage device manufacturers now include advanced capabilities such as deduplication, replication, and copying and writing directly to tape. Initially, it was not possible to coordinate these activities with the backup and recovery software. An appliance could make a copy, but the backup software was unaware of it, so the backup catalog was incomplete. In addition, the backup software had no way to manage the device or tell it to perform certain functions, such as making a copy. Veritas developed OST to address these issues.

When partners become members of the Symantec (now Veritas) Technology Enabled Program (STEP), they gain access to the OpenStorage API. Partners can then write code for a plug-in that can be installed on the NetBackup media server platforms of their choice, for example, Windows, Solaris, and Linux. OST allows HDS to write plug-ins for its storage appliances that let NetBackup control when backup images are created, duplicated, and deleted. It also allows the HDS protection platform to control how the images are stored in and copied between devices. In this manner, Hitachi adds unique business value to the joint solution with its specialized technological innovations.

The OST API is protocol independent, so HDS can use the protocols best suited for its devices and customers, including Fibre Channel and TCP/IP (IPv4 only). The HPP appliance controls the storage format, where the images reside in the storage platform, and the data transfer method. Consequently, performance and storage utilization are highly optimized. NetBackup does not know how the backup images are stored, nor does it control which capabilities HDS exposes through the OpenStorage API. Similarly, NetBackup does not control the communication between the plug-in and the storage server. The HPP appliance selects the API or protocol to use between the plug-in and the storage server. NetBackup determines when backup images are created, copied, or deleted. Images cannot be moved, expired, or deleted on the storage until NetBackup instructs the HPP to do so through the API.

The following table maps HPP models to their legacy Sepaton equivalents:

HPP Model   Series   Sepaton Model   Series
S2750       HPP-S    NA              NA
S2700       HPP-S    NA              NA
S2500       HPP-S    S2100           ES3
S1500       HPP-S    NA              NA

OST Features

In addition to backup and restore, OST can provide a number of additional technology functions. Those currently supported by HDS are:

• Optimized Duplication
• Auto Image Replication (A.I.R.)
• NetBackup Accelerator
• Optimized Synthetics

When an OpenStorage disk appliance can copy the data on one appliance over to a similar appliance, NetBackup can use that capability. Replication can occur in two scenarios: Optimized Duplication (OpDup) and Auto-Image Replication (A.I.R).

Optimized Duplication

Optimized Duplication (OpDup) is done between two OST storage servers in the same NetBackup domain. With Optimized duplication between HPP devices within the same NetBackup domain, the devices can take advantage of the HPP deduplication functionality to provide the optimized duplication feature. The ability to duplicate backups to storage in other locations, often across geographical sites, helps facilitate disaster recovery.

Benefits of Optimized Duplication are:

• Reduced workload on the NetBackup media servers. Thus, more backups can be performed.

• Faster duplication since duplication can occur in the background and simultaneously with ongoing backup jobs.

• Reduced network traffic since only new and changed data is transferred between the storage servers.

• The ability to implement a model where a single production data center replicates to one or more disaster recovery sites.


Auto Image Replication (A.I.R.)

Backups that are generated in one NetBackup domain can be replicated to storage in one or more target NetBackup domains. This process is referred to as Auto Image Replication (A.I.R.). The actual data transmission is done between the storage devices, with minimal involvement by the NetBackup media servers.

To optimize this data transmission, the HPP implementation of A.I.R. uses the HDS HPP deduplication functionality to send only unique data blocks between the separate NetBackup domains, along with “pointers” to data already in the target NetBackup domain.

The ability to replicate backups to storage in other NetBackup domains, often across geographical sites, helps facilitate the following disaster recovery needs:

• One-to-one model. A single production data center can back up to a disaster recovery site.

• One-to-many model. A single production data center can back up to multiple disaster recovery sites.

• Many-to-one model. Remote offices in multiple domains can back up to storage in a single domain.

• Many-to-many model. Remote data centers in multiple domains can back up to multiple disaster recovery sites.

HPP Implementation Notes

OST A.I.R. must be configured on the HPP system before creating any disk pools in NetBackup, otherwise the option to select OST A.I.R. will be unavailable in NetBackup.

When OST A.I.R. is configured for one-way replication on an HPP system, you should ignore the following message:

There are no Destination Disk Volume Pairs


NetBackup Accelerator

NetBackup Accelerator increases the speed of backups. The increased speed is made possible through the utilization of change-detection techniques on the client. These change-detection techniques identify changes to files that have occurred since the last backup. The client sends the changed data along with “pointers” to unchanged data to the media server, thus creating an efficient backup stream. The media server then looks up the location of the unchanged data in previous backups and sends both the unchanged data and the “pointers” to the unchanged data in previous backups to the OST storage server. This allows the OST storage server to construct the new backup image.

Accelerator has the following advantages:

• Reduces the I/O and CPU overhead on the client. The result is a faster backup and reduced load on the client.

• Creates a compact backup stream that uses reduced network bandwidth between the client and server.

• Creates a full image that contains all data necessary for a restore.

NetBackup Accelerator Can Be Problematic for HPP Systems

A NetBackup Accelerator job will create multiple I/O streams for read operations that may persist for long durations. If you are using the HPP OST-over-Fibre Channel feature, this can be problematic due to the NetBackup system reserving multiple Fibre Channel (FC) devices for a given backup job.

This may cause contention for FC devices, thus preventing new backup jobs from running. When using NetBackup Accelerator with the OST-over-Fibre Channel feature, careful planning is required to determine the appropriate number of FC devices to configure on your Hitachi Protection Platform backup appliance.

For additional information, refer to the section Increase the Number of Fibre Channel Devices on page 22.


Optimized Synthetics

A NetBackup media server uses messages to instruct the storage server which of the full and incremental backup images to use when creating a synthetic backup. The HDS HPP appliance constructs (synthesizes) the backup image directly on the disk storage. In NetBackup, backups created in this way are known as optimized synthetic backups.

The OpenStorage optimized synthetic backup method provides these benefits:

• It is faster than a regular synthetic backup, since regular synthetic backups are constructed on the media server: the backup images are moved across the network from the storage to the media server, synthesized into one image, and the synthetic image is then moved back to the storage. An optimized synthetic backup is constructed directly on the storage.

• Requires no data movement across the network. Regular synthetic backups use network traffic.

• Uses fewer disk resources since duplicate data is not created or stored.

OST Best Practices for the S-Series

This section describes best practices for configuration and management of Veritas and HDS HPP systems. Software updates, functionality changes and new information all contribute to a changing relationship between NetBackup, OST, and the HDS devices. Visit https://support.hds.com for new updates to this document.

The order in which the following recommendations appear should not be considered an order of priority. As this document develops over time, some best practices may be removed and other recommendations added, meaning that any priority of implementation, if any existed, may no longer be relevant.


Version Compatibility

OST technology was released in 2007 with NetBackup v6.5. OST has developed significantly since then with added functionality and improved performance. Thus, it is important that all technology versions are compatible with each other, including:

• NetBackup
• Server OS
• Client OS
• HDS OST plug-in
• HDS Hitachi Protection Platform systems

The NetBackup Enterprise Server Hardware Compatibility List shows the software versions of server and client, NetBackup, and HDS plug-in that have been qualified for use with OST.

https://www.veritas.com/support/en_US/article.TECH76495

Naming Conventions

Verify that naming conventions are consistent for NetBackup media servers, Master Servers and HDS HPP. If FQDNs are used, use them for all media servers. If short names are used, be consistent in using short names for all media servers as well.

Many enterprises use DNS to provide names in response to requests from devices. For NetBackup and HDS devices to interact efficiently, it is vitally important that names are consistent for all elements of the environment. Where DNS is not administered accurately, local host file entries on the NetBackup servers and HDS devices should be considered. Implementing this approach can cause administrative issues of its own, but it can be preferable to repeated resolution of naming convention issues.
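Where local hosts file entries are used, the sketch below shows the kind of consistent layout intended (the hostnames and 192.0.2.x addresses are purely illustrative placeholders, not values from any real environment):

```
# NetBackup master server, media server, and HPP OST storage server.
# FQDN first, short alias second; the identical entries should appear
# on every NetBackup server and on the HPP.
192.0.2.10   nbu-master.example.com   nbu-master
192.0.2.11   nbu-media1.example.com   nbu-media1
192.0.2.20   hpp-ost1.example.com     hpp-ost1
```

Whichever form is chosen, the same form (FQDN or short name) should then be used consistently when configuring storage servers and media servers.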


OST Plug-In Versions

The NetBackup Hardware Compatibility List shows the plug-ins provided by HDS that are compatible with the product. Some plug-ins listed there, although compatible, may not offer the performance of later versions. Use the OST plug-in version that is supported by the installed HPP version. You can find that information in the console manager (System > Chassis > OST > OST Plug-in Downloads).

Figure 1: Sample OST Plug-In Downloads Screen

To check the version of the plug-in installed, use the command:

Linux and UNIX:

/usr/openv/netbackup/bin/admincmd/bpstsinfo -pi

Windows:

%installpath%\VERITAS\netbackup\bin\admincmd\bpstsinfo -pi

Backup Policy Configuration

HDS HPP works most efficiently with large backup fragments. If fragments are small, overhead increases and deduplication ratios are reduced. Where configuration within NetBackup affects the size of a backup fragment, consider increasing the minimum fragment size where possible.

Balance Storage Lifecycle Policies (SLPs) Duplication Resources

Using storage lifecycle policies (SLPs) improves the efficiency of your duplication resources. However, SLP duplication is fundamentally the same duplication process as any other NetBackup duplication process and is subject to the same constraints. One of the most important things to remember is that all duplication relationships are one to one. Disk backups are not multiplexed; all duplication from disk-to-tape is single-stream. Disk-to-disk duplications will run in parallel (subject to any limitations on the maximum I/O streams) but duplications from disk-to-tape are serial in nature with only one stream going to each tape drive.


Introduce new SLPs Gradually

Adding new duplication jobs to the environment places additional demands on the backup storage infrastructure. To help determine how an increase in duplication and storage lifecycle policies (SLPs) will impact resources, apply SLPs to backup policies progressively and observe whether, and by how much, hardware use increases. Always consider the additional stress that an increase in duplication may place on the environment.

Conserve Storage Lifecycle Policy (SLP) Numbers

Do not create more SLPs than needed. A single SLP can be applied to multiple backup policies and a set of two or three SLPs (one for each schedule type) may be sufficient for the majority of backup policies in an environment. Using a small set of SLPs helps to standardize things like retention periods which, in turn, make storage usage more efficient.

This is consistent with general best practices to be conservative with the number of Backup Policies and Storage Unit Groups that are configured. Minimizing complexity is always a best practice.

Monitor SLP Progress and Backlog Growth

It is possible to get an immediate view of the progress of a Storage Lifecycle Policy (SLP) by using the SLP utility command, nbstlutil. This command can be used to spot potential backlog problems before they build up to unmanageable levels. Two options of the nbstlutil command are particularly useful for this:

• nbstlutil report – This command provides a summary of incomplete images. It supports the -storageserver, -mediaserver, and -lifecycle qualifiers to home in on hot spots that may develop into backlog situations. It displays the number of duplication operations in progress (either queued or active) and the total size of the in-progress duplication jobs.

• nbstlutil stlilist -image_incomplete -U – This command displays details of the unfinished copies sorted by age, and can be used to determine both how long images have been in the backlog and the names of the individual images. The image at the top of the list is the oldest, so the backup time of that image is the age of the backlog. In most configurations that image should not be more than 24 hours old, and there should never be more than one backup of a particular object pending duplication at any time. Each image is categorized as either NOT_STARTED or IN_PROCESS. NOT_STARTED means that the duplication job has not yet been queued. IN_PROCESS means that the image is currently included in the process list of a queued or active duplication job; IN_PROCESS images also display which duplication operation within the hierarchy is queued or active.
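As a sketch of how the 24-hour guideline above could be monitored, the fragment below compares the backup time of the oldest incomplete image against the current time. Extracting that timestamp from the nbstlutil output is deliberately left out, since the exact output format varies by NetBackup version; the function name and threshold here are illustrative, not part of NetBackup.

```shell
# Illustrative check: warn when the oldest incomplete SLP image is older
# than 24 hours. The backup time is assumed to have already been
# extracted (e.g. from `nbstlutil stlilist -image_incomplete -U`) as
# epoch seconds; how to parse it is version-dependent and not shown.

MAX_AGE_SECS=86400   # the 24-hour guideline from this section

backlog_age_ok() {
    # $1 = backup time of the oldest incomplete image (epoch seconds)
    # $2 = current time (epoch seconds)
    age=$(( $2 - $1 ))
    if [ "$age" -gt "$MAX_AGE_SECS" ]; then
        echo "WARNING: SLP backlog age ${age}s exceeds 24 hours"
        return 1
    fi
    echo "OK: SLP backlog age ${age}s"
}

# Example: an image backed up roughly two hours ago is within the guideline.
backlog_age_ok "$(( $(date +%s) - 7200 ))" "$(date +%s)"
```

A check like this can be scheduled from cron so a growing backlog is noticed long before restore points are at risk.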


Checkpoints

Checkpoints can be enabled in NetBackup backup policies. By taking checkpoints during a backup, time can be saved if the backup fails. Taking checkpoints periodically during a backup allows NetBackup to retry a failed backup from the beginning of the last checkpoint, rather than restart the entire job.

The checkpoint interval indicates how often NetBackup takes a checkpoint during a backup. The default is 15 minutes. Checkpoint intervals can be set on a policy-by-policy basis. A checkpoint interval that is too granular can cause a backup fragment to be too small for the HDS device, causing issues as mentioned above. Balance the loss of performance due to frequent checkpoints with the possible time lost when failed backups restart. If the frequency of checkpoints affects performance, increase the time between checkpoints or disable them. The maximum checkpoint interval is 240 minutes.
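The balance described above can be reasoned about with simple arithmetic: data written per interval is roughly throughput times interval length. The sketch below does exactly that; the throughput and fragment-size figures are illustrative assumptions, not HPP specifications.

```shell
# Rough estimate of the data written between checkpoints. If the result
# is small relative to the fragment size the HPP handles efficiently,
# the checkpoint interval may be too short. All figures are assumptions.

THROUGHPUT_MB_S=100          # assumed sustained backup throughput (MB/s)
CHECKPOINT_INTERVAL_MIN=15   # NetBackup default checkpoint interval
MIN_FRAGMENT_MB=2048         # illustrative lower bound for a "large" fragment

DATA_PER_INTERVAL_MB=$(( THROUGHPUT_MB_S * CHECKPOINT_INTERVAL_MIN * 60 ))
echo "Approximate data per checkpoint interval: ${DATA_PER_INTERVAL_MB} MB"

if [ "$DATA_PER_INTERVAL_MB" -lt "$MIN_FRAGMENT_MB" ]; then
    echo "Consider lengthening the checkpoint interval (maximum 240 minutes)"
fi
```

With the assumed 100 MB/s stream, even the default 15-minute interval yields large intervals of data; the concern arises mainly for slow clients or very short intervals.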

Give Backups Priority Over Duplications

Under default conditions (when windows are not used to restrict the timing of secondary operations), SLPs are designed to ensure that duplications are done as quickly and efficiently as possible, by utilizing available resources. This is generally acceptable when backups are not in progress but, in most cases, duplication should not take priority over backups during the backup window. The HPP also gives preference to backups by default. OST duplication requires additional time when data is being “ingested” and deduplicated.

To give backup jobs preferential access to storage, assign a higher job priority to the backup policies and a lower priority for the duplication job priority in the SLP. Setting a backup job priority greater than the duplication job priority makes it more likely that duplication jobs will not get access to disk pools if backup jobs are waiting in the queue.

NetBackup’s batching logic also creates groups based on the duplication job priority setting of each SLP. This means multiple SLPs with the same priority can be batched together. By applying the same duplication priority to multiple SLPs using the same resources and retention periods, it is possible to increase the overall batching efficiency.


Multistreaming and Small Backup Sets

Multistreaming in NetBackup uses multiple concurrent backup streams to maximize throughput from a client. Some disks may have minimal data requiring backup; if each is given its own backup stream, the resulting backup images can be too small for the HPP to handle efficiently. Therefore, use multistreaming only where a single-stream backup would be excessively large.

NetBackup limits concurrent operations using a number of configuration settings, such as Maximum concurrent jobs, Maximum jobs per client, Maximum I/O streams per volume, and Limit jobs per policy.

Use “Maximum I/O streams per volume” with Disk Pools

Disk storage units provide a limit to the number of concurrent write jobs that use the storage unit, however there are no limits on the number of read jobs (restores and duplications) that may be accessing the same disk pool at the same time. Multiple storage units can also access the same disk pool simultaneously. This can give rise to unexpected I/O contention on the disk pool. By setting the Maximum I/O streams per volume option on the Disk Pool, the total number of jobs that access the disk pool concurrently can be limited, regardless of the job type. While the disk pool is maxed out with backup jobs that are writing images to the device, no duplication jobs are allowed to start reading from the device.

By default, Maximum I/O streams per volume is not enabled and there is no limit on the number of streams. When it is enabled on a disk pool, the default number of streams is 2. The optimal number of streams varies with the hardware available, network bandwidth, the HDS appliance nodes in use, and so on.

It is preferable to separate writes from reads whenever possible, in order to minimize I/O contention. Where this is not possible, for example where there is a 24-hour backup window, a means to ensure that duplication jobs can be processed is required. This can be done by creating a differential between the Maximum I/O streams per volume setting of disk pools and the Maximum Concurrent Jobs setting of disk storage units: the total number of concurrent jobs across all storage units writing to a disk pool should be less than the maximum number of I/O streams for that disk pool. The differential should be equivalent to the number of duplication jobs expected to run, so that those duplications complete in a timely manner. A “timely manner” can be a day, a week, or whatever is considered acceptable. The main objectives are that:

• Duplication backlog does not increase over time
• Backups are completed within the expected recovery point limit

If either of these objectives is not met, further reconfiguration may be required, possibly including new software or hardware.
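The sizing rule above can be sketched as simple arithmetic: the headroom left for duplication jobs is the disk pool's stream limit minus the write jobs its storage units can start. All values below are illustrative assumptions, not recommended settings.

```shell
# Illustrative sizing check for the differential described above.
max_io_streams_per_volume=32     # assumed Maximum I/O streams per volume
storage_unit_jobs="12 8 6"       # assumed Maximum concurrent jobs per storage unit

total_backup_jobs=0
for jobs in $storage_unit_jobs; do
  total_backup_jobs=$((total_backup_jobs + jobs))
done

# Streams that remain available for duplication (read) jobs
duplication_headroom=$((max_io_streams_per_volume - total_backup_jobs))
echo "$duplication_headroom"    # 6
```

If the headroom is smaller than the number of duplication jobs expected to run concurrently, either lower the storage unit job limits or raise the stream limit on the disk pool.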

Make Space For Duplication

Duplication of data from the source HPP requires additional reads after the data has “landed.” System resources must be available to accommodate the amount of data that must be duplicated. Additional resources are also required on the target side to receive the new data and deduplicate it, as with a backup. Duplication taxes the NetBackup resource broker (nbrb) twice as much as a backup does, because duplication needs resources both to read and to write. If nbrb is overtaxed, it can slow the rate at which all types of jobs (backups, duplications, and restores) acquire resources and begin to move data. As such, lower resource contention wherever possible, for example by separating backup and duplication windows, minimizing resource allocations to one type of job activity, and so on. If backups are transported via Fibre Channel, refer to the section Increase the Number of Fibre Channel Devices on page 22.

See Configuring optimized duplication to an OpenStorage device within the same NetBackup domain: https://www.veritas.com/support/en_US/article.000053035

Managing OST Duplication Backlog

If a backlog of duplication jobs develops, performance may be improved by implementing the following changes:

• Avoid scheduling another backup while duplication jobs are running.

• Reduce the deduplication timeout, potentially to zero.
• Increase the number of replication agent jobs for HPP V7.2.0.1 or earlier to 16 per node. HPP V7.2.0.2 has increased this value by default.

Secondary Operation Windows

In NetBackup 7.6 it is possible to control when secondary operations, such as duplications, execute. Backup windows can be significantly reduced by scheduling duplication operations so that they run outside of the normal backup window. The windows that control the secondary operations are configured in the same way as the windows used in backup policies. The primary difference between the two window types is that a backup policy window is a start window (i.e., a period during which backups can start), while an SLP window is an operating window: the operation stops when the window closes.

Storage Server Naming

OST storage server names must be unique within the LAN defined by NetBackup and the HPP nodes. HDS recommends the name also identify the system within the entire corporate network. For example:

<hostname>-<S/N>-<site name>-<ordinal>

snow-13999-hq-001
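The naming convention can be expressed as a trivial helper, shown here with the sample values above:

```shell
# Compose a storage server name per the <hostname>-<S/N>-<site name>-<ordinal>
# convention, using the sample values from this document.
host=snow
serial=13999
site=hq
ordinal=001
storage_server="${host}-${serial}-${site}-${ordinal}"
echo "$storage_server"    # snow-13999-hq-001
```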

Storage server names should also be added as aliases in the hosts files as shown below.

Remember, hosts files must be updated on all systems that communicate with the HPP using NetBackup.

Hosts File

With NetBackup, it is imperative that the storage server and node names resolve to an IP address. To provide a measure of fault tolerance, the HPP hosts files should also contain address-to-name translations, so that if the name service is down, IP addresses can still be resolved by name.

Sample:

192.168.22.36 wind wind.hds.com wind-198-ny # HPP node 0

192.168.22.28 snow snow.hds.com snow-199-ny # HPP node 1

192.168.22.105 rain rain.hds.com rain-201-nj # HPP target

192.168.22.108 sleet sleet.hds.com # client

192.168.22.153 hail hail.hds.com # media srvr

192.168.22.107 storm storm.hds.com # media srvr

Note that the storage server names ending in “ny” or “nj” are aliases for the three host names and share the same IP addresses.

Disk Pool Capacity and Multiple Storage Pools

All disk volumes residing in a storage pool share the available capacity. A disk volume never spans HPP storage pools; however, a single storage server can use disk volumes residing in multiple storage pools.

Disk volumes do not fail over to another disk volume. However, the data stored in a disk volume can be duplicated or replicated to another disk volume on another HPP system.

If a disk volume is full, it can no longer be written to. Notifications are sent when total storage pool usage reaches 90 percent. If a disk volume reaches 98 percent, the HPP system stops storing data.
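The thresholds above can be sketched as a simple check. The capacity figures below are assumptions for illustration; the 90 and 98 percent thresholds are the values stated in this document.

```shell
# Sketch of the capacity thresholds described above: notify at 90%,
# stop storing data at 98%. Usage figures are assumed for illustration.
used_gb=450
capacity_gb=500
usage_pct=$((used_gb * 100 / capacity_gb))

if [ "$usage_pct" -ge 98 ]; then
  echo "volume at ${usage_pct}%: HPP stops storing data"
elif [ "$usage_pct" -ge 90 ]; then
  echo "pool at ${usage_pct}%: notification sent"
else
  echo "usage at ${usage_pct}%: ok"
fi
```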

NetBackup Device Mappings

Support for new devices in NetBackup is established through device mappings files. These files are used by the NetBackup Enterprise Media Manager database to determine the protocols and settings used to communicate with storage devices. They should be updated whenever there is a new release to ensure correct functionality with third-party OST devices, such as the HPP appliance.

The latest version of these files is included in NetBackup release updates and is also available for download independently from the Veritas support site. Updating the mappings independently should only be required when OST hardware has been upgraded and the version of NetBackup is not at the proper revision. If necessary, obtain the files from:

Linux and Unix: https://www.veritas.com/support/en_US/article.000025759

Windows: https://www.veritas.com/support/en_US/article.000025758

To check that the version of the installed files is correct, run the following command:

Windows: %installpath%\VERITAS\Volmgr\bin\tpext -get_dev_mappings_ver

Linux and Unix: /usr/openv/volmgr/bin/tpext -get_dev_mappings_ver

Install according to the instructions provided.

NetBackup Buffer Sizes

NetBackup uses network buffers to transfer data, with a default size and number of buffers unless specified otherwise. The following buffer sizes are recommended by Veritas and HDS:

• SIZE_DATA_BUFFERS = 262144
• NUMBER_DATA_BUFFERS = 64
• NET_BUFFER_SZ = 524288
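On a UNIX media server these values are set by writing NetBackup touch files. The sketch below is hedged: the file names are the standard NetBackup touch-file names, but the target directory is parameterized and defaults to a local demo directory; on a real media server NBU_DIR would be /usr/openv/netbackup.

```shell
# Hedged sketch: write the recommended buffer values as NetBackup touch
# files. NBU_DIR defaults to a local demo directory for safe experimentation;
# on a real media server it would be /usr/openv/netbackup.
NBU_DIR="${NBU_DIR:-./nbu_demo}"
mkdir -p "$NBU_DIR/db/config"

echo 262144 > "$NBU_DIR/db/config/SIZE_DATA_BUFFERS"
echo 64     > "$NBU_DIR/db/config/NUMBER_DATA_BUFFERS"
echo 524288 > "$NBU_DIR/NET_BUFFER_SZ"

cat "$NBU_DIR/db/config/SIZE_DATA_BUFFERS"    # 262144
```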

Further instruction and explanation on how to implement these values can be found at www.veritas.com/support.

The NET_BUFFER_SZ entry in NetBackup is a configuration element that allows administrators to fine-tune the network send and receive buffers within the operating system TCP stack. By default, NetBackup tunes this to 4x the SIZE_DATA_BUFFERS value + 1024 bytes. For some environments this may not be optimal, so allowing administrators to “hard code” a value of their own choice allows further refinement. This requirement came about because legacy TCP stacks and networking environments were unable to cope with the requirements of the backup software.
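The default derivation described above is simple arithmetic. With the recommended SIZE_DATA_BUFFERS value, the automatically derived default differs from the 524288-byte recommendation, which is why the explicit setting matters:

```shell
# NetBackup's default NET_BUFFER_SZ derivation described above:
# 4 x SIZE_DATA_BUFFERS + 1024 bytes.
SIZE_DATA_BUFFERS=262144
default_net_buffer=$((4 * SIZE_DATA_BUFFERS + 1024))
echo "$default_net_buffer"    # 1049600
```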

More recent operating systems, particularly current Linux and Windows releases, implement complex buffer-sizing algorithms that can optimize the network stack on the fly. These algorithms can be disabled when the OS detects software attempting to override them; the net result is that network traffic becomes sub-optimal or even degraded.

Disk Polling Service Timeout

The NetBackup Disk Polling Service (DPS), which is responsible for telling NetBackup whether a disk pool and disk volume are up, polls statistics from the OST servers using a system call to the NetBackup service bpstsinfo. By default, DPS has a 60-second timeout limit; if it does not get a reply within that time, DPS treats the poll as an error and marks the disk pool and disk volumes as down. OST devices need a higher value than 60 seconds; Hitachi recommends 3600 seconds. To change this setting, create the following files:

touch /usr/openv/netbackup/db/config/DPS_PROXYNOEXPIRE
echo "3600" > /usr/openv/netbackup/db/config/DPS_PROXYDEFAULTSENDTMO
echo "3600" > /usr/openv/netbackup/db/config/DPS_PROXYDEFAULTRECVTMO

Restart the nbrmms service and daemon on the NetBackup media server where the change has been made. Do this for all media servers that attach to the OST device.

Refer to the following Veritas article, which describes how to make this change on UNIX and Windows:

https://www.veritas.com/support/en_US/article.000012819

Network Device Timeouts

Duplications, replications, remote copy operations, and LAN backups can be negatively impacted by time-outs on idle connections. Network devices such as switches, firewalls, and gateways should have time-outs greater than 12 hours (or the largest data segment copied, divided by your average data rate, times 2, whichever is larger) for OST TCP ports 5562 and 5564.
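The timeout rule above reduces to a small calculation. The segment size and transfer rate below are assumptions for illustration:

```shell
# Idle-timeout rule of thumb from above: at least 12 hours, or
# 2 x (largest segment / average rate) if that is larger.
largest_segment_gb=4096    # assumed largest data segment copied
avg_rate_gb_per_hr=512     # assumed average transfer rate

timeout_hours=$((2 * largest_segment_gb / avg_rate_gb_per_hr))
[ "$timeout_hours" -lt 12 ] && timeout_hours=12
echo "${timeout_hours}h"    # 16h
```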

Use Multiple Storage Servers

In multinode environments, multiple storage servers provide another means to exploit parallelism. A storage server has a unique IP address and a fixed number of I/O servers (nodes) to support ingest from that address. Multiple storage servers distribute the I/O management across the chosen HPP nodes. Data loading is distributed by the I/O node configuration of each storage server.

One or more storage units can be matched up with a storage server. It is preferred, however, to establish a one-to-one relationship between storage units and storage servers:

wind-sn14567-ny-001 storage_2weeks

Use Multiple I/O Nodes

To allow the OST system software to dynamically balance load and resource consumption, configure multiple I/O nodes for each storage server. Conversely, high-priority backups for which no resource contention is desired should not share storage servers or I/O nodes. (This should be rare. Careful attention to the design and scheduling of policies enables optimal multitasking.)

One-to-One Relationship between an OST Storage Server and a NetBackup Domain

The HPP supports multiple OST storage servers per system (only one per processing node). Each storage server can support only a single NetBackup domain. You cannot configure multiple NetBackup domains to a single storage server.

Preferred OST IP Address Assignments

Always assign an IP address to the management port of an I/O server.

Hitachi requires that each OST I/O server (node) have an IP address assigned to the management port (eth0) for the node to be eligible as an I/O server node. Failure to assign an IP address results in the node being deemed ineligible for selection as an I/O node. The IP address does not need to be valid for the environment, nor does the NIC need to be connected; the address simply needs to be assigned.

Configure multiple (bonded) interfaces for improved data throughput

OST uses more than one I/O Ethernet port when multiple ports are defined and assigned IP addresses. Configure addresses from unique subnets on each port or bonded pair. In addition, for higher data rates, you can configure a redundant, parallel Ethernet bond (mode 4) of two 10-Gbps ports. A sample S2500 configuration showing eth0 and eth4 is:

3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq qlen 1000

link/ether 80:c1:6e:23:ea:1c brd ff:ff:ff:ff:ff:ff

inet 172.22.17.86/20 brd 172.22.31.255 scope global eth0

9: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq qlen 1000

link/ether 10:60:4b:93:81:38 brd ff:ff:ff:ff:ff:ff

inet 172.16.17.86/20 brd 172.16.31.255 scope global eth4

These ports communicate with media servers, master servers, or other HPPs on the same subnet. Internally, the I/O server keeps a list of available ports, listens on all of them, and responds to messages as they arrive. Port eth1 shown here demonstrates establishing an IP address on a different subnet, 172.16.48.0/20:

4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq qlen 1000

link/ether 80:c1:6e:23:ea:1e brd ff:ff:ff:ff:ff:ff

inet 172.16.49.86/20 scope global eth1

Retention-Based Disk Volumes Naming

You should create disk volumes named in accordance with the retention-based storage organization; for example, vol_07days. If retention is not used as the basis for organizing storage, volume names should follow another consistent scheme, such as data type.

Retention-Based Storage

HPP S-Series stores OST images in virtual disk volumes. The disk volumes are composed of many fixed-size storage containers that can hold pieces of multiple OST images. The storage space a disk volume container consumes is reclaimed only when all resident image pieces have expired. Storing images with multiple retention periods in the same disk volume causes storage containers to persist longer than necessary and wastes storage space.

Arranging NetBackup and OST storage by retention period increases efficiency and reduces used capacity. For optimal space management, a one-to-one relationship should exist between an HPP disk volume and an NBU OST image retention level. Retention periods can share a volume, but the difference between the periods should be minimized. For example, all policies with a 1- or 2-week retention period can use the same disk volume, while policies with a 3- or 4-week retention period should use a different disk volume. You should not, however, store images with a 1-week retention period in the same disk volume as images from a policy with a 13-week retention period.
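One way to express the grouping rule above is a simple mapping from retention period to disk volume. The bucket boundaries below are illustrative assumptions, not values mandated by this document:

```shell
# Illustrative mapping of retention periods (in weeks) to shared disk
# volumes, following the grouping guidance above: nearby retention
# periods share a volume; widely different periods do not.
volume_for_retention() {
  case $1 in
    1|2) echo vol_2weeks ;;
    3|4) echo vol_4weeks ;;
    *)   echo vol_13weeks ;;
  esac
}

for weeks in 1 2 3 4 13; do
  echo "${weeks}w -> $(volume_for_retention "$weeks")"
done
```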

Retention-Based Storage Organization

A simple schematic follows.

NetBackup Perspective

+-- Policies
|   +-- 2 weeks --> storage_2weeks --> pool_2weeks
|   |   +-- policy_oracle_2weeks
|   |   +-- policy_sql_2weeks
|   |   +-- policy_email_2weeks
|   +-- 4 weeks --> storage_4weeks --> pool_4weeks
|   |   +-- policy_users_4weeks
|   |   +-- policy_windows_4weeks
|   +-- 6 weeks --> storage_6weeks --> pool_6weeks
|       +-- policy_oracle_6weeks
|       +-- policy_filer_6weeks
+-- Storage Units
|   +-- storage_2weeks
|   +-- storage_4weeks
|   +-- storage_6weeks
+-- Disk Pools
    +-- pool_2weeks
    +-- pool_4weeks
    +-- pool_6weeks

HPP (VTL) Perspective

+-- OST (2 node S2x00; wind & snow)
    +-- wind-sn14567-ny-001   storage_2weeks
    |   +-- vol_2weeks        pool_2weeks
    +-- snow-sn14567-ny-002   storage_4weeks
        |                     storage_6weeks
        +-- vol_04weeks       pool_4weeks
        +-- vol_06weeks       pool_6weeks

Disk Volumes for Accelerator

NetBackup Accelerator removes redundancy from data at the client by sending only differential information to the HPP-S. HPP systems do not deduplicate NetBackup Synthetic or Accelerator data when it lands; Accelerator backups are placed in a “dedupe complete” state. The logical data cited in the backup report reflects the total logical data before the Accelerator benefit, while the physical capacity used reflects the capacity required to store the changed blocks. Thus, the deduplication ratio is based on the volume of changed blocks stored after hardware compression versus the logical data processed.
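As a worked example of the ratio described here (all figures are assumptions for illustration):

```shell
# Deduplication ratio as described above: logical data processed versus
# physical capacity used for the stored changed blocks (values assumed).
logical_gb=10000   # logical data before the Accelerator benefit
stored_gb=250      # changed blocks stored after hardware compression
ratio=$((logical_gb / stored_gb))
echo "deduplication ratio ${ratio}:1"    # deduplication ratio 40:1
```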

Accelerator backups should be stored on dedicated disk volumes based on retention period, not mixed on one volume with other types of backups. Commingling standard and accelerated backups in one OST container may impede space reclamation because the deduplication threshold will not be reached. Retention-based storage is also required to improve overall OST storage efficiency.

One obvious advantage of this separation is that the Capacity > Disk Volumes screen on the user interface then displays the Accelerator and standard backup deduplication ratios separately, allowing direct comparison with increased precision.

Veritas describes how Accelerator is implemented in this technical tip:

https://www.veritas.com/support/en_US/article.000112448

Increase Deduplication Database Cache

Use of the NetBackup Accelerator feature can cause some deduplication database tables to become very large. This results in a marked decrease in system performance if all database settings are left at their defaults. If NetBackup Accelerator is used, the database cache setting needs to be increased from the default of 8 GB to 24 GB on systems where node0 (a.k.a. the master node) has at least 90 GB of memory. Contact Hitachi Global Support & Services to have this setting checked and set appropriately if the NetBackup Accelerator feature is expected to be used.

OST-over-Fibre-Channel Recommendations

The Hitachi Protection Platform (HPP) backup appliance uses special purpose SCSI target device emulations that it presents on its Fibre Channel ports when the OST-over-Fibre-Channel feature is licensed. These devices are modeled after SCSI tape drives but are not SCSI tape drives. They are for the exclusive use of the NetBackup OST plug-in loaded on the NetBackup servers.

By default, the HPP appliance presents 24 Fibre Channel devices from each node, with six devices on each of the four FC ports. These Fibre Channel devices are allocated in a round-robin manner across the FC ports on each node. Unlike the various tape drive emulations available on the HPP with standard tape library emulations, which can be configured to be accessible from multiple FC ports, these special FC devices are accessible from one and only one FC port.
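The default layout above is simple arithmetic:

```shell
# Default OST FC device layout described above: 24 devices per node,
# distributed round-robin across the node's 4 FC ports.
devices_per_node=24
fc_ports=4
devices_per_port=$((devices_per_node / fc_ports))
echo "$devices_per_port devices on each of $fc_ports ports"    # 6 devices on each of 4 ports
```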

The following sections deal specifically with OST-over-FC implementations.

Dedicate Media Server Fibre Channel HBA to OST

The HBA configured on the media server needs to be dedicated for optimal OST operations and should not be shared as an initiator for other SCSI target devices.

Enable FC-Tape Support on Media Server Fibre Channel HBAs

Enable the FC-Tape option supported by the Fibre Channel HBA vendor. This provides Fibre Channel link-level error recovery, thus reducing possible errors seen by the backup application. Check with the HBA vendor on how to enable FC-Tape support.

Connect all Fibre Channel ports

To maximize FC device availability and throughput, connect all FC ports to your SAN or directly to your media server. Because each OST FC device is presented on one and only one FC port, media server access to all four ports on a node is required to reach all OST FC devices.

Increase the Number of Fibre Channel Devices

The number of FC devices per node can be increased to a maximum of 192 total FC devices minus the number of non-OST device emulations on that node. Increasing the number of FC devices on the HPP appliance makes additional FC devices available on the media server, allowing more concurrent I/O streams. Additional configuration changes may be needed based on OST storage unit and disk pool settings. Contact HDS Global Service and Support to increase the number of FC devices.
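The ceiling described above can be computed directly. The non-OST emulation count below is an assumption for illustration:

```shell
# Maximum OST FC devices per node, per the limit described above:
# 192 total devices minus non-OST device emulations on that node.
max_fc_devices=192
non_ost_emulations=24   # assumed existing tape-library emulations on the node
ost_fc_device_limit=$((max_fc_devices - non_ost_emulations))
echo "$ost_fc_device_limit"    # 168
```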

Consider SAN Zoning or LUN Mapping

When multiple media servers are connected to the Hitachi HPP appliance, consider using SAN-based zoning or HPP LUN mapping to segregate data traffic. This may also assist with matching the number of I/O streams configured on the disk pool to the available number of FC devices.

Restrictions when Using both Fibre Channel and IP for Ingest

If your environment requires ingest over both IP and Fibre Channel, you will need to configure two OST storage servers on the HDS backup appliance. Supporting multiple storage servers requires a minimum of two nodes, since only one storage server can be configured per node, and each is limited to one protocol and one NetBackup domain.

Avoid Inter Switch Links (ISLs)

Whenever possible, avoid using ISLs between media servers and the HDS HPP with which they communicate. ISLs can quickly become congested when multiple transfers occur on multiple devices. Consolidate media servers and HDS nodes on individual physical switches if possible, and avoid multi-switch ISLs for all OST/FC transfers. In addition to increased latency, multi-switch hops consume double the available ISL bandwidth.

Data Erasure

Data erasure (previously known as data shredding) is currently implemented as a single-threaded process and is an expensive I/O operation due to security requirements. As such, it is strongly recommended that only data that truly requires erasure be subjected to this process. Because data erasure is a disk volume property, use a separate disk volume (or set of disk volumes) to isolate erasures from “normal” data. If data-at-rest encryption is used, data erasure is redundant and is not advised.

Measuring Performance Using GEN_DATA

Veritas provides a Technical Solution that describes a set of file list directives to aid performance tuning:

https://www.veritas.com/support/en_US/article.000091135

Data is algorithmically generated under the control of a NetBackup client. Size, number of copies, compressibility, delivery rate, and other parameters can be programmed. An independent NetBackup policy, such as GEN_Test, is recommended.

Example One

This filelist contains commands to execute backups using generated data:

NEW_STREAM
GEN_DATA
GEN_RANDOM_SEED=1
GEN_DEDUP_KBSTRIDE=512
GEN_KBSIZE=1048576
GEN_MAXFILES=5
GEN_PERCENT_DEDUP=50

Example Two

The following commands execute backups from the command line interface:

bpbackup -p GEN_Test -s User -w GEN_DATA GEN_KBSIZE=16777216 GEN_MAXFILES=8 GEN_RANDOM_SEED=11 GEN_DEDUP_KBSTRIDE=512 GEN_PERCENT_RANDOM=99 &

bpbackup -p GEN_Test -s User -w GEN_DATA GEN_KBSIZE=16777216 GEN_MAXFILES=8 GEN_RANDOM_SEED=23 GEN_DEDUP_KBSTRIDE=512 GEN_PERCENT_RANDOM=99 &

bpbackup -p GEN_Test -s User -w GEN_DATA GEN_KBSIZE=16777216 GEN_MAXFILES=8 GEN_RANDOM_SEED=31 GEN_DEDUP_KBSTRIDE=512 GEN_PERCENT_RANDOM=99 &

In laboratory tests, each invocation of the GEN_Test policy generated approximately 150,000 kBps, as measured by 'sar -n DEV 5', on a 10-Gbps Ethernet connection. The data rate rolled off when more than three commands were executed in parallel.

Terminology

This table provides the typical OST term and its associated Veritas OST API term.

TABLE 1-1. OST and Veritas Terminology Associations

Disk Pool: Disk pools allow you to partition and pool your physical storage resources.

Disk Volume: Logical storage unit equivalent to NetBackup's disk pool.

Disk Volume Pair: Links the source and destination disk volumes together as a pair.

Image: Subset of backup data. An image can be either a complete data set or a data set fragment. An individual image can be replicated within a single disk volume or across multiple disk volumes and multiple storage servers. Each logical backup set has a minimum of three images (header image, fragment image, and true image restore image).

I/O Node: Input-output node.

Platform: The HPP platform, displayed by NetBackup as a storage server.

Plug-in: HPP software implementing the OpenStorage API, which must be installed on a media server.

Storage Lifecycle Policy: Storage lifecycle policies support the immediate production of a tape copy after backing up data to the HPP S-Series.

Storage Node: An HPP S-Series storage system.

Storage Server: OST platform software running on the HPP platform to service requests.

Related Documentation

These links reference further useful documentation provided by Veritas:

NetBackup 7.6 Administrator Guide: https://www.veritas.com/support/en_US/article.DOC6452

NetBackup 7.6 Troubleshooting Guide: https://www.veritas.com/support/en_US/article.DOC6470

NetBackup 7.6 Command Reference Guide: https://www.veritas.com/support/en_US/article.000003747

NetBackup 7.0 - 7.6.x Hardware Compatibility List: https://www.veritas.com/support/en_US/article.TECH76495

Best Practices for SLPs and A.I.R. NetBackup 7.6: https://www.veritas.com/support/en_US/article.TECH208536

NetBackup OST 7.6 Solutions Guide: https://www.veritas.com/support/en_US/article.DOC6464

NetBackup Tips for logging: http://www.veritas.com/community/articles/quick-guide-setting-logs-netbackup

Understanding /etc/hosts: https://www.veritas.com/support/en_US/article.000090088

In addition, HDS provides Knowledge Base articles relating to OST and the HDS HPP products. Visit the support portal for more information at https://support.hds.com.

Common NetBackup OST Configuration Issues:

https://www.veritas.com/support/en_US/article.000044252

https://www.veritas.com/support/en_US/article.000044207

https://www.veritas.com/support/en_US/article.000043976

MK-95HPP000-03

Hitachi Data Systems

Corporate Headquarters
2845 Lafayette Street
Santa Clara, California 95050-2639
U.S.A.
www.hds.com

Regional Contact Information

Americas
+1 408 970 [email protected]

Europe, Middle East, and Africa
+44 (0)1753 [email protected]

Asia Pacific
+852 3189 [email protected]