
Data ONTAP 8.1 Data Protection Guide for Cluster-Mode

NetApp, Inc.
495 East Java Drive
Sunnyvale, CA 94089 USA
Telephone: +1 (408) 822-6000
Fax: +1 (408) 822-4501
Support telephone: +1 (888) 463-8277
Web: www.netapp.com
Feedback: [email protected]

Part number: 210-05680_A0
Updated for Data ONTAP 8.1.1 on 16 August 2012

Contents

Protecting data using Snapshot copies
    How Snapshot copies work
    Considerations for working with Snapshot copies of Infinite Volumes
    Maximum number of Snapshot copies
    Creating a Snapshot copy
    Modifying attributes of a Snapshot copy
    Displaying information about Snapshot copies
    When Snapshot copies of Infinite Volumes are accessible
    Client access to Snapshot copies is temporarily disabled when a node loses quorum
    Restoring the contents of a volume from a Snapshot copy
        Restoring a single file from a Snapshot copy
        Restoring part of a file from a Snapshot copy of a FlexVol volume
    Renaming a Snapshot copy of a FlexVol volume
    What Snapshot disk consumption is
        How Snapshot copies consume disk space
        How changing file content consumes disk space
        Monitoring Snapshot copy disk consumption
    Deleting a Snapshot copy
    Computing reclaimable space for Snapshot copies of a FlexVol volume
    What the Snapshot copy reserve is
        How Data ONTAP uses deleted active file disk space
        Example of what happens when Snapshot copies exceed the reserve
        Recovery of disk space for file system use
Managing Snapshot copies (cluster administrators only)
    Creating a Snapshot policy
        Naming convention for scheduled Snapshot copies
        What prefixes are
        Using prefixes to name automatic Snapshot copies
    Managing Snapshot copies through Snapshot policies
        Modifying the maximum number of Snapshot copies for a Snapshot policy's schedule
        Adding a schedule to a Snapshot policy
        Changing the description associated with a Snapshot policy
        Removing a schedule from a Snapshot policy
    Displaying information about Snapshot policies
    Deleting a Snapshot policy
Providing disaster recovery using mirroring technology (cluster administrators only)
    Understanding mirroring technology
        How mirrors work
        Levels of protection that data protection mirrors provide
        Data protection mirror for an Infinite Volume
        What a data protection mirror for the namespace constituent is
        How SnapMirror recovers constituents for an Infinite Volume
        Mirror location in the global namespace
        Path name pattern matching
        Language setting requirement
        When the active file system is available to clients on the destination volume
        Support for locks in Infinite Volumes
        Data protection mirror limitations
    Managing data protection mirrors
        Uses for data protection mirrors
        Creating a data protection mirror in a cluster
        Creating a data protection mirror between cluster peers
        Managing mirroring relationships
        Deleting a mirror
        Retrieving data during disaster recovery
        Converting a data protection mirror destination to a writable volume
        Testing database applications
Copyright information
Trademark information
How to send your comments
Index


Protecting data using Snapshot copies

You can use Snapshot copies to restore data that was lost due to accidental deletion.

Data ONTAP maintains a configurable Snapshot schedule that creates and deletes Snapshot copies automatically for each volume. You can also create and delete Snapshot copies, and manage Snapshot schedules based on your requirements.

If you lose data due to a disaster, you use data protection mirrors to restore it.

How Snapshot copies work

A Snapshot copy is a copy of a FlexVol volume representing the volume's contents at a particular point in time. You can view the contents of the Snapshot copy and use the Snapshot copy to restore data that you lost recently.

A Snapshot copy of a volume is located on the parent volume but has read-only access. It represents the contents of the original volume at a particular point in time. A parent volume and a Snapshot copy of it share disk space for all blocks that have not been modified between the creation of the volume and the time the Snapshot copy is made, thereby making Snapshot copies lightweight.

Similarly, two Snapshot copies share disk space for those blocks that were not modified between the times that the two Snapshot copies were created. You can create a chain of Snapshot copies to represent the state of a volume at a number of points in time. Users can access Snapshot copies online, enabling users to retrieve their own data from past copies, rather than asking a system administrator to restore data from tape. Administrators can restore the contents of a volume from a Snapshot copy.
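The block-sharing behavior described above can be pictured with a small model. This is an illustrative Python sketch only, not Data ONTAP code; the Volume class and its method names are invented for the example:

```python
# Simplified model of how a volume and its Snapshot copies share blocks.
# A Snapshot copy records the block IDs in use at creation time; blocks
# are duplicated only when the active file system overwrites them
# (copy-on-write).

class Volume:
    def __init__(self):
        self.next_block = 0
        self.files = {}        # file name -> list of block IDs
        self.snapshots = {}    # snapshot name -> {file name: block IDs}

    def write(self, name, num_blocks):
        # New data always goes to new blocks; old blocks stay referenced
        # by any Snapshot copy that recorded them.
        blocks = list(range(self.next_block, self.next_block + num_blocks))
        self.next_block += num_blocks
        self.files[name] = blocks

    def snapshot(self, snap_name):
        # A Snapshot copy is just a frozen map of block pointers.
        self.snapshots[snap_name] = {f: list(b) for f, b in self.files.items()}

    def blocks_in_use(self):
        used = {b for blocks in self.files.values() for b in blocks}
        for snap in self.snapshots.values():
            used.update(b for blocks in snap.values() for b in blocks)
        return len(used)

vol = Volume()
vol.write("myfile.txt", 4)      # file occupies 4 blocks
vol.snapshot("hourly.0")        # shares all 4 blocks with the volume
print(vol.blocks_in_use())      # 4: the snapshot adds no extra space yet

vol.write("myfile.txt", 4)      # overwrite: 4 new blocks are allocated
print(vol.blocks_in_use())      # 8: old blocks kept alive by the snapshot
```

In this model, deleting the Snapshot copy would release the four old blocks, which mirrors how deleting real Snapshot copies reclaims space.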

Each volume has a .snapshot directory that is accessible to NFS users by using the ls command and to CIFS users by double-clicking the ~snapshot folder. The contents of the .snapshot directory are a set of subdirectories, labeled by type, date, and time, resembling the following:

$ ls .snapshot
daily.2006-05-14_0013/   hourly.2006-05-15_1306/
daily.2006-05-15_0012/   hourly.2006-05-15_1406/
hourly.2006-05-15_1006/  hourly.2006-05-15_1506/
hourly.2006-05-15_1106/  weekly.2006-05-14_0019/
hourly.2006-05-15_1206/

Each subdirectory of the .snapshot directory includes a list of the parent volume's files and directories. If users accidentally delete or overwrite a file, they can locate it in the most recent Snapshot directory and restore it to their main read-write volume simply by copying it back to the main directory. The following example shows how an NFS user can locate and retrieve a file named my.txt from the .snapshot directory:


$ ls my.txt
ls: my.txt: No such file or directory
$ ls .snapshot
daily.2006-05-14_0013/   hourly.2006-05-15_1306/
daily.2006-05-15_0012/   hourly.2006-05-15_1406/
hourly.2006-05-15_1006/  hourly.2006-05-15_1506/
hourly.2006-05-15_1106/  weekly.2006-05-14_0019/
hourly.2006-05-15_1206/
$ ls .snapshot/hourly.2006-05-15_1506/my.txt
my.txt
$ cp .snapshot/hourly.2006-05-15_1506/my.txt .
$ ls my.txt
my.txt

The .snapshot directory is always visible to NFSv2 and NFSv3 clients at the root of the volume, and it is not visible but still available from any other directory in the volume. For NFSv4 clients, the .snapshot directory is not visible, but it is accessible in all paths of a volume.

Considerations for working with Snapshot copies of Infinite Volumes

You can create, manage, and restore Snapshot copies of Infinite Volumes. However, you should be aware of the factors affecting how the copies are created and the requirements for managing and restoring the copies.

    When creating Snapshot copies of Infinite Volumes, consider the following factors:

- The volume must be online.
  You cannot create a Snapshot copy of an Infinite Volume if the Infinite Volume is in a Mixed state because a constituent is offline.
- The Snapshot copy schedule should be no more frequent than hourly.
  It takes longer to create a Snapshot copy of an Infinite Volume than of a FlexVol volume.
- The operation can run in the background.
  The creation of a Snapshot copy of an Infinite Volume is handled as a cluster-scoped job (unlike the same operation on a FlexVol volume), and the long-running operation spans multiple nodes in the cluster. You can make the job run in the background by using the -foreground parameter of the volume snapshot create command.

    After you create Snapshot copies of an Infinite Volume, you cannot rename or modify the copies.

When you are managing Snapshot copy disk consumption, consider the following limitations:

- You cannot calculate the amount of disk space that can be reclaimed if Snapshot copies of an Infinite Volume are deleted.
- If you use the df command to monitor Snapshot copy disk consumption, it displays information about all of the data constituents in an Infinite Volume, not for the Infinite Volume as a whole.
- To reclaim disk space used by Snapshot copies of Infinite Volumes, you must manually delete the copies.


- You cannot use a Snapshot policy to automatically delete Snapshot copies of Infinite Volumes. However, you can manually delete Snapshot copies of Infinite Volumes, and you can run the delete operation in the background.

    When you are restoring Snapshot copies of Infinite Volumes, consider the following requirements:

- You must restore the entire copy.
  You cannot restore single files or parts of files.
- The copy must be in a valid state.
  You cannot use admin privilege to restore a Snapshot copy of an Infinite Volume if the copy is in a partial or invalid state. You can contact technical support to run the commands for you because the commands require diagnostic privilege.

Maximum number of Snapshot copies

You can accumulate a maximum of 255 Snapshot copies of a regular FlexVol parent volume.

Over time, automatically generated hourly, weekly, and monthly Snapshot copies accrue. Having a number of Snapshot copies available gives you a greater degree of accuracy if you have to restore a file.

The number of Snapshot copies can approach the maximum if you do not remove older Snapshot copies. You can configure Data ONTAP to automatically delete older Snapshot copies of regular FlexVol volumes as the number of Snapshot copies approaches the maximum. You cannot configure Data ONTAP to automatically delete older Snapshot copies of Infinite Volumes.
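The automatic-deletion behavior can be sketched as oldest-first pruning against the per-volume limit. This is an illustrative Python sketch under that simplifying assumption, not Data ONTAP code; the real autodelete behavior is policy-driven and has more options:

```python
# Illustrative sketch: prune the oldest Snapshot copies as the count
# reaches a per-volume maximum (255 for a regular FlexVol volume).

MAX_SNAPSHOTS = 255

def prune_oldest(snapshots, new_name, limit=MAX_SNAPSHOTS):
    """Add new_name to snapshots (ordered oldest-first), deleting the
    oldest copies first if the limit has been reached."""
    while len(snapshots) >= limit:
        snapshots.pop(0)          # delete the oldest Snapshot copy
    snapshots.append(new_name)
    return snapshots

snaps = [f"hourly.{i}" for i in range(255)]   # volume already at the maximum
prune_oldest(snaps, "hourly.255")
print(len(snaps), snaps[0])                   # 255 hourly.1
```

Note that for Infinite Volumes this pruning never happens automatically; the oldest copies simply accumulate until you delete them manually.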

The maximum number of Snapshot copies is reduced from 255 when a volume uses a data protection mirror because the data protection mirror uses Snapshot copies too. The following data protection mirrors affect the maximum number of Snapshot copies available to a volume:

- A regular FlexVol volume or Infinite Volume in a data protection mirror relationship
- A regular FlexVol volume with a load-sharing mirror
- An Infinite Volume with a data protection mirror in a cluster for the namespace constituent

Creating a Snapshot copy

You can manually create a Snapshot copy of a volume if you need to protect data that cannot wait until a regularly scheduled Snapshot copy is made.

    Before you begin

An Infinite Volume must be in an online state before you can create a Snapshot copy. You cannot create a Snapshot copy if the Infinite Volume is in a Mixed state. A Mixed state occurs when a constituent of the Infinite Volume is in an offline state.


About this task

It takes longer to create a Snapshot copy of an Infinite Volume than it does to create a Snapshot copy of a FlexVol volume because an Infinite Volume is larger than a FlexVol volume.

    Steps

1. If you are creating a Snapshot copy of an Infinite Volume, ensure that the Infinite Volume is in an online state by using the volume show command.

    2. Manually make a Snapshot copy using the volume snapshot create command.

You must specify a Vserver name, source volume name, and Snapshot copy name. You can optionally specify a comment for the Snapshot copy.

Example
The following example creates a Snapshot copy named user_anna_snap_1 of a source volume named user_anna on a virtual server named vs1:

vs1::> volume snapshot create -vserver vs1 -volume user_anna -snapshot user_anna_snap_1

Modifying attributes of a Snapshot copy

You can change the comment associated with a Snapshot copy of a FlexVol volume to better identify the Snapshot copy. You cannot change the comment or name associated with a Snapshot copy of an Infinite Volume.

    Step

1. Use the volume snapshot modify command to change the comment associated with a Snapshot copy.

Example
The following example adds the comment "Anna's snapshot" to a Snapshot copy named user_anna_snap_1 of a source volume named user_anna on a virtual server named vs1:

vs1::> volume snapshot modify -vserver vs1 -volume user_anna -snapshot user_anna_snap_1 -comment "Anna's snapshot"


Displaying information about Snapshot copies

You can display information about Snapshot copies to identify Snapshot copies that you might want to use or delete.

    Step

    1. Use the volume snapshot show command to display information about Snapshot copies.

    The command displays the following information about Snapshot copies:

- Virtual server name
- Volume name
- Snapshot copy name
- State
- Size
- Total number of blocks
- Number of used blocks

    See the volume snapshot show man page for more details about the command.

Example
The following example displays information about all Snapshot copies of a volume named user_raoul:

vs1::> volume snapshot show -volume user_raoul
                                                           ---Blocks---
Vserver  Volume      Snapshot                State    Size  Total% Used%
-------- ----------- ----------------------- ------- ------ ------ -----
vs1      user_raoul  daily.2006-10-12_0011   valid   1.95MB     0%    0%
                     daily.2006-10-13_0011   valid    928KB     0%    0%
                     hourly.2006-10-13_0606  valid    188KB     0%    0%
                     hourly.2006-10-13_0706  valid    188KB     0%    0%
                     hourly.2006-10-13_0806  valid    184KB     0%    0%
                     hourly.2006-10-13_0906  valid    184KB     0%    0%
                     hourly.2006-10-13_1006  valid   1.01MB     0%    0%
                     hourly.2006-10-13_1106  valid    460KB     0%    0%
                     weekly.2006-10-01_0016  valid  20.37GB    35%   14%
                     weekly.2006-10-08_0016  valid  15.16GB    29%   10%
11 entries were displayed.

When Snapshot copies of Infinite Volumes are accessible

Snapshot copies of an Infinite Volume are restorable and fully accessible to clients only when the Snapshot copies are in a valid state.

A Snapshot copy of an Infinite Volume consists of information spanning multiple constituents across multiple aggregates. Although a Snapshot copy cannot be created if a constituent is offline, a constituent might be deleted or taken offline after the Snapshot copy is created. If a Snapshot copy of an Infinite Volume references a constituent that is offline or deleted, the Snapshot copy might not be fully accessible to clients or restorable.

The availability of a Snapshot copy of an Infinite Volume is indicated by its state, as explained in the following table:

State   | Description                                      | Client access to the Snapshot copy | Impact on restore
--------|--------------------------------------------------|------------------------------------|------------------
Valid   | The copy is complete.                            | Fully accessible to clients        | Can be restored
Partial | Data is missing or incomplete.                   | Partially accessible to clients    | Cannot be restored without assistance from technical support
Invalid | Namespace information is missing or incomplete.  | Inaccessible to clients            | Cannot be restored

The validity of a Snapshot copy is not tied directly to the state of the Infinite Volume. A valid Snapshot copy can exist for an Infinite Volume with an offline state, depending on when the Snapshot copy was created compared to when the Infinite Volume went offline. For example, a valid Snapshot copy exists before a new constituent is created. The new constituent is offline, which puts the Infinite Volume in an offline state. However, the Snapshot copy remains valid because it references its needed pre-existing constituents. The Snapshot copy does not reference the new, offline constituent.

    To view the state of Snapshot copies, you can use the volume snapshot show command.
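The state rules can be summarized as a simple mapping from constituent availability to Snapshot copy state. This is an illustrative Python sketch of those rules; the function is a hypothetical helper, not a Data ONTAP API, and the actual state is reported by volume snapshot show:

```python
# Simplified mapping from constituent availability to the state of a
# Snapshot copy of an Infinite Volume, following the state table above.

def snapshot_state(namespace_ok, data_constituents_ok):
    """namespace_ok: namespace information is present and complete.
    data_constituents_ok: every referenced data constituent is available."""
    if not namespace_ok:
        return "invalid"          # inaccessible to clients, cannot be restored
    if not data_constituents_ok:
        return "partial"          # partially accessible, support needed to restore
    return "valid"                # fully accessible, can be restored

print(snapshot_state(True, True))    # valid
print(snapshot_state(True, False))   # partial
print(snapshot_state(False, True))   # invalid
```

The third call illustrates why a Snapshot copy that references only pre-existing, healthy constituents stays valid even while the Infinite Volume as a whole is offline: only the constituents the copy actually references matter.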

Client access to Snapshot copies is temporarily disabled when a node loses quorum

When a node in a cluster that contains an Infinite Volume loses quorum, client access to Snapshot copies in the Infinite Volume is disabled.

When a node loses quorum, it cannot communicate with the cluster and receive updated information about Snapshot copies of the Infinite Volume. While the node is out of quorum, client access to Snapshot copies in an Infinite Volume is temporarily disabled, and the storage system uses the following methods to communicate with you:

- You receive an event message from the Event Management System.
- Clients become unresponsive when accessing Snapshot copies in an Infinite Volume.
- You receive an event message from the Event Management System when the node regains quorum and resumes communication with the cluster.


Restoring the contents of a volume from a Snapshot copy

You can restore the contents of a volume from a Snapshot copy to quickly recover lost or damaged data.

    Before you begin

- You must have the advanced privilege level or higher to run the command.
- If you are working with a Snapshot copy of an Infinite Volume, the Snapshot copy must be valid and the Infinite Volume must be online.

    Steps

    1. If the volume is an Infinite Volume, use the volume unmount command to unmount it.

2. Use the volume snapshot restore command to restore the contents of a volume from a Snapshot copy.

Example
The following example restores data to a volume named src_os from a Snapshot copy named src_os_snap_3 on a Vserver named vs0:

vs1::> volume snapshot restore -vserver vs0 -volume src_os -snapshot src_os_snap_3

    3. If the volume is an Infinite Volume, use the volume mount command to mount it.

    After you finish

You should manually replicate all mirrors of a volume immediately after you restore from a Snapshot copy. Not doing so can result in unusable mirrors that must be deleted and re-created.

Restoring a single file from a Snapshot copy

You can restore a single file to the required version from a Snapshot copy of a FlexVol volume. The restored file can replace an existing file with the same name in the active file system or become a new file if there is data in the existing file that you want to retain. You can also restore LUNs, but you cannot restore a single file from a Snapshot copy of an Infinite Volume.

    Before you begin

- The volume to which you want to restore the file should be online and writable.
- The volume to which you want to restore the file should have enough space for the restore operation to be completed successfully.


About this task

You might encounter the following conditions when restoring a single file:

- The file that you are restoring is not available during the restore operation.
- If you are restoring an existing LUN, a LUN clone is created and is backed up in the form of a Snapshot copy. During the restore operation, you can read from and write to the LUN.

    Step

1. To restore a single file, use the volume snapshot restore-file command.

   The restore operation might take a long time, depending on the size of the file or LUN that you are restoring.

   If you want to display the number of in-progress single file restore operations, use the volume snapshot restore-file-info command.

Restoring part of a file from a Snapshot copy of a FlexVol volume

You can restore a range of data from a file in a Snapshot copy to an existing file in the active file system. Partial file restores can be used only to restore specific pieces of a LUN or of an NFS or CIFS container file. You cannot restore part of a file from a Snapshot copy of an Infinite Volume.

    Before you begin

- You must understand the metadata of the host LUN or container file so that you know which bytes belong to the object that you want to restore.
- Write operations are not allowed on the object that you are restoring. Otherwise, the restore might result in inconsistent data.
- The volume where the LUN or the container file is to be restored must be online and writable.

    Steps

1. To restore part of a file, use the volume snapshot partial-restore-file command.

   To get the settings for partial file restore on a cluster, use the volume snapshot partial-restore-file-list-info command.

2. After the restore is complete, purge operating system or application buffers so that the stale data is cleaned.


Renaming a Snapshot copy of a FlexVol volume

You can rename a Snapshot copy to more easily identify it. You cannot rename a Snapshot copy of an Infinite Volume.

    About this task

You cannot rename a Snapshot copy that is created as a reference copy during execution of the volume copy or volume move commands.

    Step

    1. Use the volume snapshot rename command to rename a Snapshot copy.

Example
In the following example, a Snapshot copy named dept_acctg_backup is renamed to dept_acctg_snap. The Snapshot copy is of a volume named dept_acctg on a virtual server named vs0.

vs1::> volume snapshot rename -vserver vs0 -volume dept_acctg -snapshot dept_acctg_backup -newname dept_acctg_snap

What Snapshot disk consumption is

Data ONTAP preserves pointers to all the disk blocks currently in use at the time the Snapshot copy is created. When a file is changed, the Snapshot copy still points to the disk blocks where the file existed before it was modified, and changes are written to new disk blocks.

How Snapshot copies consume disk space

Snapshot copies minimize disk consumption by preserving individual blocks rather than whole files. Snapshot copies begin to consume extra space only when files in the active file system are changed or deleted. When this happens, the original file blocks are still preserved as part of one or more Snapshot copies.

In the active file system, the changed blocks are rewritten to different locations on the disk or removed as active file blocks entirely. As a result, in addition to the disk space used by blocks in the modified active file system, disk space used by the original blocks is still reserved to reflect the status of the active file system before the change.

    The following illustration shows disk space usage for a Snapshot copy:


How changing file content consumes disk space

A given file might be part of a Snapshot copy. The changes to such a file are written to new blocks. Therefore, the blocks within the Snapshot copy and the new (changed or added) blocks both use space within the volume.

Changing the contents of the myfile.txt file creates a situation where the new data written to myfile.txt cannot be stored in the same disk blocks as the current contents because the Snapshot copy is using those disk blocks to store the old version of myfile.txt. Instead, the new data is written to new disk blocks. As the following illustration shows, there are now two separate copies of myfile.txt on disk: a new copy in the active file system and an old one in the Snapshot copy.


Monitoring Snapshot copy disk consumption

You can monitor Snapshot copy disk consumption by using the df command, which displays the amount of free space on a disk. The df command treats Snapshot copies as a partition different from the active file system.

    About this task

For an Infinite Volume, the df command displays information about all of the data constituents, not about the Infinite Volume as a whole.

    Step

    1. To display information about Snapshot copy disk consumption, use the df command.

    Example

cluster1::> df
Filesystem            kbytes     used    avail capacity  Mounted on  Vserver
/vol/vol0/           3000000  2000000  1000000    65%       ---       vs1
/vol/vol0/.snapshot  1000000   500000   500000    50%       ---       vs1

In the example, the kbytes column shows that the vol0 volume contains 3,000,000 KB (3 GB) of disk space for the active file system and 1,000,000 KB (1 GB) of disk space reserved for Snapshot copies, for a total of 4,000,000 KB (4 GB) of disk space. In this example, 66 percent of the active disk space is used (which means that 34 percent is available). Note that the capacity percentage is rounded to 65 percent. The 1,000,000 KB (1 GB) of disk space for Snapshot copies represents 25 percent of the volume capacity, of which 500,000 KB (0.5 GB) is used and 500,000 KB (0.5 GB) is available, so that the space for Snapshot copies is at 50 percent capacity.

Note: The 50 percent figure is not 50 percent of disk space, but 50 percent of the space allotted for Snapshot copies. If this allotted space is exceeded, this number will be over 100 percent. It is important to understand that the /vol/vol0/.snapshot line counts data that exists only in a Snapshot copy. The Snapshot copy calculation does not include Snapshot copy data that is shared with the active file system.
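The note above is worth making concrete: the Snapshot capacity percentage is computed against the reserve, not the whole volume, so it can exceed 100 percent. A small Python sketch of that arithmetic (illustrative only; this is not how df computes its output internally):

```python
# The Snapshot capacity percentage shown by df is measured against the
# space allotted for Snapshot copies (the reserve), not against the
# whole volume, so it can exceed 100% when the reserve is overrun.

def snapshot_capacity_pct(snap_used_kb, snap_reserve_kb):
    return round(100 * snap_used_kb / snap_reserve_kb)

print(snapshot_capacity_pct(500_000, 1_000_000))    # 50  (the example above)
print(snapshot_capacity_pct(1_200_000, 1_000_000))  # 120 (reserve exceeded)
```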

Deleting a Snapshot copy

You can delete a Snapshot copy to reduce the number of Snapshot copies you retain.

    Before you begin

When working with a Snapshot copy of a volume, you must meet the following requirements:

- If the volume is an Infinite Volume, the Infinite Volume must be online.
  You cannot delete a Snapshot copy of an Infinite Volume when the Infinite Volume is in a Mixed state without assistance from technical support.
- If you are using SnapMirror, you must meet the following SnapMirror requirements:
  - Base Snapshot copies must exist for continual updates between source and destination volumes.
  - At least one common Snapshot copy must exist between the source and destination volume to use the snapmirror resync command.

    Step

    1. Use the volume snapshot delete command to delete a Snapshot copy.

Example
The following example deletes a Snapshot copy named user_joy_snap_3 of a volume named user_joy on a Vserver named vs1:

vs1::> volume snapshot delete -vserver vs1 -volume user_joy -snapshot user_joy_snap_3


Computing reclaimable space for Snapshot copies of a FlexVol volume

You can calculate the amount of disk space that can be reclaimed if Snapshot copies of a FlexVol volume are deleted. This is useful if you are monitoring your disk space. You cannot compute the reclaimable space for Snapshot copies of an Infinite Volume.

    Before you begin

    The command is available only at the advanced privilege level and higher.

    About this task

This command draws heavily on resources; using it can reduce the performance of the system for client requests and other system processes. Queries that use wildcards, such as *, are therefore disabled for this command. If you are specifying more than one Snapshot copy, you must use a comma-separated list of Snapshot copies with no spaces after the commas. You should specify no more than three Snapshot copies in a single query.

    Step

1. Use the volume snapshot compute-reclaimable command to calculate the amount of disk space that can be reclaimed if specified Snapshot copies are deleted.

Example
The following example calculates the amount of space that can be reclaimed on a volume named dept_eng, on a virtual server named vs1, if the Snapshot copies named weekly.2006-01-08_0017 and daily.2006-01-09_0013 are deleted:

vs1::> set -priv advanced
vs1::*> volume snapshot compute-reclaimable -vserver vs1 -volume dept_eng -snapshots weekly.2006-01-08_0017,daily.2006-01-09_0013
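The constraints on the -snapshots argument (no wildcards, comma-separated with no spaces after the commas, and the recommendation to name no more than three copies per query) can be sketched as a small validator. This is a hypothetical Python helper written only to illustrate the rules; it is not part of the Data ONTAP CLI:

```python
# Illustrative check of the constraints on the -snapshots argument of
# volume snapshot compute-reclaimable, as described above.
# Hypothetical helper, not a Data ONTAP command or API.

def validate_snapshot_list(arg):
    if "*" in arg:
        return False            # wildcard queries are disabled
    if ", " in arg or arg != arg.strip():
        return False            # no spaces after the commas
    names = arg.split(",")
    return 0 < len(names) <= 3  # the guide recommends at most three per query

print(validate_snapshot_list("weekly.2006-01-08_0017,daily.2006-01-09_0013"))  # True
print(validate_snapshot_list("a,b,c,d"))                                       # False
print(validate_snapshot_list("daily.*"))                                       # False
```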

What the Snapshot copy reserve is

The Snapshot copy reserve sets a specific percentage of the disk space for Snapshot copies. By default, the Snapshot copy reserve is 5 percent of the disk space. The active file system cannot consume the Snapshot copy reserve space, but Snapshot copies can consume space in the active file system if the reserve is exhausted.

    Managing the Snapshot copy reserve involves the following tasks:


- Ensuring that enough disk space is allocated for Snapshot copies so that they do not consume active file system space
- Keeping disk space consumed by Snapshot copies below the Snapshot copy reserve
- Ensuring that the Snapshot copy reserve is not so large that it wastes space that could be used by the active file system

How Data ONTAP uses deleted active file disk space

When enough disk space is available for Snapshot copies in the Snapshot copy reserve, deleting files in the active file system frees disk space for new files, while the Snapshot copies that reference those files consume only the space in the Snapshot copy reserve.

If Data ONTAP created a Snapshot copy when the disks were full, deleting files from the active file system does not create any free space because everything in the active file system is also referenced by the newly created Snapshot copy. Data ONTAP has to delete the Snapshot copy before it can create any new files.

The following example shows how disk space being freed by deleting files in the active file system ends up in the Snapshot copy:

If Data ONTAP creates a Snapshot copy when the active file system is full and there is still space remaining in the Snapshot reserve, the output from the df command, which displays statistics about the amount of disk space on a volume, is as follows:

Filesystem           kbytes   used     avail    capacity  Mounted on  Vserver
/vol/vol0/           3000000  3000000  0        100%      --          vs1
/vol/vol0/.snapshot  1000000  500000   500000   50%       --          vs1

If you delete 100,000 KB (0.1 GB) of files, the disk space used by these files is no longer part of the active file system, so the space is reassigned to the Snapshot copies instead.

Data ONTAP reassigns 100,000 KB (0.1 GB) of space from the active file system to the Snapshot reserve. Because there was reserve space for Snapshot copies, deleting files from the active file system freed space for new files. If you enter the df command again, the output is as follows:

Filesystem           kbytes   used     avail    capacity  Mounted on  Vserver
/vol/vol0/           3000000  2900000  100000   97%       --          vs1
/vol/vol0/.snapshot  1000000  600000   400000   60%       --          vs1

For an Infinite Volume, the df command displays information about all of the data constituents, not about the Infinite Volume as a whole.
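The accounting in this example can be expressed arithmetically: deleted space leaves the active file system's "used" column and is added to the Snapshot reserve's "used" column. A minimal Python sketch of that bookkeeping, using the numbers from the example above (the helper is illustrative, not an ONTAP interface):

```python
def df_after_delete(active_used_kb, snap_used_kb, volume_kb, reserve_kb, deleted_kb):
    """Model how deleting files moves their space from the active file
    system into Snapshot copy accounting, as in the df example above."""
    active_used = active_used_kb - deleted_kb   # freed for new files
    snap_used = snap_used_kb + deleted_kb       # still referenced by copies
    return {
        "active": (volume_kb, active_used, volume_kb - active_used,
                   round(100 * active_used / volume_kb)),
        "snapshot": (reserve_kb, snap_used, max(reserve_kb - snap_used, 0),
                     round(100 * snap_used / reserve_kb)),
    }

# Deleting 100,000 KB reproduces the second df output: 97% and 60%.
out = df_after_delete(3000000, 500000, 3000000, 1000000, 100000)
print(out["active"])    # (3000000, 2900000, 100000, 97)
print(out["snapshot"])  # (1000000, 600000, 400000, 60)
```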

    18 | Data ONTAP 8.1 Data Protection Guide for Cluster-Mode

Example of what happens when Snapshot copies exceed the reserve

Because there is no way to prevent Snapshot copies from consuming disk space greater than the amount reserved for them, it is important to reserve enough disk space for Snapshot copies so that the active file system always has space available to create new files or modify existing ones.

Consider what happens in the following example if all files in the active file system are deleted. Before the deletion, the node run -node nodename df output is as follows:

Filesystem           kbytes   used     avail    capacity
/vol/vol0/           3000000  3000000  0        100%
/vol/vol0/.snapshot  1000000  500000   500000   50%

After the deletion, the node run -node nodename df command generates the following output:

Filesystem           kbytes   used     avail    capacity
/vol/vol0/           3000000  2500000  500000   83%
/vol/vol0/.snapshot  1000000  3500000  0        350%

The entire 3,000,000 KB (3 GB) in the active file system is still being used by Snapshot copies, along with the 500,000 KB (0.5 GB) that was being used by Snapshot copies before, making a total of 3,500,000 KB (3.5 GB) of Snapshot copy data. This is 2,500,000 KB (2.5 GB) more than the space reserved for Snapshot copies; therefore, 2.5 GB of space that would be available to the active file system is now unavailable to it. The post-deletion output of the node run -node nodename df command lists this unavailable space as used even though no files are stored in the active file system.

If this example used an Infinite Volume, the df command would display information about the Infinite Volume's data constituents, not about the Infinite Volume as a whole.
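The 350% figure follows from the same accounting: Snapshot usage is always reported against the reserve, and any usage beyond the reserve spills over and is charged to the active file system. A sketch of that overflow arithmetic, using the numbers from the example (an illustrative helper, not an ONTAP interface):

```python
def snapshot_overflow(volume_kb, reserve_kb, snap_used_kb, active_file_kb):
    """Compute df-style figures when Snapshot copies exceed their reserve.

    Snapshot usage beyond the reserve is reported as used space in the
    active file system, even if no files are stored there.
    """
    overflow = max(snap_used_kb - reserve_kb, 0)
    active_used = active_file_kb + overflow
    snap_capacity_pct = round(100 * snap_used_kb / reserve_kb)
    active_capacity_pct = round(100 * active_used / volume_kb)
    return overflow, active_used, snap_capacity_pct, active_capacity_pct

# All files deleted (active_file_kb = 0): 3.5 GB of Snapshot data against a
# 1 GB reserve leaves a 2.5 GB overflow charged to the active file system,
# matching the 83% / 350% df output above.
print(snapshot_overflow(3000000, 1000000, 3500000, 0))
# (2500000, 2500000, 350, 83)
```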

Recovery of disk space for file system use

Whenever Snapshot copies consume more than 100% of the Snapshot reserve, the system is in danger of becoming full. In this case, you can create files only after you delete enough Snapshot copies.

If 500,000 KB (0.5 GB) of data is added to the active file system, a node run -node nodename df command generates the following output:

Filesystem           kbytes   used     avail    capacity
/vol/vol0/           3000000  3000000  0        100%
/vol/vol0/.snapshot  1000000  3500000  0        350%

As soon as Data ONTAP creates a new Snapshot copy, every disk block in the file system is referenced by some Snapshot copy. Therefore, no matter how many files you delete from the active file system, there is still no room to add any more. The only way to recover from this situation is to delete enough Snapshot copies to free more disk space.

For an Infinite Volume, the df command displays information about all of the data constituents, not about the Infinite Volume as a whole.


Managing Snapshot copies (cluster administrators only)

Some Snapshot copy tasks are for the cluster administrator to perform and cannot be performed by the Vserver administrator.

Creating a Snapshot policy

You can create a Snapshot policy to specify the frequency and maximum number of automatically created Snapshot copies.

    About this task

When applied to a volume, a Snapshot policy specifies a schedule or schedules on which Snapshot copies are taken and the maximum number of Snapshot copies that each schedule can take. A Snapshot policy can include from one to five schedules. To create schedules that can be used in Snapshot policies, use the job schedule cron create or job schedule interval create commands.

    Step

    1. Use the volume snapshot policy create command to create a Snapshot policy.

Example

The following example creates a Snapshot policy named policy2. It runs on two schedules: a schedule named 8hrs with a maximum of three Snapshot copies, and a schedule named 6am with a maximum of two Snapshot copies. The policy is enabled.

cluster1::> volume snapshot policy create -policy policy2 -enabled true -schedule1 8hrs -count1 3 -schedule2 6am -count2 2

Naming convention for scheduled Snapshot copies

The scheduled Snapshot copy name is composed of an optional prefix or the schedule name specified in the Snapshot policy, and the timestamp. Snapshot names cannot be longer than 255 characters.

If a prefix is specified, the Snapshot copy name is made up of the prefix and the timestamp.

If you do not specify a prefix, the schedule name followed by the timestamp forms the Snapshot copy name by default.
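The naming rule above can be sketched as a small function. The timestamp format shown matches the Snapshot copy names that appear elsewhere in this guide (for example, weekly.2006-01-08_0017); the helper itself is hypothetical, not an ONTAP API:

```python
from datetime import datetime

MAX_SNAPSHOT_NAME = 255  # Snapshot names cannot exceed 255 characters

def scheduled_snapshot_name(schedule_name, when, prefix=None):
    """Build a scheduled Snapshot copy name: <prefix or schedule>.<timestamp>."""
    stem = prefix if prefix else schedule_name
    name = "{}.{}".format(stem, when.strftime("%Y-%m-%d_%H%M"))
    if len(name) > MAX_SNAPSHOT_NAME:
        raise ValueError("Snapshot name exceeds 255 characters")
    return name

ts = datetime(2006, 1, 8, 0, 17)
print(scheduled_snapshot_name("weekly", ts))               # weekly.2006-01-08_0017
print(scheduled_snapshot_name("5min", ts, prefix="temp"))  # temp.2006-01-08_0017
```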


What prefixes are

A prefix is an optional string that you can specify to be used in creating automatic Snapshot copies. Using prefixes in Snapshot names provides more flexibility than using schedule names in naming automatic Snapshot copies.

Prefix names must be unique within a policy. The length of the prefix cannot exceed the maximum allowable length for Snapshot names; that is, Snapshot names cannot be longer than 255 characters. Prefix names must follow the character encoding rules used by Snapshot names.

If a prefix is specified in the Snapshot schedule, then the schedule name is not used to name Snapshot copies. If the prefix is not specified for a Snapshot schedule within a Snapshot policy, then the schedule name is used.

Using prefixes to name automatic Snapshot copies

You can use prefixes to provide flexibility in the naming convention for scheduled Snapshot copies. Using prefixes removes the dependency on schedule names when naming scheduled Snapshot copies.

    About this task

A schedule cannot have more than one prefix. Prefixes within a policy must be unique.

    Step

1. Specify a prefix when you create a Snapshot policy or when you add a schedule to the Snapshot policy.

Example

The following example creates a Snapshot policy named test, which contains the schedule named 5min having the temp prefix:

cluster1::> volume snapshot policy create -policy test -enabled true -schedule1 5min -count1 2 -prefix1 temp

Example

The following example adds the 6min schedule with the test prefix to the default policy:

cluster1::> volume snapshot policy add-schedule -policy default -schedule 6min -count 4 -prefix test

Modifying a Snapshot policy

You can make changes to a previously created Snapshot policy if its schedule or description does not meet your needs.


Modifying the maximum number of Snapshot copies for a Snapshot policy's schedule

You can change the maximum number of Snapshot copies for a Snapshot policy's schedule if the current number is not adequate.

    Step

1. Use the volume snapshot policy modify-schedule command to modify the maximum number of Snapshot copies for a Snapshot policy's schedule.

Example

The following example changes the maximum number of Snapshot copies to four for a schedule named halfhour on a Snapshot policy named policy2:

cluster1::> volume snapshot policy modify-schedule -policy policy2 -schedule halfhour -count 4

Adding a schedule to a Snapshot policy

You can add a schedule to an existing Snapshot policy to determine when Snapshot copies are made.

    About this task

A Snapshot policy can have up to five schedules. For an Infinite Volume, scheduled Snapshot copies cannot occur more often than hourly.

    Step

    1. Add a schedule to an existing Snapshot policy by using the volume snapshot policy add-schedule command.

Example

The following example adds a schedule named twohour with a maximum count of five Snapshot copies to a Snapshot policy named policy2:

cluster1::> volume snapshot policy add-schedule -policy policy2 -schedule twohour -count 5

Changing the description associated with a Snapshot policy

You can change the description associated with a Snapshot policy to better identify the policy.

    Step

1. Use the volume snapshot policy modify command to modify the description associated with a Snapshot policy.


Example

The following example adds the comment "For user volumes" to the Snapshot policy named policy2:

cluster1::> volume snapshot policy modify -policy policy2 -comment "For user volumes"

Removing a schedule from a Snapshot policy

You can remove a schedule from a Snapshot policy when it is no longer needed.

    Step

1. Use the volume snapshot policy remove-schedule command to remove a schedule from a Snapshot policy.

Example

The following example removes a schedule named 8hrs from a Snapshot policy named policy2:

cluster1::> volume snapshot policy remove-schedule -policy policy2 -schedule 8hrs

Displaying information about Snapshot policies

You can display information about Snapshot policies to determine when Snapshot copies are made or whether a certain policy is one that you want to apply.

    Step

1. Use the volume snapshot policy show command to display information about Snapshot policies.

Example

The following example displays information about all Snapshot policies:

vs1::> volume snapshot policy show
                  Number Of Is
Name              Schedules Enabled Comment
----------------- --------- ------- ------------------------------------------
default           3         true    Default policy with hourly, daily & weekly
                                    schedules.
    Schedule: hourly   Count: 6
              daily           2
              weekly          2
none              0         false   Policy for no automatic snapshots.
    Schedule: -        Count: -
nosnapshot        1         false   -
    Schedule: hourly   Count: 2
3 entries were displayed.


Deleting a Snapshot policy

You can delete a Snapshot policy when you no longer need it.

    About this task

If you delete a Snapshot policy that is being used by one or more volumes, Snapshot copies of the volume or volumes are no longer taken according to the deleted policy.

    Steps

1. Use the volume modify command to dissociate the Snapshot policy from each volume that uses it.

    2. Use the volume snapshot policy delete command to delete a Snapshot policy.

Example

The following example deletes a Snapshot policy named policy2:

    cluster1::> volume snapshot policy delete -policy policy2


Providing disaster recovery using mirroring technology (cluster administrators only)

Stored data is susceptible to disaster, either through hardware failure or environmental catastrophe. You can use mirroring technology to create an identical second set of data to replace the primary set of data, should something happen to the primary set of data.

Understanding mirroring technology

Before using mirroring technology, you should understand how mirrors work, the types of mirrors, where mirrors are located, path naming and language requirements, and what mirror relationships are not intended to do.

How mirrors work

You create a mirror relationship between a source volume and a destination volume. The source volume is a read-write volume that clients can access and modify. The destination volume is a read-only volume that exports a Snapshot copy to clients for read-only access.

Snapshot copies are used by the source volume to update destination volumes. Snapshot copies are transferred from the source volume to the destination volume using an automated schedule or manually; therefore, mirrors are updated asynchronously. You use the set of snapmirror commands to create and manage mirroring relationships.

Levels of protection that data protection mirrors provide

You can create a data protection mirror within a cluster to protect your data or, for greater disaster protection, you can create a data protection mirror to a different cluster in a different location.

A data protection mirror consists of a source volume that can fan out to one or more destination volumes. Each data protection mirror relationship is independent of the others.

You can create data protection mirrors on the same aggregate as the source volume and on the same Vserver or on a different Vserver. For greater protection, you can create the data protection mirror on a different aggregate so that you can recover from the failure of the source volume's aggregate; the same Vserver is required to fail over from the source volume. Neither of these two configurations protects against a cluster failure. To protect against a cluster failure, you can create a data protection mirror relationship between two clusters.

For greater data protection, you can create a data protection mirror relationship between two clusters in which the source volume is on one cluster and the destination volume is on the other. If the cluster on which the primary volume resides experiences a disaster, you can direct user clients to the destination volume on the cluster peer until the source volume is available again.


A data protection mirror can be used for limited disaster recovery, off-loading tape backup, data distribution, and making offline copies of production data for research, such as data mining.

Data protection mirror for an Infinite Volume

You can create a data protection mirror on a different cluster in a different location for an Infinite Volume for disaster protection.

You can use SnapMirror to create a data protection mirror relationship between two clusters for an Infinite Volume. The source Infinite Volume is on one cluster, and the destination Infinite Volume is on the other cluster. Each cluster is dedicated to the Vserver with Infinite Volume and its Infinite Volume; no other Vservers or volumes can exist on either cluster.

What a data protection mirror for the namespace constituent is

A data protection mirror in a cluster for the namespace constituent of an Infinite Volume provides recovery options for the namespace constituent in case of any disaster. You should contact technical support to create a data protection mirror for the namespace constituent because the command requires diagnostic privileges.

The data protection mirror for the namespace constituent is created in the same cluster as the Infinite Volume, but is on a different aggregate from the source namespace constituent. The data protection mirror for the namespace constituent is in addition to the data protection mirror for the Infinite Volume. These two data protection mirrors provide the following data protection:

•   A data protection mirror of the namespace constituent in the same cluster as the source namespace constituent, but on a different aggregate
•   A data protection mirror of the entire Infinite Volume on another cluster

The data protection mirror for the namespace constituent increases recovery options for the namespace constituent. For example, if the aggregate associated with the source namespace constituent fails, technical support can recover the destination namespace constituent, or technical support can direct clients to use the destination namespace constituent until technical support recovers the source namespace constituent. The purpose of the data protection mirror for the Infinite Volume is to protect the entire Infinite Volume from disaster.

You can view the data protection mirror for the namespace constituent when you use the volume show, snapshot show, and snapmirror show commands.

How SnapMirror recovers constituents for an Infinite Volume

You can use the snapmirror resync command to recover constituents from the destination Infinite Volume. However, if a constituent fails, you must call technical support to recover the constituent because the commands require diagnostic privilege.

When you run the snapmirror resync command, SnapMirror restores newly added constituents, including the namespace constituent and data constituents, from the destination Infinite Volume. SnapMirror also creates and initializes new data protection mirrors for newly added constituents.


SnapMirror uses the current Infinite Volume configuration to determine whether a constituent is failed, newly added, or deleted.

Note: You should contact technical support to recover a failed constituent or to recover a data protection mirror in the primary cluster for the namespace constituent because the commands require diagnostic privilege.

Mirror location in the global namespace

A mirror's location in the global namespace depends on its type, whether it is a load-sharing mirror or a data protection mirror.

A load-sharing mirror is a read-only volume and is automatically mounted with the same junction path as its source volume. If a source volume has one or more load-sharing mirrors, client requests are automatically routed to the load-sharing mirror or mirrors to reduce the load on the source volume.

A data protection mirror can be located on a different Vserver from the source read-write volume. It must be mounted at a different location from its source volume.

Path name pattern matching

You can use pattern matching when you use snapmirror commands to have the command work on selected mirroring relationships.

The snapmirror commands use fully qualified path names in the following format: cluster://vserver/volume. You can abbreviate the path name by not entering the cluster or Vserver names. If you do this, the snapmirror command assumes the local cluster and Vserver context of the user.

Assuming the cluster is called cluster1, the Vserver is called vserver1, and the volume is called vol1, the fully qualified path name is the following: cluster1://vserver1/vol1.

You can make the snapmirror commands work on the same volume by excluding the cluster name from the path name; the command assumes the local cluster, cluster1: //vserver1/vol1.

Likewise, you can exclude both the cluster name and the Vserver name from the path name; the command assumes the local cluster to be cluster1 and the Vserver context to be vserver1: vol1.

You can use the asterisk (*) in paths as a wildcard to select matching, fully qualified path names. The following table provides examples of using the wildcard to select a range of volumes.

*             Matches all paths.

*volA         Matches all Vservers and volumes with cluster names ending in volA.

aClus*        Matches all Vservers and volumes with cluster names beginning with aClus.

*://*/*src*   Matches all clusters and Vservers with volume names containing the src text.

*://*/vol*    Matches all clusters and Vservers with volume names beginning with vol.

//*/vol*      Does not match anything because no fully qualified path name begins with //.

For example, the following command selects the relationships whose destination volume names contain dest:

cluster1::> snapmirror show -destination-path *://*/*dest*
Source                    Destination               Mirror       Relationship Total
Path                 Type Path                      State        Status       Progress Healthy
-------------------- ---- ------------------------- ------------ ------------ -------- -------
cluster1://vs1/sm_src2
                     DP   cluster2://vs2/sm_dest1
                                                    Snapmirrored Idle         -        true

Language setting requirement

The source volume and destination volume of a mirroring relationship must have the same language setting; otherwise, NFS or CIFS clients might not be able to access data.

This is not a problem if the source and destination volumes are located on the same Vserver because the language is set on the Vserver. If you plan to create a mirroring relationship between volumes on two different Vservers, ensure that the language setting on the Vservers is the same.

When the active file system is available to clients on the destination volume

For FlexVol volumes, the active file system on the destination volume is available after SnapMirror transfers the Snapshot copy for the volume. For Infinite Volumes, the active file system is available on the destination volume after SnapMirror transfers coordinated Snapshot copies for all of the constituents in the Infinite Volume.

For a FlexVol volume in a SnapMirror relationship, the storage system automatically directs clients to use the active file system in the latest Snapshot copy on the destination FlexVol volume. Attributes of the file system, such as the number of files or the amount of space consumed, are refreshed after the Snapshot copy for the volume is transferred through SnapMirror.

For an Infinite Volume in a SnapMirror relationship, the storage system automatically directs clients to use the active file system in the latest coordinated Snapshot copy on the destination Infinite Volume. A coordinated Snapshot copy means that Snapshot copies exist for the namespace constituent and all of the data constituents in the Infinite Volume. The latest coordinated Snapshot copy must be transferred through SnapMirror before a consistent view of the active file system is available on the destination Infinite Volume. Because the SnapMirror relationship between Infinite Volumes integrates multiple, separate SnapMirror transfers, attributes of the file system are incrementally updated as each Snapshot copy transfer completes.

Support for locks in Infinite Volumes

The active file system in the destination Infinite Volume in a SnapMirror relationship does not support locks. The server denies client requests for locks. However, Snapshot copies of an Infinite Volume, including the copies on the destination Infinite Volume, support locks.


Data protection mirror limitations

When working with data protection mirrors, you should be aware that there are limitations to such things as the number of active transfers allowed and how many SnapMirror relationships you can configure for any one cluster.

    The following are some limitations to data protection mirrors:

•   There is a limit to the number of active transfers allowed at one time.
•   There is a limit to the number of destination volumes a single source volume can have.
•   There is a limit to the number of destination volumes that can fan out from a single data protection source volume.
•   Snapshot copies cannot be deleted on destination volumes.
•   An empty junction path on a destination volume is not accessible from CIFS clients.
•   There is a limit to the number of SnapMirror relationships configured on one cluster.

    Active transfer limits

The number of simultaneous active transfers allowed at one time depends on the node model.

Active transfers include the total number of SnapMirror transfers, volume move transfers, and volume copy transfers. An active transfer contains two transfer endpoints, one at the source of the transfer and one at the destination; therefore, a single operation, such as a scheduled data protection mirror update, uses two endpoints.

Model                  Maximum number of endpoints on a node
62xx                   100
6080 and 6070          100
6040 and 6030          75
32xx                   75
31xx                   75
All other platforms    40

These limits apply when no volumes are taken over, that is, when a storage failover has not occurred. When a node is in an HA configuration (active/active configuration) with another node in the cluster, the maximum number of endpoints applies to the HA pair (active/active configuration). For example, if you have two 6080 nodes in an HA pair (active/active configuration), the 100-endpoint maximum is the combined node limit.

Each endpoint of a SnapMirror relationship is counted as an endpoint, even if the endpoint is shared with other SnapMirror relationships.


Example of endpoint count for shared SnapMirror relationships

The following output from the snapmirror show command is used for all of the examples that follow:

cluster1::> snapmirror show
Source                    Destination               Mirror       Relationship Total
Path                 Type Path                      State        Status       Progress Healthy
-------------------- ---- ------------------------- ------------ ------------ -------- -------
cluster1://node_vs0/src1
                     DP   cluster1://node_vs0/dest1
                                                    Snapmirrored Idle         -        true
                          cluster1://node_vs0/dest2
                                                    Snapmirrored Idle         -        true
cluster1://node_vs0/src1
                     DP   cluster2://node_vs2/dest3
                                                    Snapmirrored Idle         -        true
cluster1://node_vs0/src2
                     LS   cluster1://node_vs0/ls13
                                                    Snapmirrored Idle         -        true
                          cluster1://node_vs0/ls23
                                                    Snapmirrored Idle         -        true
                          cluster1://node_vs0/ls33
                                                    Snapmirrored Idle         -        true

For the output shown, if transfers are active for all of the shown relationships, there are five source endpoints and five destination endpoints, even though src1 and src2 are shared by multiple destination endpoints.

    Example of determining the number of endpoints

The number of endpoints for the relationships is determined as follows:

•   If only an update from src1 to dest1 were running, there would be two endpoints (one source and one destination).
•   If an update from src1 to dest1 and from src1 to dest2 were running, there would be four endpoints (two source and two destination).
•   If an update from src2 to ls13, ls23, and ls33 were running, there would be six endpoints (three source and three destination).
•   If an update from src1 to cluster2://node_vs2/dest3 were running, there would be one endpoint on each node of each cluster.
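The endpoint accounting above can be sketched as a simple per-node tally: every active transfer contributes one endpoint on its source node and one on its destination node, even when several transfers share the same source volume. The helper and node names below are hypothetical, and the limits dict just restates the table above:

```python
# Per-node endpoint limits, restated from the table above.
ENDPOINT_LIMITS = {"62xx": 100, "6080/6070": 100, "6040/6030": 75,
                   "32xx": 75, "31xx": 75, "other": 40}

def endpoints_per_node(active_transfers):
    """Tally transfer endpoints per node.

    Each active transfer (SnapMirror, volume move, or volume copy) is a
    (source_node, destination_node) pair; a transfer within a single node
    puts both of its endpoints on that node.
    """
    counts = {}
    for src_node, dst_node in active_transfers:
        counts[src_node] = counts.get(src_node, 0) + 1
        counts[dst_node] = counts.get(dst_node, 0) + 1
    return counts

# A DP update within node1 plus an update from node1 to node2:
print(endpoints_per_node([("node1", "node1"), ("node1", "node2")]))
# -> {'node1': 3, 'node2': 1}
```

Comparing the tally for a node against its model's entry in the limits table shows how much transfer headroom remains before new operations are queued.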

    SnapMirror fanout limits

When you are determining the number and types of mirror relationships that you will have for a single source volume, remember that the volume is limited in the number of destination volumes it can have.

    The fanout limits depend on the type of mirror that you want to fan out from a single source volume:

•   For load-sharing mirrors, you can fan out a maximum of one destination volume on a node for a single source volume. Data ONTAP supports up to 24 nodes in a cluster; therefore, you can fan out a maximum of 24 destination volumes from a single source volume.
•   For data protection mirrors, you can fan out a maximum of four destination volumes from a single source volume.
•   A single source volume can have both one load-sharing destination volume on a node and four data protection destination volumes.
•   You cannot cascade mirrors. For example, you cannot mirror a data protection mirror to another data protection mirror.
•   You cannot create a fanout configuration for an Infinite Volume.
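The fanout rules above can be restated as a validation check. This sketch takes its limits from the bullets above; the function, its argument names, and its messages are illustrative, not part of any ONTAP interface:

```python
MAX_LS_PER_NODE = 1      # one load-sharing destination per node
MAX_CLUSTER_NODES = 24   # Data ONTAP supports up to 24 nodes in a cluster
MAX_DP_FANOUT = 4        # four data protection destinations per source

def validate_fanout(ls_destinations_per_node, dp_destinations):
    """Check a proposed fanout for a single source volume against the limits.

    ls_destinations_per_node: dict of node name -> number of LS mirrors there.
    dp_destinations: number of data protection mirror destinations.
    Returns a list of violated rules (empty if the fanout is allowed).
    """
    problems = []
    if len(ls_destinations_per_node) > MAX_CLUSTER_NODES:
        problems.append("more nodes than a cluster supports")
    for node, count in ls_destinations_per_node.items():
        if count > MAX_LS_PER_NODE:
            problems.append("node %s has more than one LS mirror" % node)
    if dp_destinations > MAX_DP_FANOUT:
        problems.append("more than four DP destinations")
    return problems

# One LS mirror on each of two nodes plus four DP mirrors is allowed:
print(validate_fanout({"node1": 1, "node2": 1}, 4))  # []
# A fifth DP destination is not:
print(validate_fanout({"node1": 1}, 5))  # ['more than four DP destinations']
```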

    SnapMirror configuration limit

    You can have a total of 3000 SnapMirror relationships configured on one cluster.

A SnapMirror relationship consists of a source volume and a destination volume. For example, if you have a set of load-sharing mirrors that consists of a source volume and five destination volumes, you have five SnapMirror relationships.

    The SnapMirror relationships can be for load-sharing mirrors, data protection mirrors, or both.

    Maximum number of Snapshot copies for volumes that are mirrored

Data ONTAP limits the number of Snapshot copies that a volume that is to be mirrored can contain to 255.

Whenever an update to a data protection mirror or set of load-sharing mirrors occurs, Data ONTAP creates one new Snapshot copy. You should consider this as you manage the number of Snapshot copies on the source volume. You must keep the number far enough below 255 that mirror updates do not exceed the limit.

    Cannot automatically delete Snapshot copies on destination volumes

You cannot automatically delete old Snapshot copies on destination volumes of mirroring relationships because the destination volume is a read-only version of the source volume and should contain the same data as the source.

Note: Using the snap autodelete command to automatically remove older Snapshot copies from a destination volume will fail.

    Empty junction path on a destination volume inaccessible from CIFS clients

If internally mounted volumes form a namespace and you have a mirroring relationship, CIFS clients on a destination volume that attempt to view mirrored volumes not at the highest level of the namespace are denied access.

This occurs when you create a namespace using more than one volume, in which one volume is the source volume of a mirroring relationship and the other volumes are members of the namespace. For example, assume that you have two volumes: vol x, which has a junction path /x, and vol y, which has a junction path /x/y. When a SnapMirror transfer occurs, a directory under vol x is created for vol y on the destination volume. From an NFS client, you can see that the directory is empty, but from a CIFS client, you get the following message:

access is denied.

Unsupported SnapMirror features in Infinite Volumes

Infinite Volumes do not support all the SnapMirror features. Awareness of the unsupported features helps you understand how you can use SnapMirror to protect data in Infinite Volumes.

    Infinite Volumes do not support the following SnapMirror features:

•   Load-sharing mirrors
•   Fanout of load-sharing mirrors and data protection mirrors
•   Bidirectional mirrors or bidirectional data exchange in an HA configuration (active/active configuration) with Infinite Volumes
•   NAS protocols other than NFSv3

    You cannot perform the following SnapMirror tasks for Infinite Volumes:

•   Display and view the status of transfer progress for an Infinite Volume. You can view the transfer progress for individual constituents by running the snapmirror show command with the -is-constituent parameter.
•   Use NDMP-based backup for an Infinite Volume. You can only use the NFSv3 protocol to back up a mirrored Infinite Volume.
•   Run the snapmirror abort command on the source cluster. You can only run the snapmirror abort command on the destination cluster.
•   Run the snapmirror delete command on the source cluster. You can only run the snapmirror delete command on the destination cluster.

    The following SnapMirror commands are not supported for Infinite Volumes:

• snapmirror promote
• snapmirror quiesce
• snapmirror resume
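As a sketch of the per-constituent status check mentioned above, you could run snapmirror show with the -is-constituent parameter on the destination cluster. The destination path here is a hypothetical placeholder; substitute your own, and note that the exact parameters may differ in your Data ONTAP version:

```
cluster2::> snapmirror show -destination-path cluster2://vs0/repo_mirror -is-constituent true
```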

Managing data protection mirrors

Data protection mirror management consists of activities such as creating data protection mirrors for source volumes, modifying data protection mirrors, and monitoring data protection mirror status.

Uses for data protection mirrors

You can create data protection mirrors to solve a number of storage administration tasks, such as backing up data for archiving, recovering data when disasters occur, and distributing data to various sites.

    Uses for backup and disaster recovery

A data protection mirror has a number of different uses when used for data backup or disaster recovery. You can create solutions for site-level disaster recovery, for offloading backups, for geographical protection, or for disk-to-disk backups.

Providing disaster recovery using mirroring technology (cluster administrators only) | 33

    The following are types of backup or disaster recovery uses with a data protection mirror:

Disaster recovery
    An administrator replicates data to a data protection mirror on a node located across a campus. A disaster recovery solution protects against the failure of the following:
    • storage system
    • disk shelf
    • data center or floor (an administrator initiates an emergency power off, for example)

Site-level disaster recovery
    An administrator replicates data to a data protection mirror on a different cluster at an alternate location. This is a failover and giveback solution for volumes that reside on different clusters and is not a solution for a cluster whose nodes are distributed across a campus (stretch cluster). A site-level disaster recovery solution protects against the following:
    • failure of hardware (storage system, disk shelf)
    • failure of a data center or floor (an administrator initiates an emergency power off, for example)
    • failure of an entire site

Backup offloading
    An administrator replicates data to a data protection mirror at an alternate location and then backs up that mirror to a tape device (disk-to-disk-to-tape solution).

    Note: You must use NDMP to back up the mirrored volume to the tape device.

    A backup offloading solution provides the following advantages:
    • The solution moves the creation of the tape backup from the primary node or client to the node on which the mirrored volume resides.
    • The disk type and model of the node used to create tapes can vary from the production models.
    • The administrator can write to tape directly from the node on which the mirrored volume resides using NDMP, or from a client mounting data using NFS.

    Note: Infinite Volumes do not support the use of NDMP. The only protocol you can use is NFSv3.


Geographical protection
    An administrator distributes data to multiple sites without cascading to create disaster protection.

Disk-to-disk backup
    An administrator replicates data to a disaster protection mirror at an alternate location.

    Uses for data distribution

    A data protection mirror has a number of different uses when used for data distribution.

    The following are types of data distribution uses with a data protection mirror:

Producer to consumer configuration
    One site produces data that is replicated to one or more other sites where the data is used.

Data exchange or collaboration
    Two sites achieve bidirectional data exchange by replicating volumes between the two sites.

    Note: Bidirectional data exchange is not supported for Infinite Volumes.

Cache
    One site uses data that Data ONTAP replicates from a remote site to minimize traffic on the WAN and to minimize application latency. You should use load-sharing mirrors, which are read-only volumes, if you intend to improve performance by distributing data to eliminate high access to certain data.

    Note: Load-sharing mirrors are not supported by Infinite Volumes.

    Uses for production data that is offline

A data protection mirror has different uses when it is used to make available read/write copies of production data that is offline.

    The following are the types of uses for production data that is offline:

Data mining
    You replicate production data to a different cluster and then search that data for information.

Data testing
    You test and appraise source data, after which you replicate the data for general use.

    Note: You should use FlexClone technology to perform this testing.

Data experimentation
    You replicate existing data and make the mirror writable to explore different scenarios without losing the original data. If required, you can resynchronize the new data to the source volume.

    Note: You should use FlexClone technology to perform this testing.

FlexClone technology does not support Infinite Volumes.


Uses for system administration tasks

A data protection mirror has a number of different uses when used to facilitate system administration tasks.

The following are types of system administration uses for a data protection mirror:

Data migration
    You can move data from one cluster to another.

System upgrade
    When you are upgrading a site, you can replicate data to another site to provide rollback protection.

Site move
    You can replicate data from an old site to a new site where it becomes the primary source of data.

Heavy load removal
    You can remove high access to a volume or aggregate by spreading the load in a heavily loaded volume or a heavily loaded aggregate.

    Note: A better method of lightening the load might be moving the volume or using load-sharing mirrors, depending on the application.

Creating a data protection mirror

You can protect data by replicating it to data protection mirrors. You can use data protection mirror copies to recover data when a disaster occurs.

    Before you begin

You must have a SnapMirror license installed. If you are replicating data to another cluster, that cluster must also have a SnapMirror license installed.

    About this task

You can create data protection mirrors in a cluster using FlexVol volumes only. If your cluster has an Infinite Volume and you want to protect the data with a data protection mirror, you must create a data protection mirror between two clusters.

    Steps

1. Create a destination volume on the destination cluster that will become the data protection mirror by using the volume create command.

Example
The following example creates a data protection mirror volume named dept_eng_dr_mirror1 on a Vserver named vs0. The destination volume is located on an aggregate named aggr3. The destination volume is on the same cluster, cluster1.


cluster1::> volume create -vserver vs0 -volume dept_eng_dr_mirror1 -aggregate aggr3 -type DP

If you are creating a data protection mirror on a cluster peer, the destination volume is created on the cluster peer:

    cluster2::> volume create -vserver vs0 -volume dept_eng_dr_mirror1 -aggregate aggr3 -type DP

    2. Create a data protection mirror relationship using the snapmirror create command.

Example
The following example creates a data protection mirror named dept_eng_dp_mirror1 of the source volume named dept_eng within a cluster. The cluster is named cluster1 and the mirror is located on a Vserver named vs0.

    cluster1::> snapmirror create -destination-path cluster1://vs0/dept_eng_dp_mirror1 -source-path cluster1://vs0/dept_eng

If you are creating the data protection mirror relationship with the destination volume on a cluster peer, you create the data protection mirror relationship from the cluster that contains the destination volume. For example, if the destination volume of the single cluster example were on a cluster peer named cluster2, the command to create the data protection mirror relationship is as follows:

cluster2::> snapmirror create -destination-path cluster2://vs0/dept_eng_dp_mirror1 -source-path cluster1://vs0/dept_eng

Data ONTAP creates the data protection mirror relationship, but the relationship is left in an uninitialized state.

    3. Initialize the data protection mirror using the snapmirror initialize command.

Example
The following example initializes a data protection mirror named dept_eng_dr_mirror4 of a source volume named dept_eng. The source volume and data protection mirror are on the Vserver named vs0 on a cluster named cluster1.

    cluster1::> snapmirror initialize -destination-path cluster1://vs0/dept_eng_dr_mirror4 -source-path cluster1://vs0/dept_eng

If you are initializing the data protection mirror relationship with the destination volume on a cluster peer, you initialize the data protection mirror relationship from the cluster that contains the destination volume. For example, if the destination volume of the single cluster example were on a cluster peer named cluster2, the command to initialize the data protection mirror relationship is as follows:

    cluster2::> snapmirror initialize -destination-path cluster2://vs0/dept_eng_dr_mirror4 -source-path cluster1://vs0/dept_eng
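After the initialize completes, you can confirm the state of the relationship with the snapmirror show command. A minimal sketch, reusing the destination path from the single-cluster example above (the exact output columns may vary by Data ONTAP version):

```
cluster1::> snapmirror show -destination-path cluster1://vs0/dept_eng_dr_mirror4
```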


Creating a data protection mirror between cluster peers

You can protect data by replicating it to data protection mirror copies on different clusters. Data protection mirrors can be used to recover data when a disaster occurs.

    Before you begin

    You must have a SnapMirror license installed on each cluster.

    About this task

The procedure for creating a data protection mirror relationship with a cluster peer is twofold: create a cluster peer relationship and create a data protection mirror relationship. If the cluster peer relationship already exists, you need only create the data protection mirror.
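A sketch of the first step, creating the cluster peer relationship, might look like the following. The intercluster LIF addresses of the remote cluster are hypothetical placeholders, and the exact parameters of cluster peer create may differ in your Data ONTAP version; run cluster peer show afterward to verify the peer relationship:

```
cluster1::> cluster peer create -peer-addrs 192.168.1.211,192.168.1.212 -username admin
cluster1::> cluster peer show
```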

    What cluster peer intercluster networking is

A cluster peer relationship, that is, two different clusters communicating with each other, requires an intercluster network on which the communication occurs. An intercluster network consists of intercluster logical interfaces (LIFs) that you assign to network ports.

You define the intercluster network on which replication occurs between two different clusters when you create intercluster LIFs. Replication between two clusters can occur on the intercluster network only; this is true regardless of whether the intercluster network is on the same subnet as a data network in the same cluster.

The IP addresses you assign to intercluster LIFs can reside in the same subnet as data LIFs or in a different subnet. When you create an intercluster LIF, an intercluster routing group is automatically created on that node too. If the source and destination clusters use different subnets for intercluster replication, then you must define a gateway address for the intercluster routing group.

You can assign intercluster LIFs to ports that have the role of data, which are the same ports used for CIFS or NFS access, or you can assign intercluster LIFs to dedicated ports that have the role of intercluster. Each method has its advantages and disadvantages.

    Cluster peer intercluster networking requirements

Your cluster peer intercluster network must fulfill requirements that include synchronized cluster time, number of intercluster LIFs, IP addresses for intercluster LIFs, maximum transmission units, and more.

    The following are requirements of cluster peer intercluster networking:

• The time on the clusters that you want to connect using an intercluster network must be synchronized within 300 seconds (5 minutes).
Cluster peers can be in different time zones.

• At least one intercluster LIF must be created on every node in the cluster.

• Every intercluster LIF requires an IP address dedicated for intercluster replication.


• The correct maximum transmission unit (MTU) value must be used on the network ports that are used for replication.
The network administrator can identify which MTU value to use in the environment. The MTU value should be set to a value that is supported by the network end point to which it is connected. The default value of 1,500 is correct for most environments.

• All paths on a node used for intercluster networking should have equal performance characteristics.

• The intercluster network must provide connectivity among all intercluster LIFs on all nodes in the cluster peers.
Every intercluster LIF on every node in a cluster must be able to connect to every intercluster LIF on every node in the peer cluster.
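As an illustration of the MTU requirement above, you can check and, if necessary, set the MTU on a replication port with the network port commands. The node and port names are placeholders; confirm the correct MTU value with your network administrator before changing it:

```
cluster01::> network port show -node cluster01-01 -port e0c
cluster01::> network port modify -node cluster01-01 -port e0c -mtu 1500
```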

    Considerations when sharing data ports

When determining whether sharing a data port for intercluster replication is the correct interconnect network solution, you should consider configurations and requirements such as LAN type, available WAN bandwidth, replication interval, change rate, and number of ports.

Consider the following aspects of your network to determine whether sharing data ports is the best interconnect network solution:

• For a high-speed network, such as a 10-Gigabit Ethernet (10-GbE) network, a sufficient amount of local LAN bandwidth might be available to perform replication on the same 10-GbE ports that are used for data access.
In many cases, the available WAN bandwidth is far less than 10 GbE, which reduces the LAN network utilization to only that which the WAN is capable of supporting.

• All nodes in the cluster might have to replicate data and share the available WAN bandwidth, making data port sharing more acceptable.

• Sharing ports for data and replication eliminates the extra port counts required to dedicate ports for replication.

• If the replication interval is set to perform only after hours, when little or no client activity exists, then using data ports for replication during this time is acceptable, even without a 10-GbE LAN connection.

• Consider the data change rate and replication interval, and whether the amount of data that must be replicated on each interval requires enough bandwidth that it might cause contention with data protocols if sharing data ports.

• When data ports for intercluster replication are shared, the intercluster LIFs can be migrated to any other intercluster-capable port on the same node to control the specific data port that is used for replication.


Considerations when using dedicated ports

When determining whether using a dedicated port for intercluster replication is the correct interconnect network solution, you should consider configurations and requirements such as LAN type, available WAN bandwidth, replication interval, change rate, and number of ports.

Consider the following aspects of your network to determine whether using a dedicated port is the best interconnect network solution:

• If the amount of available WAN bandwidth is similar to that of the LAN ports and the replication interval is such that replication occurs while regular client activity exists, then you should dedicate Ethernet ports for intercluster replication to avoid contention between replication and the data protocols.

• If the network utilization generated by the data protocols (CIFS, NFS, and iSCSI) is above 50 percent, then you should dedicate ports for replication to allow for nondegraded performance if a node failover occurs.

• When physical 10-GbE ports are used for data and replication, you can create VLAN ports for replication and dedicate the logical ports for intercluster replication.

• Consider the data change rate and replication interval, and whether the amount of data that must be replicated on each interval requires enough bandwidth that it might cause contention with data protocols if sharing data ports.

• If the replication network requires configuration of a maximum transmission unit (MTU) size that differs from the MTU size used on the data network, then you must use physical ports for replication because the MTU size can only be configured on physical ports.
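The VLAN approach described above can be sketched as follows, assuming a hypothetical 10-GbE port e0e and VLAN ID 200 (the VLAN port name follows the port-vlanid convention; verify the VLAN ID with your network administrator):

```
cluster01::> network port vlan create -node cluster01-01 -vlan-name e0e-200
```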

    Configuring intercluster LIFs to share data ports

Configuring intercluster LIFs to share data ports enables you to use existing data ports to create intercluster networks for cluster peer relationships. Sharing data ports reduces the number of ports you might need for intercluster networking.

    Before you begin

You should have reviewed the considerations for sharing data ports and determined that this is an appropriate intercluster networking configuration.

    About this task

Creating intercluster LIFs that share data ports involves assigning LIFs to existing data ports and, possibly, creating an intercluster route. In this procedure, a two-node cluster exists in which each node has two data ports, e0c and e0d. These are the two data ports that are shared for intercluster replication. In your own environment, you replace the ports, networks, IP addresses, subnet masks, and subnets with those specific to your environment.


Steps

    1. Check the role of the ports in the cluster by using the network port show command.

    Example

cluster01::> network port show
                                            Auto-Negot  Duplex     Speed (Mbps)
Node         Port   Role         Link   MTU Admin/Oper  Admin/Oper Admin/Oper
------------ ------ ------------ ---- ----- ----------- ---------- ----------
cluster01-01
             e0a    cluster      up    1500 true/true   full/full  auto/1000
             e0b    cluster      up    1500 true/true   full/full  auto/1000
             e0c    data         up    1500 true/true   full/full  auto/1000
             e0d    data         up    1500 true/true   full/full  auto/1000
cluster01-02
             e0a    cluster      up    1500 true/true   full/full  auto/1000
             e0b    cluster      up    1500 true/true   full/full  auto/1000
             e0c    data         up    1500 true/true   full/full  auto/1000
             e0d    data         up    1500 true/true   full/full  auto/1000

2. Create an intercluster LIF on each node in cluster01 by using the network interface create command.

Example
This example uses the LIF naming convention of nodename_icl# for the intercluster LIF.

cluster01::> network interface create -vserver cluster01-01 -lif cluster01-01_icl01 -role intercluster -home-node cluster01-01 -home-port e0c -address 192.168.1.201 -netmask 255.255.255.0

Info: Your interface was created successfully; the routing group i192.168.1.0/24 was created

cluster01::> network interface create -vserver cluster01-02 -lif cluster01-02_icl01 -role intercluster -home-node cluster01-02 -home-port e0c -address 192.168.1.202 -netmask 255.255.255.0

Info: Your interface was created successfully; the routing group i192.168.1.0/24 was created

3. Verify that the intercluster LIFs were created properly by using the network interface show command with the -role intercluster parameter.

    Example

cluster01::> network interface show -role intercluster
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
cluster01-01
            cluster01-01_icl01
                       up/up      192.168.1.201/24   cluster01-01  e0c     true
cluster01-02
            cluster01-02_icl01
                       up/up      192.168.1.202/24   cluster01-02  e0c     true

4. Verify that the intercluster LIFs are configured to be redundant by using the network interface show command with the -role intercluster and -failover parameters.

    Providing disaster recovery using mirroring technology (cluster administrators only) | 41

Example
The LIFs in this example are assigned the e0c port on each node. If the e0c port fails, the LIF can fail over to the e0d port because e0d is also assigned the data role.

The intercluster LIF is assigned to a data port; therefore, a failover group for the intercluster LIF is created automatically, and contains all ports with the data role on that node. Intercluster failover groups are node specific; therefore, if changes are required, they must be managed for each node because different nodes might use different ports for replication.

cluster01::> network interface show -role intercluster -failover
         Logical            Home                Failover        Failover
Vserver  Interface          Node:Port           Group Usage     Group
-------- ------------------ ------------------- --------------- --------
cluster01-01
         cluster01-01_icl01 cluster01-01:e0c    system-defined
                            Failover Targets: cluster01-01:e0c,
                                              cluster01-01:e0d
cluster01-02
         cluster01-02_icl01 cluster01-02:e0c    system-defined
                            Failover Targets: cluster01-02:e0c,
                                              cluster01-02:e0d

    5. Display routing groups by using the network routing-group show command with the -role intercluster parameter.

    An intercluster routing group is created automatically for the intercluster LIFs.

    Example

cluster01::> network routing-group show -role intercluster
             Routing
Vserver      Group            Subnet          Role         Metric
------------ ---------------- --------------- ------------ -------
cluster01-01
             i192.168.1.0/24  192.168.1.0/24  intercluster 40
cluster01-02
             i192.168.1.0/24  192.168.1.0/24  intercluster 40

6. Display the routes in the cluster by using the network routing-group route show command to determine whether intercluster routes are available or whether you must create them.

Creating a route is required only if the intercluster addresses in both clusters are not on the same subnet and a specific route is needed for communication between the clusters.

Example
In this example, no intercluster routes are available.

cluster01::> network routing-group route show
             Routing
Vserver      Group            Destination     Gateway         Metric
------------ ---------------- --------------- --------------- ------
cluster01
             c192.168.0.0/24  0.0.0.0/0       192.168.0.1     20
cluster01-01
             n192.168.0.0/24  0.0.0.0/0       192.168.0.1     10
cluster01-02
             n192.168.0.0/24  0.0.0.0/0       192.168.0.1     10
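If the intercluster LIFs in the two clusters are on different subnets, a route can be added to the intercluster routing group. The following is a hedged sketch, reusing the routing group name shown in step 5 with a hypothetical gateway address; the exact parameter names of the route create command may differ in your Data ONTAP version:

```
cluster01::> network routing-group route create -vserver cluster01-01 -routing-group i192.168.1.0/24 -destination 0.0.0.0/0 -gateway 192.168.1.1
```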