EMC® RecoverPoint Deploying the Host-based Splitter on Sun Solaris

Technical Notes P/N 300-010-947

REV A01

March 29, 2010

These technical notes describe best practices for deploying RecoverPoint with the host-based splitter in Sun Solaris environments.

Topics include:

Introduction

Overview
  Host-based splitter software components
  Software I/O stack with the Solaris host-based splitter
  Host-based splitter SAN discovery
  Host-based splitter host control
  Host-based splitter host replication
  Host-based splitter modes
  Host-based splitter disaster states

Installing RecoverPoint host-based splitter software on Sun Solaris
  Requirements for installation
  Performing the installation
  Checking the installation
  Solaris containers and host-based splitter
  Files in a host-based splitter implementation

Managing the Solaris environment with RecoverPoint host-based splitter
  Logging: Details, gathering logs, checking for connectivity
  Using kutils to manage volumes and LUNs in Solaris
  Upgrading the host-based splitter on Solaris

Solaris with UFS
  Planned failover or failback
  Unplanned failover
  Testing DR procedures
  Recovering production

Veritas Storage Foundation
  VERITAS Volume Manager and VERITAS File System
  VxVM disk groups and RecoverPoint Consistency groups
  Planned failover or failback
  Unplanned failover
  Testing DR procedures
  Recovering production

Solaris Volume Manager

Solaris ZFS
  Planned failover or failback
  Unplanned failover
  Testing DR procedures
  Recovering production

Comments and getting help


Introduction

Scope of document

This paper discusses Sun Solaris host-based splitter theory, installation, management, and basic troubleshooting in a Sun Solaris 10/RecoverPoint 3.2 environment.

Intended audience

These technical notes are intended for system administrators, storage administrators, solution architects and other IT professionals responsible for designing and implementing disaster recovery solutions for Solaris environments.

Overview

RecoverPoint splitters are proprietary software installed on host operating systems, storage subsystems, or intelligent Fibre Channel switches. The primary function of a splitter is to “split” application writes so that they are sent to the storage volumes and to the RPA simultaneously. The splitter carries out this activity efficiently, with no perceptible impact on host performance, since all CPU-intensive processing necessary for replication is performed by the RPA.

The following figure shows how writes are split in a RecoverPoint Solaris host-based splitter environment.


Host-based splitter software components

The RecoverPoint host-based splitter for Solaris has three main components:

• Watchdog / Service

• Separate process

• Daemon that spawns other processes, for example, KDriver and the host log retriever (HLR)

• Monitors KDriver processes and can restart them as needed

• KDriver, in user space

• Contains replication logic, builds the view of the SAN from the host perspective and communicates that view to the RPAs

• Performs SAN discovery

• Communicates with and updates the RPAs

• Splitter, in kernel space

• Actual splitter driver

• Manages the splitting logic (it manages how writes are split; it does not perform the application write itself)

Other components within the host-based splitter for Solaris include:

• Process Monitor, which checks high and low watermark for memory usage

• Kernel Log Reader, which reads from kernel memory and writes events that have occurred within the host based splitter system to the host log

Software I/O stack with the Solaris host-based splitter

The host-based splitter runs within the host operating environment in kernel space. The driver itself is inserted into the I/O stack directly before any multipath management software (such as PowerPath or MPxIO). The driver splits writes for any FC LUN that is seen by the host and is attached to the splitter (from the perspective of the RPA). Attaching a volume to a splitter is done on the RecoverPoint management console and initiates protecting that LUN with RecoverPoint replication.

There are instances where the LUN is attached to a splitter but writes are not split and sent to the RPA. This can occur if the module is not running in the kernel or if specific operational conditions prevent the splitter driver from splitting writes (for example, the volume is in pass-through mode).

Splitting a write has six stages:

1. The host production application issues a write; the splitter intercepts (splits) the write.

2. Send the write to the assigned RPA.

3. Receive acknowledgement from the RPA.

4. Write to FC storage.

5. Receive acknowledgement from FC storage.

6. Acknowledge to the host that the transaction is complete.


Host-based splitter SAN discovery

The host-based splitter in user space is responsible for SAN discovery. The host-based splitter discovers all entities in the SAN that it needs to manage and sends this host-based perspective to the RPAs. This view, derived from SAN discovery, is stored in the kernel splitter and in a file named Volumes.db. This file allows the view to persist across host reboots. The discovery view is rebuilt after installation and after each system reboot.

The splitter is seen as an initiator within the environment. The discovery of SAN components includes the following:

• Mapping: physical entities → code objects (GUID)

• SAN View: storage, RPAs

• Sends SAN view to any RPA cluster

• Sends SAN view to kernel splitter

Host-based splitter host control

Host-based splitter host control has three states: Send, Get, and FC LUN connectivity monitoring.

The Send state reads the state from the host kernel and notifies the control process. In this state, the host-based splitter communicates to the RPA regarding host-to-RPA connectivity and different disaster states, including incomplete writes (ICW), over-complete writes (OCW), and disconnects. The Get state communicates with the control process within an RPA and obtains information on volumes and splitter states per volume from the RPA. Additionally, the Get state reports on any flags for volumes (for example, ICW, OCW, or disconnect). If the state has changed, the Get state updates and configures the kernel splitter to respond to the state change. The Get state is performed via SAN communication/discovery and is an ongoing process. The Get state also performs base connectivity monitoring for host-to-RPA and host-to-storage connectivity.

Host-based splitter host replication

The host-based splitter host replication logic contains information and configuration details on the host-to-RPA data path, the backlog, the pipe between the delta marker and the backlog, and the ability to update the backlog from kernel to user space accordingly. The data path is the communication space between the host-based splitter and the RPA replicating the host's LUNs. In normal operations, the data path is open and the host-based splitter splits writes and sends them to the RPA.

When communication between the host and RPA is disrupted, the host-based splitter begins working off the backlog. The backlog maintains write metadata, offsets and lengths of writes that occur while the data path is disrupted. The delta marker is part of the replication process and works with the backlog to update the RPA/host-based splitter when the data path returns to a normal state.


Host-based splitter modes

There are six host-based splitter modes. Each mode reflects an operational state that can affect the entire replication system, and describes how the splitter is responding to conditions within the environment. These states are crucial when troubleshooting the environment. The six splitter modes are:

• N/A (not attached) - The splitter is not splitting or sending writes to the RPA.

• Split – LUNs are attached to splitters and splitters are sending copies of writes to the RPAs, so that writes can be replicated.

• Pass-through – While I/O still goes through the splitter, the writes go directly to FC storage. No I/O is sent to the RPA.

• Delay – Entered when an I/O error occurs; the splitter delays the I/O. How the application behaves during the delay depends on the application. Delays expire after 60 seconds; if the error condition still exists after 60 seconds, the host-based splitter moves into pass-through mode and lets the host continue writing to its FC LUNs.

• Fail-all – A configured state for a consistency group in which the splitter is allowed to interrupt access to the host volumes. If the RPA is configured to regulate the application, writes fail and the application cannot continue.

• Redirect – This mode describes an environment where a consistency group is in virtual access mode. In this mode, the RPAs present a virtual image to the host (which thinks it is writing to actual storage). In this state both writes and reads are managed by the RPA managing the consistency group.

Host-based splitter disaster states

RecoverPoint replication environments are complex integrations of many different components. All components within the system need to be able to account for differences in the environment and adapt their behavior to best meet the defined working state for replication. Host-based splitters have the following disaster states:

• Host Reboot

Marking is done on the host. This is done to make sure no I/O is missed after the host has come up and the application has started, but the host-based splitter has not yet started. In this event, marking on host (MOH) is performed and, when the data path to the RPA becomes available to the host-based splitter, it syncs the backlog with the RPA.


• Incomplete Write (ICW)

An ICW occurs when an I/O reaches the storage but does not reach the RPA. ICW and OCW flags in the volumes database mark the incomplete or over-complete writes. When the problem is resolved, the write must be undone, since it went to storage without acknowledgement from the RPA; the splitter verifies this via the delta marker.

• Over-complete write (OCW)

With an OCW, the RPA receives the write, but the volume is in pass-through mode and writes are made to FC LUNs. With MOH, the RPA is able to see the writes made, verify with the delta marker, and then replicate those missed writes over the WAN.

• Multiple disasters

A multiple-disaster state exists when there is more than a single issue within the replication environment at the same time.


Installing RecoverPoint host-based splitter software on Sun Solaris

Requirements for installation

Installing the RecoverPoint host-based splitter software on Solaris requires a supported, SAN-attached, appropriately patched version of Sun Solaris. For supported platforms and operating environment versions, refer to the “Sun Solaris” and “Sun Solaris x86” sections of the EMC Support Matrix for RecoverPoint, and verify the host OS version, patch level, and compatible RecoverPoint versions.

The Solaris host-based splitter package is available on the EMC Powerlink site. Before installation, determine the RecoverPoint version in use so that you download a compatible version of the host-based splitter. You can install the host-based splitter package from an NFS share, a local or SAN disk, or from optical media.

Complete the following steps to install the RecoverPoint host-based splitter on a Sun Solaris host. (Note that some of these steps may not be required if the SAN or RPAs have been previously installed.)

Storage requirements

The host should have assigned and accessible SAN storage. The host-based splitter splits writes that are going to FC SAN-attached LUNs. Determine which LUNs the hosts access, from which arrays, and which LUNs will be protected by RecoverPoint. You can determine this information from the output of the inquiry (inq) command. These LUNs will be attached to the host-based splitter after installation is complete.

Multipathing software must be installed and configured on the hosts to manage all replicated LUNs.

Inq command output
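For illustration (the original screenshot is not reproduced), a minimal sketch of listing the SAN-attached disks on the host; the inq flags and exact output vary by inq version and host configuration, and the inq binary is run from wherever it was downloaded:

# List all disks that Solaris can see (the piped echo makes format exit after printing the list)
echo | format

# List devices with vendor, product, and capacity details using the EMC inq utility
./inq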


Host-to-RPA zoning

The servers need to be zoned to RPAs so that the splitters can send writes to the RPAs.

On current RPAs (Gen3 and Gen4), all ports can be used for both initiator and target functionality. At a minimum, zone each host HBA port with an RPA port on each fabric. EMC best practices dictate that each zone contains two members: an initiator port (a host HBA port) and a target port (an RPA port in the same fabric).

Ideal host-to-RPA zoning would be:

• Fabric A: Host_to_RPA_zone: HBA port 0 + RPA port 0 (first port, first HBA)

• Fabric B: Host_to_RPA_zone: HBA port 1 + RPA port 2 (third port, second HBA)

Once the zoning is complete and the fabric configuration refreshed, the host can discover the new RPAs as targets. Run the devfsadm command to refresh the host’s FC device list. Then run the inq command to confirm that you can see the RPA pseudo devices in the SAN. For each RPA, there should be at least one entry per path. Typically, there are at least 4 devices for a 2-node RPA cluster.

Inq showing RPAs as targets
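For illustration (in place of the original screenshot), a minimal sketch of rescanning the fabric and checking for the RPA pseudo devices; the vendor/product strings reported for the RPAs depend on the RPA generation and the inq version:

# Rebuild the /dev device links after the zoning change
devfsadm

# Re-run inq and look for the RPA pseudo-device entries;
# expect at least one entry per path (typically 4 or more for a 2-node RPA cluster)
./inq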

Space requirements

The Solaris host-based splitter software requires approximately 500 KB on the root file system. Log sizes and retention duration for the Solaris host-based splitter are configurable. After installation, the default size for logging is 300 MB.

The driver must be installed in the root file system and requires approximately 50 MB.
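As a quick pre-installation check (a minimal sketch; the figures above are approximate), confirm that the root file system has enough free space for the software, the driver, and the default log allocation:

# Show free space on the root file system in kilobytes
df -k /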

Performing the installation

As root, use the pkgadd command to install the RecoverPoint host-based splitter package on Sun Solaris. Refer to the EMC RecoverPoint Deployment Manager Product Guide for detailed instructions.
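A hedged example of the installation command follows; the download directory is hypothetical, and only the pkgadd mechanism and the kdriver package name (used later with pkgrm) are taken from this document:

# Change to the directory containing the downloaded splitter package (hypothetical path)
cd /var/tmp/rp_splitter

# Install the package instance named kdriver
pkgadd -d . kdriver

# If the package is delivered as a single datastream file, point -d at that file instead:
# pkgadd -d ./<package file>.pkg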

Checking the installation

After installation is complete, the host-based splitter is running on the host. Verify the installation: check that the daemon is running and that the splitter can be discovered from the RecoverPoint management console.

1. To verify that the software is running and properly installed (illustrative output follows this procedure):

ps -ef | grep -i kdriver


2. Check the status of the running kdriver:

/etc/init.d/kdrv status

3. After the host-based splitter is installed and running, add the splitter to the RecoverPoint system from the RecoverPoint management console and attach all of the replicated LUNs to it.
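For step 1 above, illustrative output only; PIDs, timestamps, and process paths will differ (the /kdriver/bin paths are assumed from the file layout described later in this document):

ps -ef | grep -i kdriver | grep -v grep
    root  1234     1  0   Mar 15 ?   0:05 /kdriver/bin/kdriver_swd
    root  1240  1234  0   Mar 15 ?   1:12 /kdriver/bin/kdriver
    root  1245  1234  0   Mar 15 ?   0:00 /kdriver/bin/hlr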

Installing host-based splitter in a cluster environment

Installing the host-based splitter package in a host-based cluster follows the same process. In a cluster that has an active and passive node, the host-based splitter can be installed on the passive node. As long as the node is passive, the splitter does not split writes. After failover, the passive node becomes the active node and the host-based splitter on the newly active node begins splitting writes.

Solaris containers and host-based splitter

Containers (zones) within Solaris provide a mechanism to run isolated environments for applications. Processes from individual applications running in a zone are prevented from interfering or interrupting other activities within the system. Access to system-wide processes, network interfaces, file systems, and devices is restricted to prevent interference from different processes within the system and other zones. In a Solaris environment, a host-based splitter that is installed in a single zone can only be attached to devices available to that zone. Likewise, devices in a zone can only be attached to a splitter within that zone.
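If the host uses zones, a quick way to see which zones exist and their state before deciding where the splitter and its devices belong (a minimal sketch using a standard Solaris command):

# List all configured zones, their state, path, and brand
zoneadm list -cv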

Files in a host-based splitter implementation

Most files in a host-based splitter installation are placed in the base directory, /kdriver. From there, the major areas of interest include the following subdirectories:

• /kdriver/bin -> location of the host-based splitter software, scripts, utilities

• /kdriver/etc -> start and stop init files, start and stop scripts

• /kdriver/info_collector -> logs collector for a host-based splitter


• /kdriver/log -> installation and configuration logs

• /kdriver/tweak -> collection of parameters passed to the host-based splitter upon configuration/startup. These files dictate any unique parameters that the driver may need or require.

Managing the Solaris environment with RecoverPoint host-based splitter

To check the status of a host-based splitter, run the startup script, /etc/init.d/kdrv, with the status argument:

/etc/init.d/kdrv status

In the status output, kdriver_swd is the watchdog service, kdriver is the user-space module, and hlr is the host log retrieval process.

Stopping the host-based splitter

To stop the host-based splitter:

1. Using the RecoverPoint management console:

1. Detach all volumes from all splitters.

2. Remove all splitters from all hosts.

2. Log in to the host as root.

3. Stop the host-based splitter:

/etc/init.d/kdrv stop

If you choose option 1, a warning appears stating “This may impede operation of the host operation, and may also necessitate a full resynchronization for some volumes.” This action stops the host-based splitter and, according to the policy of each volume, either prevents host I/O writes to the volume or allows host I/O writes to the volume.

If you choose option 2, a warning appears stating “This will necessitate a full resynchronization.” This action stops the host-based splitter and allows host I/O writes to the volumes.


If you choose option 3, a warning appears stating “This may impede operation of the host operation.” This action stops the host-based splitter and prevents host I/O writes to the volumes.

Starting the host-based splitter

To start the host-based splitter, enter:

/etc/init.d/kdrv start

Checking devices

The kdrv check_devs option allows you to check the volumes attached to splitters. Checking devices tests all path connectivity to the RPAs (referred to as kbox by this utility). The check_devs option also provides details for the volumes attached to the splitters, including the GUID, the storage paths with their major and minor numbers, volume size, host cluster state, SCSI reservations, and the number of reads and writes performed to the volume.

To check volumes, enter:

/etc/init.d/kdrv check_devs

Details on volume 0 (first volume attached to the splitter) shown from check_devs:

volumes:
***********
splitter vol 0:
  kbox lun = 1
  name = device name: major=0
  guid = 0x383a571bbeb4ebfb
  storage path 0 : path = major:118 minor:154
  storage path 1 : path = major:118 minor:178
  storage path 2 : path = major:280 minor:114
  kbox-1 = major:118 minor:18
  mode = split
  NonDomino mode = ON
  Boot Device = NO
  Dirty flag = clear
  Incomplete write = clear
  Meta Data Available = clear
  Splitter Initiated Unregulate = clear
  reservation state = reserved
  cluster type = none
  active = 0
  size = 8256000 blocks
  delayExpiryTimeout = 60
  fo_time = 0
  basket id = 1
  b0=0 b1=0 open=1
  bytes_per_sector=512
  number_of_heads=15
  sectors_per_track=128
  number_of_cylinders=4300
  num_writes=0 num_end_writes=0
  num_reads=1 num_end_reads=1
  num_split=0 num_kbox_io_done=0
  num_redirect_reads=0 num_redirect_reads_done=0
  num_redirect_writes=0 num_redirect_writes_done=0
  num_send_to_storage=0 num_storage_io_done=0

Logging: Details, gathering logs, checking for connectivity

Collect logs via the RPAs in the environment. One option that expedites log collection is to enable “Automatic host info collection” for the splitter; this setting applies to the host-based splitter on a specific host within the environment. To enable automatic info collection by the RPAs:

/kdriver/bin/kutils manage_auto_host_info_collection enable

Alternatively, if automatic host info collection is not required, disable it:

/kdriver/bin/kutils manage_auto_host_info_collection disable

(Source /kdriver/bin/kenv.sh before running kutils, as described in “Using kutils to manage volumes and LUNs in Solaris.”)

To collect logs for the host-based splitter:

1. Log in to an RPA within the environment as user boxmgmt:

ssh boxmgmt@<RPA IP address>

2. Select [3] Diagnostics, [4] Collect system information.


3. Follow the prompts to specify the following options:

• Date and time for collection start (beginning of the range).

• Date and time for collection stop (end of the range).

• Whether to copy the collected logs to an FTP server or leave them in the default location on the RPA (available via FTP as user webdownload).

• The number of sites (one or both); you must choose the site where the splitter requiring collection is installed.

• Collect data for splitters only; use the default collection file name.

Completed splitter collection

After log collection has completed, the log file is available either on the specified FTP server or via webdownload (as shown above). The log file is a tar file with the gathered details as the name of the file:

sysInfo-no-rpas-splitters-from-l1-l2-2010.03.15.15.43.05.tar

The file name encodes the following: system information, no RPAs (splitters only), gathered from site 1 box 1 (l1) and site 1 box 2 (l2), followed by the date and time stamp of the log collection.
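A minimal sketch of extracting the collected logs, using the example file name above (the working directory on the workstation where the file was downloaded is an assumption):

# Extract the collected splitter logs
tar xvf sysInfo-no-rpas-splitters-from-l1-l2-2010.03.15.15.43.05.tar

# Review the extracted directories and files
ls -l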

Untarring the log file yields the following files:

• HLR-l1-2010.03.15.15.43.10 -> directory containing the collected details

• HLR-l1_success -> details on the collection (success or fail)

• HLR_hosts -> host log retriever (HLR) hosts with splitters installed from which data was collected

• HostsCollectionRemote.log -> log of collection activity

The directory containing the actual logs has the date and time stamp as part of the directory name: HLR-l1-2010.03.15.15.43.10. This directory contains a zipped/tarred file for each splitter from which files were collected.

These files should be sent to support for analysis.


Using kutils to manage volumes and LUNs in Solaris

Source the /kdriver/bin/kenv.sh file before running kutils. kutils on a Sun Solaris platform allows you to perform the following actions:

• Enable or disable automatic host info collection

• Show volume details by volume name

• Show all volumes

• Start/stop the host driver splitter (similar to running /etc/init.d/kdrv directly)

kutils in a Sun Solaris environment does not provide base OS functionality such as managing disk cache or buffers, or mounting and unmounting devices. Instead, use the native OS tools to perform these operations.
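A minimal sketch of running kutils, using the automatic host info collection operation documented in the logging section; the names of other kutils subcommands vary by splitter version and are not shown here:

# Set up the kutils environment, then run a kutils operation
. /kdriver/bin/kenv.sh
/kdriver/bin/kutils manage_auto_host_info_collection enable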

SCSI reservations in a Sun Solaris/RecoverPoint environment are managed primarily by RecoverPoint. If the consistency group for a given host has been defined to support SCSI reservations, the RPAs handle and manage reservations according to the behaviors defined for the consistency group.

Note: If the application or the cluster system is using SCSI reservation on the replication LUNs, make sure to check the Reservation Support checkbox in the General section of the Advanced Consistency Group settings under the Policy tab.


Upgrading the host-based splitter on Solaris

Beginning with RecoverPoint 3.1, splitters and RPAs can be at different versions. With 3.1, the RecoverPoint environment (RPAs) can communicate with older versions of the splitters and their protocols (up to two different major versions). Beyond this, the splitters need to be upgraded.

Within the Solaris operating environment, only one instance of a package can be installed at any given time. Because of this, you must first uninstall the currently installed host-based splitter before installing the new version.

To uninstall and then reinstall the host-based splitter:

1. Using the RecoverPoint management console:

1. Detach all volumes from all splitters.

2. Remove all splitters from all hosts.

2. Log in to the host as root.

3. Stop the host-based splitter:

/etc/init.d/kdrv stop


If you choose option 1, a warning appears stating “This may impede operation of the host operation, and may also necessitate a full resynchronization for some volumes.” This action stops the host-based splitter and, according to the policy of each volume, either prevents host I/O writes to the volume or allows host I/O writes to the volume.

If you choose option 2, a warning appears stating “This will necessitate a full resynchronization.” This action stops the host-based splitter and allows host I/O writes to the volumes.

If you choose option 3, a warning appears stating “This may impede operation of the host operation.” This action stops the host-based splitter and prevents host I/O writes to the volumes.

4. Remove the host-based splitter package by running the command:

pkgrm kdriver

5. Restart the host by running the command:

reboot -- -r

6. As root, use the pkgadd command to install the RecoverPoint host-based splitter package on Sun Solaris. Refer to the EMC RecoverPoint Deployment Manager Product Guide for detailed instructions.

7. On the RecoverPoint management console, discover the newly installed splitter.


8. Re-attach volumes to the splitter. If the host was blocked from accessing the volumes during this activity, the volumes can be attached “as clean” and a full sweep will be avoided.

Note: Attaching volumes as clean can potentially lead to data corruption. You should do this only if applications are stopped and all file systems are unmounted.

9. Check the replication consistency groups to make sure that replication can now continue normally.


Solaris with UFS

Planned failover or failback

In some scenarios, you may fail over or fail back intentionally— for example, to perform maintenance on the source site, or to perform failback from the secondary site after resolving a problem at the primary site.

To perform failover with no data loss requires a short shutdown of applications (and therefore requires scheduling the downtime during a maintenance window).

1. On the source-site host(s):

1. Stop the applications protected by the consistency group you plan to failover.

2. Unmount all application volumes:

umount /<mountpoint>

2. On the RecoverPoint management console:

1. Create a bookmark for the consistency group.

2. Enable logged access on the bookmarked point in time. (You may need to wait until the system enables access to the replica.)

3. On the target-site host(s):

1. Mount the volumes:

mount /dev/dsk/<device name> /<mount point>

Note: It is important that the hosts at the target site do not mount the replicated volumes during system boot. (A sample /etc/vfstab entry appears at the end of this procedure.)

2. Test the volumes and make sure that applications work as expected.

4. On the RecoverPoint management console:

1. Click Failover Action and select Recover Production.

2. Approve the operation.

5. Resume production operation from the remote site.

6. To fail back production to the primary site, follow the planned failover procedure.
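Regarding the note in step 3 about not mounting replicated volumes during boot: one common way to honor it is to set the mount-at-boot field to no (or comment out the entry) in /etc/vfstab on the target-site hosts. A sketch with placeholder device names and mount point:

# Example /etc/vfstab entry with mount-at-boot set to "no"
# device to mount    device to fsck       mount point  FS type  fsck pass  mount at boot  options
/dev/dsk/c2t0d1s6    /dev/rdsk/c2t0d1s6   /app/data    ufs      2          no             -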

Unplanned failover

In some cases, a failure at the source site requires an immediate unplanned failover. The following procedure assumes that you cannot perform planned failover.

To perform an unplanned failover:

1. On the source-site host(s):

1. If necessary, stop the production applications.

2. If necessary, unmount file systems:

umount /<mount point>


2. On the target-site host, make sure that applications are not running and file systems are not mounted. If necessary, unmount file systems:

umount /<mount point>

3. On the RecoverPoint management console:

1. Bookmark the current point in time to annotate the unplanned failover.

2. Enable logged access to the earliest point in time possible. (You may need to wait for the system to enable logged access.)

If a disaster or logical corruption occurred, identify the latest bookmark or specify the most recent point in time prior to the disaster.

4. On the target-site host(s):

1. Run the fsck command for each of the replicated volumes:

fsck -y /dev/rdsk/<device name>

2. Mount the volumes:

mount /dev/dsk/<device name> /<mount point>

3. Test volumes or start your application.

4. If the selected point in time does not meet your requirements:

1. Stop the application.

2. Unmount file systems.

3. Using the RecoverPoint management console, enable logged access on a different point in time.

4. Repeat this process until you are satisfied with the selected point in time.

5. On the RecoverPoint management console:

1. Click Failover Action and select Recover Production.

2. Approve the operation.

6. Resume production operations from the remote site.

7. To fail back production to the primary site, follow the planned failover procedure.

Testing DR procedures

You should periodically test your disaster recovery procedures. RecoverPoint allows you to mount and test the replicated image at the target site, without interrupting replication and without affecting the operation at the source site.

To test your DR site:

1. Make sure that the applications you protect can run in parallel both at the production and the DR site. Some applications may require changing settings (such as IP setting and connection names) to allow you to test the application at the DR site without impacting the production site. Make the necessary changes to allow the application to run on both sites.


2. On the target-site host, make sure that the application is not running and the file system is unmounted:

1. If necessary, stop the application.

2. If necessary, unmount the file system:

umount /<mount point>

3. On the RecoverPoint management console:

1. Bookmark the current point in time to annotate the DR test.

If possible, create an application-consistent point-in-time bookmark (such as Oracle hot backup mode). Refer to “Appendix A: Bookmark in hot backup mode script” in Replicating Oracle with EMC® RecoverPoint Technical Notes.

Note: Make sure to flush the file system cache by running the sync command prior to taking the bookmark.

2. Enable logged access to the bookmarked point in time. (You may need to wait for the system to enable logged access.)

4. On the target-site host(s):

1. Run the fsck command:

fsck -y /dev/rdsk/<device name>

2. Mount volumes:

mount /dev/dsk/<device name> /<mountpoint>

5. Test volumes.

6. During testing, the RecoverPoint consistency group logs any I/Os to the system journal. Since the journal capacity is limited, you need to monitor journal usage during testing. If you are close to the journal limit, you can switch to an active-active configuration. Consult the RecoverPoint Administrator’s Guide for more information on active-active mode.

After testing is complete, resume distribution at the target site.

7. On the target-site host(s):

1. Stop the application.

2. Unmount volumes:

umount /<mountpoint>

8. On the RecoverPoint management console, disable image access at the target side to resume distribution.

Recovering production

RecoverPoint allows you to recover to any one of multiple consistent point-in-time images maintained in its journal. This enables you to recover information as it existed prior to a problem (for example, before an accidental deletion or file corruption).

The restore production procedure assumes you have a DR server to perform testing at the replica side.

Follow the instructions in the unplanned failover procedure up to step 4, the point at which you would disable image access following a successful test.


At this step, confirm and approve that the enabled point in time is the correct point in time, to which you want to restore your production site.

After you complete the following steps:

• The information at your production side is restored from the replica; you will lose the production data as it existed prior to initiating the restore production procedure.

• Any information in the replica journal that is later than the selected point in time is lost; earlier points in time remain available in the journal.

1. On the RecoverPoint management console:

1. Click Failover Action and select Recover Production.

2. Approve the operation.

3. Wait until initialization completes and the production site role states Production (being restored).

2. On the target side host:

1. Stop the application.

2. Unmount volumes:

umount /<mountpoint>

3. On the RecoverPoint management console:

1. Bookmark the current point in time.

2. On the production copy, enable logged access to the bookmarked point in time. (You may need to wait for the system to enable logged access.)

4. On the production host (that is being recovered):

1. Mount volumes:

mount /dev/dsk/<device name> /<mountpoint>

5. Test volumes and make sure that the application works as expected at the production site.

6. On the RecoverPoint management console, click Resume Production.


Veritas Storage Foundation

Veritas Storage Foundation provides heterogeneous online storage management. Based on Veritas Volume Manager and Veritas File System, it provides a standard set of integrated tools to centrally manage data growth and provide data protection.

VERITAS Volume Manager and VERITAS File System

VERITAS Volume Manager (VxVM) allows you to manage physical disks as logical devices called volumes. A volume is a logical device that appears to data management systems as a physical disk partition device. Through support of Redundant Array of Independent Disks (RAID), VxVM protects against disk and hardware failure. Additionally, VxVM provides features that enable fault tolerance and fast recovery from disk failure.

Veritas File System (VxFS) is a journaled file system. With journaling, metadata changes are first written to a log (or journal), and then to disk. Since changes do not need to be written in multiple places, throughput is much faster, as the metadata is written asynchronously.

RecoverPoint provides full support for data replication and disaster recovery for VxVM and VxFS. RecoverPoint can replicate physical disks managed by VxVM and VxFS over any distance to a target site, enhancing VxVM and VxFS fault tolerance and recovery capabilities.

VxVM disk groups and RecoverPoint Consistency groups

A VxVM disk group is a collection of physical disks (and a collection of virtual objects connected to these disks: VM disks, subdisks, plexes, and volumes) that share a common configuration. In most cases, a VxVM disk group is used by a single application.

RecoverPoint replication is based on a logical entity called a consistency group. LUNs on primary and secondary sites are assigned to a consistency group to define the set of data to replicate. Data consistency and write-order fidelity are maintained across all disks assigned to a consistency group.

When using VxVM and RecoverPoint, each VxVM disk group must, in its entirety, be part of a RecoverPoint consistency group or a consistency group set. Typically, a separate RecoverPoint consistency group is created for each VxVM disk group. However, a single consistency group can comprise multiple VxVM disk groups. In this case, a failover of a consistency group requires the failover of all VxVM disk groups in the consistency group.

When replicating a disk that belongs to a VxVM disk group, all disks in that disk group must be replicated, including disks that are not currently in use.

RecoverPoint cannot replicate the VxVM rootdg (root disk group). Therefore, make sure that no volume to be replicated belongs to rootdg.
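As a quick check before configuring replication (a minimal sketch using standard VxVM commands; disk group names are placeholders), list the disks with their disk group membership and confirm that none of the disks you intend to replicate belong to rootdg:

# Show all VxVM disks, including the disk group each one belongs to (imported and deported)
vxdisk -o alldgs list

# Show the objects in a specific disk group to verify its contents
vxprint -htg <dg name>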


Planned failover or failback

In some scenarios, you may fail over or fail back intentionally— for example, to perform maintenance on the source site, or to perform failback from the secondary site after resolving a problem at the primary site.

To perform failover with no data loss requires a short shutdown of applications (and therefore requires scheduling the downtime during a maintenance window).

1. On the source-site host(s):

1. Stop all applications that are protected by the consistency group you plan to failover.

2. Unmount all application volumes:

umount <mount point>

3. Deport the VxVM disk group:

vxdg deport <dg name>

If necessary, deport the disk group forcibly:

vxdg -C -f deport <dg name>

2. On the target-site host(s), make sure that the file systems are not mounted and that the VxVM disk group is deported.

1. If necessary, unmount the file system:

umount /<mount point>

2. If necessary, deport the VxVM disk group:

vxdg deport <dg name>

If necessary, deport the disk group forcibly:

vxdg -C -f deport <dg name>

3. On the RecoverPoint management console:

1. Create a bookmark in the system.

2. Enable logged access on the bookmarked point in time. (You may need to wait until the system enables access to the replica.)

4. On the target-site host(s):

1. Run the command:

vxdctl enable

This forces VxVM to restart its daemon, during which it rescans the attached disks and discovers any changes to them.

2. Import the replicated disk group:

vxdg -C import <dg name>

3. Start all volumes that belong to the disk group:

vxvol -g <dg name> startall


If the vxvol command fails, use the following procedure to recover:

1. Print disk group information so that you can determine if any volumes are in recovery mode:

vxprint -htg <dg name>

2. Recover each plex that is in the DISABLED RECOVER state:

vxmend -g <dg name> -o force off <plex name>
vxmend -g <dg name> on <plex name>
vxmend -g <dg name> fix clean <plex name>

3. Confirm that all disabled plexes for the disk group are in the DISABLED CLEAN state:

vxprint -htg <dg name>

4. Start the disk group:

vxvol -g <dg name> startall

4. Mount volumes:

mount -F vxfs [-o largefiles] /dev/vx/dsk/<dg name>/VolName /<mount point>

Note: It is important that the hosts at the target site do not mount the replicated volumes during system boot.

5. Test the volumes and make sure that applications work as expected.

5. On the RecoverPoint management console:

1. Click Failover Action and select Recover Production.

2. Approve the operation.

6. Resume production operation from the remote site.

7. To fail back production to the primary site, follow the planned failover procedure.

Unplanned failover

In some cases, a failure at the source site requires an immediate unplanned failover. The following procedure assumes that you cannot perform planned failover.

To perform an unplanned failover:

1. On the source-site host(s):

1. If necessary stop the production application.

2. If necessary, unmount the file system:

umount /<mount point>

3. If necessary, deport the VxVM disk group:

vxdg deport <dg name>

If necessary, deport the disk group forcibly:

vxdg -C -f deport <dg name>


2. On the target-site host, make sure that the application is not started, the file system is not mounted, and the VxVM disk group is deported.

1. If necessary, stop the application.

2. If necessary, unmount the file system:

umount /<mount point>

3. If necessary, deport the VxVM disk group:

vxdg deport <dg name>

If necessary, deport the disk group forcibly:

vxdg -C -f deport <dg name>

3. On the RecoverPoint management console:

1. Bookmark the current point in time to annotate the unplanned failover.

2. Enable logged access to the earliest point in time possible. (You may need to wait for the system to enable logged access.)

If a disaster or logical corruption occurred, identify the latest bookmark or specify the most recent point in time prior to the disaster.

4. On the target-site host(s):

1. Restart the VxVM daemon: vxdctl enable

This forces VxVM to restart its daemon, during which it rescans the attached disks and discovers any changes to them.

2. Import the replicated disk group: vxdg -C import <dg name>

3. Start all the volumes that belong to the disk group:

vxvol -g <dg name> startall

If the vxvol command fails, use the following procedure to recover:

1. Print disk group information so that you can determine if any volumes are in recovery mode:

vxprint -htg <dg name>

2. Recover each plex that is in the DISABLED RECOVER state:

vxmend -g <dg name> -o force off <plex name>
vxmend -g <dg name> on <plex name>
vxmend -g <dg name> fix clean <plex name>

3. Confirm that all disabled plexes for the disk group are in the DISABLED CLEAN state:

vxprint -htg <dg name>

4. Start the disk group:

vxvol -g <dg name> startall

4. Run the fsck command:

fsck -y -F vxfs [-o full] /dev/vx/rdsk/<dg name>/*


5. Mount the volumes:

mount -F vxfs [-o largefiles] /dev/vx/dsk/<dg name>/VolName /<mountpoint>

6. Test the volumes and ensure that the application works as expected.

7. If the selected point in time does not meet your requirements, do the following:

1. Stop the application.

2. Unmount file systems.

3. Using the RecoverPoint management console, enable logged access on a different point in time.

4. Repeat this process until you are satisfied with the selected point in time.

5. On the RecoverPoint management console:

1. Click Failover Action and select Recover Production.

2. Approve the operation.

6. Resume production operation from the remote site.

7. To fail back production to the primary site, follow the planned failover procedure.

Testing DR procedures

You should periodically test your disaster recovery procedures. RecoverPoint allows you to mount and test the replicated image at the target site, without interrupting replication and without affecting the operation of the source site.

To test your DR site:

1. Make sure that the applications you protect can run in parallel both at the production and the DR site. Some applications may require changing settings (such as IP setting and connection names) to allow you to test the application at the DR site without impacting the production site. Make the necessary changes to allow the application to run on both sites.

2. On the target-site host, ensure that the application is not started, the file system is not mounted, and the VxVM disk group is deported.

1. If necessary, stop the application.

2. If necessary, unmount the file system:

umount /<mount point>

3. If necessary, deport the VxVM disk group: vxdg deport <dg name>

If necessary, deport the disk group forcibly:

vxdg -C -f deport <dg name>

3. On the RecoverPoint management console:

1. Bookmark the current point in time to annotate the DR test.

2. If possible, create an application-consistent point-in-time bookmark (such as Oracle hot backup mode). Refer to “Appendix A: Bookmark in hot backup mode script” in Replicating Oracle with EMC® RecoverPoint Technical Notes.


Note: Make sure to flush the file system cache by running the sync command prior to taking the bookmark.

3. Enable logged access to the bookmarked point in time. (You may need to wait for the system to enable logged access.)

4. On the target-site host(s):

1. Restart the VxVM daemon, so that it rescans the attached disks and discovers any changes to them:

vxdctl enable

2. Import the replicated VxVM disk group: vxdg -C import <dg name>

3. Start all volumes that belong to the disk group: vxvol -g <dg name> startall

If the vxvol command fails, use the following procedure to recover:

1. Print disk group information and check if there are volumes in recovery mode:

vxprint –htg <dg name>

2. For each plex that is in DISABLED RECOVER state, run the following commands in order to recover:

vxmend -g <dg name> -o force off <plex name> vxmend -g <dg name> on <plex name> vxmend -g <dg name> fix clean <plex name>

3. Confirm that all disabled plexes for the disk group are in the DISABLE CLEAN state:

vxprint -htg <dg name>

4. Start the disk group:

vxvol -g <dg name> startall

4. Run the fsck command:

fsck -y -F vxfs [-o full] /dev/vx/rdsk/<dg name>/*

5. Mount volumes:

mount -F vxfs [-o largefiles] /dev/vx/dsk/<dg name>/<volume name> /<mount point>

5. Perform your testing on the application.

6. During testing, the RecoverPoint consistency group logs all I/Os to the system journal. Since the journal capacity is limited, you need to monitor journal usage during testing. If you are close to the journal limit, you can switch to an active-active configuration. Consult the RecoverPoint Administrator’s Guide for more information on active-active mode.

After testing is complete, resume distribution at the target site.

7. On the target-site host(s):

1. Stop the application.

2. Unmount volumes:

umount /<mountpoint>


3. Deport the VxVM disk group:

vxdg deport <dg name>

8. On the RecoverPoint management console, disable image access at the target side to resume distribution.
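The plex recovery steps above can be illustrated with a worked example. This is a minimal sketch assuming a hypothetical disk group appdg whose volume appvol has a single plex appvol-01 in the DISABLED RECOVER state; all names are illustrative only.

vxprint -htg appdg                          # identify plexes in RECOVER state
vxmend -g appdg -o force off appvol-01      # take the plex offline
vxmend -g appdg on appvol-01                # bring the plex back online
vxmend -g appdg fix clean appvol-01         # mark the plex clean
vxprint -htg appdg                          # confirm the plex is DISABLED CLEAN
vxvol -g appdg startall                     # start the volumes in the group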

Recovering production

RecoverPoint allows you to recover to any one of multiple consistent point-in-time images maintained in its journal. Using this ability, you can recover information as it existed prior to a problem (for example, before an accidental deletion or file corruption).

The restore production procedure assumes you have a DR server to perform testing at the replica side.

Follow the unplanned failover instructions up to step 4, in which you disable image access following a successful test.

At this step, confirm that the enabled point in time is the one to which you want to restore your production site.

Note: After you complete the following steps you will:

• Restore the information at your production side from the replica. This means that you will lose the data as it existed before you initiated the restore production procedure.

• Lose any information in the replica journal that is later than the selected point in time; earlier points in time will still be available in the journal.

1. On the RecoverPoint management console:

1. Click on the Failover Action and select Recover Production.

2. Approve the operation.

3. Wait until the initialization completes and the production site role states Production (being restored).

2. On the target-side host:

1. Stop the application.

2. Unmount volumes:

umount /<mountpoint>

3. Deport the VxVM disk group:

vxdg deport <dg name>

3. On the RecoverPoint management console:

1. Bookmark the current point in time.

2. On the production copy, enable logged access to the bookmarked point in time. (You may need to wait for the system to enable logged access.)

4. On the production host (that is being recovered):

1. Restart the VxVM daemon so that it rescans the attached disks and discovers any changes to them:

vxdctl enable

2. Import the replicated disk group:

vxdg -C import <dg name>


3. Start all the volumes that belong to the disk group:

vxvol -g <dg name> startall

If the vxvol command fails, use the following procedure to recover:

1. Print disk group information so that you can determine if any volumes are in recovery mode:

vxprint -htg <dg name>

2. Recover each plex that is in DISABLED RECOVER state:

vxmend -g <dg name> -o force off <plex name>
vxmend -g <dg name> on <plex name>
vxmend -g <dg name> fix clean <plex name>

3. Confirm that all disabled plexes for the disk group are in the DISABLED CLEAN state:

vxprint -htg <dg name>

4. Start the disk group:

vxvol -g <dg name> startall

4. Mount volumes:

mount -F vxfs [-o largefiles] /dev/vx/dsk/<dg name>/<volume name> /<mount point>

Note: It is important that the hosts at the target site do not mount the replicated volumes during system boot. (See the /etc/vfstab example at the end of this procedure.)

5. Test the volumes and ensure that the application works as expected at the production site.

5. On the RecoverPoint management console, click Resume Production.
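To satisfy the note above about not mounting replicated volumes during system boot, the corresponding /etc/vfstab entries on the target-site hosts should have the "mount at boot" field set to no. A minimal sketch, assuming the same hypothetical disk group appdg and volume appvol used earlier:

# device to mount           device to fsck              mount point  FS type  fsck pass  mount at boot  options
/dev/vx/dsk/appdg/appvol    /dev/vx/rdsk/appdg/appvol   /app         vxfs     2          no             largefiles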


Solaris Volume Manager

Solaris Volume Manager is a logical volume manager that lets you manage large numbers of disks and the data on those disks. Common Solaris Volume Manager tasks include:

• Increasing storage capacity

• Increasing data availability

• Easing administration of large storage devices

In some instances, Solaris Volume Manager can also improve I/O performance.

Guidelines on how to use Solaris Volume Manager are available in the Sun Solaris Volume Manager Administration Guide.

To replicate volumes with SVM, hosts at the replica side must fulfill the following prerequisites:

• SVM services should be enabled using SMF. (Refer to the Solaris Volume Manager Administration Guide for more information.)

• Each host should have dedicated state database replicas. Use the metadb -c -f command to create state database replicas; RecoverPoint does not replicate the state database.
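For example, to create three dedicated state database replicas on each replica-side host (a minimal sketch; c0t0d0s7 is a hypothetical slice reserved for the replicas, and the -a option adds them):

metadb -a -f -c 3 c0t0d0s7
metadb                         # verify that the replicas were created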

When replicating SVM volumes, place all of the underlying devices into a single consistency group or a single consistency group set. This includes the following devices:

• Any disk that is a member of a mirror, RAID, or stripe/concatenation.

• Any spare disk(s)

Perform the following procedure when setting up a new DR host at the replica side or when there is a metadevice configuration change at the production host:

1. On the source, save the metadevice configuration to a file:

metastat -p > /tmp/md.tab

2. On the DR host, back up /etc/lvm/md.tab to a different file. For example:

cp /etc/lvm/md.tab /etc/lvm/md.tab.<date>

3. Copy the file /tmp/md.tab from the production host to the DR host, overwriting the DR host file /etc/lvm/md.tab.

4. On the DR host, edit the devices in the md.tab file, renaming each device to match its name on the DR host. The mapping must correspond exactly to how the devices are paired in the RecoverPoint consistency group replication sets. (A worked example appears at the end of this procedure.)

5. Delete any metadevice that needs to be reconfigured at the DR host:

• For soft partitions:

metaclear -p <metadevice>

• For any other metadevice:

metaclear <metadevice>

6. Using the RecoverPoint management console, enable image access for the replicated consistency group.


7. As the root user at the DR host:

1. Reconfigure devices:

devfsadm -c

2. Recreate the metadevices from the configuration file /etc/lvm/md.tab:

metainit -a

3. Confirm that all metadevices were created:

metastat [-p]

4. Mount the replicated file system and start the application.

Note: The block and raw devices for the metadevices are located under /dev/md/[r]dsk/<metadevice name>.

5. Confirm that data is accessible and that the application is working as expected.

6. Stop your application and unmount the replicated file systems.

8. In the RecoverPoint management console, disable image access for the consistency group to resume distribution.
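As a worked example of this procedure, the following is a minimal sketch using hypothetical names: a concatenation metadevice d10 holding a UFS file system mounted at /app. Your md.tab entries and controller/target/disk names will differ.

# On the production host, save the metadevice configuration:
metastat -p > /tmp/md.tab

# On the DR host, after copying the file over /etc/lvm/md.tab, rename the
# devices to match the DR host. For example, change the entry
#   d10 1 1 c1t0d0s0
# to
#   d10 1 1 c2t5d0s0

# Clear any metadevice that must be reconfigured (step 5):
metaclear d10

# After enabling image access in the RecoverPoint management console,
# rescan devices and recreate the metadevices (step 7):
devfsadm
metainit -a
metastat -p

# Check and mount the replicated file system, then start the application:
fsck -F ufs -y /dev/md/rdsk/d10
mount -F ufs /dev/md/dsk/d10 /app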

All other procedures for Solaris SVM are the same as the Solaris UFS procedures listed in Solaris with UFS. The only difference is that the metadevice path for Solaris SVM is /dev/md/[r]dsk/<metadevice name>.


Solaris ZFS

The ZFS file system is integrated into the Solaris 10 operating system. ZFS presents a pooled storage model that eliminates the concept of volumes as well as all of the related partition management, provisioning, and file system sizing matters. ZFS combines scalability and flexibility while providing a simple command interface.

For more information on ZFS, refer to Sun’s Solaris ZFS Administration Guide, available at http://docs.sun.com/app/docs/doc/819-5461.

EMC supports ZFS in Solaris 10 11/06 or later. The snapshot and clone features of ZFS are supported only through Sun Microsystems.

Due to a limitation in Sun ZFS, a CDP ZFS replica currently cannot be accessed from the same host that runs the production environment.

When replicating ZFS pools, place all the ZFS pool devices into a single consistency group or a consistency group set. This includes the following devices:

• Any disk that is a member of a mirror, RAID, or stripe/concatenation.

• Any spare disk that is part of the ZFS pool

• Any log device (ZIL)

To list all devices that belong to a ZFS pool, run the zpool status command on your production system.

Note: There is a known issue when replicating a pool with hot spares. When the pool is imported at the remote site, it may include an incorrect reference to the spare devices.
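For example, if the production pool were created as follows (a hypothetical pool named apppool with a two-way mirror, a separate log device, and a hot spare), all four disks would have to be included in the same consistency group or consistency group set:

zpool create apppool mirror c2t1d0 c2t2d0 log c2t3d0 spare c2t4d0
zpool status apppool     # lists every device that belongs to the pool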

Planned failover or failback

In some scenarios, you may fail over or fail back intentionally; for example, to perform maintenance on the source site, or to fail back from the secondary site after resolving a problem at the primary site.

Performing a failover with no data loss requires a short application shutdown (and therefore requires scheduling the downtime during a maintenance window).

1. On the source-site host(s):

1. Stop all applications that are protected by the consistency group you plan to fail over.

2. Export the ZFS pool(s) in your production system:

zpool export [-f] <pool name>

2. On the target-site host(s), ensure that the file systems are not mounted and that the ZFS pools are exported.

1. If necessary, list the imported ZFS pools:

zpool status

2. If necessary, export the ZFS pools:

zpool export <pool name>

If necessary, export the ZFS pool forcibly:

zpool export -f <pool name>


3. On the RecoverPoint management console:

1. Create a bookmark for the consistency group.

2. Enable logged access on the bookmarked point in time. (You may need to wait until the system enables access to the replica.)

4. On the target-site host(s):

1. List the available ZFS pools at the DR host:

zpool import

2. To import a replicated ZFS pool:

zpool import -f <pool name>

3. Test the volumes and ensure that the application works as expected. (A consolidated example of the import sequence appears at the end of this procedure.)

5. On the RecoverPoint management console:

1. Click on the Failover Action and select Recover Production.

2. Approve the operation.

6. Resume production operation from the remote site.

7. To fail back production to the primary site, follow the planned failover procedure.
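Putting the target-side steps together, the following is a minimal sketch assuming a hypothetical pool named apppool; substitute your own pool name.

zpool import               # list pools that are available for import
zpool import -f apppool    # import the replicated pool
zfs list -r apppool        # confirm that the pool's datasets are available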

Unplanned failover

In some cases, a failure at the source site requires an immediate unplanned failover. The following procedure assumes that you cannot perform planned failover.

To perform an unplanned failover:

1. On the source-site host(s):

1. If necessary, stop the production applications.

2. If necessary, list the imported ZFS pools:

zpool status

3. If necessary, use the following command to unmount the file system and export the ZFS pool:

zpool export <pool name>

If necessary, export the ZFS pool forcibly:

zpool export -f <pool name>

2. On the target-site host, make sure that applications are not started, file systems are not mounted, and ZFS pools are exported.

1. If necessary, stop the application.

2. If necessary, unmount the file system and export the ZFS pool:

zpool export <pool name>

If necessary, export the ZFS pool forcibly:

zpool export -f <pool name>

3. On the RecoverPoint management console:

1. Bookmark the current point in time to annotate the unplanned failover.


2. Enable logged access to the earliest point in time possible. (You may need to wait for the system to enable logged access.)

If a disaster or logical corruption occurred, identify the latest bookmark or specify the most recent point in time prior to the disaster.

4. On the target-site host(s):

1. List the available ZFS pools at the DR host:

zpool import

2. To import a replicated ZFS pool:

zpool import -f <pool name>

3. Test the volumes and make sure that applications work as expected. If the selected point in time does not meet your requirements, perform the following:

1. Stop the application.

2. Export the ZFS pool.

3. Using the RecoverPoint management console, enable logged access on a different point in time.

4. Repeat this process until you are satisfied with the selected point in time.

5. On the RecoverPoint management console:

1. Click on Failover Action and select Recover Production.

2. Approve the operation.

6. Resume production operation from the remote site.

7. To fail back production to the primary site, follow the planned failover procedure.

Testing DR procedures

You should periodically test your disaster recovery procedures. RecoverPoint allows you to mount and test the replicated image at the target site, without interrupting replication and without affecting the operation of the source site.

To test your DR site:

1. Make sure that the applications you protect can run in parallel at both the production and DR sites. Some applications may require configuration changes (such as IP settings and connection names) before you can test them at the DR site without impacting the production site. Make the necessary changes so that the application can run at both sites.

2. On the target-site host(s), make sure that the file systems are not mounted and that ZFS pools are exported.

1. If necessary, list the imported ZFS pools:

zpool status

2. If necessary, export the ZFS pools:

zpool export <pool name>

If necessary, export the ZFS pool forcibly:

zpool export -f <pool name>


3. On the RecoverPoint management console:

1. Bookmark the current point in time to annotate the DR test.

If possible, create an application-consistent point-in-time bookmark (such as Oracle hot backup mode). Refer to “Appendix A: Bookmark in hot backup mode script” in Replicating Oracle with EMC® RecoverPoint Technical Notes.

Note: Make sure to flush the file system cache by running the sync command prior to taking the bookmark.

2. Enable logged access to the bookmarked point in time. (You may need to wait for the system to enable logged access).

4. On the target-site host(s):

1. List the available ZFS pools at the DR host:

zpool import

2. To import a replicated ZFS pool:

zpool import -f <pool name>

5. Perform your testing on the application.

6. During testing, the RecoverPoint consistency group logs all I/Os to the system journal. Since the journal capacity is limited, you need to monitor journal usage during testing. If you are close to the journal limit, you can switch to an active-active configuration. Consult the RecoverPoint Administrator’s Guide for more information on active-active mode.

After testing is complete, resume distribution at the target site.

7. On the target-site host(s):

1. Stop the application.

2. Export the ZFS pool:

zpool export <pool name>

If necessary, export the ZFS pool forcibly:

zpool export -f <pool name>

8. On the RecoverPoint management console, disable image access at the target side to resume distribution.

Recovering production

RecoverPoint allows you to recover to any one of multiple consistent point-in-time images maintained in its journal. This allows you to recover information as it existed prior to a problem (for example, before an accidental deletion or file corruption).

The restore production procedure assumes you have a DR server to perform testing at the replica side.

Follow the unplanned failover instructions up to step 4, in which you disable image access following a successful test.

At this step, confirm that the enabled point in time is the one to which you want to restore your production site.

Note: After you complete the following steps you will:


• Restore the information at your production side from the replica. This means that you will lose the data as it existed before you initiated the restore production procedure.

• Lose any information in the replica journal that is later than the selected point in time; earlier points in time will still be available in the journal.

1. On the RecoverPoint management console:

1. Click Failover Action and select Recover Production.

2. Approve the operation.

3. Wait until the initialization completes and the production site role states Production (being restored).

2. Perform the following steps on the target-side host:

1. Stop the application.

2. Export the ZFS pool:

zpool export <pool name>

If necessary, export the ZFS pool forcibly:

zpool export -f <pool name>

3. On the RecoverPoint management console:

1. Bookmark the current point in time.

2. On the production copy, enable logged access to the bookmarked point in time. (You may need to wait for the system to enable logged access.)

4. On the production host (the host that is being recovered):

1. List the available ZFS pools at the DR host:

zpool import

2. To import a replicated ZFS pool:

zpool import -f <pool name>

3. Test the volumes and make sure that applications work as expected at the production site.

4. Start your applications to resume operation at the production site.

5. On the RecoverPoint management console, click Resume Production.


Comments and getting help

Getting help

EMC support, product, and licensing information can be obtained as follows:

• Product information. For documentation and release notes, or for information about EMC products, licensing, and service, go to the Powerlink website (registration required) at http://Powerlink.EMC.com.

• Technical support. For technical support, go to EMC Customer Service on Powerlink. To open a service request through Powerlink, you must have a valid support agreement. Contact your EMC sales representative for details about obtaining a valid support agreement or to answer any questions about your account.

Your comments

Please include the title, part number, and revision of the document.

If you have issues, comments, or questions about specific information or procedures, include the relevant page numbers and any other information that will help us locate the information you are addressing.

[email protected]

Copyright © 2007 - 2010 EMC® Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided “as is.” EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on www.emc.com.

All other trademarks used herein are the property of their respective owners.