
Proven Practice: EMC® Celerra® & VMware® vCenter SRM 4.0 Setup Guide

VMware® vCenter Site Recovery Manager 4.0
& EMC® Celerra®
Command Line Setup Guide for NFS Replications

Cormac Hogan
VMware Global Support Services

September 2009, Rev A


Contents

Introduction
Requirements
Disclaimer
Celerra Appliance Setup
    Part 1: Celerra Control Station Configuration Steps
    Part 2: Celerra Data Mover Network Configuration Steps
    Part 3: Storage (NFS) Configuration Steps
    Part 4: Replication Configuration Steps
VMware vCenter Site Recovery Manager v4.0 Install
    Part 1: Install SRM
    Part 2: Install the EMC Celerra SRA
    Part 3: Install the SRM Plugin
    Part 4: Pair the Protected and Recovery SRM Sites
    Part 5: Configure Inventory Mappings
Test Failover
    Part 1: Array Managers Configuration
    Part 2: Create Protection Group & Recovery Plan
    Part 3: Do a Test Failover
Troubleshooting Celerra
    Part 1: Recovery Celerra State (before test Failover)
    Part 2: Recovery Celerra State (after test Failover)
    Part 3: Recover data mover logs from a test Failover event
    Part 4: Decoding failed commands on the Celerra
    Part 5: SRM Logs
    Part 6: Celerra SRA Logs
Trademarks
Summary
About the Author


Introduction

One of the new features of VMware® vCenter Site Recovery Manager 4.0 is support for NFS. This proven practice uses the Celerra simulator appliance from EMC to demonstrate how to configure this new feature.

The Celerra simulator is downloadable free of charge from the EMC Powerlink® web site and is a great Site Recovery Manager learning tool. In this document, you will be guided through the configuration steps to use SRM 4.0 with vSphere 4.0 and EMC Celerra. The replication in this example will be done using NFS volumes.

One caveat with using the simulator is that it may take an additional 15 minutes after a reboot before it becomes manageable via its web-based interface. For this reason, this guide uses only the CLI, which speeds up the deployment.

Requirements

You will require a minimum of two ESX servers running ESX 4.0, one at your production site and one at your recovery site. Various deployment methods can be used; how to deploy ESX is beyond the scope of this document. You will also need a VMkernel network interface configured on each ESX server to access the NFS shares on the Celerra appliances (a minimal example is sketched below).
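For reference, here is a minimal sketch of creating a VMkernel port from the ESX service console. The vSwitch name (vSwitch1), uplink (vmnic1), port group name ("NFS-VMkernel") and IP details are example values only; substitute your own, or create the port from the vSphere Client instead:

[root@esx1 ~]# esxcfg-vswitch -a vSwitch1
[root@esx1 ~]# esxcfg-vswitch -L vmnic1 vSwitch1
[root@esx1 ~]# esxcfg-vswitch -A "NFS-VMkernel" vSwitch1
[root@esx1 ~]# esxcfg-vmknic -a -i 10.21.68.56 -n 255.255.252.0 "NFS-VMkernel"
[root@esx1 ~]# esxcfg-vmknic -l          (list the VMkernel NICs to confirm)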

This guide also assumes that you have vCenter 4.0 servers at both the production and recovery sites. Again, discussions on vCenter deployments are beyond the scope of this document.

The Celerra NAS appliance VM will require two IP addresses. Therefore you will require a total of 4 IP addresses for the two appliances (2 IP addresses each).

Many simulators require that the vSwitch to which the appliance VM is attached is in promiscuous mode. This is not necessary with the Celerra VM, although enabling it will not impact the Celerra's behaviour.

Disclaimer

This guide is by no means comprehensive. It is a step-by-step guide to get the EMC Celerra to a point where it can be used with VMware vCenter Site Recovery Manager for proof-of-concept, demonstration and training purposes. For production deployments, you are urged to refer to the detailed documentation provided by both EMC and VMware. You use this proven practice at your discretion. VMware and the author do not guarantee any results from the use of this proven practice. This proven practice is provided on an as-is basis and is for demonstration purposes only.


Celerra Appliance Setup

Step 1: Deploy two Celerra appliances onto the ESX servers, one at the production site and one at the recovery site.

Before powering up each Celerra VM, edit the hardware settings, add a second vCPU and increase the memory to 2GB.

The Celerra simulator has a single Control Station (Management Network) interface. This may be allocated a DHCP address or configured with a static IP address. You will connect to this address (via SSH in this guide) to do the configuration.

When deploying the Celerra simulator VM, you will need to ensure that it has:

• 2GB of memory, 2 vCPUs & 40 GB of disk space.

• 2 IP addresses – one for the Control Station and one for the Data Mover. The data mover IP won’t be used until setup part 2 step 4.

o IP address for the local Control Station: _______________________
o IP address for the local Data Mover: _______________________

o IP address for the remote Control Station: _____________________
o IP address for the remote Data Mover: _____________________

As I mentioned, we will be doing this setup from the CLI. We will not be using the browser interface to the Celerra.

This is very tricky and not at all intuitive. Do not deviate from the setup steps listed below or you will run into problems.


Part 1: Celerra Control Station Configuration Steps

There are 2 logins configured on the Celerra Simulator:

• root / nasadmin
• nasadmin / nasadmin

Step 1: This step may not be necessary for vanilla Celerra appliances. However for older, previously configured appliances, you need to delete any old data mover IP addresses before continuing.

Login as nasadmin and check using the following command:

[nasadmin@celerra_B_VM ~]$ server_ifconfig ALL -all
dmover_sim_B :
loop protocol=IP device=loop
     inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
     UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
10-21-68-73 protocol=IP device=cge0
     inet=10.21.68.73 netmask=255.255.252.0 broadcast=10.21.71.255
     UP, ethernet, mtu=1500, vlan=0, macaddr=0:50:56:ae:41:e3
el31 protocol=IP device=el31
     inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255
     UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:6 netname=localhost
el30 protocol=IP device=el30
     inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255
     UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:5 netname=localhost

[nasadmin@celerra_B_VM ~]$

If you have only the loopback interface and elX interfaces, then proceed to step 2.

In this example, however, there is an interface that needs to be removed: the one using a cgeX device. In the output above this is the interface called 10-21-68-73 (the name is simply a representation of the IP address of the interface). Remove this interface using the following command:

[nasadmin@celerra_B_VM ~]$ server_ifconfig dmover_sim_B -delete 10-21-68-73
dmover_sim_B : done
[nasadmin@celerra_B_VM ~]$

Once this is removed, proceed to the next step, where we will configure the Control Station network.


Step 2: If this is a preconfigured Celerra, then you will need to provide a unique IP address to the Control Station. If it is a brand new deployment, then it will most likely be using DHCP (DYNAMIC). Login as the root user (password nasadmin) and run ifconfig eth0 to see the current settings.

To change the Control Station (Management) network settings, use the command netconfig -d eth0. This allows you to choose DHCP or set up static networking on the interface. The utility is graphical and will look similar to the following:

Step 3: Populate the screen with an appropriate static IP address. The netmask in this lab is 255.255.252.0, and the default gateway will populate automatically. Change the primary nameserver to your DNS server (10.16.65.110 in this lab). After making the changes, run ifdown eth0 followed by ifup eth0.

You can repeat this procedure if you are using a second and/or third interface (eth1 and eth2). However, in this example we will only be using a single interface (eth0) in our configuration.
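If you prefer not to use the interactive netconfig utility, the same settings can be written directly to the standard network-script files; this is a sketch only, assuming the Control Station uses the usual Red Hat layout (the simulator appears to, given the /etc/sysconfig/network file shown later), and the gateway value below is a hypothetical example:

[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=10.21.65.213
NETMASK=255.255.252.0
GATEWAY=10.21.64.1
ONBOOT=yes
[root@localhost ~]# ifdown eth0; ifup eth0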

Ignore the dart_eth0 and dart_eth1 interfaces – these are used for communicating with back-end storage. In the case of the Celerra Simulator, it communicates to a simulated EMC Clariion back-end.

Step 4: Run ifconfig eth0 to verify that your changes have taken effect.

Step 5: Verify that you can ping the new IP address from your local desktop.

Step 6: You can now ssh to the Control Station if the network is functional. It might be easier to complete the remainder of this setup using a utility like PuTTY to ssh onto the Control Station of the Celerra.


Step 7: Now we set up the Data Mover networking. To make sure that we are using unique MAC addresses, we have to clean out the data mover interfaces and re-add them. Log in as nasadmin, cd to /opt/blackbird/tools (blackbird is the EMC codename for Celerra), and run the command configure_nic ALL -l, which lists all the defined Data Mover interfaces. This may return something like this:

[nasadmin@localhost ~]$ cd /opt/blackbird/tools/
[nasadmin@localhost tools]$ ./configure_nic ALL -l
---------------------------------------------------------------
server_2: network devices:
Slot  Device  Driver  Stub    Ifname  Irq     Id      Vendor
---------------------------------------------------------------
3     cge0    bbnic   direct  eth0    0x0018  0x1645  0x14e4
4     cge1    bbnic   direct  eth0    0x0019  0x1645  0x14e4
---------------------------------------------------------------
---------------------------------------------------------------
server_3: network devices:
Slot  Device  Driver  Stub    Ifname  Irq     Id      Vendor
---------------------------------------------------------------
3     cge0    bbnic   direct  eth0    0x0018  0x1645  0x14e4
4     cge1    bbnic   direct  eth0    0x0019  0x1645  0x14e4
---------------------------------------------------------------
[nasadmin@localhost tools]$

The objective is to clear all these entries, reboot the Celerra, and re-add new entries. To delete the old entries, use the following command for each datamover defined:

configure_nic <data mover> -d cgeX.

To clear these entries, run:

[nasadmin@localhost tools]$ ./configure_nic server_2 -d cge0
server_2: deleted device cge0.
---------------------------------------------------------------
server_2: network devices:
Slot  Device  Driver  Stub    Ifname  Irq     Id      Vendor
---------------------------------------------------------------
4     cge1    bbnic   direct  eth0    0x0019  0x1645  0x14e4
---------------------------------------------------------------

[nasadmin@localhost tools]$ ./configure_nic server_2 -d cge1
server_2: deleted device cge1.
---------------------------------------------------------------
server_2: network devices:
Slot  Device  Driver  Stub    Ifname  Irq     Id      Vendor
---------------------------------------------------------------
---------------------------------------------------------------

[nasadmin@localhost tools]$ ./configure_nic server_3 -d cge0
server_3: deleted device cge0.
---------------------------------------------------------------
server_3: network devices:
Slot  Device  Driver  Stub    Ifname  Irq     Id      Vendor
---------------------------------------------------------------
4     cge1    bbnic   direct  eth0    0x0019  0x1645  0x14e4
---------------------------------------------------------------

[nasadmin@localhost tools]$ ./configure_nic server_3 -d cge1
server_3: deleted device cge1.
---------------------------------------------------------------
server_3: network devices:
Slot  Device  Driver  Stub    Ifname  Irq     Id      Vendor
---------------------------------------------------------------
---------------------------------------------------------------
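If you prefer, the same deletions can be scripted in one pass from the tools directory; this is simply a convenience wrapper around the individual configure_nic commands shown above:

[nasadmin@localhost tools]$ for dm in server_2 server_3; do
>   for dev in cge0 cge1; do
>     ./configure_nic $dm -d $dev
>   done
> done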


[nasadmin@localhost tools]$ ./configure_nic ALL -l
---------------------------------------------------------------
server_2: network devices:
Slot  Device  Driver  Stub    Ifname  Irq     Id      Vendor
---------------------------------------------------------------
---------------------------------------------------------------
---------------------------------------------------------------
server_3: network devices:
Slot  Device  Driver  Stub    Ifname  Irq     Id      Vendor
---------------------------------------------------------------
---------------------------------------------------------------

All interfaces to the data movers have now been cleared.

Step 8: Now, before we reboot, we initialize the Celerra ID to make sure that the source and target Celerra IDs are unique when replicating between them.

Change to the root user, go to /opt/blackbird/tools.

We must run the script init_storageID, but first we need to modify it, as I have found that it hangs waiting for information from the data mover in slot 3 (which we do not use). EMC may be able to provide an alternate (and possibly supported) workaround for this issue, but for the purposes of this setup, modifying the script works.

The modification is simple; in the script, change line 237 from:

for server in 2 3;

to:

for server in 2;
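If you would rather not edit the script by hand, a sed one-liner applied to a backup copy achieves the same change; this assumes the string 'for server in 2 3;' appears only on the loop line you want to change:

[root@localhost tools]# cp init_storageID init_storageID.orig
[root@localhost tools]# sed -i 's/for server in 2 3;/for server in 2;/' init_storageID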

Now run the command init_storageID. It asks whether you want to reboot the Celerra; answer y. Be patient, as this command can take a minute or so to complete. A number of commands have to run, such as nas_license and nas_diskmark, and storage discovery takes place; these will be visible during the shutdown.

Step 9: After the Celerra has rebooted, log back in as nasadmin, cd to the /opt/blackbird/tools directory again, and run the command ./configure_nic <data mover> -a ethX. For each of these commands that you run, a new cge interface is added to the data mover. This means that if you add eth0 as your first argument, a cge0 is created which will communicate to the outside world via eth0. Similarly, if you specify eth1 as your first argument, your data mover cge0 interface would communicate to the outside world via eth1. And so on.

[nasadmin@localhost tools]$ ./configure_nic server_2 -a eth0
server_2: added new device cge0 in slot 3. Use server_ifconfig to configure the newly added device after reboot the virtual machine.
---------------------------------------------------------------
server_2: network devices:
Slot  Device  Driver  Stub    Ifname  Irq     Id      Vendor
---------------------------------------------------------------
3     cge0    bbnic   direct  eth0    0x0018  0x1645  0x14e4
---------------------------------------------------------------
[nasadmin@localhost tools]$

Once again we must become root and reboot. You may notice that I had two data movers here; however, for this lab we will only be working with a single data mover (server_2), so you only have to do this for one data mover. Reboot using reboot -n. This is the final reboot.

Step 10: After the Celerra has rebooted, log in as root using PuTTY and set up the Control Station hostname using the following commands. You should probably choose memorable names, such as celerra_a for the protected site and celerra_b for the recovery site:

[root@localhost ~]# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:50:56:81:79:63
          inet addr:10.21.65.213  Bcast:10.21.67.255  Mask:255.255.252.0
          inet6 addr: fe80::250:56ff:fe81:7963/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST DYNAMIC  MTU:1500  Metric:1
          RX packets:11274 errors:0 dropped:0 overruns:0 frame:0
          TX packets:205 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1003651 (980.1 KiB)  TX bytes:21250 (20.7 KiB)
          Interrupt:10 Base address:0x2000

[root@celerra_a ~]# head /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost localhost.localdomain localhost
10.21.65.211    celerra_a celerra_a.csl.vmware.com
10.21.65.213    celerra_b celerra_b.csl.vmware.com
# Internal DART Server Primary Network
128.221.252.1   server_1 #DART_data_mover_1

[root@localhost ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=celerra_b
FORWARD_IPV4="no"
DOMAINNAME=csl.vmware.com

[root@localhost ~]# hostname celerra_b
[root@localhost ~]# hostname
celerra_b
[root@localhost ~]#

Log out and back in and verify that the hostname is correctly reflected in the shell prompt. Verify that you can ping the hostname.

Step 11: To ensure that each Celerra has its own unique ID, we must update the Celerra identification details:

Log off as root and login as nasadmin/nasadmin and run the command nas_cel -l

[nasadmin@celerra_b ~]$ nas_cel -l
id    name       owner  mount_dev  channel  net_path      CMU
0     localhost  0                          127.0.0.1     BB005056AF1EE60000

Notice that the name is localhost. We need to update this to be the current Celerra hostname setting.


Become root and use the following commands:

[nasadmin@celerra_b ~]$ su -
Password:
[root@celerra_b ~]# NAS_DB=/nas; export NAS_DB
[root@celerra_b ~]# /nas/bin/nas_cel -update id=0
operation in progress (not interruptible)...
spawn /usr/bin/htdigest /nas/http/conf/digest DIC_Authentication BB0050568179630000_BB0050568179630000
Adding user BB0050568179630000_BB0050568179630000 in realm DIC_Authentication
New password:
Re-type new password:

id         = 0
name       = celerra_b
owner      = 0
device     =
channel    =
net_path   = 10.21.65.213
celerra_id = BB0050568179630000
Warning 17716815874: server_2 : failed to create the loopback interconnect
[root@celerra_b ~]#

The environment variable NAS_DB could be permanently added to root’s profile (/root/.bash_profile) as well as the PATHs /nas/bin and /nas/sbin if you so wish.
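For example, appending something like the following to /root/.bash_profile would make these settings persistent for root (a convenience only; this guide spells out the full paths in each command anyway):

export NAS_DB=/nas
export PATH=$PATH:/nas/bin:/nas/sbin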

Note the loopback interconnect warning. This needs to be addressed if we are going to do replication between Celerra simulators. The error is caused by the default replication database having incorrect identifiers for this Celerra.

Step 12: Edit /nas/dos/slot_2/boot.cfg and disable dpinit.

httpd secmac port=5082
# dpinit
statmonService start port=7777 allow=128.221.252.100:128.221.252.101:128.221.253.100:128.221.253.101
#verify_eof

Step 13: Edit /nas/server/slot_2/eof and disable dpinit.

mac allow=128.221.252.100:128.221.252.101:128.221.253.100:128.221.253.101
httpd secmac port=5082
# dpinit
statmonService start port=7777 allow=128.221.252.100:128.221.252.101:128.221.253.100:128.221.253.101
#verify_eof

Step 14: Reboot the data mover so that it is no longer doing replication.

[root@celerra_b ~]# /nas/bin/server_cpu server_2 -reboot now
server_2 : done
[root@celerra_b ~]#


Step 15: Run the /nas/sbin/getreason command until slot_2 (data mover) goes from reset state into loaded state. This can take a minute or two …

[root@celerra_b ~]# /nas/sbin/getreason
10 - slot_0 primary control station
 0 - slot_2 reset
 4 - slot_3 configured

[root@celerra_b ~]# while true
> do
> /nas/sbin/getreason | grep -i load
> sleep 2
> done
 3 - slot_2 loaded
 3 - slot_2 loaded

Step 16: Delete the database (icon_db) associated with replication. If you get an error message ‘unable to connect to host’, simply retry the command until it works:

[root@celerra_b ~]# /nas/bin/server_dbms server_2 -db -delete icon_db
server_2 :
Error 4000: server_2 : unable to connect to host
[root@celerra_b ~]# /nas/bin/server_dbms server_2 -db -delete icon_db
server_2 : done
[root@celerra_b ~]#

Step 17: Recreate the database:

[root@celerra_b ~]# /nas/bin/.server_config server_2 -w "dpinit"
server_2 : commands processed: 1
command(s) succeeded
[root@celerra_b ~]#

Step 18: Re-run the nas_cel update command that we ran in step 11, and make sure that there are no errors:

[root@celerra_b ~]# /nas/bin/nas_cel -list
id    name       owner  mount_dev  channel  net_path      CMU
0     celerra_b  0                          10.21.65.213  BB0050568179630000

[root@celerra_b ~]# /nas/bin/nas_cel -update id=0
operation in progress (not interruptible)...
id         = 0
name       = celerra_b
owner      = 0
device     =
channel    =
net_path   = 10.21.65.213
celerra_id = BB0050568179630000
[root@celerra_b ~]#

Step 19: Uncomment the dpinit entries that were commented out in steps 12 & 13.


Part 2: Celerra Data Mover Network Configuration steps

Step 1: Using the server_ifconfig command, configure a network interface on your data mover. This interface is plumbed up using the second static IP noted at the beginning of this setup:

[root@celerra_b ~]# server_ifconfig
Error 2100: usage: server_ifconfig { <movername> | ALL }
     -all
   | -delete <if_name>
   | -create -Device <device_name> -name <if_name> -protocol IP <ipaddr> <ipmask> <ipbroadcast>
   | <if_name> { up | down | ipsec | noipsec | mtu=<MTUbytes>[ vlan=<vlanID>] | vlan=<vlanID>[ mtu=<MTUbytes>] }

[root@celerra_b ~]# server_ifconfig server_2 -create -Device cge0 -name 10-21-65-214 -protocol IP 10.21.65.214 255.255.252.0 10.21.71.255
server_2 : done
[root@celerra_b ~]#

Note that I have used the IP address for the name of the interface, but replaced the dots with hyphens. This is how the Celerra assigns names to interfaces by default.

Step 2: Verify that you can ping this interface once it is created. Do not try to ping it from the Celerra Control Station – ping it from outside the Celerra, i.e. your desktop.

Use the following commands to verify the network connectivity of the Data Mover:

[root@celerra_A_VM_2 ~]# NAS_DB=/nas
[root@celerra_A_VM_2 ~]# export NAS_DB
[root@celerra_A_VM_2 ~]# /nas/bin/server_ifconfig ALL -all
server_4 :
10-21-68-178 protocol=IP device=cge0
     inet=10.21.68.178 netmask=255.255.252.0 broadcast=10.21.71.255
     UP, ethernet, mtu=1500, vlan=0, macaddr=0:50:56:af:61:8e
loop protocol=IP device=loop
     inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
     UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
el31 protocol=IP device=el31
     inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255
     UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:6 netname=localhost
el30 protocol=IP device=el30
     inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255
     UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:5 netname=localhost

[root@celerra_A_VM_2 ~]#


If you cannot ping the interface, try downing the interface and bringing it up again:

[root@celerra_A_VM_2 ~]# /nas/bin/server_ifconfig server_4 10-21-68-178 down
server_4 : done

[root@celerra_A_VM_2 ~]# /nas/bin/server_ifconfig ALL -all
server_4 :
10-21-68-178 protocol=IP device=cge0
     inet=10.21.68.178 netmask=255.255.252.0 broadcast=10.21.71.255
     DOWN, ethernet, mtu=1500, vlan=0, macaddr=0:50:56:af:61:8e
loop protocol=IP device=loop
     inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
     UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
el31 protocol=IP device=el31
     inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255
     UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:6 netname=localhost
el30 protocol=IP device=el30
     inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255
     UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:5 netname=localhost

[root@celerra_A_VM_2 ~]# /nas/bin/server_ifconfig server_4 10-21-68-178 up
server_4 : done

[root@celerra_A_VM_2 ~]# /nas/bin/server_ifconfig ALL -all
server_4 :
10-21-68-178 protocol=IP device=cge0
     inet=10.21.68.178 netmask=255.255.252.0 broadcast=10.21.71.255
     UP, ethernet, mtu=1500, vlan=0, macaddr=0:50:56:af:61:8e
loop protocol=IP device=loop
     inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
     UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
el31 protocol=IP device=el31
     inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255
     UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:6 netname=localhost
el30 protocol=IP device=el30
     inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255
     UP, ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:5 netname=localhost

If you cannot, go back and check the configure_nic steps in part 1. Until you can ping this interface from outside the Celerra, there is no point in continuing any further.


Note (helpful tip): If you do wish to use the web interface and you are trying to determine if the Celerra appliance is back up and ready to be used, the vCenter performance view of the appliance can help. Here is a view of my Celerra appliance immediately after a reboot. Note the very high CPU usage:

Wait for this CPU usage to drop off, usually after roughly 15 to 20 minutes, as shown below. Once the usage has dropped, the Celerra appliance has started up and is ready to be worked on via the web interface.


Part 3: Storage (NFS) Configuration Steps

The final steps of the configuration are to create and present an NFS filesystem to our ESX servers. Login as nasadmin.

Step 1: The first thing to do is to license the features that we are going to use. You will not need any license keys; it is simply a matter of enabling the features. However, you may first have to initialize the license table.

[nasadmin@celerra_b ~]$ nas_license -list
key                 status    value
site_key            online    4a 8a e6 d6

Run the following command only if the previous command fails with the error 'license table is not initialized'. Such a failure suggests that the init_storageID command run earlier did not complete successfully, so you should ideally go back and check that.

[nasadmin@celerra_A_VM_2 ~]$ nas_license -init
done
[nasadmin@celerra_A_VM_2 ~]$

Step 2: Add the additional licenses required for NAS replication on the Celerra:

[nasadmin@celerra_b ~]$ nas_license -create advancedmanager
done
[nasadmin@celerra_b ~]$ nas_license -create nfs
done
[nasadmin@celerra_b ~]$ nas_license -create snapsure
done
[nasadmin@celerra_b ~]$ nas_license -create replicatorV2
done
[nasadmin@celerra_b ~]$ nas_license -list
key                 status    value
site_key            online    4a 8a e6 d6
advancedmanager     online
nfs                 online
snapsure            online
replicatorV2        online

Step 3: We must now create a new file system on both the protected Celerra and the recovery Celerra using the nas_fs command. I called the file system SRM_FS and made it 2GB in size.

[nasadmin@celerra_b ~]$ nas_fs -name SRM_FS -create size=2000M pool=clar_r5_performance -option mover=server_2,slice=y,rw_servers=server_2
id        = 25
name      = SRM_FS
acl       = 0
in_use    = False
type      = uxfs
worm      = off
volume    = v100
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = no,virtual_provision=no
deduplication = unavailable
stor_devs = BB0050568107A8-0019,BB0050568107A8-0016,BB0050568107A8-0015,BB0050568107A8-0010
disks     = d16,d13,d12,d7

The command creates a 2GB file system from the storage pool called clar_r5_performance on the data mover server_2.
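If you are unsure which storage pools your simulator offers, the nas_pool command should list them; pool names can differ between simulator builds, so substitute whatever pool yours reports into the nas_fs command above:

[nasadmin@celerra_b ~]$ nas_pool -list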

Step 4: Check that the file systems have been created:

[nasadmin@celerra_b ~]$ nas_fs -list -all | grep SRM
25    n    1    0    100    SRM_FS

Step 5: Create a mount point on each Celerra for mounting the file system using the server_mountpoint command. I gave the mount point the same name as the file system (SRM_FS):

[nasadmin@celerra_b ~]$ server_mountpoint server_2 -create /SRM_FS
server_2 : done

[nasadmin@celerra_b ~]$ server_mountpoint server_2 -list
server_2 :
/.etc_common
/SRM_FS

Step 6: Mount the file system on its mount point using the server_mount command. On the protected site, the file system is mounted read-write, which is the default, e.g.

[nasadmin@celerra_b ~]$ server_mount server_2 SRM_FS /SRM_FS

On the recovery Celerra, the file system needs to be mounted read-only, so the command includes '-option ro'. Here is the command to mount the file system read-only on the recovery site:

[nasadmin@celerra_b ~]$ server_mount server_2 -option ro SRM_FS /SRM_FS
server_2 : done

Step 7: Check that the file systems have been mounted:

[nasadmin@celerra_b ~]$ server_df server_2
server_2 :
Filesystem          kbytes     used     avail     capacity  Mounted on
SRM_FS1             2015984    288      2015696   0%        /SRM_FS
root_fs_common      15368      5280     10088     34%       /.etc_common
root_fs_2           258128     6120     252008    2%        /

Step 8: The previous 7 steps must be repeated on the recovery site Celerra.

The only difference is at step 6: the protected site file system must be mounted read-write, so do not include the read-only option there (the command is simply server_mount server_2 SRM_FS /SRM_FS). The recovery site must have a read-only file system for file system replication to work between the two Celerras.
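For convenience, the recovery-side sequence can be summarised as follows; this is simply a recap of the commands already shown in steps 3 to 7, run on the recovery Celerra with the read-only mount:

[nasadmin@celerra_b ~]$ nas_fs -name SRM_FS -create size=2000M pool=clar_r5_performance -option mover=server_2,slice=y,rw_servers=server_2
[nasadmin@celerra_b ~]$ server_mountpoint server_2 -create /SRM_FS
[nasadmin@celerra_b ~]$ server_mount server_2 -option ro SRM_FS /SRM_FS
[nasadmin@celerra_b ~]$ server_df server_2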


Step 9: If you need to delete a file system for some reason, you will first of all have to permanently unmount it before you can delete it.

[nasadmin@celerra_d ~]$ server_umount server_2 -perm -Force SRM_FS
server_2 : done
[nasadmin@celerra_d ~]$ nas_fs -delete SRM_FS

Step 10: The next step is to set up the file system as an NFS share on the protected Celerra using the command server_export. If you do not have a VMkernel port created on your ESX servers, do that now before running this step.

Step 11: This task only needs to be done on the protected site. It does not have to be done on the recovery Celerra. The Celerra SRA takes care of the exporting and mounting on the recovery site.

The IP address specified in the rw and root fields is the VMkernel IP Storage interface IP address of the protected ESX server. Replace these fields with your protected ESX VMkernel IP address.

[nasadmin@celerra_a ~]$ server_export server_2 -Protocol nfs -option rw=10.21.68.56,root=10.21.68.56 /SRM_FS

Step 12: The file system should now appear as an NFS export:

[nasadmin@celerra_a ~]$ server_export server_2 -list -all
server_2 :
export "/SRM_FS" rw=10.21.68.56 root=10.21.68.56
export "/" anon=0 access=128.221.252.100:128.221.253.100:128.221.252.101:128.221.253.101

Should you make a typo during the export, you can remedy it via the following steps:

[nasadmin@celerra_a ~]$ server_export server_2 -list -all
server_2 :
export "/SRM_FS" rw=10.21.66.97 root=10.21.66.97 rw=10.21.66.91 root=10.21.66.91
export "/" anon=0 access=128.221.252.100:128.221.253.100:128.221.252.101:128.221.253.101

[nasadmin@celerra_a ~]$ server_export server_2 -unexport -perm /SRM_FS
server_2 : done

[nasadmin@celerra_a ~]$ server_export server_2 -list -all
server_2 :
export "/" anon=0 access=128.221.252.100:128.221.253.100:128.221.252.101:128.221.253.101

That completes the setup steps on the Celerra for the moment. We must now ensure that we can access these NFS shares from the ESX.


Step 13: This file system must now be mounted on the ESX server and a virtual machine created on the NFS share before moving on to the replication steps in the next section. In the protected site vSphere Client > Configuration tab > Hardware > Storage, click on the Add Storage… link:

Step 14: Select Network File System:

Step 15: Populate with the Celerra Data Mover IP address, share and the name you wish to assign to the share on the ESX:


Step 16: Click Finish at the summary screen:

Note: If you get a failure stating that the NFS server does not support MOUNT version 3 over TCP, then in all likelihood you have used the IP address of the Celerra Control Station rather than the IP address of the Celerra Data Mover. Repeat the Add Storage task, but use the Data Mover IP address:
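If you prefer the service console, the same NFS datastore can be mounted from the ESX command line; this is a sketch only, assuming a Data Mover IP of 10.21.65.214 and a datastore label of SRM_FS (substitute your own values):

[root@esx1 ~]# esxcfg-nas -a -o 10.21.65.214 -s /SRM_FS SRM_FS
[root@esx1 ~]# esxcfg-nas -l          (list NAS datastores to confirm the mount)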

Step 17: Once the mount has been successfully added to the ESX, deploy a small virtual machine to the NFS share.

Once these tasks are completed, you are now ready to setup replication between the two file systems on the protected and recovery Celerra.


Part 4: Replication Configuration Steps

We will do most of these steps from within the CLI. SSH onto the Celerra Control Stations using the nasadmin/nasadmin credentials. The steps can be summarised as follows:

1. Create a trust between the data movers at the local and remote sites.
2. Create an interconnect to allow the data movers to communicate.

Step 1: Create a trust between the data movers at the local and remote sites.

On both the protection side Celerra and the recovery side Celerra, run:

# nas_cel -create <cel_name> -ip <ipaddr> -passphrase <phrase>

The IP address is the Control Station IP. For example, on the protection side, I add the recovery Celerra:

[nasadmin@celerra_a ~]$ nas_cel -l
id    name       owner  mount_dev  channel  net_path      CMU
0     celerra_a  0                          10.21.65.211  BB0050569D49CD0000

[nasadmin@celerra_a ~]$ nas_cel -create celerra_b -ip 10.21.65.213 -passphrase vmware
operation in progress (not interruptible)...
id         = 3
name       = celerra_b
owner      = 0
device     =
channel    =
net_path   = 10.21.65.213
celerra_id = BB0050568179630000
passphrase = vmware
[nasadmin@celerra_a ~]$

The passphrase must be the same in both cases. I used vmware as the phrase.

To verify the trust relationship, run this command on both Celerras:

# nas_cel -l

[nasadmin@celerra_a ~]$ nas_cel -l
id    name       owner  mount_dev  channel  net_path      CMU
0     celerra_a  0                          10.21.65.211  BB0050569D49CD0000
3     celerra_b  0                          10.21.65.213  BB0050568179630000
[nasadmin@celerra_a ~]$


Step 2: Create an interconnect to allow the data movers to communicate.

On both the protection side Celerra and the recovery side Celerra, run:

# nas_cel -interconnect -create <name>
      -source_server <movername>
      -destination_system { <cel_name> | id=<cel_id> }
      -destination_server <movername>
      -source_interfaces { <name_service_interface_name> | ip=<ipaddr> } [,{ <name_service_interface_name> | ip=<ipaddr> },...]
      -destination_interfaces { <name_service_interface_name> | ip=<ipaddr> } [,{ <name_service_interface_name> | ip=<ipaddr> },...]

• from the source/protected side (note that here the IPs are for the data movers):

[nasadmin@celerraVM ~]$ nas_cel -interconnect -create srm_inter -source_server server_2 -destination_system celerra_b -destination_server server_2 -source_interfaces ip=10.21.65.212 -destination_interfaces ip=10.21.65.214
operation in progress (not interruptible)...
id                                 = 20003
name                               = srm_inter
source_server                      = server_2
source_interfaces                  = 10.21.65.216
destination_system                 = celerra_b
destination_server                 = server_2
destination_interfaces             = 10.21.65.214
bandwidth schedule                 = use available bandwidth
crc enabled                        = yes
number of configured replications  = 0
number of replications in transfer = 0
current transfer rate (KB/sec)     = 0
average transfer rate (KB/sec)     = 0
sample transfer rate (KB/sec)      = 0
status                             = The interconnect is OK.

• from the target/recovery side:

[nasadmin@celerra_B_VM ~]$ nas_cel -interconnect -create srm_inter -source_server server_2 -destination_system celerra_a -destination_server server_2 -source_interfaces ip=10.21.65.214 -destination_interfaces ip=10.21.65.212
operation in progress (not interruptible)...
id                                 = 20003
name                               = srm_inter
source_server                      = server_2
source_interfaces                  = 10.21.65.214
destination_system                 = celerra_a
destination_server                 = server_2
destination_interfaces             = 10.21.65.216
bandwidth schedule                 = use available bandwidth
crc enabled                        = yes
number of configured replications  = 0
number of replications in transfer = 0
current transfer rate (KB/sec)     = 0
average transfer rate (KB/sec)     = 0
sample transfer rate (KB/sec)      = 0
status                             = The interconnect is OK.


Step 3: Check the status of the interconnects

[root@celerra_a ~]# nas_cel -interconnect -list
id       name        source_server  destination_system  destination_server
20001    loopback    server_2       celerra_a           server_2
20003    srm_inter   server_2       celerra_b           server_2

[root@celerra_b ~]# nas_cel -interconnect -list
id       name        source_server  destination_system  destination_server
20001    loopback    server_2       celerra_b           server_2
20003    srm_inter   server_2       celerra_a           server_2

Note: The interconnect name must be the same on both Celerra.

You can use the -info option for additional information.
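For example, something like the following should show the detail for the interconnect created above (syntax is an assumption based on the -list output; adjust if your NAS code expects id=<interconnectId> instead of the name):

[root@celerra_a ~]# nas_cel -interconnect -info srm_inter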

Step 4: Verify that the date and time are in sync between the Control Stations. If the time skew between the Celerras is greater than 10 minutes, you will not be able to do replication. Log in to the Celerra as root:

[root@celerra_a ~]# date                                  (verify current date and time)
[root@celerra_a ~]# pgrep ntpd                            (ensure the NTP daemon is running)
[root@celerra_a ~]# chkconfig ntpd on                     (ensures NTP starts with the VSA)
[root@celerra_a ~]# service ntpd stop
[root@celerra_a ~]# ntpdate -u <NTP server IP address>

Here are a couple of Internet-based NTP server addresses for reference:
  129.6.15.28    (time-a.nist.gov)
  129.6.15.29    (time-b.nist.gov)
  128.138.140.44 (utcnist.colorado.edu)

[root@celerra_a ~]# service ntpd start
[root@celerra_a ~]# date                                  (verify current date and time are correct)
[root@celerra_a ~]# logout

Step 5: Now perform the same process for the data mover. Login to the Celerra as nasadmin:

[nasadmin@celerra_a ~]$ server_date server_2 timesvc delete ntp
[nasadmin@celerra_a ~]$ server_date server_2 timesvc start <NTP server IP address>
[nasadmin@celerra_a ~]$ server_date server_2
[nasadmin@celerra_a ~]$ logout

Step 6: If the times are still out of sync after following these steps, reboot the entire Celerra. Repeat these steps on the recovery site Celerra. An alternative method, which is not persistent, is to use the command date MMDDhhmm as root to set the same time on both Celerras. However, you should consider moving to the NTP method at some stage.
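To make the Control Station NTP configuration persistent across reboots, the standard Linux ntpd configuration can be pointed at your NTP server; this is a sketch only, assuming the usual /etc/ntp.conf layout and 129.6.15.28 as an example server:

[root@celerra_a ~]# echo "server 129.6.15.28" >> /etc/ntp.conf
[root@celerra_a ~]# chkconfig ntpd on
[root@celerra_a ~]# service ntpd restart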


Step 7: Now we are finally ready to do the replication. Use the following command from the source/protected Celerra, selecting the appropriate data mover IP addresses for your Celerra (not control station), as well as the correct file systems:

nas_replicate -create <name>
    -source -fs { <fsName> | id=<fsId> }
        [ -sav { <srcSavVolStoragePool> | id=<srcSavVolStoragePoolId> } ]
    -destination { -fs { id=<dstFsId> | <existing_dstFsName> }
        | -pool { id=<dstStoragePoolId> | <dstStoragePool> } [ -vdm <dstVdmName> ] }
        [ -sav { id=<dstSavVolStoragePoolId> | <dstSavVolStoragePool> } ]
    -interconnect { <name> | id=<interConnectId> }
    [ -source_interface { ip=<ipAddr> | <nameServiceInterfaceName> } ]
    [ -destination_interface { ip=<ipAddr> | <nameServiceInterfaceName> } ]
    [ { -max_time_out_of_sync <maxTimeOutOfSync> | -manual_refresh } ]
    [ -overwrite_destination ]
    [ -tape_copy ]
    [ -background ]

e.g.

[nasadmin@celerra_a ~]$ nas_replicate -create srm_replic -source -fs SRM_FS -destination -fs SRM_FS -interconnect srm_inter -source_interface ip=10.21.65.212 -destination_interface ip=10.21.65.214 -overwrite_destination
OK
[nasadmin@celerra_a ~]$

This may take some time to complete, so please be patient. Remember that these are Celerra appliances and do not have the performance capabilities of a real life Celerra.

If the progress never goes past 0% and the replicate command continually displays 0% progress, stop the replication (nas_replicate -stop srm_replic), run the command nas_cel -interconnect -validate srm_inter on both Celerras, and then restart the replication (nas_replicate -start srm_replic).
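As a sketch, that recovery sequence simply lays out the commands referenced above, run from the protected Celerra (with the validate step repeated on the recovery Celerra as well):

[nasadmin@celerra_a ~]$ nas_replicate -stop srm_replic
[nasadmin@celerra_a ~]$ nas_cel -interconnect -validate srm_inter
[nasadmin@celerra_a ~]$ nas_replicate -start srm_replic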

Step 8: Check the status of the replication by running the following commands:

[nasadmin@celerra_sim_a ~]$ nas_replicate -l
Name          Type          Local Mover   Interconnect    Celerra         Status
srm_replic    filesystem    server_2      -->srm_inter    celerra_sim+

[nasadmin@celerra_sim_a ~]$ nas_replicate -info srm_replic
ID                             = 112_BB005056A35D70_0000_112_BB0050568D117C_0000
Name                           = srm_replic
Source Status                  = OK
Network Status                 = OK
Destination Status             = OK
Last Sync Time                 =
Type                           = filesystem
Celerra Network Server         = celerra_sim_b
Dart Interconnect              = srm_inter
Peer Dart Interconnect         = srm_inter
Replication Role               = source
Source Filesystem              = FS3
Source Data Mover              = server_2
Source Interface               = 10.21.65.216
Source Control Port            = 0
Source Current Data Port       = 63309
Destination Filesystem         = FS3
Destination Data Mover         = server_2
Destination Interface          = 10.21.65.214
Destination Control Port       = 5085
Destination Data Port          = 8888
Max Out of Sync Time (minutes) = 10
Next Transfer Size (Kb)        = 431104
Current Transfer Size (KB)     = 431104
Current Transfer Remain (KB)   = 241418
Estimated Completion Time      = Tue Jun 30 11:34:22 EDT 2009
Current Transfer is Full Copy  = Yes
Current Transfer Rate (KB/s)   = 2173
Current Read Rate (KB/s)       = 288
Current Write Rate (KB/s)      = 169
Previous Transfer Rate (KB/s)  = 0
Previous Read Rate (KB/s)      = 0
Previous Write Rate (KB/s)     = 0
Average Transfer Rate (KB/s)   = 0
Average Read Rate (KB/s)       = 0
Average Write Rate (KB/s)      = 0

Note the blank Last Sync Time field and that Current Transfer is Full Copy is set to Yes. This means the file system is currently doing a full sync. When this has finished we can set up SRM.

Step 9: When the sync is complete, you should notice that the Last Sync Time field becomes populated, and the Current Transfer is Full Copy gets set to No.

[nasadmin@celerra_sim_a ~]$ nas_replicate -info srm_replic
ID                             = 112_BB005056A35D70_0000_112_BB0050568D117C_0000
Name                           = srm_replic
Source Status                  = OK
Network Status                 = OK
Destination Status             = OK
Last Sync Time                 = Tue Jun 30 11:34:33 EDT 2009
Type                           = filesystem
Celerra Network Server         = celerra_sim_b
Dart Interconnect              = srm_inter
Peer Dart Interconnect         = srm_inter
Replication Role               = source
Source Filesystem              = FS3
Source Data Mover              = server_2
Source Interface               = 10.21.65.216
Source Control Port            = 0
Source Current Data Port       = 0
Destination Filesystem         = FS3
Destination Data Mover         = server_2
Destination Interface          = 10.21.65.214
Destination Control Port       = 5085
Destination Data Port          = 8888
Max Out of Sync Time (minutes) = 10
Next Transfer Size (Kb)        = 0
Current Transfer Size (KB)     = 0
Current Transfer Remain (KB)   = 0
Estimated Completion Time      =
Current Transfer is Full Copy  = No
Current Transfer Rate (KB/s)   = 1950
Current Read Rate (KB/s)       = 368
Current Write Rate (KB/s)      = 71
Previous Transfer Rate (KB/s)  = 0
Previous Read Rate (KB/s)      = 0
Previous Write Rate (KB/s)     = 0
Average Transfer Rate (KB/s)   = 1950
Average Read Rate (KB/s)       = 0
Average Write Rate (KB/s)      = 0

[nasadmin@celerra_sim_a ~]$

Only when the sync has completed will the replicated file system be discovered by SRM.
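If you want to watch for this from the shell rather than re-running the command by hand, a simple polling loop works; this is just a convenience, assuming the replication session is named srm_replic as above:

[nasadmin@celerra_sim_a ~]$ while true
> do
>   nas_replicate -info srm_replic | grep -E "Last Sync Time|Full Copy"
>   sleep 60
> done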


VMware vCenter Site Recovery Manager v4.0 Install

Part 1: Install SRM

Note that this process will have to be implemented on both the protected and recovery SRM servers. Databases should have already been created for SRM at both the local and remote site and the appropriate ODBC DSN configured:

• Local Site vCenter Server IP:
• Local Site SRM Server IP:
• Local Site ODBC DSN:

• Remote Site vCenter Server IP:
• Remote Site SRM Server IP:
• Remote Site ODBC DSN:

Step 1: Log onto your SRM server as administrator. Double click on the SRM exe on your SRM server to begin the installation. At the welcome screen, click Next.

Step 2: Accept the EULA:


Step 3: Select the Destination Folder – leave at the default.

Step 4: Populate the vCenter credentials. You can use either the hostname or IP address. If this is the protected SRM server, populate with the credentials of the protected vCenter server. If this is the recovery SRM server, populate with the recovery vCenter server credentials.

Step 5: Accept the Security Warning.


Step 6: In this lab exercise, we will not be using signed certificates; we will use an automatically generated self-signed certificate.

Step 7: Populate the Organization and Unit fields – these will be used for the creation of the certificate. You can use VMware/GSS respectively.

Step 8: Populate the SRM extension information. This will be saved in the extension.xml file on the SRM server. For the lab, the only additional piece of information required is the Administrator E-mail.


Step 9: Populate the database information. Choose the ODBC DSN that you created earlier. Populate the database credentials.

Step 10: Click Install to complete the SRM 4.0 installation.

Step 11: Ensure that the installation completes without errors. If the installation completes without errors, click on the Finish button.

Step 12: Now repeat the installation process on the other SRM server.

Step 13: When completed, launch the Services application from your desktop and verify that the VMware vCenter Site Recovery Manager Service is running on both the local and remote servers.


Part 2: Install the EMC Celerra SRA

With Site Recovery Manager successfully installed on both the protected and recovery sites, it is now time to install the Storage Replication Adapter (SRA). In this example I will be installing the EMC Celerra Storage Replication Adapter. Note that EMC supplies one Celerra SRA for both iSCSI and NAS replications. Also note that I was using a beta version of the software; your version should be a supported GA version.

This install is very much a point-and-click exercise, so we can run through it quickly. The install is done on both the protected and recovery SRM servers.

Step 1: Double click on the install exe/msi. At the Welcome screen, click Next.

Step 2: Accept the EULA


Step 3: Click Install

Step 4: View README.txt if you wish; otherwise click the Finish button.

Step 5: Navigate to the folder C:\Program Files\VMware\VMware vCenter Site Recovery Manager\scripts\SAN\celerra to verify the installation of the EMC Celerra SRA:

Step 6: After installing SRM & the SRA, it is good practice to reboot the VM (this will restart the VMware vCenter Site Recovery Manager Service).


Part 3: Install the SRM plugin

Step 1: Open a Remote Desktop Connection to your vCenter server, launch your vSphere client locally and connect to your vCenter server. You should notice that there are no icons in the Solutions and Applications area.

Step 2: Double click the SRM plugin install exe/msi file. (Another way to install the SRM plugin is via the vSphere client: click on "Plug-ins", then in the Plug-in Manager select the SRM plugin and start the installation.) At the Welcome screen for the SRM plugin, click Next.

Step 3: Accept the EULA:


Step 4: Click Install

Step 5: Click Finish when the install completes:

Step 6: Log out of your vSphere client and log back in. The plugin should now be visible in the Solutions and Applications section.

Repeat the installation of the plugin on vCenter at the other site.


Part 4: Pair the Protected and Recovery SRM sites

With the infrastructure now in place, we can start communication between the local and remote (protected and recovery) SRM sites.

Step 1: On the protected site vCenter, click on the Site Recovery icon and accept all security warnings displayed:

Step 2: At the summary screen, click on Configure for the Connection:


Step 3: Enter the IP address of the remote/recovery vCenter server. Click Next.

Step 4: Accept any certificate warnings.

Step 5: Populate the remote vCenter server credentials:


Step 6: Accept any certificate warnings.

Step 7: Verify that Reciprocity between the local SRM & vCenter servers and remote SRM & vCenter servers is established.

Step 8: When reciprocity is established, you will be required to supply login credentials for both the local and remote vCenter servers once again, as per the following pop-up:

Step 9: Accept any security warnings associated with this login:


Step 10: Click on the Finish button on the Complete Connections window.

Step 11: The sites should now be connected.

Step 12: Log onto the Recovery Site vCenter and verify that the pairing is in place there too. Ignore any security warnings and populate the login credentials for the remote vCenter server.


Part 5: Configure Inventory Mappings.

Step 1: Back on the protected vCenter, Site Recovery view, click on the Configure link next to Inventory Mappings.

Step 2: For each object (Host, Data Center, and Resource Pool) on the protected site, click on the appropriate component on the recovery site. This example shows resource pools; your environment may differ:

This now completes the SRM installation. The remaining tasks such as configuring array managers, creating protection groups and creating recovery plans will be done in a later lab.


Test Failover

Part 1: Array Managers Configuration

Step 1: On the protected site vCenter, go to SRM and select the Site Recovery Summary tab:

Step 2: Click the Configure link against Array Managers. This will open the Protected Site Array Managers window. Note that the state is currently Not Configured.

Step 3: Click on the Add button. If you get the popup:

Restart the VMware vCenter Site Recovery Manager Service on your SRM server.


Step 4: Populate the Display Name, select the correct Manager Type for Celerra, and insert the Control Station IP Address for the Protected Array as well as the username and password (nasadmin/nasadmin) for the array.

Step 5: Click Connect – the Array ID and Model fields should populate if the connection to the array is successful. Ensure the Array ID box is selected.


Step 6: Once the protected Celerra array appears in the list of Protection Arrays, click OK. The Replicated Array Pairs part of the window should now populate, and list the Peer Array as well as the replicated Device Count. In your lab environment, the Device Count should be 1, but this environment shows 4.

Step 7: Click Next to discover the Recovery side array. At the Recovery Site Array Managers window, click Add.

Step 8: Populate as per step 2 - 6, but this time for the Recovery side array. Click Connect. Once the recovery Celerra array appears in the list of Protection Arrays, this verifies that the Array Discovery task has completed successfully. Click OK. Again ensure that the Array ID box is selected.


Step 9: Note the ‘green’ tick mark in the replicated Array Pair. This indicates that so far everything is discovered correctly. Note again that this example is showing there are 4 replicated file systems (but you may only see 1).

Step 10: Click Next to proceed to the Replicated LUN discovery task. If the Array Managers screen returns a device for the arrays that you have populated, then the Discover Replicated Datastores task has succeeded. If it has not returned a device, it could be that the file system has not replicated within the array, that the share is not mounted on your ESX, or that the protected share does not have a running VM.

At the review screen only one file system should be visible for your SRM deployment, as only one file system from the Celerra is mounted to your protected ESX:

These steps are not necessary on the recovery side; the array manager is only configured on the protected side.


Part 2: Create Protection Group & Recovery Plan

Estimated time to complete: 5 minutes

Step 1: On the protected site, click on Protection Groups:

Step 2: Click on the Create Protection Groups link, then when the Create Protection Group Window opens, enter the name of your Protection Group. In this case, I have called it PG-Celerra-NAS:

Step 3: Select the Datastore Group that you wish to protect. This will be the datastore group which contains the NFS share that is being replicated, and the Virtual Machine. The list of virtual machines in the datastore group will appear below:


Step 4: Decide where to hold your virtual machine information (shadow VM) on the recovery site:

Step 5: Confirm successful creation of the Protection Group:

Step 6: Move to the vCenter server on the recovery site. On the recovery side, create a Recovery Plan by clicking on the Create Recovery Plan link:


Step 7: Give the Recovery Plan a name. I called it RP_Celerra_NAS:

Step 8: Choose a Protection Group to use with this Recovery Plan. We chose the Protection Group that we created on the protected side a few steps ago:

Step 9: Set the timeouts for the virtual machines during this failover. To speed up the process, reduce the VM Response Times from 600 to 60.

Note: While this is safe to do in our labs, reducing the time-out on a protected Virtual Machine in production could cause the wait for OS heartbeat step to fail, which in turn causes the recovery plan to fail.


Step 10: Decide which network the VMs should come up on during a test failover. By default, this is set to Auto, which means that the VMs come up in an isolated 'bubble' network during a test failover.

Step 11: Decide whether you want to suspend any Virtual Machines during failover. In this case we do not, so click Finish.

Step 12: The Recovery Plan is now created.


Part 3: Do a Test Failover

Estimated time to complete: 5 minutes

Step 1: Now that you have a Protection Group and a Recovery Plan in place, you can go ahead and do a test failover. On the recovery side, first select your recovery plan and then the Recovery Steps tab:

Step 2: Click on Test.

Step 3: If everything is working correctly, the recovery steps will eventually look like this:


Step 4: Navigate back to Home > Hosts & Clusters, and you should observe that the protected VM is now running on the recovery side. You should also observe that the VM's network is part of the 'bubble' network and that the VM's datastore is a snapshot of the original LUN on the protected side:

Step 5: Take a look at the datastore on the recovery site:


Step 6: Take a look at the network on the recovery site:

Step 7: When you are happy that the test failover completed successfully, click the Continue button on the recovery plan to finish the test failover.


Troubleshooting Celerra

The following are commands which can be run on the Celerra to assist in troubleshooting SRM issues. You can try these on the protected and recovery Celerra.

As with many other arrays which work with SRM, after the first full synchronization between the local and remote sites, the Celerra replication uses snapshot technology to keep the local and remote site in sync. When we do a test failover with SRM, the SRA takes the most recent snapshot copy, makes a clone of it and presents it to the outside world for test. Using this method, the latest copy of the replication is presented, but the actual replication taking place between the local and remote sites is unaffected.
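The SRA log in Part 6 records the exact calls that implement this. Condensed down, and using the SRM_FS file system from this guide, the test failover sequence on the recovery Celerra is roughly equivalent to the following sketch (the SRA runs these steps for you; you never need to type them):

[nasadmin@celerra_sim_b ~]$ fs_ckpt SRM_FS -name SRM_FS_SRM_TEST_FAILOVER -Create -readonly y
[nasadmin@celerra_sim_b ~]$ fs_ckpt SRM_FS_SRM_TEST_FAILOVER -name SRM_FS_SRM_TEST_FAILOVER_WRITEABLE -Create -readonly n
[nasadmin@celerra_sim_b ~]$ server_export server_2 -Protocol nfs -o anon=0,rw=10.21.68.73 /SRM_FS_SRM_TEST_FAILOVER_WRITEABLE

In other words, a read-only checkpoint is taken of the replica, a writeable checkpoint is layered on top of it, and the writeable checkpoint is exported over NFS to the recovery ESX host (10.21.68.73 in this lab).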

I have included some sample command outputs for your information. These are the commands one would use to examine the state of the Celerra before, during, and after a test or real failover:

Part 1: Recovery Celerra State (before test Failover)

Display the file systems on the Celerra (note the SRM_FS file systems in particular):

[nasadmin@celerra_sim_b ~]$ nas_fs -list
id    inuse type acl volume  name                 server
1       n    1    0    20    root_fs_1
2       y    1    0    22    root_fs_2            1
3       n    1    0    24    root_fs_3
4       n    1    0    26    root_fs_4
5       n    1    0    28    root_fs_5
6       n    1    0    30    root_fs_6
7       n    1    0    32    root_fs_7
8       n    1    0    34    root_fs_8
9       n    1    0    36    root_fs_9
10      n    1    0    38    root_fs_10
11      n    1    0    40    root_fs_11
12      n    1    0    42    root_fs_12
13      n    1    0    44    root_fs_13
14      n    1    0    46    root_fs_14
15      n    1    0    48    root_fs_15
16      y    1    0    50    root_fs_common       1
17      n    5    0    83    root_fs_ufslog
18      n    5    0    86    root_panic_reserve
19      n    5    0    87    root_fs_d3
20      n    5    0    88    root_fs_d4
21      n    5    0    89    root_fs_d5
22      n    5    0    90    root_fs_d6
23      y    1    0    96    SRM_FS               1
33      y    1    0   106    SRM_FS2              1

Examine a particular volume to see which disks back it:

[nasadmin@celerra_sim_b ~]$ nas_volume -info v96
id          = 96
name        = v96
acl         = 0
in_use      = True
type        = meta
volume_set  = s55
disks       = d16,d13,d12,d7
clnt_filesys= SRM_FS


Display the mounted file systems from a particular data mover:

[nasadmin@celerra_sim_b ~]$ server_df server_2
server_2 :
Filesystem               kbytes    used    avail  capacity Mounted on
root_rep_ckpt_33_1463_2 2015984  400928  1615056    20%    /root_rep_ckpt_33_1463_2
root_rep_ckpt_33_1463_1 2015984  400928  1615056    20%    /root_rep_ckpt_33_1463_1
SRM_FS2                 2015984  400928  1615056    20%    /SRM_FS2
root_rep_ckpt_23_1299_2 2015984  400904  1615080    20%    /root_rep_ckpt_23_1299_2
root_rep_ckpt_23_1299_1 2015984  400904  1615080    20%    /root_rep_ckpt_23_1299_1
SRM_FS                  2015984  400904  1615080    20%    /SRM_FS
root_fs_common            15368    5280    10088    34%    /.etc_common
root_fs_2                258128    6208   251920     2%    /

Display the exported (NFS) file systems from a particular data mover:

[nasadmin@celerra_sim_b ~]$ server_export server_2
server_2 :
export "/" anon=0 access=128.221.252.100:128.221.253.100:128.221.252.101:128.221.253.101

[nasadmin@celerra_sim_b ~]$

You can now re-run a test failover against your recovery plan and examine the Celerra to see the changes occurring.
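One straightforward way to see exactly what changes is to capture the state of the data mover before the test and diff it afterwards, for example (a simple sketch using standard Linux tools on the Control Station):

[nasadmin@celerra_sim_b ~]$ server_df server_2 > /tmp/df_before.txt
[nasadmin@celerra_sim_b ~]$ server_export server_2 > /tmp/exports_before.txt
(run the test failover from SRM at this point)
[nasadmin@celerra_sim_b ~]$ server_df server_2 | diff /tmp/df_before.txt -
[nasadmin@celerra_sim_b ~]$ server_export server_2 | diff /tmp/exports_before.txt -

The lines flagged by diff should correspond to the new *_SRM_TEST_FAILOVER* file systems and exports shown below.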

Part 2: Recovery Celerra State (after test Failover)

Display the file systems on the Celerra (note the new SRM_FS file systems in particular). These are clones of the latest snapshot copy which will now be presented to the ESX for mounting:

[nasadmin@celerra_sim_b ~]$ nas_fs -list
id    inuse type acl volume  name                 server
1       n    1    0    20    root_fs_1
2       y    1    0    22    root_fs_2            1
3       n    1    0    24    root_fs_3
4       n    1    0    26    root_fs_4
5       n    1    0    28    root_fs_5
6       n    1    0    30    root_fs_6
7       n    1    0    32    root_fs_7
8       n    1    0    34    root_fs_8
9       n    1    0    36    root_fs_9
10      n    1    0    38    root_fs_10
11      n    1    0    40    root_fs_11
12      n    1    0    42    root_fs_12
13      n    1    0    44    root_fs_13
14      n    1    0    46    root_fs_14
15      n    1    0    48    root_fs_15
16      y    1    0    50    root_fs_common       1
17      n    5    0    83    root_fs_ufslog
18      n    5    0    86    root_panic_reserve
19      n    5    0    87    root_fs_d3
20      n    5    0    88    root_fs_d4
21      n    5    0    89    root_fs_d5
22      n    5    0    90    root_fs_d6
23      y    1    0    96    SRM_FS               1
33      y    1    0   106    SRM_FS2              1
40      y    7    0   112    SRM_FS2_SRM_TEST_FA  1
41      y   15    0   112    SRM_FS2_SRM_TEST_FA  1
42      y    7    0   104    SRM_FS_SRM_TEST_FAI  1
43      y   15    0   104    SRM_FS_SRM_TEST_FAI  1


Examine a particular file system to see additional details:

[nasadmin@celerra_sim_b ~]$ nas_fs -info SRM_FS2_SRM_TEST_FAILOVER
id        = 40
name      = SRM_FS2_SRM_TEST_FAILOVER
acl       = 0
in_use    = True
type      = ckpt
worm      = off
volume    = vp112
pool      = clar_r5_performance
member_of =
rw_servers=
ro_servers= server_2
rw_vdms   =
ro_vdms   =
checkpt_of= SRM_FS2 Thu Aug 6 12:13:55 EDT 2009
deduplication = Off
ckpts     = SRM_FS2_SRM_TEST_FAILOVER_WRITEABLE
used      = 32%
full(mark)= 90%
stor_devs = BB005056BC116F-0018,BB005056BC116F-0017,BB005056BC116F-0014,BB005056BC116F-0011
disks     = d15,d14,d11,d8
 disk=d15 stor_dev=BB005056BC116F-0018 addr=c0t1l8 server=server_2
 disk=d15 stor_dev=BB005056BC116F-0018 addr=c16t1l8 server=server_2
 disk=d14 stor_dev=BB005056BC116F-0017 addr=c0t1l7 server=server_2
 disk=d14 stor_dev=BB005056BC116F-0017 addr=c16t1l7 server=server_2
..

[nasadmin@celerra_sim_b ~]$ nas_fs -info SRM_FS2_SRM_TEST_FAILOVER_WRITEABLE
id        = 41
name      = SRM_FS2_SRM_TEST_FAILOVER_WRITEABLE
acl       = 0
in_use    = True
type      = wckpt
worm      = off
volume    = vp112
pool      = clar_r5_performance
member_of =
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
checkpt_of= SRM_FS2
baseline_ckpt = SRM_FS2_SRM_TEST_FAILOVER Thu Aug 6 12:13:55 EDT 2009
deduplication = Off
used      = 32%
full(mark)= 90%
stor_devs = BB005056BC116F-0018,BB005056BC116F-0017,BB005056BC116F-0014,BB005056BC116F-0011
disks     = d15,d14,d11,d8
 disk=d15 stor_dev=BB005056BC116F-0018 addr=c0t1l8 server=server_2
 disk=d15 stor_dev=BB005056BC116F-0018 addr=c16t1l8 server=server_2
 disk=d14 stor_dev=BB005056BC116F-0017 addr=c0t1l7 server=server_2
 disk=d14 stor_dev=BB005056BC116F-0017 addr=c16t1l7 server=server_2
..

[nasadmin@celerra_sim_b ~]$ nas_volume -info vp112
id          = 112
name        = vp112
acl         = 0
in_use      = True
type        = pool
chunk_size  = 128
volume_set  = v111
disks       = d15,d14,d11,d8
clnt_filesys= vpfs37
[nasadmin@celerra_sim_b ~]$


Display the new mounted file systems for a particular data mover:

[nasadmin@celerra_sim_b ~]$ server_df server_2
server_2 :
Filesystem                           kbytes    used    avail  capacity Mounted on
SRM_FS_SRM_TEST_FAILOVER_WRITEABLE  2015984  400904  1615080    20%    /SRM_FS_SRM_TEST_FAILOVER_WRITEABLE
SRM_FS_SRM_TEST_FAILOVER            2015984  400904  1615080    20%    /SRM_FS_SRM_TEST_FAILOVER
SRM_FS2_SRM_TEST_FAILOVER_WRITEABLE 2015984  400928  1615056    20%    /SRM_FS2_SRM_TEST_FAILOVER_WRITEABLE
SRM_FS2_SRM_TEST_FAILOVER           2015984  400928  1615056    20%    /SRM_FS2_SRM_TEST_FAILOVER
root_rep_ckpt_33_1463_2             2015984  400928  1615056    20%    /root_rep_ckpt_33_1463_2
root_rep_ckpt_33_1463_1             2015984  400928  1615056    20%    /root_rep_ckpt_33_1463_1
SRM_FS2                             2015984  400928  1615056    20%    /SRM_FS2
root_rep_ckpt_23_1299_2             2015984  400904  1615080    20%    /root_rep_ckpt_23_1299_2
root_rep_ckpt_23_1299_1             2015984  400904  1615080    20%    /root_rep_ckpt_23_1299_1
SRM_FS                              2015984  400904  1615080    20%    /SRM_FS
root_fs_common                        15368    5280    10088    34%    /.etc_common
root_fs_2                            258128    6208   251920     2%    /

Display the new file system (NFS) mounts:

[nasadmin@celerra_sim_b ~]$ server_mountpoint server_2 -list
server_2 :
/.etc_common
/SRM_FS
/root_rep_ckpt_23_1299_1
/root_rep_ckpt_23_1299_2
/SRM_FS_SRM_TEST_FAILOVER
/SRM_FS_SRM_TEST_FAILOVER_WRITEABLE
/SRM_FS2
/root_rep_ckpt_33_1463_1
/root_rep_ckpt_33_1463_2
/SRM_FS2_SRM_TEST_FAILOVER
/SRM_FS2_SRM_TEST_FAILOVER_WRITEABLE
[nasadmin@celerra_sim_b ~]$

Display the new file system (NFS) exports:

[nasadmin@celerra_sim_b ~]$ server_export server_2
server_2 :
export "/SRM_FS_SRM_TEST_FAILOVER_WRITEABLE" anon=0 rw=10.21.68.73
export "/SRM_FS2_SRM_TEST_FAILOVER_WRITEABLE" anon=0 rw=10.21.68.73
export "/" anon=0 access=128.221.252.100:128.221.253.100:128.221.252.101:128.221.253.101


[nasadmin@celerra_sim_b ~]$ nas_replicate -info -all
ID = 96_BB005056BC23F5_0000_96_BB005056BC116F_0000
Name = srm_replic
Source Status = OK
Network Status = OK
Destination Status = OK
Last Sync Time = Thu Aug 06 11:57:20 EDT 2009
Type = filesystem
Celerra Network Server = celerra_sim_a
Dart Interconnect = srm_inter
Peer Dart Interconnect = srm_inter
Replication Role = destination
Source Filesystem = SRM_FS
Source Data Mover = server_2
Source Interface = 10.21.65.212
Source Control Port = 0
Source Current Data Port = 0
Destination Filesystem = SRM_FS
Destination Data Mover = server_2
Destination Interface = 10.21.65.214
Destination Control Port = 5085
Destination Data Port = 8888
Max Out of Sync Time (minutes) = 10
Next Transfer Size (KB) = 0
Current Transfer Size (KB) = 0
Current Transfer Remain (KB) = 0
Estimated Completion Time =
Current Transfer is Full Copy = No
Current Transfer Rate (KB/s) = 0
Current Read Rate (KB/s) = 0
Current Write Rate (KB/s) = 0
Previous Transfer Rate (KB/s) = 63
Previous Read Rate (KB/s) = 911
Previous Write Rate (KB/s) = 78
Average Transfer Rate (KB/s) = 166
Average Read Rate (KB/s) = 722
Average Write Rate (KB/s) = 85

ID = 106_BB005056BC23F5_0000_106_BB005056BC116F_0000
Name = srm_rep_fs2
Source Status = OK
Network Status = OK
Destination Status = OK
Last Sync Time = Thu Aug 06 12:05:40 EDT 2009
Type = filesystem
Celerra Network Server = celerra_sim_a
Dart Interconnect = srm_inter
Peer Dart Interconnect = srm_inter
Replication Role = destination
Source Filesystem = SRM_FS2
Source Data Mover = server_2
Source Interface = 10.21.65.212
Source Control Port = 0
Source Current Data Port = 0
Destination Filesystem = SRM_FS2
Destination Data Mover = server_2
Destination Interface = 10.21.65.214
Destination Control Port = 5085
Destination Data Port = 8888
Max Out of Sync Time (minutes) = 10
Next Transfer Size (KB) = 0
Current Transfer Size (KB) = 0
Current Transfer Remain (KB) = 0
Estimated Completion Time =
Current Transfer is Full Copy = No
Current Transfer Rate (KB/s) = 0
Current Read Rate (KB/s) = 0
Current Write Rate (KB/s) = 0
Previous Transfer Rate (KB/s) = 64
Previous Read Rate (KB/s) = 862
Previous Write Rate (KB/s) = 77
Average Transfer Rate (KB/s) = 743
Average Read Rate (KB/s) = 392
Average Write Rate (KB/s) = 83

[nasadmin@celerra_sim_b ~]$


Part 3: Recovery data mover logs from a test Failover event

The server_log command can be used to dump log messages from a Celerra Data Mover.
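The unfiltered dump shown below is verbose. Since the Control Station is a Linux host, the output can be narrowed with standard shell tools, for example (a sketch, not part of the original procedure):

[nasadmin@celerra_sim_b ~]$ server_log server_2 | tail -200
[nasadmin@celerra_sim_b ~]$ server_log server_2 | grep -i -E "SRM_TEST_FAILOVER|serverMount|Export"

The first command limits the output to the most recent entries; the second pulls out the checkpoint mount and NFS export messages generated by a test failover.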

[nasadmin@celerra_sim_b ~]$ server_log server_22009-08-06 12:05:14: CMD: 6: CmdReplicatev2List::initialize taskId:BB005056BC116F0000_0_1482:42949687782009-08-06 12:05:30: ADMIN: 6: Command succeeded: volume disk 15 num_paths=2 c0t1l8 c16t1l8 size=2464 disk_id=15 cel_id=BB005056BC116F00002009-08-06 12:05:30: ADMIN: 6: Command succeeded: volume disk 14 num_paths=2 c0t1l7 c16t1l7 size=2464 disk_id=14 cel_id=BB005056BC116F00002009-08-06 12:05:30: ADMIN: 6: Command succeeded: volume disk 11 num_paths=2 c0t1l4 c16t1l4 size=2464 disk_id=11 cel_id=BB005056BC116F00002009-08-06 12:05:31: ADMIN: 6: Command succeeded: volume disk 8 num_paths=2 c0t1l1 c16t1l1 size=2464 disk_id=8 cel_id=BB005056BC116F00002009-08-06 12:05:31: ADMIN: 6: Command succeeded: volume stripe 101 64 4 15 14 11 82009-08-06 12:05:31: ADMIN: 6: Command succeeded: volume slice 110 4096000 4096000 1012009-08-06 12:05:31: ADMIN: 6: Command succeeded: volume hyper 111 1 1102009-08-06 12:05:31: ADMIN: 6: Command succeeded: volume hyper 112 1 1112009-08-06 12:05:31: VRPL: 6: volpool is running on volume 112 (refCnt:4)2009-08-06 12:05:31: ADMIN: 6: Command succeeded: volpool start vol=112 chunksize=1282009-08-06 12:05:31: SVFS: 6: freeze() requested on fsid:332009-08-06 12:05:31: CFS: 6: fs 0x21 type = dhfs being unmounted. Waiting for quiesce ...2009-08-06 12:05:31: CFS: 6: fs 0x21 type = dhfs unmounted2009-08-06 12:05:31: SVFS: 6: freeze done on fsid:332009-08-06 12:05:31: SVFS: 6: Cascaded Delete...2009-08-06 12:05:31: SVFS: 6: D40106_0: createBlockMap PBM root=0 keys=0 h=0 nc=02009-08-06 12:05:31: VRPL: 6: 112: Allocating chunk:2 Add:525312 Chunks:152009-08-06 12:05:32: UFS: 6: Volume name:1062009-08-06 12:05:32: UFS: 6: starting gid map file processing.2009-08-06 12:05:32: UFS: 6: gid map file processing is completed.2009-08-06 12:05:32: ADMIN: 6: SnapSure command build ID 40 SavVol 112 FsVol 106 HWM NULL succeeded2009-08-06 12:05:37: UFS: 6: Volume name:Sh112402009-08-06 12:05:37: UFS: 6: starting gid map file processing.2009-08-06 12:05:37: UFS: 6: gid map file processing is completed.2009-08-06 12:05:37: ADMIN: 6: Command succeeded : serverMount ckpt ro /SRM_FS2_SRM_TEST_FAILOVER 112=40 ro2009-08-06 12:05:55: ADMIN: 6: Command succeeded: volume disk 15 num_paths=2 c0t1l8 c16t1l8 size=2464 disk_id=15 cel_id=BB005056BC116F00002009-08-06 12:05:55: ADMIN: 6: Command succeeded: volume disk 14 num_paths=2 c0t1l7 c16t1l7 size=2464 disk_id=14 cel_id=BB005056BC116F00002009-08-06 12:05:55: ADMIN: 6: Command succeeded: volume disk 11 num_paths=2 c0t1l4 c16t1l4 size=2464 disk_id=11 cel_id=BB005056BC116F00002009-08-06 12:05:55: ADMIN: 6: Command succeeded: volume disk 8 num_paths=2 c0t1l1 c16t1l1 size=2464 disk_id=8 cel_id=BB005056BC116F00002009-08-06 12:05:55: ADMIN: 6: Command succeeded: volume stripe 101 64 4 15 14 11 82009-08-06 12:05:55: ADMIN: 6: Command succeeded: volume slice 110 4096000 4096000 1012009-08-06 12:05:55: ADMIN: 6: Command succeeded: volume hyper 111 1 1102009-08-06 12:05:55: ADMIN: 6: Command succeeded: volume hyper 112 1 1112009-08-06 12:05:55: VRPL: 6: volpool is running on volume 112 (refCnt:5)2009-08-06 12:05:55: ADMIN: 6: Command succeeded: volpool start vol=112 chunksize=1282009-08-06 12:05:55: SVFS: 6: freeze() requested on fsid:332009-08-06 12:05:56: CFS: 6: fs 0x21 type = dhfs being unmounted. 
Waiting for quiesce ...2009-08-06 12:05:56: CFS: 6: fs 0x21 type = dhfs unmounted2009-08-06 12:05:56: SVFS: 6: freeze done on fsid:332009-08-06 12:05:56: SVFS: 6: Cascaded Delete...2009-08-06 12:05:56: SVFS: 6: D41106_0: createBlockMap PBM root=0 keys=0 h=0 nc=02009-08-06 12:05:56: VRPL: 6: 112: Allocating chunk:3 Add:787456 Chunks:152009-08-06 12:05:56: UFS: 6: Volume name:1062009-08-06 12:05:56: UFS: 6: starting gid map file processing.2009-08-06 12:05:57: UFS: 6: gid map file processing is completed.2009-08-06 12:05:57: ADMIN: 6: SnapSure command build ID 41 SavVol 112 FsVol 106 HWM NULL SrcCkpt 40 succeeded2009-08-06 12:06:01: UFS: 3: Warning: failed to make fmp compatible disk2009-08-06 12:06:02: UFS: 6: Volume name:Sh112412009-08-06 12:06:02: UFS: 6: starting gid map file processing.2009-08-06 12:06:02: UFS: 6: gid map file processing is completed.


2009-08-06 12:06:02: UFS: 6: initFS: initialize acl cache on fs 412009-08-06 12:06:02: UFS: 6: Filesystem 41 acl operation fall back to uncached version2009-08-06 12:06:02: UFS: 6: FS: /SRM_FS2_SRM_TEST_FAILOVER_WRITEABLE, Quota limits will be checked on block usage, with block size of 8192 bytes2009-08-06 12:06:02: ADMIN: 6: Command succeeded : serverMount ckpt rw /SRM_FS2_SRM_TEST_FAILOVER_WRITEABLE 112=41 rw2009-08-06 12:06:20: CMD: 6: CmdReplicatev2List::initialize taskId:BB005056BC116F0000_0_1488:42949687842009-08-06 12:06:37: ADMIN: 6: Command succeeded: volume disk 15 num_paths=2 c0t1l8 c16t1l8 size=2464 disk_id=15 cel_id=BB005056BC116F00002009-08-06 12:06:37: ADMIN: 6: Command succeeded: volume disk 14 num_paths=2 c0t1l7 c16t1l7 size=2464 disk_id=14 cel_id=BB005056BC116F00002009-08-06 12:06:37: ADMIN: 6: Command succeeded: volume disk 11 num_paths=2 c0t1l4 c16t1l4 size=2464 disk_id=11 cel_id=BB005056BC116F00002009-08-06 12:06:38: ADMIN: 6: Command succeeded: volume disk 8 num_paths=2 c0t1l1 c16t1l1 size=2464 disk_id=8 cel_id=BB005056BC116F00002009-08-06 12:06:38: ADMIN: 6: Command succeeded: volume stripe 101 64 4 15 14 11 82009-08-06 12:06:38: ADMIN: 6: Command succeeded: volume slice 102 0 4096000 1012009-08-06 12:06:38: ADMIN: 6: Command succeeded: volume hyper 103 1 1022009-08-06 12:06:38: ADMIN: 6: Command succeeded: volume hyper 104 1 1032009-08-06 12:06:38: VRPL: 6: volpool is running on volume 104 (refCnt:4)2009-08-06 12:06:38: ADMIN: 6: Command succeeded: volpool start vol=104 chunksize=1282009-08-06 12:06:38: SVFS: 6: freeze() requested on fsid:232009-08-06 12:06:38: CFS: 6: fs 0x17 type = dhfs being unmounted. Waiting for quiesce ...2009-08-06 12:06:38: CFS: 6: fs 0x17 type = dhfs unmounted2009-08-06 12:06:38: SVFS: 6: freeze done on fsid:232009-08-06 12:06:38: SVFS: 6: Cascaded Delete...2009-08-06 12:06:38: SVFS: 6: D4296_0: createBlockMap PBM root=0 keys=0 h=0 nc=02009-08-06 12:06:38: VRPL: 6: 104: Allocating chunk:2 Add:525312 Chunks:152009-08-06 12:06:39: UFS: 6: Volume name:962009-08-06 12:06:39: UFS: 6: starting gid map file processing.2009-08-06 12:06:39: UFS: 6: gid map file processing is completed.2009-08-06 12:06:39: ADMIN: 6: SnapSure command build ID 42 SavVol 104 FsVol 96 HWM NULL succeeded2009-08-06 12:06:44: UFS: 6: Volume name:Sh104422009-08-06 12:06:44: UFS: 6: starting gid map file processing.2009-08-06 12:06:45: UFS: 6: gid map file processing is completed.2009-08-06 12:06:45: ADMIN: 6: Command succeeded : serverMount ckpt ro /SRM_FS_SRM_TEST_FAILOVER 104=42 ro2009-08-06 12:07:05: ADMIN: 6: Command succeeded: volume disk 15 num_paths=2 c0t1l8 c16t1l8 size=2464 disk_id=15 cel_id=BB005056BC116F00002009-08-06 12:07:05: ADMIN: 6: Command succeeded: volume disk 14 num_paths=2 c0t1l7 c16t1l7 size=2464 disk_id=14 cel_id=BB005056BC116F00002009-08-06 12:07:05: ADMIN: 6: Command succeeded: volume disk 11 num_paths=2 c0t1l4 c16t1l4 size=2464 disk_id=11 cel_id=BB005056BC116F00002009-08-06 12:07:05: ADMIN: 6: Command succeeded: volume disk 8 num_paths=2 c0t1l1 c16t1l1 size=2464 disk_id=8 cel_id=BB005056BC116F00002009-08-06 12:07:06: ADMIN: 6: Command succeeded: volume stripe 101 64 4 15 14 11 82009-08-06 12:07:06: ADMIN: 6: Command succeeded: volume slice 102 0 4096000 1012009-08-06 12:07:06: ADMIN: 6: Command succeeded: volume hyper 103 1 1022009-08-06 12:07:06: ADMIN: 6: Command succeeded: volume hyper 104 1 1032009-08-06 12:07:06: VRPL: 6: volpool is running on volume 104 (refCnt:5)2009-08-06 12:07:06: ADMIN: 6: Command succeeded: volpool start vol=104 
chunksize=1282009-08-06 12:07:06: SVFS: 6: freeze() requested on fsid:232009-08-06 12:07:06: CFS: 6: fs 0x17 type = dhfs being unmounted. Waiting for quiesce ...2009-08-06 12:07:06: CFS: 6: fs 0x17 type = dhfs unmounted2009-08-06 12:07:06: SVFS: 6: freeze done on fsid:232009-08-06 12:07:06: SVFS: 6: Cascaded Delete...2009-08-06 12:07:06: SVFS: 6: D4396_0: createBlockMap PBM root=0 keys=0 h=0 nc=02009-08-06 12:07:06: VRPL: 6: 104: Allocating chunk:3 Add:787456 Chunks:152009-08-06 12:07:06: UFS: 6: Volume name:962009-08-06 12:07:06: UFS: 6: starting gid map file processing.2009-08-06 12:07:07: UFS: 6: gid map file processing is completed.2009-08-06 12:07:07: ADMIN: 6: SnapSure command build ID 43 SavVol 104 FsVol 96 HWM NULL SrcCkpt 42 succeeded2009-08-06 12:07:10: UFS: 3: Warning: failed to make fmp compatible disk2009-08-06 12:07:10: UFS: 6: Volume name:Sh104432009-08-06 12:07:10: UFS: 6: starting gid map file processing.2009-08-06 12:07:11: UFS: 6: gid map file processing is completed.2009-08-06 12:07:11: UFS: 6: initFS: initialize acl cache on fs 432009-08-06 12:07:11: UFS: 6: Filesystem 43 acl operation fall back to uncached version2009-08-06 12:07:11: UFS: 6: FS: /SRM_FS_SRM_TEST_FAILOVER_WRITEABLE, Quota limits will be checked on block usage, with block size of 8192 bytes2009-08-06 12:07:11: ADMIN: 6: Command succeeded : serverMount ckpt rw /SRM_FS_SRM_TEST_FAILOVER_WRITEABLE 104=43 rw


2009-08-06 12:07:40: NFS: 6: Processed options:2009-08-06 12:07:41: NFS: 6: Export /SRM_FS2_SRM_TEST_FAILOVER_WRITEABLE @ a3ddf020, rootuid 02009-08-06 12:07:44: NFS: 6: Processed options:2009-08-06 12:07:44: NFS: 6: Export /SRM_FS_SRM_TEST_FAILOVER_WRITEABLE @ 9c41e020, rootuid 02009-08-06 12:11:58: CMD: 6: CmdReplicatev2List::initialize taskId:BB005056BC116F0000_0_1494:42949687902009-08-06 12:13:12: RCPD: 6: RCP_Server:: start() RCP.tcpwindow=10485762009-08-06 12:13:12: RCPD: 6: RCP_SessionManager::accept() RCP connection from 10.21.65.212:579802009-08-06 12:13:12: RCPD: 6: RCP_SessionManager::insertSession: adding session 0x941e77b02009-08-06 12:13:13: RCPD: 6: RCP_SessionManager::resetTimeOut: 0x941e77b02009-08-06 12:13:13: RCPD: 6: rcpCollector event received gp:96_BB005056BC23F5_0000_96_BB005056BC116F_00002009-08-06 12:13:13: RCPT: 6: RcpTransportReceiver::startReceiver() rcpGrpName:96_BB005056BC23F5_0000_96_BB005056BC116F_00002009-08-06 12:13:15: RCPT: 6: RcpTransportReceiver:change lwm:524288 hwm:629145 newlwm:5242882009-08-06 12:13:15: SVFS: 6: D4296_0: allocDataBlock: transition index state2009-08-06 12:13:16: RCPT: 6: RcpTransportReceiverCBDisco(): Disco on shandle=0x941e77b0 pRcpTReceiver:0xa5f579a02009-08-06 12:13:16: RCPT: 6: RcpTransportReceiver:readStreamMsg() RCP_CLOSE msg:66, rcpGrpName:96_BB005056BC23F5_0000_96_BB005056BC116F_0000 shandle:0x941e77b02009-08-06 12:13:16: REP: 6: DpCopierReceiver::stopAck issuing stop to ack2009-08-06 12:13:16: REP: 6: DpCopierReceiver::doStop issuing stop to delta2009-08-06 12:13:16: REP: 6: DpCopierReceiver::restartReceiver2009-08-06 12:13:16: RCPT: 6: RcpTransportReceiver::rcpclose() done on handle:0x941e77b02009-08-06 12:13:16: CFS: 6: fs 0x1d type = dhfs being unmounted. Waiting for quiesce ...2009-08-06 12:13:16: CFS: 6: fs 0x1d type = dhfs unmounted2009-08-06 12:13:16: SVFS: 6: freeze() requested on fsid:232009-08-06 12:13:16: CFS: 6: fs 0x17 type = dhfs being unmounted. Waiting for quiesce ...2009-08-06 12:13:16: CFS: 6: fs 0x17 type = dhfs unmounted2009-08-06 12:13:16: SVFS: 6: freeze done on fsid:232009-08-06 12:13:17: SVFS: 6: Cascaded Delete...2009-08-06 12:13:17: SVFS: 6: dpSY252: Reading hdr=1024 fsID=29 chunk=0 stable=12 prev=0x02009-08-06 12:13:17: VRPL: 6: ReleaseChunks_DBitMap, SB chunks = 42009-08-06 12:13:17: SVFS: 6: Cascaded Delete...2009-08-06 12:13:17: SVFS: 6: D2996_0: createBlockMap PBM root=0 keys=0 h=0 nc=02009-08-06 12:13:17: VRPL: 6: 104: Allocating chunk:0 Add:1024 Chunks:152009-08-06 12:13:17: UFS: 6: Volume name:962009-08-06 12:13:17: UFS: 6: starting gid map file processing.2009-08-06 12:13:18: UFS: 6: gid map file processing is completed.2009-08-06 12:13:18: UFS: 6: Volume name:Sh104292009-08-06 12:13:18: UFS: 6: starting gid map file processing.2009-08-06 12:13:18: UFS: 6: gid map file processing is completed.


Part 4: Decoding failed commands on the Celerra

The nas_message command can be used to decode error messages on the Celerra.

Ex1:

[nasadmin@celerra_sim_a ~]$ nas_replicate -create srm_rep_fs2 -source -fs SRM_FS2 -destination -fs SRM_FS2 -interconnect srm_inter -source_interface ip=10.21.65.212 -destination_interface ip=10.21.65.214
Error 13160415307: The object is not in right state for replication.

[nasadmin@celerra_sim_a ~]$ nas_message -info 13160415307
MessageID = 13160415307
BaseID    = 75
Severity  = ERROR
Component = DART
Facility  = REP
Type      = STATUS

Brief_Description = The object is not in right state for replication.

Full_Description = The destination filesystem is not read-only, or the destination vdm state is not mounted, or destination the iscsi lun is not read-only. For Copy, source file system needs to be read-only.

Recommended_Action = For file system replication, please mount the destination file system as read-only; for replications in the Switched Over state, repeat the nas_replicate -start command using the -reverse option; for vdm replication, please set state to mounted for the destination vdm; for iscsi lun replication, please modify the destination iscsi lun to read-only. For a copy operation from a file system or writeable checkpoint, mount the file system as read-only. If the file system is already read-only, be sure the destination file system is read-only. For a copy operation from a checkpoint, mount the destination as read-only.

[nasadmin@celerra_sim_a ~]$

Ex2:

[nasadmin@celerra_sim_a ~]$ nas_replicate -create srm_rep_fs2 -source -fs SRM_FS2 -destination -fs SRM_FS2 -interconnect srm_inter -source_interface ip=10.21.65.212 -destination_interface ip=10.21.65.214
Error 13160415324: The destination object (fs, vdm or iscsi lun) is not empty and a common base does not exist, or the destination object is different from the common base.

[nasadmin@celerra_sim_a ~]$ nas_message -info 13160415324
MessageID = 13160415324
BaseID    = 92
Severity  = ERROR
Component = DART
Facility  = REP
Type      = STATUS

Brief_Description = The destination object (fs, vdm or iscsi lun) is not empty and a common base does not exist, or the destination object is different from the common base.

Full_Description = This error can be returned when creating or starting a replication. The destination object exists and is different from the last common snap on the source that has been successfully transferred, or a common base does not exist. Continuing the request could result in lost data on the destination object.

Recommended_Action = For nas_copy or nas_replicate -create and -start commands specify the -overwrite_destination option if the specified destination object is correct. This will over-write the destination with the source object content. GUI based requests should select the "Discard Changes on Destination Since Last Copy" option.

[nasadmin@celerra_sim_a ~]$
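Following that recommended action, the failing command from Ex2 would simply be retried with the -overwrite_destination option appended (shown here for illustration only; be certain the destination file system really can be overwritten before you use it):

[nasadmin@celerra_sim_a ~]$ nas_replicate -create srm_rep_fs2 -source -fs SRM_FS2 -destination -fs SRM_FS2 -interconnect srm_inter -source_interface ip=10.21.65.212 -destination_interface ip=10.21.65.214 -overwrite_destination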


Part 5: SRM Logs

The SRM logs on the SRM server are usually the starting point for troubleshooting an SRM issue. The following is an excerpt taken during a test failover:

[2009-08-06 11:53:17.171 00384 trivia 'SecondarySanProvider'] testFailover's output:[#2] filebase: command, dir: C:/Program Files/VMware/VMware vCenter Site Recovery Manager/scripts/SAN/celerra/, fileExt: .pl 1 file(s) copied.[#2] [2009-08-06 11:53:17.171 00384 info 'SecondarySanProvider'] testFailover exited with exit code 0[2009-08-06 11:53:17.171 00384 trivia 'SecondarySanProvider'] 'testFailover' returned <?xml version="1.0" encoding="UTF-8" standalone="yes"?>[#2] <Response>[#2] <ReturnCode>0</ReturnCode>[#2] <InitiatorGroupList>[#2] <InitiatorGroup id="iScsi-fc-all">[#2] <Initiator type="FC" id="50:01:43:80:02:51:dc:a0"/>[#2] <Initiator type="FC" id="50:01:43:80:02:51:dc:a2"/>[#2] </InitiatorGroup>[#2] </InitiatorGroupList>[#2] <ReplicaLunList>[#2] <ReplicaLun nfsName="/SRM_FS_SRM_TEST_FAILOVER_WRITEABLE" key="96_BB005056BC23F5_0000_96_BB005056BC116F_0000"/>[#2] </ReplicaLunList>[#2] </Response>

[2009-08-06 11:53:17.171 00384 info 'SecondarySanProvider'] Return code for testFailover: 0[2009-08-06 11:53:17.171 00384 trivia 'SecondarySanProvider'] 'Prepare storage for group 'shadow-group-609' for recovery' took 78.891 seconds..[2009-08-06 11:53:25.671 00384 trivia 'SecondarySanProvider'] 'Mount '10.21.65.214:/SRM_FS2_SRM_TEST_FAILOVER_WRITEABLE' as 'Celerra_NFS2' on 1 hosts' took 3.828 seconds[2009-08-06 11:53:25.890 00384 trivia 'SecondarySanProvider'] 'Retrieved 1 properties of 1 objects of type 'vim.HostSystem'' took 0.219 seconds[2009-08-06 11:53:26.327 00384 trivia 'SecondarySanProvider'] 'Retrieved 1 properties of 3 objects of type 'vim.Datastore'' took 0.219 seconds[2009-08-06 11:53:26.327 00384 info 'SecondarySanProvider'] Trying to mount '10.21.65.214:/SRM_FS_SRM_TEST_FAILOVER_WRITEABLE' as 'Celerra_NFS_PROT' on host 'host-29'[2009-08-06 11:53:29.499 00384 trivia 'SecondarySanProvider'] 'Mount '10.21.65.214:/SRM_FS_SRM_TEST_FAILOVER_WRITEABLE' as 'Celerra_NFS_PROT' on 1 hosts' took 3.828 seconds..[2009-08-06 11:53:30.671 00384 verbose 'SecondarySanProvider'] NAS volume '/SRM_FS_SRM_TEST_FAILOVER_WRITEABLE' at '10.21.65.214' is mounted to host 'host-29'[2009-08-06 11:53:30.671 00384 verbose 'SecondarySanProvider'] Adding NAS datastore 'Celerra_NFS_PROT' with MoId 'datastore-146' at host '10.21.65.214' with path '/SRM_FS_SRM_TEST_FAILOVER_WRITEABLE'


[2009-08-06 11:53:30.671 00384 verbose 'SecondarySanProvider'] NAS volume '/SRM_FS2_SRM_TEST_FAILOVER_WRITEABLE' at '10.21.65.214' is mounted to host 'host-29'[2009-08-06 11:53:30.671 00384 verbose 'SecondarySanProvider'] Adding NAS datastore 'Celerra_NFS2' with MoId 'datastore-144' at host '10.21.65.214' with path '/SRM_FS2_SRM_TEST_FAILOVER_WRITEABLE'[2009-08-06 11:53:30.671 00384 verbose 'SecondarySanProvider'] NAS volume '/mnt/repo/isos' at '10.21.64.206' is mounted to host 'host-29' [2009-08-06 11:53:30.671 00384 trivia 'SecondarySanProvider'] Matched NFS share '/SRM_FS2_SRM_TEST_FAILOVER_WRITEABLE' from host '10.21.65.214' to '106_BB005056BC23F5_0000_106_BB005056BC116F_0000'[2009-08-06 11:53:30.671 00384 trivia 'SecondarySanProvider'] Found datastore 'datastore-144' spanning 1 shadow LUNs[2009-08-06 11:53:30.671 00384 trivia 'SecondarySanProvider'] NFS share '10.21.65.214//SRM_FS_SRM_TEST_FAILOVER_WRITEABLE' doesn't match: unknown share '/SRM_FS_SRM_TEST_FAILOVER_WRITEABLE'[2009-08-06 11:53:30.671 00384 info 'SecondarySanProvider'] Found shadow datastore 'datastore-144' for datastore 'netfs://10.21.65.212//SRM_FS2/'[2009-08-06 11:53:30.671 00384 verbose 'SecondarySanProvider'] Found 1 VirtualMachines in group 'shadow-group-2257' of 1 ShadowVms

Part 6: Celerra SRA Logs

The Celerra SRA logs all the commands that it runs on the Celerra. This is a great place to see which commands returned an error, should the issue prove to be an SRA (array) related problem.
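Because the SRA log records each command verbatim, a useful technique is to copy a failing command out of the log and re-run it by hand on the Control Station as nasadmin; the error returned interactively is then easy to decode with nas_message as shown in Part 4. For example, the following query is taken directly from the log excerpt below:

[nasadmin@celerra_sim_b ~]$ /nas/bin/nas_fs -query:VolumeID==96 -format:%s -fields:Name
SRM_FS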

2009-08-06 11:51:59,249 [com.emc.celerra.sra.engine.SrmTransaction]: [Version=2.0.8] Begin SRM transaction: testFailover 2009-08-06 11:51:59,265 [com.emc.celerra.sra.engine.Failover]: Command action: start 2009-08-06 11:51:59,562 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Executing command: /nas/bin/nas_version 2009-08-06 11:52:00,234 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Command result: 5.6.43-18 2009-08-06 11:52:00,249 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Executing command: /nas/bin/nas_server -query:Type=^vdm -format:'%s`%s`%s`%s`%s`%s`%s`%s`%s|' -fields:Name,Type,Slot,StandbyServers,StandbyPolicy,I18NMode,NtpServers,ID,Model 2009-08-06 11:52:00,734 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Command result: server_2`nas`2```ASCII``1`NS-Simulator| 2009-08-06 11:52:00,734 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Executing command: /nas/bin/nas_server -query:Name==server_2 -format:%q -fields:ISCSIMaskTable -query:* -format:'%s`%s`%s|' -fields:Name,Initiator,Grant 2009-08-06 11:52:01,171 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Command result: 2009-08-06 11:52:01,171 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Executing command: /nas/bin/nas_replicate -list -id 2009-08-06 11:52:16,468 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Command result: ID Name Type Local Mover Interconnect Celerra Status96_BB005056BC23F5_0000_96_BB005056BC116F_0000 srm_replic filesystem server_2 <--srm_inter celerra_sim+ OK106_BB005056BC23F5_0000_106_BB005056BC116F_0000 srm_rep_fs2 filesystem server_2 <--srm_inter celerra_sim+ OK

2009-08-06 11:52:16,468 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Executing command: /nas/bin/nas_cel -query:IsLocal==True -format:%s -fields:CelerraID


2009-08-06 11:52:17,093 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Command result: BB005056BC116F0000 2009-08-06 11:52:17,109 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Executing command: /nas/bin/nas_fs -query:VolumeID==96 -format:%s -fields:Name 2009-08-06 11:52:17,656 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Command result: SRM_FS 2009-08-06 11:52:17,656 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Executing command: /nas/bin/nas_fs -query:Name=SRM_FS_SRM_TEST_FAILOVER_WRITEABLE -format:%s -fields:Name 2009-08-06 11:52:17,906 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Command result: 2009-08-06 11:52:17,906 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Executing command: /nas/bin/nas_fs -query:VolumeID==96 -format:%s -fields:ID 2009-08-06 11:52:18,171 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Command result: 23 2009-08-06 11:52:18,171 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Executing command: /nas/bin/fs_ckpt id=23 -name SRM_FS_SRM_TEST_FAILOVER -Create -readonly y 2009-08-06 11:52:42,562 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Command result: operation in progress (not interruptible)...id = 23name = SRM_FSacl = 0in_use = Truetype = uxfsworm = offvolume = v96pool = clar_r5_performancemember_of = root_avm_fs_group_3rw_servers=ro_servers= server_2rw_vdms =ro_vdms =auto_ext = no,virtual_provision=nodeduplication = Offckpts = root_rep_ckpt_23_1299_1,root_rep_ckpt_23_1299_2,SRM_FS_SRM_TEST_FAILOVERrep_sess = 96_BB005056BC23F5_0000_96_BB005056BC116F_0000(ckpts: root_rep_ckpt_23_1299_1, root_rep_ckpt_23_1299_2)stor_devs = BB005056BC116F-0019,BB005056BC116F-0016,BB005056BC116F-0015,BB005056BC116F-0010disks = d16,d13,d12,d7disk=d16 stor_dev=BB005056BC116F-0019 addr=c0t1l9 server=server_2disk=d16 stor_dev=BB005056BC116F-0019 addr=c16t1l9 server=server_2disk=d13 stor_dev=BB005056BC116F-0016 addr=c0t1l6 server=server_2disk=d13 stor_dev=BB005056BC116F-0016 addr=c16t1l6 server=server_2disk=d12 stor_dev=BB005056BC116F-0015 addr=c0t1l5 server=server_2disk=d12 stor_dev=BB005056BC116F-0015 addr=c16t1l5 server=server_2disk=d7 stor_dev=BB005056BC116F-0010 addr=c0t1l0 server=server_2disk=d7 stor_dev=BB005056BC116F-0010 addr=c16t1l0 server=server_2

id = 42name = SRM_FS_SRM_TEST_FAILOVERacl = 0in_use = Truetype = ckptworm = offvolume = vp104pool = clar_r5_performancemember_of =rw_servers=ro_servers= server_2rw_vdms =ro_vdms =checkpt_of= SRM_FS Thu Aug 6 12:15:02 EDT 2009deduplication = Offused = 19%full(mark)= 90%stor_devs = BB005056BC116F-0018,BB005056BC116F-0017,BB005056BC116F-0014,BB005056BC116F-0011disks = d15,d14,d11,d8disk=d15 stor_dev=BB005056BC116F-0018 addr=c0t1l8 server=server_2disk=d15 stor_dev=BB005056BC116F-0018 addr=c16t1l8 server=server_2disk=d14 stor_dev=BB005056BC116F-0017 addr=c0t1l7 server=server_2disk=d14 stor_dev=BB005056BC116F-0017 addr=c16t1l7 server=server_2disk=d11 stor_dev=BB005056BC116F-0014 addr=c0t1l4 server=server_2


disk=d11 stor_dev=BB005056BC116F-0014 addr=c16t1l4 server=server_2disk=d8 stor_dev=BB005056BC116F-0011 addr=c0t1l1 server=server_2disk=d8 stor_dev=BB005056BC116F-0011 addr=c16t1l1 server=server_2

2009-08-06 11:52:42,562 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Executing command: /nas/bin/nas_fs -query:Name==SRM_FS_SRM_TEST_FAILOVER -format:%s -fields:ID 2009-08-06 11:52:44,296 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Command result: 42 2009-08-06 11:52:44,312 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Executing command: /nas/bin/fs_ckpt id=42 -name SRM_FS_SRM_TEST_FAILOVER_WRITEABLE -Create -readonly n 2009-08-06 11:53:08,109 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Command result: operation in progress (not interruptible)...id = 42name = SRM_FS_SRM_TEST_FAILOVERacl = 0in_use = Truetype = ckptworm = offvolume = vp104pool = clar_r5_performancemember_of =rw_servers=ro_servers= server_2rw_vdms =ro_vdms =checkpt_of= SRM_FS Thu Aug 6 12:15:02 EDT 2009deduplication = Offckpts = SRM_FS_SRM_TEST_FAILOVER_WRITEABLEused = 32%full(mark)= 90%stor_devs = BB005056BC116F-0018,BB005056BC116F-0017,BB005056BC116F-0014,BB005056BC116F-0011disks = d15,d14,d11,d8disk=d15 stor_dev=BB005056BC116F-0018 addr=c0t1l8 server=server_2disk=d15 stor_dev=BB005056BC116F-0018 addr=c16t1l8 server=server_2disk=d14 stor_dev=BB005056BC116F-0017 addr=c0t1l7 server=server_2disk=d14 stor_dev=BB005056BC116F-0017 addr=c16t1l7 server=server_2disk=d11 stor_dev=BB005056BC116F-0014 addr=c0t1l4 server=server_2disk=d11 stor_dev=BB005056BC116F-0014 addr=c16t1l4 server=server_2disk=d8 stor_dev=BB005056BC116F-0011 addr=c0t1l1 server=server_2disk=d8 stor_dev=BB005056BC116F-0011 addr=c16t1l1 server=server_2

id = 43name = SRM_FS_SRM_TEST_FAILOVER_WRITEABLEacl = 0in_use = Truetype = wckptworm = offvolume = vp104pool = clar_r5_performancemember_of =rw_servers= server_2ro_servers=rw_vdms =ro_vdms =checkpt_of= SRM_FSbaseline_ckpt = SRM_FS_SRM_TEST_FAILOVER Thu Aug 6 12:15:02 EDT 2009deduplication = Offused = 32%full(mark)= 90%stor_devs = BB005056BC116F-0018,BB005056BC116F-0017,BB005056BC116F-0014,BB005056BC116F-0011disks = d15,d14,d11,d8disk=d15 stor_dev=BB005056BC116F-0018 addr=c0t1l8 server=server_2disk=d15 stor_dev=BB005056BC116F-0018 addr=c16t1l8 server=server_2disk=d14 stor_dev=BB005056BC116F-0017 addr=c0t1l7 server=server_2disk=d14 stor_dev=BB005056BC116F-0017 addr=c16t1l7 server=server_2disk=d11 stor_dev=BB005056BC116F-0014 addr=c0t1l4 server=server_2disk=d11 stor_dev=BB005056BC116F-0014 addr=c16t1l4 server=server_2disk=d8 stor_dev=BB005056BC116F-0011 addr=c0t1l1 server=server_2disk=d8 stor_dev=BB005056BC116F-0011 addr=c16t1l1 server=server_2


2009-08-06 11:53:08,109 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Executing command: /nas/bin/server_export server_2 -Protocol nfs -o anon=0,rw=10.21.68.73 /SRM_FS_SRM_TEST_FAILOVER_WRITEABLE
2009-08-06 11:53:16,484 [com.emc.celerra.sra.engine.net.SSHConnection]: 10.21.65.213 Command result: server_2 : done
2009-08-06 11:53:16,531 [com.emc.celerra.sra.engine.SrmTransaction]: End SRM transaction

That completes our verification. Return to the SRM view, and click Continue to stop the test.


Trademarks

© 2009 VMware, Inc. All rights reserved. Protected by one or more U.S. Patent Nos. 6,397,242, 6,496,847, 6,704,925, 6,711,672, 6,725,289, 6,735,601, 6,785,886, 6,789,156, 6,795,966, 6,880,022, 6,944,699, 6,961,806, 6,961,941, 7,069,413, 7,082,598, 7,089,377, 7,111,086, 7,111,145, 7,117,481, 7,149,843, 7,155,558, 7,222,221, 7,260,815, 7,260,820, 7,269,683, 7,275,136, 7,277,998, 7,277,999, 7,278,030, 7,281,102, 7,290,253, 7,356,679, 7,409,487, 7,412,492, 7,412,702, 7,424,710, 7,428,636, 7,433,951, 7,434,002, and 7,447,854; patents pending.

VMware, the VMware “boxes” logo and design, Virtual SMP, and VMotion are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions.

EMC2, EMC, Celerra, CLARiiON, Navisphere, and Powerlink are registered trademarks, and Celerra Replicator and SnapSure are trademarks, of EMC Corporation.

All other marks and names mentioned herein may be trademarks of their respective companies.

Summary

As stated in the introduction, the point of this proven practice is to take the reader through the steps of deploying replication between two EMC Celerra simulators and then configuring VMware vCenter Site Recovery Manager so that a test failover can be initiated.

The main purpose of this document is to assist in setting up such a configuration for demo, training, or proof-of-concept purposes. While it may be of some assistance when deploying a production environment, further reading is essential to determine best practices.

About the Author

Cormac Hogan is a Staff Technical Course Developer for VMware Global Support Services (GSS), based out of Cork, Ireland. He develops and delivers training for the GSS Technical Support Engineers within VMware, primarily on storage and storage-related topics.
