Configure NFS NaviExpress


Create an NFS export

This procedure explains how to create a Network File System (NFS) export on your Celerra system. The Celerra system is a multiprotocol machine that provides access to data through the NFS protocol for file sharing in network environments.

The NFS protocol enables the Celerra Network Server to assume the functions of an NFS server. NFS environments typically include:

    Native UNIX clients

    Linux clients

Windows systems configured with third-party applications that provide NFS client services

Overview ........................................................... 2
Pre-implementation tasks ........................................... 4
Implementation worksheets .......................................... 5
Connect external network cables .................................... 7
Configure storage for a Fibre Channel enabled system ............... 9
Configure the network .............................................. 19
Create a file system ............................................... 20
Delete the NFS export created during startup ....................... 23
Create NFS exports ................................................. 24
Configure hosts .................................................... 28
Configure and test standby relationships ........................... 29
Appendix ........................................................... 36


Overview

This section contains an overview of the NFS implementation procedure and the host requirements for NFS implementation.

Procedure overview

To create an NFS export, you must perform the following tasks:

    1. Verify that you have performed the pre-implementation tasks:

Create a Powerlink account.

Register your Celerra with EMC or your service provider.

Install the Navisphere Service Taskbar (NST).

Add additional disk array enclosures (DAEs) using the NST (not available for NX4).

    2. Complete the implementation worksheets.

    3. Cable additional Celerra ports to your network system.

    4. Configure unused or new disks with Navisphere Express.

5. Configure your network by creating a new interface to access the Celerra storage from a host or workstation.

    6. Create a file system using a system-defined storage pool.

    7. Delete the NFS export created during startup.

    8. Create an NFS export from the file system.

    9. Configure host access to the NFS export.

10. Configure and test standby relationships.

Host requirements for NFS

    Software

Celerra Network Server version 5.6.

For secure NFS using UNIX or Linux-based Kerberos:

Sun Enterprise Authentication Mechanism (SEAM) software or a Linux KDC running Kerberos version 5

    Note: KDCs from other UNIX systems have not been tested.

    For secure NFS using Windows-based Kerberos: Windows 2000 or Windows Server 2003 domain

  • 7/30/2019 Configure NFS NaviExpress

    3/38

    Create an NFS export 3

    Create an NFS export

    To use secure NFS, the client computer must be running:

SunOS version 5.8 or later (Solaris 10 for NFSv4)

Linux kernel 2.4 or later (2.6.12 with NFSv4 patches for NFSv4)

Hummingbird Maestro version 7 or later (EMC recommends version 8); version 9 for NFSv4

    AIX 5.3 ML3

    Note: Other clients have not been tested.

    DNS (Domain Name System)

    NTP (Network Time Protocol) server

Note: Windows environments require that you configure Celerra in the Active Directory.

Hardware

No specific hardware requirements

    Network

    No specific network requirements

    Storage

    No specific storage requirements
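Before moving on to the pre-implementation tasks, it can be useful to confirm that a client meets these software requirements and can reach the NFS services on a Data Mover. The following is a minimal check from a Linux client; the Data Mover interface address 10.1.1.100 is a hypothetical value:

# uname -r
# rpcinfo -p 10.1.1.100

The first command reports the client kernel version; the second lists the RPC services registered on the Data Mover and should include nfs and mountd entries once an interface and export have been configured.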


Pre-implementation tasks

Before you begin this NFS implementation procedure, ensure that you have completed the following tasks.

Create a Powerlink account

    You can create a Powerlink account at http://Powerlink.EMC.com.

Use this website to access additional EMC resources, including documentation, release notes, software updates, information about EMC products, licensing, and service.

Register your system with EMC

If you did not register your Celerra at the completion of the Celerra Startup Assistant, you can do so now by downloading the Registration wizard from Powerlink.

The Registration wizard can also be found on the Applications and Tools CD that was shipped with your system. Registering your Celerra ensures that EMC Customer Support has all pertinent system and site information so they can properly assist you.

Download and install the Navisphere Service Taskbar (NST)

The NST is available for download from the CLARiiON Tools page on Powerlink and on the Applications and Tools CD that was shipped with your system.

Add additional disk array enclosures

Use the NST to add new disk array enclosures (DAEs) to fully implement your Celerra (not available for NX4).


Implementation worksheets

Before you begin this implementation procedure, take a moment to fill out the following implementation worksheets with the values of the various devices you will need to create.

Create interface worksheet

The New Network Interface wizard configures individual network interfaces for the Data Movers. It can also create virtual network devices: Link Aggregation, Fail-Safe Network, or Ethernet Channel. Use Table 1 to complete the New Network Interface wizard. You will need the following information:

    Does the network use variable-length subnets? Yes No

Note: If the network uses variable-length subnets, be sure to use the correct subnet mask. Do not assume 255.255.255.0 or other common values.

    Table 1 Create interface worksheet

Worksheet columns (one row per interface):

Data Mover number
Device name or virtual device name
IP address
Netmask
Maximum Transmission Unit (MTU) (optional)
Virtual LAN (VLAN) identifier (optional)
Devices (optional)
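As an illustration only, a completed worksheet row might look like the following; every value is hypothetical and must be replaced with values from your own network:

Data Mover number: 2 (server_2)
Device name or virtual device name: cge0
IP address: 10.1.1.100
Netmask: 255.255.255.0
MTU (optional): 9000
VLAN identifier (optional): none
Devices (optional): not applicable (physical device)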


Create file system worksheet

The Create File System step creates a file system on a Data Mover. This step can be repeated as needed to create additional file systems.

    Read/Write Data Mover: server_2 server_3

    Volume Management: Automatic (recommended)

    Storage Pool for Automatic Volume Management:

    CLARiiON RAID 1 (Not available for NX4)

    CLARiiON RAID 5 Performance

    CLARiiON RAID 5 Economy

    CLARiiON RAID 1/0

    CLARiiON RAID 6

    File System Name____________________________________________

File System Size (megabytes) __________________________________

Use Default User and Group Quotas: Yes No

    Hard Limit for User Storage (megabytes) ____________________

    Soft Limit for User Storage (megabytes) _____________________

    Hard Limit for User Files (files)_____________________________

    Soft Limit for User Files (files) _____________________________

    Hard Limit for Group Storage (megabytes) __________________

    Soft Limit for Group Storage (megabytes) ___________________

    Hard Limit for Group Files (files) ___________________________

    Soft Limit for Group Files (files)____________________________

    Enforce Hard Limits: Yes No

    Grace Period for Storage (days)_____________________________

    Grace Period for Files (days) _______________________________

NFS export worksheet

    NFS export pathname (for example: /test_fs/):__________________________________________________

    IP address of client computer:______________________________

When you have completed the implementation worksheets, go to Connect external network cables on page 7.


Connect external network cables

If you have not already done so, connect the desired blade network ports to your network system.

Figure 1 shows the 4-port copper Ethernet X-blade network ports for an NX4. They are labeled cge0 through cge3.

Figure 2 on page 7 shows the 2-port copper Ethernet and 2-port optical 10 GbE X-blade network ports for an NX4. They are labeled cge0, cge1, fxg0, and fxg1.

    Figure 1 4-port copper Ethernet X-blade

    Figure 2 2-port copper Ethernet and 2-port optical 10 GbE X-blade

Any advanced configuration of the external network ports is beyond the scope of this implementation procedure. For more information about the many network configuration options the Celerra system supports, such as Ethernet channels, link aggregation, and FSNs, refer to the Configuring and Managing Celerra Networking and Configuring and Managing Celerra Network High Availability technical modules.

When you have finished connecting the external network cables, go to Configure storage for a Fibre Channel enabled system on page 9.


Configure storage for a Fibre Channel enabled system

This section details how to create additional storage for an NX4 Fibre Channel enabled storage system using Navisphere Express.

Configure storage with Navisphere Express

    Configure storage with Navisphere Express by doing the following:

1. To start Navisphere Express, open an internet browser such as Internet Explorer or Mozilla Firefox.

2. Type the IP address of a storage processor of the storage system into the internet browser address bar.

Note: This IP address is the one that you assigned when you initialized the storage system.

3. Type the user name and password to log in to Navisphere Express, as shown in Figure 3 on page 10.

Note: The default username is nasadmin and the default password is nasadmin.


    Figure 3 Navisphere Express Login screen

4. To configure unused storage, select Disk Pools in the left navigation panel from the initial screen shown in Figure 4 on page 11.


    Figure 4 Manage Virtual Disks screen

Note: If you are trying to create a new virtual disk (LUN) for Automatic Volume Management (AVM) to use in a stripe with existing virtual disks, the new virtual disk must match the size of the existing virtual disks.

Find the information on the existing virtual disks by going to the details page for each virtual disk (Manage > Virtual Disks > and then the virtual disk name). Record the MB value of the existing virtual disks and use this value as the size for any new virtual disk.
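If you prefer to check the existing sizes from the Celerra Control Station instead of from Navisphere Express, a hedged alternative is the nas_disk command, which lists each disk volume (virtual disk) that the Celerra sees along with its size in megabytes:

# nas_disk -list

Use the size reported for the existing disk volumes as the capacity of any new virtual disk intended for the same AVM stripe.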

    5. Click Create New Disk Pool, as shown in Figure 5 on page 12.


    Figure 5 Manage Disk Pools screen

Note: You should create at least two disk pools. The software assigns each disk pool that you create to an SP as follows: Disk Pool 1 to SP A, Disk Pool 2 to SP B, Disk Pool 3 to SP A, Disk Pool 4 to SP B, and so on.

All virtual disks that you create on a disk pool are automatically assigned to the same SP as the disk pool. If you create only one disk pool on the storage system, all virtual disks on the storage system are assigned to SP A and all data received or sent goes through SP A.


6. Select the RAID group type for the new disk pool, as shown in Figure 6 on page 13.

The RAID Group Type values listed should be applicable to your system.

For more information, see the NAS Support Matrix document on http://Powerlink.EMC.com.

Note: RAID 5 is recommended.

    Figure 6 Create Disk Pool screen

7. Select the disks in the Disk Processor Enclosure to include in the new disk pool, as shown in Figure 6.


    8. Click Apply.

9. Click Create a virtual disk that can be assigned to a server.

10. Select the disk pool just created, as shown in Figure 6.

11. Type the Name for the new virtual disk(s), and select the Capacity and the Number of Virtual Disks to create, as shown in Figure 7 on page 14.

Note: It is recommended that virtual disk capacity not be larger than 2 TB.

    Figure 7 Create Virtual Disks screen


12. Assign a server to the virtual disk(s) by using the Server list box, as shown in Figure 7.

Note: To send data to or receive data from a virtual disk, you must assign a server to the virtual disk.

13. Click Apply to create the virtual disk(s).

Note: The system now creates the virtual disks. This may take some time depending on the size of the virtual disks.

14. Select Virtual Disks from the left navigation panel to verify the creation of the new virtual disk(s).

15. Verify the virtual disk server assignment by looking under Assigned To on the Manage Virtual Disks page, as shown in Figure 8.


    Figure 8 Verify new virtual disk assignment

16. To make the new virtual disks (LUNs) available to the Celerra system, you must use Celerra Manager. Launch Celerra Manager by opening the following URL:

https://<hostname>

where <hostname> is the hostname or IP address of the Control Station.

17. If a security alert appears about the system's security certificate, click Yes to proceed.


18. At the login prompt, log in as user root. The default password is nasadmin.

19. If a security warning appears about the system's security certificate being issued by an untrusted source, click Yes to accept the certificate.

20. If a warning about a hostname mismatch appears, click Yes.

21. On the Celerra > Storage Systems page, click Rescan, as shown in Figure 9 on page 17.

    Figure 9 Rescan Storage System in Celerra Manager


CAUTION! Do not change the host LUN (virtual disk) identifier of the Celerra LUNs (virtual disks) after rescanning. This may cause data loss or unavailability.

22. The user virtual disks (LUNs) are now available for the Celerra system.
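If you already have an SSH session open on the Control Station, a hedged CLI alternative to the Rescan button is to have the Data Movers probe for the new SCSI devices and then list the disk volumes the Celerra now sees:

# server_devconfig ALL -create -scsi -all
# nas_disk -list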

When you have finished Configure storage for a Fibre Channel enabled system, go to Configure the network on page 19.


    Configure the network

Using Celerra Manager, you can create interfaces on devices that are not part of a virtual device. Host or workstation access to the Celerra storage is configured by creating a network interface.

Note: You cannot create a new interface for a Data Mover while the Data Mover is failed over to its standby.

In Celerra Manager, configure a new network interface and device by doing the following:

    1. Log in to Celerra Manager as root.

2. Click Celerras > <Celerra name> > Wizards.

3. Click the New Network Interface wizard to set up a new network interface. This wizard can also be used to create a new virtual device, if desired.

Note: On the Select/Create a network device page, click Create Device to create a new virtual network device. The new virtual device can be configured with one of the following high-availability features: Ethernet Channel, Link Aggregation, or Fail-Safe Network.
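For reference, an interface can also be created from the Control Station CLI. The following is a minimal sketch, assuming the hypothetical worksheet values used earlier (Data Mover server_2, device cge0, address 10.1.1.100 with a 255.255.255.0 netmask):

# server_ifconfig server_2 -create -Device cge0 -name cge0_1 -protocol IP 10.1.1.100 255.255.255.0 10.1.1.255

The last three arguments are the IP address, netmask, and broadcast address. Running server_ifconfig server_2 -all afterward lists the interfaces configured on the Data Mover.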

When you have completed Configure the network, go to Create a file system on page 20.


    Create a file system

To create a new file system, follow these steps:

1. Go to the Celerras > <Celerra name> > File Systems tab in the left navigation menu.

2. Click New at the bottom of the File Systems screen, as shown in Figure 10.

Figure 10 File Systems screen

3. Select the Storage Pool radio button to specify where the file system will be created, as shown in Figure 11 on page 21.


    Figure 11 Create new file system screen

4. Name the file system.

5. Select the system-defined storage pool from the Storage Pool drop-down menu.

Note: Based on the disks and the RAID types created in the storage system, different system-defined storage pools will appear in the storage pool list. For more information about system-defined storage pools, refer to Disk group and disk volume configurations on page 36.

6. Designate the Storage Capacity of the file system and select any other desired options.


    Other file system options are listed below:

Auto Extend Enabled: If enabled, the file system automatically extends when the high water mark is reached.

Virtual Provisioning Enabled: This option can only be used with automatic file system extension; together they let you grow the file system as needed.

File-level Retention (FLR) Capability: If enabled, the file system is persistently marked as an FLR file system until it is deleted. File systems can be enabled with FLR capability only at creation time.

7. Click Create. The new file system will now appear on the File Systems screen, as shown in Figure 12.

    Figure 12 File System screen with new file system
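The same result can be obtained from the Control Station CLI. The following is a minimal sketch, assuming a hypothetical 100 GB file system named fs01 built from the clar_r5_performance pool and mounted on server_2:

# nas_fs -name fs01 -create size=100G pool=clar_r5_performance
# server_mountpoint server_2 -create /fs01
# server_mount server_2 fs01 /fs01

As the note in Create NFS exports points out, a file system created from the CLI must be mounted on a Data Mover before it can be exported.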


    Delete the NFS export created during startup

You may have optionally created an NFS export using the Celerra Startup Assistant (CSA). If you have a minimum configuration of five or fewer disks, you can begin to use this export as a production export. If you have more than five disks, delete the NFS export created during startup as follows:

1. To delete the NFS export created during startup and make the file system unavailable to NFS users on the network:

a. Go to Celerras > <Celerra name> and click the NFS Exports tab.

    b. Select one or more exports to delete, and click Delete.

    The Confirm Delete page appears.

    c. Click OK.
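Equivalently, the startup export can be removed from the Control Station CLI. The following is a hedged sketch, assuming the export path is /fs01 on server_2:

# server_export server_2 -Protocol nfs -unexport -perm /fs01

The -perm option removes the export entry permanently rather than only until the next Data Mover reboot.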

When you have completed Delete the NFS export created during startup, go to Create NFS exports on page 24.


    Create NFS exports

    To create a new NFS export, do the following:

1. Go to Celerras > <Celerra name> and click the NFS Exports tab.

2. Click New, as shown in Figure 13.

Figure 13 NFS Exports screen

3. Select a Data Mover that manages the file system from the Choose Data Mover drop-down list on the New NFS Export page, as shown in Figure 14 on page 25.


    Figure 14 New NFS Export screen

4. Select the file system or checkpoint that contains the directory to export from the File System drop-down list.

The list displays the mount points for all file systems and checkpoints mounted on the selected Data Mover.

Note: The Path field displays the mount point of the selected file system. This entry exports the root of the file system. To export a subdirectory, add the rest of the path to the string in the field. You may also delete the contents of this box and enter a new, complete path. This path must already exist.


5. Fill out the Host Access section by defining export permissions for host access to the NFS export.

Note: The IP address with the subnet mask can be entered in dot (.) notation, slash (/) notation, or hexadecimal format. Use colons to separate multiple entries.

Read-only Export grants read-only access to all hosts with access to this export, except for hosts given explicit read/write access on this page.

The Read-only Hosts field grants read-only access to the export for the hostnames, IP addresses, netgroup names, or subnets listed in this field.

The Read/write Hosts field grants read/write access to the export for the hostnames, IP addresses, netgroup names, or subnets listed in this field.

    6. Click OK to create the export.

Note: If a file system is created using the command line interface (CLI), it will not be displayed for an NFS export until it is mounted on a Data Mover.

The new NFS export will now appear on the NFS Exports screen, as shown in Figure 15 on page 27.


    Figure 15 NFS export screen with new NFS export
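For reference, a hedged CLI sketch of a comparable export, granting read-only access to a subnet and read/write plus root access to a single client (the path /fs01 and all addresses are hypothetical):

# server_export server_2 -Protocol nfs -option ro=10.1.1.0/24,rw=10.1.1.25,root=10.1.1.25 /fs01

Within a single option, multiple hostnames, netgroups, or subnets are separated by colons, matching the notation note above.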

When you have finished Create NFS exports, go to Configure hosts on page 28.


    Configure hosts

To mount an NFS export, you need the source, including the address or the hostname of the server. You can collect these values from the NFS export implementation worksheet. To use this new NFS export on the network, do the following:

1. Open a UNIX prompt on the client computer connected to the same subnet as the Celerra system. Use the values found on the NFS worksheet on page 5 to complete this section.

    2. Log in as root.

3. Enter the following command at the UNIX prompt to mount the NFS export:

mount <Data Mover IP address>:/<NFS export path> /<local mount point>

4. Change directories to the new export by typing:

cd /<local mount point>

5. Confirm the amount of storage in the export by typing:

df /<local mount point>
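Putting the steps together, here is a hedged example session from a Linux client, assuming the export /fs01 is served by the Data Mover interface 10.1.1.100 and is mounted locally at /mnt/fs01:

# mkdir -p /mnt/fs01
# mount 10.1.1.100:/fs01 /mnt/fs01
# cd /mnt/fs01
# df /mnt/fs01

To make the mount persistent across client reboots, the equivalent /etc/fstab entry would be: 10.1.1.100:/fs01 /mnt/fs01 nfs defaults 0 0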

For more information about NFS exports, refer to the Configuring NFS on Celerra technical module found at http://Powerlink.EMC.com.


    Configure and test standby relationships

EMC recommends that multi-blade Celerra systems be configured with a Primary/Standby blade failover configuration to ensure data availability in the case of a blade (server/Data Mover) fault.

Creating a standby blade ensures continuous access to file systems on the Celerra storage system. When a primary blade fails over to a standby, the standby blade assumes the identity and functionality of the failed blade and acts as the primary blade until the faulted blade is healthy and is manually failed back to its primary role.

Configure a standby relationship

A blade must first be configured as a standby for one or more primary blades before it can function as a standby blade when required.

To configure a standby blade:

1. Determine the ideal blade failover configuration for the Celerra system based on site requirements and EMC recommendations.

EMC recommends a minimum of one standby blade for up to three primary blades.

CAUTION! The standby blade(s) must have the same network capabilities (NICs and cables) as the primary blades with which they will be associated. This is because the standby blade will assume the faulted primary blade's network identity (NIC IP and MAC addresses), storage identity (controlled file systems), and service identity (controlled shares and exports).

2. Define the standby configuration using Celerra Manager, following the blade standby configuration recommendation:

a. Select <Celerra name> > Data Movers > <Data Mover name> from the left-hand navigation panel.

b. On the Data Mover Properties screen, configure the standby blade for the selected primary blade by checking the box of the desired Standby Mover and defining the Failover Policy.


    Figure 16 Configure a standby in Celerra Manager

Note: A failover policy is a predetermined action that the Control Station invokes when it detects a blade failure, based on the failover policy type specified. It is recommended that the Failover Policy be set to auto.

    c. Click Apply.

    Note: The blade configured as standby will now reboot.


d. Repeat for each primary blade in the Primary/Standby configuration.
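The same standby relationship can also be defined from the Control Station CLI. The following is a hedged sketch, assuming server_3 is to act as the standby for server_2 with the recommended automatic failover policy:

# server_standby server_2 -create mover=server_3 -policy auto

As with the wizard, the blade designated as the standby reboots when the relationship is created.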

Test the standby configuration

It is recommended that the functionality of the blade failover configuration be tested before the system goes into production.

When a failover condition occurs, the Celerra is able to transfer functionality from the primary blade to the standby blade without disrupting file system availability.

For a standby blade to successfully stand in as a primary blade, the blades must have the same network connections (Ethernet and Fibre cables), network configurations (EtherChannel, Fail-Safe Network, high availability, and so forth), and switch configuration (VLAN configuration, and so on).

CAUTION! You must cable the failover blade identically to its primary blade. If configured network ports are left uncabled when a failover occurs, access to file systems will be disrupted.

    To test the failover configuration, do the following:

1. Open an SSH session to the Control Station with an SSH client such as PuTTY, using the Control Station IP address.

2. Log in to the CS as nasadmin. Change to the root user by entering the following command:

su root

    Note: The default password for root is nasadmin.

    3. Collect the current names and types of the system blades:

    # nas_server -l

    Sample output:

    id type acl slot groupID state name

    1 1 1000 2 0 server_2

    2 4 1000 3 0 server_3

Note: The command output above provides the state and name of each blade. Also, the type column designates the blade type as 1 (primary) or 4 (standby).


4. After I/O traffic is running on the primary blade's network port(s), monitor this traffic by entering:

# server_netstat <movername> -i

    Example:

    [nasadmin@rtpplat11cs0 ~]$ server_netstat server_2 -i

    Name Mtu Ibytes Ierror Obytes Oerror PhysAddr

    ****************************************************************************

    fxg0 9000 0 0 0 0 0:60:16:32:4a:30

    fxg1 9000 0 0 0 0 0:60:16:32:4a:31

    mge0 9000 851321 0 812531 0 0:60:16:2c:43:2

    mge1 9000 28714095 0 1267209 0 0:60:16:2c:43:1

    cge0 9000 614247 0 2022 0 0:60:16:2b:49:12

    cge1 9000 0 0 0 0 0:60:16:2b:49:13

5. Manually force a graceful failover of the primary blade to the standby blade by using the following command:

# server_standby <movername> -activate mover

    Example:

[nasadmin@rtpplat11cs0 ~]$ server_standby server_2 -activate mover

    server_2 :

    server_2 : going offline

server_3 : going active
replace in progress ...done

    failover activity complete

    commit in progress (not interruptible)...done

    server_2 : renamed as server_2.faulted.server_3

    server_3 : renamed as server_2

Note: This command will rename the primary and standby blades. In the example above, server_2, the primary blade, was rebooted and renamed server_2.faulted.server_3, and server_3 was renamed server_2.


    6. Verify that the failover has completed successfully by:

    a. Checking that the blades have changed names and types:

    # nas_server -l

    Sample output:

    id type acl slot groupID state name

    1 1 1000 2 0 server_2.faulted.server_3

    2 1 1000 3 0 server_2

Note: In the command output above, each blade's name has changed and the type column designates both blades as type 1 (primary).

b. Checking that I/O traffic is flowing to the primary blade by entering:

# server_netstat <movername> -i

Note: The primary blade, though physically a different blade, retains the initial name.

    Sample output:

    [nasadmin@rtpplat11cs0 ~]$ server_netstat server_2 -i

    Name Mtu Ibytes Ierror Obytes Oerror PhysAddr

    ****************************************************************************

    fxg0 9000 0 0 0 0 0:60:16:32:4b:18

    fxg1 9000 0 0 0 0 0:60:16:32:4b:19

    mge0 9000 14390362 0 786537 0 0:60:16:2c:43:30

    mge1 9000 16946 0 3256 0 0:60:16:2c:43:31

    cge0 9000 415447 0 3251 0 0:60:16:2b:49:12

    cge1 9000 0 0 0 0 0:60:16:2b:48:ad

Note: The hardware (MAC) addresses in the PhysAddr column have changed, reflecting that the failover completed successfully.

7. Verify that the blades appear with reason code 5 by typing:

    # /nas/sbin/getreason
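Sample output (a hedged illustration; the slot numbers reflect this example's two-blade configuration):

10 - slot_0 primary control station
 5 - slot_2 contacted
 5 - slot_3 contacted

Reason code 5 (contacted) indicates that a blade has fully booted and is in contact with the Control Station.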


8. After the blades appear with reason code 5, manually restore the failed-over blade to its primary status by typing the following command:

# server_standby <movername> -restore mover

    Example:

    server_standby server_2 -restore mover

    server_2 :

    server_2 : going standby

server_2.faulted.server_3 : going active
replace in progress ...done

    failover activity complete

    commit in progress (not interruptible)...done

    server_2 : renamed as server_3

    server_2.faulted.server_3 : renamed as server_2

Note: This command will rename the primary and standby blades. In the example above, server_2, the standing primary blade, was rebooted and renamed server_3, and server_2.faulted.server_3 was renamed server_2.

    9. Verify that the failback has completed successfully by:

a. Checking that the blades have changed back to their original names and types:

    # nas_server -l

    Sample output:

    id type acl slot groupID state name

    1 1 1000 2 0 server_2

    2 4 1000 3 0 server_3


b. Checking that I/O traffic is flowing to the primary blade by entering:

# server_netstat <movername> -i

    Sample output:

    [nasadmin@rtpplat11cs0 ~]$ server_netstat server_2 -i

    Name Mtu Ibytes Ierror Obytes Oerror PhysAddr

    ****************************************************************************

    fxg0 9000 0 0 0 0 0:60:16:32:4a:30

fxg1 9000 0 0 0 0 0:60:16:32:4a:31
mge0 9000 851321 0 812531 0 0:60:16:2c:43:2

    mge1 9000 28714095 0 1267209 0 0:60:16:2c:43:1

    cge0 9000 314427 0 1324 0 0:60:16:2b:49:12

    cge1 9000 0 0 0 0 0:60:16:2b:49:13

Note: The hardware (MAC) addresses in the PhysAddr column have reverted to their original values, reflecting that the failback completed successfully.

Refer to the Configuring Standbys on EMC Celerra technical module on http://Powerlink.EMC.com for more information about determining and defining blade standby configurations.


    Appendix

Disk group and disk volume configurations

Table 2 maps a disk group type to a storage profile, associating the RAID type and the storage space that results in the automatic volume management (AVM) pool. A storage profile is a set of rules used by AVM to determine what type of disk volumes to use to provide storage for the pool.

    Table 2 Disk group and disk volume configurations

Disk group type              Attach type      Storage profile
RAID 5 8+1                   Fibre Channel    clar_r5_economy (8+1)
RAID 5 4+1                   Fibre Channel    clar_r5_performance (4+1)
RAID 1                       Fibre Channel    clar_r1
RAID 6 4+2, RAID 6 12+2      Fibre Channel    clar_r6
RAID 5 6+1                   ATA              clarata_archive
RAID 5 4+1 (CX3 only)        ATA              clarata_archive
RAID 3 4+1, RAID 3 8+1       ATA              clarata_r3
RAID 6 4+2, RAID 6 12+2      ATA              clarata_r6
RAID 5 6+1 (CX3 only)        LCFC             clarata_archive
RAID 5 4+1 (CX3 only)        LCFC             clarata_archive
RAID 3 4+1, RAID 3 8+1       LCFC             clarata_r3
RAID 6 4+2, RAID 6 12+2      LCFC             clarata_r6
RAID 5 2+1                   SATA             clarata_archive
RAID 5 3+1                   SATA             clarata_archive
RAID 5 4+1                   SATA             clarata_archive
RAID 5 5+1                   SATA             clarata_archive
RAID 1/0 (2 disks)           SATA             clarata_r10
RAID 6 4+2                   SATA             clarata_r6
RAID 5 2+1                   SAS              clarsas_archive
RAID 5 3+1                   SAS              clarsas_archive
RAID 5 4+1                   SAS              clarsas_archive
RAID 5 5+1                   SAS              clarsas_archive
RAID 1/0 (2 disks)           SAS              clarsas_r10
RAID 6 4+2                   SAS              clarsas_r6
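To see which of these AVM storage pools actually exist on a given system, and how much space each one holds, a hedged CLI sketch:

# nas_pool -list
# nas_pool -size clar_r5_performance

The pool names reported correspond to the storage profiles in Table 2 and to the system-defined storage pools offered when you create a file system.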
