
Create an NFS export

This procedure explains how to create a Network File System (NFS) export on your Celerra system. The Celerra system is a multiprotocol machine that can provide access to data through the NFS protocol for file sharing in network environments.

    The NFS protocol enables the Celerra Network Server to assume the functions of an NFS server. NFS environments typically include:

    Native UNIX clients

    Linux clients

    Windows systems configured with third-party applications that provide NFS client services

Overview ............................................................ 2
Pre-implementation tasks ........................................... 4
Implementation worksheets .......................................... 5
Connect external network cables .................................... 7
Configure storage for a Fibre Channel enabled system .............. 10
Configure the network ............................................. 21
Create a file system .............................................. 22
Delete the NFS export created during startup ...................... 25
Create NFS exports ................................................ 26
Configure hosts ................................................... 30
Configure and test standby relationships .......................... 31
Appendix .......................................................... 38


Overview

This section contains an overview of the NFS implementation procedure and the host requirements for NFS implementation.

Procedure overview

To create an NFS export, you must perform the following tasks:

    1. Verify that you have performed the pre-implementation tasks:

    Create a Powerlink account.

    Register your Celerra with EMC or your service provider.

Install the Navisphere Service Taskbar (NST).

    Add additional disk array enclosures (DAEs) using the NST (Not available for NX4).

    2. Complete the implementation worksheets.

    3. Cable additional Celerra ports to your network system.

    4. Configure unused or new disks with Navisphere Manager.

    5. Configure your network by creating a new interface to access the Celerra storage from a host or workstation.

    6. Create a file system using a system-defined storage pool.

    7. Delete the NFS export created during startup.

    8. Create an NFS export from the file system.

    9. Configure host access to the NFS export.

10. Configure and test standby relationships.

Host requirements for NFS

    Software

    Celerra Network Server version 5.6.

    For secure NFS using UNIX or Linux-based Kerberos:

    Sun Enterprise Authentication Mechanism (SEAM) software or Linux KDC running Kerberos version 5

    Note: KDCs from other UNIX systems have not been tested.

    For secure NFS using Windows-based Kerberos:

    Windows 2000 or Windows Server 2003 domain


    To use secure NFS, the client computer must be running:

SunOS version 5.8 or later (Solaris 10 for NFSv4)

Linux kernel 2.4 or later (2.6.12 with NFSv4 patches for NFSv4)

    Hummingbird Maestro version 7 or later (EMC recommends version 8); version 9 for NFSv4

    AIX 5.3 ML3

    Note: Other clients have not been tested.

    DNS (Domain Name System)

    NTP (Network Time Protocol) server

    Note: Windows environments require that you configure Celerra in the Active Directory.

    Hardware

    No specific hardware requirements

    Network

    No specific network requirements

    Storage

    No specific storage requirements


Pre-implementation tasks

Before you begin this NFS implementation procedure, ensure that you have completed the following tasks.

Create a Powerlink account

    You can create a Powerlink account at http://Powerlink.EMC.com.

    Use this website to access additional EMC resources, including documentation, release notes, software updates, information about EMC products, licensing, and service.

Register your system with EMC

    If you did not register your Celerra at the completion of the Celerra Startup Assistant, you can do so now by downloading the Registration wizard from Powerlink.

    The Registration wizard can also be found on the Applications and Tools CD that was shipped with your system.

    Registering your Celerra ensures that EMC Customer Support has all pertinent system and site information so they can properly assist you.

Download and install the Navisphere Service Taskbar (NST)

    The NST is available for download from the CLARiiON Tools page on Powerlink and on the Applications and Tools CD that was shipped with your system.

Add additional disk array enclosures

    Use the NST to add new disk array enclosures (DAEs) to fully implement your Celerra (Not available for NX4).


Implementation worksheets

Before you begin this implementation procedure, take a moment to fill out the following implementation worksheets with the values of the various devices you will need to create.

Create interface worksheet

The New Network Interface wizard configures individual network interfaces for the Data Movers. It can also create virtual network devices: Link Aggregation, Fail-Safe Network, or Ethernet Channel. Use Table 1 to complete the New Network Interface wizard. You will need the following information:

    Does the network use variable-length subnets? Yes No

    Note: If the network uses variable-length subnets, be sure to use the correct subnet mask. Do not assume 255.255.255.0 or other common values.
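For example, a subnet with a /26 prefix uses the netmask 255.255.255.192 rather than 255.255.255.0.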

Table 1 Create interface worksheet

Data Mover number: ______________________________________
Device name or virtual device name: _____________________
IP address: _____________________________________________
Netmask: ________________________________________________
Maximum Transmission Unit (MTU) (optional): _____________
Virtual LAN (VLAN) identifier (optional): _______________
Devices (optional): _____________________________________
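A completed entry might look like the following (all values are hypothetical; use the values assigned for your network): device cge0, IP address 192.168.50.10, netmask 255.255.255.0, MTU 1500, no VLAN identifier.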


    Create file system worksheet

    The Create File System step creates a file system on a Data Mover. This step can be repeated as needed to create additional file systems.


Read/Write Data Mover:  server_2   server_3
Volume Management:  Automatic (recommended)
Storage Pool for Automatic Volume Management:
CLARiiON RAID 1 (Not available for NX4)   CLARiiON RAID 5 Performance   CLARiiON RAID 5 Economy
CLARiiON RAID 1/0   CLARiiON RAID 6
File System Name ____________________________________________
File System Size (megabytes) __________________________________
Use Default User and Group Quotas:  Yes   No
Hard Limit for User Storage (megabytes) ____________________
Soft Limit for User Storage (megabytes) _____________________
Hard Limit for User Files (files) _____________________________
Soft Limit for User Files (files) _____________________________
Hard Limit for Group Storage (megabytes) __________________
Soft Limit for Group Storage (megabytes) ___________________
Hard Limit for Group Files (files) ___________________________
Soft Limit for Group Files (files) ____________________________
Enforce Hard Limits:  Yes   No
Grace Period for Storage (days) _____________________________
Grace Period for Files (days) _______________________________

NFS export worksheet

    NFS export pathname (for example: /test_fs/): __________________________________________________

    IP address of client computer:______________________________

    When you have completed the Implementation worksheets, go to Connect external network cables on page 7.


Connect external network cables

If you have not already done so, you will need to connect the desired blade network ports to your network system.

    This section covers the following topics:

    Connecting the 1U X-blade network ports on page 7

    Connecting the 3U X-blade I/O module network ports on page 8

Connecting the 1U X-blade network ports

    The Celerra NS20 and NS40 integrated systems and the NX4, NS-120, and NS-480 unified storage systems have 1U blade enclosures. There are three possible X-blades available to fill the 1U blade enclosure, depending on the Celerra:

    4-port copper Ethernet X-blade

    2-port copper Ethernet and 2-port optical 1 GbE X-blade

    2-port copper Ethernet and 2-port optical 10 GbE X-blade

    To connect the desired blade network ports to your network system, follow these guidelines depending on your specific blade configuration.

    Figure 1 shows the 4-port copper Ethernet X-blade. This X-blade is available to the NX4, NS20, NS40, NS-120 or NS-480. It has four copper Ethernet ports available for connections to the network system labeled cge0-cge3. Cable these ports as desired.

    Figure 1 4-port copper Ethernet X-blade

Figure 2 on page 8 shows the 2-port copper Ethernet and 2-port optical 1 Gb Ethernet X-blade. This X-blade is available to the NS40 or NS-480. It has four ports available for connections to the network system, labeled cge0-cge1 (copper Ethernet ports) and fge0-fge1 (optical 1 GbE ports). Cable these ports as desired.

    Figure 2 2-port copper Ethernet and 2-port optical 1 GbE X-blade

Figure 3 shows the 2-port copper Ethernet and 2-port optical 10 Gb Ethernet X-blade. This X-blade is available to the NX4, NS-120, or NS-480. It has four ports available for connections to the network system, labeled cge0-cge1 (copper Ethernet ports) and fxg0-fxg1 (optical 10 GbE ports). Cable these ports as desired.

    Figure 3 2-port copper Ethernet and 2-port optical 10 GbE X-blade

Connecting the 3U X-blade I/O module network ports

    Currently, the Celerra NS-960 is the only unified storage system with a 3U blade enclosure. The 3U blade enclosure utilizes different I/O modules for personalized blade configurations. To connect the desired blade I/O module network ports to your network system, follow these guidelines based on your specific I/O module configuration.

In the four-port Fibre Channel I/O module located in slot 0, the first two ports, port 0 (BE 0) and port 1 (BE 1), connect to the array. The next two ports, port 2 (AUX 0) and port 3 (AUX 1), optionally connect to tape backup. Do not use these ports for connections to the network system.

The other I/O modules appear in various slot I/O positions (represented by x in the I/O slot ID) to create the supported blade options. The ports in these I/O modules are available to connect to the network system.

    The four-port copper I/O module has four copper-wire Gigabit Ethernet (GbE) ports, 0 to 3 (logically, cge-x-0 to cge-x-3). Cable these ports as desired.

    The two-port GbE copper and two-port GbE optical I/O module has ports, 0 to 3 (logically, cge-x-0, cge-x-1, fge-x-0, and fge-x-1). Cable these ports as desired.

The one-port 10-GbE I/O module has one port, 0 (logically, fxg-x-0). Cable this port as desired.

Note: The x in the port names above is a variable representing the slot position of the I/O module. Thus, for a four-port copper I/O module in slot 2, the ports would be cge-2-0 to cge-2-3.

Any advanced configuration of the external network ports is beyond the scope of this implementation procedure. For more information about the many network configuration options the Celerra system supports, such as Ethernet channels, link aggregation, and FSNs, refer to the Configuring and Managing Celerra Networking and Configuring and Managing Celerra Network High Availability technical modules.

When you have finished Connect external network cables, go to Configure storage for a Fibre Channel enabled system on page 10.


Configure storage for a Fibre Channel enabled system

There are two ways to configure additional storage in a Fibre Channel enabled system. This section details how to create additional storage for a Fibre Channel enabled system using Navisphere Manager.

Configure storage with Navisphere Manager

    Configure storage with Navisphere Manager by doing the following:

    1. Open an Internet browser: Microsoft Internet Explorer or Mozilla Firefox.

    2. In the browser window, enter the IP address of a storage processor (SP) that has the most recent version of the FLARE Operating Environment (OE) installed.

    Note: If you do not have a supported version of the JRE installed, you will be directed to the Sun website where you can select a supported version to download.

    3. Enter the Navisphere Username and Password in the login prompt, as shown in Figure 4 on page 11.

Note: The default values are username nasadmin and password nasadmin. If you changed the Control Station password during the Celerra Startup Assistant, the Celerra credentials were changed to use those same passwords. You must use the new password for your Navisphere credentials.


Figure 4 Navisphere Manager login

    4. In Storage Domain, expand the backend storage array, and then right-click RAID Groups and select Create RAID Group to create a RAID group for the user LUNs you would like to create, as shown in Figure 5 on page 12.


    Figure 5 Navisphere Manager tree

    5. Enter the RAID Group properties and click Apply to create the new RAID group, as shown in an example in Figure 6 on page 13.

    The RAID Group Parameter values should be applicable to your system.

    For more information, see NAS Support Matrix on http://Powerlink.EMC.com.

    6. To complete the previous step, click Yes to confirm the operation, click OK for the Success dialog box, and close (or click Cancel) the Create RAID Group window.


    Figure 6 Create RAID Group screen

    7. Right-click the newly created RAID Group and select Bind LUN to create a new LUN in the new RAID Group, as shown in the example in Figure 7 on page 14.

    8. Enter the new LUN(s) properties and click Apply.

    Note: The LUN ID must be greater than or equal to 16 if the LUN is to be managed with Celerra Manager.


    Figure 7 Bind LUN screen

    Listed below are recommended best practices:

Always create user LUNs in balanced pairs: one owned by SP A and one owned by SP B. The paired LUNs must be of equal size. Selecting the Auto radio button for the Default Owner automatically assigns and balances the pairs.

    For Fibre Channel disks, the paired LUNs do not have to be in the same RAID group. For RAID 5 on Fibre Channel disks, the RAID group must use five or nine disks. A RAID 1 group always uses two disks.


For ATA disks, all LUNs in a RAID group must belong to the same SP. The Celerra system will stripe across pairs of disks from different RAID groups. ATA disks can be configured as RAID 5 with seven disks or RAID 3 with five or nine disks.

    Use the same LUN ID number for the Host ID number.

    We recommend the following settings when creating the user LUNs:

    RAID Type: RAID 5 or RAID 1 for FC disks and RAID 5 or RAID 3 for ATA disks.

    LUN ID: Select the first available value, greater than or equal to 16.

    Element Size: 128

    Rebuild Priority: ASAP

    Verify Priority: ASAP

    Enable Read Cache: Selected

    Enable Write Cache: Selected

    Enable Auto Assign: the checkbox is unmarked (off)

    Number of LUNs to Bind: 2

    Alignment Offset: 0

    LUN Size: Must not exceed 2 TB

    Note: If you are creating 4+1 RAID 3 LUNs, the Number of LUNs to Bind value should be 1.

    9. To complete the previous step, click Yes to confirm the operation, click OK for the Success dialog box, and close (or click Cancel) the Bind Lun window.

    10. To add LUNs to the Celerra storage group, expand Storage Groups, right-click the Celerra storage group, and select Select LUNs.


    Figure 8 Select LUNs screen

11. Maximize the Storage Group Properties window. In the Available LUNs section, select the new LUN(s) by expanding the navigation trees beneath SP A and SP B and clicking the check box(es) for the new LUNs, as shown in the example in Figure 8.

Clicking the checkbox(es) causes the new LUN(s) to appear in the Selected LUNs section.

    Note that in the Selected LUNs section, the Host ID for the new LUN(s) is blank.


    WARNING

Do not uncheck LUNs that are already checked in the Available LUNs section when you open the dialog box. Unchecking these LUNs will render your system inoperable.

    12. In the Selected LUNs section, click in the white space in the Host ID column and select a Host ID greater than 15 for each new LUN to be added to the Celerra storage group. Click OK to add the new LUN(s) to the Celerra storage group, as shown in Figure 9 on page 18.

    WARNING

    The Host ID must be greater than or equal to 16 for user LUNs. An incorrect Host ID value can cause serious problems.

    13. To complete the previous step, click Yes to confirm the operation to add LUNs to the storage group, and click OK for the Success dialog box.


    Figure 9 Selecting LUNs host ID screen

14. To make the new LUN(s) available to the Celerra system, you must use Celerra Manager. Launch Celerra Manager by opening the following URL:

https://<control_station>

where <control_station> is the hostname or IP address of the Control Station.
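For example, if the Control Station's IP address were 192.168.1.100 (a hypothetical value), you would enter https://192.168.1.100.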

15. If a security alert appears about the system's security certificate, click Yes to proceed.


    16. At the login prompt, log in as user root. The default password is nasadmin.

17. If a security warning appears about the system's security certificate being issued by an untrusted source, click Yes to accept the certificate.

    18. If a warning about a hostname mismatch appears, click Yes.

    19. On the Celerra > Storage Systems page, click Rescan, as shown in Figure 10.

    Figure 10 Rescan Storage System in Celerra Manager


CAUTION!

Do not change the host LUN identifier of the Celerra LUNs after rescanning. This may cause data loss or unavailability.

The user LUNs are now available for the Celerra system. When you have finished Configure storage for a Fibre Channel enabled system, go to Configure the network on page 21.


Configure the network

Using Celerra Manager, you can create interfaces on devices that are not part of a virtual device. Host or workstation access to the Celerra storage is configured by creating a network interface.

    Note: You cannot create a new interface for a Data Mover while the Data Mover is failed over to its standby.

    In Celerra Manager, configure a new network interface and device by doing the following:

    1. Log in to Celerra Manager as root.

2. Click Celerras > <Celerra_name> > Wizards.

    3. Click New Network Interface wizard to set up a new network interface. This wizard can also be used to create a new virtual device, if desired.

    Note: On the Select/Create a network device page, click Create Device to create a new virtual network device. The new virtual device can be configured with one of the following high-availability features: Ethernet Channel, Link Aggregation, or Fail-Safe Network.

When you have completed Configure the network, go to Create a file system on page 22.


Create a file system

To create a new file system, perform the following steps:

1. Go to the Celerras > <Celerra_name> > File Systems tab in the left navigation menu.

    2. Click New at the bottom of the File Systems screen, as shown in Figure 11.

    Figure 11 File Systems screen

3. Select the Storage Pool radio button to select where the file system will be created, as shown in Figure 12 on page 23.


    Figure 12 Create new file system screen

    4. Name the file system.

    5. Select the system-defined storage pool from the Storage Pool drop-down menu.

    Note: Based on the disks and the RAID types created in the storage system, different system-defined storage pools will appear in the storage pool list. For more information about system-defined storage pools refer to the Disk group and disk volume configurations on page 38.

    6. Designate the Storage Capacity of the file system and select any other desired options.


    Other file system options are listed below:

    Auto Extend Enabled: If enabled, the file system automatically extends when the high water mark is reached.


    Virtual Provisioning Enabled: This option can only be used with automatic file system extension and together they let you grow the file system as needed.

    File-level Retention (FLR) Capability: If enabled, it is persistently marked as an FLR file system until it is deleted. File systems can be enabled with FLR capability only at creation time.

    7. Click Create. The new file system will now appear on the File System screen, as shown in Figure 13.

    Figure 13 File System screen with new file system


Delete the NFS export created during startup

You may have optionally created an NFS export using the Celerra Startup Assistant (CSA). If you have a minimum configuration of five or fewer disks, you can begin to use this export as a production export. If you have more than five disks, delete the NFS export created during startup as follows:

    1. To delete the NFS export created during startup and make the file system unavailable to NFS users on the network:

    a. Go to Celerras > and click the NFS Exports tab.

    b. Select one or more exports to delete, and click Delete.

    The Confirm Delete page appears.

    c. Click OK.

When you have completed Delete the NFS export created during startup, go to Create NFS exports on page 26.


Create NFS exports

To create a new NFS export, do the following:

1. Go to Celerras > <Celerra_name> and click the NFS Exports tab.

2. Click New, as shown in Figure 14.

    Figure 14 NFS Exports screen

    3. Select a Data Mover that manages the file system from the Choose Data Mover drop-down list on the New NFS export page, as shown in Figure 15 on page 27.


    Figure 15 New NFS Export screen

    4. Select the file system or checkpoint that contains the directory to export from the File System drop-down list.

    The list displays the mount point for all file systems and checkpoints mounted on the selected Data Mover.

    Note: The Path field displays the mount point of the selected file system. This entry exports the root of the file system. To export a subdirectory, add the rest of the path to the string in the field. You may also delete the contents of this box and enter a new, complete path. This path must already exist.


5. Fill out the Host Access section by defining export permissions for host access to the NFS export.

    Note: The IP address with the subnet mask can be entered in dot (.) notation, slash (/) notation, or hexadecimal format. Use colons to separate multiple entries.
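For example, the hypothetical entries 172.16.21.0/255.255.255.0 (dot notation) and 172.16.21.0/24 (slash notation) describe the same subnet, and two hosts could be listed together as 172.16.21.10:172.16.21.11.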

The Read-only Export option grants read-only access to all hosts with access to this export, except for hosts given explicit read/write access on this page.

    Read-only Hosts field grants read-only access to the export to the hostnames, IP addresses, netgroup names, or subnets listed in this field.

    Read/write Hosts field grants read/write access to the export to the hostnames, IP addresses, netgroup names, or subnets listed in this field.

    6. Click OK to create the export.

    Note: If a file system is created using the command line interface (CLI), it will not be displayed for an NFS export until it is mounted on a Data Mover.
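For reference, a file system created from the CLI must be mounted on a Data Mover before it appears in this list. A minimal sketch, assuming a hypothetical file system named ufs01 mounted at /ufs01 on server_2, is:

server_mount server_2 ufs01 /ufs01

Running server_mount server_2 with no additional arguments lists the file systems currently mounted on that Data Mover.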

    The new NFS export will now appear on the NFS export screen, as shown in Figure 16 on page 29.


    Figure 16 NFS export screen with new NFS export

When you have finished Create NFS exports, go to Configure hosts on page 30.


Configure hosts

To mount an NFS export, you need the source, including the address or the hostname of the server. You can collect these values from the NFS export implementation worksheet. To use this new NFS export on the network, do the following:

    1. Open a UNIX prompt on the client computer connected to the same subnet as the Celerra system. Use the values found on the NFS worksheet on page 5 to complete this section.

    2. Log in as root.

    3. Enter the following command at the UNIX prompt to mount the NFS export:

mount <Celerra_interface_IP>:/<NFS_export_pathname> /<local_mount_point>

    4. Change directories to the new export by typing:

cd /<local_mount_point>

    5. Confirm the amount of storage in the export by typing:

df /<local_mount_point>
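As a worked example, assume a hypothetical Data Mover interface at 192.168.50.10, an export pathname of /test_fs, and a local mount point of /mnt/test_fs (create the directory first if it does not exist). The commands would then be:

mkdir /mnt/test_fs
mount 192.168.50.10:/test_fs /mnt/test_fs
cd /mnt/test_fs
df /mnt/test_fs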

    For more information about NFS exports please refer to the Configuring NFS on Celerra technical module found at http://Powerlink.EMC.com.


Configure and test standby relationships

EMC recommends that multi-blade Celerra systems be configured with a Primary/Standby blade failover configuration to ensure data availability in the case of a blade (server/Data Mover) fault.

    Creating a standby blade ensures continuous access to file systems on the Celerra storage system. When a primary blade fails over to a standby, the standby blade assumes the identity and functionality of the failed blade and functions as the primary blade until the faulted blade is healthy and manually failed back to functioning as the primary blade.

Configure a standby relationship

A blade must first be configured as a standby for one or more primary blades before it can function as a standby blade when required.

    To configure a standby blade:

    1. Determine the ideal blade failover configuration for the Celerra system based on site requirements and EMC recommendations.

    EMC recommends a minimum of one standby blade for up to three Primary blades.

CAUTION!

The standby blade must have the same network capabilities (NICs and cables) as the primary blades with which it will be associated. This is because the standby blade will assume the faulted primary blade's network identity (NIC IP and MAC addresses), storage identity (controlled file systems), and service identity (controlled shares and exports).

    2. Define the standby configuration using Celerra Manager following the blade standby configuration recommendation:

a. Select <Celerra_name> > Data Movers > <Data_Mover_name> from the left-hand navigation panel.

    b. On the Data Mover Properties screen, configure the standby blade for the selected primary blade by checking the box of the desired Standby Mover and define the Failover Policy.


    Figure 17 Configure a standby in Celerra Manager

    Note: A failover policy is a predetermined action that the Control Station invokes when it detects a blade failure based on the failover policy type specified. It is recommended that the Failover Policy be set to auto.

    c. Click Apply.

    Note: The blade configured as standby will now reboot.


    d. Repeat for each primary blade in the Primary/Standby configuration.

Test the standby configuration

It is recommended that the functionality of the blade failover configuration be tested prior to the system going into production.

When a failover condition occurs, the Celerra is able to transfer functionality from the primary blade to the standby blade without disrupting file system availability.

For a standby blade to successfully stand in as a primary blade, the blades must have the same network connections (Ethernet and Fibre Channel cables), network configurations (EtherChannel, Fail-Safe Network, High Availability, and so forth), and switch configuration (VLAN configuration, and so on).

CAUTION!

You must cable the failover blade identically to its primary blade. If configured network ports are left uncabled when a failover occurs, access to file systems will be disrupted.

    To test the failover configuration, do the following:

1. Open an SSH session to the Control Station (CS) with an SSH client such as PuTTY.

    2. Log in to the CS as nasadmin. Change to the root user by entering the following command:

    su root

    Note: The default password for root is nasadmin.

    3. Collect the current names and types of the system blades:

    # nas_server -l

    Sample output:

id   type  acl   slot  groupID  state  name
1    1     1000  2              0      server_2
2    4     1000  3              0      server_3

Note: In the command output above, the name column provides the names of the blades. Also, the type column designates the blade type as 1 (primary) or 4 (standby).


4. After I/O traffic is running on the primary blade's network port(s), monitor this traffic by entering:

    # server_netstat -i

Example:

[nasadmin@rtpplat11cs0 ~]$ server_netstat server_2 -i
Name  Mtu   Ibytes     Ierror  Obytes    Oerror  PhysAddr
**********************************************************************
fxg0  9000  0          0       0         0       0:60:16:32:4a:30
fxg1  9000  0          0       0         0       0:60:16:32:4a:31
mge0  9000  851321     0       812531    0       0:60:16:2c:43:2
mge1  9000  28714095   0       1267209   0       0:60:16:2c:43:1
cge0  9000  614247     0       2022      0       0:60:16:2b:49:12
cge1  9000  0          0       0         0       0:60:16:2b:49:13

    5. Manually force a graceful failover of the primary blade to the standby blade by using the following command:

    # server_standby -activate mover

    Example:

    [nasadmin@rtpplat11cs0 ~]$ server_standby server_2 -activate mover

server_2 :
server_2 : going offline
server_3 : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done

server_2 : renamed as server_2.faulted.server_3
server_3 : renamed as server_2

Note: This command will rename the primary and standby blades. In the example above, server_2, the primary blade, was rebooted and renamed server_2.faulted.server_3, and server_3 was renamed as server_2.


    6. Verify that the failover has completed successfully by:

    a. Checking that the blades have changed names and types:

# nas_server -l

Sample output:

id   type  acl   slot  groupID  state  name
1    1     1000  2              0      server_2.faulted.server_3
2    1     1000  3              0      server_2

Note: In the command output above, each blade's name has changed, and the type column designates both blades as type 1 (primary).

    b. Checking I/O traffic is flowing to the primary blade by entering:

    # server_netstat -i

    Note: The primary blade, though physically a different blade, retains the initial name.

Sample output:

[nasadmin@rtpplat11cs0 ~]$ server_netstat server_2 -i
Name  Mtu   Ibytes     Ierror  Obytes    Oerror  PhysAddr
**********************************************************************
fxg0  9000  0          0       0         0       0:60:16:32:4b:18
fxg1  9000  0          0       0         0       0:60:16:32:4b:19
mge0  9000  14390362   0       786537    0       0:60:16:2c:43:30
mge1  9000  16946      0       3256      0       0:60:16:2c:43:31
cge0  9000  415447     0       3251      0       0:60:16:2b:49:12
cge1  9000  0          0       0         0       0:60:16:2b:48:ad

Note: The physical (MAC) addresses in the PhysAddr column have changed, reflecting that the failover completed successfully.

7. Verify that the blades appear with reason code 5 by typing:

    # /nas/sbin/getreason
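The exact output varies by system, but a healthy configuration might look like the following sketch (slot numbers and the Control Station line are illustrative); reason code 5 indicates that a blade has booted and is contacted:

10 - slot_0 primary control station
 5 - slot_2 contacted
 5 - slot_3 contacted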


8. After the blades appear with reason code 5, manually restore the failed-over blade to its primary status by typing the following command:

    # server_standby -restore mover

    Example:

    server_standby server_2 -restore mover

server_2 :
server_2 : going standby
server_2.faulted.server_3 : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done

server_2 : renamed as server_3
server_2.faulted.server_3 : renamed as server_2

Note: This command will rename the primary and standby blades. In the example above, server_2, the stand-in primary blade, was rebooted and renamed server_3, and server_2.faulted.server_3 was renamed as server_2.

    9. Verify that the failback has completed successfully by:

    a. Checking that the blades have changed back to the original name and type:

    # nas_server -l

    Sample output:

id   type  acl   slot  groupID  state  name
1    1     1000  2              0      server_2
2    4     1000  3              0      server_3


    b. Checking I/O traffic flowing to the primary blade by entering:

    # server_netstat -i

Sample output:

[nasadmin@rtpplat11cs0 ~]$ server_netstat server_2 -i
Name  Mtu   Ibytes     Ierror  Obytes    Oerror  PhysAddr
**********************************************************************
fxg0  9000  0          0       0         0       0:60:16:32:4a:30
fxg1  9000  0          0       0         0       0:60:16:32:4a:31
mge0  9000  851321     0       812531    0       0:60:16:2c:43:2
mge1  9000  28714095   0       1267209   0       0:60:16:2c:43:1
cge0  9000  314427     0       1324      0       0:60:16:2b:49:12
cge1  9000  0          0       0         0       0:60:16:2b:49:13

Note: The physical (MAC) addresses in the PhysAddr column have reverted to their original values, reflecting that the failback completed successfully.

Refer to the Configuring Standbys on EMC Celerra technical module on http://Powerlink.EMC.com for more information about determining and defining blade standby configurations.


    Appendix

Disk group and disk volume configurations

    Table 2 maps a disk group type to a storage profile, associating the RAID type and the storage space that results in the automatic volume management (AVM) pool. The storage profile name is a set of rules used by AVM to determine what type of disk volumes to use to provide storage for the pool.
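For example, as Table 2 shows, disk volumes from a RAID 5 4+1 Fibre Channel disk group are placed in the clar_r5_performance pool, while a RAID 6 4+2 disk group on SATA disks maps to the clarata_r6 pool.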

Table 2 Disk group and disk volume configurations

Disk group type              Attach type     Storage profile
RAID 5 8+1                   Fibre Channel   clar_r5_economy (8+1), clar_r5_performance (4+1)
RAID 5 4+1, RAID 1           Fibre Channel   clar_r5_performance, clar_r1
RAID 5 4+1                   Fibre Channel   clar_r5_performance
RAID 1                       Fibre Channel   clar_r1
RAID 6 4+2, RAID 6 12+2      Fibre Channel   clar_r6
RAID 5 6+1                   ATA             clarata_archive
RAID 5 4+1 (CX3 only)        ATA             clarata_archive
RAID 3 4+1, RAID 3 8+1       ATA             clarata_r3
RAID 6 4+2, RAID 6 12+2      ATA             clarata_r6
RAID 5 6+1 (CX3 only)        LCFC            clarata_archive
RAID 5 4+1 (CX3 only)        LCFC            clarata_archive
RAID 3 4+1, RAID 3 8+1       LCFC            clarata_r3
RAID 6 4+2, RAID 6 12+2      LCFC            clarata_r6
RAID 5 2+1                   SATA            clarata_archive
RAID 5 3+1                   SATA            clarata_archive
RAID 5 4+1                   SATA            clarata_archive
RAID 5 5+1                   SATA            clarata_archive
RAID 1/0 (2 disks)           SATA            clarata_r10
RAID 6 4+2                   SATA            clarata_r6
RAID 5 2+1                   SAS             clarsas_archive
RAID 5 3+1                   SAS             clarsas_archive
RAID 5 4+1                   SAS             clarsas_archive
RAID 5 5+1                   SAS             clarsas_archive
RAID 1/0 (2 disks)           SAS             clarsas_r10
RAID 6 4+2                   SAS             clarsas_r6

