
Red Hat Cluster Configuration and Management

Available resources:

3 servers, of which two are used as cluster nodes and one as the shared storage.

DB1

IP Address – 192.168.188.173
NIC cards – 2
RAM – 3 GB
OS – 32-bit RHEL 5.2
Kernel – 2.6.18-92.el5

DB2
IP Address – 192.168.188.153
NIC cards – 2
RAM – 3 GB
OS – 32-bit RHEL 5.2
Kernel – 2.6.18-92.el5

Openfiler
IP Address – 192.168.188.209
NIC cards – 2
Kernel – 2.6.26.8-1.0.11.smp.pae.gcc3.4.x86.i686 (SMP)
OS – Openfiler 2.3

Setting Up Hardware

Setting up hardware consists of connecting the cluster nodes to the other hardware required to run a Red Hat Cluster. The amount and type of hardware varies according to the purpose and availability requirements of the cluster.

Cluster nodes — Computers that are capable of running Red Hat Enterprise Linux 5 software, with at least 1GB of RAM. The maximum number of nodes supported in a Red Hat Cluster is 16.

Ethernet switch or hub for public network — This is required for client access to the cluster.

Ethernet switch or hub for private network — This is required for communication among the cluster nodes and other cluster hardware such as network power switches and Fibre Channel switches.


Storage (here we used another server)— Some type of storage is required for a cluster. The type required depends on the purpose of the cluster.

Configuring Red Hat Cluster Software

Conga

Conga is an integrated set of software components that provides centralized configuration and management of Red Hat clusters and storage. Conga provides the following major features:

• One Web interface for managing cluster and storage

• Automated Deployment of Cluster Data and Supporting Packages

• Easy Integration with Existing Clusters

• No Need to Re-Authenticate

• Integration of Cluster Status and Logs

• Fine-Grained Control over User Permissions

Packages needed: luci and ricci.


Considerations for Using Conga

There are no explicit private interconnects. The cluster has a single XML configuration file, /etc/cluster/cluster.conf, and requires three services:

cman

rgmanager

clvmd (assuming you are using LVM)

Installing packages

Conga will install packages for you using yum. However, it may be better to pre-install the packages so that you decide on the version you are using.

If you are using kickstart, add the following package groups to your ks.cfg (a fuller example follows the package lists below):

@clustering
@cluster-storage

This will install the following packages.

Clustering:

cluster-cim cluster-snmp ipvsadm luci modcluster piranha rgmanager ricci system-config-cluster

Cluster-storage:

gfs-utils gnbd kmod-gfs kmod-gnbd lvm2-cluster
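For context, a minimal %packages section of a kickstart file that pulls in those groups might look like the following sketch; the @base group and the rest of the ks.cfg are assumed to exist already and are not part of the original notes:

%packages
@base
@clustering
@cluster-storage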


Enabling IP Ports on Cluster Nodes

IP Port Number        Protocol   Component
5404, 5405            UDP        cman (Cluster Manager)
11111                 TCP        ricci (part of Conga remote agent)
14567                 TCP        gnbd (Global Network Block Device)
16851                 TCP        modclusterd (part of Conga remote agent)
21064                 TCP        dlm (Distributed Lock Manager)
50006, 50008, 50009   TCP        ccsd (Cluster Configuration System daemon)
50007                 UDP        ccsd (Cluster Configuration System daemon)
8084                  TCP        luci (Conga user interface server)

Installing the group packages for RHEL cluster

yum groupinstall -y "Clustering" "Cluster Storage"

Starting luci and ricci

To administer Red Hat Clusters with Conga, install and run luci and ricci as follows:

1. At each node to be administered by Conga, install the ricci agent. For example:

# yum install ricci

2. At each node to be administered by Conga, start ricci. For example:

# service ricci start

Starting ricci: [ OK ]

3. Select a computer to host luci and install the luci software on that computer. For example:


# yum install luci

4. At the computer running luci, initialize the luci server using the luci_admin init command. For example:

# luci_admin init
Initializing the Luci server

Creating the 'admin' user

Enter password: <Type password and press ENTER.>
Confirm password: <Re-type password and press ENTER.>

Please wait...
The admin password has been successfully set.
Generating SSL certificates...
Luci server has been successfully initialized
Restart the Luci server for changes to take effect
eg. service luci restart

5. Start luci using service luci restart. For example:

# service luci restart
Shutting down luci:                                        [  OK  ]
Starting luci: generating https SSL certificates...  done
                                                           [  OK  ]
Please, point your web browser to https://nano-01:8084 to access luci

6. At a Web browser, place the URL of the luci server into the URL address box and click Go (or the equivalent). The URL syntax for the luci server is https://luci_server_hostname:8084.

Change the channel of the server after registering it in RHN. Select the system that you want to use as a cluster node, go to Alter Channel Subscriptions, and select:

RHEL Cluster-Storage (v. 5 for 32-bit x86)

RHEL Clustering (v. 5 for 32-bit x86)

Install system-config-cluster

Conga should install the cluster packages for you. However, you may also wish to install system-config-cluster:


# yum install system-config-cluster
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.as29550.net
 * updates: mirror.as29550.net
 * addons: mirror.as29550.net
 * extras: mirror.as29550.net
Setting up Install Process
Parsing package install arguments
Resolving Dependencies
--> Running transaction check
---> Package system-config-cluster.noarch 0:1.0.55-1.0 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

================================================================
 Package                 Arch     Version      Repository  Size
================================================================
Installing:
 system-config-cluster   noarch   1.0.55-1.0   base        287 k

Transaction Summary
================================================================
Install      1 Package(s)
Update       0 Package(s)
Remove       0 Package(s)

Total download size: 287 k
Is this ok [y/N]: y
Downloading Packages:
system-config-cluster-1.0.55-1.0.noarch.rpm      | 287 kB    00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : system-config-cluster                      [1/1]

Installed: system-config-cluster.noarch 0:1.0.55-1.0
Complete!


Creating a Cluster

Hostname resolution

Make sure your Conga server (luci) and your cluster nodes can each look the other up using DNS or /etc/hosts.
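For example, a matching /etc/hosts entry on each machine might look like this; the db1/db2/openfiler hostnames are assumed from the shell prompts used later in this document, so substitute your own names:

192.168.188.173   db1
192.168.188.153   db2
192.168.188.209   openfiler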

Install / set up Conga

Install and set up Conga (luci and ricci) as described above.

Enable cluster-based locking in lvm.conf

This makes sure that LVM is cluster-safe and that any modifications to volumes are performed using the Distributed Lock Manager (DLM).

On each node in the cluster do the following to set locking_type = 3.

# vi /etc/lvm/lvm.conf
locking_type = 3

When the cluster is created, the clvmd service will be started, and it will read this file.
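As an alternative to editing the file by hand, the lvm2-cluster package on RHEL ships an lvmconf helper that makes the same change. This is a suggestion rather than part of the original procedure, so verify the resulting locking_type on your systems:

# lvmconf --enable-cluster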

Creating a Cluster

Assuming Conga is already set up, use it to create the cluster. This will create an initial /etc/cluster/cluster.conf file. Alternatively, you could create your own cluster.conf file with a text editor, put it on each node, and start the cluster services, or you can use ccs_tool. However, Conga is by far the easiest method.

1. As administrator of luci, select the cluster tab.

2. Click Create a New Cluster.

3. At the Cluster Name text box, enter a cluster name. The cluster name cannot exceed 15 characters. Add the node name and password for each cluster node: enter the node name for each node in the Node Hostname column and the root password for each node in the Root Password column. Check the Enable Shared Storage Support checkbox if clustered storage is required.

4. Click Submit. Clicking Submit causes the following actions:

a. Cluster software packages to be downloaded onto each cluster node.

b. Cluster software to be installed onto each cluster node.

c. Cluster configuration file to be created and propagated to each node in the cluster.

d. The cluster to be started.

A progress page shows the progress of those actions for each node in the cluster.

When the process of creating a new cluster is complete, a page is displayed providing a configuration interface for the newly created cluster.

After the creation has completed you will have a simple /etc/cluster/cluster.conf file and the cman service will have been started.

The cluster.conf file

After creating the cluster with Conga, you will have a cluster.conf file similar to the following:

<?xml version="1.0"?><cluster alias="linuxcluster2" config_version="1" name="Sierra"> <fence_daemon post_fail_delay="0" post_join_delay="3"/> <clusternodes> <clusternode name="192.168.188.153" nodeid="1" votes="1"/> <clusternode name="192.168.188.173" nodeid="2" votes="1"/> </clusternodes> <cman expected_votes="1" two_node="1"/> <fencedevices/> <rm/></cluster>

Check cluster services

Check that the main cluster services are enabled at this point.

# chkconfig --list modclusterd
modclusterd    0:off  1:off  2:on  3:on  4:on  5:on  6:off
# chkconfig --list cman
cman           0:off  1:off  2:on  3:on  4:on  5:on  6:off
# chkconfig --list clvmd
clvmd          0:off  1:off  2:on  3:on  4:on  5:on  6:off


# chkconfig --list rgmanager
rgmanager      0:off  1:off  2:on  3:on  4:on  5:on  6:off
# service modclusterd status
modclusterd (pid 3264) is running...
# service cman status
cman is running.
# service clvmd status
clvmd (pid 30846) is running...
active volumes: LogVol00 LogVol01
# service rgmanager status
clurgmgrd (pid 30916 30915) is running...

Check the cluster status

At this point we can run clustat to check that the cluster is up.

# clustat
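Typical clustat output looks roughly like the following. This is an illustrative sketch rather than output captured from this setup, so the cluster name, member names, and status details will differ on your systems:

Cluster Status for Sierra
Member Status: Quorate

 Member Name            ID   Status
 ------ ----            ---- ------
 192.168.188.153        1    Online
 192.168.188.173        2    Online, Local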

Shared Storage: Openfiler Setup (Graphical Installation)

The use of software RAID, or software Logical Volume Management (LVM), is not supported on shared storage. The Red Hat Cluster Manager requires that all cluster members have simultaneous access to the shared storage. These products typically do not allow for online repair of a failed member. Only host RAID adapters listed in the Red Hat Hardware Compatibility List are supported.

Here, we’ve used Openfiler for creating the shared storage.

System Requirements

Openfiler has the following hardware requirements to be successfully installed:

1. x86 or x64 based computer with at least 512MB RAM and 1GB storage for the OS image.

2. At least one supported network interface card
3. A CDROM or DVD-ROM drive if you are performing a local install
4. A supported disk controller with data drives attached

Installation

The installation process is described with screenshots for illustrative purposes. If you are unable to proceed at any point with the installation process or you make a mistake, use the Back button to return to previous points in the installation process.

Starting the Installation


To begin the installation, insert the Openfiler disk into your CD/DVD-ROM drive and ensure your system is configured to boot off the CD/DVD-ROM drive. After the system POSTs, the installer boot prompt will come up. At this point, just hit the Enter key to proceed.

After a few moments, the first screen of the installer will be presented. If at this point your screen happens to be garbled, it is likely that the installer has been unable to automatically detect your graphics subsystem hardware; in that case you may restart the installation process in text mode and proceed accordingly. From the first installer screen, click the Next button to proceed with the installation.


Keyboard Selection

This screen deals with keyboard layout selection. Use the scroll bar on the right to scroll up and down and select your desired keyboard layout from the list. Once you are satisfied with your selection, click the Next button to proceed.


Disk Partitioning Setup

Next comes the disk partitioning. You must select manual disk partitioning, as it ensures you will end up with a bootable system and with the correct partitioning scheme. Openfiler does not support automatic partitioning, and you will be unable to configure data storage disks in the Openfiler graphical user interface if you select automatic partitioning. Click the Next button once you have selected the correct radiobutton option.


Disk Setup

On the disk setup screen, if you have any existing partitions on the system, please delete them. DO NOT DELETE ANY EXISTING OPENFILER DATA PARTITIONS UNLESS YOU NO LONGER REQUIRE THE DATA ON THEM. To delete a partition, highlight it in the list of partitions and click the Delete button. You should now have a clean disk on which to create your partitions.


You need to create three partitions on the system in order to proceed with the installation:

1. "/boot" - this is where the kernel will reside and the system

Page 16: clustering (Autosaved)

will boot from

2. "swap" - this is the swap partition for memory swapping to disk

3. "/"- this is the system root partition where all system applications and libraries will be installed


Create /boot Partition

Proceed by creating a boot partition. Click on the New button. You will be presented with a form with several fields and checkboxes. Enter the partition mount path "/boot" and select the disk on which to create the partition. In the illustrated example, this disk is hda (the first IDE hard disk). Your setup will very likely be different, as you may have several disks of different types. You should make sure that only the first disk is checked and no others. If you are installing on a SCSI-only system, this disk will be designated sda. If you are installing on a system that has both IDE and SCSI disks, please select hda if you intend to use the IDE disk as your boot drive.

The following is a list of all entries required to create the boot partition:

1. Mount Point: /boot

2. Filesystem Type: ext3


3. Allowable Drives: select one disk only. This should be the first IDE (hda) or first SCSI (sda) disk

4. Size (MB): 100 (this is the size in megabytes; allocate 100MB by entering "100")

5. Additional Size Options: select the Fixed Size radiobutton from the options

6. Force to be a primary partition: checked (select this checkbox to force the partition to be created as a primary partition)

After configuration, your settings should resemble the following illustration:

Once you are satisfied with your entries, click the OK button to create the partition.

Create / (root) Partition


Proceed by creating a root partition. Click on the New button. You will be presented with the same form as previously when creating the boot partition. The details are identical to what was entered for the /boot partition except this time the Mount Point: should be "/" and the Size(MB): should be 2048MB or at a minimum 1024MB.


Once you are satisfied with your entries, click the OK button to proceed.

Create Swap Partition

Proceed by creating a swap partition. Click on the New button. You will be presented with the same form as previously when creating the boot and root partitions. The details are identical to what was entered for the boot partition, except this time the Mount Point: should be swap; use the drop-down list to select a swap partition type. The Size (MB): of the partition should be at least 1024MB and need not exceed 2048MB.


Once you are satisfied with your entries, proceed by clicking the OK button to create the partition. You should now have a set of partitions ready for the Openfiler operating system image to install to. Your disk partition scheme should resemble the following illustration.

You have now completed the partitioning tasks of the installation process and should click Next to proceed to the next step.

Network Configuration

In this section you will configure network devices, the system hostname, and DNS parameters. You will need to configure at least one network interface card in order to access the Openfiler web interface and to serve data to clients on a network. In the unlikely event that you will be using DHCP to configure the network address, you can simply click Next and proceed to the next stage of the installation process.

If, on the other hand, you wish to define a specific IP address and hostname, click the Edit button at the top right corner of the screen in the Network Devices section. Network interface devices are designated ethX, where X is a number starting at 0. The first network interface device is therefore eth0. If you have more than one network interface device, they will all be listed in the Network Devices section.

When you click the Edit button, a new form will pop up for you to configure the network device in question. As you do not wish to use DHCP for this interface, uncheck the Configure Using DHCP checkbox. This will then allow you to enter a network IP address and netmask in the appropriate form fields. Enter your desired settings and click OK to proceed.

Once you have configured a network IP address, you may now enter a hostname for the system. The default hostname localhost.localdomain is not suitable, and you will need to enter a proper hostname for the system. This will be used later when you configure the system to participate on your network either as an Active Directory / Windows NT PDC client or as an LDAP domain member server. You will also, at this point, need to configure the gateway IP address and DNS server IP addresses. To complete this task you will need the following information:

1. Desired hostname - this is the name you will call the system. Usually this will be a fully qualified hostname, e.g. homer.the-simpsons.com.

2. Gateway IP address - this is the IP address of your network gateway to allow routing to the Internet

3. Primary DNS Server - this is the DNS server on your network. Note that if you intend to use Active Directory or LDAP as your authentication mechanism, you will need to assign a functional DNS IP address so that the authentication mechanism is able to resolve the authentication server hostnames.

4. Secondary/Tertiary DNS Server - enter a second and third DNS server if they are available on your network.

The following illustration shows an example where a hostname has been assigned, and gateway IP, primary and secondary DNS information has also been entered.

Once you are satisfied with your entries, please proceed by clicking the Next button.

Time Zone Selection

Set the default system time zone. You can achieve this by following the instructions on the left side of the screen. If your system BIOS has been configured to use UTC, check the UTC checkbox at the bottom of the screen and click Next to proceed.


Set Root Password

You need to configure a root password for the system. The root password is the superuser administrator password. With the root account, you can log into the system to perform any administrative tasks that are not offered via the web interface.


Select a suitable password and enter it twice in the provided textboxes. When you are satisfied with your entries, click Next to proceed with the installation process.

NB: the root password is meant for logging into the console of the Openfiler server. The default username and password for the Openfiler web management GUI are "openfiler" and "password" respectively.

About To Install

This screen informs you that installation configuration has been completed and the installer is awaiting your input to start the installation process, which will format disks, copy data to the system, and configure system parameters such as setting up the boot loader and adding system users. Click Next if you are satisfied with the entries you have made in the previous screens.

Note

You cannot go back to previous screens once you have gone past this point. The installer will erase any data on the partitions you defined in the partitioning section.


Installation

Once you have clicked Next in the preceding section, the installer will begin the installation process. The following screenshots depict what happens at this point.


Installation Complete

Once the installation has completed, you will be presented with a congratulatory message. At this point you simply need to click the Reboot button to finish the installer and boot into the installed Openfiler system.

Note

After you click Reboot, remove the installation CD from the CD/DVD-ROM drive.

Once the system boots up, start configuring Openfiler by pointing your browser at the hostname or IP address of the Openfiler system. The interface is accessible via HTTPS on port 446, e.g. https://homer.the-simpsons.com:446.

Management Interface: https://<ip of openfiler host>:446

Administrator Username: openfiler

Administrator Password: password


You need to configure Openfiler as an iSCSI target. What does this mean? Basically, Openfiler will be acting as the storage server.

First, create a physical volume. Go to Volumes and click on the corresponding link (see the image below).

Then create a partition on this volume. Click on /dev/sdb.

You will get to a page that looks like the one shown. Click on the Create button.


Then create a volume group. Give it a name and click the Add Volume Group button.

Then add a volume: go to Volumes, then Add Volume.


You will then be directed to the page where you can fill in the name for your iSCSI volume and also a description. You then must drag the slider bar to the right to specify how big the volume should be. Don't forget to use the drop-down box to select iSCSI as the volume type.

Go to the General tab and edit the properties of the volume.

On the General tab, click on Network Setup. Now fill in the IP address of the Openfiler VM, set the subnet mask, select Share, and then click on the Update button.


Go and enable the iSCSI target

To do that, go to Services and click the link to enable the iSCSI target.


Configuring shared storage on both the nodes

Install the iscsi-initiator-utils RPM on both the nodes:

yum install iscsi-initiator-utils-6.2.0.872-6.el5

After you have installed the iSCSI initiator, you have to configure it. The first step is to enter your authentication information, which is stored in the /etc/iscsi/iscsid.conf file. Open this file in your favorite text editor. Assuming your server requires authentication, there are four lines you need to watch for; they are as follows:

node.session.auth.username = test
node.session.auth.password = test

discovery.sendtargets.auth.username = test
discovery.sendtargets.auth.password = test

[root@db2 ~]# /etc/init.d/iscsi start
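To have the initiator come up automatically after a reboot as well, it would be reasonable to enable the services with chkconfig on both nodes; this is a suggested extra step, not part of the original write-up:

# chkconfig iscsi on
# chkconfig iscsid on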

Execute the following set of commands on both the nodes to configure the shared storage:

iscsiadm -m discovery -t st -p 192.168.188.209 -I default -P 1

iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.345e82b7ef0a -p 192.168.188.209:3260 --login

[root@db1 httpd]# fdisk -l

Disk /dev/sda: 160.0 GB, 160000000000 bytes
255 heads, 63 sectors/track, 19452 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1434    11518573+  83  Linux
/dev/sda2            1435        6456    40339215   83  Linux
/dev/sda3            6457       10660    33768630   82  Linux swap / Solaris
/dev/sda4           10661       19452    70621740    5  Extended
/dev/sda5           10661       11425     6144831   83  Linux
/dev/sda6           11426       15629    33768598+  83  Linux
/dev/sda7           15630       16139     4096543+  83  Linux
/dev/sda8           16140       18942    22515066   83  Linux
/dev/sda9           18943       19197     2048256   83  Linux
/dev/sda10          19198       19452     2048256   83  Linux


Disk /dev/sdb: 369.8 GB, 369836949504 bytes
255 heads, 63 sectors/track, 44963 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table
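The LVM steps that produce /dev/vg1/lvo are not shown in the original notes. On one node they would look roughly like the sketch below, after which the logical volume is formatted with GFS using the gfs_mkfs command that follows; the -c y flag marks the volume group as clustered, and the size is chosen only to match the lvs output shown further down:

pvcreate /dev/sdb
vgcreate -c y vg1 /dev/sdb
lvcreate -L 78G -n lvo vg1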

gfs_mkfs -p lock_dlm -t Sierra:storage1 -j 8 /dev/vg1/lvo

Now make an entry in /etc/fstab as follows:

/dev/vg1/lvo /var/www/html/ gfs defaults 0 0
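With that entry in place, the gfs init service mounts the filesystem at boot. To bring it up immediately on each node, something along these lines should work (a sketch rather than output captured from this setup):

# service gfs start
# mount | grep /var/www/html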

[root@db2 ~]# /etc/init.d/gfs status
Configured GFS mountpoints:
/var/www/html/
Active GFS mountpoints:
/var/www/html

[root@db1 ~]# lvs
  LV   VG   Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  lvo  vg1  -wi-a- 78.12G

[root@db1 ~]# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  vg1    1   1   0 wz--nc 344.43G 266.31G

[root@db1 ~]# pvs
  PV        VG   Fmt  Attr PSize   PFree
  /dev/sdb  vg1  lvm2 a-   344.43G 266.31G
[root@db1 ~]# vgscan
  Reading all physical volumes. This may take a while...
  Found volume group "vg1" using metadata type lvm2

Note:

Execute the following set of commands on both the nodes to remove any previous logs or shared storage

Logging out of session


iscsiadm -m node --logoutall=all

This command will remove the record:

iscsiadm -m session --sid=1 --op=delete

iscsiadm -d 9 -m discovery -t sendtargets -p 192.168.1.5

Adding the fence device in Conga


Installing and Configuring the Apache HTTP Server

The Apache HTTP Server must be installed and configured on all nodes in the assigned failover domain, if used, or in the cluster. The basic server configuration must be the same on all nodes on which it runs for the service to fail over correctly. The following example shows a basic Apache HTTP Server installation that includes no third-party modules or performance tuning. On all nodes in the cluster (or nodes in the failover domain, if used), install the httpd RPM package:

yum install httpd-2.2.3-11.el5_1.3

To configure the Apache HTTP Server as a cluster service, perform the following tasks:

1. Edit the /etc/httpd/conf/httpd.conf configuration file and customize the file according to your configuration. For example:

Specify the directory that contains the HTML files. Also specify this mount point when adding the service to the cluster configuration. It is only required to change this field if the mount point for the web site's content differs from the default setting of /var/www/html/. For example:

DocumentRoot "/mnt/httpdservice/html"

Specify a unique IP address on which the service will listen for requests. For example:

Listen 192.168.188.201:80

This IP address then must be configured as a cluster resource for the service using the Cluster Configuration Tool.

If the script directory resides in a non-standard location, specify the directory that contains the CGI programs. For example:

ScriptAlias /cgi-bin/ "/mnt/httpdservice/cgi-bin/"

Specify the path that was used in the previous step, and set the access permissions to default to that directory. For example:


<Directory "/mnt/httpdservice/cgi-bin">
    AllowOverride None
    Options None
    Order allow,deny
    Allow from all
</Directory>

Additional changes may need to be made to tune the Apache HTTP Server or add module functionality.

The standard Apache HTTP Server start script, /etc/rc.d/init.d/httpd, is also used within the cluster framework to start and stop the Apache HTTP Server on the active cluster node. Accordingly, when configuring the service, specify this script by adding it as a Script resource in the Cluster Configuration Tool.

2. Copy the configuration file over to the other nodes of the cluster (or nodes of the failover domain, if configured).

Before the service is added to the cluster configuration, ensure that the Apache HTTP Server directories are not mounted. Then, on one node, invoke the Cluster Configuration Tool to add the service, as follows. This example assumes a failover domain named httpd-domain was created for this service.

1. Add the init script for the Apache HTTP Server service.

Select the Resources tab and click Create a Resource. The Resources Configuration properties dialog box is displayed.

Select Script from the drop-down menu. Enter a Name to be associated with the Apache HTTP Server service. Specify the path to the Apache HTTP Server init script (for example, /etc/rc.d/init.d/httpd) in the File (with path) field. Click OK.

2. Add a device for the Apache HTTP Server content files and/or custom scripts.

Click Create a Resource. In the Resource Configuration dialog, select File System from the drop-down menu. Enter the Name for the resource (for example, httpd-content). Choose ext3 from the File System Type drop-down menu. Enter the mount point in the Mount Point field (for example, /var/www/html/). Enter the device special file name in the Device field (for example, /dev/sda3).

3. Add an IP address for the Apache HTTP Server service.


Click Create a Resource. Choose IP Address from the drop-down menu. Enter the IP Address to be associated with the Apache HTTP Server service. Make sure that the Monitor Link checkbox is left checked. Click OK.

4. Click the Services property.

5. Create the Apache HTTP Server service.

Click Create a Service. Type a Name for the service in the Add a Service dialog.

In the Service Management dialog, select a Failover Domain from the drop-down menu or leave it as None.

Click the Add a Shared Resource to this service button. From the available list, choose each resource that you created in the previous steps. Repeat this step until all resources have been added.

Click OK.

6. Choose File => Save to save your changes.
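After saving, the resource manager (<rm>) section of /etc/cluster/cluster.conf should end up looking roughly like the sketch below. This is an illustrative reconstruction using the rgmanager schema, with the resource names, failover domain, device, and IP taken from the examples above; compare it against the file the tool actually writes rather than copying it verbatim:

<rm>
  <failoverdomains>
    <failoverdomain name="httpd-domain" ordered="0" restricted="0"/>
  </failoverdomains>
  <resources>
    <ip address="192.168.188.201" monitor_link="1"/>
    <fs name="httpd-content" device="/dev/sda3" mountpoint="/var/www/html" fstype="ext3"/>
    <script name="httpd-script" file="/etc/rc.d/init.d/httpd"/>
  </resources>
  <service autostart="1" domain="httpd-domain" name="httpd-service">
    <ip ref="192.168.188.201"/>
    <fs ref="httpd-content"/>
    <script ref="httpd-script"/>
  </service>
</rm>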

References

http://www.openfiler.com/learn/how-to/graphical-installation

http://www.everythingvm.com/content/connecting-storage-systems-using-iscsi-nfs-and-cifs-smb


http://www.vladan.fr/how-to-configure-openfiler-iscsi-storage-for-use-with-vmware-esx/

http://www.vladan.fr/how-to-connect-esx4-vsphere-to-openfiler-iscsi-nas/

http://sources.redhat.com/cluster/conga/doc/user_manual.htm

http://rhel-cluster.blogspot.com/

https://access.redhat.com/kb/docs/DOC-9826

http://www.itchythinking.com/itchythinking/knowledge/node/81