High-Availability oVirt-Cluster with iSCSI-Storage



Benjamin Alfery, Philipp Richter
Copyright © 2013 LINBIT HA-Solutions GmbH

Trademark notice
DRBD and LINBIT are trademarks or registered trademarks of LINBIT in Austria, the United States, and other countries. Other names mentioned in this document may be trademarks or registered trademarks of their respective owners.

License information
The text and illustrations in this document are licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license ("CC BY-NC-ND").

    A summary of CC BY-NC-ND is available at http://creativecommons.org/licenses/by-nc-nd/3.0/.

    The full license text is available at http://creativecommons.org/licenses/by-nc-nd/3.0/legalcode.

    In accordance with CC BY-NC-ND, if you distribute this document, you must provide the URL for the original version.

Table of Contents

1. Introduction
   1.1. Goal of this guide
   1.2. Limitations
   1.3. Conventions in this document
2. Software
   2.1. Software repositories
      2.1.1. LINBIT DRBD and pacemaker repositories
      2.1.2. Enable EPEL repository
   2.2. DRBD and pacemaker installation
   2.3. Optional: Install csync2
3. Backing devices
4. oVirt Manager preparation
   4.1. DRBD resource for the oVirt Manager virtual machine
   4.2. Create a network bridge
   4.3. Install libvirt/qemu
   4.4. KVM definition and system installation
5. Heartbeat configuration
6. Pacemaker rules for KVM-DRBD resource and virtual machine
7. iSCSI preparation
   7.1. DRBD resource for the iSCSI target
8. Pacemaker rules for iSCSI-DRBD resource, iSCSI target and iSCSI service IP address
   8.1. Portblock for the iSCSI target
   8.2. iSCSI resource group
   8.3. Constraints for the iSCSI resources
9. oVirt Manager, hypervisors and iSCSI storage
   9.1. oVirt Manager installation
      9.1.1. Reconfigure oVirt machinetypes
   9.2. Reconfigure LVM
   9.3. Extend udev rule for DRBD
   9.4. Hypervisor installation
      9.4.1. Adjust libvirt access
   9.5. Second node hypervisor installation
   9.6. Storage setup
10. Test, fence and backup
11. Further documentation and links
12. Appendix
   12.1. Configurations
      12.1.1. DRBD
      12.1.2. KVM
      12.1.3. Heartbeat
      12.1.4. Pacemaker
      12.1.5. Others

Chapter 1. Introduction

oVirt [1] is a management application for virtual machines that uses the libvirt interface. It consists of a web-based user interface (oVirt Manager), one or more hypervisors, and data storage for the virtual guests.

DRBD [2] is a distributed storage system for Linux. It has been included in the Linux kernel since 2.6.33 [3]. LINBIT [4], the authors of DRBD, actively supports and develops DRBD, the world-leading open source shared-nothing storage solution for the Linux ecosystem. LINBIT is a premier business partner of Red Hat, and DRBD is an accepted third-party solution by Red Hat. This means that you won't lose your Red Hat support when you are using DRBD with a LINBIT support subscription.

1.1. Goal of this guide

To provide a highly available virtualization environment with oVirt, we are going to use two physical machines providing a bare KVM (hosting the oVirt Manager) and an iSCSI target as storage for the virtual guests. Furthermore, the two physical nodes will be used as oVirt hypervisors.

We will use DRBD for data replication of the KVM and the iSCSI storage between the nodes. Pacemaker and heartbeat will serve as the cluster management system.

[Figure: Cluster architecture. Both nodes, ovirt-hyp1 and ovirt-hyp2, hold the logical volumes LV kvm_oVirtm and LV iscsi on their local disks (LVM), replicated between the nodes via DRBD (resources kvm-ovirtm and iscsi). On top of the replicated storage run the oVirtm VM and the iSCSI target store1; the oVirt hypervisors attach the store via the oVirt iSCSI initiator and run the oVirt VMs from it.]

[1] http://www.ovirt.org
[2] http://www.drbd.org
[3] http://www.drbd.org/download/mainline/
[4] http://www.linbit.com/


1.2. Limitations

This guide covers only the important steps to set up a highly available oVirt cluster with iSCSI storage using DRBD for data replication. It does not cover additional important topics that should be considered in a production environment:

Performance tuning of any kind (DRBD, iSCSI, oVirt, …)

    oVirt power management configuration

    Fencing

    WARNING

This guide does not cover the configuration of your cluster's fencing strategy. This is vitally important in production environments. If you are uncertain of how to set up fencing in your environment, or of any other topic within this document, you may want to consult with the friendly experts at LINBIT beforehand.

1.3. Conventions in this document

This guide assumes two machines named ovirt-hyp1 and ovirt-hyp2. They are connected via a dedicated cross-over 1 gigabit-ethernet link, using the IP addresses 192.168.10.10 and 192.168.10.11.

DRBD will use the minor numbers 0 (resource name: kvm-ovirtm) and 1 (resource name: iscsi) for the replicated volumes.

This document describes an oVirt/iSCSI/DRBD/Pacemaker installation on an x86_64 machine running Linux kernel version 2.6.32-358.14.1.el6.x86_64 with Scientific Linux 6.4 user space, up-to-date as of August 2013. The DRBD kernel module and user space version is 8.4.3.

It is also assumed that logical volumes are used as the backing devices. While not strictly necessary, logical volumes are highly recommended for flexibility.

    This guide assumes basic Linux administration, DRBD and Pacemaker knowledge.

All configuration files used are available in Chapter 12, Appendix.

Chapter 2. Software

It is assumed that the base system is already set up. Most of the needed packages are already installed on SL6.

Pacemaker is a cluster resource management framework which you will use to automatically start, stop, monitor, and migrate resources. This technical guide assumes that you are using at least pacemaker 1.1.6.

Heartbeat is the cluster messaging layer that pacemaker uses. This guide assumes at least heartbeat version 3.0.5. Using the LINBIT pacemaker repository, this should come bundled with pacemaker.

DRBD is a kernel block-level synchronous replication facility which serves as an important shared-nothing cluster building block. Pre-compiled packages are available in official repositories from LINBIT. You will install the drbd-utils and drbd-kmod packages. These comprise the DRBD administration utilities and kernel module.

libvirt is an open source management tool for virtualization. It provides a unified API to virtualization technologies such as KVM, QEMU, Xen and VMware ESX.

oVirt/oVirtm is a management application for virtual machines. This guide assumes oVirt engine version 3.2.

Csync2, while not strictly necessary, is a highly recommended tool to keep configuration files synchronized on multiple machines. Its sources can be downloaded from LINBIT's OSS pages (http://oss.linbit.com/csync2/). A paper providing an overview and describing its use is available as well (http://oss.linbit.com/csync2/paper.pdf).

2.1. Software repositories

Assuming the operating system is fully installed and up-to-date, and the network interfaces (cross-link and service-link) are configured and operational, the first step is to add some missing repositories to the system. (Be sure to add them on both nodes.)

2.1.1. LINBIT DRBD and pacemaker repositories

# cat > /etc/yum.repos.d/drbd-8.repo
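The content of the repository file did not survive the conversion of this document. As a sketch only: the section name and baseurl below are placeholders, and LINBIT provides the actual repository URL with a support subscription.

[drbd-8]
name=LINBIT DRBD 8 and pacemaker packages
# placeholder: use the repository URL provided by LINBIT
baseurl=<your-LINBIT-repository-URL>
enabled=1
gpgcheck=0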


2.1.2. Enable EPEL repository

There may be some packages or dependencies that need the EPEL [3] repository.

# rpm -Uvh http://download.fedoraproject.org/pub/epel/6/<arch>/epel-release-6-8.noarch.rpm

Replace <arch> by your architecture (i386, x86_64, …).

2.2. DRBD and pacemaker installation

If you are using the LINBIT repositories you can easily install DRBD and pacemaker by executing:

    # yum -y install drbd kmod-drbd drbd-pacemaker pacemaker-hb pacemaker-hb-cli heartbeat

    As we will use pacemaker to manage DRBD we need to disable it on startup (on both nodes):

    # chkconfig drbd off

2.3. Optional: Install csync2

If you want to install csync2 (this will need the EPEL repository), issue:

    # yum install csync2

Configuration and usage of csync2 is not covered in this guide. Please consult the corresponding paper [4] for usage information.

[3] https://fedoraproject.org/wiki/EPEL
[4] http://oss.linbit.com/csync2/paper.pdf

Chapter 3. Backing devices

We will need two resources that will be replicated with DRBD: one for the KVM hosting the oVirt Manager, and one containing the iSCSI target.

To create the backing devices for DRBD, create two logical volumes (size and naming may vary in your installation):

# lvcreate -L10G -n kvm_oVirtm system
# lvcreate -L30G -n iscsi system

    Note

    Be sure to create identical logical volumes on both nodes.
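To double-check, list the logical volumes of the volume group on each node (assuming the volume group is named system, as in this guide):

# lvs system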

Chapter 4. oVirt Manager preparation

4.1. DRBD resource for the oVirt Manager virtual machine

Configure the DRBD resource on both nodes:

    # cat > /etc/drbd.d/kvm-ovirtm.res
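The file content, as reproduced in Section 12.1.1 of the appendix:

resource kvm-ovirtm {
    net {
        protocol C;
    }
    volume 0 {
        device minor 0;
        disk /dev/system/kvm_oVirtm;
        meta-disk internal;
    }
    on ovirt-hyp1 {
        address 192.168.10.10:7788;
    }
    on ovirt-hyp2 {
        address 192.168.10.11:7788;
    }
}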


4.4. KVM definition and system installation

Prepare a KVM definition file (identical on both nodes) for your oVirt Manager KVM. This definition should contain the configured DRBD resource as hard disk. It should look similar to this:

<domain type='kvm'>
  <name>oVirtm</name>
  <uuid>34ad3032-f68e-734a-8e84-47af69e7848a</uuid>
  <memory>1572864</memory>
  <currentMemory>1572864</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type>hvm</type>
  </os>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <!-- disk, network and further device definitions did not survive extraction -->
  </devices>
</domain>
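The disk definition itself is among the parts that were lost. As an illustration only, a block-device disk entry pointing at the DRBD resource might look like the following; the /dev/drbd/by-res path mirrors the naming used for the iSCSI LUN later in this guide, and the driver and target settings are assumptions to adapt:

<disk type='block' device='disk'>
  <!-- hypothetical example: adjust driver, source and target to your setup -->
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/drbd/by-res/kvm-ovirtm/0'/>
  <target dev='vda' bus='virtio'/>
</disk>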

You may now start your virtual machine on the node that holds the primary role for the used resource and install the base operating system (we will come back to this later on).

Chapter 5. Heartbeat configuration

To enable cluster communication we need to configure heartbeat (again on both nodes). In /etc/ha.d/ create the file ha.cf with the heartbeat parameters. It should contain something like:

autojoin none
node ovirt-hyp1
node ovirt-hyp2
bcast eth1
mcast ovirtmgmt 239.192.0.51 694 1 0
use_logd yes
initdead 120
deadtime 20
warntime 10
keepalive 1
compression bz2
crm respawn

    Make sure to also create the authkeys file in this directory, containing something like:

auth 1
1 sha1 sdrsdfrgaqerbqerbq34bgaebaqejrbnSDFQ23Fwe

The string in the second line is the shared secret for cluster communication. Be sure to set the proper ownership and permissions for the authkeys file on both nodes:

# chown root: /etc/ha.d/authkeys
# chmod 600 /etc/ha.d/authkeys

    Set the heartbeat service to be started on system startup (on both nodes):

    # chkconfig heartbeat on

    Now start heartbeat:

    # service heartbeat start

    Check your firewall settings to allow the cluster communication.
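Heartbeat communicates over UDP port 694 on the interfaces configured in ha.cf (eth1 and ovirtmgmt above). A minimal sketch with iptables, assuming a default-deny INPUT policy; persist the rules with your distribution's usual mechanism:

# iptables -A INPUT -i eth1 -p udp --dport 694 -j ACCEPT
# iptables -A INPUT -i ovirtmgmt -p udp --dport 694 -j ACCEPT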

Chapter 6. Pacemaker rules for KVM-DRBD resource and virtual machine

Prepare your initial pacemaker configuration and add the primitive for the DRBD resource (these actions are done via the CRM shell):

$ primitive p_drbd_kvm-ovirtm ocf:linbit:drbd \
    params drbd_resource="kvm-ovirtm" \
    op monitor interval="29" role="Master" timeout="30" \
    op monitor interval="30" role="Slave" timeout="30" \
    op start interval="0" timeout="240" \
    op stop interval="0" timeout="100"

    Add a master-slave statement as this resource spans over two cluster nodes:

$ ms ms_drbd_kvm-ovirtm p_drbd_kvm-ovirtm \
    meta clone-node-max="1" clone-max="2" master-max="1" master-node-max="1" notify="true"

    For the virtual machine set the following primitive:

$ primitive p_kvm-ovirtm ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/oVirtm.xml" \
    op start interval="0" timeout="180s" \
    op stop interval="0" timeout="300s" \
    op monitor interval="60s" timeout="60s"

To bind the two primitives together we need to set two constraints. The first is a colocation constraint to make the primary side of DRBD run with the virtual machine:

    $ colocation co_kvm-ovirtm_with_drbd +inf: p_kvm-ovirtm:Started ms_drbd_kvm-ovirtm:Master

    The second rule defines the order. The DRBD resource must be promoted before the KVM can start:

    $ order o_drbd-kvm-ovirtm_before_kvm +inf: ms_drbd_kvm-ovirtm:promote p_kvm-ovirtm:start

    Test and commit the changes.
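In the CRM shell's configure level this is typically a verify followed by a commit; verify reports configuration errors before commit makes the changes live:

$ verify
$ commit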

Chapter 7. iSCSI preparation

To configure an iSCSI target, we need the iSCSI user space tools. As we are going to use a tgt target, this will be:

    # yum install scsi-target-utils

Make sure it's started on system startup:

    # chkconfig tgtd on

    Then start the service:

    # service tgtd start

    (Again, do all these steps on both nodes.)

7.1. DRBD resource for the iSCSI target

Configure the DRBD resource on both nodes:

    # cat > /etc/drbd.d/iscsi.res
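The file content, as reproduced in Section 12.1.1 of the appendix:

resource iscsi {
    net {
        protocol C;
    }
    volume 0 {
        device minor 1;
        disk /dev/system/iscsi;
        meta-disk internal;
    }
    on ovirt-hyp1 {
        address 192.168.10.10:7789;
    }
    on ovirt-hyp2 {
        address 192.168.10.11:7789;
    }
}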

Chapter 8. Pacemaker rules for iSCSI-DRBD resource, iSCSI target and iSCSI service IP address

In the CRM shell add a primitive for the iSCSI-DRBD resource:

$ primitive p_drbd_iscsi ocf:linbit:drbd \
    params drbd_resource="iscsi" \
    op monitor interval="29" role="Master" timeout="30" \
    op monitor interval="30" role="Slave" timeout="30" \
    op start interval="0" timeout="240" \
    op stop interval="0" timeout="100"

Add a master-slave statement, as this resource also spans two cluster nodes:

$ ms ms_drbd_iscsi p_drbd_iscsi \
    meta clone-node-max="1" clone-max="2" master-max="1" master-node-max="1" notify="true"

    Add a primitive for the iSCSI target:

$ primitive p_iscsi_store1 ocf:heartbeat:iSCSITarget \
    params implementation="tgt" iqn="iqn.2013-08.linbit.ovirtiscsi:store1" tid="1" \
    op start interval="0" timeout="60" \
    op stop interval="0" timeout="60" \
    op monitor interval="30" timeout="60"

As we need a logical unit (LUN) for the target that refers to the backing device, we need to set another primitive:

$ primitive p_iscsi_store1_lun1 ocf:heartbeat:iSCSILogicalUnit \
    params implementation="tgt" target_iqn="iqn.2013-08.linbit.ovirtiscsi:store1" lun="1" \
    path="/dev/drbd/by-res/iscsi/0" \
    op start interval="0" timeout="60" \
    op stop interval="0" timeout="60" \
    op monitor interval="30" timeout="60"

To access the target independently of the node it is running on, we configure a service IP address for it:

$ primitive p_ip_iscsi ocf:heartbeat:IPaddr2 \
    params ip="192.168.10.50" \
    op start interval="0" timeout="20" \
    op stop interval="0" timeout="20" \
    op monitor interval="30" timeout="20"

8.1. Portblock for the iSCSI target

As some clients might have problems receiving a TCP reject from the iSCSI service during a switch-over or fail-over, we are going to set a cluster-managed rule that applies a DROP policy in the firewall during the transfer from one node to the other. This is considered safer, as the clients don't receive a response from the server during this action and do not drop their connection (but keep retrying for some time). This gives the cluster some time to start all the resources.

    The portblock resource agent is designed to achieve this kind of requirement:

$ primitive p_portblock-store1-block ocf:heartbeat:portblock \
    params ip="192.168.10.50" portno="3260" protocol="tcp" action="block"


    To unblock the port again set the primitive:

$ primitive p_portblock-store1-unblock ocf:heartbeat:portblock \
    params ip="192.168.10.50" portno="3260" protocol="tcp" action="unblock" \
    op monitor interval="30s"

8.2. iSCSI resource group

As we configured some primitives that must always run together, we will define a group for them:

$ group g_iscsi p_portblock-store1-block p_ip_iscsi p_iscsi_store1 \
    p_iscsi_store1_lun1 p_portblock-store1-unblock

8.3. Constraints for the iSCSI resources

As the DRBD resource and the services in the iSCSI group have to run on the same node, we define a colocation constraint specifying that the two resources should always run together on the same node:

    $ colocation co_g_iscsi_with_drbd +inf: g_iscsi:Started ms_drbd_iscsi:Master

Of course, the DRBD resource has to be promoted before the other services can access it. We must then set an order constraint to that effect:

    $ order o_drbd_iscsi_before_g_iscsi +inf: ms_drbd_iscsi:promote g_iscsi:start

    Test and commit the changes.

Chapter 9. oVirt Manager, hypervisors and iSCSI storage

As pacemaker is already fully configured by now, it's time to install the oVirt Manager inside the virtual machine, install the hypervisor components on the physical nodes, and enable the iSCSI storage for use in oVirt.

9.1. oVirt Manager installation

Assuming the operating system inside the VM is already installed and up to date, connect to it and enable the oVirt repository:

    # cat > /etc/yum.repos.d/ovirt.repo
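The content of this repository file was likewise not preserved. As a sketch only, with a placeholder baseurl; use the repository definition matching your oVirt release:

[ovirt]
name=oVirt 3.2 packages
# placeholder: use the repository URL for your oVirt release
baseurl=<oVirt-3.2-repository-URL>
enabled=1
gpgcheck=0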

9.1.1. Reconfigure oVirt machinetypes

    To get the current machine types, type:

    # engine-config -g EmulatedMachine

If this shows something like:

EmulatedMachine: pc-0.14 version: 3.1
EmulatedMachine: pc-0.14 version: 3.2
EmulatedMachine: pc-0.14 version: 3.0

Then we need to set it manually. As we use hypervisor version 3.2, the EmulatedMachine should be rhel6.4.0:

    # engine-config -s EmulatedMachine=rhel6.4.0 --cver=3.2
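You can re-run the query from above to confirm the new value before restarting:

# engine-config -g EmulatedMachine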

    Restart the ovirt engine:

    # service ovirt-engine restart

9.2. Reconfigure LVM

Before we can begin installing the hypervisors on the physical nodes, we need to reconfigure LVM on the physical nodes, as the hypervisor installation will use LVM as well.

    Do the following on both physical nodes.

    In /etc/lvm/lvm.conf, set:

    write_cache_state = 0

Extend the preferred_names parameter by your volume group name; e.g. if your volume group name is system, this looks similar to the following:

    preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", ... , "^/dev/system"]

    Finally, extend (or set) a filter for the DRBD devices:

    filter = [ "r|^/dev/drbd.*|" ]

9.3. Extend udev rule for DRBD

As the hypervisor installation changes the qemu configuration and pacemaker has to access the DRBD device, we need to extend the DRBD udev rule.

    In /etc/udev/rules.d/65-drbd.rules extend the existing rule (on both nodes) by setting:

    OWNER="vdsm", GROUP="kvm"

The rule should now look something like:

SUBSYSTEM=="block", KERNEL=="drbd*", IMPORT{program}="/sbin/drbdadm sh-udev minor-%m", NAME="$env{DEVICE}", SYMLINK="drbd/by-res/$env{RESOURCE} drbd/by-disk/$env{DISK}", OWNER="vdsm", GROUP="kvm"
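To apply the changed rule without a reboot, the udev rules can be reloaded and re-triggered (standard udevadm invocations; the subsystem match limits re-evaluation to block devices):

# udevadm control --reload-rules
# udevadm trigger --subsystem-match=block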

9.4. Hypervisor installation

As we want our physical nodes to act as hypervisors, we need to install the corresponding components on them. The installation process itself is done via the oVirt Manager web front-end, except for minor adjustments, as we have a special setup.


First we have to enable the oVirt repositories on both physical nodes (as shown in Section 9.1, oVirt Manager installation).

Second, we must set one node (the one that gets installed first) to "standby" in the pacemaker cluster manager, preferably the one that runs the virtual machine, as we then don't have to wait for it while it restarts; see the sketch below.
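For example, to put ovirt-hyp1 into standby, and later bring it back online (standard crm node subcommands; substitute your node name):

# crm node standby ovirt-hyp1
# crm node online ovirt-hyp1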

Wait until all resources have switched over to the remaining online node, then log in to the oVirt Manager web interface.

In the left navigation tree click on "System", then on the "Hosts" tab in the top navigation bar, and then on "New". Fill in the form with the information of the host that is currently in standby: "Name", "Address" and "Root Password". Uncheck "Automatically configure host firewall".

    Click "OK". The hypervisor installation is now performed. You can watch the process in the logarea (grey bar atthe bottom of the page). When the installation is finished the node gets rebooted (as part of the installationprocess). The hypervisor should now show up in the webinterface.

9.4.1. Adjust libvirt access

Because the hypervisor installation protects access to libvirt with a password, we need to enable access again, as pacemaker has to manage the oVirt Manager KVM. This can be done by setting a password for a pacemaker user (pcmk) in the libvirt SASL database:

    # saslpasswd2 -a libvirt pcmk

    In /etc/libvirt/auth.conf set:

[credentials-pcmk]
authname=pcmk
password=your-password-from-saslpasswd2

[auth-libvirt-ovirt-hyp1]
credentials=pcmk

[auth-libvirt-ovirt-hyp2]
credentials=pcmk

[auth-libvirt-localhost]
credentials=pcmk


Set the password accordingly and deploy this file on the other node as well.

    You can test if the access works by running:

    # virsh list

    The above command should not ask for a password anymore.

9.5. Second node hypervisor installation

To install the second hypervisor on the remaining node, we need to set the standby node online again. Before setting the other node to standby, make sure that all DRBD resources are in sync again.
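The synchronization state can be checked on either node; with DRBD 8.4, both resources should report ds:UpToDate/UpToDate:

# cat /proc/drbd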

Set the remaining node to standby and wait for the resources to switch over. Once the oVirt Manager KVM is back up, log in to the web interface and perform the installation for the second node (as previously described).

When the node has rebooted, don't forget to set a password via:

    # saslpasswd2 -a libvirt pcmk

If /etc/libvirt/auth.conf is not already in place, set it up as shown above.

    Set the standby node online again to ensure full cluster functionality.

9.6. Storage setup

We can now set up the iSCSI storage in the oVirt Manager. Log in to the web interface. In the left navigation tree click on "System", in the top navigation bar click on the "Storage" tab and then on "New Domain". Fill in the "Name" of the storage and select a host ("Use Host"); it does not actually matter which host you select, as it is only used for the setup.

    In the "Discover Targets" area fill in the "Address" and the "Port". The address is the service IP address weconfigured in pacemaker that should always point to the active iSCSI target. The port is "3260" if not specifiedotherwise.

    Click on "Discover" to discover the iSCSI target. If its discovered correctly, click "Login".


    Select the "LUN ID" you want to use (in this case there is only one available).

    Click "OK". The new storage is being initialized and should go up after a while.


Now all the components should be in place. You should test the whole setup extensively and create test virtual machines.

Chapter 10. Test, fence and backup

A word of warning

As mentioned at the beginning of this document, this guide describes only the basic steps of how to set up a highly available oVirt cluster with iSCSI storage and DRBD. Testing and fencing (such as STONITH) are vitally important for production usage of this setup. Without extensive tests and fencing strategies in place, you can easily destroy your environment and the data stored on top of it.

    Also think of a proper backup policy, as DRBD is not a replacement for backups.

If you are unsure about one or more of these topics (or any other within this document), consult with the friendly experts at LINBIT.

Chapter 11. Further documentation and links

oVirt documentation: The oVirt documentation page

    http://www.ovirt.org/Documentation

Building oVirt engine: Building oVirt Engine from scratch
http://www.ovirt.org/Building_oVirt_engine

RHEL V2V Guide: Red Hat guide on how to import virtual machines from foreign hypervisors to Red Hat Enterprise Virtualization and KVM managed by libvirt
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/V2V_Guide/index.html

DRBD User's Guide: The reference guide for DRBD
http://www.drbd.org/docs/about/

LINBIT Tech Guides: LINBIT provides a lot of in-depth knowledge via its tech guides
http://www.linbit.com/en/education/tech-guides/

Chapter 12. Appendix

12.1. Configurations

12.1.1. DRBD

/etc/drbd.d/kvm-ovirtm.res

resource kvm-ovirtm {
    net {
        protocol C;
    }
    volume 0 {
        device minor 0;
        disk /dev/system/kvm_oVirtm;
        meta-disk internal;
    }
    on ovirt-hyp1 {
        address 192.168.10.10:7788;
    }
    on ovirt-hyp2 {
        address 192.168.10.11:7788;
    }
}

    /etc/drbd.d/iscsi.res

resource iscsi {
    net {
        protocol C;
    }
    volume 0 {
        device minor 1;
        disk /dev/system/iscsi;
        meta-disk internal;
    }
    on ovirt-hyp1 {
        address 192.168.10.10:7789;
    }
    on ovirt-hyp2 {
        address 192.168.10.11:7789;
    }
}


12.1.2. KVM

/etc/libvirt/qemu/oVirtm.xml (be sure to take this configuration only as a guideline)

<domain type='kvm'>
  <name>oVirtm</name>
  <uuid>34ad3032-f68e-734a-8e84-47af69e7848a</uuid>
  <memory>1572864</memory>
  <currentMemory>1572864</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type>hvm</type>
  </os>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <!-- disk, network and further device definitions did not survive extraction -->
  </devices>
</domain>


12.1.3. Heartbeat

/etc/ha.d/ha.cf

autojoin none
node ovirt-hyp1
node ovirt-hyp2
bcast eth1
mcast ovirtmgmt 239.192.0.51 694 1 0
use_logd yes
initdead 120
deadtime 20
warntime 10
keepalive 1
compression bz2
crm respawn

    /etc/ha.d/authkeys

auth 1
1 sha1 sdrsdfrgaqerbqerbq34bgaebaqejrbnSDFQ23Fwe

    (adapt the key in this configuration)


12.1.4. Pacemaker

CRM config:

primitive p_drbd_iscsi ocf:linbit:drbd \
    params drbd_resource="iscsi" \
    op monitor interval="29" role="Master" timeout="30" \
    op monitor interval="30" role="Slave" timeout="30" \
    op start interval="0" timeout="240" \
    op stop interval="0" timeout="100"
primitive p_drbd_kvm-ovirtm ocf:linbit:drbd \
    params drbd_resource="kvm-ovirtm" \
    op monitor interval="29" role="Master" timeout="30" \
    op monitor interval="30" role="Slave" timeout="30" \
    op start interval="0" timeout="240" \
    op stop interval="0" timeout="100"
primitive p_ip_iscsi ocf:heartbeat:IPaddr2 \
    params ip="192.168.10.50" \
    op start interval="0" timeout="20" \
    op stop interval="0" timeout="20" \
    op monitor interval="30" timeout="20"
primitive p_iscsi_store1 ocf:heartbeat:iSCSITarget \
    params implementation="tgt" iqn="iqn.2013-08.linbit.ovirtiscsi:store1" tid="1" \
    op start interval="0" timeout="60" \
    op stop interval="0" timeout="60" \
    op monitor interval="30" timeout="60" \
    meta is-managed="true"
primitive p_iscsi_store1_lun1 ocf:heartbeat:iSCSILogicalUnit \
    params implementation="tgt" target_iqn="iqn.2013-08.linbit.ovirtiscsi:store1" \
    lun="1" path="/dev/drbd/by-res/iscsi/0" \
    op start interval="0" timeout="60" \
    op stop interval="0" timeout="60" \
    op monitor interval="30" timeout="60"
primitive p_kvm-ovirtm ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/oVirtm.xml" \
    op start interval="0" timeout="180s" \
    op stop interval="0" timeout="300s" \
    op monitor interval="60s" timeout="60s"
primitive p_portblock-store1-block ocf:heartbeat:portblock \
    params ip="192.168.10.50" portno="3260" protocol="tcp" action="block"
primitive p_portblock-store1-unblock ocf:heartbeat:portblock \
    params ip="192.168.10.50" portno="3260" protocol="tcp" action="unblock" \
    op monitor interval="30s"
group g_iscsi p_portblock-store1-block p_ip_iscsi p_iscsi_store1 p_iscsi_store1_lun1 \
    p_portblock-store1-unblock
ms ms_drbd_iscsi p_drbd_iscsi \
    meta clone-node-max="1" clone-max="2" master-max="1" master-node-max="1" notify="true"
ms ms_drbd_kvm-ovirtm p_drbd_kvm-ovirtm \
    meta clone-node-max="1" clone-max="2" master-max="1" master-node-max="1" notify="true"
colocation co_g_iscsi_with_drbd +inf: g_iscsi:Started ms_drbd_iscsi:Master
colocation co_kvm-ovirtm_with_drbd +inf: p_kvm-ovirtm:Started ms_drbd_kvm-ovirtm:Master
order o_drbd-kvm-ovirtm_before_kvm +inf: ms_drbd_kvm-ovirtm:promote p_kvm-ovirtm:start
order o_drbd_iscsi_before_g_iscsi +inf: ms_drbd_iscsi:promote g_iscsi:start
property $id="cib-bootstrap-options" \
    dc-version="1.1.6-0c7312c689715e096b716419e2ebc12b57962052" \
    cluster-infrastructure="Heartbeat" \
    no-quorum-policy="ignore" \
    stonith-enabled="false" \
    default-resource-stickiness="200" \
    maintenance-mode="off"
rsc_defaults $id="rsc-options" \
    resource-stickiness="200"


12.1.5. Others

/etc/libvirt/auth.conf

[credentials-pcmk]
authname=pcmk
password=your-password-from-saslpasswd2

[auth-libvirt-ovirt-hyp1]
credentials=pcmk

[auth-libvirt-ovirt-hyp2]
credentials=pcmk

[auth-libvirt-localhost]
credentials=pcmk
