High Availability MySQL Database Replication With Solaris Zone Cluster



    HIGH AVAILABILITY MYSQL
    DATABASE REPLICATION WITH SOLARIS ZONE CLUSTER

    Pedro Lay, Technical Systems Marketing

    Sun BluePrints Online

    Part No 820-7582-10

    Revision 1.0, 2/23/09


    Table of Contents

    MySQL database replication . . . . . . . . . . . . . . . . . . . . . . . . 1
    Solaris Zones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  3
    Solaris Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  3
    Example configuration . . . . . . . . . . . . . . . . . . . . . . . . . .  4
    Failover scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
    Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
    About the author . . . . . . . . . . . . . . . . . . . . . . . . . . . .  11
    Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . .  11
    References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  11
    Ordering Sun documents . . . . . . . . . . . . . . . . . . . . . . . . .  11
    Accessing Sun documentation online . . . . . . . . . . . . . . . . . . .  12
    Appendix A: Command files for clzonecluster . . . . . . . . . . . . . . . 13
    Appendix B: MySQL configuration file . . . . . . . . . . . . . . . . . .  15
    Appendix C: Configuration files: /config-files/mysql_config . . . . . . . 17
    Appendix D: Configuration files: /config-files/ha_mysql_config . . . . .  19


    High Availability MySQL Database Replication with Solaris Zone Cluster

    New technology offers new possibilities to accomplish things in different and more

    efficient ways, especially when several technologies converge, complementing one

    another and providing additional capabilities that solve business requirements. Such

    is the case when Sun MySQL Replication, Solaris Containers, and Solaris Zone Clusters

    come into play.

    MySQL Replication is an option that allows the content of one database to be

    replicated to one or more other databases, providing a mechanism to scale out the

    database. Scaling out the database allows more activities to be processed and more

    users to access the database by running multiple copies of the database on different

    machines.

    Solaris Containers provide a virtualized runtime environment by using Solaris Zones

    partitioning technology and resource management tools that are part of the Solaris

    operating system; the zone is a container for an application. With Solaris Containers,

    MySQL Replication can be run with the databases replicated across zones.

    The latest release of Solaris Cluster, 3.2 1/09, introduces the concept of a zone cluster,

    consisting of a set of virtual nodes where each virtual node is a Solaris Zone. This new

    feature of zone clustering, along with the High Availability service for MySQL provided

    by Solaris Cluster, allows automatic failover across machines of a MySQL Replication

    deployment that uses Solaris Zones.

    This paper describes the benefits of deploying the master and slave databases of MySQL

    Replication using zone clusters. In addition, it provides details on how to deploy the

    master and slave databases in two different zone clusters, using non-global zones from

    two different machines as the virtual cluster nodes.

    MySQL database replication

    Replication is an option that enables data changes from one MySQL database server,

    called the master, to be duplicated to one or more MySQL database servers, called

    slaves. The replication of data is performed in a one-way, asynchronous mode. The

    replication is one way because data is only updated at the master database, while data

    retrieval is the only operation that can be performed at the slave databases. It is

    asynchronous because the master database does not need to wait for the data to be

    actually replicated in the slave databases before continuing its operation. This type of

    replication is a poll model, where the slave is connected to a master and asks the

    master to send events the slave does not have.


    Depending on how replication is configured, an entire database or selected tables

    within a database can be replicated. And, depending on how replication is deployed to

    fulfill a business need, MySQL Replication can be deployed with the following

    topologies:

    Figure 1. MySQL database replication topologies.

    Note: More complex topologies are possible and have been deployed, such as master-

    master replication, where both servers act as both master and slave. While these

    configurations are supported, care must be taken to ensure that they are configured

    correctly to avoid overwriting of data.

    A number of threads and files from both the master and slave database servers are

    involved when replication is enabled, as shown in Figure 2.

    Figure 2. MySQL replication process.

    From the master database server:

      Binary Log file: where updates of the master database are captured

      IO_Thread: captures the updates of the master to the Binary Log file

    From the slave database server:

      Relay Log file: copy of the Binary Log from the master database

      Relay Index log: index of the Relay Log file

      Master.info file: contains all information about the master server

      Relay-log.info file: contains information about the SQL_Thread

      IO_Thread: downloads the Binary Log file from the master into the Relay Log file

      SQL_Thread: executes SQL commands from the Relay Log file
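
    These threads and files can be inspected while replication is running. As a minimal illustration, the following MySQL commands report the current Binary Log coordinates on the master, and the relay log position and thread states on the slave:

        -- On the master: current Binary Log file name and position
        mysql> SHOW MASTER STATUS;

        -- On the slave: reports Master_Log_File, Relay_Log_File, and whether
        -- Slave_IO_Running and Slave_SQL_Running are both Yes
        mysql> SHOW SLAVE STATUS\G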



    Solaris Zones

    Sun provides many technologies for hosting multiple applications on a single system

    hardware, including hardware partitions (Dynamic Domains), virtual machines (Sun

    Logical Domains, Sun xVM software), operating system virtualization (Solaris Zones),

    and resource management.

    The Solaris OS provides zones that enable software partitioning of a single Solaris 10 OS

    instance to support multiple virtual independent operating systems with independent

    process space, allocated resources, and users. There are two types of zones: global and

    non-global. Every Solaris system contains a global zone: it is the default zone for the

    system, and system infrastructure administration (such as configuring physical devices,

    routing, etc.) occurs in the global zone. Non-global zones contain an installed subset of

    the complete Solaris operating system software packages and provide security

    isolation, application fault isolation, and a resource-managed environment to run

    applications.

    Solaris zones allow both the master database and the slave database of MySQL

    Replication to run independently and isolated from each other on the same machine.
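
    For illustration, a non-global zone is created from the global zone with the zonecfg and zoneadm utilities; the zone name and path below are hypothetical and not part of the example configuration described later:

        # zonecfg -z dbzone 'create; set zonepath=/zones/dbzone'
        # zoneadm -z dbzone install
        # zoneadm -z dbzone boot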

    Solaris Cluster

    A cluster is two or more systems, or nodes, that work together as a single entity to

    provide increased availability and/or performance. Solaris Cluster provides failover

    services for MySQL database by making the database able to survive any single

    software or hardware failure in the system. When a failure occurs, the MySQL database

    is restarted on the surviving node of the cluster without user intervention.

    MySQL database can be deployed in the following configurations on Solaris Cluster:

      MySQL database in the global zone

      MySQL database in a non-global failover zone

      MySQL database in a non-global zone

      MySQL database in a zone cluster

    The Sun BluePrints Online article "Deploying MySQL Database in Solaris Cluster

    Environments for Increased High Availability" provides a good overview of Solaris

    Cluster and its components, and provides an example of deploying MySQL databases in

    the global zone.

    The release of Solaris Cluster 3.2 1/09 introduces the concept of a zone cluster,

    consisting of a set of virtual nodes, where each node is a Solaris zone. The zone cluster

    extends the Solaris zone principles to work across a cluster. The following section

    describes how the master and slave databases of MySQL Replication were deployed in

    two different zone clusters, using the cluster-branded non-global zones from two

    different machines as the virtual cluster nodes.


    Example configuration

    The following example configuration was used to deploy MySQL database replication

    using Solaris Zone Clusters.

    Hardware configuration:

    Two servers (a Sun SPARC Enterprise T5220 and T5140 server) with the following

    components:

    Local disk with a 512 MB slice for /globaldevices

    Three network ports: one port to be used for the public network interface, and

    two ports to be used for the Cluster Interconnect (private network)

    Two storage interface ports

    Multihost storage device for the shared disks (Sun StorEdge 3510 FC Array)

    Software configuration:

    Solaris 10 5/08 s10s_u5wos_10 SPARC

    System Firmware (sysfw_version): 7.1.6 2008/08/15 02:51

    SP Firmware Version/Build Nbr: 2.0.4.26 / 35945

    Solaris Cluster 3.2 1/09

    MySQL 5.0.22

    Patches for MySQL Data Services:

    126032-05 for Solaris 10 OS (SPARC)

    126031-05 for Solaris 9 OS (SPARC)

    126033-06 for Solaris 10 OS (x86)

    The actual hardware configuration used for deployment is shown in Figure 3.

    Figure 3. Example hardware configuration: a Sun SPARC Enterprise T5220 server and a

    Sun SPARC Enterprise T5140 server, each with local disks and two FC connections to a

    Sun StorEdge 3510 FC Array presenting 3 LUNs of 20 GB each.


    The following steps were followed to install and configure Solaris Cluster and MySQL

    Data Services:

    1. Install the Solaris Cluster software on both servers, then run the installer command

    to invoke the Solaris Cluster installer GUI.

    2. Configure the Solaris Cluster software from one node of the cluster by executing

    the command /usr/cluster/bin/scinstall.

    3. Solaris Zone Cluster configuration: create two zone clusters (sparse root) named

    ZC2 and ZC3 with the following characteristics:

      Two cluster nodes using non-global zones from two different machines.

      A storage volume accessible by both cluster nodes. This is the High Availability

      Storage Plus (HASP) resource.

      A public IP address. This is the Logical Host (LH) resource.

    A logical diagram of the Solaris zone configuration is depicted in Figure 4.

    Figure 4. Example Solaris zone configuration: zone cluster ZC2 (logical host lh-2, zone

    hosts paris and mumbai) and zone cluster ZC3 (logical host lh-3, zone hosts rome and

    bangalore) span Solaris Cluster nodes tm16-180 and tm16-184, with HA ZFS file

    systems /pool2 and /pool3.

    Zone Cluster ZC2:

      Used for the master database

      Logical Host name: lh-2

      Zone host names: paris and mumbai

      Has a Logical Host resource and an HASP resource on a zpool

      HA ZFS file system /pool2 on shared storage



    Zone Cluster ZC3:

      Used for the slave database

      Logical Host name: lh-3

      Zone host names: rome and bangalore

      Has a Logical Host resource and an HASP resource on a zpool

      HA ZFS file system /pool3 on shared storage

    The command files used to configure Zone Cluster ZC2 and ZC3 are listed in

    Appendix A, Command files for clzonecluster on page 13.

    The creation of the zone clusters is done by executing the following commands:

    # clzonecluster configure -f cmd-file-2 zc2
    # clzonecluster configure -f cmd-file-3 zc3

    4. The following commands were used to configure the Solaris Cluster resources for

    the MySQL Data Service. These commands perform the following tasks:

      Create a failover resource group and logical host name.

      Register the HAStoragePlus resource type.

      Create an HAStoragePlus resource in the resource group for the local ZFS.

      Bring the resource group online.

    In MySQL1 (zone cluster ZC2):

    paris @ / $ clrg create mysql-1-rg
    paris @ / $ clreslogicalhostname create -g mysql-1-rg -h lh-2 mysql-1-lh
    paris @ / $ clrt register SUNW.HAStoragePlus
    paris @ / $ clrs create -g mysql-1-rg -t SUNW.HAStoragePlus -p zpools=pool2 mysql-1-hasp-rs
    paris @ / $ clrg online -eM mysql-1-rg

    In MySQL2 (zone cluster ZC3):

    bangalore @ / $ clrg create mysql-2-rg
    bangalore @ / $ clreslogicalhostname create -g mysql-2-rg -h lh-3 mysql-2-lh
    bangalore @ / $ clrt register SUNW.HAStoragePlus
    bangalore @ / $ clrs create -g mysql-2-rg -t SUNW.HAStoragePlus -p zpools=pool3 mysql-2-hasp-rs
    bangalore @ / $ clrg online -eM mysql-2-rg

    5. Install the MySQL Agent packages on the physical hosts:

    system  SUNWmysqlr  MySQL Database Management System (root component)
    system  SUNWmysqlt  MySQL Database Management System (test component)
    system  SUNWmysqlu  MySQL Database Management System (usr component)

    6. MySQL Replication deployment:

    A minimum of two MySQL database instances is needed to deploy MySQL Replication:

      The master database was installed in the first zone cluster (ZC2).

      The slave database was installed in the second zone cluster (ZC3).

    The following steps were used to set up the MySQL database for replication:


    From the master database server:

    a. Create a new user with the REPLICATION SLAVE privilege (see the sketch after

    these steps).

    b. Turn on the binary log and set the MySQL database server-id to a unique number.

    (This requires shutting down the MySQL server and modifying the MySQL

    configuration file.)

    c. Determine the master database position and binary log file name by executing

    the following MySQL commands:

    flush tables with read lock;

    show master status;

    d. Create a backup of the master database that contains the binary log coordinates.

    From the slave database server:

    a. Shut down the database and modify the MySQL configuration file by assigning

    a unique number to the server-id.

    b. Load in the backup from the master database.

    c. Bring up the MySQL server.

    d. Configure the slave database with master database information (such as

    where the master database resides and the master database position number

    in the binary log). This operation is performed by executing the MySQL command:

    CHANGE MASTER TO

    e. Start replication by executing the MySQL command:

    START SLAVE

    The configuration files of the master and slave databases are shown in

    Appendix B, MySQL configuration file on page 15.
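
    As a minimal sketch tying these steps together, the following MySQL statements could be used. The replication user name and password are illustrative, the binary log coordinates must come from the show master status output above, and lh-2 is the master's logical host from this example:

        -- On the master (step a): create a user with the REPLICATION SLAVE privilege
        mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'repl-pass';

        -- On the slave (steps d and e): point the slave at the master and start replication
        mysql> CHANGE MASTER TO
            ->   MASTER_HOST='lh-2',
            ->   MASTER_USER='repl',
            ->   MASTER_PASSWORD='repl-pass',
            ->   MASTER_LOG_FILE='bin-log.000001',
            ->   MASTER_LOG_POS=98;
        mysql> START SLAVE;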

    7. Enable and register the MySQL database to be used under Solaris Cluster:

    a. Copy and edit the mysql_config file. See Appendix C, Configuration files:

    /config-files/mysql_config on page 17 for the master and slave databases.

    b. Run the mysql_register script from the logical host of zone clusters ZC2 and ZC3:

    paris @ / $ ksh mysql_register -f /config-files/mysql_config
    bangalore @ / $ ksh mysql_register -f /config-files/mysql_config

    c. Shut down the MySQL database.

    d. Copy and edit the ha_mysql_config file. See Appendix D, Configuration files:

    /config-files/ha_mysql_config on page 19 for the master and slave databases.

    e. Run the ha_mysql_register script from the logical host of zone clusters ZC2

    and ZC3:

    paris @ / $ ksh ha_mysql_register -f /config-files/ha_mysql_config
    bangalore @ / $ ksh ha_mysql_register -f /config-files/ha_mysql_config


    f. Execute the following commands from the logical hosts of zone clusters ZC2 and

    ZC3:

    paris @ / $ clresource enable mysql-1-hasp-rs
    bangalore @ / $ clresource enable mysql-2-hasp-rs

    At this point, the MySQL Replication deployment is configured as shown in Figure 5.

    Figure 5. Example master and slave database configuration: the master database runs

    in zone cluster ZC2 and the slave database in zone cluster ZC3.

    Note: With Solaris Cluster, sc3_test_database is created and used for monitoring

    purposes. This database must be excluded from replication.

    Failover scenarios

    Now that MySQL Replication is deployed with Solaris Cluster, consider how an

    application running on a different host on the network will interact with the MySQL

    database, and how Solaris Cluster provides data services, with the following scenario:

      An application running from a host on the network connects via the logical host lh-2

      to the MySQL master database to perform update transactions.

      A second application running from a host on the network connects via the logical

      host lh-3 to the MySQL slave database to perform query transactions.

    With MySQL Replication, whatever update happens at the master database is replicated

    in the slave database. As a simple example, if user A is inserting rows into a table X in the

    master database, then user B will see the new rows of table X as he/she queries table X

    from the slave database.

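
    For instance, clients reach each database through its logical host name rather than a physical host name; the user, port, and database names below are illustrative:

        $ mysql -h lh-2 -P 3306 -u appuser -p appdb    # update transactions against the master
        $ mysql -h lh-3 -P 3306 -u appuser -p appdb    # query transactions against the slave

    Because the logical host moves with its resource group, clients keep the same connection parameters regardless of which zone currently hosts the database.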


    The following describes the two failover scenarios: master database failover and slave

    database failover.

    Master database failover

    If the master database fails, Solaris Zone Cluster will fail over the master database

    to the surviving zone, as shown in Figure 6.

    During the failover, the client application will briefly pause and resume

    transactions after the database services are failed over. After the failover, MySQL

    will resume the replication of update transactions from the master database to

    the slave database; this time the master database will be running from the

    surviving node (zone host mumbai, in this example). Likewise, the client

    application doing queries against the slave database will briefly pause and the

    query will resume.

    Figure 6. Master database failover.

    Slave database failover

    If the slave database fails, Solaris Zone Cluster will fail over the slave database to

    the surviving zone, as shown in Figure 7.

    The sequence of events is similar to the master database failover scenario, with

    the difference that it is the slave database that is failing over. The client doing

    updates against the master database is not affected at all. The client doing

    queries against the slave database will briefly pause, and the query will resume

    after the slave database is failed over to the surviving zone.



    Figure 7. Slave database failover.

    To test the failover process of both the master and slave databases, this study injected

    fault conditions by executing the zoneadm halt, clrg switch, and clrs disable commands;

    killing the mysqld daemon; and even pulling out the power cord on one of the server

    machines.
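
    For example, a switchover of the master database can be forced with commands along the following lines; the target node argument is a placeholder, while the resource and group names are those configured above:

        # clrg switch -n <node> mysql-1-rg    # move the master resource group to the other node
        # clrs disable mysql-1-hasp-rs        # disable the master HAStoragePlus resource
        # zoneadm -z zc2 halt                 # from a global zone: halt that node's zc2 zone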

    Summary

    The MySQL database delivers a fast, multi-threaded, multi-user, and robust SQL

    (Structured Query Language) database server. With the MySQL Replication option

    enabled, this database provides a means to improve application performance and

    scalability horizontally by adding multiple replicated database servers. MySQL

    Replication uses a poll model, where slave databases are connected to a master

    database and ask the master to send events the slaves do not have.

    There are many ways MySQL Replication can be deployed and used, such as running

    database backups from the slave while the master continues processing transactions, or

    using the master database to process updates while the slave is used for queries, to

    name a few. Often customers provide high availability to the master database, and the

    Solaris Cluster HA for MySQL data service provides a mechanism for orderly startup and

    shutdown, fault monitoring, and automatic failover of the MySQL database, making

    recovery from failure transparent to clients.

    Furthermore, to reduce the power consumption of running multiple replicated database

    servers, the Solaris OS provides zones in which to deploy MySQL Replication, and along

    with Solaris Cluster those zones can be part of a virtual cluster.

    SC Node:C Node:tm16-180m16-180

    SC Node:C Node:tm16-184m16-184

    Zone: ZC2one: ZC2Logical Host: Ih-2

    Zone: ZC3one: ZC3Logical Host: Ih-3

    Zone Host:paris

    Zone Host:paris

    Zone Host:

    Zone Host:one Host:mumbaiumbai

    Zone Host:bangalore

    Zone Host:

    Master

    SlaveSlave

    one Host:bangalore

    SlaveX


    About the author

    Pedro Lay is an Enterprise Solutions Architect in Sun's Systems Technical Marketing

    Group. He has over 20 years of industry experience that spans application development,

    database and system administration, and performance and tuning efforts. Since joining

    Sun in 1990, Pedro has worked in various organizations including Information

    Technology, the Customer Benchmark Center, the Business Intelligence and Data

    Warehouse Competency Center, and the Performance Applications Engineering group.

    Acknowledgements

    The author would like to recognize Gia-Khanh Nguyen and Detlef Ulherr from the Sun

    Cluster group for their contributions to this article.

    References

    Kamboj, Ritu. "Deploying MySQL Database in Solaris Cluster Environments for Increased

    High Availability," Sun BluePrints Online, November 2008.

    http://wikis.sun.com/display/BluePrints/Deploying+MySQL+Database+in+Solaris+Cluster++Environments

    Kloski, Nick. "MySQL Database Scale-Out and Replication for High-Growth Businesses,"

    Sun BluePrints Online, November 2008.

    http://wikis.sun.com/download/attachments/57508101/820-6824.pdf?version=1

    Sun Cluster 3.2 1/09 Documentation Center:

    http://docs.sun.com/app/docs/doc/820-4683?l=en

    Sun Cluster 3.2 1/09 Release Notes:

    http://wikis.sun.com/display/SunCluster/(English)+Sun+Cluster+3.2+1-09+Release+Notes#(English)SunCluster3.21-09ReleaseNotes-zc

    Sun Cluster Data Service for MySQL Guide for Solaris OS (January 2009):

    http://docs.sun.com/app/docs/doc/820-5027?l=en

    Ordering Sun documents

    The SunDocs(SM) program provides more than 250 manuals from Sun Microsystems, Inc.

    If you live in the United States, Canada, Europe, or Japan, you can purchase

    documentation sets or individual manuals through this program.


    Accessing Sun documentation online

    The docs.sun.com web site enables you to access Sun technical documentation

    online. You can browse the docs.sun.com archive or search for a specific book title

    or subject. The URL is http://docs.sun.com/

    To reference Sun BluePrints Online articles, visit the Sun BluePrints Online Web site at:

    http://www.sun.com/blueprints/online.html


    Appendix A

    Command files for clzonecluster

    The clzonecluster command file used for Zone Cluster zc2:

    create

    set zonepath=/zones/zc2

    add sysid

    set root_password=ZiitH.NOLOrRg

    set name_service=NONE

    set nfs4_domain=dynamic

    set security_policy=NONE

    set system_locale=C

    set terminal=xterms

    set timezone=US/Pacific

    end

    add node

    set physical-host=tm16-180

    set hostname=paris

    add net

    set address=paris/24

    set physical=nxge0

    end

    end

    add node

    set physical-host=tm16-184

    set hostname=mumbai

    add net

    set address=mumbai/24

    set physical=e1000g0

    end

    end

    add net

    set address=lh-2

    end

    commit


    The clzonecluster command file used for Zone Cluster zc3:

    create

    set zonepath=/zones/zc3

    add sysid

    set root_password=ZiitH.NOLOrRg

    set name_service=NONE

    set nfs4_domain=dynamic

    set security_policy=NONE

    set system_locale=C

    set terminal=xterms

    set timezone=US/Pacific

    end

    add node

    set physical-host=tm16-180

    set hostname=rome

    add net

    set address=rome/24

    set physical=nxge0

    end

    end

    add node

    set physical-host=tm16-184

    set hostname=bangalore

    add net

    set address=bangalore/24

    set physical=e1000g0

    end

    end

    add net

    set address=lh-3

    end

    commit
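
    After these files are applied with the clzonecluster configure commands shown earlier, each zone cluster still needs to be installed and booted. A likely sequence, not shown among the steps above, uses the standard clzonecluster subcommands:

        # clzonecluster install zc2
        # clzonecluster boot zc2
        # clzonecluster install zc3
        # clzonecluster boot zc3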


    Appendix B

    MySQL configuration file

    MySQL master database configuration file: /pool2/data/my.cnf

    [mysqld]

    server-id=1

    #port=3306

    #bind-address=pool2

    socket=/tmp/lh-2.sock

    log=/pool2/data/logs/log1

    log-bin=/pool2/data/logs/bin-log

    binlog-ignore-db=sc3_test_database

    log-slow-queries=/pool2/data/logs/log-slow-queries

    #log-update=/pool2/data/logs/log-update

    # Innodb

    #skip-innodb

    innodb_data_home_dir = /pool2/data/innodb

    innodb_data_file_path = ibdata1:10M:autoextend

    innodb_log_group_home_dir = /pool2/data/innodb

    #innodb_log_arch_dir = /pool2/data/innodb

    # You can set .._buffer_pool_size up to 50 - 80 %

    # of RAM but beware of setting memory usage too high

    set-variable = innodb_buffer_pool_size=50M

    set-variable = innodb_additional_mem_pool_size=20M

    # Set .._log_file_size to 25 % of buffer pool size

    set-variable = innodb_log_file_size=12M

    set-variable = innodb_log_buffer_size=4M

    innodb_flush_log_at_trx_commit=1

    set-variable = innodb_lock_wait_timeout=50

    # MySQL 4.x

    relay-log=/pool2/data/logs/slave-bin.log

    relay-log-info-file=/pool2/data/logs/slave-info


    MySQL slave database configuration file: /pool3/data/my.cnf

    [mysqld]

    server-id=2

    #port=3306

    #bind-address=pool3

    socket=/tmp/lh-3.sock

    log=/pool3/data/logs/log1

    log-bin=/pool3/data/logs/bin-log

    binlog-ignore-db=sc3_test_database

    log-slow-queries=/pool3/data/logs/log-slow-queries

    #log-update=/pool3/data/logs/log-update

    # Innodb

    #skip-innodb

    innodb_data_home_dir = /pool3/data/innodb

    innodb_data_file_path = ibdata1:10M:autoextend

    innodb_log_group_home_dir = /pool3/data/innodb

    #innodb_log_arch_dir = /pool3/data/innodb

    # You can set .._buffer_pool_size up to 50 - 80 %

    # of RAM but beware of setting memory usage too high

    set-variable = innodb_buffer_pool_size=50M

    set-variable = innodb_additional_mem_pool_size=20M

    # Set .._log_file_size to 25 % of buffer pool size

    set-variable = innodb_log_file_size=12M

    set-variable = innodb_log_buffer_size=4M

    innodb_flush_log_at_trx_commit=1

    set-variable = innodb_lock_wait_timeout=50

    # MySQL 4.x

    relay-log=/pool3/data/logs/slave-bin.log

    relay-log-info-file=/pool3/data/logs/slave-info
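
    Both files keep the Solaris Cluster monitoring database out of the binary log with binlog-ignore-db=sc3_test_database, satisfying the exclusion requirement noted earlier. A slave-side alternative, not used in this example configuration, would presumably be the following line in the slave's my.cnf:

        replicate-ignore-db=sc3_test_database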


    Appendix C

    Configuration files: /config-files/mysql_config

    From logical host lh-2 (master database):

    # Copyright 2003 Sun Microsystems, Inc. All rights reserved.

    # Use is subject to license terms.

    #

    # This file will be sourced in by mysql_register and the parameters

    # listed below will be used.

    #

    # Where is mysql installed (BASEDIR)

    MYSQL_BASE=/usr/local/mysql

    # Mysql admin-user for localhost (Default is root)

    MYSQL_USER=root

    # Password for mysql admin user

    MYSQL_PASSWD=root

    # Configured logicalhost

    MYSQL_HOST=lh-2

    # Specify a username for a faultmonitor user

    FMUSER=fmuser

    # Pick a password for that faultmonitor user

    FMPASS=fmuser

    # Socket name for mysqld ( Should be /tmp/<logical-host>.sock )

    MYSQL_SOCK=/tmp/lh-2.sock

    # FOR SC3.1 ONLY, Specify the physical hostname for the

    # physical NIC that this logicalhostname belongs to for every node in the

    # cluster this Resourcegroup can located on.

    # IE: The logicalhost lh1 belongs to hme1 for physical-node phys-1 and

    # hme3 for physical-node phys-2. The hostname for hme1 is phys-1-hme0 and

    # for hme3 on phys-2 it is phys-2-hme3.

    # IE: MYSQL_NIC_HOSTNAME="phys-1-hme0 phys-2-hme3"

    MYSQL_NIC_HOSTNAME="paris mumbai"

    # where are your databases installed, (location of my.cnf)

    MYSQL_DATADIR=/pool2/data


    Appendix D

    Configuration files: /config-files/ha_mysql_config

    From logical host lh-2 (master database):

    #

    # Copyright 2003 Sun Microsystems, Inc. All rights reserved.

    # Use is subject to license terms.

    #

    # This file will be sourced in by ha_mysql_register and the parameters

    # listed below will be used.

    #

    # These parameters can be customized in (key=value) form

    #

    # RS - name of the resource for the application

    # RG - name of the resource group containing RS

    # PROJECT - A project in the zone, that will be used for this service

    # specify it if you have an su - in the start stop or probe,

    # or to define the smf credentials. If the variable is not set,

    # it will be translated as :default for the sm and default

    # for the zsh component

    # Optional

    # ZUSER - A user in the zone which is used for the smf method

    # credentials. Your smf service will run under this user

    # Optional

    #

    # BASEDIR - name of the Mysql bin directory

    # DATADIR - name of the Mysql Data directory

    # MYSQLUSER - name of the user Mysql should be started of

    # LH - name of the LogicalHostname SC resource

    # MYSQLHOST - name of the host in /etc/hosts

    # FMUSER - name of the Mysql fault monitor user

    # FMPASS - name of the Mysql fault monitor user password

    # LOGDIR - name of the directory mysqld should store it's logfile.

    # CHECK - should HA-MySQL check MyISAM index files before start YES/NO.

    # HAS_RS - name of the MySQL HAStoragePlus SC resource

    #


    # The following examples illustrate sample parameters

    # for Mysql

    #

    # BASEDIR=/usr/local/mysql

    # DATADIR=/global/mysqldata

    # MYSQLUSER=mysql

    # LH=mysqllh

    # MYSQLHOST=mysqllh

    # FMUSER=fmuser

    # FMPASS=fmuser

    # LOGDIR=/global/mysqldata/logs

    # CHECK=YES

    #

    RS=master-mys-rs

    RG=mysql-1-rg

    PORT=22

    LH=mysql-1-lh

    HAS_RS=mysql-1-hasp-rs

    # local zone specific options

    ZONE=

    ZONE_BT=

    ZUSER=

    PROJECT=

    # mysql specifications

    BASEDIR=/usr/local/mysql

    DATADIR=/pool2/data

    MYSQLUSER=mysql

    MYSQLHOST=lh-2

    FMUSER=fmuser

    FMPASS=fmuser

    LOGDIR=/pool2/data/logs

    CHECK=YES


    From logical host lh-3 (slave database):

    #

    # Copyright 2003 Sun Microsystems, Inc. All rights reserved.

    # Use is subject to license terms.

    #

    # This file will be sourced in by ha_mysql_register and the parameters

    # listed below will be used.

    #

    # These parameters can be customized in (key=value) form

    #

    # RS - name of the resource for the application

    # RG - name of the resource group containing RS

    # PROJECT - A project in the zone, that will be used for this service

    # specify it if you have an su - in the start stop or probe,

    # or to define the smf credentials. If the variable is not set,

    # it will be translated as :default for the sm and default

    # for the zsh component

    # Optional

    # ZUSER - A user in the zone which is used for the smf method

    # credentials. Your smf service will run under this user

    # Optional

    #

    # BASEDIR - name of the Mysql bin directory

    # DATADIR - name of the Mysql Data directory

    # MYSQLUSER - name of the user Mysql should be started of

    # LH - name of the LogicalHostname SC resource

    # MYSQLHOST - name of the host in /etc/hosts

    # FMUSER - name of the Mysql fault monitor user

    # FMPASS - name of the Mysql fault monitor user password

    # LOGDIR - name of the directory mysqld should store it's logfile.

    # CHECK - should HA-MySQL check MyISAM index files before start YES/NO.

    # HAS_RS - name of the MySQL HAStoragePlus SC resource

    #

    # The following examples illustrate sample parameters

    # for Mysql

    #

    # BASEDIR=/usr/local/mysql

    # DATADIR=/global/mysqldata

    # MYSQLUSER=mysql

    # LH=mysqllh

    # MYSQLHOST=mysqllh

    # FMUSER=fmuser

    # FMPASS=fmuser

    # LOGDIR=/global/mysqldata/logs

    # CHECK=YES

    #


    RS=slave-mys-rs

    RG=mysql-2-rg

    PORT=22

    LH=mysql-2-lh

    HAS_RS=mysql-2-hasp-rs

    # local zone specific options

    ZONE=

    ZONE_BT=

    ZUSER=

    PROJECT=

    # mysql specifications

    BASEDIR=/usr/local/mysql

    DATADIR=/pool3/data

    MYSQLUSER=mysql

    MYSQLHOST=lh-3

    FMUSER=fmuser

    FMPASS=fmuser

    LOGDIR=/pool3/data/logs

    CHECK=YES


    High Availability MySQL Database Replication with Solaris Zone Cluster
    On the Web: sun.com

    Sun Microsystems, Inc. 4150 Network Circle, Santa Clara, CA 95054 USA Phone 1-650-960-1300 or 1-800-555-9SUN (9786) Web sun.com

    © 2009 Sun Microsystems, Inc. All rights reserved. Sun, Sun Microsystems, the Sun logo, MySQL, StorEdge, and Sun BluePrints are trademarks or registered trademarks of Sun Microsystems, Inc. or its subsidiaries in the United States and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the US and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc. Information subject to change without notice. Printed in USA 02/09