Mastering Galera


Mastering Galera
Data Masters

Special Thanks To
Venture Hive
1010 NE 2nd Ave, Miami, FL 33132
305-735-1274
www.venturehive.co

Venture Hive Miami was created last year with grants from the Miami Downtown Development Authority to serve as an accelerator and incubator for entrepreneurs and budding startups. It is not limited to Miami; it is attracting startups from all over the world to come to Miami and build their businesses.

Our Sponsor!

John Jadvani
954-527-0090

Short bio about me
Andrew Simkovsky
15 years working with database technology
Oracle, MySQL/MariaDB, SQL Server, Postgres
Redis, MongoDB, CouchDB, Cassandra, Neo4j
Worked across many industries:
Consulting, Retail, Telecommunications, Energy, Data, Marketing, Gaming, Health care

DBTekPro

www.dbtekpro.com
andrew@dbtekpro.com
@asimkovsky

Let's Get Started!

Galera Cluster for MySQL
A high availability (HA) and scaling solution for MySQL/MariaDB
A clustering solution that integrates with MySQL/MariaDB

Galera Cluster Concept
Each node has:
a MySQL/MariaDB database
the Galera replication API
(A quick way to verify both pieces on a running node is sketched after the next list.)

Other HA and Scaling Solutions
Sharding
Master / Slave(s)
Master / Master
Master / Master plus Slaves
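As a supplemental illustration of the node concept above (not part of the original deck), the status queries below are one way to confirm that a MariaDB node really has the Galera replication API loaded. The wsrep variable names are standard, but the exact values returned depend on your installation and version:

    -- Which Galera provider library (the replication API) this node loads
    SHOW GLOBAL VARIABLES LIKE 'wsrep_provider';
    -- ON once the node is connected to the cluster and ready for queries
    SHOW GLOBAL STATUS LIKE 'wsrep_ready';
    -- Name and version reported by the loaded Galera provider
    SHOW GLOBAL STATUS LIKE 'wsrep_provider_%';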

Sharding
(Diagram: an application routing to three shards holding A..F, G..L, and M..R.)
Benefits: You can split the workload across multiple databases, and you have direct control over which nodes hold which data. Adding more shards reduces the amount of data lost if a single database fails.
Drawbacks: The application needs to be shard aware so it can find the data where it lives, and you can't easily add nodes without re-sharding everything.

Master / Slave(s)
(Diagram: an application writing to one master, which replicates to one slave.)
Benefits: The slave can act as a failover if the master dies, and the application still has only one place to read from and write to.
Drawbacks: The slave is idle, but it still costs money to run. You *could* split your reads across both master and slave, but if you lose the master, you still have to fail all writes over to the slave.

Master / Slave(s)
(Diagram: one master replicating to three slaves.)
Benefits: Adds more horsepower by spreading the read workload across more slaves.
Drawbacks: The master is a single point of failure. If the master dies, you can promote one slave, but the others become orphans (unless you are lucky and all slaves stopped at the same point in time).

Master / Master
(Diagram: an application writing to two masters that replicate to each other.)
Benefits: The master is no longer a single point of failure; if one master dies, the other keeps going.
Drawbacks: Either the application needs to be aware of both masters, or you need a load balancer to split the workload. This topology is also limited to two masters (a single MySQL database cannot have more than one master), and slave lag can potentially become an issue.

Master / Master Plus Slaves
(Diagram: two masters, each replicating to its own set of slaves.)
Benefits: The master is not a single point of failure, and reads can be distributed across the slaves.
Drawbacks: The application needs to be node aware, or you need a load balancer. Replication lag can become a factor. And no matter how many slaves you add, you never get past the two-master maximum, meaning writes can only be spread two nodes wide.

Basic Galera Concepts
The cluster contains multiple nodes
Each node has a full copy of the data
Synchronous multi-master replication across all nodes
Changes made on any node are replicated to all other nodes
Each node can be used for reads and writes
All nodes can be accessed at the same time
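To make the "multiple nodes, all active" idea concrete (this sketch is supplemental, not from the original slides), you can query Galera's wsrep status variables from any node. The variable names below are standard, though the output will vary with your setup:

    -- Number of nodes currently in the cluster
    SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
    -- 'Primary' means this node is part of the quorum-holding component
    SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';
    -- 'Synced' means this node holds a full, up-to-date copy of the data
    SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';

Running the same queries on every node should show the same cluster size and status, which is the synchronous multi-master behavior described above.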

Galera Concept
(Diagram: an application in front of five Galera nodes, each node pairing the database with the Galera replication layer.)
Benefits: All nodes handle both reads and writes. You can lose any node, or even more than one, and the cluster keeps going. There is no single point of failure, and it is dynamically scalable.
Drawbacks: The application must be cluster aware, or you need a load balancer to split the workload. Setup is slightly more complex than a master/slave replication setup.

Quorum Commit
You don't have to wait for all nodes to acknowledge your changes
Majority rules: committed nodes >= (N / 2) + 1
The minimum recommended number of nodes is 3
More than half of the total nodes must receive the change before your update request gets an answer; for example, with N = 3, at least 2 nodes must acknowledge each write.
There's no simple way to get a quorum commit with fewer than three nodes.

Split Brain Syndrome
A network partition separates the nodes
Each node thinks it's in charge
Both nodes keep taking traffic
Now the two nodes have different contents
When the nodes start talking to each other again, they are very confused
(Diagram: applications in Data Center 1 and Data Center 2 each keep writing to their local node while the link between the data centers is down.)

Arbitrator Node
A way of "cheating" with only 2 nodes
Acts as a third node
Doesn't store any data
Is aware of cluster state and replication status
Provides that third commit vote
Replication changes pass through it
Loss of direct connectivity between the two data nodes can still be handled if the arbitrator can talk to both
(Diagram: a node in Data Center 1, a node in Data Center 2, and the arbitrator in Data Center 3.)
If there is a break in network connectivity between DC1 and DC2, there will be no split brain syndrome as long as the arbitrator can see both nodes.

Some Other Gotchas
Be careful with non-deterministic functions like NOW(), CURTIME(), etc.
DELETE commands on tables without a primary key are not supported
Direct writes to system tables (the mysql database) are not replicated
The cluster enforces optimistic concurrency control: "I got here first, so my transaction is good."
All other transactions locking the same row get a deadlock error
The application should be configured to retry the transaction

Setup Overview
Things to install on each node:
Operating system
MariaDB server
Galera
Percona XtraBackup
Linux is preferred. Windows is not supported.

Setup Steps
On each node: set up the configuration files (a minimal example is sketched after the Live Demo slide below), then start up MySQL
On the first node: during startup, you tell it that it is the first node. The data it contains becomes the master copy.
Each additional node that starts up seeks out one of its neighbors and syncs a copy of the data to itself
The source node is called the donor
After syncing the data, the node joins the cluster and becomes active

Database State Transfer
When starting up, each node needs a copy of the data
The copying of the data is called a state transfer
A joining node copies from an existing node using one of these methods: mysqldump, rsync, or xtrabackup
With mysqldump and rsync, the donor is locked for writes during the entire copy process
Percona's xtrabackup allows writes to happen on the donor node during the copy process

Live Demo!
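The live demo walks through this setup; as a rough sketch only (not taken from the slides), the Galera portion of a node's MariaDB configuration file might look like the block below. The hostnames, IP addresses, cluster name, and SST credentials are placeholders to replace for your own environment, the provider path varies by distribution, and newer MariaDB releases use the mariabackup SST method instead of xtrabackup-v2:

    [mysqld]
    # Galera requires row-based replication and InnoDB
    binlog_format            = ROW
    default_storage_engine   = InnoDB
    innodb_autoinc_lock_mode = 2

    # Enable the Galera replication API (provider path varies by install)
    wsrep_on       = ON
    wsrep_provider = /usr/lib/galera/libgalera_smm.so

    # Cluster membership: list all node addresses here
    wsrep_cluster_name    = demo_cluster
    wsrep_cluster_address = gcomm://10.0.0.1,10.0.0.2,10.0.0.3
    wsrep_node_name       = node1
    wsrep_node_address    = 10.0.0.1

    # State transfer via Percona XtraBackup (donor stays writable)
    wsrep_sst_method = xtrabackup-v2
    wsrep_sst_auth   = sstuser:sstpassword

Bootstrapping the first node, so that its data becomes the master copy, is typically done with a dedicated startup command or option (for example, galera_new_cluster on recent MariaDB packages) rather than a normal service start.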

Questions?

Thank You For Coming!
Please rate this Meetup: www.meetup.com/data-masters (or go there to join!)
Check out my blog and forums: www.dbtekpro.com
