Architecting your application for the cloud

Architecting cloud


Page 1: Architecting cloud

Architecting your application for the cloud

Page 2: Architecting cloud

Traditional solution

1) Buy servers
2) Buy storage
3) Sign a CDN (Content Delivery Network) contract
4) Launch website/application
5) Manage scaling and provisioning

Page 3: Architecting cloud

Cloud solution

Benefits from Cloud Computing:
1) No need to buy IT infrastructure
2) Deploy worldwide
3) Scale up/down when needed
4) Save time
5) Focus on your business

Page 4: Architecting cloud

Stage One – The Beginning

• Simple architecture.
• Low complexity and overhead means quick development and lots of features, fast.
• No redundancy, low operational costs – great for startups.

Page 5: Architecting cloud

Stage 2 – More of the same, just bigger

• Business is becoming successful – risk tolerance is low.
• Add redundant firewalls and load balancers.
• Add more web servers for high performance.
• Scale up the database.
• Add database redundancy.
• Still simple.

Page 6: Architecting cloud

Stage 3 – The pain begins.

• Publicity hits.
• Add a Squid or Varnish reverse proxy, or high-end load balancers.
• Add even more web servers. Managing content becomes painful.
• A single database can't cut it anymore. Split reads and writes: all writes go to a single master server with read-only slaves.
• May require some re-coding of the apps.
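The read/write split just described can be sketched as a small routing layer in front of the database connections. This is a minimal illustration, not the deck's implementation; the connection names are hypothetical placeholders:

```python
import itertools

class ReadWriteRouter:
    """Route writes to the single master and spread reads over read-only slaves."""

    def __init__(self, master, slaves):
        self.master = master
        # Rotate through the read-only slaves round-robin style.
        self._slaves = itertools.cycle(slaves)

    def connection_for(self, sql):
        # Anything that modifies data must go to the master.
        verb = sql.lstrip().split()[0].upper()
        if verb in ("INSERT", "UPDATE", "DELETE", "REPLACE"):
            return self.master
        return next(self._slaves)

router = ReadWriteRouter("master-db", ["slave-1", "slave-2"])
print(router.connection_for("UPDATE users SET name='x'"))  # master-db
print(router.connection_for("SELECT * FROM users"))        # slave-1
```

This is also why the slide warns about re-coding: the application must stop assuming one connection handles everything.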

Page 7: Architecting cloud

Stage 4 – The pain intensifies

• Replication doesn't work for everything: a single write database means too many writes, and replication takes too long.
• Database partitioning starts to make sense. Certain features get their own database.
• Shared storage makes sense for content.
• Requires significant re-architecting of the app and DB.

Page 8: Architecting cloud

Stage 5 – This Really Hurts!!

• Panic sets in. We re-think the entire application. Now we want to go for scale?
• Can't just partition on features – what else can we use? Geography, last name, user ID, etc. Create user clusters.
• All features are available on each cluster.
• Use a hashing scheme or a master DB to locate which user belongs to which cluster.
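The hashing scheme mentioned above can be sketched in a few lines: a stable hash of the user ID picks the cluster, so no master lookup database is needed. The cluster names here are hypothetical:

```python
import hashlib

CLUSTERS = ["cluster-a", "cluster-b", "cluster-c"]  # hypothetical cluster names

def cluster_for(user_id):
    """Map a user to a cluster with a stable hash, so every server agrees."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return CLUSTERS[int(digest, 16) % len(CLUSTERS)]

# The same user always lands on the same cluster, on any server that runs this.
print(cluster_for(42))
```

The trade-off versus a master lookup DB is that a plain modulo hash reshuffles most users when a cluster is added, which is why consistent hashing is often preferred at this stage.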

Page 9: Architecting cloud

Stage 6 – Getting a little less painful

• Scalable application and database architecture.
• Acceptable performance.
• Starting to add new features again.
• Optimizing some of the code.
• Still growing, but manageable.

Page 10: Architecting cloud

Stage 7 – Entering the unknown...

• Where are the remaining bottlenecks?
– Power, space
– Bandwidth, CDN – is the hosting provider big enough?
– Firewall and load balancer bottlenecks?
– Storage
– Database technology limits – key/value store, anyone?

Page 11: Architecting cloud

Amazon Services used

Servers: Amazon EC2
Storage: Amazon S3
Database: Amazon RDS
Content delivery: Amazon CloudFront
Extras: Auto Scaling, Elastic Load Balancing

Page 12: Architecting cloud
Page 13: Architecting cloud

What is in step 1

Launched a Linux server (EC2)
Installed a web server
Downloaded the website
Opened the website

Now, our traffic goes up...

Page 14: Architecting cloud

To reach fans worldwide, we need a CDN.

Page 15: Architecting cloud
Page 16: Architecting cloud

Changes in HTML code

images/stirling1.jpg

becomes

d135c2250.cloudfront.net/stirling1.jpg
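A sketch of automating that substitution across the HTML, assuming the images/ prefix and the CloudFront domain shown above; a real site would also handle absolute paths and CSS references:

```python
import re

CDN = "d135c2250.cloudfront.net"  # the CloudFront distribution from the slide

def rewrite(html):
    # Point every local images/ reference at the CloudFront distribution.
    return re.sub(r'images/', f'{CDN}/', html)

print(rewrite('<img src="images/stirling1.jpg">'))
# <img src="d135c2250.cloudfront.net/stirling1.jpg">
```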

Page 17: Architecting cloud

What is in step 2

Uploaded files to Amazon S3
Enabled a CloudFront distribution
Updated our picture location

Page 18: Architecting cloud

Our IT Architecture needs an update

Page 19: Architecting cloud
Page 20: Architecting cloud
Page 21: Architecting cloud

What is in step 3

We added Auto Scaling, and watched it grow the number of servers
We added an Elastic Load Balancer

Page 22: Architecting cloud
Page 23: Architecting cloud

What is in step 4

Launched a Database Instance
Pointed the web servers to RDS
Created a Read Replica
Created a Snapshot

Page 24: Architecting cloud

What is difficult about Databases?

Page 25: Architecting cloud

Availability Patterns

• Fail-over IP
• Replication
– Master-slave
– Master-master
– Tree replication
– Buddy replication

Page 26: Architecting cloud

Master-Slave Replication

Page 27: Architecting cloud

Master-Slave Replication

Assume both Master and Slave are running on Ubuntu Natty (11.04) with MySQL installed.

Configure the Master: we must configure MySQL to listen on all IP addresses. In /etc/mysql/my.cnf we comment out:

#skip-networking
#bind-address = 127.0.0.1

Set the MySQL log file and the database name that we will replicate, and mark this server as the master:

log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db=exampledb
server-id=1

Then we restart MySQL:

/etc/init.d/mysql restart

Page 28: Architecting cloud

Master-Slave Replication

Now we enter MySQL on the master server:

mysql -u root -p
Enter password:

We grant replication privileges to the slave user:

GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'%' IDENTIFIED BY '<some_password>';
FLUSH PRIVILEGES;

Then we run the following commands:

USE exampledb;
FLUSH TABLES WITH READ LOCK;

This will show the master log file name and the read position:

SHOW MASTER STATUS;

Page 29: Architecting cloud

Master-Slave Replication

We make a dump of the database on the master server:

mysqldump -u root -p<password> --opt exampledb > exampledb.sql

Or we can run this command on the slave to fetch the data from the master:

LOAD DATA FROM MASTER;

Now we unlock the tables:

mysql -u root -p
Enter password:
UNLOCK TABLES;
quit;

Page 30: Architecting cloud

Master-Slave Replication: Configure the Slave

First we enter MySQL on the slave and create the database:

mysql -u root -p
Enter password:
CREATE DATABASE exampledb;
quit;

We import the database using the MySQL dump:

mysql -u root -p<password> exampledb < /path/to/exampledb.sql

Now we configure the slave server. In /etc/mysql/my.cnf we write the information below:

server-id=2
master-host=192.168.0.100
master-user=slave_user
master-password=secret
master-connect-retry=60
replicate-do-db=exampledb

Then we restart MySQL:

/etc/init.d/mysql restart

Page 31: Architecting cloud

Master-Slave Replication: Configure the Slave

We can also load the database using the command below:

mysql -u root -p
Enter password:
LOAD DATA FROM MASTER;
quit;

Then we stop the slave:

mysql -u root -p
Enter password:
SLAVE STOP;

And we run the command below to set the master information:

CHANGE MASTER TO MASTER_HOST='192.168.0.100', MASTER_USER='slave_user', MASTER_PASSWORD='<some_password>', MASTER_LOG_FILE='mysql-bin.006', MASTER_LOG_POS=183;

And then we start the slave:

START SLAVE;
quit;

Page 32: Architecting cloud

Master-Master Replication:

Page 33: Architecting cloud

Master-Master Replication: master1 configuration

We will call system 1 "master1/slave2" and system 2 "master2/slave1". We go to the master MySQL configuration file, /etc/mysql/my.cnf, and add the block below, which sets the data directory, the socket path, and the log file for the database to replicate:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
old_passwords=1
log-bin
binlog-do-db=<database name>
binlog-ignore-db=mysql
binlog-ignore-db=test
server-id=1

[mysql.server]
user=mysql
basedir=/var/lib

[mysqld_safe]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Then we create the replication account:

mysql> grant replication slave on *.* to 'replication'@192.168.16.5 identified by 'slave';

Page 34: Architecting cloud

Master-Master Replication: slave2 configuration

Now we enter the slave2 MySQL configuration file:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
old_passwords=1
server-id=2
master-host = 192.168.16.4
master-user = replication
master-password = slave
master-port = 3306

[mysql.server]
user=mysql
basedir=/var/lib

[mysqld_safe]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Page 35: Architecting cloud

Master-Master Replication: start master1/slave1 server

We start the slave:

mysql> start slave;
mysql> show slave status\G

Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.16.4
Master_User: replica
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: MASTERMYSQL01-bin.000009
Read_Master_Log_Pos: 4
Relay_Log_File: MASTERMYSQL02-relay-bin.000015
Relay_Log_Pos: 3630
Relay_Master_Log_File: MASTERMYSQL01-bin.000009
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 4
Relay_Log_Space: 3630
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 1519187

Page 36: Architecting cloud

Master-Master Replication: Creating the master2/slave2

On master2/slave1, edit my.cnf and add the master entries into it:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
old_passwords=1
server-id=2
master-host = 192.168.16.4
master-user = replication
master-password = slave
master-port = 3306
log-bin
binlog-do-db=adam

[mysql.server]
user=mysql
basedir=/var/lib

[mysqld_safe]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Create a replication slave account on master2 for master1:

mysql> grant replication slave on *.* to 'replication'@192.168.16.4 identified by 'slave2';

Page 37: Architecting cloud

Master-Master Replication: Creating the master2/slave2

Edit my.cnf on master1 with the information of its master:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
old_passwords=1
log-bin
binlog-do-db=adam
binlog-ignore-db=mysql
binlog-ignore-db=test
server-id=1
#information for becoming slave.
master-host = 192.168.16.5
master-user = replication
master-password = slave2
master-port = 3306

[mysql.server]
user=mysql
basedir=/var/lib

Page 38: Architecting cloud

Master-Master Replication

• Restart both MySQL master1 and master2.
• On MySQL master1: mysql> start slave;
• On MySQL master2: mysql> show master status;
• On MySQL master1: mysql> show slave status\G

Page 39: Architecting cloud

Managing overload

Page 40: Architecting cloud

Load Balancing Algorithms

Random allocation

Round robin allocation

Weighted allocation

Dynamic load balancing

Least connections

Least server CPU
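The allocation strategies listed above can be contrasted in a few lines of code. This is an illustrative sketch only; the backend names, weights, and connection counts are hypothetical:

```python
import itertools
import random

backends = ["web-1", "web-2", "web-3"]            # hypothetical backend servers
weights  = {"web-1": 3, "web-2": 1, "web-3": 1}   # for weighted allocation
active   = {"web-1": 12, "web-2": 4, "web-3": 9}  # open connections per backend

rr = itertools.cycle(backends)

def random_pick():
    # Random allocation: no state, no fairness guarantee.
    return random.choice(backends)

def round_robin():
    # Round robin: each backend gets requests in turn.
    return next(rr)

def weighted():
    # Weighted allocation: web-1 receives ~3x the traffic of the others.
    pool = [b for b, w in weights.items() for _ in range(w)]
    return random.choice(pool)

def least_connections():
    # Dynamic load balancing: pick the backend with the fewest open connections.
    return min(active, key=active.get)

print(round_robin())        # web-1
print(least_connections())  # web-2 (fewest open connections)
```

The static strategies (random, round robin, weighted) need no feedback from the servers; the dynamic ones (least connections, least CPU) require the balancer to track live server state.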

Page 41: Architecting cloud

Load Balancer in Rackspace

1. Add a cloud load balancer. If you already have a Rackspace Cloud account, use the "Create Load Balancer" API operation.

2. Configure the cloud load balancer: select the name, protocol, port, algorithm, and which servers we need load balanced.

3. Enjoy the cloud load balancer, which will be online in just a few minutes. Each cloud load balancer can be customized or removed as our needs change.

Page 42: Architecting cloud

Security

Page 43: Architecting cloud

Security

• Firewalls – iptables. The iptables program lets slice admins configure the Linux kernel firewall.
• Log rotation. "Log rotation" refers to the practice of archiving an application's current log, starting a fresh log, and deleting older logs.

Page 44: Architecting cloud

Iptables

Page 45: Architecting cloud

Configuring iptables

sudo /sbin/iptables -F
sudo /sbin/iptables -A INPUT -i eth0 -p tcp -m tcp --dport 30000 -j ACCEPT
sudo /sbin/iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
sudo /sbin/iptables -A INPUT -j REJECT
sudo /sbin/iptables -A FORWARD -j REJECT
sudo /sbin/iptables -A OUTPUT -j ACCEPT
sudo /sbin/iptables -I INPUT -i lo -j ACCEPT
sudo /sbin/iptables -I INPUT 5 -p tcp --dport 80 -j ACCEPT
sudo /sbin/iptables -I INPUT 5 -p tcp --dport 443 -j ACCEPT

Page 46: Architecting cloud

Secure??

DDoS attack: Distributed Denial of Service attack.

Wikileaks.com – is it alive?

Page 47: Architecting cloud

Log Rotate

/etc/logrotate.conf
ls /etc/logrotate.d

/var/log/apache2/*.log {
        weekly
        missingok
        rotate 52
        compress
        delaycompress
        notifempty
        create 640 root adm
        sharedscripts
        postrotate
                if [ -f "`. /etc/apache2/envvars ; echo ${APACHE_PID_FILE:-/var/run/apache2.pid}`" ]; then
                        /etc/init.d/apache2 reload > /dev/null
                fi
        endscript
}

Page 48: Architecting cloud

Failover IP

• You can actually 'share' an IP between two servers so when one server is not available the other takes over the IP address.

• For this you need two servers. Let's keep it simple and call one the 'Master' and one the 'Slave'.

• What this comes down to is creating a High Availability network with your Slices. Your site won't go down.

Page 49: Architecting cloud

Heartbeat

• The failover system is not automatic. You need to install an application to allow the failover to occur.

• Heartbeat runs on both the Master and Slave servers. They chat away and keep an eye on each other. If the Master goes down, the Slave notices this and brings up the same IP address that the Master was using.

Page 50: Architecting cloud

How to Configure Heartbeat

• sudo aptitude update

Once you have done that, check whether anything needs upgrading on the server:

• sudo aptitude safe-upgrade
• sudo aptitude install heartbeat
• The configuration files live in /etc/heartbeat/

Page 51: Architecting cloud

Configuring Heartbeat

• sudo nano /etc/heartbeat/authkeys

The contents are as simple as this:

auth 1
1 sha1 YourSecretPassPhrase

• sudo chmod 600 /etc/heartbeat/authkeys

Page 52: Architecting cloud

Configuring Heartbeat

• sudo nano /etc/heartbeat/haresources
• master 123.45.67.890/24
• The name 'master' is the hostname of the MASTER server and the IP address (123.45.67.890) is the IP address of the MASTER server.
• To drive this home, this file needs to be the same on BOTH servers.

Page 53: Architecting cloud

Master ha.cf file

sudo nano /etc/heartbeat/ha.cf

The contents would be as follows:

logfacility daemon
keepalive 2
deadtime 15
warntime 5
initdead 120
udpport 694
ucast eth1 172.0.0.0 # The private IP address of your SLAVE server.
auto_failback on
node master # The hostname of your MASTER server.
node slave # The hostname of your SLAVE server.
respawn hacluster /usr/lib/heartbeat/ipfail
use_logd yes

Page 54: Architecting cloud

Creating the Slave ha.cf

Let's open the file on the Slave server:

sudo nano /etc/heartbeat/ha.cf

The contents will need to be:

logfacility daemon
keepalive 2
deadtime 15
warntime 5
initdead 120
udpport 694
ucast eth1 172.0.0.1 # The private IP address of your MASTER server.
auto_failback on
node master
node slave
respawn hacluster /usr/lib/heartbeat/ipfail
use_logd yes

Once done, save the file and restart Heartbeat on the Slave Slice:

sudo /etc/init.d/heartbeat restart

Page 55: Architecting cloud

Testing the failover IP

Start off with both servers running and ping the main IP (the IP we have set to be the failover) on the Master server:

ping -c2 123.45.67.890

The '-c2' option simply tells ping to 'ping' twice. Now shut down the Master Slice:

sudo shutdown -h now

Without the failover IP there would be no response to the ping, as the server is down. With it, we will notice that the IP is still responding to pings.

Page 56: Architecting cloud

Who Am I?

Tahsin Hasan
Senior Software Engineer
Tasawr Interactive

Author of two books, 'Joomla Mobile Web Development Beginner's Guide' and 'Opencart 1.4 Template Design Cookbook', with Packt Publishing, UK.

[email protected]
http://newdailyblog.blogspot.com (tahSin's gaRage)

Page 57: Architecting cloud

Questions?