DB On Demand
A DB as a Service story
Ruben.Gaspar.Aparicio_@_cern.ch
On behalf of the DBoD team, IT Department
HEPIX 2014
Nebraska Union, University of Nebraska – Lincoln, USA
Agenda
• Manifesto
• Current status
• Architecture
• Demo
• Management
• Infrastructure
• High Availability
• Monitoring
• Data protection: backups and recoveries in detail
• Future development
• Summary
Manifesto
• https://cern.ch/twiki/bin/view/DB/DBOnDemandManifesto
• Making users database owners
  • Full DBA privileges
• Covers a demand from the CERN community not addressed by the Oracle service
  • Different RDBMS: MySQL, PostgreSQL and Oracle (private use)
• No access to underlying hardware
• Foreseen as a single-instance service
• No DBA support or application support
• No vendor support (except for Oracle)
• Provides tools to manage DBA actions: configuration, start/stop, upgrades, backups & recoveries, instance monitoring
Current status
Current status
• OpenStack
• PuppetDB (MySQL)
• LHCb-Dirac
• Atlassian databases
• LCG VOMS
• Geant4
• HammerCloud DBs
• Webcast
• QC LHC Splice
• FTS3
• Drupal
• CernVM
• VCS
• IAXO
• UNOSAT
• …
Each database usually supports an open-source or commercial software product.
Architecture
[Architecture diagram: the DBoD web application (https://cern.ch/dbondemand, integrated with FIM via https://cern.ch/resources) talks to the DBOD web service and the FIM/DBOD databases; the DBOD daemon executes actions through the Syscontrol framework on Oracle VM and physical servers attached to the storage network (data, log and Oracle diagnostic volumes); monitoring is provided by RACMON and Oracle Enterprise Manager (https://oem.cern.ch) on the CERN AI monitoring infrastructure; DB clients connect directly to the instances.]
Demo
https://cern.ch/dbondemand
(Please see the recording at http://indico.cern.ch/event/313869/ from 12'13'')
DBOD Daemon
• Small program which:
  • Fetches to-be-executed jobs from the database
  • Manages job execution (via the IT-DB framework)
  • Carries out job post-execution tasks, if necessary
  • Updates the application DB with job results and instance status
  • Executes around ~350 jobs per day for DBoD and ~900 jobs for MiddleWareOnDemand
• Modular design with a focus on expansion
  • Easy to add support for new systems (MiddleWareOnDemand)
  • Reusable code
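The fetch/execute/update cycle described above can be sketched as follows. This is a minimal illustration only: an in-memory SQLite database stands in for the DBoD application DB, and the table and column names (`jobs`, `instances`, etc.) are invented for the example, not the real DBoD schema.

```python
import sqlite3

def run_pending_jobs(conn, executor):
    """Fetch pending jobs, run them, and write the results back.

    `conn` stands in for the DBoD application database and `executor`
    for the IT-DB execution framework; both are illustrative names.
    """
    cur = conn.execute(
        "SELECT id, instance, command FROM jobs WHERE state = 'PENDING'")
    for job_id, instance, command in cur.fetchall():
        try:
            result = executor(instance, command)   # delegate the actual work
            state = 'FINISHED'
        except Exception as exc:
            result, state = str(exc), 'FAILED'
        # Update both the job record and the instance status
        conn.execute("UPDATE jobs SET state = ?, result = ? WHERE id = ?",
                     (state, result, job_id))
        conn.execute("UPDATE instances SET status = ? WHERE name = ?",
                     ('RUNNING' if state == 'FINISHED' else 'UNKNOWN', instance))
    conn.commit()
```

In the real daemon this loop would run continuously; here a single pass over the pending jobs is enough to show the idea.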
DBOD State Checker
• Part of the daemon package
• Cron-managed script which periodically checks each instance's availability and updates its status in the DB accordingly
• Necessary to correctly reflect externally caused changes to the status of the service instances (e.g. host downtime, network issues) in the user interface
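A minimal availability probe in the spirit of the state checker might look like this; the function name and the reachability-only check are illustrative (the real checker is part of the daemon package and also writes the status back to the DB):

```python
import socket

def check_instance(host, port, timeout=3.0):
    """Return 'RUNNING' if something accepts TCP connections on
    host:port, 'STOPPED' otherwise.  A real checker would also log in
    and run a trivial query; this only tests network reachability."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return 'RUNNING'
    except OSError:
        return 'STOPPED'
```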
Migration to the CERN Agile Infrastructure
• The IT-DB virtualization infrastructure is being migrated from RHEL + OVM to the standard CERN AI OpenStack setup (KVM + SLC)
• Storage access performance is vital to DB applications
  • IT-DB runs its own OpenStack installation on servers physically connected to its storage servers for performance reasons
Migration to the CERN Agile Infrastructure
• DBOD customized RPM packages for the MySQL and PostgreSQL servers are already built using Koji
• A Puppet module configures each host according to the instance–resource relations stored in the Syscontrol LDAP directory
  • NAS volumes, service startup scripts, users, etc.
High Availability
• Driven by demand/need, not initially in the plans
• Not relying on virtualization features so far (this may change in the future, as OpenStack evolves)
• 4-node clusters
  • Nowadays two clusters running under Oracle Clusterware 12.1.0.1
• Clusterware controls:
  • Virtual IP
  • RDBMS instance
• PostgreSQL and MySQL instances can co-exist; different versions are supported
High Availability
• For MySQL instances running under Oracle Clusterware, care must be taken in case of a server crash
  • "InnoDB: Unable to lock ./ibdata1, error: 11" Error Sometimes Seen With MySQL on NFS (Doc ID 1522745.1)
Testing the cluster (MySQL & PostgreSQL instances):

Failover test \ Downtime       Avg. (s)   Min. (s)   Max. (s)
Kill process                   16.9       4          39
Kill process (different node)  21.7       10         34
Network down                   39.9       37         47
Server down                    37         33         43
Relocate                       6.2        5          7
Infrastructure: hardware servers
• Dell PowerEdge M610 blades
  • 2x quad-core Intel Xeon @ 2.53 GHz
  • 48 GB RAM
• Transtec database server (next release)
  • 2x eight-core Intel Xeon E5-2650 @ 2.00 GHz
  • 128 GB RAM
• NetApp cluster for storage
• 10GbE, with separate public and private networks
Monitoring
• Critical component for both DBoD managers and DBAs (our clients)
• Different monitoring tools, two main categories:
  • Servers, OS, storage → scripts + NetApp tools
  • DBoD instance → RACmon
• Trying to outsource DBoD instance monitoring
  • Very demanding task: configuring alerts, adapting to new RDBMS releases, adding functionality like reporting, execution plans, etc.
Monitoring
• Three main axes:
  • IT GNI service, IT dashboards
  • Oracle monitored by Oracle Enterprise Manager → fine-grained access per pluggable database in Oracle 12c
  • MySQL and PostgreSQL monitored by AppDynamics
    • Very intuitive interface
    • Database – storage volumes – protocol correlation
    • DB activity view and SQL analysis
Monitoring (current view)
Monitoring: Enterprise Manager 12c
[Screenshot: pluggable database and container database views]
Monitoring: AppDynamics
Storage evolution

                     FAS3240                      FAS8060
NVRAM                1.0 GB                       8.0 GB
System memory        8 GB                         64 GB
CPU                  1x 64-bit 4-core 2.33 GHz    2x 64-bit 8-core 2.10 GHz
SSD layer (maximum)  512 GB                       8 TB
Aggregate size       180 TB                       400 TB
OS controller        Data ONTAP® 7-mode           Data ONTAP® C-mode*
                     (scaling up)                 (scaling out)

* Cluster made of 8 controllers (FAS8060 & FAS6220). Shared with other services.
Data protection
• 2 file systems: data + redo logs, on different NetApp appliances
• Storage is monitored: NetApp tools + home-made tools
• Multipath access to disks (redundancy + performance) → disks are seen by two controllers (HA pair) → transparent interventions
• RAID6
• Automatic scrubbing (based on checksums)
• Rapid RAID Recovery + Disk Maintenance Center
Backup management
• Same backup procedure for all RDBMS; only the data volume is snapshot
• Backup workflow:

1. Quiesce:
   mysql> FLUSH TABLES WITH READ LOCK; FLUSH LOGS;
   or Oracle> alter database begin backup;
   or Postgresql> SELECT pg_start_backup('$SNAP');
2. Take the storage snapshot
3. Resume:
   mysql> UNLOCK TABLES;
   or Oracle> alter database end backup;
   or Postgresql> SELECT pg_stop_backup(), pg_create_restore_point('$SNAP');
4. … some time later: new snapshot
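The quiesce/snapshot/resume workflow above can be sketched as a small orchestration function. `run_sql` and `take_snapshot` are injected stand-ins for the real SQL client and the storage snapshot call; only the statement lists come from the slide.

```python
# Per-RDBMS quiesce / resume statements, as on the slide
QUIESCE = {
    'mysql':      ["FLUSH TABLES WITH READ LOCK", "FLUSH LOGS"],
    'oracle':     ["alter database begin backup"],
    'postgresql': ["SELECT pg_start_backup('$SNAP')"],
}
RESUME = {
    'mysql':      ["UNLOCK TABLES"],
    'oracle':     ["alter database end backup"],
    'postgresql': ["SELECT pg_stop_backup(), pg_create_restore_point('$SNAP')"],
}

def backup(rdbms, run_sql, take_snapshot):
    """Quiesce the instance, snapshot the data volume, resume.

    The try/finally guarantees the instance is resumed even if the
    snapshot fails, which matters most for MySQL's global read lock.
    """
    for stmt in QUIESCE[rdbms]:
        run_sql(stmt)
    try:
        take_snapshot()          # only the data volume is snapshot
    finally:
        for stmt in RESUME[rdbms]:
            run_sql(stmt)
```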
Snapshots
• Taken programmatically via our API using ZAPI (the NetApp Manageability SDK)
• Logs can be controlled via the DB On Demand site
• It is a very fast operation. Examples:
  • Hc_atlas (MySQL): about 3 seconds
  • Indico_p (PostgreSQL): about 3 seconds
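A hedged sketch of such a ZAPI snapshot call, assuming an SDK-style server object exposing an `invoke(api, key, value, …)` method as in the NetApp Manageability SDK; the volume and snapshot names are invented for the example.

```python
def create_data_snapshot(server, volume, label):
    """Create a snapshot of an instance's data volume through ZAPI.

    `server` is expected to behave like an SDK NaServer; the
    `snapshot-create` API takes alternating key/value arguments.
    Raises RuntimeError if the ZAPI call does not report success.
    """
    out = server.invoke("snapshot-create",
                        "volume", volume,
                        "snapshot", label)
    if out.results_status() != "passed":
        raise RuntimeError(out.results_reason())
    return label
```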
Tape backups
• Driven by demand/need, not initially in the plans
• Likely to be removed
• Possible only for PostgreSQL and MySQL
  • The Oracle 12c solution already comes with a tape backup
• Consistent snapshot + redo logs sent to tape
• Database activity is not impacted
• Tape backups are not validated
• Manual process to set them up; users need to contact us (DBOD + TSM service)
Instance restore
[Diagram: data-file snapshots (automatic and manual) along the time axis, combined with the binary logs, allow point-in-time recovery up to "now".]
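The point-in-time recovery picture boils down to choosing the right snapshot before replaying the logs; a minimal sketch (timestamps as plain numbers, function name invented):

```python
def pick_restore_snapshot(snapshots, target_time):
    """Given snapshot timestamps (e.g. seconds since the epoch) and a
    point-in-time recovery target, pick the latest snapshot taken at or
    before the target; the binary/redo logs are then replayed from that
    snapshot up to the target.  Returns None if no snapshot qualifies."""
    candidates = [t for t in snapshots if t <= target_time]
    return max(candidates) if candidates else None
```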
High density consolidation: LXC
• Scaling up servers (128 GB RAM, 32 CPUs), LXC should help to consolidate even more
• Red Hat 7 Atomic Host
• Fine control of memory and CPU using control groups
• Benchmark: MySQL 5.5.30, sysbench 0.5 query test, data set fits into the InnoDB buffer
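Control-group limits for such containers could be expressed through LXC configuration entries along these lines; the exact keys depend on the LXC and kernel versions (these are cgroup-v1 style), so treat this small generator as illustrative only:

```python
def lxc_limits(memory_bytes, cpus):
    """Return LXC container config lines pinning memory and CPU usage
    via control groups.  `cpus` is a cpuset range string like '0-3'."""
    return [
        "lxc.cgroup.memory.limit_in_bytes = %d" % memory_bytes,
        "lxc.cgroup.cpuset.cpus = %s" % cpus,
    ]
```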
Data protection: SnapVault
• Based on snapshots
• Should cover the tape backup functionality → disaster recovery location
[Image taken from NetApp documentation.]
Summary
• Many lessons learned during the design and implementation of the DBoD service
• Building Database as a Service helped the CERN DB group to:
  • Gain experience with MySQL, PostgreSQL and multi-tenant Oracle 12c
  • Provide a solution for Oracle databases with special needs, e.g. Unicode character sets
  • Improve tools and operations
  • Standardize on tools and frameworks
  • Consolidate
• Face new use cases from the CERN community
  • e.g. increased data protection
• On-going integration with IT central services
Acknowledgements
• IT-DB colleagues
• Our former colleagues Daniel Gomez Blanco and Dawid Wojcik
• Ignacio Coterillo and David Collados as members of the DBoD team
Questions