Scaling 40x on the ObjectRocket MongoDB Platform Jon Hyman & Kenny Gorman MongoDB World, June 25, 2014 NYC
@appboy @objectrocket @jon_hyman @kennygorman
A LITTLE BIT ABOUT JON & APPBOY
Jon Hyman, CIO :: @jon_hyman
Appboy is a marketing automation platform for apps
Harvard Bridgewater
A LITTLE BIT ABOUT KENNY & OBJECTROCKET
Kenny Gorman, Co-Founder & Chief Architect :: @kennygorman
ObjectRocket is a highly available, sharded, unbelievably fast MongoDB as a service
ObjectRocket eBay Shutterfly
Agenda
• Evolution of Appboy’s MongoDB installation as we grew to handle billions of data points per month
• Operational MongoDB issues we worked through
MongoDB Evolution:
March, 2013
What did Appboy look like in March, 2013?
• ~2.5 million events per day tracking 8 million users
• Event storage: every data point as a new document
• Single, unsharded replica set on AWS (m2.xlarge)
• Mostly long-tail customers; biggest app had 2M users
Growing a lot on disk. :-(
Started running into locking issues (30-40% lock percentage). :-(
MongoDB Evolution:
April, 2013
Scaled vertically
What happened in April, 2013?
• First enterprise client signs
• More than 50 million users
• They estimated sending us over 1 billion data points per month
“Btw, we’re going live next month”
MongoDB Evolution:
April, 2013: holy crap!
ObjectRocket: Getting Started
• The landscape of a simple configuration
• It’s all about choosing shard keys
• Locks - you know you love them
What are we going to do?
• Contain growth from data points:
• Shifted to Amazon Redshift for “raw data”
• Moved MongoDB to storing pre-aggregated analytics for time series data
• Figure out sharding ASAP
• Moved to ObjectRocket, worked on shard key selection
• Sharding was hard:
• Tough to figure out the right shard key, make tradeoffs
• Rewrite a lot of application code to include shard keys in queries and inserts, and adjust to life without unique indexes
Shard key selections
• Users
• Had multiple ways to identify a user
• Device identifier, “external user id”, BSON ID
• Often performed large scans of user bases
{_id: "hashed"}
• Cache secondary identifiers to BSON ID to reduce scatter-gather queries
• Doing scatter gathers goes against conventional wisdom
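A hashed shard key's behavior can be sketched with a toy router (illustrative Python only; MongoDB hashes BSON values internally, and the shard count and hash function here are assumptions): hashing each _id spreads even monotonically increasing ids evenly across shards.

```python
import hashlib

NUM_SHARDS = 4  # assumed cluster size, for illustration only

def shard_for(_id: str) -> int:
    """Route a document by a hash of its _id, as {_id: "hashed"} does conceptually."""
    digest = hashlib.md5(_id.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# BSON ObjectIds increase monotonically, but their hashes spread evenly.
ids = ["53a2f8c%09x" % i for i in range(1000)]
counts = [0] * NUM_SHARDS
for _id in ids:
    counts[shard_for(_id)] += 1

print(counts)  # roughly even, about 250 per shard
```

The tradeoff is the one named above: any query not keyed on _id (e.g. by device identifier) has to be sent to every shard, which is why secondary identifiers get cached against the BSON ID.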
Shard key selections
• Pre-aggregated analytics
• Always query history for a single app
• 1 document per day per app per metric
{app_id: 1}
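The pre-aggregated layout can be sketched in miniature (illustrative Python; a dict stands in for the collection, and the field names are assumptions — in MongoDB each call would be an upsert with $inc): one document per (app, metric, day), with counters bumped in place instead of inserting a new document per event.

```python
from collections import defaultdict

collection = {}  # stand-in for the collection sharded on {app_id: 1}

def record_event(app_id: str, metric: str, day: str, hour: int) -> None:
    """Upsert the daily document and bump its counters, roughly like
    update({_id: ...}, {"$inc": {"total": 1, "hours.9": 1}}, upsert=True)."""
    key = (app_id, metric, day)  # 1 document per day per app per metric
    doc = collection.setdefault(key, {"total": 0, "hours": defaultdict(int)})
    doc["total"] += 1
    doc["hours"][hour] += 1

record_event("app_1", "sessions", "2013-05-01", 9)
record_event("app_1", "sessions", "2013-05-01", 9)
print(collection[("app_1", "sessions", "2013-05-01")]["total"])  # 2
```

Because the shard key is {app_id: 1}, all of one app's history lives on a single shard, so a dashboard query for one app never scatter-gathers.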
MongoDB Evolution:
May - October, 2013
Scaled vertically
Start sharding
Everything sharded
What did Appboy look like in May - October, 2013?
• textPlus goes live, as do other customers
• > 1 billion events per month, doing great!
• Four 100GB shards on ObjectRocket
MongoDB Evolution:
November, 2013
Scaled vertically
Start sharding
Everything sharded
Various customer launches
What happened in November, 2013?
• One of the largest European soccer apps
• Soccer games crushed us: 15 million data points per hour just from this app!
• Lock percentage ran high, a single shard was pegged
• Real-time analytics processing got severely delayed, adding more servers did not help (in fact, it made things worse)
Why a single shard?
Shard key selections
• Pre-aggregated analytics
• Always query history for a single app
• 1 document per day per app per metric
{app_id: 1}
ObjectRocket: Capacity, Growth
• Concurrency
• Did I mention locks?
• Cache management
• Compaction
• The shell game
• Indexing at scale
How to fix this?
• Fundamentally, all updates are going to a single document
• Can’t shard out a single document
• Asked ObjectRocket for their suggestions
Introduce write buffering
Write buffering
• Buffer writes to something that can be sharded out, then flush to MongoDB
• Need something transactional, so MongoDB was out for this
• Decided on multiple Redis instances:
• Redis has native hash data structure with atomic hash increments, works nicely with MongoDB in this use-case
Write buffering
(Diagram: incoming data buffered in Redis, then flushed to MongoDB)
Write buffering
• Built write buffering over a weekend: buffered writes flush to MongoDB every 3 seconds
Pre-aggregated analytics bottleneck was solved!
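The buffering loop can be sketched like this (illustrative Python; a locked dict stands in for a Redis hash with HINCRBY, and flush_fn stands in for the MongoDB $inc upsert — all names here are assumptions, not Appboy's actual code):

```python
import threading
import time

class WriteBuffer:
    """Accumulate counter increments (as Redis HINCRBY would) and flush them in bulk."""

    def __init__(self, flush_fn, interval: float = 3.0):
        self._counts = {}
        self._lock = threading.Lock()
        self._flush_fn = flush_fn
        self._interval = interval

    def incr(self, key: str, amount: int = 1) -> None:
        with self._lock:
            self._counts[key] = self._counts.get(key, 0) + amount

    def flush(self) -> None:
        with self._lock:
            pending, self._counts = self._counts, {}
        if pending:
            self._flush_fn(pending)  # one $inc upsert per key instead of one per event

    def run_forever(self) -> None:
        while True:
            time.sleep(self._interval)
            self.flush()

# 10,000 events collapse into a single flushed update for the hot key.
flushed = []
buf = WriteBuffer(flushed.append)
for _ in range(10_000):
    buf.incr("app_1:sessions:2013-11-10")
buf.flush()
print(flushed)  # [{'app_1:sessions:2013-11-10': 10000}]
```

The point of the 3-second window is that contention on the single hot document drops from one locked update per event to one per flush, while the buffer itself (multiple Redis instances) can be scaled horizontally.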
MongoDB Evolution:
January, 2014
Scaled vertically
Start sharding
Everything sharded
Various customer launches
Bad shard key hit upper limit
Added write buffering
What did Appboy look like in January, 2014?
• > 3 billion events per month
• Four 100GB shards on ObjectRocket
• Performance became bursty: user experience would sometimes slow to levels we considered unacceptable for our customers
Why was performance getting worse?
• Appboy customers send millions of messages in a single campaign; most send hundreds of thousands to millions of messages each week
• Campaign times tend to cluster together across all Appboy customers: evenings, Saturday/Sunday afternoons, etc.
• A lot of enormous read activity
Reads and writes and more reads started conflicting :-(
• Users visiting our dashboard during simultaneous large campaign sends would see sporadic poor performance
ObjectRocket: Splits
• Split out collections to different MongoDB clusters
(Diagram: before → after)
What did Appboy look like in February, 2014?
• Splits helped
• > 4 billion events per month
• We needed more: Isolation
ObjectRocket: Isolation
• Isolate large enterprise customers on their own MongoDB databases/clusters
• Appboy built this in March, 2014
(Diagram: enterprise customers on dedicated clusters; long-tail customers on shared clusters)
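The isolation step can be sketched as a thin routing layer (illustrative Python; the cluster URIs and customer names are hypothetical):

```python
# Hypothetical connection strings, for illustration only.
SHARED_CLUSTER = "mongodb://shared-cluster.example.com"
DEDICATED_CLUSTERS = {
    "enterprise_app_1": "mongodb://enterprise-1.example.com",
    "enterprise_app_2": "mongodb://enterprise-2.example.com",
}

def cluster_for(customer_id: str) -> str:
    """Enterprise customers get a dedicated cluster; the long tail shares one."""
    return DEDICATED_CLUSTERS.get(customer_id, SHARED_CLUSTER)

print(cluster_for("enterprise_app_1"))    # dedicated cluster
print(cluster_for("some_long_tail_app"))  # shared cluster
```

With this in place, one noisy enterprise launch saturates only its own cluster instead of degrading every customer's dashboards.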
Scaled vertically
Start sharding
Everything sharded
Various customer launches
Bad shard key hit upper limit
Added write buffering
Start splitting DBs
Isolation
Summary
What’s next?
• Figure out capacity planning
• Continue down isolation path
(Growth chart: y-axis 0 to 60,000,000)
[email protected] [email protected]
@appboy @objectrocket @jon_hyman @kennygorman