Analyzing Large-Scale User Data with Hadoop and HBase


A more technical presentation about WibiData presented to the Bay Area Search meetup 1/25/2012.

Page 1: Analyzing Large-Scale User Data with Hadoop and HBase
Page 2: Analyzing Large-Scale User Data with Hadoop and HBase

Developed By: Odiago, Inc.

Aaron Kimball – CTO

Page 3: Analyzing Large-Scale User Data with Hadoop and HBase

WibiData is…

• A large-scale storage, serving, and analysis platform

• For user- or other entity-centric data

Page 4: Analyzing Large-Scale User Data with Hadoop and HBase

WibiData use cases

• Product/content recommendations

– “Because you liked book X, you may like book Y”

• Ad targeting

– “Because of your interest in sports, check out…”

• Social network analysis

– “You may know these people…”

• Fraud detection, anti-spam, search personalization…

Page 5: Analyzing Large-Scale User Data with Hadoop and HBase

Use case characteristics

• Have a large number of users

• Want to store (large) transaction data as well as derived data (e.g., recommendations)

• Need to serve recommendations interactively

• Require a combination of offline and on-the-fly computation

Page 6: Analyzing Large-Scale User Data with Hadoop and HBase

A typical workflow

Page 7: Analyzing Large-Scale User Data with Hadoop and HBase

Challenges

• Support real-time retrieval of profile data

• Store a long transactional data history

• Keep related data logically and physically close

• Update data in a timely fashion without wasting computation

• Fault tolerance

• Data schema changes over time

Page 8: Analyzing Large-Scale User Data with Hadoop and HBase

WibiData architecture


Page 9: Analyzing Large-Scale User Data with Hadoop and HBase

HBase data model

• Data in cells, addressed by four “coordinates”

– Row Id (primary key)

– Column family

– Column “qualifier”

– Timestamp
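The four-coordinate addressing scheme can be sketched as a toy in-memory store. This is an illustration of the data model only, not the HBase client API; reads with no timestamp return the newest version, as HBase does by default.

```python
from collections import defaultdict

class CellStore:
    """Toy model of HBase's 4-coordinate cell addressing (not the real API)."""
    def __init__(self):
        # (row id, column family, qualifier) -> {timestamp: value}
        self.cells = defaultdict(dict)

    def put(self, row, family, qualifier, timestamp, value):
        self.cells[(row, family, qualifier)][timestamp] = value

    def get(self, row, family, qualifier, timestamp=None):
        versions = self.cells[(row, family, qualifier)]
        if timestamp is not None:
            return versions.get(timestamp)
        # With no timestamp, return the newest version (HBase's default)
        return versions[max(versions)] if versions else None

store = CellStore()
store.put("user123", "info", "email", 100, "old@example.com")
store.put("user123", "info", "email", 200, "new@example.com")
print(store.get("user123", "info", "email"))       # newest version
print(store.get("user123", "info", "email", 100))  # explicit timestamp
```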

Page 10: Analyzing Large-Scale User Data with Hadoop and HBase

Schema free: not what you want

• HBase may not impose a schema, but your data still has one

• Up to the application to determine how to organize & interpret data

• You still need to pick a serialization system

Page 11: Analyzing Large-Scale User Data with Hadoop and HBase

Schemas = trade-offs

• Different schemas enable efficient storage/retrieval/analysis of different types of data

• Physical organization of data still makes a big difference

– Especially with respect to read/write patterns

Page 12: Analyzing Large-Scale User Data with Hadoop and HBase

WibiData workloads

• Large number of fat rows (one per user)

• Each row updated relatively few times/day

– Though updates may involve large records

• Raw data written to one set of columns

• Processed results read from another

– Often with an interactive latency requirement

• Needs to support complex data types

Page 13: Analyzing Large-Scale User Data with Hadoop and HBase

Serialization with Avro

• Apache Avro provides flexible serialization

• All data is written along with its “writer schema”

• The reader schema may differ from the writer’s

{
  "type": "record",
  "name": "LongList",
  "fields": [
    {"name": "value", "type": "long"},
    {"name": "next", "type": ["LongList", "null"]}
  ]
}
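The writer/reader split is what makes independent migration possible: the reader resolves each record against its own schema. As a rough sketch of the idea (real Avro resolution also handles type promotion, aliases, and unions — this simplified version only keeps known fields and fills missing ones from defaults):

```python
def resolve(record, writer_fields, reader_fields):
    """Simplified sketch of Avro schema resolution: the reader keeps fields
    it knows, drops unknown ones, and fills missing fields from defaults.
    (Real Avro resolution is richer: type promotion, aliases, unions.)"""
    out = {}
    writer_names = {f["name"] for f in writer_fields}
    for f in reader_fields:
        name = f["name"]
        if name in writer_names:
            out[name] = record[name]   # field present in the written data
        elif "default" in f:
            out[name] = f["default"]   # reader-side default fills the gap
        else:
            raise ValueError("no value or default for " + name)
    return out

# A reader that added a field (with a default) still reads old data:
writer = [{"name": "value", "type": "long"}]
reader = [{"name": "value", "type": "long"},
          {"name": "label", "type": "string", "default": ""}]
print(resolve({"value": 7}, writer, reader))  # {'value': 7, 'label': ''}
```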

Page 14: Analyzing Large-Scale User Data with Hadoop and HBase

Serialization with Avro

• No code generation required

• Producers and consumers of data can migrate independently

• Data format migrations do not require structural changes to underlying data

Page 15: Analyzing Large-Scale User Data with Hadoop and HBase

WibiData: An extended data model

• Columns or whole families have common Avro schemas for evolvable storage and retrieval

<column>
  <name>email</name>
  <description>Email address</description>
  <schema>"string"</schema>
</column>

Page 16: Analyzing Large-Scale User Data with Hadoop and HBase

WibiData: An extended data model

• Column families are a logical concept

• Data is physically arranged in locality groups

• Row ids are hashed for uniform write pressure
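Hashing row ids avoids hotspotting: sequential user ids would otherwise all land on the same region server. A minimal sketch of the idea, with a made-up 4-byte-prefix layout (not WibiData's actual key format):

```python
import hashlib

def hashed_row_key(user_id: str) -> bytes:
    """Prefix the row key with a few hash bytes so that lexicographically
    adjacent user ids scatter across the key space, spreading write
    pressure uniformly. The 4-byte MD5 prefix here is illustrative only."""
    prefix = hashlib.md5(user_id.encode()).digest()[:4]
    # Keep the original id in the key so the row remains addressable
    return prefix + user_id.encode()
```

Given a user id, the full row key is still recoverable by recomputing the hash, so point lookups stay cheap; what is sacrificed is efficient range scans over raw user ids.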

Page 17: Analyzing Large-Scale User Data with Hadoop and HBase

WibiData: An extended data model

• Wibi uses 3-d storage

• Data is often sorted by timestamp

Page 18: Analyzing Large-Scale User Data with Hadoop and HBase

Analyzing data: Producers

• Producers create derived column values

• Produce operator works on one row at a time

– Can be run in MapReduce, or on a one-off basis

• Produce is a row mutation operator
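A producer can be pictured as a per-row function that reads some columns and writes a derived column back into the same row. The column names below are hypothetical, for illustration only:

```python
def produce(row: dict) -> dict:
    """Sketch of a producer: reads raw columns from one row and writes a
    derived column back into that row (a row mutation). The 'raw:' and
    'derived:' column names are made up for this example."""
    clicks = row.get("raw:clicks", [])
    row["derived:click_count"] = len(clicks)
    return row

# Running a producer over a whole table is just a per-row map,
# which is why it fits naturally into MapReduce or a one-off run:
table = [{"raw:clicks": ["a", "b"]}, {"raw:clicks": []}]
table = [produce(r) for r in table]
```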

Page 19: Analyzing Large-Scale User Data with Hadoop and HBase

Analyzing data: Gatherers

• Gatherers aggregate data across all rows

• Always run within MapReduce

• A bridge between rows and (key, value) pairs
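In contrast to a producer, a gatherer emits (key, value) pairs from each row so that a MapReduce shuffle and reduce can aggregate across all rows. A sketch, with illustrative column names:

```python
from collections import Counter

def gather(row):
    """Sketch of a gatherer: bridges one row to (key, value) pairs that a
    MapReduce job can shuffle and reduce. Column names are illustrative."""
    for category in row.get("raw:categories", []):
        yield (category, 1)

# The reduce side then aggregates the emitted pairs across all rows:
rows = [{"raw:categories": ["sports", "news"]},
        {"raw:categories": ["sports"]}]
counts = Counter()
for row in rows:
    for key, value in gather(row):
        counts[key] += value
print(counts)  # sports seen in 2 rows, news in 1
```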

Page 20: Analyzing Large-Scale User Data with Hadoop and HBase

Interactive access: REST API

• REST API provides interactive access

• Producers can be triggered “on demand” to create fresh recommendations

(Diagram: GET and PUT requests against the REST API)

Page 21: Analyzing Large-Scale User Data with Hadoop and HBase

Example: Ad Targeting

Page 22: Analyzing Large-Scale User Data with Hadoop and HBase

Example: Ad Targeting

Producer

Page 23: Analyzing Large-Scale User Data with Hadoop and HBase

Example: Ad Targeting

Producer

Page 24: Analyzing Large-Scale User Data with Hadoop and HBase

Gathering Category Associations

• Gather observed behavior

Page 25: Analyzing Large-Scale User Data with Hadoop and HBase

Gathering Category Associations

• Associate interests, clicks

Page 26: Analyzing Large-Scale User Data with Hadoop and HBase

Gathering Category Associations

• … for all pairs

Page 27: Analyzing Large-Scale User Data with Hadoop and HBase

Gathering Category Associations

• And aggregate across all users

(Diagram: map phase and reduce phase of the aggregation)
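The steps on the preceding slides can be sketched end to end: the map phase emits one count per (interest, clicked-category) pair for each user, and the reduce phase sums those counts across all users. Field names are hypothetical:

```python
from collections import Counter
from itertools import product

def map_phase(row):
    """Emit one (pair, 1) count per (declared interest, clicked category)
    combination — '… for all pairs'. Field names are illustrative."""
    for pair in product(row["interests"], row["clicked"]):
        yield pair, 1

def reduce_phase(pairs):
    """Sum the counts for each (interest, category) pair across all users."""
    totals = Counter()
    for pair, n in pairs:
        totals[pair] += n
    return totals

users = [{"interests": ["sports"], "clicked": ["shoes", "tickets"]},
         {"interests": ["sports", "music"], "clicked": ["shoes"]}]
assoc = reduce_phase(p for u in users for p in map_phase(u))
print(assoc[("sports", "shoes")])  # 2 — both users associate sports with shoes
```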

Page 28: Analyzing Large-Scale User Data with Hadoop and HBase

Conclusions

• Hadoop, HBase, Avro form the core of a large-scale machine learning/analysis platform

• How you set up your schema matters

• Producer/gatherer programming model allows computations over tables to be expressed naturally; works with MapReduce

Page 29: Analyzing Large-Scale User Data with Hadoop and HBase

www.wibidata.com / @wibidata Aaron Kimball – [email protected]