
Big Data – Exercises

Fall 2019 – Week 5 – ETH Zurich

HBase

Reading:
Chang, F. et al. (2006). Bigtable: A Distributed Storage System for Structured Data. In OSDI. [PDF] (mandatory)
George, L. (2011). HBase: The Definitive Guide (1st ed.). O’Reilly. [ETH library] [online] (Chapters 1 and 3: mandatory, Chapter 8: recommended)
White, T. (2015). Hadoop: The Definitive Guide (4th ed.). O’Reilly Media, Inc. [ETH library] [online] (Chapter 20: optional)

This exercise will consist of 3 main parts:

Hands-on practice with your own HBase cluster running in Azure

Using HBase through the Java API

Architecture of HBase

Exercise 1 — Creating and using an HBase cluster

It's time to touch HBase! You will create, fill with data, and query an HBase cluster running on Azure.

Do the following to set up an HBase cluster:

Important: we want you to use a small but real cluster for running HBase rather than a single machine. But, these clusters burn Azure credit very quickly—the cheapest configuration consumes roughly 2 CHF per hour, which is a lot relative to your overall credit—so it is very important for you to delete your cluster once you are done. Luckily, it is possible to keep your data intact when you delete a cluster, and see it again when you recreate it; we will touch upon this in the process. Now, let's start.

1. In the Azure portal, click the "+ Create a Resource" button on the left, type "hdinsight" in the search box, and select "Azure HDInsight". HDInsight is Microsoft's cloud service which wraps Hadoop, HBase, Spark and other Big Data technologies; read more here.


2. First select "Go to classic creation experience".

3. Switch to the "Custom" mode (see the screenshot), fill in the form with cluster name, user names and passwords, and select "HBase" as the cluster type. Click "Next".

4. In the "Security + Networking" step do not choose anything, just click "Next".

5. The canonical way would be to use an HDFS cluster as a storage layer for an HBase cluster, but we will be using the Blob service of Windows Azure Storage for this purpose. This has a significant advantage of allowing you to delete your HBase cluster without losing the data: you can recreate the cluster using the same Azure Storage Account and the same container and you will see the same data. This is useful, for example, if you don't have time to finish this exercise in one sitting: you can just delete your cluster, recreate it later, and continue your work. Azure storage is selected by default (see the screenshot). In "Select a Storage Account" click "Create new" and specify a name. Important: if you are recreating your HBase cluster and want to see the existing data, then choose "Select existing" and set the container name to the one that you see in the "Storage Accounts" tab of Azure—by default Azure generates a new container name every time you create a cluster, which then points to a different container. Leave everything else as it is and click "Next".


6. In the "Applications" step do not choose anything, just click "Next". We will be using HBase directly.

7. Now we need to choose the configuration of the nodes in our HBase cluster. It will be enough to have only 2 RegionServers (see the screenshot). As for the node size, let us be wise and select the economical option: click on "Region node size", click "View all" (see the screenshot), and choose "A4 v2"; do the same for the Head nodes; the "Zookeeper" nodes should have "A4 v2" selected by default (Zookeeper is a distributed coordination service used by HBase). Click "Next".

8. In the "Script Actions" step do not choose anything, just click "Next".

9. In the last step, "Summary", check if the settings are as you intend. These clusters are expensive, so it is worth checking the price estimate at this step: for me it is 1.73 CHF/hour; if your price is larger than this, check your node sizes and counts. When done, initiate the cluster creation by clicking "Create". The process will take time, around 15–25 minutes; in my own case it took 15 minutes.

Accessing your cluster

The standard way to talk to an HBase cluster is via the Java API, and we will do this later in this exercise. But first, we will interact with the HBase cluster through the command-line interface. For this, you will need to run the ssh program in a terminal in order to connect to your cluster. There are three options of how you can do this:

1. On your own machine you can just use a normal terminal if you have ssh installed. Linux usually has it, as does macOS. Windows doesn't have it by default (recent Windows 10 versions do), so Windows users can either use one of the browser-based options described next or install PuTTY.

2. In your browser:

A. Use the Azure Cloud Shell. Click on the Cloud Shell icon at the top of Azure Dashboard toolbar:


It will request your approval for creating a Storage Account required for the shell; agree to it.

B. Use a terminal on Jupyter. In your notebooks.azure.com tab, click "Libraries" in the upper-left corner of the page. Then, select any library and click "Run". In the opened Jupyter page, select "New" -> "Terminal".

In your terminal of choice, run the following (this command with everything filled-in is also available on the Azure page of your HBase cluster, if you click "Secure Shell (SSH)"):

ssh <ssh_user_name>@<cluster_name>-ssh.azurehdinsight.net

In this command, <ssh_user_name> is the "ssh username" that you have chosen in the first step of creating the HBase cluster, and <cluster_name> also comes from that form. Note that the cluster name has to be suffixed with -ssh.

If after running the ssh command you see a message similar to this:

Welcome to HBase on HDInsight.

Last login: Sat Oct 14 15:56:56 2017 from 180.220.17.157

To run a command as administrator (user "root"), use "sudo <command>".

See "man sudo_root" for details.

<ssh_user_name>@hn0-cluster:~$

then you have successfully connected to your HBase cluster. Now proceed to the next task.


Interact with your HBase cluster using the shell

In this task we will go through some basic HBase commands, in preparation for the exercise after the next one, where we will import a big dataset and run queries against it.

Open the HBase shell by running the following command:

hbase shell

Let's say we want to create an HBase table that will store sentences adhering to the structure subject-verb-object (e.g., "I eat mangoes", "She writes books") in different languages. Here is a schema that we may use:

Table name = sentences

Column family: words

column: subject

column: verb

column: object

Column family: info

column: language

With the following command we can create such a table (a description of HBase shell commands is available here):

create 'sentences', 'words', 'info'

You can see the schema of the table with this command:

describe 'sentences'

Let's insert some sentences into our table. We will put data cell by cell with the command put <table>, <rowId>, <columnFamily:columnQualifier>, <value>:

put 'sentences', 'row1', 'words:subject', 'I'

put 'sentences', 'row1', 'words:verb', 'drink'

put 'sentences', 'row1', 'words:object', 'coffee'

Now, let's try to query this sentence from the table:

get 'sentences', 'row1'

You should see output similar to this:

COLUMN CELL

words:object timestamp=1507998158489, value=coffee

words:subject timestamp=1507998139704, value=I

words:verb timestamp=1507998148303, value=drink

3 row(s) in 0.1640 seconds

As you can see, HBase shell returns data as key-value pairs rather than as rows literally. You may also notice that the lines are lexicographically sorted by the key, which is why "subject" appears after "object" in the list.

I don't know about you, but I like tea more than coffee, so let me update our sentence...

put 'sentences', 'row1', 'words:object', 'tea'

As you can see, we are using the same put command to update a cell. But remember that HBase does not actually update cells in place—it just inserts new versions instead. If you now run the query again, you will see the new data:

get 'sentences', 'row1'

returns:

COLUMN CELL

words:object timestamp=1507998793452, value=tea


words:subject timestamp=1507998139704, value=I

words:verb timestamp=1507998148303, value=drink

3 row(s) in 0.0470 seconds

We actually wanted to store sentences in different languages, so let's first set the language for the existing one:

put 'sentences', 'row1', 'info:language', 'English'

Note that we are now inserting a value into a different column family but for the same row. Verify with a get that this took effect.

Now, let's add a sentence in another language (note that we are using another rowID now—row2):

put 'sentences', 'row2', 'words:subject', 'Ich'

put 'sentences', 'row2', 'words:verb', 'trinke'

put 'sentences', 'row2', 'words:object', 'Wasser'

put 'sentences', 'row2', 'info:language', 'Deutsch'

Let's check that we indeed have 2 rows now:

count 'sentences'

Now, let's query all rows from the table:

scan 'sentences'

This indeed returns both rows, in the same key-value format as before.

It is, of course, possible to do some filtering in queries:

scan 'sentences', {FILTER => "ValueFilter(=, 'binary:English')"}

scan 'sentences', {COLUMNS => 'words:subject', FILTER => "ValueFilter(=, 'substring:I')"}

scan 'sentences', {COLUMNS => 'words:object', ROWPREFIXFILTER => 'row'}

What if we want to store a sentence that also contains an adjective, in addition to the subject, verb, and object? This is not a problem with HBase, because we can create new columns inside existing column families on the fly:

put 'sentences', 'row3', 'words:subject', 'Grandma'

put 'sentences', 'row3', 'words:verb', 'bakes'

put 'sentences', 'row3', 'words:adjective', 'delicious'

put 'sentences', 'row3', 'words:object', 'cakes'

This row now has more columns in the words column family than others:

get 'sentences', 'row3'

We can also add new columns to existing rows:

put 'sentences', 'row1', 'words:adjective', 'hot'

get 'sentences', 'row1'

Interact with your HBase cluster using the Java API

While the shell provides a good way to run ad-hoc queries during development and debugging, applications usually access HBase clusters through a different interface. We will thus briefly review the official Java API. Similar libraries also exist for other languages.

Introduction to the Java API

We will first briefly describe the API, and then we will explain how to run the code.


In order to interact with our cluster, we need to gather connection parameters in a Configuration object, which is often done automatically from environment variables and system config files.

Configuration conf = HBaseConfiguration.create();

Based on this configuration, we can get a handle on a particular table, against which we may run queries.

HTable table = new HTable(conf, "sentences");

Queries are represented by objects that are assembled and modified. Look at the following example, which you previously saw for the shell:

// Create a 'Scan' object
Scan scan = new Scan();

// Restrict scan to some column
scan.addColumn(
    Bytes.toBytes("words"),
    Bytes.toBytes("subject")
);

// Create a filter for the scan
ValueFilter filter = new ValueFilter(
    CompareOp.EQUAL,
    new SubstringComparator("I")
);
scan.setFilter(filter);

We can then run a scan object against a handle to a table:

ResultScanner results = table.getScanner(scan);

Finally, we can iterate over the result:

for (Result row : results) {
    // do something with each row
}
results.close();
table.close();
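To see how these pieces fit together end to end, here is a minimal, self-contained sketch. It only combines the snippets above; the class name SentenceScan is ours, and it assumes the same HBase 1.x-style client classes used above and that hbase-site.xml is on the classpath so that HBaseConfiguration.create() can find the cluster.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.SubstringComparator;
import org.apache.hadoop.hbase.filter.ValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class SentenceScan {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();   // reads hbase-site.xml from the classpath
        HTable table = new HTable(conf, "sentences");        // handle on the 'sentences' table

        // Scan only words:subject and keep rows whose subject contains "I"
        Scan scan = new Scan();
        scan.addColumn(Bytes.toBytes("words"), Bytes.toBytes("subject"));
        scan.setFilter(new ValueFilter(CompareOp.EQUAL, new SubstringComparator("I")));

        ResultScanner results = table.getScanner(scan);
        for (Result row : results) {
            // print the row key and the matching cell value
            System.out.println(Bytes.toString(row.getRow()) + " -> "
                + Bytes.toString(row.getValue(Bytes.toBytes("words"), Bytes.toBytes("subject"))));
        }
        results.close();
        table.close();
    }
}

Note that the ValueFilter compares cell values, so rows whose words:subject cell does not contain "I" are filtered out on the server side before the results reach the client.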

Server side operations

To run some code on the server side of HBase, we need to use Coprocessors. The idea of HBase Coprocessors was inspired by Google’s BigTable coprocessors for running arbitrary code closer to the data, i.e., at each RegionServer in the case of HBase. HBase Coprocessors come in two types:

Observers, which resemble triggers in traditional databases (see the sketch after this list)

Endpoints, which are dynamic RPC endpoints resembling stored procedures
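To make the Observer idea a bit more concrete, here is a minimal sketch of a trigger-like Observer. It is not needed for this exercise (we will only use the AggregateImplementation endpoint below); the class name AuditObserver is ours, and it assumes the HBase 1.x coprocessor API:

import java.io.IOException;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
import org.apache.hadoop.hbase.util.Bytes;

// Runs inside every RegionServer that hosts a region of the table it is attached to,
// just before a Put is applied, much like a BEFORE INSERT trigger.
public class AuditObserver extends BaseRegionObserver {
    @Override
    public void prePut(ObserverContext<RegionCoprocessorEnvironment> ctx,
                       Put put, WALEdit edit, Durability durability) throws IOException {
        // e.g., log or validate the incoming write before it is persisted
        System.out.println("About to write row: " + Bytes.toString(put.getRow()));
    }
}

Such a class would have to be packaged into a jar and registered on a table (similarly to the alter command below) before it takes effect.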

Run the following command from the HBase shell in order to enable the AggregateImplementation coprocessor, which enables us to run algebraic aggregations within each RegionServer.

alter 'sentences', METHOD => 'table_att','coprocessor'=>'|org.apache.hadoop.hbase.coprocessor.AggregateImplementation||'

Then check the table description:

describe 'sentences'

It will show something like:

Table sentences is ENABLED
sentences, {TABLE_ATTRIBUTES => {coprocessor1 => '|org.apache.hadoop.hbase.coprocessor.AggregateImplementation||'}
COLUMN FAMILIES DESCRIPTION
{NAME => 'info', BLOOMFILTER => 'ROW', ...}
{NAME => 'words', BLOOMFILTER => 'ROW', ...}

In this case we have enabled a coprocessor for a particular table. However, it is also possible to enable a coprocessor for all tables in the cluster. But for our use-case enabling it for a single table is okay.

In order to use such a coprocessor from the Java API, we have to use the aggregation client (org.apache.hadoop.hbase.client.coprocessor.AggregationClient), which takes a Configuration object in its constructor.

AggregationClient aggregationClient = new AggregationClient(config);

Then, different server-side aggregations (API) can be invoked through it by passing two values: a scan, which defines which values will be scanned, and a column interpreter, which defines how a value for a specific column is interpreted, i.e., it takes a column family and a column qualifier and returns the cell value.

Scan scan = new Scan();

long rowCount = aggregationClient.rowCount(table, new LongColumnInterpreter(), scan);
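For completeness, here is a minimal sketch of how the two lines above could be embedded into a small program. The class name SentenceRowCount is ours; the sketch assumes the HBase 1.x client API, where AggregationClient also accepts a TableName, and that the coprocessor has been enabled on the table as shown above (exact method overloads may differ between HBase versions):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class SentenceRowCount {
    public static void main(String[] args) throws Throwable {
        Configuration conf = HBaseConfiguration.create();
        AggregationClient aggregationClient = new AggregationClient(conf);

        // Count rows over the 'words' column family; the counting itself runs
        // inside each RegionServer, and only the partial counts travel back.
        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes("words"));

        long rowCount = aggregationClient.rowCount(
            TableName.valueOf("sentences"), new LongColumnInterpreter(), scan);
        System.out.println("rows: " + rowCount);
    }
}

The predefined ex1q5 query in the client introduced below performs a row count in this server-side fashion.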

Running example queries

We suggest that you use the head node of your HBase cluster for this. SSH to it, if you aren't already there.

We wrote a very simple client that implements the four scan queries described above for the shell, plus a query for retrieving the number of subjects in the sentences table. Download and unpack the client:

wget https://exerciseassets.blob.core.windows.net/exercise05/hbase-queries.zip

unzip hbase-queries.zip

To compile it, you need a Java JDK (version 1.7 or higher) as well as Maven. On the head node of your cluster, the JDK is already installed and sudo apt-get install maven is enough to install Maven.

Before you compile, run the following commands inside the directory of the Java client, hbase-query, which you have unpacked in the step above:

# Get the HBase configuration details of the cluster

cp /etc/hbase/conf/hbase-site.xml ./conf/

# Set up an alias that allows us to run our client on the cluster

alias hbase-query='java -cp hbase-queries-1.0-SNAPSHOT.jar ch.ethz.systems.bigdata19.HBaseQueries "$@"'

Now, compile the client:

mvn package

and switch to the directory with the compiled package:

cd target

You can now run queries using this client by running hbase-query <table-name> <query-name>. This uses the alias defined above, which connects to your cluster and runs the client there. For example (ex1q* are predefined queries in the Java code, with ex1q5 being a new query, which does a row count via server-side operations):

hbase-query sentences ex1q1

hbase-query sentences ex1q2

hbase-query sentences ex1q3

hbase-query sentences ex1q4

hbase-query sentences ex1q5

In the exercise below, you will be asked to write some queries inside the hbase shell and in this Java client. The code contains stubs for the queries that you are asked to write in that exercise. The source file that you will have to modify is in the src folder, following the usual Java directory structure. You can either edit that *.java file directly on the head node (e.g., with emacs, vim, or nano), or edit it on your machine and then copy it to the cluster with scp or through an SFTP connection.

Do not forget to copy any modifications to the code off the cluster before you delete it: even though HBase's data remains intact when you delete a cluster, the local filesystem of the head nodes does not.

This was a quick overview of HBase shell commands and the Java API. In the following task we will import a real, sizeable dataset (a subset of Wikipedia) and see how HBase handles it.

Important: if you do not plan to do the next section right now, please delete your cluster and just recreate it when you need it again.

Exercise 2 — The Wikipedia dataset

In this task we will see how HBase handles a large dataset and how the choice of column families may affect performance.

Let's begin. First, SSH to your cluster as in the previous task:

ssh <ssh_user_name>@<cluster_name>-ssh.azurehdinsight.net

Download the compressed dataset:

wget https://exerciseassets.blob.core.windows.net/exercise05/wikibig.tar.gz

Uncompress it:

tar xvf wikibig.tar.gz

The dataset comprises approximately 50K articles of the English Wikipedia. You will see four files:

File What's inside

text.csv Text of the article

author.csv The username of the latest version's author

comment.csv Comment that the author left about the last change to the article

timestamp.csv When this happened

The files are in a comma-separated "key,value" format in which key is the article title.

Before we can insert the data into HBase, we need to upload it into "HDFS" (for our HDInsight cluster it is actually Azure Blobs). Note that uploading text.csv can take a couple of minutes:

hadoop fs -put author.csv /tmp/

hadoop fs -put comment.csv /tmp/

hadoop fs -put timestamp.csv /tmp/

hadoop fs -put text.csv /tmp/

Let us now create the schemas in HBase:

hbase shell

We need to create two different tables, each with a different schema, which we will populate with the same data. One of them will have a single column family, into which all data will go:

create 'wiki_1colfam', 'data'

The other will have two column families—one for metadata (author, timestamp, comment) and another for the article text (the biggest column):

create 'wiki_2colfams', 'metadata', 'content'

After the two tables are created, we need to exit the HBase shell and return to the head node's shell:

exit

Now we need to populate both tables with data. We will use the ImportTsv utility of HBase.

Populate the table 'wiki_1colfam' by running the following four commands, each of which uploads one column. Note that these commands print a lot of messages, but they are mostly informational with an occasional non-critical warning; unless something goes wrong, of course :) The commands will also report some "Bad Lines", but you can safely ignore this—some lines may contain illegal characters and be dropped, but most of the data is in good shape.

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator=, -Dimporttsv.columns="HBASE_ROW_KEY, data:author" wiki_1colfam wasbs:///tmp/author.csv

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator=, -Dimporttsv.columns="HBASE_ROW_KEY, data:comment" wiki_1colfam wasbs:///tmp/comment.csv

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator=, -Dimporttsv.columns="HBASE_ROW_KEY, data:timestamp" wiki_1colfam wasbs:///tmp/timestamp.csv

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator=, -Dimporttsv.columns="HBASE_ROW_KEY, data:text" wiki_1colfam wasbs:///tmp/text.csv

The last command imports the biggest column, text , so it will take time; up to a couple of minutes.

Now we need to populate the other table, wiki_2colfams. We will use the same four commands, but notice that we use a different table name and that the text column now gets its own column family.

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator=, -Dimporttsv.columns="HBASE_ROW_KEY, metadata:author" wiki_2colfams wasbs:///tmp/author.csv

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator=, -Dimporttsv.columns="HBASE_ROW_KEY, metadata:comment" wiki_2colfams wasbs:///tmp/comment.csv

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator=, -Dimporttsv.columns="HBASE_ROW_KEY, metadata:timestamp" wiki_2colfams wasbs:///tmp/timestamp.csv

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator=, -Dimporttsv.columns="HBASE_ROW_KEY, content:text" wiki_2colfams wasbs:///tmp/text.csv

As you have seen above, in order to run server-side aggregations, we need to enable the coprocessor for each table in the hbase shell.

hbase shell

alter 'wiki_1colfam', METHOD => 'table_att','coprocessor'=>'|org.apache.hadoop.hbase.coprocessor.AggregateImplementation||'

alter 'wiki_2colfams', METHOD => 'table_att','coprocessor'=>'|org.apache.hadoop.hbase.coprocessor.AggregateImplementation||'

Tasks to do

Reminder: don't forget to recompile the Java source file using mvn package after making your changes.

The Java source file to modify is hbase-query/src/main/java/ch/ethz/systems/bigdata19/HBaseQueries.java.

1. Write the following queries, once using the shell and once using the Java API (by modifying the sample client):

A. Select all article titles (remember, this is the row key) and author names where the row key starts with 'Albert'

B. Select all article titles and author names where the author contains the substring 'tom'

2. Write the following queries, using the Java API (by modifying the sample client):

A. Return the number of articles from 2017.

B. Return the number of articles that contain the word Attacks in them.

3. Execute your queries on the two tables (more than once) and observe the query execution times

4. What are the advantages and disadvantages of pure row stores?

5. What are the advantages and disadvantages of pure column stores?

6. What are the advantages and disadvantages of wide column stores?

7. What are the advantages and disadvantages of denormalization?

Important: delete your HBase cluster now. As has been said above, these clusters burn Azure credit very fast!

Exercise 3 — Architecture of HBase


In the previous tasks, we have seen HBase in action. Let us now take a look at the internal architecture of HBase. You may want to consult the lecture slides when solving these tasks.

Task 3.1 — Inside a RegionServer

In this exercise you will see how a RegionServer in HBase would execute a query.

Imagine that we have an HBase table called 'phrases', which has the following schema:

Column family: words

column: A

column: B

column: C

(potentially also columns D, E, F, etc.)

Thus, the table has only one column family. Each column in this family holds one word.

Recall from the lecture slides that keys in HBase have the following structure: length fields, row, column family, column qualifier, timestamp, and key type.

We need to make certain simplifications to the format of keys to avoid excessive clutter in this exercise. Since the table in this exercise has only one column family, we will omit it from the key and will only specify the column name (A, B, C, ...). We will also omit the length fields and the "key type" field. The timestamp field in this exercise will contain integers from 1 to 10, whereas in reality it would contain the number of milliseconds since an event in the long past. Thus, keys as used in this exercise consist of three fields: row, column, timestamp. For example, a key such as (278, A, 5) denotes row 278, column A, timestamp 5.

Tasks to do

State which Key-Value pairs will be returned by each of the following queries, given in the HBase shell syntax which you have already seen in the first exercise. Assume that the HBase instance is configured to return only the latest version of a cell.

1. get 'phrases', '278'

2. get 'phrases', '636'

3. get 'phrases', '593'

4. get 'phrases', '640'

5. get 'phrases', '443'

To answer this question, use the diagram below, which represents the state of a RegionServer responsible for the row region in the range of row IDs 100–999, which is the region into which all these queries happen to fall.

A larger, zoomable, PDF version of this diagram is available here.


Task 3.2 — Bloom filters

Bloom filters are a data structure used to speed up queries, useful in the case in which it is likely that the value we are looking for does not exist in the collection we are querying. Their main component is a bit array with all values initially set to 0. When a new element is inserted into the collection, its value is first run through a certain number of (fixed) hash functions, and the locations in the bit array corresponding to the outputs of these functions are set to 1.

This means that when we query for a certain value, if the value has previously been inserted in the collection then all the locations corresponding to the hash function outputs will certainly already have been set to 1. On the contrary, if the element hasn't been previously inserted, then the locations may or may not have already been set to 1 by other elements.

Then, if prior to accessing the collection we run our queried value through the hash functions, check the locations corresponding to the outputs, and find any of them to be 0, we are guaranteed that the element is not present in the collection (No False Negatives), and we don't have to waste time looking. If the corresponding locations are all set to 1, the element may or may not be present in the collection (Possibility of False Positives), but in the worst case we're just wasting time.

As you have seen in the task above, HBase has to check all HFiles, along with the MemStore, when looking for a particular key. As an optimisation, Bloom filters are used to avoid checking an HFile if possible. Before looking inside a particular HFile, HBase first checks the requested key against the Bloom filter associated with that HFile. If it says that the key does not exist, the file is not read.

In this task we will look at how Bloom filters work. We will use a list of words to populate a Bloom filter and we will then query it.

A Bloom filter requires several hash functions. To keep things easily computable by a human, we will define the following three hash functions for the purpose of this exercise:

1. Given an English word x, the value of the first hash function, hash1(x), is equal to the first letter of the word. E.g.: hash1("federal") = "f"

2. Given an English word x, the value of the second hash function, hash2(x), is equal to the second letter of the word. E.g.: hash2("federal") = "e"

3. Given an English word x, the value of the third hash function, hash3(x), is equal to the third letter of the word. E.g.: hash3("federal") = "d"

A Bloom filter starts with a bit array which has value "0" recorded for each possible output value of all three hash functions (or, for example, modulo the size of the bit array, if the output range of the hash functions is too large). When we add an element to a Bloom filter, we compute the three values of the three hash functions and set those locations in the Bloom filter to "1". For example, if we add "federal" to the Bloom filter using the three hash functions that we have defined above, we get the following:

A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
      1 1 1

Here, only values "1" are displayed to avoid cluttering the view; thus, if a cell is empty, it is assumed to hold a "0".
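If you prefer code to prose, here is a tiny sketch of the same mechanism, using the three letter-based hash functions of this exercise over a 26-slot bit array (one slot per letter). It is only an illustration; HBase's real Bloom filters hash entire keys into a much larger bit array.

import java.util.BitSet;

// Toy Bloom filter over a 26-slot bit array (one slot per letter a-z), using the
// three "hash functions" of this exercise: first, second, and third letter of the word.
public class ToyBloomFilter {
    private final BitSet bits = new BitSet(26);

    private int[] hashes(String word) {
        String w = word.toLowerCase();
        return new int[] { w.charAt(0) - 'a', w.charAt(1) - 'a', w.charAt(2) - 'a' };
    }

    public void add(String word) {
        for (int h : hashes(word)) bits.set(h);        // set the three positions to 1
    }

    public boolean mightContain(String word) {
        for (int h : hashes(word)) {
            if (!bits.get(h)) return false;            // any 0 bit => definitely not in the set
        }
        return true;                                   // all 1s => maybe in the set (false positives possible)
    }

    public static void main(String[] args) {
        ToyBloomFilter f = new ToyBloomFilter();
        f.add("federal");                              // sets the bits for 'f', 'e', 'd'
        System.out.println(f.mightContain("federal")); // true
        System.out.println(f.mightContain("zebra"));   // false, because the 'z' bit is 0
    }
}

Note how mightContain can only err in one direction: a 0 bit proves absence, while all-1 bits merely suggest presence.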

First, populate the following table (double-click the table to edit it and hit Ctrl+Enter to exit the editing mode; you are also free to do this task in some other tool, of course):

Word hash1 hash2 hash3

round

sword

past

pale

nothing

darkness

water

feet

thin

passage

corner


Now, add each word from the list into the following Bloom filter (you can also double-click to edit it; you can double-click the Bloom filter populated with "federal" above to see an example of a filled-in filter):

A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

For each word from the following list, state whether this Bloom filter reports it as belonging to the set or not (skip filling-in the hash columns, if you want):

Word hash1 hash2 hash3 The Bloom filter says the word belongs to the set (yes/no)

sword

sound

psychic

pale

book

deaf

truss

Which of the words that were flagged by the Bloom filter as belonging to the set are actually not in the set (a false positive outcome)?

Which of the words that were flagged by the Bloom filter as not belonging to the set actually do belong to the set (a false negative outcome)?

Task 3.3 — Building an HFile index

When performing a get, the RegionServer needs to check its MemStore and all HFiles (unless the Bloom filter returns negative) for the existence of the requested key. In order to avoid scanning HFiles entirely, HBase uses index structures to quickly skip to the position of the HBase block which may hold the requested key.

By default, each HBase block is 64KB (configurable) in size and always contains whole key-value pairs, so, if a block needs more than 64KB to avoid splitting a key-value pair, it will just grow.

In this task, you will be building the index of an HFile. For the purpose of this exercise, assume that each HBase block is 40 bytes long, and each character in keys and values is worth 1 byte: for example, the first key-value pair in the diagram below is worth 3 + 1 + 1 + 6 = 11 bytes. Below this diagram you will find a table for you to fill in.
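As a hint for the mechanics (not a solution), here is a small sketch of how such an index can be derived from an ordered run of key-value pairs under the packing rule above: pairs are packed in order, a block is considered full once it reaches 40 bytes (a block never splits a pair, so it may exceed the limit), and the index records the first key of every block. The keys and byte sizes in main are made up; the real ones come from the HFile diagram of this task.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch: derive a block index from an ordered run of key-value pairs.
public class HFileIndexSketch {
    static final int BLOCK_SIZE = 40; // bytes, as assumed in this task

    // keys[i] is the i-th key (row, column, timestamp); sizes[i] is the total byte size
    // of that key-value pair (e.g. 3 + 1 + 1 + 6 = 11 for the first pair in the diagram).
    public static List<String> buildIndex(List<String> keys, List<Integer> sizes) {
        List<String> index = new ArrayList<>();
        int bytesInBlock = 0;
        for (int i = 0; i < keys.size(); i++) {
            if (bytesInBlock == 0) {
                index.add(keys.get(i));        // first key of a new block goes into the index
            }
            bytesInBlock += sizes.get(i);
            if (bytesInBlock >= BLOCK_SIZE) {
                bytesInBlock = 0;              // block is full: the next pair starts a new block
            }
        }
        return index;
    }

    public static void main(String[] args) {
        // Hypothetical pairs, for illustration only.
        List<String> keys = Arrays.asList("101-A-3", "101-B-3", "204-A-7", "377-C-1");
        List<Integer> sizes = Arrays.asList(11, 18, 15, 12);
        System.out.println(buildIndex(keys, sizes)); // prints [101-A-3, 377-C-1]
    }
}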


Based on the contents of the HFile above, you need to populate the index, following the approach described in the lecture slides. Use the following table (again, you can edit it by double-clicking). Use as many or as few rows as you need.

RowId Column Version

Exercise 4 — Thinking about the schema (optional)

A very important schema design question in HBase is the choice of the row key.

Imagine that you have a dataset containing:

addresses of websites (URLs), potentially of all websites available online

for each URL: the country in which the owner of the website is registered

for each URL and for each country in the world: the number of visits to that URL from that country during the last month

You plan to store this dataset in HBase. For each of the following queries, state what you think is the best choice for the row key:

1. Given a particular URL, count the total number of visits

2. Given a particular country, find the URL that is visited the most by the users from that country

3. Among all URLs whose owners are registered in a particular country, find the most visited one.