Distributed scans with HBase


HBase is by design a columnar store, optimized for random reads. You
just ask for a row using its rowId as an identifier and you get your
data almost instantly.

Performing a scan over part of a table, or the whole of it, is a completely
different thing. First of all, it is sequential, which makes it rather slow,
because it doesn’t use all the RegionServers at the same time. It is
implemented that way to fulfil the contract of the Scan command, which has
to return results sorted by key.

So, how can we do this efficiently?

The usual way of getting data from HBase is through its client API,
mainly Scan objects, and that is exactly what I’ll use here. I’ll
specify startRow and stopRow, so that each Scan request will look
through only a part of the key space.
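
For a single key range this boils down to a bounded scan. Here is a minimal sketch in Java (not the actual code from the repo linked below), assuming an already open client Connection and the tariff_changes table described further down:

import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Scans only the keys in ["1", "2") instead of the whole table.
void scanRange(Connection conn) throws IOException {
    Scan scan = new Scan();
    scan.setStartRow(Bytes.toBytes("1")); // inclusive lower bound
    scan.setStopRow(Bytes.toBytes("2"));  // exclusive upper bound
    try (Table table = conn.getTable(TableName.valueOf("tariff_changes"));
         ResultScanner scanner = table.getScanner(scan)) {
        for (Result result : scanner) {
            // process each row here
        }
    }
}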

It is crucial to note that this whole solution works because of the key
sorting properties in HBase: a table is scanned in ascending key order.
Since keys are of String type, they are compared lexicographically, so
the key “1” is smaller than “2”, and so is the key “12345”. I’ve
leveraged this property and partitioned my whole key space by the first
character of the key. In my case keys contain only digits, so I have
10 ranges:

null-1
1-2
2-3
3-4
4-5
5-6
6-7
7-8
8-9
9-null
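
Translated into code, building one Scan per range from the list above might look something like this (a sketch; note how the bounds are left open to match the edge ranges null-1 and 9-null):

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

// Builds one Scan per first-digit range: (null, "1"), ["1", "2"), ..., ["9", null).
List<Scan> buildScans() {
    List<Scan> scans = new ArrayList<Scan>();
    for (int i = 0; i <= 9; i++) {
        Scan scan = new Scan();
        if (i > 0) {
            scan.setStartRow(Bytes.toBytes(String.valueOf(i)));    // inclusive
        }
        if (i < 9) {
            scan.setStopRow(Bytes.toBytes(String.valueOf(i + 1))); // exclusive
        }
        scans.add(scan);
    }
    return scans;
}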

The speedup comes from the fact that each range resides in its own
partition. That’s right: I’ve presplit the table into 10 partitions.
This plays rather nicely with my cluster’s setup, because I have more
than 10 RegionServers, so every partition should land on a different
RegionServer. That lets the code run the scan operations in parallel,
which is exactly where the performance boost comes from.

Here’s how I created the input table:

hbase> create 'tariff_changes', { NAME => 'cf', SPLITS_FILE => 'splits.txt', VERSIONS => 50, MAX_FILESIZE => 1073741824 }

hbase> alter 'tariff_changes', { NAME => 'cf', SPLITS_FILE => 'splits.txt', VERSIONS => 50, MAX_FILESIZE => 1073741824 }

The splits file is just something along these lines:

1
2
3
4
5
6
7
8
9
0

This tells HBase which rowKeys start and end each of the partitions.

OK, so after this rather lengthy introduction, what does the actual code
do? It just spins off a few threads, one for each partition, and runs a
Scan request tailored to that partition’s key space. This way I got a
10x speedup for this particular scan: the execution time went down from
30 minutes to 3 in my use case.
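
Here is a minimal sketch of that fan-out (again, not the exact code from the repo linked below), assuming the buildScans() helper from above and an already open Connection:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

// Runs one scan per partition on its own thread and waits for all of them.
void scanInParallel(Connection conn) throws Exception {
    List<Scan> scans = buildScans();
    ExecutorService pool = Executors.newFixedThreadPool(scans.size());
    List<Future<Long>> futures = new ArrayList<Future<Long>>();
    for (Scan scan : scans) {
        futures.add(pool.submit(() -> {
            long rows = 0;
            // Connection is thread-safe, Table is not, so open one per thread.
            try (Table table = conn.getTable(TableName.valueOf("tariff_changes"));
                 ResultScanner scanner = table.getScanner(scan)) {
                for (Result result : scanner) {
                    rows++; // replace with the real per-row processing
                }
            }
            return rows;
        }));
    }
    long total = 0;
    for (Future<Long> future : futures) {
        total += future.get(); // rethrows if any scan failed
    }
    pool.shutdown();
    System.out.println("scanned " + total + " rows");
}

The one detail worth calling out: the HBase client’s Connection is safe to share between threads, while Table instances are not, which is why each worker opens its own Table.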

I’ve created an example implementation of this idea. You can find it on
GitHub:
https://github.com/zygm0nt/hbase-distributed-search.

Any ideas on how to speed things up even more with HBase?

You May Also Like

Need to make quick JSON fixes? JSONPath to the rescue

From time to time I need to make some quick fixes in my JSON data. In the world of flat files I do this with the grep/sed/awk tool chain, but how to handle it for JSON? Searching for a solution, I came across JSONPath. It’s quite a mature tool (dating back to 2007), but I hadn’t heard about it before, so I decided to share my experience with others.

First of all, you can try it online without any pain at http://jsonpath.curiousconcept.com/. The full syntax is described at http://goessner.net/articles/JsonPath/.

But you can also download the Python bindings and run it from the command line:
$ sudo apt-get install python-jsonpath-rw
$ sudo apt-get install python-setuptools
$ sudo easy_install -U jsonpath

After that you can use it from inside Python, or with a simple CLI wrapper:
#!/usr/bin/python
import sys, json, jsonpath

# the JSONPath expression is the first command-line argument
path = sys.argv[1]

# read JSON from stdin, apply the expression and pretty-print the matches
result = jsonpath.jsonpath(json.load(sys.stdin), path)
print json.dumps(result, indent=2)

…and then you can use it in your shell, e.g. for this JSON:
{
  "store": {
    "book": [
      {
        "category": "reference",
        "author": "Nigel Rees",
        "title": "Sayings of the Century",
        "price": 8.95
      },
      {
        "category": "fiction",
        "author": "Evelyn Waugh",
        "title": "Sword of Honour",
        "price": 12.99
      },
      {
        "category": "fiction",
        "author": "Herman Melville",
        "title": "Moby Dick",
        "isbn": "0-553-21311-3",
        "price": 8.99
      },
      {
        "category": "fiction",
        "author": "J. R. R. Tolkien",
        "title": "The Lord of the Rings",
        "isbn": "0-395-19395-8",
        "price": 22.99
      }
    ],
    "bicycle": {
      "color": "red",
      "price": 19.95
    }
  }
}

You can print only the book nodes with a price lower than 10 by:
$ jsonpath '$..book[?(@.price < 10)]'

Result:
[
  {
    "category": "reference",
    "price": 8.95,
    "title": "Sayings of the Century",
    "author": "Nigel Rees"
  },
  {
    "category": "fiction",
    "price": 8.99,
    "title": "Moby Dick",
    "isbn": "0-553-21311-3",
    "author": "Herman Melville"
  }
]

Happy JSON hacking!