Hadoop HA setup

With the advent of Hadoop 2.x, there finally is a working
High-Availability solution – two of them, in fact. Both are now easy to
configure and use, and neither requires external components like
DRBD.
Everything comes neatly packed into the Cloudera Hadoop distribution,
which pioneered this solution.

Read on to find out how to use it.

The most important weakness of previous Hadoop releases was a
single point of failure: the NameNode. The NameNode, as a key
component of every Hadoop cluster, is responsible for managing
filesystem namespace information and block locations. Losing its data
results in losing all the data stored on the DataNodes: HDFS is no
longer able to locate specific files or their blocks. This renders your
cluster inoperable.

So it is crucial to be able to detect and counter problems with the NameNode.
The most desirable behavior is to have a hot backup that ensures
no-downtime cluster operation. To achieve this, the second NameNode
needs up-to-date filesystem metadata, and it also needs to be up and
running – starting a NameNode with an existing set of data may easily
take many minutes, as it has to parse the actual filesystem state.

The previously used solution – deploying a SecondaryNameNode – was somewhat
flawed. It took a long time to recover after a failure, and it was not a
hot-backup solution, which only added to the problem. Something better
was required.

So, two things needed to be made redundant: the contents of the edits
directory, and the block location maps that each DataNode reports –
in an HA deployment – to both NameNodes. This was accomplished in
two steps. The first, released with the CDH 4 beta, is a solution based
on a shared edits directory. Then, with CDH 4.1, came a quorum-based solution.

Find out how to configure those on your cluster.

Shared edits directory solution

This kind of setup assumes that a shared storage directory exists in
the cluster. It should be provided by some kind of network-based
filesystem – NFS or GlusterFS, for example.

<property>
  <name>fs.default.name</name>
  <value>hdfs://example-cluster</value>
</property>
<!-- common server name -->
<property>
  <name>dfs.nameservices</name>
  <value>example-cluster</value>
</property>

<!-- HA configuration -->
<property>
  <name>dfs.ha.namenodes.example-cluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.example-cluster.nn1</name>
  <value>master1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.example-cluster.nn2</name>
  <value>master2:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.example-cluster.nn1</name>
  <value>0.0.0.0:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.example-cluster.nn2</name>
  <value>0.0.0.0:50070</value>
</property>

<!-- Storage for edits' files -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>file:///mnt/filer1/dfs/ha-name-dir-shared</value>
</property>

<!-- Client failover -->
<property>
  <name>dfs.client.failover.proxy.provider.example-cluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

<!-- Fencing configuration -->
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/user/.ssh/id_dsa</value>
</property>


<!-- Automatic failover configuration -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
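
Once both NameNodes are configured, the metadata has to be put in place
before the daemons are started. A minimal sketch of these steps,
assuming the stock Hadoop 2.x command-line tools:

# On the currently running NameNode: copy the local edits log
# into the shared directory
hdfs namenode -initializeSharedEdits

# On the second NameNode: pull the current namespace from the active one
hdfs namenode -bootstrapStandby

# Sanity check for the sshfence method: each NameNode host must reach
# the other over SSH using the configured private key
ssh -i /home/user/.ssh/id_dsa master2 'hostname'

From now on clients do not address a single host any more – thanks to
the failover proxy provider they use the nameservice name, e.g.
hadoop fs -ls hdfs://example-cluster/.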

This setup is quite OK, as long as you’re comfortable with maintaining a
separate service (network storage) for handling the HA state. It seems
error-prone to me, because it adds another service whose high
availability has to be ensured. NFS seems to be a bad choice here,
because as far as I know it does not offer HA out of the box.

On the other hand, we have GlusterFS, a distributed filesystem which
you can deploy on multiple bricks with an increased replication level.

Nevertheless, it still brings additional burden of another service to
maintain.

Quorum based solution

With the release of CDH 4.1.0 we are now able to use a much better
integrated solution called JournalNode. All the updates are now
synchronized through JournalNodes. Each JournalNode holds the same data,
and all the NameNodes are able to receive filesystem state updates from
those daemons.

This solution is much more consistent with Hadoop ecosystem.

Please note that the config is almost identical to the one needed for
the shared edits directory solution. The only difference is the value
of dfs.namenode.shared.edits.dir, which now points to all the
JournalNodes deployed in our cluster.

<property>
  <name>fs.default.name</name>
  <value>hdfs://example-cluster</value>
</property>
<!-- common server name -->
<property>
  <name>dfs.nameservices</name>
  <value>example-cluster</value>
</property>

<!-- HA configuration -->
<property>
  <name>dfs.ha.namenodes.example-cluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.example-cluster.nn1</name>
  <value>master1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.example-cluster.nn2</name>
  <value>master2:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.example-cluster.nn1</name>
  <value>0.0.0.0:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.example-cluster.nn2</name>
  <value>0.0.0.0:50070</value>
</property>

<!-- Storage for edits' files -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://node1:8485;node2:8485;node3:8485/example-cluster</value>
</property>

<!-- Client failover -->
<property>
  <name>dfs.client.failover.proxy.provider.example-cluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

<!-- Fencing configuration -->
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/user/.ssh/id_dsa</value>
</property>


<!-- Automatic failover configuration -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
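
Once the NameNodes and JournalNodes are up, it is worth checking which
NameNode actually holds the active role. A quick check, using the
service IDs defined above:

# Query the HA state of each NameNode
# (nn1/nn2 come from dfs.ha.namenodes.example-cluster)
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2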

Infrastructure

In both cases you need to run the ZooKeeper-based failover controller
(hadoop-hdfs-zkfc). This daemon negotiates which NameNode should
become active and which one standby.
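
A minimal sketch of that part, assuming CDH-style package and service
names on a Debian-based system:

# Initialize the HA state in ZooKeeper – run once, from one of the NameNodes
hdfs zkfc -formatZK

# Install and start the failover controller on each NameNode host
sudo apt-get install hadoop-hdfs-zkfc
sudo service hadoop-hdfs-zkfc start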

But that’s not all. Depending on the way you’ve chosen to deploy HA,
you need to do a few other things:

Shared edits dir

With the shared edits dir you need to deploy a networked filesystem and
mount it on both your NameNodes (as in the mount sketch above). After
that you can run your cluster and be happy with your new HA.

Quorum based

For QJournal to operate you need to install one new package,
hadoop-hdfs-journalnode, which provides startup scripts for the
JournalNode daemons. Choose at least three nodes that will be
responsible for handling the edits state and deploy JournalNodes on
them, as shown below.
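
On each of the chosen hosts this boils down to something like the
following (again assuming Debian-style CDH packages; on Red Hat-like
systems use yum instead):

# Install and start the JournalNode daemon on every journal host
sudo apt-get install hadoop-hdfs-journalnode
sudo service hadoop-hdfs-journalnode start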

Conclusion

Thanks to the guys from Cloudera, we can now use enterprise-grade High
Availability features for Hadoop. Eliminating the single point of
failure in your cluster is essential for the easy maintainability of
your infrastructure.

Given the above choices, I’d suggest the QJournal setup, because of
its relatively small impact on the overall cluster architecture. Its
good performance and fairly simple setup let users easily start
running Hadoop in an HA setup.

Are you using Hadoop with HA? What are your impressions?
