Gerrit notifications via Rocket.Chat

Case

Gerrit often sends a lot of emails, especially if you take part in many projects. For a while we felt that it was sometimes hard to notice the most important ones, like reviewers’ and Sputnik’s comments on our changes. We use Rocket.Chat for text communication inside the company and most of us are connected throughout the day, so we thought it might be useful to get a chat message notification every time a comment is added to one’s changes.

Gerrit hooks

Gerrit has a built-in mechanism for running hooks – scripts that are called whenever a specific event occurs. The script must have the same name as the hook. We created a bash script named comment-added, which is run every time someone adds a comment. Gerrit provides it with a lot of useful parameters, like the project name, comment author, score, change owner, etc. Full documentation can be found here. After parsing those parameters, we can send a message to the change owner on Rocket.Chat.
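
A minimal sketch of what such a hook might look like is shown below. The exact flag names depend on the Gerrit version (check the hooks documentation for your release), and notify-rocketchat is a hypothetical helper that wraps the Rocket.Chat REST calls described in the next section:

#!/bin/bash
# comment-added – sketch of a Gerrit hook notifying the change owner.
# Gerrit passes its parameters as "--name value" pairs; we only use a few.

while test $# -gt 1; do
  case "$1" in
    --project)      project="$2" ;;
    --change-url)   change_url="$2" ;;
    --change-owner) change_owner="$2" ;;
    --author)       author="$2" ;;
  esac
  shift 2
done

# Don't notify people about comments they added themselves.
test "$author" = "$change_owner" && exit 0

# notify-rocketchat is a hypothetical helper script, not part of Gerrit.
/path/to/hooks/notify-rocketchat "$change_owner" \
  "New comment on your change in $project by $author: $change_url"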

Gerrit hook scripts have to be placed in a specific location. To avoid manually updating the files there, we set up a repository for the hook scripts. They are periodically pulled to the correct location, which simplifies the process of making changes to the scripts.
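
One way to do this is a cron job on the Gerrit host that pulls the repository; for example, a crontab entry along these lines (the path is just a placeholder):

# Refresh the hook scripts in Gerrit's hooks directory every 15 minutes.
*/15 * * * * git -C /path/to/gerrit/hooks pull --quiet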

Integration with Rocket.Chat

Rocket.Chat has a pretty versatile REST API that allows us to send messages by calling curl from the comment-added script. Full documentation for the API can be found here. Currently, the API is in beta, but so far we haven’t had any issues with it.

We previously set up a user that sends automatic Jenkins messages to our team chat, and we reuse that user here. First, we have to log in by calling the api/v1/login endpoint:

curl https://<Rocket.Chat server address>/api/v1/login -d "username=<username>&password=<password>"

In response, we get a JSON object with the logged-in user’s id and authorization token:

{
  "status": "success",
  "data": {
    "authToken": "<authorization token>",
    "userId": "<user id>"
  }
}
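
In the hook script, the token and user id can then be extracted from that response, for example with jq (assuming it is available on the Gerrit host):

# Log in and capture the JSON response (placeholders as in the example above).
response=$(curl -s "https://<Rocket.Chat server address>/api/v1/login" \
  -d "username=<username>&password=<password>")

# Extract the fields needed to authenticate further API calls.
auth_token=$(echo "$response" | jq -r '.data.authToken')
user_id=$(echo "$response" | jq -r '.data.userId')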

Next, we send a direct message to the user by their username (in our case, we can derive the username from the change owner’s email) by calling the api/v1/chat.postMessage endpoint. This works even if there was no previous conversation between the users – no need to set up a room or open a chat first. Example:

curl -H "X-Auth-Token: <authorization token>" \
  -H "X-User-Id: <user id>" \
  -H "Content-Type: application/json" \
  -d "{\"channel\": \"@<recipient username>\", \"text\": \"<message>\"}" \
  https://<Rocket.Chat server address>/api/v1/chat.postMessage

Summary

We created a simple script to solve the problem of getting notified when comments are added to our changes. So far, the team seems pleased with how this works and finds it quite useful. We hope it will be useful for you as well – the full code can be found here.

You May Also Like

Recently at storm-users

I've been reading through the storm-users Google Group recently. This resolution was heavily inspired by Adam Kawa's post "Football zero, Apache Pig hero". Since I've encountered a lot of insightful and very interesting information, I've decided to describe some of it in this post.

  • nimbus will work in HA mode - There's a pull request open for it already... but some recent work (distributing topology files via Bittorrent) will greatly simplify the implementation. Once the Bittorrent work is done we'll look at reworking the HA pull request. (storm’s pull request)

  • pig on storm - Pig on Trident would be a cool and welcome project. Join and groupBy have very clear semantics there, as those concepts exist directly in Trident. The extensions needed to Pig are the concept of incremental, persistent state across batches (mirroring those concepts in Trident). You can read a complete proposal.

  • implementing topologies in pure python with petrel looks like this:

class Bolt(storm.BasicBolt):
    def initialize(self, conf, context):
        ''' This method is executed only once '''
        storm.log('initializing bolt')

    def process(self, tup):
        ''' This method is executed every time a new tuple arrives '''
        msg = tup.values[0]
        storm.log('Got tuple %s' % msg)

if __name__ == "__main__":
    Bolt().run()

  • Fliptop is happy with storm - see their presentation here

  • topology metrics in 0.9.0: The new metrics feature allows you to collect arbitrary custom metrics over fixed windows. Those metrics are exported to a metrics stream that you can consume by implementing IMetricsConsumer and configuring it with Config.java#L473. Use TopologyContext#registerMetric to register new metrics.

  • storm vs flume - some users' point of view: I use Storm and Flume and find that they are better at different things - it really depends on your use case as to which one is better suited. First and foremost, they were originally designed to do different things: Flume is a reliable service for collecting, aggregating, and moving large amounts of data from source to destination (e.g. log data from many web servers to HDFS). Storm is more for real-time computation (e.g. streaming analytics) where you analyse data in flight and don't necessarily land it anywhere. Having said that, Storm is also fault-tolerant and can write to external data stores (e.g. HBase), and you can do real-time computation in Flume (using interceptors).

That's all for this day - however, I'll keep on reading through storm-users, so watch this space for more info on storm development.
