New HTTP Logger Grails plugin


I’ve written a new Grails plugin – httplogger. It logs:

  • request information (URL, headers, cookies, method, body),
  • Grails dispatch information (controller, action, parameters),
  • response information (elapsed time and body).

It is mostly useful for logging your REST traffic. Full HTTP web pages can be huge to log and generally just waste space. I suggest mapping all of your REST controllers under the same path prefix in UrlMappings, e.g. /rest/, and configuring this plugin with that path.
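As a rough illustration, a standard Grails UrlMappings.groovy could route all REST controllers under a common /rest/ prefix. The /rest/ prefix is just a placeholder here, not something the plugin prescribes:

// grails-app/conf/UrlMappings.groovy
class UrlMappings {

    static mappings = {
        // Keep every REST endpoint under a single /rest/ prefix,
        // so the plugin can be pointed at just this path.
        "/rest/$controller/$action?/$id?"()

        // Regular HTML pages stay on the default mapping and are not logged.
        "/$controller/$action?/$id?"()

        "500"(view: '/error')
    }
}

With all REST traffic behind one prefix, the plugin only needs to be told about that single path in its configuration (see the plugin documentation for the exact configuration key).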

Here is some simple output just to give you a taste of it.

17:16:00,331 INFO  filters.LogRawRequestInfoFilter  - << #1 GET http://localhost:8080/riddle/rest/index?username=admin&search=foo
17:16:00,340 INFO  filters.LogRawRequestInfoFilter  - << #1 headers Cookie: 'JSESSIONID=DF4EA5725AC4A4990281BD96963739B0; splashShown1.6=1', Accept-Language: 'en-US,en;q=0.8,pl;q=0.6', X-MyHeader: 'null'
17:16:00,342 INFO  filters.LogGrailsUrlsInfoFilter  - << #1 dispatched to rest/index with parsed params [username:[admin], search:[foo]].
17:16:00,731 INFO  filters.LogOutputResponseFilter  - >> #1 returned 200, took 405 ms.
17:16:00,745 INFO  filters.LogOutputResponseFilter  - >> #1 responded with '{count:0}'

17:18:55,799 INFO  filters.LogRawRequestInfoFilter  - << #2 POST http://localhost:8080/riddle/rest/login
17:18:55,799 INFO  filters.LogRawRequestInfoFilter  - << #2 headers Cookie: 'JSESSIONID=DF4EA5725AC4A4990281BD96963739B0; splashShown1.6=1', Accept-Language: 'en-US,en;q=0.8,pl;q=0.6', X-MyHeader: 'null'
17:18:55,800 INFO  filters.LogRawRequestInfoFilter  - << #2 body: 'username=admin&password=password'
17:18:55,801 INFO  filters.LogOutputResponseFilter  - >> #2 returned 404, took 3 ms.
17:18:55,802 INFO  filters.LogOutputResponseFilter  - >> #2 responded with ''
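The sample above was produced with the filter loggers raised to INFO. A minimal log4j sketch in Config.groovy could look like the one below; it assumes the plugin's filters log under the standard grails.app.filters category, which may differ from the shortened names shown in the output:

// grails-app/conf/Config.groovy -- a minimal sketch, not the plugin's official setup.
log4j = {
    appenders {
        // %c{2} shortens the logger name to e.g. 'filters.LogRawRequestInfoFilter'.
        console name: 'stdout',
                layout: pattern(conversionPattern: '%d{HH:mm:ss,SSS} %-5p %c{2}  - %m%n')
    }

    // Assumption: the plugin's filters log under grails.app.filters.*;
    // adjust the category if the plugin uses a different package.
    info 'grails.app.filters'
}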

Official plugin information can be found on the Grails plugins website: http://grails.org/plugins/httplogger, or you can browse the code on GitHub: TouK/grails-httplogger.
