Control your bandwidth using ntop


I was looking for a tool that could help me check who is using my bandwidth. Here are the requirements I had for this kind of tool:

  1. local hosts' bandwidth distribution – helpful when you are losing bandwidth and don't know who is abusing it in your local network
  2. remote hosts' bandwidth distribution – useful when you want to keep an eye on DoS attacks against your public homepage, or when your QoS rules are not set up well

 

Gargoyle

My first shot was to check what features my TP-Link TL-WR941ND router could offer. Some time ago I installed Gargoyle on it (a modification of OpenWRT with some additional features). It has some useful monitoring features:

  • bandwidth distribution pie charts, which answer my first requirement, but there is no way to check when the bandwidth was used
  • connection tracking – from this I can see both sides of a connection (including the remote host) and how much data was sent/received, but it also doesn't show this information in the time domain, and it is served in a less friendly, text-only form

 

It was not exactly what I was looking for. Therefore I checked what can be found in OPKG (the OpenWRT package manager).

SNMP + NagiosGraph

I tried to find out how I could link Nagios (with NagiosGraph) to my router, because I already have some experience with these tools. It turns out there is a check_snmp Nagios plugin which can do exactly this. In OPKG there is a mini-snmpd package – a lightweight SNMP server implementation. You can start it after logging in to your router over SSH.
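A minimal sketch of those steps, assuming the stock OpenWRT mini-snmpd package with its init script and default settings (the config file location and option names may differ between OpenWRT releases):

    # on the router, over SSH
    opkg update
    opkg install mini-snmpd
    # adjust the community string and listening interface to taste
    # (the settings usually live in /etc/config/mini_snmpd)
    /etc/init.d/mini_snmpd enable
    /etc/init.d/mini_snmpd start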

After this you can check what data the server makes available.
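For example, with the net-snmp tools installed on the monitoring host (the router address 192.168.1.1 and the public community string below are just placeholders):

    # walk the default MIB-2 subtree exposed by mini-snmpd
    snmpwalk -v 2c -c public 192.168.1.1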
The returned MIB tree contains some useful data, like the server's uptime, disk space and the interfaces' bandwidth counters. The last of these, stored in a Round Robin Database and plotted by NagiosGraph, gives graphs of bandwidth usage over time – but it still will not show who exactly is using the bandwidth!

 

Other software

I continued searching through the OpenWRT packages and came across a good OpenWRT wiki page describing some of the available tools: http://wiki.openwrt.org/doc/howto/bwmon.

 

ntop

Among others, it mentions ntop – an extensive application written in C, with many views showing statistics of network protocol usage. Installing this application on my router with its 400 MHz CPU would not be the best idea, so I tried to install it on my home server and only send data to it from the router via fprobe. At first I installed the ntop available from the Ubuntu 12.04 server APT repository, version 3:4.1.0+dfsg1-1. After some simple configuration steps ntop started drawing graphs.
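The router-side export is basically a one-liner; a sketch, assuming br-lan is the LAN bridge on the router and 192.168.1.10:2055 is where ntop's NetFlow plugin has been configured to listen:

    # on the router: install fprobe and export NetFlow records to the ntop host
    opkg install fprobe
    fprobe -i br-lan 192.168.1.10:2055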

 

I simulated a situation in which I was downloading a big file from my home server to a remote server. I was disappointed when I noticed that I couldn't see this happening on the ntop graphs.

 

listening on an interface in promiscuous mode

Some time ago, during my studies, I wrote a tcpdump log analyzer. I remembered that an interface working in promiscuous mode can collect all data about local network traffic, just like the router does. To enable this mode you can run a command like the one below.
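A sketch, assuming eth0 is the interface connected to the monitored LAN:

    sudo ifconfig eth0 promisc
    # or, with iproute2:
    sudo ip link set eth0 promisc on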

Or, if you want to make this state persistent, you can edit your /etc/network/interfaces along these lines.
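Something like the following, assuming eth0 configured via DHCP:

    auto eth0
    iface eth0 inet dhcp
        up ip link set $IFACE promisc on
        down ip link set $IFACE promisc off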
If the server where you want to listen for all packets is a VirtualBox guest, you should also verify that its promiscuous mode is set to "Allow all" in the network configuration, like on the screenshot below.

ntop v.5.0.2

With these settings in place we can run ntop on any server in our local network. I gave the development version a try, which you can download from the ntop homepage: http://www.ntop.org/get-started/download/. The configure script led me through the packages that must be installed before compilation. After that I ran make and sudo make install. To manage ntop with init scripts I reused the existing /etc/init.d/ntop script and just edited the line with the DAEMON value, setting it to /usr/local/bin/ntop. I also removed the -n 0 switch from /etc/default/ntop, because I hope the bug with DNS resolution is already fixed (there is a little note about it in the config).
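A condensed sketch of those steps (the list of missing development packages reported by configure will vary from system to system):

    # in the unpacked ntop source tree
    ./configure
    make
    sudo make install          # puts the binary in /usr/local/bin

    # reuse the packaged init script, pointing it at the new binary:
    # in /etc/init.d/ntop set
    #   DAEMON=/usr/local/bin/ntop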

 

I started the daemon with service ntop start. There was nothing alarming in syslog – ntop started collecting traffic statistics. After logging in I checked the available features.

  • Network load – this page shows all the load in our network over four time intervals: last 10 minutes, last hour, last day, last month
  • Top talkers – with intervals similar to the network load page, shows how hosts were using bandwidth in the past
  • Traffic maps: region map & hosts map – ntop is connected to Google Maps and shows where the hosts we are talking to are located
  • Activity – how the activity of hosts changes hour by hour
  • And more – there are other useful things like protocol statistics, a map of connections between hosts generated with dot, and many more
After some tests I noticed that I now have full control over how my network is used (I also found out that I had a scheduled script sending unnecessary megabytes of data every minute ;-)).

 

little fix

These tests helped me find a little bug in the page showing the top talkers of an hour. I submitted a patch fixing it to ntop's request tracker, if you are interested: http://sourceforge.net/tracker/?func=detail&aid=3559097&group_id=17233&atid=367233. The patch is against r5644.

In the end

My adventure with traffic monitoring tools ended with ntop. It is a great tool that fits my needs. Now I know who consumes my resources and can set QoS rules that make my internet connection more responsive.


33rd Degree day 1 review

33rd Degree is over. After the one last year, my expectations were very high, but Grzegorz Duda once again proved he's more than able to deliver. With up to five tracks (most of the time: four presentations + one workshop) and ~650 attendees, there was a lot to see and a lot to do, so everyone will probably have a slightly different story to tell. Here is mine.

Twitter: From Ruby on Rails to the JVM

Raffi Krikorian talking about Twitter and JVM
The conference started with Raffi Krikorian from Twitter, talking about their use of the JVM. Twitter was built with Ruby, but to get performance under control a lot of the backend was moved to Scala, Java and Clojure. Raffi noted that for Ruby programmers Scala was easier to grasp than Java, more natural, which is quite interesting considering how many PHP guys are moving to Ruby these days for the same reasons. Perhaps the learning path Jacek Laskowski once described (Java -> Groovy -> Scala/Clojure) may be on par with PHP -> Ruby -> Scala. It definitely feels like Scala is the holy grail of languages these days.

Raffi also noted that while the JVM delivered speed and a concurrency model to the Twitter stack, it wasn't enough, and they built/customized their own garbage collector. My guess is that Scala/Clojure could also have been chosen for their nice concurrency solutions (STM, immutables and so on).

Raffi pointed out that at Twitter's scale you easily get 3 million hits per second, which means you probably hit 3 edge cases every second. I'd love to hear the lessons they've learned from this.

 

Complexity of Complexity


The second keynote of the first day was Ken Sipe talking about complexity. He made a good point that there is a difference between complex and complicated, and that we often recognize things as complex only because we are less familiar with them. This gets more interesting the moment you realize that over the last 20 years computer languages have shifted from the "less is more" paradigm (think Java, ASM) to "more is better" (Groovy/Scala/Clojure): you get a more complex language, with a more powerful and less verbose syntax, that is actually not more complicated – it just looks less familiar.

So while 10 years ago I really liked Java as a general-purpose language for its small set of rules that could get you everywhere, it turned out that to do most of the real-world stuff a lot of code had to be written. The situation got better thanks to libraries/frameworks and so on, but that's just patching. New languages have a lot of stuff built in, which makes their set of rules and syntax much more complex, but once you get familiar with them, real-world usage is simpler, faster and better, with fewer traps lying around, waiting for you to fall in.

Ken also pointed out that while an Enterprise Service Bus looks really simple on diagrams, it's usually very difficult and complicated to use from the programmer's perspective. And that's probably why it gets chosen so often – the guys selling/buying it look no deeper than the diagram.

 

Pointy haired bosses and pragmatic programmers: Facts and Fallacies of Software Development

Venkat Subramaniam with Dima
Dima got lucky. Or maybe not.

Venkat Subramaniam is the kind of speaker who talks about very simple things in a way that makes everyone either laugh or reflect. Yes, he is a showman, but hey, that's actually good, because even if you know the subject quite well, his talks are still very entertaining.
This talk was very generic (here's my thesis: the longer the title, the more generic the talk will be), interesting and fun, but at the end I can't point to anything new I learned, apart from the distinction between dynamic vs static and strong vs weak typing, which I saw last year but managed to forget. This may be a very interesting argument for all those who are afraid of Groovy/Ruby after a bad experience with PHP or Perl.

Build Trust in Your Build to Deployment Flow!


Frederic Simon talked about DevOps and deployment, and that was a miss in my schedule, for two reasons. First, the talk was aimed specifically at DevOps, and while the subject has been trendy lately, without big-scale problems deployment is a process I usually set up and forget about. It just works, mostly because I only have to deal with one (current) project at a time.
Not much love for Dart.
Second, while Frederic has a fabulous accent and a nice, loud voice, he tends to start each sentence loud and fade out towards the end. This, together with the mics failing him badly, made half of the presentation hard to follow unless you were sitting in the first row.
I'm not saying the presentation was bad, far from it – it just clearly wasn't for me.
I left a few minutes before the end to see how many people came to the Dart presentation by Mike West. I was kind of interested, since I follow the Warsaw Google Technology User Group and had heard a few voices about why I should pay attention to that new Google language. As you can see from the picture on the right, the majority tends to disagree with that opinion.

 

Non blocking, composable reactive web programming with Iteratees

Sadek Drobi's talk about Iteratees in Play 2.0 was very refreshing. Perhaps because I've never used Play before, but the presentation was flawless, with well-explained problems, concepts and solutions.
Sadek started with a reflection on how much CPU we waste waiting for IO in web development, then moved to Play's Iteratees to explain the concept and the implementation, which, while very different from that overused Request/Servlet model, looked really nice and simple. I'm not sure, though, how much of the problem remains when you have a simple service serving static content in front of your app server – think Apache (or something faster) in front of Tomcat. That won't fix the upload/download issue, though, which is beautifully solved in Play 2.0.

The Future of the Java Platform: Java SE 8 & Beyond


Simon Ritter is an intriguing fellow. If you take a glance at his work history (AT&T UNIX System Labs -> Novell -> Sun -> Oracle), you can easily see he's a heavyweight player.
His presentation was rich in content, with no corpo-bullshit. He started with a bit of history of the JCP and what it looks like right now, then moved to the most interesting stuff: the changes. I could give you a summary here, but there is really no point: you'd be much better off taking a look at the slides. There are only 48 of them, and everything is self-explanatory.
While I'm very disappointed with the pace of change, especially compared to the C# world, I'm glad about the direction and the fact that they finally want to BREAK compatibility with the broken stuff (generics, etc.). Moving to other languages, I guess I won't be the one to scream "My god, finally!" somewhere around 2017, though. All the changes together look very promising, it's just that I'd like to have them, like... now? Next year max, not near the heat death of the universe.

Simon also revealed one of the great mysteries of Java to me:
The original idea behind JNI was to make it hard to write, to discourage people from using it.
On a side note, did you know the Tegra 3 actually has 5 cores? You use 4 of them, and then switch to the fifth one when your battery gets low.

BOF: Spring and CloudFoundry


With most of my folks having moved to see "Typesafe stack 2.0", fabulously organized by Rafał Wasilewski and Wojtek Erbetowski (with both of whom I had the pleasure of travelling to the conference), and knowing it would be recorded, I decided to see what Josh Long had to say about CloudFoundry, a subject I find very intriguing after the de facto fiasco of Google App Engine.

The audience was small but vibrant, mostly users of Amazon EC2, and while it turned out that Josh didn't have much to share, with pricing and details not yet public, the fact that SpringSource has already created their own competition (Cloud Foundry is both an open source app and a service) takes away a lot of my anxiety.

For the review of the second day of the conference, go here.