Hacking a robot

A previous post summarized “THE HACKATHON” at TouK. Today we will present one of the projects in greater detail – the “Lidar/ROS.org based robot”. Our team wanted to either transport sandwiches or monitor WiFi quality in the office. Without settling on the final goal, we immediately saw that a mobile robot platform would be needed in both cases.

Analysis

Some of our coworkers own Xiaomi vacuum cleaners. Such a robot can be managed from a mobile app which displays an accurate map of your premises and lets you select areas that need cleaning. The Xiaomi robot looked promising, but two problems arose. First, the price is significant. Second, the communication protocol is not open. There are libraries on GitHub which try to reverse engineer the details, but a quick analysis showed that we might end up stuck on some irritating problem and fail to realise our goals.

A closer examination of the Xiaomi robot revealed the core piece that allows it to navigate around the house automatically. It is a LIDAR – a laser distance measurement device. Another brand, the Roomba vacuum cleaners, relies on a camera-like sensor instead. As computer vision seemed more difficult to approach, we decided to use LIDAR to build a two-dimensional map of the robot’s surroundings and navigate around the office.

Hardware

LIDAR technology is not as cheap as simple ultrasonic distance sensors, but we found two promising solutions on the market: YDLIDAR and RPLIDAR. Both brands are lines of products with increasing capabilities and prices, but the basic ones were within our budget. A quick comparison showed that the parameters of the respective lowest-end models are similar, so we decided to order a YDLIDAR because it was the quickest to ship from Amazon to Poland.

One thing to note is that the core LIDAR component can be acquired for an even lower price, but such a device has a fixed line of sight. Our chosen model, just like the one built into the Xiaomi, has a full 360 degree rotating head. It makes 5 to 10 rotations per second, scanning about 5 thousand points in that time. The output data stream contains an angle and a distance for each measured point.
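To give a feeling for that data, here is a rough Python sketch – not our actual code – that converts such polar readings into Cartesian points which a mapping algorithm can work with. The sample readings are made up for illustration.

import math

def polar_to_cartesian(scan):
    """Convert (angle, distance) LIDAR readings into (x, y) points.

    scan: iterable of (angle_deg, distance_m) pairs reported by the
    rotating head during one revolution.
    """
    points = []
    for angle_deg, distance_m in scan:
        angle_rad = math.radians(angle_deg)
        points.append((distance_m * math.cos(angle_rad),
                       distance_m * math.sin(angle_rad)))
    return points

# Hypothetical sample: three readings from a single revolution
print(polar_to_cartesian([(0.0, 1.20), (90.0, 0.85), (180.0, 2.40)]))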

For our robot we needed a mobile base. A common choice is a base with two motorized wheels plus one or two support wheels that rotate freely in all directions, giving minimal friction. We discarded that option because such simple bases for amateur constructions have weak motors, and we had quite a load to put on top. Professional bases with two powerful motors are expensive. In the end we bought a four-wheeled base with an independent low-grade DC motor driving each wheel. The wheels are not steerable – turning is done like in a tank. Our kit was also equipped with two wheel encoders – devices that measure how many times a wheel has rotated – and we will try to use that knowledge later.

According to the manufacturer’s data, our base can carry 800 grams of load. We put the following on it:

  • Raspberry Pi 3 B+, the most powerful model available
  • Arduino Uno
  • 4-channel motor shield for Arduino
  • YDLIDAR
  • LiPo battery pack
  • DC voltage regulators

We decided that our robot should be fully autonomous, so all the data processing would happen on board using the Raspberry Pi. It was connected via USB to the Arduino, which controlled the motors through a standard Arduino-compatible motor shield. We knew that the RPi has GPIO pins that could drive peripherals just as the Arduino does, but we wanted to separate the concerns and keep all the power of the RPi for other responsibilities.

Ross robot

Software

We assembled a mobile base with the YDLIDAR mounted on top, but the YDLIDAR by itself cannot build a complete map of our office and navigate around it. We needed algorithms that could interpret the incoming stream of distance measurements and convert it into a usable map. We found the ROS.org project. It is called the Robot Operating System, but instead of being a full OS it is a Linux-based framework – a collection of tools and algorithms that makes programming robots easier. As the hackathon was designed to deliver working products, each team was given time to prepare before the main event. We spent that time learning ROS and gathering the main components for our robot.

ROS is capable of handling LIDAR data, building a map and navigating the robot. If some feature is not available in ROS, it can be added by coding a “node” – a separate program that communicates with other nodes using “topics” – ordered streams of events. Fortunately, all parts of our use case were already covered by standard ROS nodes, topics and event types. YDLIDAR’s manufacturer provides a ROS-compatible node which handles the low-level interaction with the device. There is also an Arduino relay library that makes it possible to write Arduino code that directly subscribes and publishes to ROS topics.
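To show what a custom node looks like (our robot needed only standard ones), here is a minimal Python sketch that subscribes to the laser scan topic published by the LIDAR driver and logs the distance to the nearest obstacle. The topic name “scan” and the LaserScan message type are the usual ROS conventions for laser data – the exact names depend on the driver configuration.

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(msg):
    # msg.ranges holds one distance per measured angle from a single revolution
    valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
    if valid:
        rospy.loginfo("nearest obstacle: %.2f m", min(valid))

if __name__ == "__main__":
    rospy.init_node("nearest_obstacle")
    rospy.Subscriber("scan", LaserScan, on_scan)
    rospy.spin()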

Having written less than 100 lines of ROS node launch configuration in XML and less than 100 lines of Arduino code, we were able to remotely control our robot and see the map on a screen. We used a separate notebook which handled the joystick controller and displayed the map. The notebook was configured as a ROS slave connected over WiFi to the ROS master running on the RPi. Below we show how simple it was to set up the USB joystick controller:

<launch>
  <node pkg="joy"
        type="joy_node"
        name="ross_joy"
        respawn="true" >

    <param name="autorepeat_rate" value="10" />
  </node>

  <node pkg="teleop_twist_joy"
        type="teleop_node"
        name="ross_teleop"
        respawn="true" >

    <param name="scale_linear" value="0.3" />
    <param name="scale_angular" value="0.3" />
  </node>
</launch>
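The teleop node above publishes velocity commands as geometry_msgs/Twist messages on the cmd_vel topic, and our Arduino code turned them into motor power. That Arduino sketch is not reproduced here, but the mixing it performed boils down to something like the following Python sketch – the scaling is made up for illustration, not taken from our code:

def twist_to_motor_power(linear_x, angular_z, max_power=255):
    """Tank-style mixing: turn by driving the left and right pair at different power."""
    left = linear_x - angular_z
    right = linear_x + angular_z
    # Normalize to the -max_power..max_power range expected by the motor shield
    scale = max(1.0, abs(left), abs(right))
    return int(max_power * left / scale), int(max_power * right / scale)

# Example: drive forward while turning slightly left
print(twist_to_motor_power(0.3, 0.1))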

Unexpected problems and spontaneous solutions

During the hackathon days we experienced some difficulties. As we were not able to test the full robot assembly before the main event, some problems surfaced very late in the process and dirty solutions had to be hacked together quickly.

First problem: the power source. During initial tests we used a 24V DC power supply connected to wall power. It had to be calibrated because its internal protection cut off the power when the current drawn by the motors became too high. We also had a 12V LiPo battery for the final tests and the show. The different input voltages were converted by on-board step-down regulators. One provided the 5V needed by the RPi, Arduino and YDLIDAR. The second regulator fed 10V to the motors. During the tests it turned out that the YDLIDAR cannot be powered from the RPi’s USB port because its voltage was not stable enough. In the end we connected the YDLIDAR power input directly to the 5V output of the appropriate regulator.

Second problem: jerky movement. We thought (because the ROS wiki suggested so) that it would be a good design to include a PID controller driving the wheels. It is an algorithm that tries to maintain one value (in our case: the measured actual speed of the wheels) by varying another value that directly influences the first (in our case: the power applied to the motors). After some tests we disabled the PID controller because it requires fine tuning to behave correctly. As our rotational wheel encoders report only 10 ticks per revolution, the PID was confused by such low measurement resolution and varied the motor power too sharply, making smooth movement impossible. We believe it could be tuned properly, but during the hackathon we set up the joystick to directly control the motors’ power rather than the robot’s target speed, making the human operator responsible for adjustments.
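For reference, the idea behind the controller we disabled fits in a few lines of Python; the gains below are placeholders, and finding the right ones is exactly the fine tuning we did not have time for:

class PID:
    """Keep a measured value (wheel speed) at a setpoint by adjusting motor power."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.previous_error = 0.0

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.previous_error) / dt
        self.previous_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Placeholder gains – tuning these is the hard part
pid = PID(kp=1.0, ki=0.1, kd=0.05)
power_correction = pid.update(setpoint=0.3, measured=0.25, dt=0.1)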

Third problem: faulty encoders. Our wish to have wheel encoders originated not from the possibility of enabling a PID controller, but from the opportunity to increase mapping precision. Knowing how much the robot has moved can help to better correlate data from multiple laser scans, producing a more accurate map. Unfortunately, one of the encoders appeared to work incorrectly, reporting too few ticks per revolution. Not having much time to investigate, we decided to disconnect the encoders completely.
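To put the 10 ticks per revolution in perspective, the short sketch below estimates how far the robot travels per tick; the wheel diameter is an assumed value for illustration, not the real measurement.

import math

TICKS_PER_REVOLUTION = 10        # reported by our encoders
WHEEL_DIAMETER_M = 0.065         # assumed ~65 mm wheel, for illustration only

distance_per_tick = math.pi * WHEEL_DIAMETER_M / TICKS_PER_REVOLUTION

def ticks_to_distance(ticks):
    """Distance travelled by a wheel for a given number of encoder ticks."""
    return ticks * distance_per_tick

print("one tick = %.3f m" % distance_per_tick)   # roughly 2 cm per tick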

Making maps

At the beginning of the second day we already knew the main limitations of the hardware and software and decided to use only those parts that worked predictably. We confirmed that the PID controller was not necessary for our purposes and that mapping can be done using laser data only. We decided to enable the simplest mapping algorithm in ROS – a method called Hector SLAM. We could start the first tests.

At first we mapped a small room with two desks and a glass door. We put additional objects in the middle to see how their presence would be handled. Everything worked smoothly using the default parameters of the Hector method. We also confirmed that it is easy to overlay the map with additional data coming from sensors – in our case a simple photoresistor measuring the light intensity in the room, as sketched below the photo.

Room
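A rough sketch of how such an overlay can be recorded: a small node looks up the robot’s current position in the map frame via tf and stores it together with the latest light reading. The light_intensity topic name is our own invention for this example, and the map / base_link frame names are the usual ROS defaults.

#!/usr/bin/env python
import rospy
import tf
from std_msgs.msg import Float32

samples = []  # (x, y, light) triples that can later be drawn on top of the map

def on_light(msg, listener):
    try:
        # Robot position in the map frame built by the SLAM node
        (trans, _rot) = listener.lookupTransform("map", "base_link", rospy.Time(0))
        samples.append((trans[0], trans[1], msg.data))
    except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
        pass  # no map -> base_link transform available yet

if __name__ == "__main__":
    rospy.init_node("light_overlay")
    listener = tf.TransformListener()
    rospy.Subscriber("light_intensity", Float32, on_light, callback_args=listener)
    rospy.spin()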

Then we moved on to mapping a bigger area – the hall between rooms. There is an additional wall dividing it in the middle, and a pillar. We added a few other objects. During the tests the robot was wired to the power source. We also tested how to move our bodies around so as not to interfere with the measurements. It turned out that after the initial fragment is mapped it is safe to walk around, because the Hector algorithm ignores moving objects. Only after staying in the same place for too long did our legs start to be included as part of the map.

Hall

The final test shows a straight corridor. Its map is bent and we are not sure of the cause. It may be related to the slow scan rate of the YDLIDAR, which accumulates error during the robot’s movement.

Corridor

Future work

Having learnt ROS before the hackathon and solved mostly hardware-related problems during the event, we have shown that map building is possible even with a simple setup. We would like to expand the algorithmic part of the solution, enabling our robot to move autonomously around the office. Proper navigation components are already available in ROS.

Conclusions

ROS is a powerful tool that can be used both by amateurs and professionals. Its sophisticated architecture allows for complex definitions and management of industrial-grade robots, but it can also fit quick and dirty home projects. For us the biggest challenges lay in the hardware layer, but having an electronics engineer on the team helped to connect all the parts together. All Ross team members are happy with the results and wish to continue the project.
