AR Drone 2/Mapping

From PaparazziUAV
Revision as of 09:04, 11 January 2013 by Koudshoorn

One of the functions the AR.Drone2 should have is the ability to map a certain area (starting with a room). To do this, the AR.Drone2 needs to be able to: measure distances to objects; keep track of its own position, yaw and pitch; and avoid crashing into objects. The first two are needed to create a map from measurements during flight; the third is needed to maintain flight. It would also be useful to have an algorithm for the AR.Drone2 to explore the area and store the map as efficiently as possible.

Storing the map

For storing the map, we believe Octomap (paper) to be a good solution. It is a data structure for storing a 3D map that is updatable, flexible and compact. One minor setback is that it is implemented in C++ instead of C. At first this seemed like a problem, but after some research it was concluded that it should not be: C code can easily use C++ code if a wrapper is used to link the C++ methods to C (link). This was also tested with a small program: a C program that increments a number using a method of a C++ program. With this it was confirmed that it really should be no problem to use Octomap within Paparazzi.
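The wrapper technique can be sketched as follows. This is an illustrative example in the spirit of the small increment test described above, not actual Paparazzi or Octomap code: the `Counter` class and the `counter_*` functions are hypothetical names. A C++ class is exposed through functions with C linkage, so a plain C file only sees ordinary function prototypes and an opaque handle.

```cpp
// Hypothetical sketch of wrapping a C++ class for C callers.
// Names (Counter, counter_*) are illustrative, not a real API.

class Counter {
public:
    explicit Counter(int start) : value_(start) {}
    int increment() { return ++value_; }   // the C++ method the C code wants
private:
    int value_;
};

// The wrapper: C-linkage functions forwarding to the C++ object.
// A shared header would declare these inside
// #ifdef __cplusplus extern "C" { ... } #endif guards.
extern "C" {

void* counter_new(int start) { return new Counter(start); }

int counter_increment(void* handle) {
    return static_cast<Counter*>(handle)->increment();
}

void counter_free(void* handle) { delete static_cast<Counter*>(handle); }

}  // extern "C"
```

From the C side, the drone code would only call `counter_new`, `counter_increment` and `counter_free` through a `void*` handle; the same pattern would scale to wrapping Octomap's octree class.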

Exploring the area

Since the AR.Drone2 will only have point-measurement sensors to measure distance to objects, it is better to approach the exploration as a 2D problem and solve it at different heights, thus still creating a 3D map. We are using Frontier-Based Exploration for this purpose. With this approach you start with a map filled with unknown cells. Then, by scanning the surroundings of the robot, you fill in the unknown cells as occupied or unoccupied. Next, you search for all the frontiers (the borders between unoccupied and unknown cells) and pick the frontier with the best trade-off between being the largest and having the least travel distance. You then take that frontier as your next destination, and upon arrival you scan again. There are also variations for 3D (paper), but these are based on a wide-range sensor (a Kinect, for example) and are therefore not suited to our AR.Drone2.

As explained, Frontier-Based Exploration consists of a few steps that are repeated until the whole area is explored:

  • Scan
  • Search for frontiers
  • Find paths
  • Pick best frontier
  • Travel

The reason "find paths" is also in that list is that a path is needed both for computing the travel distance (pick best frontier) and for travelling to the chosen frontier (travel). Since this would mean executing the same computation twice, it is better to make it a single step. Another solution would be to estimate the travel distance instead of computing a path to every frontier, but we could not find a reliable way to do so (as the crow flies is not a good method, considering there could be frontiers behind a wall).
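The frontier and path steps above can be sketched on a small 2D occupancy grid. This is an illustrative sketch, not Paparazzi code: the cell states, grid layout and helper names are assumptions, and for brevity the "largest frontier" term is left out of the score, so the selection here uses travel distance only. A breadth-first search over free cells gives the true travel distance, which naturally routes around walls (unlike a straight-line estimate).

```cpp
// Illustrative sketch of frontier detection and selection on a 2D grid.
// Cell states and all names are assumptions, not Paparazzi code.
#include <vector>
#include <queue>
#include <utility>

enum Cell { UNKNOWN, FREE, OCCUPIED };

struct Grid {
    int w, h;
    std::vector<Cell> cells;                      // row-major, w * h entries
    Cell at(int x, int y) const { return cells[y * w + x]; }
};

// A frontier cell is a FREE cell with at least one UNKNOWN 4-neighbour.
bool is_frontier(const Grid& g, int x, int y) {
    if (g.at(x, y) != FREE) return false;
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    for (int i = 0; i < 4; i++) {
        int nx = x + dx[i], ny = y + dy[i];
        if (nx >= 0 && nx < g.w && ny >= 0 && ny < g.h &&
            g.at(nx, ny) == UNKNOWN)
            return true;
    }
    return false;
}

// BFS over FREE cells: travel distance from (sx, sy) to every reachable
// cell, -1 if unreachable. This is the shared "find paths" step.
std::vector<int> travel_distances(const Grid& g, int sx, int sy) {
    std::vector<int> dist(g.w * g.h, -1);
    std::queue<std::pair<int, int>> q;
    dist[sy * g.w + sx] = 0;
    q.push({sx, sy});
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    while (!q.empty()) {
        auto [x, y] = q.front();
        q.pop();
        for (int i = 0; i < 4; i++) {
            int nx = x + dx[i], ny = y + dy[i];
            if (nx >= 0 && nx < g.w && ny >= 0 && ny < g.h &&
                g.at(nx, ny) == FREE && dist[ny * g.w + nx] < 0) {
                dist[ny * g.w + nx] = dist[y * g.w + x] + 1;
                q.push({nx, ny});
            }
        }
    }
    return dist;
}

// "Pick best frontier": the reachable frontier cell with the least travel
// distance, or {-1, -1} if the area is fully explored.
std::pair<int, int> pick_frontier(const Grid& g, int sx, int sy) {
    std::vector<int> dist = travel_distances(g, sx, sy);
    std::pair<int, int> best = {-1, -1};
    int best_d = -1;
    for (int y = 0; y < g.h; y++)
        for (int x = 0; x < g.w; x++) {
            int d = dist[y * g.w + x];
            if (d >= 0 && is_frontier(g, x, y) && (best_d < 0 || d < best_d)) {
                best_d = d;
                best = {x, y};
            }
        }
    return best;
}
```

Weighting frontier size against distance, as described above, would only change the score inside `pick_frontier`; grouping adjacent frontier cells into frontier regions first is the usual refinement.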

Scan

Some robots can scan entire areas while moving, using wide-range sensors. Our AR.Drone2 is limited to point-measurement sensors and therefore needs an explicit "Scan" step. Since our drone will have four horizontally placed sensors in a + configuration, a 360-degree scan can be made with a 45-degree turn: two sets of four readings cover eight evenly spaced bearings. By keeping track of our rotation and position, using the internal sensors, we can deduce where the scanned points lie in our self-made map.
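The geometry of one such set of readings can be sketched as follows. This is a hypothetical illustration, not Paparazzi code: the sensor mounting angles (one sensor every 90 degrees starting at the nose), the yaw convention and all names are assumptions, and Paparazzi's own state interface is not used.

```cpp
// Hypothetical sketch: converting four range readings into world-frame
// points. Mounting angles and frame conventions are assumptions.
#include <array>
#include <cmath>

struct Point2D { double x, y; };

// ranges[i] is the distance from sensor i, mounted at i * 90 deg from the
// drone's nose; psi is the current yaw and (px, py) the current position.
std::array<Point2D, 4> scan_points(double px, double py, double psi,
                                   const std::array<double, 4>& ranges) {
    const double pi = std::acos(-1.0);
    std::array<Point2D, 4> pts;
    for (int i = 0; i < 4; i++) {
        double a = psi + i * pi / 2.0;   // world-frame bearing of sensor i
        pts[i] = {px + ranges[i] * std::cos(a),
                  py + ranges[i] * std::sin(a)};
    }
    return pts;
}
```

Calling this once, yawing 45 degrees, and calling it again would give the eight bearings of the full scan; each returned point marks an occupied cell, and the cells along the ray to it can be marked unoccupied.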

Search for frontiers

Find paths

Pick best frontier

Travel

Avoiding objects

To be able to avoid objects, you need to know your surroundings. The object-avoidance function is therefore closely related to mapping and will probably be developed simultaneously.