ECE497 SLAM via ROS
Team members: Elias White
(This is a big project for one person. Let's talk about your progress today.) In autonomous navigation, understanding the robot's surrounding environment, as well as its position within that environment, is of paramount importance. This project leverages open-source simultaneous localization and mapping (SLAM) algorithms, running on the BeagleBoard-xM, to develop a 3-D model of the world surrounding the board as it moves through space. The more (quality) sensory data a SLAM algorithm receives, the better its results; for now a camera will be the only sensor, although a gyroscope may be incorporated later. A primary objective of this project is to test the feasibility of using the BeagleBoard-xM as the "brain" of an autonomous quad-copter.
An embedded version of Ubuntu has been successfully installed, and ROS has been installed on top of it. Nothing else is working yet while I finalize my choice of SLAM algorithm.
Building the world model and localizing myself in it.
With this project I hope to provide an example for my Aerial Robotics teammates to follow and a starting point that leads them to increasingly awesome aerial robotics projects.
There are a few fairly large installs (embedded Ubuntu, ROS) required. I'll clean up the procedure and post it here once I think it is at its most painless.
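Until then, here is a rough sketch of what the ROS half of the install looks like on Ubuntu 12.04 (Precise), based on the standard apt repository method; package names and URLs are from the ROS Fuerte era and should be double-checked against the current ROS install documentation before use:

```shell
# Add the ROS package repository for Ubuntu 12.04 (Precise)
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu precise main" > /etc/apt/sources.list.d/ros-latest.list'

# Add the repository signing key
wget http://packages.ros.org/ros.key -O - | sudo apt-key add -

# Install ROS (the full desktop variant pulls in rviz, OpenCV bindings, etc.)
sudo apt-get update
sudo apt-get install ros-fuerte-desktop-full

# Source the ROS environment in every new shell
echo "source /opt/ros/fuerte/setup.bash" >> ~/.bashrc
```

On the memory-constrained BeagleBoard-xM, a smaller variant than `desktop-full` may be the better choice.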
(Do you have a git repository?)
While there are currently no highlights, this video provides an idea of what I would like to do, although the quality of their results is much higher than what I expect to achieve.
(How much of what's in the video comes with ROS?)
Theory of Operation
The operating system running on the BeagleBoard-xM is an embedded version of Ubuntu 12.04. ROS (Robot Operating System) is installed on top of it, providing hardware abstraction, device drivers, libraries, et cetera, that simplify control of the robot platform. OpenCV, which is integrated with ROS, is responsible for interpreting the data provided by the camera and constructing an accurate representation of the world. While I haven't yet made my final decision on which SLAM algorithm to use, I am leaning towards GMapping.
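GMapping's output is a 2-D occupancy grid built up from range measurements. The core grid update can be sketched in plain Python; this is a simplified log-odds model with made-up increment values, not GMapping's actual implementation:

```python
import math

# Log-odds occupancy grid: each cell stores log(p / (1 - p)); 0.0 means unknown.
GRID_SIZE = 10
grid = [[0.0] * GRID_SIZE for _ in range(GRID_SIZE)]

L_OCCUPIED = 0.85   # log-odds increment for the cell a beam ends in (assumed sensor model)
L_FREE = -0.4       # log-odds decrement for cells a beam passes through

def update_cell(grid, row, col, delta):
    """Apply a log-odds update to one cell."""
    grid[row][col] += delta

def probability(log_odds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

# A beam travels through cells (0,0)..(0,3) and hits an obstacle at (0,4).
for col in range(4):
    update_cell(grid, 0, col, L_FREE)
update_cell(grid, 0, 4, L_OCCUPIED)

print(probability(grid[0][4]) > 0.5)   # True: the hit cell is now likely occupied
print(probability(grid[0][0]) < 0.5)   # True: traversed cells are now likely free
```

The log-odds representation makes repeated updates a simple addition, which matters on a board as modest as the -xM.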
As a one-person group, I'll be the only one working on this project.
In order to improve performance, one could bolster the sensory profile of the platform. Useful sensors include:
- Laser scanning range-finder
- IMU (Inertial measurement unit)
- Digital Compass
Incorporating the data gathered from these sensors will improve the robot's model of the world and reduce its uncertainty about its own location within that model.
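The intuition that each added sensor shrinks the robot's uncertainty can be shown with a one-dimensional Kalman-style measurement update; the sensor values and variances below are made up for illustration:

```python
def fuse(est, var_est, meas, var_meas):
    """Fuse a prior estimate with a new measurement (1-D Kalman update).

    Returns the combined estimate and its variance, which is always
    smaller than the prior variance."""
    k = var_est / (var_est + var_meas)      # Kalman gain
    new_est = est + k * (meas - est)
    new_var = (1.0 - k) * var_est
    return new_est, new_var

# Camera-only position estimate: 1.0 m with variance 0.5 (hypothetical numbers).
est, var = 1.0, 0.5

# Fuse in a laser range-finder reading of 1.2 m with variance 0.2 ...
est, var = fuse(est, var, 1.2, 0.2)

# ... then an IMU-derived estimate of 1.1 m with variance 0.3.
est, var = fuse(est, var, 1.1, 0.3)

print(var < 0.5)   # True: every sensor fused in reduces the uncertainty
```

This is only the measurement half of a filter (no motion model), but it captures why a range-finder, IMU, or compass would sharpen the SLAM estimate.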