ECE497 SLAM via ROS


Team members: Elias White

Executive Summary

In autonomous navigation, understanding the robot's surrounding environment, as well as its position within that environment, is of paramount importance. This project attempts to leverage open-source simultaneous localization and mapping (SLAM) algorithms and use them, in collaboration with the BeagleBoard-xM, to develop a 3-D model of the world surrounding the board as it moves through space. The more (quality) sensor data a SLAM algorithm has, the better its results, but at this time a camera will be the only sensor, although there is the possibility of incorporating a gyroscope. A primary objective of this project is to test the feasibility of using the BeagleBoard-xM as the "brain" for an autonomous quad-copter.



Installation Instructions

Give step-by-step instructions on how to install your project on the SPEd2 image.

  • Include your github path as a link like this: https://github.com/MarkAYoder/gitLearn.
  • Include any additional packages installed via opkg.
  • Include kernel mods.
  • If there is extra hardware needed, include links to where it can be obtained.

User Instructions

Once everything is installed, how do you use the program? Give details here; if you have a long user manual, link to it instead.

Highlights

While there are currently no highlights, this video provides an idea of what I would like to do, although the quality of its results is much higher than I expect to achieve.

Theory of Operation

The operating system running on the -xM is an embedded version of Ubuntu 12.04. ROS (Robot Operating System) is installed within this OS, providing hardware abstraction, device drivers, libraries, and other infrastructure that simplify control of the robot platform. OpenCV, which integrates with ROS, is responsible for interpreting the data provided by the camera and constructing an accurate representation of the world. While I haven't yet made a final decision on which SLAM algorithm to use, I am leaning toward GMapping.
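To make the data flow concrete, here is a minimal sketch (in Python, assuming a modern cv_bridge) of a ROS node that subscribes to the camera and hands each frame to OpenCV. The topic name /camera/image_raw is an assumption that depends on the camera driver in use, and the grayscale conversion merely stands in for a real SLAM front end.

  #!/usr/bin/env python
  # Minimal sketch of the ROS-to-OpenCV data flow described above.
  # Assumption: the camera driver publishes on /camera/image_raw.
  import rospy
  import cv2
  from cv_bridge import CvBridge
  from sensor_msgs.msg import Image

  bridge = CvBridge()

  def on_image(msg):
      # Convert the ROS Image message into an OpenCV BGR array.
      frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
      # Stand-in for the SLAM front end: just grayscale the frame.
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      rospy.loginfo("received %dx%d frame", gray.shape[1], gray.shape[0])

  if __name__ == "__main__":
      rospy.init_node("camera_listener")
      rospy.Subscriber("/camera/image_raw", Image, on_image)
      rospy.spin()

A node like this would sit between the camera driver and whatever SLAM back end (for example, GMapping) ultimately consumes the processed data.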

Work Breakdown

As a team of one, I'll be the only person working on this project.

Future Work

To improve performance, one could bolster the sensor suite of the platform. Useful additions include:

  1. Laser scanning range-finder
  2. IMU (Inertial measurement unit)
  3. Digital Compass
  4. GPS

Incorporating the data gathered from these sensors would improve the robot's model of the world and decrease its uncertainty about its location within that model.
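As a toy illustration of why an extra sensor decreases uncertainty, the sketch below fuses two Gaussian estimates of a 1-D position with a Kalman-style update; the readings and variances are invented for the example and are not project data.

  # Toy illustration: fusing two Gaussian estimates of the same quantity.
  # All numbers are invented; this is not project code.
  def fuse(mean_a, var_a, mean_b, var_b):
      k = var_a / (var_a + var_b)            # Kalman gain
      mean = mean_a + k * (mean_b - mean_a)  # variance-weighted mean
      var = (1.0 - k) * var_a                # fused variance
      return mean, var

  # Camera-only estimate: 2.0 m with variance 0.5.
  # Fuse in a hypothetical range-finder reading: 2.3 m with variance 0.1.
  mean, var = fuse(2.0, 0.5, 2.3, 0.1)
  print("%.3f %.4f" % (mean, var))  # 2.250 0.0833 -- tighter than either sensor alone

The fused variance is always smaller than either input variance, which is the formal version of the claim above.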

Conclusions

Nothing yet.