
The goal of the project is to enable 3d mapping with multiple robots, merging the maps built by each of them into one big map, and providing tools for the robots to localize themselves in the maps they build. The project will start with porting existing solutions such as ccny_rgbd and rgbdslam to the cloud. Once that is complete we will attempt to leverage the cloud engine to allow large-scale mapping, merging maps from different robots, and localization.

Project outline

  1. Prepare the hardware (iRobot Create, Asus Xtion, Odroid-U2) for streaming rgb and depth images to the cloud and for moving around.
  2. Run one of the existing tools (ccny_rgbd) for 3d mapping on the cloud.
  3. Implement a distributed, real-time version of global map optimization (ccny keyframe_mapper).
  4. Add map merging capabilities.
  5. Add localization.
  6. Evaluate results.

Preparing hardware

The robot is based on the iRobot Create platform. It uses an Asus Xtion as the rgbd sensor, mounted on a servo tilt platform, and an Odroid-U2 board with a 4-core ARM processor. The board is connected to the internet through a usb-wifi dongle. It uses the serial port to communicate with the Create base, and as there are no PWM pins available on the board, the servo is controlled through the audio output.

/robot.jpg /robot2.jpg
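Since the servo pulses come out of the sound card, the basic idea can be sketched as below. This is only an illustration, not code from the project: the file name, sample rate and pulse widths are chosen arbitrarily, and it assumes the audio signal is buffered/level-shifted by some external circuit before it reaches the servo signal pin.

# Sketch: synthesize a 50 Hz servo pulse train as a WAV file and play it on the audio output.
import struct
import wave

SAMPLE_RATE = 48000      # sound card sample rate
FRAME_S = 0.02           # 50 Hz servo frame

def write_servo_wav(path, pulse_ms, seconds=2.0):
    samples_per_frame = int(SAMPLE_RATE * FRAME_S)
    high = int(SAMPLE_RATE * pulse_ms / 1000.0)            # samples held high
    frame = [32767] * high + [0] * (samples_per_frame - high)
    data = struct.pack('<%dh' % len(frame), *frame) * int(seconds / FRAME_S)
    w = wave.open(path, 'w')
    w.setnchannels(1)
    w.setsampwidth(2)        # 16-bit mono
    w.setframerate(SAMPLE_RATE)
    w.writeframes(data)
    w.close()

# ~1.5 ms pulse width corresponds to the servo centre position; play with e.g. `aplay tilt_center.wav`
write_servo_wav('tilt_center.wav', 1.5)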

OS setup

The system runs Linaro Ubuntu. Follow the instructions to install the Ubuntu image on a microSD card and the instructions to install ROS. Unfortunately not all packages are available at the moment; install ros-groovy-ros-base and ros-groovy-image-transport-plugins from the repository.

Fixing Serial Port

The system has only one serial port (/dev/ttySAC1) and it is used as a serial console by default (by both the bootloader and the kernel). If you need it as a serial port to connect devices, you have to disable it as a linux console. To do that, change the /media/boot/boot-hdmi1080.txt file to use another port as the console. In this case it is set to ttySAC3, which is not available on the board:

setenv bootargs "console=tty1 console=ttySAC3,115200n8 fb_x_res=${fb_x_res} fb_y_res=${fb_y_res} hdmi_phy_res=${hdmi_phy_res} root=UUID=e139ce78-9841-40fe-8823-96a304a09859 rootwait ro mem=2047M"

compile it into boot.scr:

mkimage -T script -A arm -C none -n 'Boot script for odroid-u2' -d boot-hdmi1080.txt boot.scr

You also need to recompile the kernel with the following config option:

CONFIG_CMDLINE="console=ttySAC3,115200n8 androidboot.console=ttySAC3"

TODO: The problem is not completely fixed yet. It looks like the bootloader still writes something to the serial port. When the iRobot Create is connected the board sometimes fails to boot because it gets stuck in the bootloader phase; the bootloader appears to be waiting for a serial console. See http://forum.odroid.com/viewtopic.php?f=8&t=1044

To fix the bootloader, install the patched bootloader from https://github.com/vsu91/u-boot

OpenNI2 setup

The regular OpenNI driver doesn't work well with the Xtion on ARM. Several attempts were made to compile OpenNI on ARM based on http://www.pansenti.com/wordpress/?page_id=1772 and http://www.kinecthesia.com/2012/02/lights-camera-xtion-getting-asus-xtion.html . Even though I was able to compile OpenNI, streaming didn't work properly (the device could not be opened in the first case, and there was no color image in the second).

I was able to compile OpenNI2. It seems to work fine with the Xtion, but there are no ROS drivers that use OpenNI2, so I had to write a simple driver.

To install the OpenNI2 driver:

git clone https://github.com/IDSCETHZurich/rapyuta-mapping.git
roscd rm_openni2_camera
rosmake

It will automatically download and build OpenNI2 and the camera driver.
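The actual rm_openni2_camera driver is written in C++. Purely as an illustration of what such a driver does, here is a minimal sketch using the primesense Python bindings for OpenNI2 together with rospy and cv_bridge; the topic and frame names are assumptions, and only the depth stream is shown.

# Sketch of a minimal OpenNI2 -> ROS depth image publisher (not the project driver).
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from primesense import openni2   # Python bindings for OpenNI2

rospy.init_node('openni2_camera_sketch')
bridge = CvBridge()
depth_pub = rospy.Publisher('camera/depth/image_raw', Image)

openni2.initialize()                       # expects the OpenNI2 redistributable libraries
dev = openni2.Device.open_any()
depth_stream = dev.create_depth_stream()
depth_stream.start()

rate = rospy.Rate(30)
while not rospy.is_shutdown():
    frame = depth_stream.read_frame()
    buf = frame.get_buffer_as_uint16()
    depth = np.frombuffer(buf, dtype=np.uint16).reshape(frame.height, frame.width)
    msg = bridge.cv2_to_imgmsg(depth, encoding='16UC1')
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = 'camera_depth_optical_frame'
    depth_pub.publish(msg)
    rate.sleep()

depth_stream.stop()
openni2.unload()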

Moving the base around

There are packages for turtlebot support (https://github.com/turtlebot/turtlebot.git and https://github.com/turtlebot/turtlebot_create.git). The core driver for the Create runs fine, but the other packages, which provide the transformations and the EKF filter for fusing odometry, imu and visual odometry, still don't work. The packages have many unused dependencies, and some of them fail to build on ARM.

TODO: Try to compile the packages that are used in roslaunch turtlebot_bringup minimal.launch.
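As a quick check that the Create base responds while the rest of the turtlebot stack does not build, velocity commands can be published directly. A minimal sketch, assuming the driver listens on the usual cmd_vel topic:

# Sketch: drive the base forward for a few seconds, then stop.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('drive_forward_sketch')
pub = rospy.Publisher('cmd_vel', Twist)
rospy.sleep(1.0)                      # give the publisher time to connect

cmd = Twist()
cmd.linear.x = 0.1                    # m/s forward
rate = rospy.Rate(10)
t_end = rospy.Time.now() + rospy.Duration(3.0)
while not rospy.is_shutdown() and rospy.Time.now() < t_end:
    pub.publish(cmd)
    rate.sleep()
pub.publish(Twist())                  # zero velocity stops the base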

Running ccny_rgbd on the cloud

Once streaming from the board is working, the next step is to run slam on the cloud.

Estimating bandwidth and compression efficiency

  1. Raw rgb and depth data: 17 MB/s for each stream, 34 MB/s in total.
  2. Jpeg compression for rgb and png compression for depth: 1.5 MB/s for the rgb stream and 3.5 MB/s for the depth stream, varying with the scene. 5 MB/s in total.
  3. Theora compression for rgb: 400-800 KB/s depending on the video. Cannot run in real time on the Odroid-U2.
  4. Another interesting option is the hardware h264 encoder available on many smartphone chips nowadays (including the Odroid-U2), but unfortunately these encoders have no driver support on linux.

When every frame is streamed to the cloud, the most useful compression is jpeg for rgb and png for depth, as it combines a relatively high compression ratio with low computation requirements and per-frame compression, which doesn't introduce significant delay. Streaming the data with this compression requires a ~5 MB/s channel, which is not always available.
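A rough way to reproduce these per-frame estimates with OpenCV's encoders is sketched below. The frames here are random placeholders (a worst case for the codecs), so real Xtion frames should be substituted to get numbers comparable to the ones above; the resolution, frame rate and quality settings are assumptions.

# Sketch: estimate the compressed bandwidth of an rgb (jpeg) and depth (16-bit png) stream.
import numpy as np
import cv2

rgb = np.random.randint(0, 256, (480, 640, 3)).astype(np.uint8)
depth = np.random.randint(0, 10000, (480, 640)).astype(np.uint16)

ok, jpg = cv2.imencode('.jpg', rgb, [cv2.IMWRITE_JPEG_QUALITY, 90])
ok, png = cv2.imencode('.png', depth, [cv2.IMWRITE_PNG_COMPRESSION, 3])

fps = 30
print('rgb   (jpeg): %.2f MB/s' % (len(jpg) * fps / 1e6))
print('depth (png):  %.2f MB/s' % (len(png) * fps / 1e6))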

Implementation of parallel multi-robot mapping system

The next step in cloud mapping is the development of a system that can autonomously do mapping with multiple robots. The currently available software such as rgbdslam or ccny_rgbd has been designed for hand-held camera slam, and it assumes that the camera is moved by some external force. With this approach we cannot guarantee that a full map of the environment will be built, and it imposes real-time constraints.

In home or office robotics what the user typically needs is to build a good map of the environment and navigate through it. Usually the time constraint is not very important, and it is desirable that the robot explores the environment automatically. Taking into account that wireless internet bandwidth is typically limited, a different architecture for the mapping system has been developed.

The robot provides a number of services and actions that are available through the server in the cloud. In our case the most important ones are the move base service (navigation in the existing map using the navigation stack from ROS), the capture service (which captures the environment around the robot's current pose) and the set map service, which sends different representations of the current map to the robot (keypoints with descriptors for localization and a 2d map for navigation).

/service_diagram.png
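A rough robot-side sketch of these interfaces is shown below. It is not the project's actual interface: the capture service is modeled as std_srvs/Empty, the set map service is simplified to a plain map topic, and move_base is the standard navigation stack action; all names are assumptions.

# Sketch: robot-side stubs for the capture / set map / move base interfaces.
import actionlib
import rospy
from move_base_msgs.msg import MoveBaseAction
from nav_msgs.msg import OccupancyGrid
from std_srvs.srv import Empty, EmptyResponse

def handle_capture(req):
    # The real robot would sweep the tilt servo here, grab rgb-d frames
    # and forward them to the server; omitted in this sketch.
    return EmptyResponse()

def handle_map(msg):
    # 2d occupancy grid pushed down by the server for navigation.
    rospy.loginfo('received %dx%d map', msg.info.width, msg.info.height)

rospy.init_node('robot_services_sketch')
rospy.Service('capture', Empty, handle_capture)
rospy.Subscriber('map_from_server', OccupancyGrid, handle_map)

# Move base requests are served by the standard move_base action server.
move_base = actionlib.SimpleActionClient('move_base', MoveBaseAction)
move_base.wait_for_server()
rospy.spin()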

The algorithm used on the server side for each robot is the following. The map is initialized with features extracted from the first frame; this frame is also used to align the floor in the image with the XY plane. Then the robot starts to capture the surrounding environment: in the current implementation the robot rotates around the vertical axis and captures images at different tilt angles. These messages are sent to the server and combined into a map using feature extraction and RANSAC. Several versions of the map are then generated: a two-dimensional map for use with the ros navigation stack, a map of 3d keypoints and descriptors for localization on the robot, and a full image pointcloud for the final visualization of the environment. These maps are sent back to the robot to allow navigation. After that the server tries to match the maps generated by different robots. A new (optimal?) capture point is also selected from this map, and the server sends a navigation command to the robot. Once the robot is at the new point the process iterates again, starting with capturing.

/algorithm.png
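In outline, the per-robot server loop could be sketched as below. All the helpers (capture, register, select_next_capture_pose, ...) are hypothetical placeholders standing in for the real feature extraction, RANSAC registration and map generation code; they are not part of the repository.

# High-level sketch of the per-robot mapping loop described above.
def run_robot_mapping(robot, server_map):
    first = robot.capture_single_frame()
    server_map.init_from(first)             # features from the first frame
    server_map.align_floor_with_xy(first)   # put the floor into the XY plane

    while not server_map.is_complete():
        # Rotate in place, varying the tilt angle, and stream the frames back.
        frames = robot.capture()            # the 'capture' service
        for f in frames:
            server_map.register(f)          # feature matching + RANSAC

        # Derive the representations the robot needs and push them down.
        robot.set_map(grid_2d=server_map.to_occupancy_grid(),
                      keypoints=server_map.keypoint_map(),
                      cloud=server_map.full_pointcloud())

        server_map.try_merge_with_other_robots()

        goal = server_map.select_next_capture_pose()
        robot.move_base(goal)               # navigate there, then iterate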

Visualization of the working system

All robots in the system are visualized in Rviz. As we don't know the initial locations of the robots relative to each other, the separate maps are drawn at a safe distance from each other in Rviz so that they do not overlap. These fixed offsets are only used to visualize the different robot maps in one frame; a robot itself doesn't see any other pieces of the map until its map is merged with another robot's.

/screenshot1.png /screenshot2.png /screenshot3.png /screenshot4.png
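One way to realize such fixed offsets is to broadcast a static transform from a common visualization frame to each robot's map frame. A minimal sketch, with the frame names, the number of robots and the 10 m spacing chosen purely for illustration:

# Sketch: offset each robot's map frame along x so the maps don't overlap in Rviz.
import rospy
import tf

rospy.init_node('map_offset_broadcaster_sketch')
br = tf.TransformBroadcaster()
spacing = 10.0                       # metres between un-merged robot maps
rate = rospy.Rate(10)
while not rospy.is_shutdown():
    for i in range(3):               # one offset per robot
        br.sendTransform((i * spacing, 0.0, 0.0),
                         (0.0, 0.0, 0.0, 1.0),
                         rospy.Time.now(),
                         'map_robot_%d' % i,   # child frame (robot i's map)
                         'world')              # common visualization frame
    rate.sleep()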