Introduction

The instructions here outline the steps required to extrinsically calibrate one or more 3D sensors (such as Kinect or Velodyne) to a vehicle body frame defined by a 6D nav solution. This estimates the 3 translational and 3 rotational offsets between each sensor frame and the body frame.

Authors:

James Underwood and Andrew Hill

Further Reading:

Error modeling and calibration of exteroceptive sensors for accurate mapping applications:

https://onlinelibrary.wiley.com/doi/full/10.1002/rob.20315

Calibration of range sensor pose on mobile platforms:

https://ieeexplore.ieee.org/abstract/document/4398971/

Reliable and safe autonomy for ground vehicles in unstructured environments (PhD Thesis)

https://ses.library.usyd.edu.au/handle/2123/18383

How to cite:

Please cite both the JFR paper and this github page

@article{underwood2010error,
title={Error modeling and calibration of exteroceptive sensors for accurate mapping applications},
author={Underwood, James P and Hill, Andrew and Peynot, Thierry and Scheding, Steven J},
journal={Journal of Field Robotics},
volume={27},
number={1},
pages={2--20},
year={2010},
publisher={Wiley Online Library}
}

Software prerequisites:

Comma (https://github.com/acfr/comma)

Snark (https://github.com/acfr/snark)

Matlab

Calibration Method

There are three main steps:

Data Collection and Preparation

A calibration data set includes:

A timestamped 6DOF vehicle trajectory in the form --binary=t,6d --fields=t,x,y,z,roll,pitch,yaw

Time-synchronized points in the range-sensor frame (one point per timestamp) in the form --binary=t,3d --fields=t,x,y,z

The data should be obtained while driving around a scene that includes combinations of planar surfaces and linear features (e.g. poles) all in varying orientations

Feature Selection

Uses the script /applications/calibration-prepare

Preprocesses your trajectory (nav) and pointcloud data and prompts you to label the features you will use for calibration.

Requires 2 files: nav.bin and sensor.bin, which contain the nav solution and pointcloud data respectively.

Optimisation

Uses /applications/calibrate.m

Requires editing of the matlab script so that the feature list matches the features you label during Step 2.

Optimises the sensor-to-body-frame 6DOF offset to minimise the error between points and their corresponding planar or linear features.
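
By the end of these steps your working directory might look something like the following (a sketch only, using the hypothetical sensor name puck; the exact file names depend on the --name you choose):

nav.bin
puck.bin
puck.features.bin
puck.features.json
calibration.bin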

1. Data Preparation:

Data collection should be done to ensure that a number of physical features are included in the point cloud data set, namely upright poles and flat planar surfaces at varying angles (e.g. ground and two non-parallel walls).

The vehicle's trajectory should be relatively smooth throughout the data set. If you have poor time sync between nav and sensor, drive very slowly.

You will get a more accurate result in all 6 degrees of freedom if the vehicle itself observes all features (poles/planes) from a wide variety of perspectives and vehicle body angles. It is usually easy to view all features from different x/y locations and heading (yaw) angles, but it can be harder to view features while the vehicle is rolled or pitched. If there is a convenient hill that can induce roll/pitch, drive along it at different angles while observing the features. Sometimes it may be necessary to use a portable ramp to roll/pitch the vehicle.

Start simple and obtain data from existing features like buildings and lamp-posts, while rolling/pitching using existing terrain. But if the results are not sufficiently accurate, add new features and bring a ramp.

Navigation Data

Must be in a single binary file named nav.bin.

Format: --fields=t,x,y,z,roll,pitch,yaw --binary=t,6d

Pointcloud Data

Must be in a single binary file named sensor.bin (where "sensor" can be whatever name you prefer, e.g. puck.bin).

Format: --fields=t,x,y,z --binary=t,3d
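
If your raw logs are in ascii csv with exactly these fields, comma's csv-to-bin can produce the binary files. A minimal sketch, assuming hypothetical input files nav.csv and puck.csv (check csv-to-bin --help for details on your version):

cat nav.csv | csv-to-bin t,6d > nav.bin
cat puck.csv | csv-to-bin t,3d > puck.bin

To sanity check, convert a few records back to ascii with csv-from-bin, or view the raw sensor-frame points with snark's view-points (the empty leading field skips the timestamp):

csv-from-bin t,6d < nav.bin | head
cat puck.bin | view-points --binary=t,3d --fields=,x,y,z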

2. Feature Selection

This step involves making an initial estimate of the offset from the sensor to the body frame of the vehicle (the centre of the navigation (nav data) frame), then running the calibration-prepare script to manually label planar or linear (pole-like) features in the environment.

Initial Offset Measurement/Estimation

An initial estimate of the translational (in metres) and rotational (in radians) offsets between sensor and body frame is required at the start of the optimisation.

The estimate must also be accurate enough that you can identify and label the features in the environment, even if they appear blurry.

The offsets are measured from the body frame to the sensor frame, and the angular offsets are Euler angles in the order: Yaw, Pitch, Roll

Rule of thumb (and fingers) to estimate initial offset:

  1. Orient your right hand at the navigation / platform body frame centre, with your thumb along +X, your index finger along +Y and your middle (rude) finger along +Z.

  2. Before rotating your hand, note the translational offsets: how far do you move along each axis of that frame to get to the sensor frame origin? That gives x, y, z.

  3. Now work out how to rotate your hand so that it is aligned with the sensor frame (ignoring translation):

    a) First rotate around your middle finger (+Z): that is yaw.

    b) Then, from that new orientation, rotate around your index finger (+Y): that is pitch.

    c) Then, from that new orientation, rotate around your thumb (+X): that is roll.

    d) That is the Euler Z-Y-X (yaw-pitch-roll) convention. Don't break your wrist.
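
In matrix terms (a sketch of the convention just described, not tool-specific code): with Rz, Ry and Rx denoting rotations about the Z, Y and X axes, a point p_sensor expressed in the sensor frame maps into the body frame as

p_body = Rz(yaw) * Ry(pitch) * Rx(roll) * p_sensor + [x, y, z]

where x, y, z, roll, pitch, yaw are the six offsets being estimated.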

Running calibration-prepare

/applications/calibration-prepare takes nav.bin and sensor.bin as input and, through the user's selection of features in the pointcloud, outputs the feature files (e.g. sensor.features.bin) used for the optimisation.

Summary of utility

--name=foo : can be anything, corresponds to point cloud data file foo.bin (binary=t,3d;fields=t,x,y,z)

--offset=x,y,z,roll,pitch,yaw : your initial guess of the offset you are trying to optimise / calibrate

--thin=ratio : fraction of points to keep, sampled at random (optional, default=1, meaning all data is included)

--rough=size : optionally voxelise data with this size in metres, for initial rough feature selection

Options for handling large point clouds:

If your point cloud is not too large (e.g. < 30M points) you may opt to finely label your features in one hit. In this case use:

/applications/calibration-prepare ./ --name=<model> --offset=<initial_offset>

If your point cloud is very large (e.g. a velodyne log with >> 10M points), it can be hard to visualise due to RAM limits, and it can be hard to label foreground points against a dense background.

To handle RAM limits, use the --thin option with a ratio that brings your data down to ~30M or fewer points (e.g. --thin=0.1 keeps roughly 30M points from a 300M-point cloud; see the sketch after the concrete example below for estimating your point count). Warning: if you get this wrong, the app will currently use all your RAM and hang your system.

/applications/calibration-prepare ./ --name=<model> --offset=<initial_offset> --thin=0.001

To deal with the dense background, use the --rough option with a voxel size of 0.1m or 0.2m. This will first present you with a spatially regularised cloud with one point per voxel. You can label that roughly, then for fine alignment it will present only the dense data (subject to thinning) that is near your roughly selected features.

If using --rough, you should be able to specify a higher thinning ratio, because only the points near your rough selection need to fit into RAM.

/applications/calibration-prepare ./ --name=<model> --offset=<initial_offset> --rough=0.2 --thin=0.005

A concrete example:

/applications/calibration-prepare ./ --name=puck --offset=0.63680,-0.11904,0.00403,3.08345,-0.01786,1.56927  --rough=0.2 --thin=0.005
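
To choose a sensible --thin value it helps to know roughly how many points are in your file. Each t,3d record occupies 32 bytes in comma binary format (an 8-byte timestamp plus three 8-byte doubles), so the record count is simply the file size divided by 32. A minimal sketch (Linux stat shown; puck.bin is the hypothetical sensor file from the example above):

echo $(( $(stat -c%s puck.bin) / 32 ))

Dividing your target budget (say 30M points) by that count gives a reasonable starting ratio.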

Process

The first window that pops up should be a view-points window showing the nav solution. As long as it looks like the trajectory you drove when you ran your experiments, it should be OK: close the view-points window and let the script continue.

If there are jumps or other problems with the nav data, ctrl-c out and filter the bad nav points yourself (e.g. by variance, or by hand using label-points).

The next window that pops up should be a label-points window to select each of the features.

If --rough was specified, this will be for quick, rough selection. Don't be too precise, you can refine in the next step.

If --rough was not specified, this will be the fine feature selection, done in a single pass, with no chance to refine later.

If --rough was specified, label-points will pop up again with denser data (depending on --thin), only near the rough features. The points will be pre-labelled according to your rough selection, so label any points you don't want with id 0. This is the fine labelling pass; there is no further chance to refine.

The example below has 2 pole features and 2 flat features (one on the ground and one on the wall).

  1. Select the arrow icon to allow rotation to a good view of the feature
  2. Select the box icon to allow selection of points by clicking an individual point and dragging out a 3D box around surrounding points.
  3. Set the required id of the selection in the text bar, then
  4. Use the bucket icon to set the id of the selected points.
  5. Repeat to ensure that all points of a feature are selected, although it is preferable to leave out non-planar/linear points, e.g. the head of a lamp post and door frames.

Note that noisy points forming a shadow around the feature should definitely be included, as these are used for the actual calibration! But don't include points from adjacent physical structures.

Repeat for all required features: at least 4 or 5, but more is better. Start with one or two just to test the pipeline, though.

If you make a mistake, you can label points with id=0 to ignore them again.

Make sure to note down the type (i.e. line or plane) of each feature and its ID for use in the optimisation step.
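
For example, a note as simple as the following (hypothetical IDs, matching the example scene above) is all that is needed, and maps directly onto the costfunctions entries in the optimisation step:

id 1 : pole (line)
id 2 : ground (plane)
id 3 : wall (plane)
id 4 : pole (line)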

Click save and close the label-points window once this is done. A final view-points window will then load to show the selected features so you can confirm they are OK; close it and the script should produce sensor.features.bin, sensor.features.json and calibration.bin, which are taken to the matlab optimisation step.

3. Optimisation

The matlab script calibrate.m should be in the same folder as calibration-prepare (/applications).

There are a few things to modify inside this script in order to run the calibration:

dataFolder: Set this to the path where your feature files are, i.e. the directory containing the output of calibration-prepare.

costfunctions: Hopefully you remembered or noted down the type of each feature ID! Define each cost function so that it corresponds to the relevant feature type, e.g., for the above example with features selected in the order line, plane, plane, line:

Ignore=@(x)(0);                                          % assign this to a feature to exclude it from the optimisation
costfunctions = cell(nFeatures,1);
costfunctions{1} = 'GeometricCostOfUnconstrainedLine';   % feature id 1: a pole (line)
costfunctions{2} = 'GeometricCostOfUnconstrainedPlane';  % feature id 2: a plane
costfunctions{3} = 'GeometricCostOfUnconstrainedPlane';  % feature id 3: a plane
costfunctions{4} = 'GeometricCostOfUnconstrainedLine';   % feature id 4: a pole (line)
%costfunctions{5} = Ignore;                              % uncomment to ignore a badly selected feature (see below)

If after optimisation it is found that a feature cannot be fitted well, for instance because points were accidentally selected that are not planar, it can be ignored as per feature 5 in the example above.

If calibration is being done for multiple sensors, make sure all feature files are in dataFolder in the form *features.bin. They will be included in the optimisation automatically.

The initial estimate of offset will be loaded from the sensor.features.json file which should be included in dataFolder. There should be one per sensor used.
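
To double check which initial offset was recorded, the json file can be inspected with any json tool, for example (assuming the file is named puck.features.json):

python -m json.tool < puck.features.json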

The following directories need to be added to your matlab path using addpath(directory):

~/src/comma/csv/examples (for using bin_load)
/applications
/impl
/costfunctions
/visualise

Optimisation can now be run from matlab:

calibrate

A common issue:

Error using bin_load (line 40) csv-format failed to expand format string

If so, then there is likely a libstdc++ linking error. Run matlab with:

sudo LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libstdc++.so.6 matlab -desktop

Note: check where your library actually is first
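
For example, to find where libstdc++ actually lives before setting LD_PRELOAD:

ldconfig -p | grep libstdc++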

Output

Output should display the calibrated offset of all sensors in the format:

optimised result:
offset=0.63465,-0.12016,0.00392,3.08299,-0.01799,1.56819
offset before calibration (m/deg):
0.7200 0 0 180.0000 0 90.0000
offset after calibration (m/deg):
0.6346 -0.1202 0.0039 176.6423 -1.0306 89.8507

In the above, the line offset=0.63465,-0.12016,0.00392,3.08299,-0.01799,1.56819 would now be saved and added to any sensor listings in config files for the vehicle.

Plots will also be displayed showing the selected features before and after calibration for visual inspection. If any feature appears to not have improved, try running the optimisation again with that particular feature's cost function set to Ignore.

Notes

calibration-prepare

MAXGAP parameter: the maximum valid time gap between two successive nav records. If this is set too low, many or all of the points will be discarded. There is currently no command line option; it needs to be set in the script.

Optimisation

bounds variable (in calibrate.m): defines the bounds of the search space around the initial estimate. If the initial estimate is way off, bounds needs to be set to a higher value. It is currently not read from configuration; units are metres and radians.