0-Calibration

This module takes the raw calibration sequence and generates the calibration (intrinsics + extrinsics) for all the cameras in the scene. Further information can be found at: https://github.com/wgrosche/MultiviewCameraCalibration/

Input

  • data/raw_data/

Output

  • data/0-calibration
  • data/0-calibration/calibs/camname1.json
  • data/0-calibration/calibs/camname2.json

0-calibration-setup

Prepares the environment and ensures that all necessary arguments are set, logging its findings.

To execute run: python3 0-calibration-setup.py

1-Intrinsics

This step of the pipeline calculates the intrinsics (camera matrix, distortion coefficients, etc.) for each camera, based on the calibration footage placed in 'data/raw_data/calibration'.

This step requires calibration footage: footage of a calibration pattern (checkerboard). You can generate a calibration pattern using https://markhedleyjones.com/projects/calibration-checkerboard-collection. Verify the calibration pattern dimensions and amend them in /code/utils/arguments.py if necessary.

To execute run: python3 1-intrinsics.py
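
Conceptually, this step performs standard checkerboard calibration. The sketch below is a minimal OpenCV illustration of that idea, not the pipeline's actual code; the pattern size, image format, and exact paths are assumptions:

```python
# Minimal checkerboard calibration sketch (OpenCV). Illustrative only:
# pattern size and paths are assumptions, not the pipeline's defaults.
import glob
import cv2
import numpy as np

pattern_size = (9, 6)  # inner corners per row/column of the printed board
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in sorted(glob.glob("data/raw_data/calibration/*.png")):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K: 3x3 camera matrix, dist: distortion coefficients (k1, k2, p1, p2, k3).
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(f"RMS reprojection error: {rms:.3f} px")
```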

Verification images are placed in 'data/0-calibration/visualisation/undistorted-calibration' and intrinsics are saved to 'data/0-calibration/calibs'. Below are some example outputs from this step of the pipeline. If calibration is unsuccessful, the undistorted images will still appear distorted, especially at the edges. In that case, ensure that the keypoint coverage is good and that the calibration footage is of sufficient quality.

Example keypoint coverage:

Good keypoint coverage is shown in the example below: ensure that keypoints cover the entire image.

[Image: detected keypoints used in calculating the camera intrinsics.]

Example undistorted image:

The undistorted image should be free from distortion: straight lines in the scene should remain straight throughout the image.

[Image: example undistorted image generated by 1-intrinsics.py; the lines are straight throughout the image, suggesting that lens distortion has been successfully accounted for.]
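
If you want to spot-check a calib file yourself, an undistortion pass along these lines can help (a hedged sketch: the JSON keys "K" and "dist" are assumptions about the calib file schema, so adapt them to what your files actually contain):

```python
# Undistort a single frame using a saved calib file (schema assumed).
import json
import cv2
import numpy as np

with open("data/0-calibration/calibs/camname1.json") as f:
    calib = json.load(f)  # "K" and "dist" are assumed key names

img = cv2.imread("frame.png")  # any frame from the same camera (assumed name)
K = np.array(calib["K"], dtype=np.float64)
dist = np.array(calib["dist"], dtype=np.float64)
cv2.imwrite("frame_undistorted.png", cv2.undistort(img, K, dist))
```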

Example monotonicity plot:

The monotonicity plot should show green, especially towards the centre of the image. The example below shows a ring of non-monotonic distortion at the periphery (in red).

[Image: monotonicity plot, with green depicting areas of monotonic distortion and red depicting non-monotonic areas.]
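
The idea behind the plot can be checked numerically: with OpenCV-style radial coefficients, the distorted radius r_d(r) = r(1 + k1 r^2 + k2 r^4 + k3 r^6) should be strictly increasing across the image. A hedged sketch (the coefficient values are illustrative, and tangential terms are ignored for simplicity):

```python
# Check whether the radial distortion model is monotonic in the radius.
import numpy as np

k1, k2, k3 = -0.30, 0.10, -0.02        # illustrative coefficients
r = np.linspace(0.0, 1.5, 500)         # normalized radius out to the corners
r_d = r * (1 + k1 * r**2 + k2 * r**4 + k3 * r**6)

diffs = np.diff(r_d)
if np.all(diffs > 0):
    print("distortion is monotonic over the tested range")
else:
    print(f"non-monotonic beyond r = {r[np.argmax(diffs <= 0)]:.2f}")
```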

Skipping the intrinsics step

If you have already calculated the intrinsics and would like to skip this step, you can place the intrinsics in 'data/0-calibration/calibs'.

Example intrinsic file:
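
The original example file is not reproduced here. As a rough illustration of what such a file might contain (field names and values are assumptions; inspect a calib file produced by 1-intrinsics.py for the authoritative schema):

```python
# Hypothetical calib file layout; names and values are illustrative only.
import json
import os

calib = {
    "K": [[1400.0, 0.0, 960.0],      # fx,  0, cx
          [0.0, 1400.0, 540.0],      #  0, fy, cy
          [0.0, 0.0, 1.0]],
    "dist": [-0.30, 0.10, 0.0, 0.0, -0.02],  # k1, k2, p1, p2, k3
}
os.makedirs("data/0-calibration/calibs", exist_ok=True)
with open("data/0-calibration/calibs/camname1.json", "w") as f:
    json.dump(calib, f, indent=2)
```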

Note that the calib files can also be used to select which cameras to process: if you want to skip a camera, simply remove the corresponding calib file.

2-Extract Images

Undistorts environment footage and places images in 'data/0-calibration/images'.

To execute run: python3 2-extract_images.py

To speed up the reconstruction, it is advised to specify which frames of the omnidirectional camera should be used. You can specify this in the 'project_config.yaml' file or on the command line with the arguments --omni_start, --omni_end and --omni_step.
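
For example, to use every tenth frame of the first 3000 (the frame values here are purely illustrative): python3 2-extract_images.py --omni_start 0 --omni_end 3000 --omni_step 10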

3-4 Annotate ROI and Check Time Alignment

Launches an annotation tool to define the region of interest and fine-tune the temporal alignment between videos.

To execute run: sh 3-annotation_tool.sh

Once started, ctrl+click the link to open the annotation tool in a web browser. From there you can open the notebook '4-select_roi.ipynb' and follow the instructions to annotate the region of interest and check the temporal alignment between the videos.

5-Reconstruct Scene

Launches the reconstruction pipeline. It first reconstructs the scene from the omnidirectional frames, then adds the static camera to the reconstruction.

To execute run: python3 5-reconstruct_scene.py

Outputs:

  • data/0-calibration/outputs/360-reconstruction.ply: point cloud of the scene.
  • data/0-calibration/outputs/360-reconstruction_w_static.html: visualization of the camera positions and the ground plane.
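
To inspect the point cloud, a viewer such as Open3D can be used (a minimal sketch; Open3D is our assumption here, and any PLY viewer such as MeshLab works just as well):

```python
# Quick visual sanity check of the reconstructed point cloud.
import open3d as o3d

pcd = o3d.io.read_point_cloud("data/0-calibration/outputs/360-reconstruction.ply")
print(pcd)  # reports the number of points loaded
o3d.visualization.draw_geometries([pcd])
```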

6-Alignment Validation

Go back to the annotation tool and open the notebook 6-alignement_validation.ipynb to check the alignment of the static camera with the omnidirectional camera.

  • Visualize the ground plane and adjust the output_img_size aspect ratio to maximize the space taken up by the region of interest.
  • Verify the ground plane height by selecting matching ground points across views. Their triangulated point should lie on the ground and have a z-coordinate of 0. If not, adjust the ground plane height in the notebook.

You can restart the notebook to verify that the adjustment was successful: in the ground plane reprojection, the reprojected points should overlap.
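
For intuition, the z-coordinate check amounts to triangulating a matched ground point from two calibrated views, along these lines (a hedged sketch: the projection matrices and pixel matches are illustrative stand-ins for values the notebook derives from the calibration):

```python
# Triangulate one matched ground point and inspect its z-coordinate.
import cv2
import numpy as np

# Illustrative 3x4 projection matrices (P = K [R | t]); stand-ins only.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

pt1 = np.array([[0.20], [0.35]])  # matched point in view 1 (normalized coords)
pt2 = np.array([[0.15], [0.35]])  # same ground point seen in view 2

X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)   # 4x1 homogeneous point
X = (X_h[:3] / X_h[3]).ravel()
print(f"triangulated z = {X[2]:.3f}  (should be ~0 on the ground plane)")
```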

Example ground plane projection:

Finally, verify that the calibration and ground plane projection are correct. The ground plane projection should show the ground plane for all cameras; points on the ground should be aligned between views, and the projection should be centred on the region of interest selected earlier.

[Image: example ground plane projection.]

Status

At the end of this step of the pipeline you should have intrinsics and extrinsics that give you a working ground plane projection.