SLAM calibration (1): Monocular vision


1. Camera model

 Common monocular cameras mainly come in two models: the pinhole camera and the fisheye camera. I have introduced the principles of visual coordinate system conversion before. Whatever the monocular camera model is, its intrinsic parameter model is the same; taking the earlier result, it is shown in the figure below:

Coordinate System

1.1 pinhole model

pinhole

 The pinhole camera works on the principle of pinhole imaging. It mainly suffers from two different types of distortion: radial distortion, caused by the shape of the lens, and tangential distortion, caused by the lens not being strictly parallel to the imaging plane during camera assembly.

 The radial distortion model is shown below, where $(x, y)$ is the ideal (undistorted) normalized coordinate, $(x_{dist}, y_{dist})$ is the actual observed coordinate, and $r^2 = x^2 + y^2$:

$$x_{dist} = x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$$
$$y_{dist} = y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$$

 Tangential distortion:

$$x_{dist} = x + 2p_1 xy + p_2(r^2 + 2x^2)$$
$$y_{dist} = y + p_1(r^2 + 2y^2) + 2p_2 xy$$

 Combining the two kinds of distortion gives the camera's distortion model:

$$x_{dist} = x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2p_1 xy + p_2(r^2 + 2x^2)$$
$$y_{dist} = y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1(r^2 + 2y^2) + 2p_2 xy$$
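 To make the model concrete, here is a minimal numpy sketch that applies the radial-tangential distortion above to a normalized image coordinate; the coefficient values in the example call are made up for illustration only:

import numpy as np

def distort_radtan(x, y, k1, k2, p1, p2, k3=0.0):
    """Apply the radial-tangential distortion model to a normalized coordinate (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# Example: a point 0.2 units off-axis with mild barrel distortion (made-up coefficients).
print(distort_radtan(0.2, 0.1, k1=-0.05, k2=0.01, p1=0.001, p2=0.001))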

1.2 fisheye model

fisheye

 Fisheye cameras are roughly divided into four models: the equidistant projection model, the equisolid-angle projection model, the orthographic projection model and the stereographic projection model. In practice it is impossible to manufacture a fisheye lens that follows one of these models exactly, so to make calibration convenient the projection is usually expressed as a combination of the four models: the weighted model is expanded as a Taylor series and the first five terms are kept, giving (with $\theta$ the angle of incidence):

$$\theta_d = \theta\,(1 + k_1\theta^2 + k_2\theta^4 + k_3\theta^6 + k_4\theta^8)$$

 The distortion model is then, with $r = \sqrt{x^2 + y^2}$ the radius of the undistorted normalized point $(x, y)$:

$$x_{dist} = \frac{\theta_d}{r}\,x, \qquad y_{dist} = \frac{\theta_d}{r}\,y$$

 Generally, an ordinary pinhole camera only needs $k_1, k_2, p_1, p_2$ to be calibrated, while a fisheye camera is calibrated for $k_1, k_2, k_3, k_4$.
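 A corresponding sketch for the equidistant fisheye model above (the same "equi" model used later with kalibr and implemented by OpenCV's fisheye module), again operating on normalized coordinates; the coefficients in the example call are placeholders:

import numpy as np

def distort_equidistant(x, y, k1, k2, k3, k4):
    """Apply the equidistant (fisheye) distortion model to a normalized coordinate (x, y)."""
    r = np.sqrt(x * x + y * y)
    if r < 1e-8:                      # point on the optical axis: no distortion
        return x, y
    theta = np.arctan(r)              # angle of incidence
    theta_d = theta * (1.0 + k1 * theta**2 + k2 * theta**4
                       + k3 * theta**6 + k4 * theta**8)
    scale = theta_d / r
    return x * scale, y * scale

print(distort_equidistant(0.2, 0.1, k1=-0.001, k2=-0.03, k3=0.05, k4=-0.02))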

2. Basic principles

2.1 The homography matrix

 The monocular intrinsics and distortion are calibrated with the Zhang Zhengyou calibration method, a single-plane checkerboard camera calibration method proposed by Professor Zhang Zhengyou in 1998. We rewrite the coordinate system conversion equation as:

$$s\begin{bmatrix}u \\ v \\ 1\end{bmatrix} = K\begin{bmatrix}r_1 & r_2 & r_3 & t\end{bmatrix}\begin{bmatrix}X \\ Y \\ Z \\ 1\end{bmatrix}$$

 Fixing the world coordinate system on the chessboard plane, i.e. $Z = 0$, we get:

$$s\begin{bmatrix}u \\ v \\ 1\end{bmatrix} = K\begin{bmatrix}r_1 & r_2 & t\end{bmatrix}\begin{bmatrix}X \\ Y \\ 1\end{bmatrix}$$

 Denote $K\begin{bmatrix}r_1 & r_2 & t\end{bmatrix}$ as the homography matrix $H$, namely:

$$s\begin{bmatrix}u \\ v \\ 1\end{bmatrix} = H\begin{bmatrix}X \\ Y \\ 1\end{bmatrix}, \qquad H = K\begin{bmatrix}r_1 & r_2 & t\end{bmatrix}$$

$H$ is a homogeneous matrix, defined only up to scale, so it has 8 unknowns and 8 equations are needed to solve it. Each point pair provides two equations, so at least 4 point pairs are required to compute the homography matrix.
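 As an illustration, a homography can be estimated from four or more board-to-image correspondences with OpenCV; the coordinates below are made-up values rather than measurements from this article's setup:

import cv2
import numpy as np

# Four (or more) corner correspondences: planar board points (X, Y) in metres
# and their detected pixel locations (u, v). The values here are invented.
board_pts = np.array([[0, 0], [0.12, 0], [0.12, 0.12], [0, 0.12]], dtype=np.float32)
image_pts = np.array([[310, 205], [395, 210], [390, 295], [305, 290]], dtype=np.float32)

# cv2.findHomography solves for the 8 unknowns of H (up to scale) by DLT.
H, _ = cv2.findHomography(board_pts, image_pts)
print(H)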

2.2 Calculation of internal parameters

 The rotation matrix is orthonormal, so its columns satisfy:

$$r_1^T r_2 = 0, \qquad r_1^T r_1 = r_2^T r_2 = 1$$

 Substituting $\begin{bmatrix}h_1 & h_2 & h_3\end{bmatrix} = \lambda K\begin{bmatrix}r_1 & r_2 & t\end{bmatrix}$, i.e. $r_1 = \frac{1}{\lambda}K^{-1}h_1$ and $r_2 = \frac{1}{\lambda}K^{-1}h_2$, gives:

$$h_1^T K^{-T} K^{-1} h_2 = 0$$
$$h_1^T K^{-T} K^{-1} h_1 = h_2^T K^{-T} K^{-1} h_2$$

 It can be seen from the above formulas that one homography matrix provides two equations. The intrinsic matrix contains at least 4 parameters, so at least 2 homography matrices are needed; calibration therefore requires at least two checkerboard pictures, and in practice at least 3 different pictures are taken by changing the relative pose between the camera and the calibration board. Denoting $B = K^{-T}K^{-1}$, $B$ can be solved linearly from these constraints, and the camera intrinsic matrix $K$ is then obtained from $B$ through Cholesky decomposition.
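 The linear step just described can be sketched in a few lines of numpy: stack Zhang's two constraints per homography into a matrix $V$, solve $Vb = 0$ by SVD for the entries of $B = K^{-T}K^{-1}$, and recover $K$ by Cholesky decomposition. This is a simplified illustration that ignores noise handling (with noisy data $B$ may need extra care to remain positive definite):

import numpy as np

def v_ij(H, i, j):
    """Zhang's constraint vector built from columns i and j of a homography H (0-based)."""
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0] * hj[0],
                     hi[0] * hj[1] + hi[1] * hj[0],
                     hi[1] * hj[1],
                     hi[2] * hj[0] + hi[0] * hj[2],
                     hi[2] * hj[1] + hi[1] * hj[2],
                     hi[2] * hj[2]])

def intrinsics_from_homographies(Hs):
    """Solve V b = 0 for B = K^{-T} K^{-1}, then recover K by Cholesky decomposition."""
    V = []
    for H in Hs:
        V.append(v_ij(H, 0, 1))                  # h1^T B h2 = 0
        V.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))  # h1^T B h1 = h2^T B h2
    _, _, vt = np.linalg.svd(np.asarray(V))
    b = vt[-1]                                   # null-space vector of V
    B = np.array([[b[0], b[1], b[3]],
                  [b[1], b[2], b[4]],
                  [b[3], b[4], b[5]]])
    if B[0, 0] < 0:                              # B is only defined up to scale/sign
        B = -B
    L = np.linalg.cholesky(B)                    # B = L L^T with L = K^{-T}
    K = np.linalg.inv(L.T)
    return K / K[2, 2]                           # normalize so that K[2, 2] = 1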

 The above derivation assumes an ideal, noise-free case. In practice the corner observations contain Gaussian noise, so maximum likelihood estimation is used for refinement. Suppose we have collected $n$ checkerboard images and each image has $m$ corner points $M_j$, whose observed pixel location on the $i$-th image is $m_{ij}$. The projection of the corner point $M_j$ onto the $i$-th image is:

$$\hat m(K, R_i, t_i, M_j)$$

 The likelihood function is constructed from the probability density function of the corner points (assuming independent Gaussian noise with variance $\sigma^2$):

$$L(K, R_i, t_i) = \prod_{i=1}^{n}\prod_{j=1}^{m} \frac{1}{2\pi\sigma^2}\exp\!\left(-\frac{\lVert m_{ij} - \hat m(K, R_i, t_i, M_j)\rVert^2}{2\sigma^2}\right)$$

 Maximizing the likelihood function is equivalent to minimizing the sum of squared reprojection errors

$$\sum_{i=1}^{n}\sum_{j=1}^{m}\lVert m_{ij} - \hat m(K, R_i, t_i, M_j)\rVert^2,$$

and we choose the LM (Levenberg-Marquardt) algorithm to solve this nonlinear optimization iteratively.
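 A minimal sketch of this nonlinear refinement, assuming scipy is available and using cv2.projectPoints to evaluate the predicted corners $\hat m$; the parameter packing (fx, fy, cx, cy, four distortion terms, then one rvec/tvec per view) is my own choice for illustration, and building the initial guess x0 from the closed-form solution above is omitted:

import numpy as np
import cv2
from scipy.optimize import least_squares

def reprojection_residuals(params, board_pts, image_pts_per_view):
    """Residuals m_ij - m_hat for all views.
    board_pts: (M, 3) board corners with Z = 0; image_pts_per_view: list of (M, 2) arrays.
    params = [fx, fy, cx, cy, k1, k2, p1, p2, rvec_1, t_1, ..., rvec_n, t_n]."""
    fx, fy, cx, cy = params[0:4]
    dist = params[4:8]
    K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]])
    residuals = []
    for i, observed in enumerate(image_pts_per_view):
        rvec = params[8 + 6 * i: 11 + 6 * i]
        tvec = params[11 + 6 * i: 14 + 6 * i]
        projected, _ = cv2.projectPoints(board_pts, rvec, tvec, K, dist)
        residuals.append((projected.reshape(-1, 2) - observed).ravel())
    return np.concatenate(residuals)

# method='lm' runs Levenberg-Marquardt starting from an initial guess x0 built
# from the closed-form solution (construction of x0 and the data is omitted here):
# result = least_squares(reprojection_residuals, x0, method='lm',
#                        args=(board_pts, image_pts_per_view))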

2.3 Distortion calculation

 The distortion is computed according to the distortion model formula in the first section and substituted into the optimization function above; performing LM iterative optimization again then estimates the intrinsics and the distortion parameters together.
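 Putting sections 2.1-2.3 together, OpenCV's cv2.calibrateCamera implements essentially this pipeline (homography-based initialization followed by LM refinement of the intrinsics, distortion and per-view extrinsics). Below is a minimal sketch assuming a hypothetical 9x6 inner-corner chessboard with 25 mm squares and an assumed calib/ folder of images:

import glob
import cv2
import numpy as np

pattern = (9, 6)                               # inner corners per row/column (assumed)
square = 0.025                                 # square size in metres (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

objpoints, imgpoints = [], []
for path in glob.glob("calib/*.png"):          # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        objpoints.append(objp)
        imgpoints.append(corners)

# Closed-form initialization + LM refinement of K, distortion and extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("K =\n", K, "\ndist =", dist.ravel())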

3. Calibration operation

 Visual calibration can be performed with open-source tools such as MATLAB and OpenCV; the operation is simple, and interested readers can refer to the related materials. Here I introduce a better calibration tool, kalibr, which can calibrate not only monocular cameras but also binocular stereo cameras and VIO systems, and it also provides bag extraction and compression utilities. Highly recommended! Address: https://github.com/ethz-asl/kalibr.git.

3.1 Calibration board

 Common calibration boards are: aprilgrid, checkerboard and circlegrid. We use an aprilgrid, as shown below:

aprilgrid

 The yaml file parameters of apriltag are:

  target_type: 'aprilgrid'  # grid type
  tagCols: 6                # number of apriltags
  tagRows: 6                # number of apriltags
  tagSize: 0.12             # size of apriltag, edge to edge [m]
  tagSpacing: 0.3           # ratio of space between tags to tagSize
                            # example: tagSize=2m, spacing=0.5m --> tagSpacing=0.25[-]

3.2 Calibration

 We keep moving the camera in front of the calibration board while recording a video. I am using a binocular VIO system here, so the bag contains two camera topics and one IMU topic; for monocular calibration we only use cam0.

rosbag_info

 The camera is calibrated from the command line: --target is the calibration-target parameter file, --bag is the bag file, --models is the intrinsic/distortion model, and --topics is the corresponding topic in the bag:

kalibr_calibrate_cameras --target 2april_6x6.yaml --bag w.bag --models pinhole-equi --topics /cam0/image_raw

 Kalibr is written in mixed Python and C++ and runs in a ROS environment. Its basic algorithm is the Zhang Zhengyou calibration method described in the second section, refined with LM iterative optimization, i.e. BA (Bundle Adjustment) optimization; the principle of BA optimization will be introduced later. Due to space limitations, the core details of the code are not given here; interested readers can read the source code carefully, starting from the kalibr_calibrate_cameras script. After calibration is completed, a PDF summary document and the corresponding intrinsics/distortion yaml file are generated. The first page of the PDF document is as follows:

error

 We can see that the reprojection error of most of the corresponding corner points is below 0.5 pixels. Generally, if the average reprojection error is less than 0.5 pixels, the calibration result can be considered acceptable; otherwise the camera should be re-calibrated. Our calibration result is acceptable, and the corresponding yaml file is:

cam0:
  cam_overlaps: []
  camera_model: pinhole
  distortion_coeffs: [-0.0012401362900342373, -0.03069966600965989, 0.046607156271033026,
    -0.018194192150805963]
  distortion_model: equidistant
  intrinsics: [290.69996211319705, 290.8786390133949, 311.99329708834927, 208.3866583107687]
  resolution: [640, 400]
  rostopic: /cam0/image_raw
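 To use this result downstream, the yaml can be loaded (assuming PyYAML is installed) and the pinhole-equi parameters mapped onto OpenCV's fisheye functions for undistortion. The camera-chain file name and the test image below are assumptions for illustration, not outputs quoted above:

import yaml
import numpy as np
import cv2

# Load the kalibr camera-chain yaml (file name assumed).
with open("camchain-w.yaml") as f:
    chain = yaml.safe_load(f)

cam = chain["cam0"]
fx, fy, cx, cy = cam["intrinsics"]
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]])
D = np.array(cam["distortion_coeffs"])           # equidistant model: k1..k4

# pinhole-equi corresponds to OpenCV's fisheye camera model.
img = cv2.imread("sample.png")                   # hypothetical test image
undistorted = cv2.fisheye.undistortImage(img, K, D, Knew=K)
cv2.imwrite("sample_undistorted.png", undistorted)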

 So far, we have obtained the intrinsic calibration results of the monocular camera. In the next issue, we will talk about the intrinsic and extrinsic calibration of the binocular stereo camera.

SLAM calibration series articles

1. IMU calibration (3): calibration of deterministic errors
2. IMU calibration (2): calibration of random errors
3. IMU calibration


Reference: SLAM calibration (1) Monocular vision, Tencent Cloud + Community: https://cloud.tencent.com/developer/article/1802617