LiDAR and Camera Fusion

https://medium.com/@shashankag14/lidar-camera-fusion-a-short-guide-34115a3055da

Camera calibration

  • Camera Calibration: estimate the camera's intrinsic parameters (e.g. focal length, principal point, and lens distortion coefficients), which relate 3D points in the camera coordinate frame to 2D image coordinates (pixel positions)
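
A minimal sketch of the usual checkerboard workflow with OpenCV (my own illustration, not from the linked guide; `checkerboard_images`, `pattern_size`, and `square_size` are assumed placeholders):

```python
import numpy as np
import cv2

# checkerboard_images is an assumed placeholder: an iterable of grayscale
# photos of a printed checkerboard taken from different viewpoints.
pattern_size = (9, 6)    # inner corners per row and column (assumed)
square_size = 0.025      # square edge length in meters (assumed)

# 3D coordinates of the board corners in the board's own frame (z = 0 plane).
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

obj_points, img_points, image_size = [], [], None
for gray in checkerboard_images:
    image_size = gray.shape[::-1]  # (width, height)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate the intrinsic matrix K and the lens distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error (pixels):", rms)
```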

LiDAR-camera extrinsic calibration

  • LiDAR-camera extrinsic calibration: determine the relative transformation (rotation and translation) between the LiDAR and the camera, allowing depth information from the LiDAR to be fused with color information from the camera in a common coordinate frame.
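
One common way to estimate this transform, sketched here as my own illustration (the correspondences and intrinsics below are toy placeholder values, not from the article), is to pick a few LiDAR points with known pixel locations, e.g. corners of a calibration target, and solve a Perspective-n-Point problem:

```python
import numpy as np
import cv2

# Toy 3D-2D correspondences (placeholder values): LiDAR points on a target
# (LiDAR frame, meters) and the pixels where they appear in the image.
lidar_pts = np.array([[2.0, 0.5, 0.1], [2.0, -0.5, 0.1],
                      [2.0, 0.5, -0.4], [2.0, -0.5, -0.4],
                      [2.5, 0.0, 0.1], [2.5, 0.0, -0.4]])
pixel_pts = np.array([[400., 220.], [880., 225.],
                      [405., 520.], [875., 515.],
                      [640., 300.], [640., 450.]])

K = np.array([[700., 0., 640.],   # intrinsics from the previous step (assumed)
              [0., 700., 360.],
              [0., 0., 1.]])
dist = np.zeros(5)                # assume images are already undistorted

# PnP yields the rigid transform mapping LiDAR-frame points to the camera frame.
ok, rvec, tvec = cv2.solvePnP(lidar_pts, pixel_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)

# Pack rotation and translation into a 4x4 homogeneous extrinsic matrix.
T_cam_lidar = np.eye(4)
T_cam_lidar[:3, :3], T_cam_lidar[:3, 3] = R, tvec.ravel()
```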

Coordinate alignment and fusion

  • Coordinate alignment and fusion: transform the 3D LiDAR points into the camera frame using the extrinsics, then project them onto the 2D camera image plane using the camera projection (intrinsic) matrix.
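
A minimal NumPy sketch of this step (my own illustration; `T_cam_lidar` and `K` are assumed to come from the two calibration steps above):

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K, image_size):
    """Project (N, 3) LiDAR points to pixel coordinates and depths."""
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous (N, 4)
    cam = (T_cam_lidar @ homo.T).T[:, :3]               # into the camera frame
    cam = cam[cam[:, 2] > 0]                            # keep points in front
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                         # perspective divide
    w, h = image_size
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[inside], cam[inside, 2]                   # pixels and their depths
```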

Sensor Fusion

https://www.thinkautonomous.ai/blog/lidar-and-camera-sensor-fusion-in-self-driving-cars/

In Sensor Fusion, we have two possible processes:

  • Early fusion: fusing the raw data, i.e. camera pixels and LiDAR point clouds.

  • Late fusion: fusing the results, i.e. bounding boxes produced independently from the LiDAR and from the camera (see the matching sketch after this list).

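To make the late-fusion idea concrete, a small sketch (my own illustration, not code from the article) that greedily pairs camera and LiDAR detections by the IoU of their 2D boxes, assuming the LiDAR boxes have already been projected into the image plane:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def late_fuse(camera_boxes, lidar_boxes, thresh=0.5):
    """Greedily match each camera detection to its best-overlapping LiDAR box."""
    pairs, used = [], set()
    for i, cb in enumerate(camera_boxes):
        best_j, best_iou = None, thresh
        for j, lb in enumerate(lidar_boxes):
            if j in used:
                continue
            v = iou(cb, lb)
            if v > best_iou:
                best_j, best_iou = j, v
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j))
    return pairs
```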

Derivation and Implementation of 3D LIDAR-Camera Sensor Fusion

https://hackaday.io/project/182303-multi-domain-depth-ai-usecases-on-the-edge/log/199419-derivation-and-implementation-of-3d-lidar-camera-sensor-fusion

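The core relation the write-up derives is the standard pinhole projection chained with the LiDAR-to-camera extrinsics. In the usual notation (standard symbols, not necessarily the article's):

```latex
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
  = K \, [\, R \mid t \,]
    \begin{bmatrix} X_L \\ Y_L \\ Z_L \\ 1 \end{bmatrix},
\qquad
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
```

where (X_L, Y_L, Z_L) is a point in the LiDAR frame, [R | t] the extrinsic rotation and translation, K the intrinsic matrix, (u, v) the pixel location, and s the point's depth along the camera's optical axis.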

Sensor Fusion - LiDARs & RADARs in Self-Driving Cars

https://www.thinkautonomous.ai/blog/sensor-fusion/

Sensor Fusion with a Kalman Filter

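A minimal 1D constant-velocity Kalman filter sketching the predict/update cycle (my own illustration of the standard filter; the noise matrices are arbitrary placeholders):

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
H = np.array([[1.0, 0.0]])              # we only measure position
Q = 0.01 * np.eye(2)                    # process noise (placeholder)
R = np.array([[0.5]])                   # measurement noise (placeholder)

x = np.zeros((2, 1))                    # state: [position, velocity]
P = np.eye(2)                           # state covariance

def kalman_step(x, P, z):
    # Predict: propagate the state and covariance through the motion model.
    x, P = F @ x, F @ P @ F.T + Q
    # Update: blend the prediction with measurement z via the Kalman gain.
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [np.array([[1.0]]), np.array([[1.2]]), np.array([[1.35]])]:
    x, P = kalman_step(x, P, z)
```

In a fusion setting the same update step can be run with measurements from different sensors, each with its own measurement model H and noise R, which is what lets a LiDAR and a RADAR refine one shared state estimate.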