LiDAR and Camera Fusion
https://medium.com/@shashankag14/lidar-camera-fusion-a-short-guide-34115a3055da
Camera calibration
- Camera Calibration: estimate the camera's intrinsic parameters (e.g. focal length, principal point, and lens distortion), which relate 3D points in the camera coordinate frame to 2D image coordinates (pixel positions)
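A minimal sketch of how the intrinsic matrix is used once it has been estimated; the focal length and principal point values below are made-up placeholders, not taken from the article:

```python
import numpy as np

# Hypothetical intrinsics: fx, fy (focal lengths in pixels), cx, cy (principal point).
fx, fy, cx, cy = 721.5, 721.5, 609.6, 172.9
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# A 3D point already expressed in the camera coordinate frame (meters).
X_cam = np.array([2.0, 1.0, 10.0])

# Perspective projection: pixel = (K @ X) / Z (lens distortion ignored here).
uvw = K @ X_cam
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
print(f"pixel coordinates: ({u:.1f}, {v:.1f})")
```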
LiDAR-camera extrinsic calibration
- LiDAR-camera extrinsic calibration: determine the relative transformation (rotation and translation) between the LiDAR and the camera, so that depth information from the LiDAR can be fused with color information from the camera in a common coordinate frame.
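A sketch of applying a LiDAR-to-camera extrinsic transform once it is known; the rotation and translation below are illustrative placeholders, not a real calibration:

```python
import numpy as np

# Hypothetical extrinsics: rotation R and translation t that map LiDAR
# coordinates into the camera coordinate frame (X_cam = R @ X_lidar + t).
R = np.array([[0.0, -1.0,  0.0],   # example axis permutation, not a measured calibration
              [0.0,  0.0, -1.0],
              [1.0,  0.0,  0.0]])
t = np.array([0.05, -0.10, -0.20])  # meters, placeholder offsets

# Homogeneous 4x4 form, convenient for chaining transforms.
T_lidar_to_cam = np.eye(4)
T_lidar_to_cam[:3, :3] = R
T_lidar_to_cam[:3, 3] = t

# Transform a batch of LiDAR points (N x 3) into the camera frame.
points_lidar = np.array([[5.0,  1.0, 0.5],
                         [8.0, -2.0, 0.3]])
points_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
points_cam = (T_lidar_to_cam @ points_h.T).T[:, :3]
print(points_cam)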
Coordinate alignment and fusion
- Coordinate alignment and fusion: project the 3D LiDAR points onto the 2D camera image plane using the camera projection matrix.
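Putting the two calibrations together, a hedged sketch of the projection step; K, R, t and the image size reuse the placeholder values from the sketches above, where a real pipeline would load them from calibration files:

```python
import numpy as np

# Placeholder intrinsics and extrinsics (same values as the sketches above).
K = np.array([[721.5,   0.0, 609.6],
              [  0.0, 721.5, 172.9],
              [  0.0,   0.0,   1.0]])
R = np.array([[0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0],
              [1.0,  0.0,  0.0]])
t = np.array([0.05, -0.10, -0.20])
img_w, img_h = 1242, 375  # placeholder image size

# 3x4 projection matrix P = K [R | t] maps homogeneous LiDAR points to pixels.
P = K @ np.hstack([R, t.reshape(3, 1)])

# Synthetic LiDAR points in front of the vehicle (stand-in for a real scan).
points_lidar = np.random.uniform([0, -10, -2], [40, 10, 2], size=(1000, 3))
points_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])

cam = points_h @ np.hstack([R, t.reshape(3, 1)]).T   # points in the camera frame
proj = points_h @ P.T

# Keep points in front of the camera, then keep projections inside the image.
front = cam[:, 2] > 0.1
uv = proj[front, :2] / proj[front, 2:3]
inside = (uv[:, 0] >= 0) & (uv[:, 0] < img_w) & (uv[:, 1] >= 0) & (uv[:, 1] < img_h)
pixels, depths = uv[inside], cam[front, 2][inside]
print(f"{len(pixels)} of {len(points_lidar)} points project into the image")
```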
Sensor Fusion
https://www.thinkautonomous.ai/blog/lidar-and-camera-sensor-fusion-in-self-driving-cars/
In Sensor Fusion, we have two possible processes:
- Early fusion — Fusing the raw data - pixels and point clouds.
- Late fusion — Fusing the results - bounding boxes from LiDAR and from camera.
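As one illustration of the late-fusion idea, the sketch below associates 2D boxes from a camera detector with LiDAR detections already projected into the image plane, using IoU as the matching score; all box values and the 0.5 threshold are made-up placeholders, not the article's method:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Placeholder detections: camera boxes from a 2D detector, LiDAR boxes
# already projected into the same image plane.
camera_boxes = [[100, 120, 220, 300], [400, 150, 520, 310]]
lidar_boxes  = [[105, 118, 225, 295], [700, 200, 780, 280]]

# Greedy association: keep pairs whose IoU exceeds an arbitrary threshold.
matches = []
for i, cb in enumerate(camera_boxes):
    best_j, best_iou = None, 0.5
    for j, lb in enumerate(lidar_boxes):
        score = iou(cb, lb)
        if score > best_iou:
            best_j, best_iou = j, score
    if best_j is not None:
        matches.append((i, best_j, best_iou))

print(matches)  # camera box 0 matches lidar box 0; camera box 1 has no LiDAR support
```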
Derivation and Implementation of 3D LIDAR-Camera Sensor Fusion
Sensor Fusion - LiDARs & RADARs in Self-Driving Cars
https://www.thinkautonomous.ai/blog/sensor-fusion/