We present a novel framework for estimating dense depth maps by combining 3D lidar scans with a set of uncalibrated RGB camera images of the same scene. Rough 3D structure estimates, obtained by running structure from motion (SfM) on the uncalibrated images, are first co-registered with the lidar scan. A precise alignment between the two datasets is then estimated by identifying correspondences between each captured image and the image reprojected for that camera from the 3D lidar point cloud. This precise alignment is used to update both the camera geometry parameters and the per-camera radial distortion estimates, yielding a 3D-to-2D transformation that accurately maps the lidar scan onto each 2D image plane. Finally, this 3D-to-2D map is used to estimate a dense depth map for each image.
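The final step above, mapping the aligned lidar scan onto an image plane to obtain per-pixel depth, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the two-parameter polynomial radial distortion model, and the nearest-point z-buffer rasterization are all assumptions for the example.

```python
import numpy as np

def project_lidar_to_depth(points_w, R, t, K, k1, k2, image_size):
    """Project 3D lidar points into a camera view to form a sparse depth map.

    points_w: (N, 3) lidar points in world coordinates.
    R, t: camera extrinsics (world-to-camera rotation and translation).
    K: 3x3 intrinsic matrix; k1, k2: radial distortion coefficients
    (a common two-parameter polynomial model, assumed for this sketch).
    """
    h, w = image_size
    # Transform world points into the camera frame.
    pc = points_w @ R.T + t
    # Keep only points in front of the camera.
    pc = pc[pc[:, 2] > 0]
    # Normalized image-plane coordinates.
    xn = pc[:, 0] / pc[:, 2]
    yn = pc[:, 1] / pc[:, 2]
    # Apply radial distortion: scale = 1 + k1*r^2 + k2*r^4.
    r2 = xn**2 + yn**2
    scale = 1.0 + k1 * r2 + k2 * r2**2
    xd, yd = xn * scale, yn * scale
    # Map to pixel coordinates via the intrinsics.
    u = K[0, 0] * xd + K[0, 2]
    v = K[1, 1] * yd + K[1, 2]
    # Rasterize with a z-buffer: keep the nearest depth per pixel.
    depth = np.full((h, w), np.inf)
    ui = np.round(u).astype(int)
    vi = np.round(v).astype(int)
    ok = (ui >= 0) & (ui < w) & (vi >= 0) & (vi < h)
    for x, y, z in zip(ui[ok], vi[ok], pc[ok, 2]):
        depth[y, x] = min(depth[y, x], z)
    depth[np.isinf(depth)] = 0.0  # mark pixels with no lidar return
    return depth
```

The resulting map is sparse wherever no lidar point lands; producing the dense depth map described above would additionally require interpolating or diffusing these sparse depths across the image.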
- Lidar and SfM: Complementary Nature
| Modality | Accuracy | Resolution | Operating requirements |
|----------|----------|------------|------------------------|
| Lidar    | High     | Low        | Simple operation       |
| SfM      | Low      | High       | Texture required       |
- Sample Results
Fusing structure from motion and lidar for dense accurate depth map estimation. In: Proc. IEEE Intl. Conf. Acoustics, Speech and Sig. Proc. (ICASSP), pp. 1283–1287, 2017.