Hello, thank you very much for your incredible work!
After using your tool to calibrate the data you provided and the data we collected ourselves, I have several questions:
The calibration result can only be inspected by adjusting blend_weight to see the overlap of the image projected onto the point cloud, and this is not very informative. In many cases I cannot tell the quality of the calibration. Why not project the point cloud onto the image instead?
When we manually select matching points between the image and the point cloud, the grayscale image loses a lot of detail. Why not use a color image?
This is the result of using the officially provided livox_ros1 data with the automatic matching method.
By adjusting blend_weight, we can see that the overlap between the calibrated image projection and the point cloud is very good.
This is the result of using the officially provided ouster_ros1 data with the automatic matching method.
I can't tell the quality of the calibration results by adjusting blend_weight. Is my result correct?
This is data we collected ourselves in an autonomous driving scenario, using a LiDAR with a repetitive scan pattern and with the sensors mounted on a vehicle. In manual matching mode, we collected several segments of data while the vehicle was stationary. However, because the scene was cluttered, it was difficult to select accurate matching points and the calibration error was large. Do I need to collect data in scenes with more regular and distinct features?
In the automatic matching mode, we collected several segments of vehicle movement data, but the matching results were still not ideal. Can you help me analyze the reason?
Supplement: the data we collected contains only image and point cloud topics, without camera information.
Thank you for reading, and for your time!
Because we were interested in creating colored point clouds, we chose to project the image data onto the point cloud. I think a visualization with the opposite projection can easily be realized using, for example, rviz.
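For reference, the opposite projection (point cloud onto image) can also be sketched outside rviz with a pinhole model. This is a minimal example assuming hypothetical intrinsics `K` and a camera-from-LiDAR extrinsic `T_camera_lidar`; it is not part of the tool itself:

```python
import numpy as np

def project_points(points_lidar, T_camera_lidar, K):
    """Project LiDAR points into the image plane with a pinhole camera model.
    points_lidar: (N, 3), T_camera_lidar: (4, 4), K: (3, 3)."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_camera_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0          # keep only points in front of the camera
    uv = (K @ pts_cam[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]           # perspective division
    return uv, in_front

# Example with hypothetical intrinsics and an identity extrinsic
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0]])
uv, mask = project_points(pts, T, K)
print(uv)  # pixel coordinates of the two points
```

Drawing the resulting `uv` pixels over the camera image (colored by range or intensity) gives a quick visual check of the calibration quality.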
Because the image-to-point-cloud alignment algorithm works on intensity data, we convert images to mono8. That said, I think a minor modification would make it possible to show color images.
The Ouster result looks corrupted. Did you enable dynamic point cloud integration?
The environment itself contains rich geometric features and looks good, but make sure there are no dynamic objects.
The accumulated point cloud is too sparse to extract features from. I recommend taking a longer recording with more movement to generate a dense point cloud.
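Conceptually, the accumulation step merges per-frame scans into one map cloud by transforming each scan with its estimated pose; more frames and more viewpoint change yield a denser map. A minimal sketch of that idea (the function name and interface are illustrative, not the tool's actual API):

```python
import numpy as np

def accumulate_scans(scans, poses):
    """Merge per-frame scans into one dense map cloud.
    scans: list of (N_i, 3) point arrays; poses: list of (4, 4) frame-to-map transforms."""
    merged = []
    for pts, T in zip(scans, poses):
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
        merged.append((T @ pts_h.T).T[:, :3])             # transform into the map frame
    return np.vstack(merged)

# Two single-point scans, the second taken after moving 2 m along z
scan_a = np.array([[1.0, 0.0, 0.0]])
scan_b = np.array([[0.0, 1.0, 0.0]])
pose_a = np.eye(4)
pose_b = np.eye(4)
pose_b[:3, 3] = [0.0, 0.0, 2.0]
cloud = accumulate_scans([scan_a, scan_b], [pose_a, pose_b])
print(cloud.shape)
```

A longer recording simply contributes more entries to `scans`, which is what makes the resulting map dense enough for feature extraction.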