Automatic matching result issue #61

Open
jianghaijun007 opened this issue Nov 3, 2023 · 1 comment
Labels
enhancement New feature or request

Comments

@jianghaijun007

Hello, thank you very much for your incredible work!
After using your tool to calibrate the data you provided and the data we collected ourselves, I have several questions:

  1. The calibration result can only be inspected by adjusting blend_weight to see how well the image overlaps when projected onto the point cloud, but the overlap is often hard to judge, so in many cases I cannot tell the quality of the calibration. Why not project the point cloud onto the image instead?
  2. When manually selecting matching points between the image and the point cloud, the grayscale image loses a lot of detail. Why not use a color image?
  3. This is the result of using the officially provided livox_ros1 data with the automatic matching method.
    [Screenshot 2023-11-03 11-35-39]
    [Screenshot 2023-11-03 11-35-47]
    By adjusting blend_weight, we can see that the overlap between the calibrated image projection and the point cloud is very good.
    This is the result of using the officially provided ouster_ros1 data with the automatic matching method.
    [Screenshot 2023-11-03 17-01-27]
    I cannot tell the quality of the calibration result by adjusting blend_weight. Is my result correct?
  4. This is our own data, collected with a repetitive-scan LiDAR in an autonomous driving scenario, with the sensors mounted on the vehicle. In manual matching mode, we collected several segments of data while the vehicle was stationary. However, because the scene is cluttered, it was difficult to select accurate matching points, and the calibration error was large. Do we need to collect data in a scene with more regular and distinctive features?
    [Image: 20231102145254_524]
  5. In automatic matching mode, we collected several segments of data while the vehicle was moving, but the matching results were still not ideal. Can you help me analyze the reason?
    Supplement: the data we collected only includes the image and point cloud topics, without camera info.
    [Image: dy_2_1103_3 bag]
    [Image: dy_2_1103_3 bag_lidar_intensities]
    [Image: dy_2_1103_3 bag_superglue]
    Thank you for your time!
@jianghaijun007 jianghaijun007 added the enhancement New feature or request label Nov 3, 2023
@koide3
Owner

koide3 commented Nov 13, 2023

  1. Because we were interested in creating colored point clouds, we chose to project the image data onto the point cloud. I think a visualization with the opposite projection could easily be realized using, for example, rviz.

  2. Because the image-point-cloud alignment algorithm works on intensity data, we convert images to mono8. That said, I think a minor modification would make it possible to show color images.

  3. The ouster result looks corrupted. Did you enable dynamic point cloud integration?

  4. The environment itself contains rich geometric features and looks good, but make sure there are no dynamic objects.

  5. The accumulated point cloud is too sparse to extract features. I recommend taking a longer recording with more movement to generate a denser point cloud.
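Regarding point 1, the "opposite projection" the asker wants (point cloud onto image) is a standard pinhole projection. A minimal NumPy sketch, assuming you already have the estimated LiDAR-to-camera extrinsics and the camera intrinsic matrix (the function name and argument layout here are illustrative, not the tool's API):

```python
import numpy as np

def project_points_to_image(points_lidar, T_camera_lidar, K):
    """Project Nx3 LiDAR points into the image plane of a pinhole camera.

    points_lidar   : (N, 3) points in the LiDAR frame
    T_camera_lidar : (4, 4) homogeneous transform from LiDAR to camera frame
                     (e.g., the extrinsics estimated by the calibration)
    K              : (3, 3) camera intrinsic matrix

    Returns (M, 2) pixel coordinates of the points in front of the camera.
    """
    # Transform points into the camera frame
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_camera_lidar @ pts_h.T).T[:, :3]

    # Keep only points with positive depth (in front of the camera)
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Pinhole projection: u = fx*x/z + cx, v = fy*y/z + cy
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]
```

Overlaying the returned pixel coordinates on the image (colored by point intensity or depth) gives a quick visual check of calibration quality without relying on blend_weight.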
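Regarding point 2, the mono8 conversion mentioned above is a plain luma conversion. A self-contained sketch using the standard ITU-R BT.601 weights (the same weighting OpenCV uses for BGR-to-gray; this is a generic illustration, not the tool's actual conversion code):

```python
import numpy as np

def to_mono8(image_bgr):
    """Convert an (H, W, 3) BGR color image to mono8 (8-bit grayscale),
    the format intensity-based image/point-cloud alignment operates on."""
    b = image_bgr[..., 0].astype(np.float64)
    g = image_bgr[..., 1].astype(np.float64)
    r = image_bgr[..., 2].astype(np.float64)
    # ITU-R BT.601 luma weights
    gray = 0.114 * b + 0.587 * g + 0.299 * r
    return np.clip(np.round(gray), 0, 255).astype(np.uint8)
```

Showing the original color image in the picking UI while feeding the mono8 version to the alignment would address the loss-of-detail complaint without changing the algorithm.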
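Regarding point 5, a quick way to sanity-check whether an accumulated cloud is "too sparse" before running the matching is to measure the mean nearest-neighbor distance over a random sample of points. This is a hypothetical diagnostic sketch (brute-force, NumPy only), not part of the tool:

```python
import numpy as np

def mean_nearest_neighbor_distance(points, sample=1000, seed=0):
    """Rough density proxy for an (N, 3) point cloud: the mean distance from
    a random sample of points to their nearest neighbor. A large value means
    the accumulated cloud is sparse, which hurts feature extraction."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=min(sample, len(points)), replace=False)
    dists = []
    for i in idx:
        d = np.linalg.norm(points - points[i], axis=1)
        d[i] = np.inf  # exclude the point itself
        dists.append(d.min())
    return float(np.mean(dists))
```

Comparing this value between a short recording and a longer one with more movement makes the "sparse cloud" diagnosis concrete: the longer recording should produce a noticeably smaller mean distance.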
