
2D to 3D mapping #51

Open

LinLin1031 opened this issue Jul 29, 2023 · 10 comments

@LinLin1031

Can you tell me how to find the coordinate relationship between a 2D image and the 3D point cloud using the camera pose?

@xuxiaoxxxx

May I ask if you have solved this? I also don't know how to project the 3D point cloud into the 2D image. Could you share your method with me?

@Geniusly-Stupid

I'm also facing this problem. May I ask how to build the pose matrix from "pose.json" so that the point cloud can be transformed into global coordinates?

@Geniusly-Stupid

Following the equation in #16, I successfully constructed the rotation matrix that aligns the point cloud generated from the images in the data folder, using "final_camera_rotation" and "camera_rt_matrix". However, I still cannot find the translation vector: I tried "camera_location" and "camera_rt_matrix", but the result is off. Could anyone tell me how to obtain the translation vector? Many thanks!

@Geniusly-Stupid

I solved this problem! The reason is that the depth is saved as 16-bit integers and must be divided by 512 before translating. I also summarize some common issues when mapping 2D images to the 3D global coordinates.

  1. If the single-frame point cloud generated from the depth map splits into separate layers, the reason might be that in this dataset the missing depth values are stored as 2^16 - 1.
  2. If the point clouds from different camera uuids end up separated after translating, the reason might be that you did not divide the depth by 512 before translating, so the units of "camera_location" and the point coordinates do not match (see the sketch after this list).
  3. The rotation matrix for the pano folder can be constructed using the equation in Order of pose transformations to align EXR with back-projected camera points #16.
  4. The rotation matrix for the data folder can be constructed with the same equation, but with "camera_initial_rotation" replaced by "camera_final_rotation".
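
For reference, here is a minimal back-projection sketch of the recipe above. Assumptions on my side (not confirmed in this thread): the intrinsics key is "camera_k_matrix", "camera_rt_matrix" is the 3x4 world-to-camera matrix whose inverse gives the camera-to-world pose, the depth image stores z-depth along the optical axis, and the file paths are placeholders.

    import json
    import cv2
    import numpy as np

    pose = json.load(open("camera_pose.json"))    # placeholder path for one frame's pose json
    depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED).astype(np.float64)

    valid = depth < 2**16 - 1    # missing depth values are stored as 2^16 - 1
    depth = depth / 512.0        # 16-bit depth units -> meters

    K = np.array(pose["camera_k_matrix"])         # 3x3 intrinsics (assumed key name)
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    # Pinhole back-projection: pixel (u, v) with depth z -> camera coordinates.
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    pts_cam = np.stack([x, y, depth], axis=-1)[valid]    # (N, 3) points in the camera frame

    # Camera -> global: pad the 3x4 "camera_rt_matrix" to 4x4 and invert it.
    rt = np.vstack([np.array(pose["camera_rt_matrix"]), [0, 0, 0, 1]])
    cam_to_world = np.linalg.inv(rt)
    pts_world = pts_cam @ cam_to_world[:3, :3].T + cam_to_world[:3, 3]

Inverting the padded "camera_rt_matrix" should give the same camera-to-world pose as the rotation/translation construction from #16 (see also the comment below).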

Hope this helps! And many thanks to the team for the great work on this dataset!

@xuxiaoxxxx

Thanks!

@ayushjain1144 commented Oct 19, 2023

I am facing issues with some scenes in area_5. Specifically, in area_5a office_20, I find that some images are aligned correctly with the 3D point cloud while others are off by a 90-degree rotation. I am using the files from the data folder.

[Screenshot: S3DIS point cloud]

[Screenshot: unprojected point cloud]

@Geniusly-Stupid could you please check if it is ok on your side?

Also, for the poses, I find that the following gives exactly the same result as the equation described in the comment above:

    import numpy as np

    camera_rt_matrix = data["camera_rt_matrix"]       # 3x4 "camera_rt_matrix" from pose.json
    camera_rt_matrix.append([0, 0, 0, 1])             # pad to a 4x4 homogeneous matrix
    camera_rt_matrix = np.linalg.inv(np.array(camera_rt_matrix))  # the inverse maps camera coordinates to global coordinates

Other bug reports:

    # Left-multiply by a fixed 4x4 transform: a 90-degree rotation about the
    # z-axis combined with a translation of (-4.10, 6.25, 0).
    camera_rt_matrix = np.array([
        [0, 1, 0, -4.10],
        [-1, 0, 0, 6.25],
        [0, 0, 1, 0.0],
        [0, 0, 0, 1]
    ]) @ camera_rt_matrix

@ngoductuanlhp

Hi @ayushjain1144,
I'm also facing the same problem. Have you found a solution yet?

@ayushjain1144

Hi, not really. My understanding is that it's not an alignment issue; rather, the depth/color images in the raw folder do not exhaustively cover the entire room. One hack that helps is to look for images in the other rooms of a particular area that overlap with the current room and include those as well. For that, I unproject each image from the other rooms and check whether any of its points lie very close to the provided S3DIS point cloud; if so, I add that image to the room. I haven't tested this exhaustively, so I'm not sure how helpful it is.
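
Roughly, the proximity check could look something like this (just a sketch; the 5 cm threshold and the 10% overlap ratio are arbitrary values for illustration):

    import numpy as np
    from scipy.spatial import cKDTree

    def image_overlaps_room(unprojected_pts, room_pts, dist_thresh=0.05, min_ratio=0.1):
        """Return True if enough of an image's unprojected points lie near the room's point cloud.

        unprojected_pts: (N, 3) points back-projected from a candidate image of another room.
        room_pts:        (M, 3) points of the provided S3DIS point cloud for the current room.
        """
        tree = cKDTree(room_pts)
        dists, _ = tree.query(unprojected_pts, k=1)    # nearest-neighbor distance for every point
        return np.mean(dists < dist_thresh) >= min_ratio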

@ngoductuanlhp

Hi @ayushjain1144, did you use the V1.2_aligned version or the V1.2 version of the 3D point cloud?

@ayushjain1144

The normal one (un-aligned)
