Hi, I'm trying to use the TurtleBot 2 in Gibson with ROS by adapting the example script as follows:
turtlebot_depth.py
I've also used a TF broadcaster to publish the camera's transform at each timestep.
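For reference, a minimal sketch of that broadcaster, assuming illustrative frame names (`base_link`, `camera_depth_frame`) and a fixed mounting pose; the actual script feeds in the camera pose reported by Gibson each timestep:

```python
import rospy
import tf
from tf.transformations import quaternion_from_euler

rospy.init_node("camera_tf_broadcaster")
br = tf.TransformBroadcaster()
rate = rospy.Rate(30)
while not rospy.is_shutdown():
    # Example mounting pose: camera 0.3 m above the base, level, facing forward.
    br.sendTransform((0.0, 0.0, 0.3),                      # translation (m)
                     quaternion_from_euler(0.0, 0.0, 0.0), # orientation quaternion
                     rospy.Time.now(),
                     "camera_depth_frame",                 # child frame
                     "base_link")                          # parent frame
    rate.sleep()
```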
I run the above script and continuously command an angular velocity of 0.1 rad/s to the robot, with zero linear velocity.
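The command loop is essentially the following (the `/cmd_vel` topic name is an assumption and may differ in your setup):

```python
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("spin_in_place")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
rate = rospy.Rate(10)
cmd = Twist()
cmd.linear.x = 0.0    # zero linear velocity
cmd.angular.z = 0.1   # constant yaw rate, rad/s
while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```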
Next, I use the depth and camera_info topics to generate a point cloud with the ROS image_pipeline package, via the depth_image_proc/point_cloud_xyz nodelet.
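For context, here is a sketch of the pinhole back-projection that nodelet performs; note that the output is in the optical-frame convention (+x right, +y down, +z forward), which matters for the next point:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a metric depth image (HxW) using intrinsics from CameraInfo.K.

    Mirrors what depth_image_proc/point_cloud_xyz computes: the result is in
    the optical-frame convention (+x right, +y down, +z forward).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # HxWx3 array of points
```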
When I visualize the point cloud in RViz, the first issue I notice is that the z-axis appears inverted.
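I suspect (though I haven't confirmed) that this is because the cloud is expressed in the optical convention (z forward, y down) while my broadcast TF for the camera uses the ROS body convention (x forward, z up). If so, one fix would be to publish the standard body-to-optical rotation as an extra transform and let the cloud be stamped with the optical frame; a sketch with assumed frame names:

```python
import rospy
import tf
from math import pi
from tf.transformations import quaternion_from_euler

rospy.init_node("optical_frame_broadcaster")
br = tf.TransformBroadcaster()
rate = rospy.Rate(30)
# The standard body->optical rotation (equivalent to the common
# static_transform_publisher "0 0 0 -1.5708 0 -1.5708" setup).
q_optical = quaternion_from_euler(-pi / 2.0, 0.0, -pi / 2.0)
while not rospy.is_shutdown():
    br.sendTransform((0.0, 0.0, 0.0), q_optical, rospy.Time.now(),
                     "camera_depth_optical_frame",  # child: optical convention
                     "camera_depth_frame")          # parent: body convention
    rate.sleep()
```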
Next, I read the point clouds being published in a separate script and combine two adjacent clouds to test registration. I use the following code, which uses the stamp of each point cloud message's header to fetch the corresponding camera transform, then transforms the cloud's points accordingly before merging the two clouds into one.
pointcloud_registration.py
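(The attached script is not reproduced inline; the following is a rough sketch of the same approach, with assumed topic and frame names, using tf2 and tf2_sensor_msgs.)

```python
import rospy
import tf2_ros
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2
from tf2_sensor_msgs.tf2_sensor_msgs import do_transform_cloud

rospy.init_node("pointcloud_registration")
tf_buffer = tf2_ros.Buffer(rospy.Duration(30.0))
listener = tf2_ros.TransformListener(tf_buffer)

clouds = []

def callback(msg):
    if len(clouds) < 2:
        clouds.append(msg)

rospy.Subscriber("/camera/depth/points", PointCloud2, callback)
while len(clouds) < 2 and not rospy.is_shutdown():
    rospy.sleep(0.1)

merged = []
for cloud in clouds:
    # Look up the camera pose at the cloud's own stamp (not Time(0)),
    # so the transform matches the moment the depth image was captured.
    t = tf_buffer.lookup_transform("odom", cloud.header.frame_id,
                                   cloud.header.stamp, rospy.Duration(1.0))
    world_cloud = do_transform_cloud(cloud, t)
    merged.extend(pc2.read_points(world_cloud, field_names=("x", "y", "z")))
# "merged" now holds both clouds' points in the "odom" frame.
```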
The problem is that the merged point clouds do not overlap properly: for a small rotation of the robot (0.1 rad) between two adjacent frames, there is a significant difference in the depth of the same surfaces. The following images should help highlight the difference:
Can someone please let me know how to fix this issue?