Misalignment in Laser Data Partitioning #14

Open
zahra370 opened this issue Nov 2, 2023 · 8 comments

@zahra370

zahra370 commented Nov 2, 2023

I'm encountering an issue with my laser sensor data. My laser has a 180-degree range with 720 samples. When I attempt to divide the laser data into left and right readings, the obstacle positions mapped by the map nodes appear reversed and offset by some distance. The code and an rviz screenshot are attached.

[images: changes, changed_rviz]

Interestingly, when I use the original code with right[0:360], the mapped nodes for the right side are correct.

[images: right, r-rviz]
I'm seeking suggestions on how to correctly divide the laser data into left and right samples, as well as how to further divide the laser state into 20 ranges to enhance my robot's ability to track the goal.

@reiniscimurs
Owner

Hi,

What kind of laser are you using?

To give a bit of background on why this method uses left and right laser readings at all (and why you might not need it):
For a real-life laser such as the RpLidar (as used in this implementation), its placement on the robot is very important, as it dictates which laser readings are actually relevant for robot navigation. With ROS-simulated lasers you can simply specify the field of view and the number of laser readings, and sequential readings are then returned from left to right. This is not the case for real lasers. With an RpLidar you will always receive 360-degree readings, and it is up to the user to decide which readings correspond to the field of view you want to use. So how you orient your laser is very important. We placed it in the following way, with the data line at the back of the laser:
[image: RpLidar orientation on the robot, data line at the back]

As you can see from the image, the first laser reading is directly in front of the laser, with subsequent readings going clockwise. Because we only need the 180 degrees in front of the laser, we have to filter the received data: we take the initial readings in front of the laser from the first quarter of the scan, then the final readings from the last quarter. We ignore the remaining 720 laser readings that fall outside our FoV.

But we also require the laser readings to be sequential from left to right. So we take the last 360 readings of the laser data and store them as the "left" laser data, take the first 360 readings and use them as the "right" laser data, and then combine them in the form [left, right] to obtain 720 sequential values over a 180-degree FoV, just as we had when training the policy in simulation.
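A minimal sketch of this reordering (the 1440-sample count follows from the numbers above; the message name rplidar and the exact slicing are assumptions, not the repo's code):

import numpy as np

# Assumed layout: 1440 readings over 360 degrees, index 0 pointing straight
# ahead, indices increasing clockwise (toward the robot's right).
scan = np.array(rplidar.ranges)

right = scan[0:360]       # first quarter of the scan: front-right of the FoV
left = scan[1080:1440]    # last quarter of the scan: front-left of the FoV
# scan[360:1080] covers the rear 180 degrees and is discarded

# Combine as [left, right]: 720 sequential readings from left to right,
# matching the ordering of the simulated laser used during training.
state = np.append(left, right)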

So you have to check what kind of laser you are using and understand the differences in its hardware and data representation (and what the implementation in GDAE is like). If you are using a simulated laser from ROS, these steps are unnecessary. Be mindful that this repo represents a real implementation with physical hardware, and different hardware will have different specifications.

@reiniscimurs
Owner

reiniscimurs commented Nov 2, 2023

With the previous comment I was explaining why we need to apply the left and right filtering and ordering before using the laser data: the hardware treats laser data differently than simulated data.

The simulated data is already ordered properly. By that I mean that the input you get from the simulated laser is already in the form [left, right], where left = [0:360] and right = [360:720] (in your case with 720 readings). So you do not need to perform this ordering and filtering.
You could simply try:

import numpy as np

rpleft = np.array(rplidar.ranges[360:720])
rpright = np.array(rplidar.ranges[0:360])
rp_state = np.append(rpleft, rpright)

laser_in = np.array(rplidar.ranges[:])  # takes all 720 readings directly from the simulated laser
laser_in[np.isinf(laser_in)] = 10       # replace infinite readings with the cap value
laser_in[laser_in > 10] = 10            # cap all readings at 10

By trying to apply the same filtering and ordering of data as needed for the real laser, you would end up scrambling the laser data for the policy as well. You should be able to treat the data here just the same way as in the DRL repo. The changes here are purely due to changes in the hardware.
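As for dividing the laser state into 20 ranges, as asked above: a hedged sketch of a sector-minimum reduction, assuming the 720-value laser_in from the snippet above (the DRL repo uses a similar scheme, but check it for the exact implementation):

bins = np.split(laser_in, 20)                    # 20 sectors of 36 readings each
laser_state = np.array([b.min() for b in bins])  # minimum distance per sector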

@reiniscimurs
Owner

> Thank you for being responsive and for the detailed explanation. Despite implementing the suggested changes, there hasn't been much improvement. The left-side map_nodes are mapped correctly, but the right map_nodes are still not visible. I have made the recommended adjustments, but the issue persists. Additionally, after training my model with a 180-degree angular range and 720 samples and testing it successfully, I encountered a new problem: when using the trained model with the GDAM repository, the robot only moves in a circular motion instead of reaching the goal as it did during testing. Please review my code and provide suggestions where necessary.
>
> [images: code, left]

Are you sure that only the left side is shown here? The line does not look like the middle of the laser readings. If I am not mistaken, the laser nodes (for debugging) are displayed by map_nodes. If you look at the initialization, map_nodes stores only 400 values by default, so it would only display that many points in rviz. That would be just a bit more than half of the laser, which does seem to be the case in your image here.
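A hedged illustration of that limit (the variable names are assumptions; check the initialization in GDAM_env.py for the real ones):

import numpy as np

laser_points = np.zeros((720, 2))          # all candidate points from the scan
MAX_NODES = 400                            # the default buffer size mentioned above
visible_points = laser_points[:MAX_NODES]  # points 400..719 are never displayed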

For the performance issue, did you update the return of the step function to align it with what is in the DRL repo? Notice that in GDAE we normalize the polar coordinates to the goal into the range [0, 1], but we do not do that in the DRL repo which you used to train the model.
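A hedged illustration of that mismatch (the constants and the exact scaling are assumptions, not the repos' actual code):

import numpy as np

distance_to_goal = 4.2  # example value, meters
angle_to_goal = 0.7     # example value, radians

# DRL repo (training): the policy sees the raw polar coordinates.
state_drl = [distance_to_goal, angle_to_goal]

# GDAE (deployment): the same quantities scaled into [0, 1].
MAX_DIST = 10.0  # assumed normalization constant
state_gdae = [distance_to_goal / MAX_DIST,
              (angle_to_goal + np.pi) / (2 * np.pi)]

# A policy trained on the first form will misread the second, which can
# produce behaviors such as endless circling.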

@reiniscimurs
Owner

Your state should add values from aIn instead of a.

Additionally, you can try to disable recovery behaviors to see their effects here:
https://github.com/reiniscimurs/GDAE/blob/main/Code/GDAM_env.py#L315-L324

You could also share your code in full, as it is difficult to estimate what might be wrong from code snippets alone.

@zahra370
Author

The heuristic function mentioned in the paper is designed to calculate the heuristic for each candidate point. However, in the provided code, fixed values are assumed for the goal and map information. I would like to ask whether the origin value should vary for each candidate point, and how the heuristic function can be incorporated into GDAM_env.py.

@reiniscimurs
Owner

What do you mean by fixed values? In this repo we use a simplified heuristic function, but even then the goal information is read from a class attribute. As such, on each heuristic calculation you read the current global goal and use it to calculate the heuristic. Alternatively, you could pass this value directly into the function, but it is not fixed, as self.global_goal_x and self.global_goal_y get updated over time. The same applies to the map information.
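An illustrative sketch of that pattern (the method name and the distance metric are assumptions; only self.global_goal_x and self.global_goal_y come from the repo):

import math

def heuristic(self, candidate_x, candidate_y):
    # Reads the *current* global goal from the class attributes on each call,
    # so the result is not fixed even though no goal is passed in.
    return math.hypot(self.global_goal_x - candidate_x,
                      self.global_goal_y - candidate_y)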

@zahra370
Author

I'm grateful for your suggestions and help in getting the results, but I need some help with my trajectory planner: it only rotates counterclockwise, never clockwise. If I give min_vel_theta a negative value, the robot oscillates, can't complete its turns, and fails to reach the goal. Could you look at the parameters and help me fine-tune them? Thanks!

[video: traj_plnner_ros.mp4]

[images: trajectory planner parameter screenshots]

@reiniscimurs
Owner

Hi,

What controls the robot in this case? If the control is done by the trajectory planner, I cannot help you there.

If it is still controlled by the learned policy, you should first check the frame of the given goal and make sure that the frame transforms make sense with respect to the robot's heading. The issue is most likely not in the trajectory planner itself, unless it returns the goal in some unusual frame.
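A hedged sketch of such a frame check in ROS 1 with tf2 (the frame names, goal coordinates, and node name are all assumptions):

import rospy
import tf2_ros
import tf2_geometry_msgs  # registers transform support for geometry_msgs types
from geometry_msgs.msg import PointStamped

rospy.init_node("goal_frame_check")
tf_buffer = tf2_ros.Buffer()
tf2_ros.TransformListener(tf_buffer)

goal = PointStamped()
goal.header.frame_id = "map"            # the frame the planner reports the goal in
goal.point.x, goal.point.y = 3.0, 1.5   # example goal position

# If the transformed point looks mirrored or rotated relative to where the
# goal actually is, the frame transform is the likely culprit.
goal_in_base = tf_buffer.transform(goal, "base_link", rospy.Duration(1.0))
print(goal_in_base.point.x, goal_in_base.point.y)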
