
Ported scaled_jtc #1188

Closed · wants to merge 1 commit
Conversation

@firesurfer (Contributor) commented Jul 2, 2024

This is a remake of #301, given that neither @fmauch nor @bmagyar has responded regarding why that PR was closed.

Differences to the original PR

  1. This PR simply adds the functionality to the normal JTC.
  2. Instead of using the speed scaling interface, a transient-local speed scaling factor topic is used. This has various advantages, especially in multi-arm / multi-JTC setups (see "Added option and implementation for setting speedscaling factor via topic", UniversalRobots/Universal_Robots_ROS2_Driver#883 (comment), for more information, and the sketch after this list).
  3. It contains a fix for the case where a goal time is set: "Attempt to fix the goal time violated issue", UniversalRobots/Universal_Robots_ROS2_Driver#882. (Note: @fmauch and I disagree about what the correct behavior in that case would be, but he agreed to merge because otherwise the controller would not be usable.)
  4. It does not store the time_data_ field in a RealtimeBox, as access happens solely from the RT thread.
  5. The scaling factor is not clipped to [0, 1], as the ability to upscale the execution speed is really useful in sim environments.
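To make point 2 concrete, here is a minimal publisher sketch (not the PR's actual code): a factor is latched on an absolute topic with transient-local QoS, so controllers that start later still receive the current value. The std_msgs/msg/Float64 message type and the topic name are assumptions for illustration.

```cpp
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/float64.hpp"

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("speed_scaling_publisher");

  // transient_local() latches the last published value, so a JTC that
  // starts (or restarts) after this publisher still receives the factor.
  auto pub = node->create_publisher<std_msgs::msg::Float64>(
    "/speed_scaling_factor", rclcpp::QoS(1).transient_local());

  std_msgs::msg::Float64 msg;
  msg.data = 0.5;  // half speed; values > 1.0 are allowed (see point 5)
  pub->publish(msg);

  rclcpp::spin(node);  // keep the latched sample alive for late joiners
  rclcpp::shutdown();
  return 0;
}
```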

Testing

I tested it by cherry-picking the commit back to the iron branch and running it on my setup with multiple joint motions; they looked fine. We have been using essentially the same code (but added to the original scaled JTC) in our production setup for more than nine months now (https://github.com/firesurfer/Universal_Robots_ROS2_Driver/tree/fixed_jtc).

Configuration

The PR adds the following configuration entry:

```yaml
speed_scaling_topic_name: {
  type: string,
  default_value: "~/speed_scaling_factor",
  description: "Topic name for the speed scaling factor - set to an empty string in order to disable it"
}
```

For a multi-arm scenario, one would set this parameter to the same topic name for all involved JTCs, as in the sketch below.
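For example, a controller YAML along these lines (the controller names here are purely illustrative) points two JTCs at one shared, absolute topic:

```yaml
# Hypothetical multi-arm setup: both controllers subscribe to the same
# absolute topic instead of their private ~/speed_scaling_factor default.
left_arm_joint_trajectory_controller:
  ros__parameters:
    speed_scaling_topic_name: "/speed_scaling_factor"

right_arm_joint_trajectory_controller:
  ros__parameters:
    speed_scaling_topic_name: "/speed_scaling_factor"
```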

Open for discussion: should it be disabled by default?

Notes

The scaling_factor_ field does not need to be in a RealtimeBox (it does not matter if it is updated one update cycle earlier or later), but it might make sense to use a std::atomic<double>, as some platforms might not guarantee atomic access to the variable.
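As a rough sketch of that suggestion (class and member names are illustrative, not the PR's actual code), the subscription callback would write the factor and the RT update loop would read it through a std::atomic<double>:

```cpp
#include <atomic>

// Illustrative stand-in for the controller class, not the PR's actual code.
class ScaledJTC
{
public:
  // Called from the subscription callback (non-RT executor thread).
  void on_scaling_msg(double factor)
  {
    scaling_factor_.store(factor, std::memory_order_relaxed);
  }

  // Called from update() on the RT thread every control cycle.
  double current_scaling_factor() const
  {
    // std::atomic rules out a torn read; without it, a platform lacking
    // atomic 64-bit loads could observe a half-written double.
    return scaling_factor_.load(std::memory_order_relaxed);
  }

private:
  std::atomic<double> scaling_factor_{1.0};  // 1.0 = unscaled execution speed
};
```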

It has the same issue as the original PR by @fmauch: it does not scale velocities and efforts at the moment. Nevertheless, I think it is a really useful addition, and bringing this feature upstream will avoid issues such as #1182, where the bug was fixed upstream but not backported yet.

The Iron backport I used for testing: https://github.com/firesurfer/ros2_controllers/tree/scaled_jtc_iron

Commits

  move TimeData reset to on_activate
  run pre-commit
  Add goal time violated fix
@fmauch (Contributor) commented Jul 3, 2024

Hey @firesurfer, thanks for bringing this up again. I had just wanted to tackle this and push it over the final barrier; you were faster than me :-)

What is missing from my point of view:

  1. tests
  2. documentation

I'll write some documentation and open a PR against your fork, @firesurfer? Then we can get that streamlined into this PR.


Edit: I forgot to mention: by now I agree that subscribing to the scaling factor is preferable to a service. @destogl, would you agree?


Edit2:
This doesn't contain a lot of the features that I currently have on my branch. I'll see how to merge everything together.


Edit3:
There have been some discussions in the background regarding this that I have already included on my fork. I understand that you would like to move this forward, @firesurfer. If you would give me until the end of the week to bring my work to a mergeable state where we can discuss things, I think that would be faster than iterating on your PR. Would that be OK for you?

@bmagyar (Member) commented Jul 3, 2024

Thanks for the PR, we've discussed it in detail at the WG meeting today and you'll get a follow-up PR ;)

@firesurfer (Contributor, Author) commented

@fmauch I am closing this PR in favor of #1191.

@firesurfer closed this Jul 4, 2024