Correctly handle multiple StaticTransformBroadcasters in a single process #455

Open. rhaschke wants to merge 1 commit into noetic-devel.

Conversation

rhaschke (Contributor) commented:

As ROS latches only the last message published by a given node on a given topic, only the last message is actually kept when multiple static broadcasters are used. To correctly handle multiple static broadcasters, we need to share the actual publisher instance across all broadcaster instances within a process.
This is done here via a private, shared impl pointer.
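
The patch itself is not reproduced above, but the approach can be sketched roughly as follows. This is a minimal, hypothetical sketch, not the actual tf2_ros code: the class name SharedStaticBroadcasterImpl and the de-duplication by child_frame_id are assumptions made for illustration only.

```cpp
#include <memory>
#include <mutex>

#include <geometry_msgs/TransformStamped.h>
#include <ros/ros.h>
#include <tf2_msgs/TFMessage.h>

// Process-wide implementation shared by all StaticTransformBroadcaster instances.
class SharedStaticBroadcasterImpl {
public:
  // All broadcasters in the process obtain the same impl; it is destroyed again
  // once the last broadcaster holding a shared_ptr to it goes away.
  static std::shared_ptr<SharedStaticBroadcasterImpl> instance() {
    static std::mutex guard;
    static std::weak_ptr<SharedStaticBroadcasterImpl> weak;
    std::lock_guard<std::mutex> lock(guard);
    std::shared_ptr<SharedStaticBroadcasterImpl> impl = weak.lock();
    if (!impl) {
      impl.reset(new SharedStaticBroadcasterImpl());
      weak = impl;
    }
    return impl;
  }

  void sendTransform(const geometry_msgs::TransformStamped& transform) {
    std::lock_guard<std::mutex> lock(mutex_);
    // Keep at most one transform per child frame in the aggregated message.
    bool updated = false;
    for (geometry_msgs::TransformStamped& existing : net_message_.transforms) {
      if (existing.child_frame_id == transform.child_frame_id) {
        existing = transform;
        updated = true;
      }
    }
    if (!updated)
      net_message_.transforms.push_back(transform);
    publisher_.publish(net_message_);  // re-latch the full aggregate
  }

private:
  SharedStaticBroadcasterImpl() {
    publisher_ = node_.advertise<tf2_msgs::TFMessage>("/tf_static", 100, /*latch=*/true);
  }

  std::mutex mutex_;
  ros::NodeHandle node_;
  ros::Publisher publisher_;
  tf2_msgs::TFMessage net_message_;  // single aggregated, latched message
};

// Each broadcaster simply forwards to the shared impl.
class StaticTransformBroadcaster {
public:
  StaticTransformBroadcaster() : impl_(SharedStaticBroadcasterImpl::instance()) {}
  void sendTransform(const geometry_msgs::TransformStamped& transform) {
    impl_->sendTransform(transform);
  }

private:
  std::shared_ptr<SharedStaticBroadcasterImpl> impl_;  // keeps the publisher alive
};
```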

Originally, I implemented this caching at the application level (i.e. in my rviz plugin). Then I realized that the issue should actually be handled here in tf.
Now, I'm even thinking it should be handled at the basic ROS level: whenever there are multiple latched publishers from a single process, all of their messages should be latched.
However, I'm not deep enough into the ros_comm library to judge whether this could be handled as easily as it is here, where the infrastructure to keep several messages (stamped transforms) was already in place.

I don't care very much about which level this bug gets fixed at. I just wanted to point out the issue and a possible solution.

Note that this PR builds on #454.

@tfoote (Member) left a comment:

This is a pretty good compromise for being able to publish the static transforms in aggregate. What's lost in this implementation is the ability to stop publishing a transform.

This definitely could not be pushed down lower to change the behavior in roscpp, as that is behavior others are potentially relying on, unless there were an option to turn it on for your scope.

To do this correctly, I think it might make more sense to have the implementation/singleton keep weak pointers to each StaticTransformBroadcaster instance and then iterate over all the weak pointers to assemble the still-active publishers. That would allow a publisher to go out of scope, and its transform would stop being published. Similarly, the state of each publication would then stay up to date with the state of each of the publishers in case there's an update or removal.
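
To make this suggestion concrete, here is a rough sketch of the weak-pointer bookkeeping, assuming each StaticTransformBroadcaster keeps its own transforms in a shared_ptr<tf2_msgs::TFMessage> and calls republish() both when it publishes and from its destructor. The class and member names are illustrative and not existing tf2_ros API.

```cpp
#include <memory>
#include <mutex>
#include <vector>

#include <ros/ros.h>
#include <tf2_msgs/TFMessage.h>

// Process-wide singleton that only tracks the broadcasters weakly.
class StaticBroadcasterRegistry {
public:
  StaticBroadcasterRegistry() {
    publisher_ = node_.advertise<tf2_msgs::TFMessage>("/tf_static", 100, /*latch=*/true);
  }

  // Each StaticTransformBroadcaster owns its transforms in a shared_ptr<TFMessage>;
  // the registry stores only a weak_ptr, so a broadcaster can go out of scope freely.
  void track(const std::weak_ptr<const tf2_msgs::TFMessage>& transforms) {
    std::lock_guard<std::mutex> lock(mutex_);
    tracked_.push_back(transforms);
  }

  // Reassemble the latched aggregate from all broadcasters that are still alive.
  // Called whenever a broadcaster publishes or is destroyed, so the transforms of
  // destroyed broadcasters drop out of the latched message.
  void republish() {
    std::lock_guard<std::mutex> lock(mutex_);
    tf2_msgs::TFMessage aggregate;
    for (auto it = tracked_.begin(); it != tracked_.end();) {
      if (std::shared_ptr<const tf2_msgs::TFMessage> msg = it->lock()) {
        aggregate.transforms.insert(aggregate.transforms.end(),
                                    msg->transforms.begin(), msg->transforms.end());
        ++it;
      } else {
        it = tracked_.erase(it);  // this broadcaster went out of scope
      }
    }
    publisher_.publish(aggregate);  // latch the newly assembled message
  }

private:
  std::mutex mutex_;
  ros::NodeHandle node_;
  ros::Publisher publisher_;
  std::vector<std::weak_ptr<const tf2_msgs::TFMessage>> tracked_;
};
```

With this scheme, a broadcaster going out of scope is detected through its expired weak pointer, so its transforms disappear from the next latched aggregate automatically.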

The single commit's message reads:

As ROS latches only the last message from a given node to a topic,
when using multiple static broadcasters, only the last message was actually kept.
To correctly handle this, we need to share the actual publisher across
all instances within a process.
@rhaschke (Contributor, Author) commented:

Sorry for not reacting to your feedback, @tfoote, for so long. I had already forgotten about this PR and just found it again while scanning my pending PRs.

What's lost in this implementation is the ability to stop publishing a transformation.

Yes, that's an important point I missed.

I think it might make more sense to have the implementation/singleton keep weak pointers to each instance of the StaticTransformBroadcaster which will then iterate all the weak pointers to assemble still active publishers.

I'm not sure what you mean by assembling active publishers. The message cannot be assembled on demand from the active StaticTransformBroadcasters (STBs). It is important that there is a single, unique net_message_ for latching. If an STB goes out of scope, it should remove from this net_message_ all transforms originating from itself.
There is also another issue: if different STBs have published different, possibly conflicting transforms, we should probably keep the previous (now overridden) transforms around, so that they can be restored when the "newer" STB gets destroyed.

This could be achieved by storing all transforms twice: in net messages held by each STB and in an overall net message held by the Impl. When a new transform is published, it is added to both lists. When an STB gets destroyed, the overall net message would be (somewhat costly) reassembled from the remaining STBs' messages.

To avoid storing all messages twice, the Impl could remember the "owner" of each transform, e.g. in a map(child_frame -> STB).
If an STB goes out of scope, all of its now-invalid transforms could be removed. To restore previous transforms, we would additionally need to remember overridden transforms, but only those. However, it would require quite some bookkeeping to restore the correct transform, i.e. the one that was overridden last.
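
Purely as an illustration of this bookkeeping (none of this is part of the PR), the per-frame state held by the Impl might look like this; all identifiers are hypothetical:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

#include <geometry_msgs/TransformStamped.h>
#include <tf2_msgs/TFMessage.h>

class StaticTransformBroadcaster;  // owners identified by plain pointers here

// Bookkeeping held by the process-wide Impl (illustrative only).
struct StaticBroadcasterImplState {
  tf2_msgs::TFMessage net_message_;  // the single latched aggregate

  // For each child frame, the broadcaster that currently "owns" its transform.
  std::map<std::string, const StaticTransformBroadcaster*> owner_;

  // Only overridden transforms are kept, newest override last, so that when the
  // owning broadcaster is destroyed the most recent surviving transform for that
  // frame can be restored into net_message_.
  std::map<std::string,
           std::vector<std::pair<const StaticTransformBroadcaster*,
                                 geometry_msgs::TransformStamped>>> overridden_;
};
```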

I'm not sure it is worth this effort to get rid of old transforms. Aren't clients caching those old transforms anyway?

@rhaschke changed the base branch from melodic-devel to noetic-devel on October 22, 2020, 14:00
@rhaschke (Contributor, Author) commented Jul 2, 2022:

Closing and reopening to trigger CI.

@rhaschke closed this on Jul 2, 2022
@rhaschke reopened this on Jul 2, 2022
@ahcorde added the noetic label on Feb 2, 2024