Allowing External Angle Sources in Pose Estimation #1309

Open
wants to merge 17 commits into master

Conversation

@InterstellarDrift commented Apr 13, 2024

This pull request adds the IMU_SOLVE_BEST_TAG PoseStrategy. It takes the target returned by the getBestTarget method of PhotonPipelineResult and solves solely from that tag. When tested in simulation, the regular multitag solve could be off by as much as half a meter or more, with about +/- 0.1 m of jitter, while IMU_SOLVE_BEST_TAG maintained both high accuracy and low noise: about +/- 0.01-0.02 m. (These figures are approximate; I'm recalling them from memory.)

#746

@InterstellarDrift requested a review from a team as a code owner April 13, 2024 19:51
@mdurrani808 (Contributor)

How does this perform when the robot is rotating? A major problem with MegaTag2 is that, without a timestamp, you have no idea when the gyro reading is from, which leads to inaccuracies. This is fine when you're sitting in place, but once you're moving, things start getting dicey.

@InterstellarDrift (Author) commented Apr 13, 2024

Unfortunately I don't have much information on that at the moment; all of my testing so far has been done in simulation, and something is currently broken there that prevents me from rotating the robot in teleop. I'll look into compensating for latency, though.

@mcm001 (Contributor) commented Apr 13, 2024

The other thing to note is that this still only uses a single tag. How does using the multitag pose but with the robot yaw perform?

@srimanachanta (Member)

Suggest a refactor here to support a TimeInterpolatableBuffer of rotation angles, which would solve the timestamp issues and also allow this to be used with multitag more effectively via timestamp synchronization across measurements. I'd prefer to internalize this, though, as the PhotonPoseEstimator has grown a lot with different strategies, which can get confusing.
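
For context, the timestamp synchronization with WPILib's TimeInterpolatableBuffer would look roughly like this (a sketch; the class and method names are assumptions, not code from this PR):

```java
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.interpolation.TimeInterpolatableBuffer;
import edu.wpi.first.wpilibj.Timer;
import java.util.Optional;
import org.photonvision.targeting.PhotonPipelineResult;

class HeadingRegisterSketch {
    // Keep one second of heading history.
    private final TimeInterpolatableBuffer<Rotation2d> headingBuffer =
            TimeInterpolatableBuffer.createBuffer(1.0);

    // Call from the robot loop at a steady rate with the current gyro reading.
    void recordHeading(Rotation2d gyroHeading) {
        headingBuffer.addSample(Timer.getFPGATimestamp(), gyroHeading);
    }

    // When a vision result arrives, interpolate the heading at the frame's
    // capture time, so the solve uses the rotation from when the image was taken.
    Optional<Rotation2d> headingFor(PhotonPipelineResult result) {
        return headingBuffer.getSample(result.getTimestampSeconds());
    }
}
```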

@InterstellarDrift (Author)

> I'd prefer to internalize this, though, as the PhotonPoseEstimator has grown a lot with different strategies, which can get confusing.

I definitely agree that moving the register function to a different class would be better; otherwise it would become tedious to work with when multiple PoseEstimators are used, in addition to adding unnecessary options. Would PhotonUtils be an acceptable place to store the heading register?

I plan on pushing the commits I've made over the past month this afternoon or evening.

@InterstellarDrift (Author)

> How does this perform when the robot is rotating? A major problem with MegaTag2 is that, without a timestamp, you have no idea when the gyro reading is from, which leads to inaccuracies. This is fine when you're sitting in place, but once you're moving, things start getting dicey.

Last night I finally did a side-by-side comparison of the estimated pose's position before and after latency compensation was applied, and the difference is substantial. At the test location I was using between the speaker and the amp in simulation, without latency compensation the estimated pose would get flung out as far as the opposite side of the stage. With compensation, at the same speeds, the farthest the estimate was thrown was about 2-2.5 feet, and that was mostly because it was detecting tags halfway across the field, where the error spikes significantly. It also appeared to hold up similarly when the simulated robot was rotating at 5 radians per second instead of the 2 radians per second used in the first two tests.

I intend to upload the screen recordings I collected of the AdvantageScope visualization later this afternoon.

@InterstellarDrift (Author)

> The other thing to note is that this still only uses a single tag. How does using the multitag pose but with the robot yaw perform?

I haven't tested what using the robot's yaw in a multitag solve looks like yet, but I did develop a solver algorithm that uses only two tags to calculate the robot's position without needing the robot's heading. It sidesteps the reflected-pose ambiguity inherent in using only two tags by taking the leftmost (in the camera's perspective) of the best two tags as the anchor tag. This forces a single perspective out of the system: it calculates the angle between the anchor tag and the secondary tag, rotates the trigonometry-derived offset translation from the anchor tag by that angle to get the field-relative offset, and then applies that offset to the anchor tag's field-relative position from the tag layout, yielding the robot's position on the field.

Both the one-tag and two-tag solvers achieve greater stability by calculating the offset from the tag purely from its center point and the camera's height and angular offsets, rather than from the SolvePNP transform derived from the tag's corners.
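
For illustration, a minimal sketch of the single-tag, heading-assisted idea described above, built on PhotonUtils.calculateDistanceToTargetMeters and WPILib geometry; the parameters, camera constants, and sign conventions here are assumptions, not the PR's actual implementation:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Pose3d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.math.util.Units;
import org.photonvision.PhotonUtils;
import org.photonvision.targeting.PhotonTrackedTarget;

final class OneTagSolveSketch {
    /** Robot field pose from one tag's center, camera geometry, and a known heading. */
    static Pose2d solve(
            PhotonTrackedTarget target,
            Pose3d tagPose,             // from the field layout
            Rotation2d robotHeading,    // field-relative gyro yaw at the frame's capture time
            double cameraHeightMeters,
            double cameraPitchRadians,
            Rotation2d cameraYaw,       // camera mount yaw on the robot
            Translation2d cameraOffset  // camera position relative to robot center
    ) {
        // Ground distance from the tag-center elevation angle; no SolvePNP corners involved.
        double distance = PhotonUtils.calculateDistanceToTargetMeters(
                cameraHeightMeters,
                tagPose.getZ(),
                cameraPitchRadians,
                Units.degreesToRadians(target.getPitch()));

        // Field-relative bearing to the tag: robot heading + camera mount yaw + target yaw,
        // negated because PhotonVision reports target yaw positive to the right.
        Rotation2d bearing =
                robotHeading.plus(cameraYaw).plus(Rotation2d.fromDegrees(-target.getYaw()));

        // Walk back from the tag's known field position along that bearing,
        // then remove the camera's planar offset on the robot.
        Translation2d cameraPos =
                tagPose.toPose2d().getTranslation().minus(new Translation2d(distance, bearing));
        Translation2d robotPos = cameraPos.minus(cameraOffset.rotateBy(robotHeading));
        return new Pose2d(robotPos, robotHeading);
    }
}
```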

@InterstellarDrift (Author)

No latency compensation, 2 radians per second
Screencast from 05-13-2024 11:27:45 PM.webm

@InterstellarDrift (Author)

This is what it looks like when it's stationary:
[screenshots]

@levyishai left a comment

Could this logic fail in the edge case where cameraHeight == tagHeight, causing hypot to be 0 because one side of the triangle is 0? If so, maybe consider adding a check that handles this by finding the first tag with a different height or falling back to an alternative strategy? Maybe just emit a warning?
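
For illustration, one hypothetical shape for such a guard (the tolerance, the fallback behavior, and the surrounding variables are assumptions, not code from this PR):

```java
// Hypothetical guard: if the camera and tag are at (nearly) the same height,
// the elevation-angle distance math degenerates, so warn and skip this tag.
double heightDelta = Math.abs(robotToCamera.getZ() - tagPose.get().getZ());
if (heightDelta < 1e-6) {
    DriverStation.reportWarning(
            "IMU tag solve: camera and tag at equal height, skipping this tag", false);
    return Optional.empty(); // or try the next tag / fall back to another strategy
}
```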

Comment on lines +480 to +482
double hypot =
-PhotonUtils.calculateDistanceToTargetMeters(
robotToCamera.getZ(), tagPose.get().getZ(), robotToCamera.getY(), tagAngle.getY());


Shouldn't robotToCamera.getY() be robotToCamera.getRotation().getY()? Or is this intentional?

@InterstellarDrift (Author)

I think you're right; that looks wrong. I think I may have fixed it locally during testing a while back, but I'll make sure it gets fixed when I rewrite this to use the PhotonUtils function.
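
For reference, the fix as the reviewer reads it would presumably look something like this (a sketch of the suggested change, not a commit from this PR):

```java
// Pass the camera's pitch from the rotation, not its lateral (Y) offset.
double hypot =
        -PhotonUtils.calculateDistanceToTargetMeters(
                robotToCamera.getZ(),               // camera height
                tagPose.get().getZ(),               // tag height
                robotToCamera.getRotation().getY(), // camera pitch (was robotToCamera.getY())
                tagAngle.getY());                   // observed pitch to the tag
```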

Comment on lines +454 to +455
private Optional<EstimatedRobotPose> imuTagOnRioStrategy(
PhotonPipelineResult result, Optional<Matrix<N3, N3>> cameraMatrixOpt) {


Is there any difference between all the logic here and this PhotonUtils function? If not, why not use it?

@InterstellarDrift (Author)

I don't think there are any major differences, so I'm not entirely sure why I decided to write it on my own. The PhotonUtils function seems to do what I want, so I'll rewrite imuTagOnRioStrategy to use it. Thank you for pointing that out; I don't think I realized it was there.

@levyishai commented Aug 21, 2024

One thing that's weird to me about that utils function is how it works when the camera yaw isn't the gyro yaw. The only place the robotToCamera yaw is used is at the end, but all that does is change our angle; the field position would still be wrong (from the transform here).
Maybe that transform needs to take the robotToCamera yaw into account as well? I may be missing something.
Looking more, a lot of the logic seems very weird to me. Perhaps your way is better.

* @param camIntrinsics the distortion matrix of the camera
* @return the angles
*/
public static Rotation3d correctPixelRot(Point pixel, Matrix<N3, N3> camIntrinsics) {
Contributor

Maybe just a merge-commit meme, but getCorrectedPixelRot already exists? Let's try to implement this algorithm in only one place.

Comment on lines +243 to +245
private static final TimeInterpolatableBuffer<Rotation2d> headingBuffer = TimeInterpolatableBuffer.createBuffer(
1.0
);
Contributor

PhotonUtils should be stateless.
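
For illustration, one hypothetical way to honor that: let the caller (or the estimator) own the heading history and keep the helper pure. The class name, method name, and signature here are assumptions, not the PR's API:

```java
import edu.wpi.first.apriltag.AprilTagFieldLayout;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Transform3d;
import java.util.Optional;
import org.photonvision.EstimatedRobotPose;
import org.photonvision.targeting.PhotonPipelineResult;

final class StatelessSolveSketch {
    // Hypothetical stateless shape: the helper receives the already-sampled heading,
    // so the heading history lives with the caller instead of in a static buffer.
    static Optional<EstimatedRobotPose> estimateWithHeading(
            PhotonPipelineResult result,
            Rotation2d headingAtCapture, // sampled at result.getTimestampSeconds()
            Transform3d robotToCamera,
            AprilTagFieldLayout fieldLayout) {
        // ... same solve as in the PR, minus the static headingBuffer ...
        return Optional.empty(); // placeholder; this sketch only shows the shape
    }
}
```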
