Allowing External Angle Sources in Pose Estimation #1309
Conversation
…oTargetMeters method in PhotonUtils
How does this perform when the robot is rotating? A major problem with MegaTag2 is that you have no idea when the gyro reading is from without the timestamp, leading to inaccuracies. This is fine when you're sitting in place, but once you're moving, things start getting dicey.
Unfortunately I don't have much information on that at the moment; so far all of my testing has been done in simulation, and something is currently broken there that prevents me from rotating the robot in teleop. I'll look into compensating for latency, though.
The other thing to note is that this still only uses a single tag. How does the multitag pose perform when combined with the robot yaw?
Suggest a refactor here to support a TimeInterpolatableBuffer of rotation angles, which would solve the timestamp issues and also let this work better with multitag via timestamp synchronization across measurements. I'd prefer to internalize this, though, as the PhotonPoseEstimator has grown a lot with different strategies, which can make it confusing.
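A minimal sketch of that idea, using WPILib's TimeInterpolatableBuffer; the wrapper class and method names here are mine, not part of the PR:

```java
import java.util.Optional;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.interpolation.TimeInterpolatableBuffer;
import edu.wpi.first.wpilibj.Timer;

public class HeadingHistory {
  // Keep one second of gyro history; queries interpolate between samples.
  private final TimeInterpolatableBuffer<Rotation2d> buffer =
      TimeInterpolatableBuffer.createBuffer(1.0);

  /** Call periodically (e.g. every robot loop) with the current gyro reading. */
  public void addSample(Rotation2d heading) {
    buffer.addSample(Timer.getFPGATimestamp(), heading);
  }

  /** Sample the heading at a vision frame's capture timestamp. */
  public Optional<Rotation2d> sampleAt(double captureTimestampSeconds) {
    return buffer.getSample(captureTimestampSeconds);
  }
}
```

Sampling at the frame's `result.getTimestampSeconds()` instead of reading the gyro directly is what addresses the "when is this gyro reading from" problem raised above.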
I definitely agree that moving the register function to a different class would be more viable; otherwise it would become tedious to work with if multiple pose estimators were used, in addition to adding unnecessary options. Would PhotonUtils be an acceptable place to store the heading register? I plan on pushing the commits I've made over the past month this afternoon or evening.
Last night I finally did a side-by-side comparison of the estimated pose's position before and after latency compensation was applied, and the difference is substantial. At the test location I was using between the speaker and the amp in simulation, without latency compensation the estimated pose would get flung out as far as the opposite side of the stage. With compensation applied at the same speeds, the farthest the estimate was thrown was no more than 2-2.5 feet, and even that was mostly because it was detecting tags halfway across the field, where the error spikes significantly. It held up similarly when the simulated robot was rotating at 5 radians per second instead of the 2 radians per second used in the first two tests. I intend to upload the screen recordings I collected of the AdvantageScope visualization later this afternoon.
I haven't tested what using the robot's yaw in a multitag solve looks like yet, but I did develop a solver algorithm that uses only two tags to calculate the robot's position without needing the robot's heading. It sidesteps the reflection ambiguity inherent in using only two tags by treating the leftmost tag (in the camera's perspective) of the best two as the anchor tag. This forces a single consistent perspective: the solver calculates the angle between the anchor tag and the secondary tag, rotates the trigonometry-derived offset translation from the anchor tag by that angle to obtain the field-relative offset, and applies that offset to the anchor tag's field-relative position from the tag layout, yielding the robot's position on the field. Both the one-tag and two-tag solvers achieve greater stability by calculating the offset from the tag purely from its center and the camera's height and angular offsets, rather than from the SolvePNP transform of the tag's corners.
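A minimal geometric sketch of that two-tag solve as described above. The class and parameter names are mine, and the rotation sign assumes the anchor-is-leftmost convention fixes which side of the tag-to-tag baseline the camera is on:

```java
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;

public final class TwoTagResection {
  /**
   * Resects the camera's field position from two tags at known field positions.
   *
   * @param anchor field position of the leftmost-in-frame (anchor) tag
   * @param secondary field position of the other tag
   * @param dAnchor ground-plane distance to the anchor tag (derived from the
   *     tag center, camera height, and camera pitch, per the PR description)
   * @param dSecondary ground-plane distance to the secondary tag
   */
  public static Translation2d solve(
      Translation2d anchor, Translation2d secondary, double dAnchor, double dSecondary) {
    double baseline = anchor.getDistance(secondary);
    // Interior angle at the anchor tag, from the law of cosines.
    double cosA =
        (dAnchor * dAnchor + baseline * baseline - dSecondary * dSecondary)
            / (2.0 * dAnchor * baseline);
    double angleAtAnchor = Math.acos(Math.max(-1.0, Math.min(1.0, cosA)));
    // Field-relative direction from the anchor tag to the secondary tag,
    // taken from the tag layout.
    Rotation2d anchorToSecondary = secondary.minus(anchor).getAngle();
    // Because the anchor is always the leftmost tag in the image, the camera
    // lies on a known side of the baseline, which fixes the rotation sign and
    // removes the reflection ambiguity.
    Rotation2d towardCamera =
        anchorToSecondary.rotateBy(Rotation2d.fromRadians(-angleAtAnchor));
    return anchor.plus(new Translation2d(dAnchor, towardCamera));
  }
}
```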
[Screen recording: no latency compensation, 2 radians per second]
Could this logic fail in the edge case where `cameraHeight == tagHeight`, causing `hypot` to be 0 because one side of the triangle is 0? If so, maybe consider adding a check to handle this by finding the first tag with a different height, or fall back to an alternative strategy? Maybe just emit a warning?
```java
double hypot =
    -PhotonUtils.calculateDistanceToTargetMeters(
        robotToCamera.getZ(), tagPose.get().getZ(), robotToCamera.getY(), tagAngle.getY());
```
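A minimal sketch of the guard the reviewer suggests; the epsilon, warning text, and empty-return fallback are assumptions, not part of the PR:

```java
// Hypothetical guard: if the camera and the tag sit at (nearly) the same
// height, the height-over-tangent distance estimate degenerates toward zero.
double heightDelta = tagPose.get().getZ() - robotToCamera.getZ();
if (Math.abs(heightDelta) < 1e-6) {
  DriverStation.reportWarning(
      "Camera and tag are at the same height; skipping IMU tag solve", false);
  return Optional.empty(); // or fall back to an alternative strategy
}
```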
Shouldn't `robotToCamera.getY()` be `robotToCamera.getRotation().getY()`? Or is this intentional?
I think you're right; that looks wrong. I believe I fixed that locally during testing a while back, but I'll make sure it gets fixed when I rewrite this to use the PhotonUtils function.
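For reference, a sketch of the corrected call, assuming the reviewer's reading is right, which passes the camera's pitch rather than its Y translation:

```java
// robotToCamera.getRotation().getY() is the camera's pitch in radians,
// which is what calculateDistanceToTargetMeters expects for this argument.
double hypot =
    -PhotonUtils.calculateDistanceToTargetMeters(
        robotToCamera.getZ(),
        tagPose.get().getZ(),
        robotToCamera.getRotation().getY(),
        tagAngle.getY());
```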
```java
private Optional<EstimatedRobotPose> imuTagOnRioStrategy(
    PhotonPipelineResult result, Optional<Matrix<N3, N3>> cameraMatrixOpt) {
```
Is there any difference between the logic here and this PhotonUtils function? If not, why not use it?
I don't think there are any major differences, so I'm not entirely sure why I decided to write it on my own. The PhotonUtils function seems to do what I want, so I'll rewrite imuTagOnRioStrategy to use it. Thank you for pointing that out; I'm not sure I realized it was there.
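Assuming the function in question is PhotonUtils.estimateFieldToRobot, the rewrite could look roughly like this; the 2D camera transform and the gyroAngle/target/tagPose variables are assumed to come from the surrounding strategy code:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.util.Units;
import org.photonvision.PhotonUtils;

// Sketch only, not the final wiring.
Pose2d fieldToRobot =
    PhotonUtils.estimateFieldToRobot(
        robotToCamera.getZ(),                     // camera height off the floor
        tagPose.get().getZ(),                     // tag height off the floor
        robotToCamera.getRotation().getY(),       // camera pitch
        Units.degreesToRadians(target.getPitch()),
        Rotation2d.fromDegrees(-target.getYaw()), // PhotonVision yaw is CW-positive
        gyroAngle,                                // heading sampled at the frame timestamp
        tagPose.get().toPose2d(),
        cameraToRobot2d);                         // 2D projection of robotToCamera.inverse()
```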
One thing that's weird to me about that utils function is how it behaves when the camera yaw isn't the gyro yaw. The only place the robotToCamera yaw is used is at the end, but all that does is change our angle; the field position would still be wrong (from the transform here). Maybe that transform needs to also take the robotToCamera yaw into account? I may be missing something.
Looking more, a lot of the logic seems very weird to me. Perhaps your way is better.
```java
 * @param camIntrinsics the camera intrinsics matrix
 * @return the angles
 */
public static Rotation3d correctPixelRot(Point pixel, Matrix<N3, N3> camIntrinsics) {
```
Maybe just a merge-commit meme, but getCorrectedPixelRot already exists? Let's try to implement this algorithm in only one place.
```java
private static final TimeInterpolatableBuffer<Rotation2d> headingBuffer =
    TimeInterpolatableBuffer.createBuffer(1.0);
```
PhotonUtils should be stateless.
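One way to honor that, sketched with hypothetical method names: keep the buffer as an instance field on PhotonPoseEstimator instead of static state in PhotonUtils.

```java
import java.util.Optional;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.interpolation.TimeInterpolatableBuffer;

public class PhotonPoseEstimator {
  // Instance state, so multiple estimators don't share one heading history.
  private final TimeInterpolatableBuffer<Rotation2d> headingBuffer =
      TimeInterpolatableBuffer.createBuffer(1.0);

  /** Record a timestamped gyro heading sample (hypothetical API). */
  public void addHeadingData(double timestampSeconds, Rotation2d heading) {
    headingBuffer.addSample(timestampSeconds, heading);
  }

  /** Sample the heading at a vision frame's capture time. */
  private Optional<Rotation2d> headingAt(double timestampSeconds) {
    return headingBuffer.getSample(timestampSeconds);
  }
}
```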
This pull request adds the IMU_SOLVE_BEST_TAG PoseStrategy. It takes the target returned by PhotonPipelineResult's getBestTarget method and solves solely from that tag. When tested in simulation, the regular multitag solve could be off by as much as half a meter or more, with about ±0.1 m of jitter, while IMU_SOLVE_BEST_TAG maintained both high accuracy and low noise: roughly ±0.01-0.02 m. (These figures are approximate; I pulled them from memory.)
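A hypothetical usage sketch of the new strategy; the heading-registration call is still under discussion in the review above, so its name and location are assumptions, and `robotToCamera` and `gyro` are assumed to exist in the robot code:

```java
import java.util.Optional;
import edu.wpi.first.apriltag.AprilTagFields;
import edu.wpi.first.wpilibj.Timer;
import org.photonvision.EstimatedRobotPose;
import org.photonvision.PhotonCamera;
import org.photonvision.PhotonPoseEstimator;
import org.photonvision.PhotonPoseEstimator.PoseStrategy;

PhotonCamera camera = new PhotonCamera("front");
PhotonPoseEstimator estimator =
    new PhotonPoseEstimator(
        AprilTagFields.k2024Crescendo.loadAprilTagLayoutField(),
        PoseStrategy.IMU_SOLVE_BEST_TAG, // the strategy this PR adds
        camera,
        robotToCamera);

// Each loop: feed a timestamped heading, then update the estimator.
estimator.addHeadingData(Timer.getFPGATimestamp(), gyro.getRotation2d()); // name assumed
Optional<EstimatedRobotPose> pose = estimator.update(camera.getLatestResult());
```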
#746