The documentation on how to calibrate the accuracy of the pitch and yaw of objects in object detection mode is lacking. There's one "Hint" that essentially says "get your FOV correct if you want accurate results."
Experimenting with changing the camera FOV on the camera settings page does indeed change the scaling of Pitch and Yaw angles returned.
Calibrating the camera at the current resolution does not affect the pitch/yaw readings for Object Detection.
Seems like having separate horizontal and vertical FOVs could make this more accurate.
This applies to all non-AprilTag pipelines.
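To make the separate-FOV suggestion concrete, here is a minimal sketch of computing pitch/yaw from a pixel using independent horizontal and vertical FOVs under a pinhole model. The function name and sign conventions (positive yaw = right of center, positive pitch = above center) are my own assumptions, not PhotonVision's actual implementation:

```python
import math

def pixel_to_angles(px, py, width, height, hfov_deg, vfov_deg):
    # Derive per-axis focal lengths (in pixels) from each FOV independently
    fx = (width / 2) / math.tan(math.radians(hfov_deg) / 2)
    fy = (height / 2) / math.tan(math.radians(vfov_deg) / 2)
    cx, cy = width / 2, height / 2
    # Pinhole model: angle of the ray through the pixel
    yaw = math.degrees(math.atan2(px - cx, fx))    # positive = right of center
    pitch = math.degrees(math.atan2(cy - py, fy))  # positive = above center
    return yaw, pitch
```

With this formulation, a target at the right edge of the frame reads exactly half the horizontal FOV, which a single diagonal FOV cannot guarantee for both axes at once.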
Question: is the pitch/yaw computation done before or after the 640x640 scaling? I don't know how it works under the hood, but there would be a big difference in pitch/yaw if you apply the diagonal FOV to a 640x480 image vs a stretched 640x640 image.
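To illustrate why the resize convention matters, here is a small sketch (with assumed example numbers) showing that the same diagonal FOV implies different vertical FOVs depending on whether it is applied to the native 640x480 frame or a stretched 640x640 one:

```python
import math

def vfov_from_diag(diag_deg, w, h):
    # Split a diagonal FOV by pixel geometry under a pinhole model:
    # recover the focal length from the diagonal, then project to the y axis
    diag_px = math.hypot(w, h)
    f = (diag_px / 2) / math.tan(math.radians(diag_deg) / 2)
    return math.degrees(2 * math.atan((h / 2) / f))

print(vfov_from_diag(120, 640, 480))  # ~92.2 deg for the native frame
print(vfov_from_diag(120, 640, 640))  # ~101.5 deg if treated as square
```

A roughly nine-degree difference in implied vertical FOV would produce a correspondingly large pitch error for targets away from the image center.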
Seems like the most accurate way to do it is to scale the target pixel coordinates from the 640x640 image back to the original resolution, then use the camera calibration to correct for lens distortion before calculating the target pitch/yaw.
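The proposed pipeline might look like the following sketch. It assumes the common centered-letterbox preprocessing (uniform scale to fit, padding split evenly), which may not match the detector's actual resize; the function names are hypothetical:

```python
import math

def letterbox_to_source(px, py, src_w, src_h, net=640):
    # Invert a centered letterbox: uniform scale to fit net x net, then
    # map the padded coordinate back into the original image
    scale = net / max(src_w, src_h)
    pad_x = (net - src_w * scale) / 2
    pad_y = (net - src_h * scale) / 2
    return (px - pad_x) / scale, (py - pad_y) / scale

def angles_from_intrinsics(px, py, fx, fy, cx, cy):
    # Pitch/yaw from calibrated intrinsics (all in pixels). A full pipeline
    # would undistort the point first, e.g. with cv2.undistortPoints,
    # before applying this pinhole-model step
    yaw = math.degrees(math.atan2(px - cx, fx))
    pitch = math.degrees(math.atan2(cy - py, fy))
    return yaw, pitch
```

For a 640x480 source, the letterbox inverse maps the network-input center (320, 320) back to the frame center (320, 240), after which the calibrated intrinsics give the angles.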
Pitch/yaw is calculated in the original image space, not the letterboxed/resized space, yeah. We don't undistort, though #1250 does; I don't want to merge that this late in the season.
It looks like when importing calibrations, we need to make sure that calculateFrameStaticProps() gets propagated up the chain -- otherwise it takes changing the video mode for the new calibration to apply. But calibration absolutely changes the pitch/yaw reported in my testing. Can you double-check your testing?
We did not notice a change in results (it may have just been a small change) after switching pipelines with the camera calibration. The calibration measured a diagonal FOV of 107 deg and we were using a calculated diagonal FOV of 120 deg.