Code Block cleanup/commonization
gerth2 committed Sep 6, 2024
1 parent 731d8b5 commit 96d34e7
Showing 9 changed files with 100 additions and 63 deletions.
7 changes: 6 additions & 1 deletion docs/source/docs/apriltag-pipelines/multitag.md
@@ -24,7 +24,7 @@ This multi-target pose estimate can be accessed using PhotonLib. We suggest usin
```{eval-rst}
.. tab-set-code::
.. code-block:: java
.. code-block:: Java
var result = camera.getLatestResult();
if (result.getMultiTagResult().estimatedPose.isPresent) {
@@ -38,6 +38,11 @@ This multi-target pose estimate can be accessed using PhotonLib. We suggest usin
if (result.MultiTagResult().result.isPresent) {
frc::Transform3d fieldToCamera = result.MultiTagResult().result.best;
}
.. code-block:: Python
# TODO
```
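The new Python tab is left as a TODO in this commit. A minimal sketch of what it might look like, assuming photonlibpy mirrors the Java result API (the `multiTagResult`, `estimatedPose.isPresent`, and `estimatedPose.best` names are assumptions, not confirmed by this diff):

```python
from photonlibpy.photonCamera import PhotonCamera

camera = PhotonCamera("your_camera_name_here")
result = camera.getLatestResult()

# Assumption: photonlibpy exposes the multi-tag result like the Java API does.
if result.multiTagResult.estimatedPose.isPresent:
    fieldToCamera = result.multiTagResult.estimatedPose.best  # Transform3d
```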

:::{note}
6 changes: 5 additions & 1 deletion docs/source/docs/installation/networking.md
@@ -44,13 +44,17 @@ If you would like to access your Ethernet-connected vision device from a compute
```{eval-rst}
.. tab-set-code::
.. code-block:: java
.. code-block:: Java
PortForwarder.add(5800, "photonvision.local", 5800);
.. code-block:: C++
wpi::PortForwarder::GetInstance().Add(5800, "photonvision.local", 5800);
.. code-block:: Python
# TODO
```
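The Python tab is also a TODO here. A hedged sketch using RobotPy's `wpinet.PortForwarder`, assumed to be the Python counterpart of the Java/C++ calls above:

```python
from wpinet import PortForwarder

# Forward port 5800 on photonvision.local to local port 5800.
PortForwarder.getInstance().add(5800, "photonvision.local", 5800)
```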

:::{note}
2 changes: 1 addition & 1 deletion docs/source/docs/integration/advancedStrategies.md
@@ -39,7 +39,7 @@ A simple way to use a pose estimate is to activate robot functions automatically
```{eval-rst}
.. tab-set-code::
.. code-block:: java
.. code-block:: Java
Pose3d robotPose;
boolean launcherSpinCmd;
8 changes: 6 additions & 2 deletions docs/source/docs/programming/photonlib/controlling-led.md
@@ -4,13 +4,17 @@ You can control the vision LEDs of supported hardware via PhotonLib using the `s

```{eval-rst}
.. tab-set-code::
.. code-block:: java
.. code-block:: Java
// Blink the LEDs.
camera.setLED(VisionLEDMode.kBlink);
.. code-block:: c++
.. code-block:: C++
// Blink the LEDs.
camera.SetLED(photonlib::VisionLEDMode::kBlink);
.. code-block:: Python
# TODO
```
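The Python tab is a TODO in this commit. A sketch of the likely equivalent, assuming photonlibpy exposes LED control analogous to the Java API (the method and enum names below are assumptions and may differ):

```python
from photonlibpy.photonCamera import PhotonCamera, VisionLEDMode

camera = PhotonCamera("your_camera_name_here")

# Blink the LEDs. (Assumed API: setLEDMode / VisionLEDMode.kBlink)
camera.setLEDMode(VisionLEDMode.kBlink)
```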
@@ -9,7 +9,7 @@ You can use the `setDriverMode()`/`SetDriverMode()` (Java and C++ respectively)
```{eval-rst}
.. tab-set-code::
.. code-block:: java
.. code-block:: Java
// Set driver mode to on.
camera.setDriverMode(true);
@@ -18,6 +18,10 @@ You can use the `setDriverMode()`/`SetDriverMode()` (Java and C++ respectively)
// Set driver mode to on.
camera.SetDriverMode(true);
.. code-block:: Python
# TODO
```
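The Python tab is a TODO. A minimal sketch, assuming photonlibpy's `PhotonCamera` offers a `setDriverMode()` mirroring the Java method:

```python
from photonlibpy.photonCamera import PhotonCamera

camera = PhotonCamera("your_camera_name_here")

# Set driver mode to on.
camera.setDriverMode(True)
```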

## Setting the Pipeline Index
@@ -27,7 +31,7 @@ You can use the `setPipelineIndex()`/`SetPipelineIndex()` (Java and C++ respecti
```{eval-rst}
.. tab-set-code::
.. code-block:: java
.. code-block:: Java
// Change pipeline to 2
camera.setPipelineIndex(2);
@@ -36,6 +40,10 @@ You can use the `setPipelineIndex()`/`SetPipelineIndex()` (Java and C++ respecti
// Change pipeline to 2
camera.SetPipelineIndex(2);
.. code-block:: Python
# TODO
```
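The Python tab is a TODO. A minimal sketch, assuming a `setPipelineIndex()` mirroring the Java method:

```python
from photonlibpy.photonCamera import PhotonCamera

camera = PhotonCamera("your_camera_name_here")

# Change pipeline to 2
camera.setPipelineIndex(2)
```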

## Getting the Pipeline Latency
@@ -44,15 +52,19 @@ You can also get the pipeline latency from a pipeline result using the `getLaten

```{eval-rst}
.. tab-set-code::
.. code-block:: java
.. code-block:: Java
// Get the pipeline latency.
double latencySeconds = result.getLatencyMillis() / 1000.0;
.. code-block:: c++
.. code-block:: C++
// Get the pipeline latency.
units::second_t latency = result.GetLatency();
.. code-block:: Python
# TODO
```
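The Python tab is a TODO. A sketch, assuming the pipeline result exposes `getLatencyMillis()` as in Java:

```python
result = camera.getLatestResult()

# Get the pipeline latency in seconds (getLatencyMillis() is assumed to match the Java API).
latencySeconds = result.getLatencyMillis() / 1000.0
```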

:::{note}
38 changes: 19 additions & 19 deletions docs/source/docs/programming/photonlib/getting-target-data.md
@@ -20,7 +20,7 @@ The `PhotonCamera` class has two constructors: one that takes a `NetworkTable` a
:language: c++
:lines: 42-43
.. code-block:: python
.. code-block:: Python
# Change this to match the name of your camera as shown in the web ui
self.camera = PhotonCamera("your_camera_name_here")
@@ -51,7 +51,7 @@ Use the `getLatestResult()`/`GetLatestResult()` (Java and C++ respectively) to o
:language: c++
:lines: 35-36
.. code-block:: python
.. code-block:: Python
# Query the latest result from PhotonVision
result = self.camera.getLatestResult()
@@ -69,17 +69,17 @@ Each pipeline result has a `hasTargets()`/`HasTargets()` (Java and C++ respectiv

```{eval-rst}
.. tab-set-code::
.. code-block:: java
.. code-block:: Java
// Check if the latest result has any targets.
boolean hasTargets = result.hasTargets();
.. code-block:: c++
.. code-block:: C++
// Check if the latest result has any targets.
bool hasTargets = result.HasTargets();
.. code-block:: python
.. code-block:: Python
# Check if the latest result has any targets.
hasTargets = result.hasTargets()
@@ -99,17 +99,17 @@ You can get a list of tracked targets using the `getTargets()`/`GetTargets()` (J

```{eval-rst}
.. tab-set-code::
.. code-block:: java
.. code-block:: Java
// Get a list of currently tracked targets.
List<PhotonTrackedTarget> targets = result.getTargets();
.. code-block:: c++
.. code-block:: C++
// Get a list of currently tracked targets.
wpi::ArrayRef<photonlib::PhotonTrackedTarget> targets = result.GetTargets();
.. code-block:: python
.. code-block:: Python
# Get a list of currently tracked targets.
targets = result.getTargets()
@@ -121,18 +121,18 @@ You can get the {ref}`best target <docs/reflectiveAndShape/contour-filtering:Con

```{eval-rst}
.. tab-set-code::
.. code-block:: java
.. code-block:: Java
// Get the current best target.
PhotonTrackedTarget target = result.getBestTarget();
.. code-block:: c++
.. code-block:: C++
// Get the current best target.
photonlib::PhotonTrackedTarget target = result.GetBestTarget();
.. code-block:: python
.. code-block:: Python
# TODO - Not currently supported
@@ -149,7 +149,7 @@ You can get the {ref}`best target <docs/reflectiveAndShape/contour-filtering:Con

```{eval-rst}
.. tab-set-code::
.. code-block:: java
.. code-block:: Java
// Get information from target.
double yaw = target.getYaw();
Expand All @@ -159,7 +159,7 @@ You can get the {ref}`best target <docs/reflectiveAndShape/contour-filtering:Con
Transform2d pose = target.getCameraToTarget();
List<TargetCorner> corners = target.getCorners();
.. code-block:: c++
.. code-block:: C++
// Get information from target.
double yaw = target.GetYaw();
@@ -169,7 +169,7 @@ You can get the {ref}`best target <docs/reflectiveAndShape/contour-filtering:Con
frc::Transform2d pose = target.GetCameraToTarget();
wpi::SmallVector<std::pair<double, double>, 4> corners = target.GetCorners();
.. code-block:: python
.. code-block:: Python
# Get information from target.
yaw = target.getYaw()
@@ -193,23 +193,23 @@ All of the data above (**except skew**) is available when using AprilTags.

```{eval-rst}
.. tab-set-code::
.. code-block:: java
.. code-block:: Java
// Get information from target.
int targetID = target.getFiducialId();
double poseAmbiguity = target.getPoseAmbiguity();
Transform3d bestCameraToTarget = target.getBestCameraToTarget();
Transform3d alternateCameraToTarget = target.getAlternateCameraToTarget();
.. code-block:: c++
.. code-block:: C++
// Get information from target.
int targetID = target.GetFiducialId();
double poseAmbiguity = target.GetPoseAmbiguity();
frc::Transform3d bestCameraToTarget = target.getBestCameraToTarget();
frc::Transform3d alternateCameraToTarget = target.getAlternateCameraToTarget();
.. code-block:: python
.. code-block:: Python
# Get information from target.
targetID = target.getFiducialId()
@@ -227,7 +227,7 @@ Images are stored within the PhotonVision configuration directory. Running the "
```{eval-rst}
.. tab-set-code::
.. code-block:: java
.. code-block:: Java
// Capture pre-process camera stream image
camera.takeInputSnapshot();
Expand All @@ -243,7 +243,7 @@ Images are stored within the PhotonVision configuration directory. Running the "
// Capture post-process camera stream image
camera.TakeOutputSnapshot();
.. code-block:: python
.. code-block:: Python
# Capture pre-process camera stream image
camera.takeInputSnapshot()
22 changes: 17 additions & 5 deletions docs/source/docs/programming/photonlib/robot-pose-estimator.md
@@ -14,16 +14,20 @@ The API documentation can be found in here: [Java](https://github.wpilib.org/all

```{eval-rst}
.. tab-set-code::
.. code-block:: java
.. code-block:: Java
// The field from AprilTagFields will be different depending on the game.
AprilTagFieldLayout aprilTagFieldLayout = AprilTagFields.k2024Crescendo.loadAprilTagLayoutField();
.. code-block:: c++
.. code-block:: C++
// The parameter for LoadAPrilTagLayoutField will be different depending on the game.
frc::AprilTagFieldLayout aprilTagFieldLayout = frc::LoadAprilTagLayoutField(frc::AprilTagField::k2024Crescendo);
.. code-block:: Python
# TODO
```
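The Python tab is a TODO. A sketch using RobotPy's `robotpy_apriltag` package, assumed to be the Python equivalent of the Java/C++ loaders above:

```python
from robotpy_apriltag import AprilTagField, loadAprilTagLayoutField

# The field argument will be different depending on the game.
aprilTagFieldLayout = loadAprilTagLayoutField(AprilTagField.k2024Crescendo)
```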

## Creating a `PhotonPoseEstimator`
@@ -46,7 +50,7 @@ The PhotonPoseEstimator has a constructor that takes an `AprilTagFieldLayout` (s

```{eval-rst}
.. tab-set-code::
.. code-block:: java
.. code-block:: Java
//Forward Camera
cam = new PhotonCamera("testCamera");
Expand All @@ -55,7 +59,7 @@ The PhotonPoseEstimator has a constructor that takes an `AprilTagFieldLayout` (s
// Construct PhotonPoseEstimator
PhotonPoseEstimator photonPoseEstimator = new PhotonPoseEstimator(aprilTagFieldLayout, PoseStrategy.CLOSEST_TO_REFERENCE_POSE, cam, robotToCam);
.. code-block:: c++
.. code-block:: C++
// Forward Camera
std::shared_ptr<photonlib::PhotonCamera> cameraOne =
@@ -76,6 +80,10 @@ The PhotonPoseEstimator has a constructor that takes an `AprilTagFieldLayout` (s
photonlib::RobotPoseEstimator estimator(
aprilTags, photonlib::CLOSEST_TO_REFERENCE_POSE, cameras);
.. code-block:: Python
# TODO
```
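The Python tab is a TODO. A sketch, assuming photonlibpy's `PhotonPoseEstimator` takes the same arguments as the Java constructor (module and enum names are assumptions):

```python
from photonlibpy.photonCamera import PhotonCamera
from photonlibpy.photonPoseEstimator import PhotonPoseEstimator, PoseStrategy
from robotpy_apriltag import AprilTagField, loadAprilTagLayoutField
from wpimath.geometry import Rotation3d, Transform3d, Translation3d

aprilTagFieldLayout = loadAprilTagLayoutField(AprilTagField.k2024Crescendo)

# Forward camera mounted 0.5 m forward of robot center, 0.5 m up, facing forward.
cam = PhotonCamera("testCamera")
robotToCam = Transform3d(Translation3d(0.5, 0.0, 0.5), Rotation3d(0.0, 0.0, 0.0))

# Construct the PhotonPoseEstimator (constructor shape assumed to mirror the Java API).
photonPoseEstimator = PhotonPoseEstimator(
    aprilTagFieldLayout, PoseStrategy.CLOSEST_TO_REFERENCE_POSE, cam, robotToCam
)
```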

## Using a `PhotonPoseEstimator`
@@ -88,7 +96,7 @@ Calling `update()` on your `PhotonPoseEstimator` will return an `EstimatedRobotP
:language: java
:lines: 85-88
.. code-block:: c++
.. code-block:: C++
std::pair<frc::Pose2d, units::millisecond_t> getEstimatedGlobalPose(
frc::Pose3d prevEstimatedRobotPose) {
Expand All @@ -102,6 +110,10 @@ Calling `update()` on your `PhotonPoseEstimator` will return an `EstimatedRobotP
return std::make_pair(frc::Pose2d(), 0_ms);
}
}
.. code-block:: Python
# TODO
```
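The Python tab is a TODO. A sketch of consuming the estimate, assuming `update()` returns an `EstimatedRobotPose` or `None`:

```python
# Assumption: update() returns None when no new estimate is available.
estimate = photonPoseEstimator.update()
if estimate is not None:
    visionPose2d = estimate.estimatedPose.toPose2d()
    timestampSeconds = estimate.timestampSeconds
    # Feed these into the drivetrain pose estimator via addVisionMeasurement().
```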

You should be updating your [drivetrain pose estimator](https://docs.wpilib.org/en/latest/docs/software/advanced-controls/state-space/state-space-pose-estimators.html) with the result from the `RobotPoseEstimator` every loop using `addVisionMeasurement()`. TODO: add example note