
Commit

fix how-to links
SkalskiP committed Mar 26, 2024
1 parent 97b8312 commit 5a04096
Showing 2 changed files with 11 additions and 11 deletions.
18 changes: 9 additions & 9 deletions docs/how_to/detect_and_annotate.md
@@ -126,15 +126,15 @@ Now that we have predictions from a model, we can load them into Supervision.

You can load predictions from other computer vision frameworks and libraries using:

-- [`from_deepsparse`](detection/core/#supervision.detection.core.Detections.from_deepsparse) ([Deepsparse](https://github.com/neuralmagic/deepsparse))
-- [`from_detectron2`](detection/core/#supervision.detection.core.Detections.from_detectron2) ([Detectron2](https://github.com/facebookresearch/detectron2))
-- [`from_mmdetection`](detection/core/#supervision.detection.core.Detections.from_mmdetection) ([MMDetection](https://github.com/open-mmlab/mmdetection))
-- [`from_sam`](detection/core/#supervision.detection.core.Detections.from_sam) ([Segment Anything Model](https://github.com/facebookresearch/segment-anything))
-- [`from_yolo_nas`](detection/core/#supervision.detection.core.Detections.from_yolo_nas) ([YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md))
+- [`from_deepsparse`](/latest/detection/core/#supervision.detection.core.Detections.from_deepsparse) ([Deepsparse](https://github.com/neuralmagic/deepsparse))
+- [`from_detectron2`](/latest/detection/core/#supervision.detection.core.Detections.from_detectron2) ([Detectron2](https://github.com/facebookresearch/detectron2))
+- [`from_mmdetection`](/latest/detection/core/#supervision.detection.core.Detections.from_mmdetection) ([MMDetection](https://github.com/open-mmlab/mmdetection))
+- [`from_sam`](/latest/detection/core/#supervision.detection.core.Detections.from_sam) ([Segment Anything Model](https://github.com/facebookresearch/segment-anything))
+- [`from_yolo_nas`](/latest/detection/core/#supervision.detection.core.Detections.from_yolo_nas) ([YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md))
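
Each of these loaders behaves like the framework-specific ones covered earlier in the guide. A minimal sketch using `from_ultralytics` (assumes the `ultralytics` package, `supervision`, and a local `image.jpg`):

```python
import cv2
import supervision as sv
from ultralytics import YOLO

# Assumed inputs: YOLOv8 nano weights and a local test image.
image = cv2.imread("image.jpg")
model = YOLO("yolov8n.pt")

# Run inference and convert the framework-specific result
# into a common sv.Detections object.
result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)

print(f"{len(detections)} detections loaded")
```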

## Annotate Image with Detections

-Finally, we can annotate the image with the predictions. Since we are working with an object detection model, we will use the [`sv.BoundingBoxAnnotator`](annotators/#supervision.annotators.core.BoundingBoxAnnotator) and [`sv.LabelAnnotator`](annotators/#supervision.annotators.core.LabelAnnotator) classes.
+Finally, we can annotate the image with the predictions. Since we are working with an object detection model, we will use the [`sv.BoundingBoxAnnotator`](/latest/annotators/#supervision.annotators.core.BoundingBoxAnnotator) and [`sv.LabelAnnotator`](/latest/annotators/#supervision.annotators.core.LabelAnnotator) classes.
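
A minimal sketch of this step, reusing the `image` and `detections` objects produced above:

```python
import supervision as sv

# Draw boxes first, then labels, each pass working on a copy of the source image.
bounding_box_annotator = sv.BoundingBoxAnnotator()
label_annotator = sv.LabelAnnotator()

annotated_image = bounding_box_annotator.annotate(
    scene=image.copy(), detections=detections
)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections
)
```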

=== "Inference"

@@ -214,7 +214,7 @@ Finally, we can annotate the image with the predictions. Since we are working wi

## Display Custom Labels

-By default, [`sv.LabelAnnotator`](annotators/#supervision.annotators.core.LabelAnnotator)
+By default, [`sv.LabelAnnotator`](/latest/annotators/#supervision.annotators.core.LabelAnnotator)
will label each detection with its `class_name` (if possible) or `class_id`. You can
override this behavior by passing a list of custom `labels` to the `annotate` method.
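
A sketch of the idea, assuming `detections.data["class_name"]` is populated (as it is when loading Ultralytics or Inference results):

```python
import supervision as sv

# Build one label string per detection, e.g. "person 0.94".
labels = [
    f"{class_name} {confidence:.2f}"
    for class_name, confidence
    in zip(detections.data["class_name"], detections.confidence)
]

label_annotator = sv.LabelAnnotator()
annotated_image = label_annotator.annotate(
    scene=image.copy(), detections=detections, labels=labels
)
```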

@@ -315,9 +315,9 @@ override this behavior by passing a list of custom `labels` to the `annotate` me
## Annotate Image with Segmentations

If you are running the segmentation model,
-[`sv.MaskAnnotator`](annotators/#supervision.annotators.core.MaskAnnotator)
+[`sv.MaskAnnotator`](/latest/annotators/#supervision.annotators.core.MaskAnnotator)
is a drop-in replacement for
-[`sv.BoundingBoxAnnotator`](annotators/#supervision.annotators.core.BoundingBoxAnnotator)
+[`sv.BoundingBoxAnnotator`](/latest/annotators/#supervision.annotators.core.BoundingBoxAnnotator)
that will allow you to draw masks instead of boxes.
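
A sketch of the swap, assuming `detections` came from a segmentation model so that `detections.mask` is populated:

```python
import supervision as sv

# MaskAnnotator draws detections.mask; LabelAnnotator works unchanged.
mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator()

annotated_image = mask_annotator.annotate(scene=image.copy(), detections=detections)
annotated_image = label_annotator.annotate(scene=annotated_image, detections=detections)
```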

=== "Inference"
4 changes: 2 additions & 2 deletions docs/how_to/detect_small_objects.md
@@ -9,7 +9,7 @@ This guide shows how to detect small objects
with the [Inference](https://github.com/roboflow/inference),
[Ultralytics](https://github.com/ultralytics/ultralytics) or
[Transformers](https://github.com/huggingface/transformers) packages using
-[`InferenceSlicer`](detection/tools/inference_slicer/#supervision.detection.tools.inference_slicer.InferenceSlicer).
+[`InferenceSlicer`](/latest/detection/tools/inference_slicer/#supervision.detection.tools.inference_slicer.InferenceSlicer).

<video controls>
<source src="https://media.roboflow.com/supervision_detect_small_objects_example.mp4" type="video/mp4">
@@ -154,7 +154,7 @@ is less effective for ultra-high-resolution images (4K and above).

## Inference Slicer

-[`InferenceSlicer`](detection/tools/inference_slicer/#supervision.detection.tools.inference_slicer.InferenceSlicer)
+[`InferenceSlicer`](/latest/detection/tools/inference_slicer/#supervision.detection.tools.inference_slicer.InferenceSlicer)
processes high-resolution images by dividing them into smaller segments, detecting
objects within each, and aggregating the results.
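
A minimal sketch of that flow, assuming an Ultralytics model and a local high-resolution image:

```python
import cv2
import numpy as np
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

def callback(image_slice: np.ndarray) -> sv.Detections:
    # Run the model on a single tile and return its detections.
    result = model(image_slice)[0]
    return sv.Detections.from_ultralytics(result)

# The slicer tiles the image, calls the callback per tile,
# and merges the per-tile detections back into full-image coordinates.
slicer = sv.InferenceSlicer(callback=callback)

image = cv2.imread("high_resolution_image.jpg")
detections = slicer(image)
```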

