Merge branch 'main' into openvino_support_on_yolov8_object_detection
fcakyon authored Apr 22, 2024
2 parents 37ab5e3 + aaa6aae commit 9eba675
Showing 6 changed files with 242 additions and 65 deletions.
46 changes: 43 additions & 3 deletions README.md
@@ -44,18 +44,20 @@ Object detection and instance segmentation are by far the most important applica

## <div align="center">Quick Start Examples</div>

-[📜 List of publications that cite SAHI (currently 100+)](https://scholar.google.com/scholar?hl=en&as_sdt=2005&sciodt=0,5&cites=14065474760484865747&scipsc=&q=&scisbd=1)
+[📜 List of publications that cite SAHI (currently 150+)](https://scholar.google.com/scholar?hl=en&as_sdt=2005&sciodt=0,5&cites=14065474760484865747&scipsc=&q=&scisbd=1)

[🏆 List of competition winners that used SAHI](https://github.com/obss/sahi/discussions/688)

### Tutorials

- [Introduction to SAHI](https://medium.com/codable/sahi-a-vision-library-for-performing-sliced-inference-on-large-images-small-objects-c8b086af3b80)

-- [Official paper](https://ieeexplore.ieee.org/document/9897990) (ICIP 2022 oral) (NEW)
+- [Official paper](https://ieeexplore.ieee.org/document/9897990) (ICIP 2022 oral)

- [Pretrained weights and ICIP 2022 paper files](https://github.com/fcakyon/small-object-detection-benchmark)

+- [Visualizing and Evaluating SAHI predictions with FiftyOne](https://voxel51.com/blog/how-to-detect-small-objects/) (2024) (NEW)

- ['Exploring SAHI' Research Article from 'learnopencv.com'](https://learnopencv.com/slicing-aided-hyper-inference/) (2023) (NEW)

- ['VIDEO TUTORIAL: Slicing Aided Hyper Inference for Small Object Detection - SAHI'](https://www.youtube.com/watch?v=UuOjJKxn-M8&t=270s) (2023) (NEW)
@@ -82,9 +84,13 @@ Object detection and instance segmentation are by far the most important applica

- `Detectron2` + `SAHI` walkthrough: <a href="https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_detectron2.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="sahi-detectron2"></a>

+- `TorchVision` + `SAHI` walkthrough: <a href="https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_torchvision.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="sahi-torchvision"></a>

- `HuggingFace` + `SAHI` walkthrough: <a href="https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_huggingface.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="sahi-huggingface"></a> (NEW)

-- `TorchVision` + `SAHI` walkthrough: <a href="https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_torchvision.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="sahi-torchvision"></a> (NEW)
- `DeepSparse` + `SAHI` walkthrough: <a href="https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_sparse_yolov5.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="sahi-deepsparse"></a> (NEW)

+- `SuperGradients/YOLONAS` + `SAHI`: <a href="https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_yolonas.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="sahi-yolonas"></a> (NEW)

<a href="https://huggingface.co/spaces/fcakyon/sahi-yolox"><img width="600" src="https://user-images.githubusercontent.com/34196005/144092739-c1d9bade-a128-4346-947f-424ce00e5c4f.gif" alt="sahi-yolox"></a>

@@ -129,6 +135,12 @@ conda install pytorch=1.13.1 torchvision=0.14.1 pytorch-cuda=11.7 -c pytorch -c
pip install yolov5==7.0.13
```

+- Install your desired detection framework (ultralytics):
+
+```console
+pip install ultralytics==8.0.207
+```
+
- Install your desired detection framework (mmdet):

```console
@@ -148,6 +160,12 @@ pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu113
pip install transformers timm
```

+- Install your desired detection framework (super-gradients):
+
+```console
+pip install super-gradients==3.3.1
+```
+
</details>

### Framework Agnostic Sliced/Standard Prediction
@@ -261,6 +279,20 @@ python -m scripts.run_code_style format

<a align="left" href="https://github.com/tureckova" target="_blank">Alzbeta Tureckova</a>

+<a align="left" href="https://github.com/s-aiueo32" target="_blank">So Uchida</a>
+
+<a align="left" href="https://github.com/developer0hye" target="_blank">Yonghye Kwon</a>
+
+<a align="left" href="https://github.com/aphilas" target="_blank">Neville</a>
+
+<a align="left" href="https://github.com/mayrajeo" target="_blank">Janne Mäyrä</a>
+
+<a align="left" href="https://github.com/christofferedlund" target="_blank">Christoffer Edlund</a>
+
+<a align="left" href="https://github.com/ilkermanap" target="_blank">Ilker Manap</a>
+
+<a align="left" href="https://github.com/nguyenthean" target="_blank">Nguyễn Thế An</a>
+
<a align="left" href="https://github.com/weiji14" target="_blank">Wei Ji</a>

<a align="left" href="https://github.com/aynursusuz" target="_blank">Aynur Susuz</a>
@@ -269,5 +301,13 @@ python -m scripts.run_code_style format

<a align="left" href="https://github.com/lakshaymehra" target="_blank">Lakshay Mehra</a>

+<a align="left" href="https://github.com/karl-joan" target="_blank">Karl-Joan Alesma</a>
+
+<a align="left" href="https://github.com/jacobmarks" target="_blank">Jacob Marks</a>
+
+<a align="left" href="https://github.com/williamlung" target="_blank">William Lung</a>
+
+<a align="left" href="https://github.com/amoghdhaliwal" target="_blank">Amogh Dhaliwal</a>
+
</div>

7 changes: 0 additions & 7 deletions sahi/postprocess/combine.py
@@ -217,19 +217,12 @@ def greedy_nmm(
    # according to their confidence scores
    order = scores.argsort()

-    # initialise an empty list for
-    # filtered prediction boxes
-    keep = []
-
    while len(order) > 0:
        # extract the index of the
        # prediction with highest score
        # we call this prediction S
        idx = order[-1]

-        # push S in filtered predictions list
-        keep.append(idx.tolist())
-
        # remove S from P
        order = order[:-1]
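The deleted lines above remove dead `keep` bookkeeping from `greedy_nmm`, while the score-ordered greedy loop itself is unchanged. For readers unfamiliar with the pattern, here is a stand-alone plain-Python sketch of such a greedy, score-ordered selection loop (illustrative only: SAHI's actual `greedy_nmm` operates on torch tensors and merges overlapping predictions rather than discarding them):

```python
def box_iou(a, b):
    # intersection over union of two [x1, y1, x2, y2] boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0


def greedy_select(boxes, scores, iou_threshold=0.5):
    # sort indices by ascending score, mirroring `order = scores.argsort()`
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    keep = []
    while order:
        idx = order.pop()  # highest-scoring prediction S, i.e. order[-1]
        keep.append(idx)
        # drop remaining predictions that overlap S above the threshold
        order = [i for i in order if box_iou(boxes[idx], boxes[i]) <= iou_threshold]
    return keep
```

With three boxes where the first two overlap heavily, only the higher-scoring one of the overlapping pair survives alongside the disjoint third box.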
8 changes: 4 additions & 4 deletions sahi/predict.py
Expand Up @@ -164,7 +164,7 @@ def get_sliced_prediction(
detection accuracy. Default: True.
postprocess_type: str
Type of the postprocess to be used after sliced inference while merging/eliminating predictions.
-Options are 'NMM', 'GRREDYNMM' or 'NMS'. Default is 'GRREDYNMM'.
+Options are 'NMM', 'GREEDYNMM' or 'NMS'. Default is 'GREEDYNMM'.
postprocess_match_metric: str
Metric to be used during object prediction matching after sliced prediction.
'IOU' for intersection over union, 'IOS' for intersection over smaller area.
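The two match metrics named in the docstring differ only in their denominator: IOU divides the intersection by the union, IOS by the smaller of the two box areas. A minimal sketch of both (the helper name `box_iou_ios` is illustrative, not part of SAHI's API):

```python
def box_iou_ios(box1, box2, metric="IOU"):
    """Compute IOU or IOS for two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    if metric == "IOU":
        denom = area1 + area2 - inter  # intersection over union
    else:
        denom = min(area1, area2)      # "IOS": intersection over smaller area
    return inter / denom if denom > 0 else 0.0
```

IOS is the more permissive metric when a small box sits inside a large one, which is why it is useful for matching partial detections produced on different slices.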
@@ -231,7 +231,7 @@ def get_sliced_prediction(
# create prediction input
num_group = int(num_slices / num_batch)
if verbose == 1 or verbose == 2:
-tqdm.write(f"Performing prediction on {num_slices} number of slices.")
+tqdm.write(f"Performing prediction on {num_slices} slices.")
object_prediction_list = []
# perform sliced prediction
for group_ind in range(num_group):
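For context, the slices iterated over above come from tiling the input image into overlapping windows. A simplified sketch of how such slice boxes can be computed (illustrative only; parameter names echo SAHI's `slice_height`/`overlap_height_ratio` convention, but this is not the library's exact algorithm):

```python
def get_slice_boxes(image_h, image_w, slice_h=256, slice_w=256, overlap=0.2):
    """Tile an image into overlapping [x1, y1, x2, y2] slice boxes."""
    # stride between slice origins, shrunk by the overlap ratio
    step_h = int(slice_h * (1 - overlap))
    step_w = int(slice_w * (1 - overlap))
    boxes = []
    y = 0
    while True:
        y2 = min(y + slice_h, image_h)
        x = 0
        while True:
            x2 = min(x + slice_w, image_w)
            boxes.append([x, y, x2, y2])
            if x2 >= image_w:
                break
            x += step_w
        if y2 >= image_h:
            break
        y += step_h
    return boxes
```

A 512x512 image with 256x256 slices and 0.2 overlap yields a 3x3 grid of nine slices, each of which is then run through the detector, with `num_group = num_slices / num_batch` controlling how many batches the loop above performs.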
@@ -416,7 +416,7 @@ def predict(
Default to ``0.2``.
postprocess_type: str
Type of the postprocess to be used after sliced inference while merging/eliminating predictions.
-Options are 'NMM', 'GREEDYNMM', 'LSNMS' or 'NMS'. Default is 'GRREDYNMM'.
+Options are 'NMM', 'GREEDYNMM', 'LSNMS' or 'NMS'. Default is 'GREEDYNMM'.
postprocess_match_metric: str
Metric to be used during object prediction matching after sliced prediction.
'IOU' for intersection over union, 'IOS' for intersection over smaller area.
@@ -781,7 +781,7 @@ def predict_fiftyone(
Default to ``0.2``.
postprocess_type: str
Type of the postprocess to be used after sliced inference while merging/eliminating predictions.
-Options are 'NMM', 'GRREDYNMM' or 'NMS'. Default is 'GRREDYNMM'.
+Options are 'NMM', 'GREEDYNMM' or 'NMS'. Default is 'GREEDYNMM'.
postprocess_match_metric: str
Metric to be used during object prediction matching after sliced prediction.
'IOU' for intersection over union, 'IOS' for intersection over smaller area.
2 changes: 1 addition & 1 deletion sahi/utils/coco.py
@@ -130,7 +130,7 @@ def from_coco_annotation_dict(cls, annotation_dict: Dict, category_name: Optiona
annotation_dict: dict
COCO formatted annotation dict (with fields "bbox", "segmentation", "category_id")
"""
-if annotation_dict.__contains__("segmentation") and not isinstance(annotation_dict["segmentation"], list):
+if annotation_dict.__contains__("segmentation") and isinstance(annotation_dict["segmentation"], dict):
has_rle_segmentation = True
logger.warning(
f"Segmentation annotation for id {annotation_dict['id']} is skipped since RLE segmentation format is not supported."
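The one-line change above narrows the RLE guard from "not a list" to "is a dict": COCO polygon segmentations are lists of coordinate lists, while RLE segmentations are dicts with `counts` and `size` keys. A stand-alone sketch of the same distinction (the helper name and sample annotations below are illustrative, not SAHI internals):

```python
def has_rle_segmentation(annotation_dict):
    """Return True if a COCO annotation carries an RLE (dict) segmentation."""
    # polygon format: [[x1, y1, x2, y2, ...], ...]  -> list, returns False
    # RLE format: {"counts": ..., "size": [h, w]}   -> dict, returns True
    # missing "segmentation" key -> .get() yields None, returns False
    return isinstance(annotation_dict.get("segmentation"), dict)


polygon_ann = {"id": 1, "segmentation": [[0, 0, 10, 0, 10, 10]]}
rle_ann = {"id": 2, "segmentation": {"counts": "abc", "size": [100, 100]}}
```

Testing for `dict` explicitly (rather than "not a `list`") also handles annotations whose `segmentation` field is absent or has some other unexpected type.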
