
Feature/quantizer factory refactoring #1963

Status: Draft. Wants to merge 62 commits into master.
Showing changes from 45 of 62 commits.

Commits
e6bc8fe
OpenVINO
BloodAxe Mar 28, 2024
246d59d
OpenVINO
BloodAxe Mar 28, 2024
e284b31
PTQ with QC
BloodAxe Mar 28, 2024
05c38fa
OpenVINO
BloodAxe Mar 28, 2024
b27ce4d
quantize_with_accuracy_control
BloodAxe Mar 28, 2024
83dd059
coco2017_yolo_nas_s_ptq_only
BloodAxe Mar 28, 2024
d33a1eb
coco2017_yolo_nas_s_ptq_only
BloodAxe Mar 28, 2024
96f3049
coco2017_yolo_nas_s_ptq_only
BloodAxe Mar 28, 2024
f0536d4
coco2017_yolo_nas_s_ptq_only_cpu
BloodAxe Mar 28, 2024
e0e5c91
coco2017_yolo_nas_s_ptq_only_cpu
BloodAxe Mar 29, 2024
08b829e
coco2017_yolo_nas_s_ptq_only_cpu
BloodAxe Mar 29, 2024
93c507c
coco2017_yolo_nas_s_ptq_only_cpu
BloodAxe Mar 29, 2024
10146bc
coco2017_yolo_nas_s_ptq_only_cpu
BloodAxe Mar 29, 2024
ec135ca
coco2017_yolo_nas_s_ptq_only_cpu
BloodAxe Mar 29, 2024
2aba1cb
coco2017_yolo_nas_s_ptq_only_cpu
BloodAxe Mar 29, 2024
eb75e0f
Working PoC of export of quantized model
BloodAxe Apr 2, 2024
240c0b1
PoC of the plugin-based exporter & quantizer
BloodAxe Apr 4, 2024
792aaf5
PoC of the plugin-based exporter & quantizer
BloodAxe Apr 5, 2024
22ead7b
Make TRT & Vino quantizers work for PTQ
BloodAxe Apr 8, 2024
faeaf70
Make TRT QAT work
BloodAxe Apr 9, 2024
0a8b01d
Implementing export (WIP)
BloodAxe Apr 9, 2024
a2fd101
Update exporters API
BloodAxe Apr 10, 2024
0f5f8ca
Update exporters API
BloodAxe Apr 10, 2024
04e9230
Print only common metrics
BloodAxe Apr 10, 2024
a701f85
Print only common metrics
BloodAxe Apr 11, 2024
72e590d
Tune QAT parameters
BloodAxe Apr 11, 2024
7a42947
Split PTQ & QAT methods into two different classes
BloodAxe Apr 11, 2024
be785d9
Added doc
BloodAxe Apr 11, 2024
bc1741b
Fix qat using wrong model
BloodAxe Apr 11, 2024
fdc25cd
Update trainer
BloodAxe Apr 12, 2024
c5301d4
Fix YoloNAS Pose head support for OpenVINO PTQ
BloodAxe Apr 12, 2024
cb404ba
Bump up package version and remove OpenVINO deps
BloodAxe Apr 15, 2024
6dd186d
Merge branch 'refs/heads/master' into feature/openvino-refactoring
BloodAxe Apr 15, 2024
519cbe9
Merge branch 'refs/heads/master' into feature/openvino-refactoring
BloodAxe Apr 15, 2024
173f00d
Remove openvino stuff
BloodAxe Apr 15, 2024
e8bb9aa
Remove openvino stuff
BloodAxe Apr 15, 2024
63f6d3f
Remove openvino stuff
BloodAxe Apr 15, 2024
fa2eb86
Revert irrelevant stuff
BloodAxe Apr 15, 2024
0d111e9
Revert irrelevant stuff
BloodAxe Apr 15, 2024
cbc670f
postprocessing_use_tensorrt_nms -> detection_postprocessing_use_tenso…
BloodAxe Apr 15, 2024
d708ae1
Revert src/super_gradients/modules/skip_connections.py
BloodAxe Apr 15, 2024
24e1355
Remove irrelevant stuff
BloodAxe Apr 15, 2024
6b7b19e
Update QATRecipeModificationCallback
BloodAxe Apr 15, 2024
65f174c
Adding back ptq() and qat() methods
BloodAxe Apr 16, 2024
54c8b58
Adding back ptq() and qat() methods
BloodAxe Apr 16, 2024
a8ab948
Force PTQ/QAT modes when called from corresponding methods and legacy…
BloodAxe Apr 16, 2024
8854ab6
Improve output path handling of exported model
BloodAxe Apr 17, 2024
58ff42b
Remove TODO
BloodAxe Apr 17, 2024
4d14410
Remove leftover
BloodAxe Apr 17, 2024
39416ae
Added missing max_batches docstring
BloodAxe Apr 17, 2024
9d1fe2b
Update notebook
BloodAxe Apr 18, 2024
77427bc
Added missing installation of trt quantizer for
BloodAxe Apr 18, 2024
20423af
Reorganize quantizers sub-package to allow lazy-install of pytorch-qua…
BloodAxe Apr 18, 2024
8974d56
Fix tests
BloodAxe Apr 18, 2024
f89f9ba
Fix syntax error
BloodAxe Apr 18, 2024
5836dae
Fix import
BloodAxe Apr 18, 2024
aa3264f
Fix import
BloodAxe Apr 18, 2024
be70f48
Fix import
BloodAxe Apr 18, 2024
5ecefd1
Update docs in YAML files
BloodAxe Apr 18, 2024
a6cf4d3
Update notebook to use postprocessing_use_tensorrt_nms:bool instead o…
BloodAxe Apr 19, 2024
0a3c076
Update notebook to use postprocessing_use_tensorrt_nms:bool instead o…
BloodAxe Apr 19, 2024
8be4575
Fixing tests
BloodAxe Apr 19, 2024
11 changes: 4 additions & 7 deletions documentation/source/ptq_qat.md
@@ -259,17 +259,14 @@ If you prefer more of a DIY approach, here is the code sample:
 
 ```python
 import torch
-from super_gradients.training.utils.quantization.export import export_quantized_module_to_onnx
 
+from super_gradients.conversion.onnx import export_to_onnx
 onnx_filename = f"qat_model_1x3x224x224.onnx"
 
 dummy_input = torch.randn([1, 3, 224, 224], device="cpu")
-export_quantized_module_to_onnx(
-    model=quantized_model.cpu(),
+export_to_onnx(
+    model=quantized_model.eval(),
     model_input=dummy_input,
     onnx_filename=onnx_filename,
-    input_shape=[1, 3, 224, 224],
+    input_size=[1, 3, 224, 224],
     train=False,
 )
 ```
1,744 changes: 85 additions & 1,659 deletions notebooks/yolo_nas_custom_dataset_fine_tuning_with_qat.ipynb

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion src/super_gradients/__init__.py
@@ -1,4 +1,4 @@
-__version__ = "3.7.1"
+__version__ = "3.8.0"
 
 from super_gradients.common import init_trainer, is_distributed, object_names
 from super_gradients.training import losses, utils, datasets_utils, DataAugmentation, Trainer, KDTrainer, QATTrainer
7 changes: 7 additions & 0 deletions src/super_gradients/common/factories/exporter_factory.py
@@ -0,0 +1,7 @@
+from super_gradients.common.factories.base_factory import BaseFactory
+from super_gradients.common.registry.registry import ALL_EXPORTERS
+
+
+class ExporterFactory(BaseFactory):
+    def __init__(self):
+        super().__init__(ALL_EXPORTERS)
7 changes: 7 additions & 0 deletions src/super_gradients/common/factories/quantizer_factory.py
@@ -0,0 +1,7 @@
+from super_gradients.common.factories.base_factory import BaseFactory
+from super_gradients.common.registry.registry import ALL_QUANTIZER
+
+
+class QuantizerFactory(BaseFactory):
+    def __init__(self):
+        super().__init__(ALL_QUANTIZER)
6 changes: 6 additions & 0 deletions src/super_gradients/common/registry/registry.py
@@ -194,3 +194,9 @@ def warn_if_deprecated(name: str, registry: dict):
 
 PROCESSINGS = {}
 register_processing = create_register_decorator(registry=PROCESSINGS)
+
+ALL_QUANTIZER = {}
+register_quantizer = create_register_decorator(registry=ALL_QUANTIZER)
+
+ALL_EXPORTERS = {}
+register_exporter = create_register_decorator(registry=ALL_EXPORTERS)
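
For readers less familiar with the registry/factory pattern used elsewhere in super-gradients, here is a minimal sketch of how the two new registries and factories are meant to be consumed. `DummyQuantizer` and its `num_bits` argument are hypothetical, and the sketch assumes `register_quantizer()` and `BaseFactory.get()` behave like the existing SG registries and factories (register under the class name, instantiate from a string or a `{name: params}` mapping).

```python
from super_gradients.common.factories.quantizer_factory import QuantizerFactory
from super_gradients.common.registry.registry import register_quantizer


@register_quantizer()
class DummyQuantizer:
    """Hypothetical quantizer, for illustration only."""

    def __init__(self, num_bits: int = 8):
        self.num_bits = num_bits


# The factory resolves the registered class by name and instantiates it with the given params.
quantizer = QuantizerFactory().get({"DummyQuantizer": {"num_bits": 8}})
print(type(quantizer).__name__, quantizer.num_bits)
```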
20 changes: 20 additions & 0 deletions src/super_gradients/conversion/abstract_exporter.py
@@ -0,0 +1,20 @@
+import abc
+
+
+class AbstractExporter(abc.ABC):
+    """
+    An abstract class for exporting a model from ONNX representation to a specific framework.
+    For instance, an ONNX model can be exported to TFLite, OpenVINO or CoreML formats.
+    This can be done by subclassing this class and implementing the `export_from_onnx` method.
+    """
+
+    @abc.abstractmethod
+    def export_from_onnx(self, source_onnx: str, output_file: str) -> str:

Review comment from a collaborator on this method:

I don't think I understand the logic of this.
We said that SG will not export the TRT engine, or OpenVino engine. We export an onnx file (even if that file is framework specific)

"""
Exports a model from ONNX representation to an output file.
A output filename extension and it's content should be determined by the subclass.
:param source_onnx: Input ONNX model file path.
:param output_file: Output file path of the exported model.
:return: Output file path of the exported model.
"""
pass
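
To make the intended contract of `export_from_onnx` concrete, here is a hypothetical subclass. It is not part of this PR; a real backend exporter (for example an OpenVINO or TFLite one) would call the backend's converter instead of the gzip stand-in used here.

```python
import gzip
import shutil

from super_gradients.common.registry.registry import register_exporter
from super_gradients.conversion.abstract_exporter import AbstractExporter


@register_exporter()
class GzipOnnxExporter(AbstractExporter):
    """Hypothetical exporter: compresses the ONNX file as a stand-in for a real backend conversion."""

    def export_from_onnx(self, source_onnx: str, output_file: str) -> str:
        # Read the source ONNX file, write it back compressed, and return the output path,
        # as the AbstractExporter contract requires.
        with open(source_onnx, "rb") as src, gzip.open(output_file, "wb") as dst:
            shutil.copyfileobj(src, dst)
        return output_file
```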
9 changes: 2 additions & 7 deletions src/super_gradients/conversion/export_params.py
@@ -1,7 +1,7 @@
 import dataclasses
 from typing import Optional, Tuple
 
-from super_gradients.conversion.conversion_enums import ExportTargetBackend, DetectionOutputFormatMode
+from super_gradients.conversion.conversion_enums import DetectionOutputFormatMode
 
 
 @dataclasses.dataclass
@@ -10,10 +10,6 @@ class ExportParams:
     Parameters for exporting a model to ONNX format in PTQ/QAT methods of Trainer.
     Most of the parameters are related to ExportableObjectDetectionModel.export method.
 
-    :param output_onnx_path: The path to save the ONNX model.
-           If None, the ONNX filename will use current experiment dir folder
-           and the output filename will reflect model input shape & whether it's a PTQ or QAT model.
-
     :param batch_size: The batch size for the ONNX model. Default is 1.
 
     :param input_image_shape: The input image shape (rows, cols) for the ONNX model.
@@ -58,8 +54,6 @@ class ExportParams:
        Relevant only for object detection models and only if postprocessing is True.
     """
 
-    output_onnx_path: Optional[str] = None
-    engine: Optional[ExportTargetBackend] = None
     batch_size: int = 1
     input_image_shape: Optional[Tuple[int, int]] = None
     preprocessing: bool = True
@@ -74,3 +68,4 @@ class ExportParams:
     detection_max_predictions_per_image: Optional[int] = None
     detection_predictions_format: DetectionOutputFormatMode = DetectionOutputFormatMode.BATCH_FORMAT
     detection_num_pre_nms_predictions: int = 1000
+    detection_postprocessing_use_tensorrt_nms: bool = False
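
With `output_onnx_path` and `engine` removed, `ExportParams` now only describes the export itself; the output path and backend are presumably resolved elsewhere via the new exporter/quantizer plugins. A minimal sketch of constructing the refactored dataclass, using only fields visible in this diff (the concrete values are illustrative):

```python
from super_gradients.conversion.export_params import ExportParams

export_params = ExportParams(
    batch_size=1,                                      # default shown in the diff
    input_image_shape=(640, 640),                      # illustrative (rows, cols)
    preprocessing=True,
    detection_num_pre_nms_predictions=1000,
    detection_postprocessing_use_tensorrt_nms=False,   # new flag introduced by this PR
)
```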
3 changes: 3 additions & 0 deletions src/super_gradients/conversion/onnx/__init__.py
@@ -0,0 +1,3 @@
+from .export_to_onnx import export_to_onnx
+
+__all__ = ["export_to_onnx"]
17 changes: 17 additions & 0 deletions src/super_gradients/conversion/onnx_exporter.py
@@ -0,0 +1,17 @@
+import shutil
+
+from super_gradients.common.abstractions.abstract_logger import get_logger
+from super_gradients.common.registry.registry import register_exporter
+from super_gradients.conversion.abstract_exporter import AbstractExporter
+
+logger = get_logger(__name__)
+
+
+@register_exporter()
+class ONNXExporter(AbstractExporter):
+    def __init__(self):
+        pass
+
+    def export_from_onnx(self, source_onnx: str, output_file: str) -> str:
+        shutil.copy(source_onnx, output_file)
+        return output_file
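
A short sketch of how the registered exporter might be resolved by name through the new `ExporterFactory`. The string key assumes `register_exporter()` registers the class under its own name, as the other SG registries do; the file paths are hypothetical.

```python
from super_gradients.common.factories.exporter_factory import ExporterFactory

# Resolve the exporter by its registered name (assumed to be the class name).
exporter = ExporterFactory().get("ONNXExporter")

# For the ONNX "backend" the export step is just a file copy, as implemented above.
output_path = exporter.export_from_onnx(
    source_onnx="yolo_nas_s_ptq.onnx",          # hypothetical input path
    output_file="yolo_nas_s_ptq_exported.onnx",  # hypothetical output path
)
```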