TorchDynamo-based ONNX Exporter
===============================

.. automodule:: torch.onnx
  :noindex:

.. warning::
  The ONNX exporter for TorchDynamo is a rapidly evolving beta technology.

The ONNX exporter leverages the TorchDynamo engine to hook into Python's frame evaluation API and dynamically rewrite its bytecode into an FX graph. The resulting FX graph is then polished before it is finally translated into an ONNX graph.

The main advantage of this approach is that the FX graph is captured using bytecode analysis that preserves the dynamic nature of the model instead of using traditional static tracing techniques.
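As a rough illustration of what TorchDynamo captures (this is not part of the exporter API; ``inspect_backend`` and ``toy_fn`` below are made-up names for the sketch), a custom ``torch.compile`` backend can print the FX graph it receives:

.. code-block:: python

    import torch

    def inspect_backend(gm, example_inputs):
        # TorchDynamo hands the captured torch.fx.GraphModule to the backend;
        # printing its graph shows the kind of graph the ONNX exporter starts from.
        print(gm.graph)
        return gm.forward

    @torch.compile(backend=inspect_backend)
    def toy_fn(x):
        return torch.sigmoid(x) + 1

    toy_fn(torch.randn(4))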

The exporter is designed to be modular and extensible; its individual components are documented in the API reference at the end of this page.

The ONNX exporter depends on extra Python packages:

* ONNX
* ONNX Script

They can be installed through pip:

.. code-block:: bash

    pip install --upgrade onnx onnxscript

Below is a demonstration of the exporter API in action, using a simple Multilayer Perceptron (MLP) as an example:

.. code-block:: python

    import torch
    import torch.nn as nn

    class MLPModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc0 = nn.Linear(8, 8, bias=True)
            self.fc1 = nn.Linear(8, 4, bias=True)
            self.fc2 = nn.Linear(4, 2, bias=True)
            self.fc3 = nn.Linear(2, 2, bias=True)

        def forward(self, tensor_x: torch.Tensor):
            tensor_x = self.fc0(tensor_x)
            tensor_x = torch.sigmoid(tensor_x)
            tensor_x = self.fc1(tensor_x)
            tensor_x = torch.sigmoid(tensor_x)
            tensor_x = self.fc2(tensor_x)
            tensor_x = torch.sigmoid(tensor_x)
            output = self.fc3(tensor_x)
            return output

    model = MLPModel()
    tensor_x = torch.rand((97, 8), dtype=torch.float32)
    export_output = torch.onnx.dynamo_export(model, tensor_x)

As the code above shows, all you need to do is provide :func:`torch.onnx.dynamo_export` with an instance of the model and its input. The exporter will then return an instance of :class:`torch.onnx.ExportOutput` that contains the exported ONNX graph along with extra information.
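
The export can also be customized through :class:`torch.onnx.ExportOptions` (documented in the API reference further down). As a sketch, assuming you want the exported graph to keep dynamic input shapes rather than specialize on the ``(97, 8)`` example input, the options can be passed like this:

.. code-block:: python

    # Sketch: reuse ``model`` and ``tensor_x`` from the example above and ask
    # the exporter not to specialize on the example input's shape.
    export_options = torch.onnx.ExportOptions(dynamic_shapes=True)
    export_output = torch.onnx.dynamo_export(
        model, tensor_x, export_options=export_options
    )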

The in-memory model available through ``export_output.model_proto`` is an ``onnx.ModelProto`` object in compliance with the ONNX IR spec. The ONNX model may then be serialized into a Protobuf file using the :meth:`torch.onnx.ExportOutput.save` API.

.. code-block:: python

    export_output.save("mlp.onnx")
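
As a quick sanity check (not part of the original walkthrough), the saved file can be loaded back with the ``onnx`` package installed earlier and validated against the ONNX spec:

.. code-block:: python

    import onnx

    # Load the serialized model back and verify it is a well-formed ONNX model.
    onnx_model = onnx.load("mlp.onnx")
    onnx.checker.check_model(onnx_model)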

You can view the exported model using `Netron <https://netron.app/>`_.

*(Figure: MLP model as viewed using Netron.)*

Note that each layer is represented in a rectangular box with an *f* icon in the top right corner.

*(Figure: ONNX function highlighted on the MLP model.)*

Expanding it reveals the function body.

*(Figure: ONNX function body.)*

The function body is a sequence of ONNX operators or other functions.
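
The functions Netron displays are stored in the model proto itself. As a small sketch, assuming the ``export_output`` from the MLP example above, they can also be listed programmatically:

.. code-block:: python

    # Each entry corresponds to one of the expandable function boxes in Netron.
    for function in export_output.model_proto.functions:
        print(function.domain, function.name)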

ONNX diagnostics go beyond regular logs by adopting the Static Analysis Results Interchange Format (SARIF) to help users debug and improve their model using a GUI, such as Visual Studio Code's SARIF Viewer.

The main advantages are:

* The diagnostics are emitted in the machine-parseable Static Analysis Results Interchange Format (SARIF).
* A new, clearer, and more structured way to add and keep track of diagnostic rules.
* Serves as a foundation for future improvements that consume the diagnostics.

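When an export fails, :class:`torch.onnx.OnnxExporterError` is raised. A minimal sketch of catching it, assuming the MLP example above, looks like this; the emitted SARIF diagnostics can then be opened in a SARIF viewer:

.. code-block:: python

    try:
        export_output = torch.onnx.dynamo_export(model, tensor_x)
    except torch.onnx.OnnxExporterError as error:
        # Diagnostics for the failed export are emitted as a SARIF report;
        # inspect it with a SARIF viewer such as the VS Code SARIF Viewer
        # extension to see which rules fired.
        print(error)
        raise
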
.. toctree::
   :maxdepth: 1
   :caption: ONNX Diagnostic SARIF Rules
   :glob:

   generated/onnx_dynamo_diagnostics_rules/*

.. autofunction:: torch.onnx.dynamo_export

.. autoclass:: torch.onnx.ExportOptions
    :members:

.. autofunction:: torch.onnx.enable_fake_mode

.. autoclass:: torch.onnx.ExportOutput
    :members:

.. autoclass:: torch.onnx.ExportOutputSerializer
    :members:

.. autoclass:: torch.onnx.OnnxExporterError
    :members:

.. autoclass:: torch.onnx.OnnxRegistry
    :members:

.. autoclass:: torch.onnx.DiagnosticOptions
    :members: