More complex transformer architectures (e.g. models exported to ONNX) consist of stacks of repeated encoder or decoder building blocks. It would help if the Netron community could give feedback on how to make such complex graphs more intuitive to visualize, with an overview of these building blocks.
Ideally, these higher-level building blocks would be defined within the ONNX model during export.
Are there graph layout methods that can help identify these building blocks?
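One simple heuristic, short of full graph-layout analysis, is to exploit the node naming convention that many exporters produce: nodes belonging to the same transformer layer typically share a common name prefix (e.g. the `/h.<i>/` prefix seen in some GPT-2 ONNX exports). The sketch below is a minimal, illustrative grouping pass over node names only; the pattern and the sample names are assumptions, not Netron's actual implementation.

```python
import re
from collections import defaultdict

def group_nodes_by_block(node_names, pattern=r"^/h\.(\d+)/"):
    """Bucket ONNX node names into repeated building blocks.

    Nodes whose names match the layer pattern (here an assumed
    GPT-2-style '/h.<i>/' prefix) are grouped by layer index;
    everything else remains top-level.
    """
    blocks = defaultdict(list)
    top_level = []
    layer_rx = re.compile(pattern)
    for name in node_names:
        match = layer_rx.match(name)
        if match:
            blocks[int(match.group(1))].append(name)
        else:
            top_level.append(name)
    return dict(blocks), top_level

# Illustrative node names resembling a gpt2.onnx export (hypothetical)
names = [
    "/wte/Gather",
    "/h.0/attn/MatMul",
    "/h.0/mlp/Gemm",
    "/h.1/attn/MatMul",
    "/h.1/mlp/Gemm",
    "/ln_f/LayerNormalization",
]
blocks, top_level = group_nodes_by_block(names)
```

Each bucket could then be rendered as a single collapsible node in the visualization, with the top-level nodes (embeddings, final layer norm) drawn as-is. This naming heuristic is fragile when exporters strip or mangle names, which is why a structural approach (detecting repeated isomorphic subgraphs) would be more robust.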
Recently, a work-in-progress attempt has been made at such a feature. Here is a WIP output using gpt2.onnx: