Exported model assumes that the input should always be similar to the tracing example #1991
Comments
Based on the error message, it seems you are trying to use an input with a different shape than what the model was traced with. This means you need to use Flexible Input Shapes.
@TobyRoseman So all the flexible input shape solutions require some kind of limit on the input shape size. Why do we need a limit? What limitations do we have here compared to the PyTorch Lightning export?
Yes, flexible input shapes require limits. This is a requirement of the Core ML Framework. I'm not familiar enough with PyTorch Lightning export to compare.
Seems that there is a bug in the conversion when I use flexible input shapes 🤔:

Code:

```python
rangeDim = ct.RangeDim(lower_bound=16000, upper_bound=16000 * 100, default=16000)
input_signal_shape = ct.Shape(shape=(1, rangeDim))
input_signal_len_shape = ct.Shape(shape=[rangeDim])

mlmodel = ct.convert(
    torshscript_model,
    source="pytorch",
    inputs=[
        ct.TensorType(name="input_signal", shape=input_signal_shape),
        ct.TensorType(name="input_signal_length", shape=input_signal_len_shape),
    ],
)

os.remove(exported_model_path)
exported_model_path = os.path.join(output_dir, "Model.mlpackage")
mlmodel.save(exported_model_path)
```
Try calling […]
But that is what I am currently doing 🤔 |
It doesn't seem so. Note this line in your output:
@TobyRoseman Here is my full code (sorry, the `scripted_model` name is confusing):

```python
custom_model = MyCustomModel()
custom_model.eval()

audio_signal = torch.randn(1, 16000 * 100)
audio_signal_len = torch.tensor([audio_signal.shape[1]])

scripted_model = torch.jit.trace(
    custom_model.forward, example_inputs=(audio_signal, audio_signal_len)
)

os.remove(exported_model_path)
exported_model_path = os.path.join(output_dir, "MyModel.ts")
scripted_model.save(exported_model_path)

torshscript_model = torch.jit.load(exported_model_path)

mlmodel = ct.convert(
    scripted_model,
    source="pytorch",
    inputs=[
        ct.TensorType(
            name="inputSignal",
            shape=(
                1,
                ct.RangeDim(16000, 16000 * 100),
            ),
            dtype=np.float32,
        ),
        ct.TensorType(
            name="inputSignalLength",
            shape=(ct.RangeDim(16000, 16000 * 100),),
            dtype=np.int64,
        ),
    ],
)

os.remove(exported_model_path)
exported_model_path = os.path.join(output_dir, "MyModel.mlpackage")
mlmodel.save(exported_model_path)
```
Are you still getting the following warning?

If so, then I don't think your model is actually traced. Here is the check for that warning. Perhaps part of your model is tagged with the […]. Also, I'm not sure why the first parameter to […]. Since you didn't share the implementation of […]
No, I am not getting the warning anymore. I can guarantee that the model is traced, since it's already working with […]. Here is the full implementation. But I believe the problem could be related to #1921; it seems like the same case to me.
@hadiidbouk: I cannot reproduce it for this line:
🐞 Describing the bug
The bug isn't detected while exporting the model and no error is shown; however, when I try using the model in Swift, I get this error:
On this line:
There is a problem in the export that somehow makes the traced model not work as expected: it keeps assuming that my input always has the same shape as the example passed to the trace function.
The first thing to suspect here is that the tracing is failing, but that's not the case, because I am able to export the model using PyTorch Lightning and use it with the LibTorch C++ library without any problem.
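One way to rule out shape-baking in the trace itself is to have `torch.jit.trace` re-check the trace against inputs of other lengths via its `check_inputs` argument. A minimal sketch with a placeholder `Scale` module (not the model from this issue):

```python
import torch

class Scale(torch.nn.Module):
    def forward(self, x):
        return x * 0.5

model = Scale().eval()

# Trace with one length, but ask torch.jit.trace to re-run the trace on
# other lengths and warn if the traced graph diverges from eager mode.
traced = torch.jit.trace(
    model,
    torch.randn(1, 16000),
    check_inputs=[(torch.randn(1, 32000),), (torch.randn(1, 48000),)],
)

# Manual spot check across lengths as well.
for n in (16000, 32000, 64000):
    x = torch.randn(1, n)
    torch.testing.assert_close(traced(x), model(x))
```

If these checks pass but the Core ML model still rejects other lengths, the mismatch was introduced at conversion time rather than by tracing.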
Stack Trace
System environment (please complete the following information):
7.0.0