Is your feature request related to a problem? Please describe.
From a clinical safety perspective, it is important to know which version of a trained model was used to build a MAP. Our internal MLOps process uses an MLflow server to record experiments and register trained models. It would be very useful to be able to point the MAP packager at the MLflow model registry to take advantage of this accountability.
Describe the solution you'd like
Currently, the packager takes as input a path to a TorchScript model on disk; this file path is used by the ModelFactory to create a Model object. I think the simple solution is to add a condition at the start of this process: if an MLflow model is specified, the model artifact is first downloaded from the MLflow registry to a temporary directory using the MLflow REST API, and then the process continues as before.
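A minimal sketch of that condition, assuming hypothetical helper names (`resolve_model_path` and `is_registry_uri` are illustrative, not existing MONAI Deploy functions) and the `mlflow` client library:

```python
# Illustrative sketch: resolve a --model argument to a local file path before
# the existing ModelFactory logic runs. Names here are assumptions, not the
# packager's real API.
import tempfile
from typing import Optional


def is_registry_uri(model_arg: str) -> bool:
    # MLflow registered-model and artifact URIs, per the proposal above
    return model_arg.startswith(("models:/", "runs:/", "s3://"))


def resolve_model_path(model_arg: str, model_registry: Optional[str] = None) -> str:
    if model_registry == "mlflow" and is_registry_uri(model_arg):
        # Deferred import so plain filesystem packaging needs no MLflow install
        import mlflow

        dest = tempfile.mkdtemp(prefix="map_model_")
        # mlflow.artifacts.download_artifacts accepts models:/, runs:/ and
        # s3:// URIs and returns the local download path
        return mlflow.artifacts.download_artifacts(artifact_uri=model_arg, dst_path=dest)
    # Existing behaviour: the argument is already a path on disk
    return model_arg
```

After this step, the returned local path would be handed to the ModelFactory exactly as today, so the rest of the packaging pipeline is untouched.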
MLflow models can be recorded and accessed by a number of identifiers. I think the most useful would be the name and version of a registered model. We can add an optional argparser argument, `model_registry`, which can be extended to other registries as required. The `models:/` syntax is used to identify MLflow models, e.g.

```sh
monai-deploy package examples/apps/ai_spleen_seg_app --tag seg_app:latest --model_registry mlflow --model models:/MODEL_NAME/MODEL_VERSION
```
Unregistered models can also be supported by accessing a specific artifact URI, which can be useful for building prototype MAPs:

```sh
monai-deploy package examples/apps/ai_spleen_seg_app --tag seg_app:latest --model_registry mlflow --model s3://mlflow/RUN_ID/artifacts/model
```
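The argparse change could look like the following sketch. The flag names mirror the commands above, but this is not the packager's actual parser; the final option names would be up to the maintainers:

```python
# Sketch of the proposed CLI extension (illustrative, not the real
# monai-deploy argument parser).
import argparse

parser = argparse.ArgumentParser(prog="monai-deploy package")
parser.add_argument("app", help="Path to the application to package")
parser.add_argument("--tag", required=True, help="Image tag for the MAP")
parser.add_argument("--model", required=True,
                    help="Local path, or a registry URI when --model_registry is set")
parser.add_argument("--model_registry", choices=["mlflow"], default=None,
                    help="Fetch --model from a model registry instead of the local filesystem")

# Parsing the first example command from above:
args = parser.parse_args([
    "examples/apps/ai_spleen_seg_app",
    "--tag", "seg_app:latest",
    "--model_registry", "mlflow",
    "--model", "models:/MODEL_NAME/MODEL_VERSION",
])
```

Using `choices=["mlflow"]` keeps the door open for other registries later: adding one is just a new entry in the list plus a download backend.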
I am happy to contribute this solution so long as this idea sounds logical.
One related feature I'm not yet sure how to address: we need some mechanism for seeing the model identifier (name, version number, UID, etc.) within the MAP. Ideally, we would be able to query a packaged MAP for this information, or view it in execution logs.
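One possible (purely illustrative) mechanism: derive a small provenance record from the registry URI at package time and bake it into the MAP as a JSON file, so it could later be printed in execution logs or extracted from the packaged image. The function and field names below are assumptions, not an existing MAP manifest format:

```python
# Hypothetical provenance record built from the --model argument at package
# time; the field names are illustrative only.
import json


def model_provenance(model_uri: str) -> dict:
    prov = {"source_uri": model_uri}
    if model_uri.startswith("models:/"):
        # e.g. "models:/spleen_ct_seg/3" -> name "spleen_ct_seg", version "3"
        name, _, version = model_uri[len("models:/"):].partition("/")
        prov.update(registry="mlflow", model_name=name, model_version=version)
    return prov


# Serialized form that could be written into the MAP alongside the model
manifest_entry = json.dumps(model_provenance("models:/spleen_ct_seg/3"))
```

A plain local path would produce only the `source_uri` field, preserving current behaviour while making registry-sourced models auditable.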
Describe alternatives you've considered
We can continue using the filepath packaging method by downloading the model manually; however, I believe this introduces an unnecessary risk whereby the packaged model is not auditable on the MLflow server.