Can't load self-trained model for prediction #180
Comments
👋 Hello @Melvin0419, thank you for leaving an issue on Roboflow Notebooks.

🐞 Bug reports: If you are filing a bug report, please be as detailed as possible. This will help us more easily diagnose and resolve the problem you are facing. To learn more about contributing, check out our Contributing Guidelines. If you require support with custom code that is not part of Roboflow Notebooks, please reach out on the Roboflow Forum or on the GitHub Discussions page associated with this repository.

💬 Get in touch: Do you have more questions about Roboflow that we haven't responded to yet? Feel free to ask them on the Roboflow Discuss forum. Our developer advocates and community team actively respond to questions there. To ask questions about Notebooks, head over to the GitHub Discussions section of this repository.
Hi, @Melvin0419 👋🏻! Did you train and load the model in the same notebook?
Thanks for replying! No, I loaded the model in another notebook. I think that is the problem! But is there a way I can load the model in a notebook different from the one used for training?
You need to make sure that the version of ultralytics is the same in both notebooks.
Yes, both notebooks have ultralytics==8.0.20. And I have some updates! Previously, I chose yolov8s as my model to train, and it caused an error when loading it into another notebook. But if I train the model using yolov8n instead, the trained model can be loaded into my loading notebook successfully, without the error I encountered when using yolov8s.
Did you try to update the ultralytics package?
We've recently verified the required versions of yolov8 needed to train, validate, deploy, and infer with the model. I'm closing this now. Do let us know if there are any further issues!
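For anyone hitting the same problem, here is a minimal sketch of pinning the same ultralytics release in both the training notebook and the loading notebook. It assumes 8.0.20, the version quoted above; substitute whatever version you actually trained with.

# Run this in BOTH the training notebook and the notebook that loads the weights,
# so the checkpoint is saved and loaded by the same ultralytics release.
!pip install ultralytics==8.0.20

# Quick sanity check that the pinned version is the one actually imported.
import ultralytics
print(ultralytics.__version__)   # expected: 8.0.20 in both environments

# In the loading notebook only: load the checkpoint produced by training.
from ultralytics import YOLO
model = YOLO("best.pt")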
Search before asking
Notebook name
train-yolov8-object-detection-on-custom-dataset.ipynb
Bug
requirements: YOLOv8 requirement "models" not found, attempting AutoUpdate...
requirements: ❌ Command 'pip install "models" ' returned non-zero exit status 1.
ModuleNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/ultralytics/nn/tasks.py in torch_safe_load(weight)
331 try:
--> 332 return torch.load(file, map_location='cpu') # load
333 except ModuleNotFoundError as e:
10 frames
ModuleNotFoundError: No module named 'models'
During handling of the above exception, another exception occurred:
ModuleNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/torch/serialization.py in find_class(self, mod_name, name)
1163 pass
1164 mod_name = load_module_mapping.get(mod_name, mod_name)
-> 1165 return super().find_class(mod_name, name)
1166
   1167     # Load the data (which may in turn use `persistent_load` to load tensors)

ModuleNotFoundError: No module named 'models'
NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt. To view examples of installing some common dependencies, click the "Open Examples" button below.
Environment
Google Colab
Minimal Reproducible Example
from ultralytics import YOLO
model = YOLO("best.pt")
Additional
I followed the tutorial on Google Colab, and I have successfully trained a model with my own dataset. I have tested the trained model, named "best.pt", and it works pretty well. Then I downloaded "best.pt" from Google Colab and tried to load the model on my local machine. The script I used is:
from ultralytics import YOLO
model = YOLO("best.pt")
just the same as the prediction script in the docs, but I encountered the same error shown in the Bug section above.
I found out that I can't load the best.pt model using the above code. Is there any adjustment I need to make? Thanks for reading.
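Based on the version-mismatch explanation in the comments above, a minimal sketch of the local-side check, assuming the model was trained in Colab with ultralytics==8.0.20 and using "image.jpg" as a placeholder test image (both values are assumptions, not taken from the original report):

# Confirm the locally installed ultralytics release matches the one used for training.
import ultralytics
print(ultralytics.__version__)   # if this differs from the Colab training version,
                                 # reinstall it, e.g. pip install ultralytics==8.0.20

from ultralytics import YOLO
model = YOLO("best.pt")                       # the checkpoint downloaded from Colab
results = model.predict(source="image.jpg")   # "image.jpg" is a placeholder test image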
Are you willing to submit a PR?