
Update torch.cuda.amp to torch.amp #13244

Open · wants to merge 21 commits into master

Conversation

@jacobdbrown4 commented Aug 5, 2024

torch.cuda.amp is deprecated as of PyTorch 2.4. This PR updates usage to torch.amp, which gets rid of the following warning, as mentioned in #13226:

FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with torch.cuda.amp.autocast(amp):
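
For illustration, a minimal sketch of the autocast side of the change (the `model`, `inputs`, and `amp` names here are placeholders, not the actual YOLOv5 variables, and running it requires a CUDA device):

```python
import torch

model = torch.nn.Linear(8, 2).cuda()       # placeholder model
inputs = torch.randn(4, 8, device="cuda")  # placeholder batch
amp = True                                 # mixed-precision flag

# Deprecated as of PyTorch 2.4 (emits the FutureWarning above):
#   with torch.cuda.amp.autocast(amp):
#       pred = model(inputs)

# Replacement: same behavior, with the device type passed explicitly.
with torch.amp.autocast("cuda", enabled=amp):
    pred = model(inputs)
```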

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Update to use the latest CUDA AMP (Automatic Mixed Precision) API across various files for better compatibility and performance.

📊 Key Changes

  • Replaced torch.cuda.amp.autocast with torch.amp.autocast("cuda") in multiple files.
  • Replaced torch.cuda.amp.GradScaler with torch.amp.GradScaler("cuda") (see the sketch below).
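
As a sketch of the GradScaler replacement, with a fallback for older PyTorch releases that only ship the CUDA-specific class (whether YOLOv5's minimum supported PyTorch version actually needs this fallback is an assumption, not something stated in this PR):

```python
import torch

amp = True  # placeholder mixed-precision flag, as in a typical training setup

if hasattr(torch.amp, "GradScaler"):
    # New API: the device type is passed as the first argument.
    scaler = torch.amp.GradScaler("cuda", enabled=amp)
else:
    # Older PyTorch: fall back to the CUDA-specific class.
    scaler = torch.cuda.amp.GradScaler(enabled=amp)
```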

🎯 Purpose & Impact

  • Improved Compatibility: Ensures that the code remains compatible with the latest PyTorch changes, reducing the risk of future issues.
  • Performance: Leverages CUDA's improved automatic mixed precision to potentially enhance computational efficiency.
  • Maintenance: Simplifies code adjustments related to AMP, making future updates easier to manage.

github-actions bot (Contributor) commented Aug 5, 2024

All Contributors have signed the CLA. ✅
Posted by the CLA Assistant Lite bot.

@UltralyticsAssistant added the enhancement and python labels on Aug 5, 2024
@github-actions bot (Contributor) left a comment

👋 Hello @jacobdbrown4, thank you for submitting a YOLOv5 🚀 PR! To allow your work to be integrated as seamlessly as possible, we advise you to:

  • ✅ Verify your PR is up-to-date with ultralytics/yolov5 master branch. If your PR is behind you can update your code by clicking the 'Update branch' button or by running git pull and git merge master locally.
  • ✅ Verify all YOLOv5 Continuous Integration (CI) checks are passing.
  • ✅ Reduce changes to the absolute minimum required for your bug fix or feature addition. "It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is." — Bruce Lee

@jacobdbrown4 (Author)

I have read the CLA Document and I sign the CLA

@jacobdbrown4 (Author)

recheck

@glenn-jocher (Member)

@jacobdbrown4 thank you for your comment! To ensure we address your issue effectively, could you please verify that you are using the latest versions of YOLOv5 and all related dependencies? This helps us confirm whether the problem persists with the most recent updates.

If the issue is still reproducible, please provide additional details such as error messages, steps to reproduce, and any relevant code snippets. This information will help us diagnose and resolve the issue more efficiently.

Looking forward to your response! 😊

@ijnrghjkdsmigywneig203 commented Sep 13, 2024

This pull request needs to be merged; inference times are slower because this warning constantly pops up with the newest versions of PyTorch.

@ijnrghjkdsmigywneig203

@glenn-jocher

@glenn-jocher (Member)

Thank you for your input. Please ensure you're using the latest YOLOv5 version to see if the issue persists. If the problem continues, provide additional details so we can assist further.

@mezotaken

Reproducible with commit 907bef2. Running python3 train.py is enough; it spams this warning on every iteration.

@pderrenger (Member)

Please ensure you're using the latest YOLOv5 version, as updates may resolve this issue. If it persists, let us know with more details.

@ijnrghjkdsmigywneig203

Are you guys just bots designed to repeat the same thing when a real issue is present within the code? The latest version of yolov5 is literally fetched every single time I initialize my script. The issue is with your code being outdated.

@pderrenger (Member)

@ijnrghjkdsmigywneig203 thank you for your feedback. We recommend ensuring all dependencies are up-to-date. If the issue persists, please provide more details so we can investigate further.

@ijnrghjkdsmigywneig203

Everything is up to date and the error for your outdated code still occurs. What can I do to solve it?

torch.cuda.amp is deprecated as of PyTorch 2.4. This PR updates usage to torch.amp. This gets rid of the

FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with torch.cuda.amp.autocast(amp):

@pderrenger (Member)

Thank you for bringing this to our attention. Please check if there's an open pull request addressing this update. If not, consider submitting one to help resolve the issue.

@beltoforion

There is an open pull request that will solve this issue: this one. Please merge this PR.

@pderrenger (Member)

Thank you for pointing this out. Please follow the pull request for updates, as it will be reviewed and merged if it meets the requirements.

@ijnrghjkdsmigywneig203

@glenn-jocher

@sakgoyal

will this be merged anytime soon?

@pderrenger (Member)

Thank you for following up. The PR appears to address the deprecation warning for torch.cuda.amp in PyTorch 2.4. I'll let the maintainers review and make a decision on merging. In the meantime, you can track the PR status for updates.

Labels: enhancement (New feature or request)

8 participants