
FIX: Check for prefix tuning + gradient checkpointing fails #2191

Conversation

BenjaminBossan (Member)

See #869

Since transformers is moving to the new cache implementation, we had to change prefix tuning to use it too. However, caching does not work with gradient checkpointing, so this combination currently runs into an error about size mismatches.

Now, PEFT checks for gradient checkpointing and raises a helpful error.
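A rough sketch of what such a guard could look like is below. The helper name and error message are illustrative only, not the actual PEFT code; `PeftType.PREFIX_TUNING` and the `is_gradient_checkpointing` property are existing PEFT/transformers identifiers.

```python
# Illustrative sketch only, not the actual PEFT implementation.
from peft import PeftType


def check_prefix_tuning_gradient_checkpointing(model, peft_config):
    """Raise a clear error instead of a confusing size mismatch later."""
    # Prefix tuning now relies on the transformers cache, which is
    # incompatible with gradient checkpointing.
    if peft_config.peft_type == PeftType.PREFIX_TUNING and getattr(
        model, "is_gradient_checkpointing", False
    ):
        raise ValueError(
            "Prefix tuning does not work with gradient checkpointing. "
            "Please disable gradient checkpointing before training."
        )
```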

Related to this change, I relaxed the _test_training_gradient_checkpointing test, which used to skip all prompt learning methods even though only prefix tuning needs to be skipped.
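A minimal sketch of the relaxed skip condition follows; the helper name is hypothetical (in PEFT the check lives inside the shared test itself), but the idea is the same.

```python
# Illustrative sketch only; the real check lives inside PEFT's shared test suite.
import pytest

from peft import PeftType


def maybe_skip_gradient_checkpointing(config):
    # Previously, every prompt learning config was skipped here; now only
    # prefix tuning is, since it is the only method incompatible with
    # gradient checkpointing.
    if config.peft_type == PeftType.PREFIX_TUNING:
        pytest.skip("Prefix tuning is incompatible with gradient checkpointing")
```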

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@zucchini-nlp (Member) left a comment


Thanks!

@BenjaminBossan BenjaminBossan merged commit b5b9023 into huggingface:main Nov 1, 2024
14 checks passed
@BenjaminBossan BenjaminBossan deleted the fix-raise-error-prefix-tuning-with-gradient-checkpointing branch November 1, 2024 09:48