v0.13.0: LoRA+, VB-LoRA, and more
Highlights
New methods
LoRA+
@kallewoof added LoRA+ to PEFT (#1915). LoRA+ provides a function to initialize an optimizer with settings better suited for training a LoRA adapter, namely a higher learning rate for the LoRA B matrices than for the A matrices.
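As a rough sketch of how this can be used (assuming the helper is exposed as `create_loraplus_optimizer` under `peft.optimizers`; the model name and hyperparameters are only illustrative):

```python
# Minimal sketch: build a LoRA+ optimizer for a LoRA-wrapped model.
# Assumes the helper is exposed as `peft.optimizers.create_loraplus_optimizer`.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model
from peft.optimizers import create_loraplus_optimizer

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = get_peft_model(base, LoraConfig(task_type="CAUSAL_LM"))

# LoRA+ trains the LoRA B matrices with a larger learning rate than the A matrices;
# `loraplus_lr_ratio` controls that ratio on top of the base learning rate.
optimizer = create_loraplus_optimizer(
    model=model,
    optimizer_cls=torch.optim.AdamW,
    lr=5e-5,
    loraplus_lr_ratio=16,
)
```

The resulting optimizer can then be used in a custom training loop or passed to `Trainer` via its `optimizers` argument.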
VB-LoRA
@leo-yangli added a new method to PEFT called VB-LoRA (#2039). The idea is to compose the LoRA layers from a single vector bank (hence "VB") that is shared among all layers. This makes VB-LoRA extremely parameter efficient and the checkpoints especially small (comparable to the VeRA method), while still promising good fine-tuning performance. Check the VB-LoRA docs and example.
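A rough configuration sketch is shown below; the parameter names controlling the shared vector bank (`num_vectors`, `vector_length`) and the target module names are assumptions here, so consult the VB-LoRA docs for the exact API:

```python
# Sketch: wrap a small model with VB-LoRA. The vector-bank parameters
# (`num_vectors`, `vector_length`) are assumptions; see the VB-LoRA docs.
from transformers import AutoModelForCausalLM
from peft import VBLoRAConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = VBLoRAConfig(
    task_type="CAUSAL_LM",
    r=4,                  # rank used to compose each layer from the bank
    num_vectors=256,      # number of vectors in the shared bank
    vector_length=256,    # length of each vector; must divide the layer dimensions
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # should report a very small trainable fraction
```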
Enhancements
New Hugging Face team member @ariG23498 added the helper function `rescale_adapter_scale` to PEFT (#1951). Use this context manager to temporarily increase or decrease the scaling of the LoRA adapter of a model. It also works for PEFT adapters loaded directly into a transformers or diffusers model.
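A sketch of how the context manager can be used, assuming it is importable from `peft.helpers` and that its scaling argument is named `multiplier`:

```python
# Sketch: temporarily rescale the LoRA contribution during inference;
# the original scaling is restored when the context manager exits.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model
from peft.helpers import rescale_adapter_scale

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("facebook/opt-125m"),
    LoraConfig(task_type="CAUSAL_LM"),
)
inputs = tokenizer("Hello, world", return_tensors="pt")

with rescale_adapter_scale(model, multiplier=0.5):  # halve the LoRA contribution
    with torch.no_grad():
        output_scaled = model(**inputs)
```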
@ariG23498 also added DoRA support for embedding layers (#2006). So if you're using the `use_dora=True` option in the `LoraConfig`, you can now also target embedding layers.
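For example, a config along these lines should work; the embedding module name is model-specific and only illustrative here:

```python
# Sketch: DoRA applied to attention projections and the input embedding.
# "embed_tokens" is the embedding module name for OPT/Llama-style models
# and will differ for other architectures.
from peft import LoraConfig

config = LoraConfig(
    task_type="CAUSAL_LM",
    use_dora=True,
    target_modules=["q_proj", "v_proj", "embed_tokens"],
)
```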
For some time now, we have supported inference with batches that use different adapters for different samples, e.g. samples 1-5 use "adapter1" and samples 6-10 use "adapter2". However, so far this only worked for LoRA layers. @saeid93 extended this to also work with layers targeted by `modules_to_save` (#1990).
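A sketch of such a mixed-adapter batch, assuming two adapters that both use `modules_to_save` (the model and adapter names are placeholders):

```python
# Sketch: pass one adapter name per sample in the batch; with #1990 this also
# works when the adapters contain `modules_to_save` layers (e.g. lm_head).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("facebook/opt-125m"),
    LoraConfig(task_type="CAUSAL_LM", modules_to_save=["lm_head"]),
    adapter_name="adapter1",
)
# add a second adapter so the batch can mix the two
model.add_adapter("adapter2", LoraConfig(task_type="CAUSAL_LM", modules_to_save=["lm_head"]))

batch = tokenizer(["first sample", "second sample"], return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**batch, adapter_names=["adapter1", "adapter2"])
```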
When loading a PEFT adapter, you now have the option to pass `low_cpu_mem_usage=True` (#1961). This initializes the adapter with empty weights (on the "meta" device) before loading the actual weights, instead of initializing on CPU or GPU, which can speed up loading PEFT adapters. Use this option especially if you have a lot of adapters to load at the same time or if these adapters are very big. Please let us know if you encounter issues with this option, as we may make it the default in the future.
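A sketch of the two places where the option can be passed (the adapter paths below are placeholders):

```python
# Sketch: initialize adapter weights on the "meta" device before the actual
# weights are loaded. The adapter paths are placeholders.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model = PeftModel.from_pretrained(base, "path/to/adapter1", low_cpu_mem_usage=True)

# the same flag is available when loading further adapters onto the model
model.load_adapter("path/to/adapter2", adapter_name="adapter2", low_cpu_mem_usage=True)
```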
Changes
Safe loading of PyTorch weights
Unless indicated otherwise, PEFT adapters are saved and loaded using the secure `safetensors` format. However, we also support the PyTorch format for checkpoints, which relies on Python's inherently insecure pickle protocol. In the future, PyTorch will become stricter when loading these files to improve security by making the option `weights_only=True` the default. This is generally recommended and should not cause any trouble with PEFT checkpoints, which is why PEFT already enables it by default with this release. Please open an issue if this causes trouble.
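For PyTorch-format checkpoints, the change corresponds roughly to the following call (a sketch of the internal behavior, with a placeholder file name):

```python
import torch

# Roughly what PEFT now does when it encounters a PyTorch-format checkpoint
# (adapter_model.bin) instead of a safetensors file: the pickle-based loader
# is restricted to plain tensors and primitives via `weights_only=True`.
state_dict = torch.load("adapter_model.bin", map_location="cpu", weights_only=True)
```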
What's Changed
- Bump version to 0.12.1.dev0 by @BenjaminBossan in #1950
- CI Fix Windows permission error on merge test by @BenjaminBossan in #1952
- Check if past_key_values is provided when using prefix_tuning in peft_model by @Nidhogg-lyz in #1942
- Add lora+ implementation by @kallewoof in #1915
- FIX: New bloom changes breaking prompt learning by @BenjaminBossan in #1969
- ENH Update VeRA preconfigured models by @BenjaminBossan in #1941
- fix: lora+: include lr in optimizer kwargs by @kallewoof in #1973
- FIX active_adapters for transformers models by @BenjaminBossan in #1975
- FIX Loading adapter honors offline mode by @BenjaminBossan in #1976
- chore: Update CI configuration for workflows by @XciD in #1985
- Cast to fp32 if using bf16 weights on cpu during `merge_and_unload` by @snarayan21 in #1978
- AdaLora: Trigger warning when user uses 'r' inplace of 'init_r' by @bhargavyagnik in #1981
- [Add] scaling LoRA adapter weights with a context manager by @ariG23498 in #1951
- DOC Small fixes for HQQ and section title by @BenjaminBossan in #1986
- Add docs and examples for X-LoRA by @EricLBuehler in #1970
- fix: fix docker build gpus by @XciD in #1987
- FIX: Adjust transformers version check for bloom by @BenjaminBossan in #1992
- [Hotfix] Fix BOFT mixed precision by @Edenzzzz in #1925
- [Suggestions] Updates suggested for `helper.rescale_adapter_scale` by @ariG23498 in #1989
- MAINT: Default to loading weights only for torch.load by @BenjaminBossan in #1993
- BOFT bug fix when saving by @Zeju1997 in #1994
- FIX Import error in BOFT half precision test by @BenjaminBossan in #1995
- Update lora.md (typos) by @nir-sh-automat-it in #2003
- TST Add LNTuningConfig and LoKrConfig to tests by @BenjaminBossan in #2005
- ENH: Warn when a user provided model name in the config renamed by @BenjaminBossan in #2004
- FIX CI Correctly report outcome of bnb import test by @BenjaminBossan in #2007
- Update docs for X-LoRA and some bugfixes by @EricLBuehler in #2002
- TST: Potentially Skip 8bit bnb regression test if compute capability is too low by @BenjaminBossan in #1998
- CI Activate single core multi backend bnb tests by @BenjaminBossan in #2008
- Fix usage of deprecated parameters/functions in X-LoRA by @EricLBuehler in #2010
- [tests] enable `test_vera_dtypes` on XPU by @faaany in #2017
- CI Remove regression tests from BNB CI by @BenjaminBossan in #2024
- [tests] enable regression tests on XPU by @faaany in #2019
- ENH: Better error msg for replace_lora_weights_loftq when using a local model. by @BenjaminBossan in #2022
- [tests] make cuda-only cases in `TestModelAndLayerStatus` device-agnostic by @faaany in #2026
- [tests] enable `test_mixed_adapter_batches_lora_opt_timing` on XPU by @faaany in #2021
- MAINT: Update ruff version to ~0.6.1 by @BenjaminBossan in #1965
- ENH Raise error when applying modules_to_save on tuner layer by @BenjaminBossan in #2028
- FIX: Don't target the classification head when using target_modules="all-linear" by @BenjaminBossan in #2033
- [tests] enable cuda-only tests in `test_common_gpu.py` to work on XPU by @faaany in #2031
- [Add] DoRA Embedding by @ariG23498 in #2006
- [tests] enable `test_gpu_examples.py` on XPU by @faaany in #2036
- Bug: set correct pre-commit-hooks version by @ltoniazzi in #2034
- Warn if using tied target module with `tie_word_embeddings` by @ltoniazzi in #2025
- ENH: Faster adapter loading if there are a lot of target modules by @BenjaminBossan in #2045
- FIX: Error with OLoRA init when using bnb by @BenjaminBossan in #2011
- FIX: Small numerical discrepancy for p-tuning after loading the model by @BenjaminBossan in #2047
- Add VB-LoRA by @leo-yangli in #2039
- Fixing scalings logging test by @EricLBuehler in #2042
- TST: Fewer inference steps for stable diffusion tests by @BenjaminBossan in #2051
- TST Speed up vision model tests by @BenjaminBossan in #2058
- TST: Make X-LoRA tests faster by @BenjaminBossan in #2059
- Update permissions for githubtoken stale.yml by @glegendre01 in #2061
- MAINT: Give stale bot permissions for PRs too by @BenjaminBossan in #2064
- avoid saving boft_P in adapter model by @sywangyi in #2050
- fix arguments for PiSSA preprocess by @keakon in #2053
- Apply deprecated `evaluation_strategy` by @muellerzr in #1664
- fixing multiple LoRA in the same batch or vit by @saeid93 in #1990
- FIX: Bug that prevents BOFT from loading multiple adapters by @BenjaminBossan in #2068
- [tests] skip some tests for XPU devices by @faaany in #2074
- ENH: PiSSA/OLoRA: Preserve original config on save by @BenjaminBossan in #2077
- Expose bias to ModulesToSaveWrapper by @dengdifan in #2081
- Update setup.py to update contact info by @sayakpaul in #2086
- ENH: Allow empty initialization of adapter weight by @BenjaminBossan in #1961
- ENH: Add default target layers for gemma2 architecture by @BenjaminBossan in #2078
- FIX: Bug in find_minimal_target_modules by @BenjaminBossan in #2083
- Fix func docstring by @kwonmha in #2087
- ENH: Better DoRA check in mixed adapter batch inference by @BenjaminBossan in #2089
New Contributors
- @Nidhogg-lyz made their first contribution in #1942
- @XciD made their first contribution in #1985
- @bhargavyagnik made their first contribution in #1981
- @ariG23498 made their first contribution in #1951
- @Edenzzzz made their first contribution in #1925
- @Zeju1997 made their first contribution in #1994
- @nir-sh-automat-it made their first contribution in #2003
- @faaany made their first contribution in #2017
- @ltoniazzi made their first contribution in #2034
- @leo-yangli made their first contribution in #2039
- @glegendre01 made their first contribution in #2061
- @keakon made their first contribution in #2053
- @muellerzr made their first contribution in #1664
- @saeid93 made their first contribution in #1990
- @dengdifan made their first contribution in #2081
- @kwonmha made their first contribution in #2087
Full Changelog: v0.12.0...v0.13.0