
Bug: BOFT forward/merging with CUDA #2219

Closed
BenjaminBossan opened this issue Nov 18, 2024 · 18 comments · Fixed by #2242
Labels
bug Something isn't working

Comments

@BenjaminBossan
Member

System Info

At least for me, there is a bug where, when the fbd_cuda extension is used for BOFT, the BOFT results are all zeros.

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder
  • My own task or dataset (give details below)

Reproduction

With a CUDA-enabled device, run:

pytest tests/test_custom_models.py -k "test_merge_layers_multi and boft"

This fails for me because both BOFT adapters produce the same result. Investigating why, this line just produces an all-zeros matrix, so both adapters produce the same output. On CPU, the matrix is not all zeros.

There are more failing tests than just these.
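To make the failure mode concrete, here is a toy illustration in plain Python (this is not the PEFT/fbd_cuda code; the matrices and the rotation angle are made up). BOFT multiplies the base weight by an orthogonal rotation, roughly W' = R @ W; if the kernel erroneously returns R == 0 instead of a rotation, every adapter collapses to the same all-zeros delta, which is why two different adapters suddenly agree:

```python
import math

def matmul(a, b):
    # Naive matrix multiply for tiny illustrative matrices.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

base_w = [[1.0, 2.0], [3.0, 4.0]]

# A valid 2x2 rotation (orthogonal matrix), as the CPU path produces.
theta = 0.3
rotation = [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

# What the buggy CUDA path effectively returns instead.
zeros = [[0.0, 0.0], [0.0, 0.0]]

adapted_ok = matmul(rotation, base_w)   # distinct, non-trivial adapted weight
adapted_bug = matmul(zeros, base_w)     # all zeros, adapters indistinguishable
print(adapted_bug)  # -> [[0.0, 0.0], [0.0, 0.0]]
```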

Expected behavior

The tests should pass.

@BenjaminBossan BenjaminBossan added the bug Something isn't working label Nov 18, 2024
@BenjaminBossan
Member Author

@Zeju1997 Could you please check if you can reproduce this?

@d-kleine
Contributor

d-kleine commented Nov 25, 2024

If you mean the coverage %, I can reproduce the output with the provided test command (if that helps you):

========================================================================================================================= test session starts =========================================================================================================================
platform win32 -- Python 3.11.10, pytest-8.3.3, pluggy-1.5.0
rootdir: C:\Users\dk\Desktop\peft
configfile: pyproject.toml
plugins: anyio-4.6.2.post1, cov-6.0.0
collected 4056 items / 4054 deselected / 2 selected

tests\test_custom_models.py ..c:\Users\dk\anaconda3\envs\peft\Lib\site-packages\coverage\control.py:892: CoverageWarning: No data was collected. (no-data-collected)
  self._warn("No data was collected.", slug="no-data-collected")
                                                                                                                                                                                                                                   [100%]

========================================================================================================================== warnings summary =========================================================================================================================== 
..\..\anaconda3\envs\peft\Lib\site-packages\accelerate\utils\other.py:220
  c:\Users\dk\anaconda3\envs\peft\Lib\site-packages\accelerate\utils\other.py:220: DeprecationWarning: numpy.core is deprecated and has been renamed to numpy._core. The numpy._core namespace contains private NumPy internals and its use is discouraged, as NumPy internals can change without warning in any release. In practice, most real-world usage of numpy.core is to access functionality in the public NumPy API. If that is the case, use the public NumPy API. If not, you are using NumPy internals. If you would still like to access an internal attribute, use numpy._core.multiarray.
    np.core.multiarray._reconstruct,

tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_15_BOFT_Same
  c:\Users\dk\anaconda3\envs\peft\Lib\site-packages\torch\utils\cpp_extension.py:382: UserWarning: Error checking compiler version for gcc: Command 'gcc' returned non-zero exit status 1.
    warnings.warn(f'Error checking compiler version for {compiler}: {error}')

tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_15_BOFT_Same
  c:\Users\dk\anaconda3\envs\peft\Lib\site-packages\peft\tuners\boft\layer.py:95: UserWarning: Failed to load the CUDA extension: CUDA_HOME environment variable is not set. Please set it to your CUDA install root., check if ninja is available.
    warnings.warn(f"Failed to load the CUDA extension: {e}, check if ninja is available.")

tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_15_BOFT_Same
tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_16_BOFT_Different
  c:\Users\dk\anaconda3\envs\peft\Lib\site-packages\peft\tuners\boft\layer.py:96: UserWarning: Setting boft_n_butterfly_factor to 1 to speed up the finetuning process.
    warnings.warn("Setting boft_n_butterfly_factor to 1 to speed up the finetuning process.")

tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_15_BOFT_Same
tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_16_BOFT_Different
  c:\Users\dk\anaconda3\envs\peft\Lib\site-packages\peft\tuners\boft\layer.py:95: UserWarning: Failed to load the CUDA extension: DLL load failed while importing fbd_cuda: Das angegebene Modul wurde nicht gefunden., check if ninja is available.
    warnings.warn(f"Failed to load the CUDA extension: {e}, check if ninja is available.")

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform win32, python 3.11.10-final-0 ----------
Name                                                  Stmts   Miss  Cover   Missing
-----------------------------------------------------------------------------------
src\peft\__init__.py                                      8      8     0%   20-115
src\peft\auto.py                                         69     69     0%   15-172
src\peft\config.py                                      130    130     0%   14-339
src\peft\helpers.py                                      52     52     0%   15-211
src\peft\import_utils.py                                 65     65     0%   14-117
src\peft\mapping.py                                      46     46     0%   15-258
src\peft\mixed_model.py                                 152    152     0%   15-480
src\peft\optimizers\__init__.py                           2      2     0%   15-18
src\peft\optimizers\loraplus.py                          36     36     0%   19-121
src\peft\peft_model.py                                 1226   1226     0%   15-3042
src\peft\tuners\__init__.py                              22     22     0%   20-49
src\peft\tuners\_buffer_dict.py                          61     61     0%   10-160
src\peft\tuners\adalora\__init__.py                      14     14     0%   15-37
src\peft\tuners\adalora\bnb.py                           76     76     0%   15-145
src\peft\tuners\adalora\config.py                        32     32     0%   15-78
src\peft\tuners\adalora\gptq.py                          32     32     0%   14-68
src\peft\tuners\adalora\layer.py                        219    219     0%   15-358
src\peft\tuners\adalora\model.py                        162    162     0%   15-359
src\peft\tuners\adaption_prompt\__init__.py               4      4     0%   14-19
src\peft\tuners\adaption_prompt\config.py                25     25     0%   15-81
src\peft\tuners\adaption_prompt\layer.py                 46     46     0%   15-128
src\peft\tuners\adaption_prompt\model.py                 84     84     0%   15-163
src\peft\tuners\adaption_prompt\utils.py                 50     50     0%   14-121
src\peft\tuners\boft\__init__.py                          4      4     0%   15-20
src\peft\tuners\boft\config.py                           30     30     0%   18-158
src\peft\tuners\boft\fbd\__init__.py                      0      0   100%
src\peft\tuners\boft\layer.py                           486    486     0%   18-984
src\peft\tuners\boft\model.py                           165    165     0%   18-353
src\peft\tuners\bone\__init__.py                          4      4     0%   15-20
src\peft\tuners\bone\config.py                           24     24     0%   15-125
src\peft\tuners\bone\layer.py                           125    125     0%   14-255
src\peft\tuners\bone\model.py                           151    151     0%   15-336
src\peft\tuners\cpt\__init__.py                           3      3     0%   16-20
src\peft\tuners\cpt\config.py                            32     32     0%   15-98
src\peft\tuners\cpt\model.py                             84     84     0%   15-200
src\peft\tuners\fourierft\__init__.py                     4      4     0%   15-20
src\peft\tuners\fourierft\config.py                      30     30     0%   15-205
src\peft\tuners\fourierft\layer.py                      104    104     0%   15-190
src\peft\tuners\fourierft\model.py                      168    168     0%   14-350
src\peft\tuners\hra\__init__.py                           4      4     0%   15-20
src\peft\tuners\hra\config.py                            27     27     0%   15-136
src\peft\tuners\hra\layer.py                            231    231     0%   15-440
src\peft\tuners\hra\model.py                            153    153     0%   15-341
src\peft\tuners\ia3\__init__.py                          13     13     0%   15-36
src\peft\tuners\ia3\bnb.py                               67     67     0%   15-129
src\peft\tuners\ia3\config.py                            22     22     0%   15-112
src\peft\tuners\ia3\layer.py                            191    191     0%   15-327
src\peft\tuners\ia3\model.py                            215    215     0%   14-498
src\peft\tuners\ln_tuning\__init__.py                     3      3     0%   15-19
src\peft\tuners\ln_tuning\config.py                      13     13     0%   14-70
src\peft\tuners\ln_tuning\layer.py                       60     60     0%   15-117
src\peft\tuners\ln_tuning\model.py                       92     92     0%   14-205
src\peft\tuners\loha\__init__.py                          4      4     0%   15-20
src\peft\tuners\loha\config.py                           25     25     0%   14-139
src\peft\tuners\loha\layer.py                           180    180     0%   15-369
src\peft\tuners\loha\model.py                            20     20     0%   15-116
src\peft\tuners\lokr\__init__.py                          4      4     0%   15-20
src\peft\tuners\lokr\config.py                           28     28     0%   14-151
src\peft\tuners\lokr\layer.py                           196    196     0%   15-425
src\peft\tuners\lokr\model.py                            21     21     0%   15-118
src\peft\tuners\lora\__init__.py                         18     18     0%   15-57
src\peft\tuners\lora\aqlm.py                             46     46     0%   15-100
src\peft\tuners\lora\awq.py                              53     53     0%   14-108
src\peft\tuners\lora\bnb.py                             275    275     0%   14-538
src\peft\tuners\lora\config.py                           91     91     0%   15-488
src\peft\tuners\lora\dora.py                             95     95     0%   15-188
src\peft\tuners\lora\eetq.py                             53     53     0%   14-104
src\peft\tuners\lora\eva.py                             333    333     0%   15-738
src\peft\tuners\lora\gptq.py                             49     49     0%   15-114
src\peft\tuners\lora\hqq.py                             138    138     0%   14-258
src\peft\tuners\lora\layer.py                           664    664     0%   14-1207
src\peft\tuners\lora\model.py                           411    411     0%   14-937
src\peft\tuners\lora\torchao.py                          78     78     0%   14-146
src\peft\tuners\lora\tp_layer.py                        187    187     0%   14-397
src\peft\tuners\lycoris_utils.py                        208    208     0%   14-436
src\peft\tuners\mixed\__init__.py                         2      2     0%   15-18
src\peft\tuners\mixed\model.py                          190    190     0%   14-341
src\peft\tuners\multitask_prompt_tuning\__init__.py       3      3     0%   15-19
src\peft\tuners\multitask_prompt_tuning\config.py        21     21     0%   15-62
src\peft\tuners\multitask_prompt_tuning\model.py         48     48     0%   15-120
src\peft\tuners\oft\__init__.py                           4      4     0%   15-20
src\peft\tuners\oft\config.py                            39     39     0%   15-207
src\peft\tuners\oft\layer.py                            380    380     0%   14-747
src\peft\tuners\oft\model.py                            165    165     0%   15-374
src\peft\tuners\p_tuning\__init__.py                      3      3     0%   15-19
src\peft\tuners\p_tuning\config.py                       17     17     0%   15-60
src\peft\tuners\p_tuning\model.py                        34     34     0%   17-130
src\peft\tuners\poly\__init__.py                          4      4     0%   15-20
src\peft\tuners\poly\config.py                           21     21     0%   15-101
src\peft\tuners\poly\layer.py                            89     89     0%   15-165
src\peft\tuners\poly\model.py                           111    111     0%   15-189
src\peft\tuners\poly\router.py                           38     38     0%   15-81
src\peft\tuners\prefix_tuning\__init__.py                 3      3     0%   15-19
src\peft\tuners\prefix_tuning\config.py                  10     10     0%   15-42
src\peft\tuners\prefix_tuning\model.py                   19     19     0%   17-80
src\peft\tuners\prompt_tuning\__init__.py                 3      3     0%   15-19
src\peft\tuners\prompt_tuning\config.py                  23     23     0%   15-85
src\peft\tuners\prompt_tuning\model.py                   30     30     0%   15-91
src\peft\tuners\tuners_utils.py                         499    499     0%   14-1175
src\peft\tuners\vblora\__init__.py                        4      4     0%   15-20
src\peft\tuners\vblora\config.py                         29     29     0%   15-196
src\peft\tuners\vblora\layer.py                         130    130     0%   15-249
src\peft\tuners\vblora\model.py                         198    198     0%   14-447
src\peft\tuners\vera\__init__.py                         13     13     0%   15-36
src\peft\tuners\vera\bnb.py                             207    207     0%   14-409
src\peft\tuners\vera\config.py                           28     28     0%   14-158
src\peft\tuners\vera\layer.py                           151    151     0%   15-294
src\peft\tuners\vera\model.py                           237    237     0%   15-509
src\peft\tuners\xlora\__init__.py                         3      3     0%   15-19
src\peft\tuners\xlora\classifier.py                      88     88     0%   14-195
src\peft\tuners\xlora\config.py                          36     36     0%   14-102
src\peft\tuners\xlora\layer.py                          110    110     0%   14-223
src\peft\tuners\xlora\model.py                          207    207     0%   14-519
src\peft\utils\__init__.py                                5      5     0%   21-56
src\peft\utils\constants.py                              41     41     0%   15-320
src\peft\utils\hotswap.py                                61     61     0%   14-220
src\peft\utils\incremental_pca.py                       148    148     0%   15-338
src\peft\utils\integrations.py                           98     98     0%   15-172
src\peft\utils\loftq_utils.py                           233    233     0%   18-410
src\peft\utils\merge_utils.py                            79     79     0%   15-268
src\peft\utils\other.py                                 356    356     0%   14-726
src\peft\utils\peft_types.py                             30     30     0%   19-90
src\peft\utils\save_and_load.py                         300    300     0%   14-575
-----------------------------------------------------------------------------------
TOTAL                                                 12843  12843     0%

================================================================================================================ slowest 10 durations =================================================================================================================
1.47s call     tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_15_BOFT_Same
0.02s call     tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_16_BOFT_Different

(4 durations < 0.005s hidden.  Use -vv to show these durations.)
=================================================================================================== 2 passed, 4054 deselected, 8 warnings in 19.43s ===================================================================================================

@BenjaminBossan
Member Author

Thanks for checking @d-kleine. What I meant is that the tests fail for me. The test coverage is currently not my concern, though your log shows that no test was run; I'm not sure why.

@d-kleine
Contributor

I have used the current version of the main branch (0.13.3.dev0). On the latest release 0.13.2, it also fails for me:

platform win32 -- Python 3.11.10, pytest-8.3.3, pluggy-1.5.0
rootdir: C:\Users\dk\Desktop\peft
configfile: pyproject.toml
plugins: anyio-4.6.2.post1, cov-6.0.0
collected 0 items / 1 error
c:\Users\dk\anaconda3\envs\peft\Lib\site-packages\coverage\control.py:892: CoverageWarning: No data was collected. (no-data-collected)
  self._warn("No data was collected.", slug="no-data-collected")

=============================================================================================================================== ERRORS ================================================================================================================================ 
____________________________________________________________________________________________________________ ERROR collecting tests/test_custom_models.py _____________________________________________________________________________________________________________ 
ImportError while importing test module 'C:\Users\dk\Desktop\peft\tests\test_custom_models.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
..\..\anaconda3\envs\peft\Lib\importlib\__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
tests\test_custom_models.py:35: in <module>
    from peft import (
E   ImportError: cannot import name 'BoneConfig' from 'peft' (c:\Users\dk\anaconda3\envs\peft\Lib\site-packages\peft\__init__.py)
========================================================================================================================== warnings summary =========================================================================================================================== 
..\..\anaconda3\envs\peft\Lib\site-packages\accelerate\utils\other.py:220
  c:\Users\dk\anaconda3\envs\peft\Lib\site-packages\accelerate\utils\other.py:220: DeprecationWarning: numpy.core is deprecated and has been renamed to numpy._core. The numpy._core namespace contains private NumPy internals and its use is discouraged, as NumPy internals can change without warning in any release. In practice, most real-world usage of numpy.core is to access functionality in the public NumPy API. If that is the case, use the public NumPy API. If not, you are using NumPy internals. If you would still like to access an internal attribute, use numpy._core.multiarray.
    np.core.multiarray._reconstruct,

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform win32, python 3.11.10-final-0 ----------
(coverage table omitted: identical to the table in the previous log)

======================================================================================================================= short test summary info ======================================================================================================================= 
ERROR tests/test_custom_models.py
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 
===================================================================================================================== 1 warning, 1 error in 8.36s ===================================================================================================================== 

@BenjaminBossan
Member Author

I have used the current version of the main branch (0.13.3.dev0).

Thanks again for testing. Using the from-source install should correspond to my setup, so I'm not sure what the difference is. Do you see the test running in the pytest logs?

On the latest release 0.13.2, it fails also for me:

This error is unrelated; it appears to be an issue with your environment. Could it be that the tests are from PEFT main but the installed PEFT version is the latest release?
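One hedged way to check for this kind of mismatch (the helper name and the example package are illustrative, not part of PEFT): compare the version and on-disk location of the package that Python actually imports. If the reported path points into site-packages rather than your local checkout, the tests and the library are out of sync.

```python
import importlib.metadata
import importlib.util
from pathlib import Path

def locate(pkg: str):
    """Return (installed version, import path) for pkg, or (None, None) if absent."""
    spec = importlib.util.find_spec(pkg)
    if spec is None or spec.origin is None:
        return None, None
    try:
        version = importlib.metadata.version(pkg)
    except importlib.metadata.PackageNotFoundError:
        version = None  # importable but not installed as a distribution
    return version, str(Path(spec.origin).parent)

# Stdlib package as a runnable example; locally you would call locate("peft")
# and check that the path is your cloned repo, not site-packages.
print(locate("json"))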

@d-kleine
Contributor

I have installed peft 0.13.2 with pip install peft just to be sure.

I am using

  • Windows 11 Home x64 (I can switch to Ubuntu if necessary)
  • Python 3.11
  • PyTorch 2.5.1 with CUDA 12.4

Please provide your output, then I can test whether I can reproduce it or not.

@BenjaminBossan
Member Author

Thanks again for helping with this.

I have installed peft 0.13.2 with pip install peft just to be sure.

In addition to that, you probably also have a cloned/forked PEFT repo locally, which is where you're running the tests, right? If that repo is out of sync with the installed PEFT version, you can get errors like the one above (ImportError: cannot import name 'BoneConfig' from 'peft'). The way I personally handle this is to keep a cloned/forked repo locally, install it with python -m pip install -e ., and then switch versions via git (e.g. git checkout v0.13.2).
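That workflow, sketched as shell commands (the fork URL and version tag are illustrative; adjust them to your setup):

```shell
# clone the repo and install it in editable mode, so that `import peft`
# always resolves to whatever is currently checked out in the working tree
git clone https://github.com/huggingface/peft.git
cd peft
python -m pip install -e .

# to test against a specific release, switch the working tree to its tag;
# the editable install now reflects that version, with no reinstall needed
git checkout v0.13.2
pytest tests/test_custom_models.py -k "test_merge_layers_multi and boft"

# switch back to the development branch afterwards
git checkout main
```

With an editable install, the installed package and the test suite always come from the same commit, which rules out the version-skew errors seen above.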

Please provide your output, then I can test whether I can reproduce it or not.

I run

pytest tests/test_custom_models.py -k "test_merge_layers_multi and boft"

on a machine with a CUDA-enabled GPU and I get:

Log
FAILED tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_15_BOFT_Same - assert not True
FAILED tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_16_BOFT_Different - RuntimeError: CUDA error: an illegal memory access was encountered
================================================================================================================== 2 failed, 4054 deselected, 1 warning in 4.93s ===================================================================================================================
(peft) name:~/work/forks/peft$ CUDA_VISIBLE_DEVICES=0 pytest tests/test_custom_models.py -k "test_merge_layers_multi and boft"
=============================================================================================================================== test session starts ================================================================================================================================
platform linux -- Python 3.11.9, pytest-8.2.2, pluggy-1.5.0
rootdir: /home/name/work/forks/peft
configfile: pyproject.toml
plugins: requests-mock-1.12.1, xdist-3.6.1, anyio-4.2.0, cov-5.0.0
collected 4056 items / 4054 deselected / 2 selected                                                                                                                                                                                                                                

tests/test_custom_models.py FF                                                                                                                                                                                                                                               [100%]

===================================================================================================================================== FAILURES =====================================================================================================================================
________________________________________________________________________________________________________ MultipleActiveAdaptersTester.test_merge_layers_multi_15_BOFT_Same _________________________________________________________________________________________________________

a = (<tests.test_custom_models.MultipleActiveAdaptersTester testMethod=test_merge_layers_multi_15_BOFT_Same>,), kw = {}

    @wraps(func)
    def standalone_func(*a, **kw):
>       return func(*(a + p.args), **p.kwargs, **kw)

../../../anaconda3/envs/peft/lib/python3.11/site-packages/parameterized/parameterized.py:620: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <tests.test_custom_models.MultipleActiveAdaptersTester testMethod=test_merge_layers_multi_15_BOFT_Same>, test_name = 'BOFT Same', tuner_method = 'boft', config_cls = <class 'peft.tuners.boft.config.BOFTConfig'>
config_kwargs_1 = {'boft_block_size': 2, 'init_weights': False, 'target_modules': ['lin0']}, config_kwargs_2 = {'boft_block_size': 2, 'init_weights': False, 'target_modules': ['lin0']}

    @parameterized.expand(MULTIPLE_ACTIVE_ADAPTERS_TEST_CASES)
    def test_merge_layers_multi(self, test_name, tuner_method, config_cls, config_kwargs_1, config_kwargs_2):
        torch.manual_seed(0)
        model = MLP(bias=tuner_method != "ia3")
        model.eval()
    
        config_1 = config_cls(**config_kwargs_1)
        config_2 = config_cls(**config_kwargs_2)
    
        model = get_peft_model(model, config_1)
    
        dummy_input = self.prepare_inputs_for_testing()
        model.eval()
    
        with torch.inference_mode():
            logits_adapter_1 = model(**dummy_input)[0]
    
        model.add_adapter("adapter-2", config_2)
        model.set_adapter("adapter-2")
        model.eval()
    
        with torch.inference_mode():
            logits_adapter_2 = model(**dummy_input)[0]
    
>       assert not torch.allclose(logits_adapter_1, logits_adapter_2, atol=1e-3, rtol=1e-3)
E       assert not True
E        +  where True = <built-in method allclose of type object at 0x7af6540d0240>(tensor([-0.7747, -0.6177]), tensor([-0.7747, -0.6177]), atol=0.001, rtol=0.001)
E        +    where <built-in method allclose of type object at 0x7af6540d0240> = torch.allclose

tests/test_custom_models.py:2019: AssertionError
------------------------------------------------------------------------------------------------------------------------------- Captured stdout call -------------------------------------------------------------------------------------------------------------------------------
ninja: no work to do.
------------------------------------------------------------------------------------------------------------------------------- Captured stderr call -------------------------------------------------------------------------------------------------------------------------------
Using /home/name/.cache/torch_extensions/py311_cu124 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/name/.cache/torch_extensions/py311_cu124/fbd_cuda/build.ninja...
Building extension module fbd_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
Loading extension module fbd_cuda...
______________________________________________________________________________________________________ MultipleActiveAdaptersTester.test_merge_layers_multi_16_BOFT_Different ______________________________________________________________________________________________________

a = (<tests.test_custom_models.MultipleActiveAdaptersTester testMethod=test_merge_layers_multi_16_BOFT_Different>,), kw = {}

    @wraps(func)
    def standalone_func(*a, **kw):
>       return func(*(a + p.args), **p.kwargs, **kw)

../../../anaconda3/envs/peft/lib/python3.11/site-packages/parameterized/parameterized.py:620: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tests/test_custom_models.py:1997: in test_merge_layers_multi
    torch.manual_seed(0)
../../../anaconda3/envs/peft/lib/python3.11/site-packages/torch/_compile.py:32: in inner
    return disable_fn(*args, **kwargs)
../../../anaconda3/envs/peft/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py:632: in _fn
    return fn(*args, **kwargs)
../../../anaconda3/envs/peft/lib/python3.11/site-packages/torch/random.py:46: in manual_seed
    torch.cuda.manual_seed_all(seed)
../../../anaconda3/envs/peft/lib/python3.11/site-packages/torch/cuda/random.py:129: in manual_seed_all
    _lazy_call(cb, seed_all=True)
../../../anaconda3/envs/peft/lib/python3.11/site-packages/torch/cuda/__init__.py:249: in _lazy_call
    callable()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

    def cb():
        for i in range(device_count()):
            default_generator = torch.cuda.default_generators[i]
>           default_generator.manual_seed(seed)
E           RuntimeError: CUDA error: an illegal memory access was encountered
E           CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
E           For debugging consider passing CUDA_LAUNCH_BLOCKING=1
E           Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

../../../anaconda3/envs/peft/lib/python3.11/site-packages/torch/cuda/random.py:127: RuntimeError
================================================================================================================================= warnings summary =================================================================================================================================
tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_15_BOFT_Same
  /home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/utils/cpp_extension.py:1964: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation. 
  If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
    warnings.warn(

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform linux, python 3.11.9-final-0 -----------
Name                                                  Stmts   Miss  Cover   Missing
-----------------------------------------------------------------------------------
src/peft/__init__.py                                      8      0   100%
src/peft/auto.py                                         69     32    54%   52, 73-130
src/peft/config.py                                      130     76    42%   39-43, 67, 75, 88-108, 123-170, 183-204, 215-225, 229-241, 249-266, 275, 339
src/peft/helpers.py                                      52     52     0%   15-211
src/peft/import_utils.py                                 65     36    45%   23, 28-33, 38-44, 52, 58-69, 87-96, 106-117
src/peft/mapping.py                                      46     18    61%   78, 143, 185, 191-195, 202, 209, 220-222, 245-258
src/peft/mixed_model.py                                 152    104    32%   56-71, 75-76, 118-131, 135, 139, 143, 150-168, 182-184, 192-197, 203, 209, 216-220, 249-259, 262-269, 293-303, 306-315, 332, 339, 342, 345, 349, 352, 386-389, 392, 401, 443-480
src/peft/optimizers/__init__.py                           2      2     0%   15-18
src/peft/optimizers/loraplus.py                          36     36     0%   19-121
src/peft/peft_model.py                                 1226   1053    14%   168-170, 191, 196, 201-216, 220-223, 270-429, 484-611, 614-666, 680, 683-686, 694-706, 712-769, 775-799, 812-814, 822-825, 832-834, 849-851, 857-859, 873-907, 941, 948-955, 957, 963-966, 972-976, 1006, 1046, 1050-1060, 1073-1140, 1144-1163, 1210-1310, 1332, 1340, 1355-1395, 1446-1466, 1490-1497, 1511-1563, 1576-1629, 1681-1682, 1696-1765, 1771-1822, 1825-1843, 1846-1917, 1963-1965, 1984-2092, 2097-2165, 2168-2187, 2238-2258, 2282-2289, 2303-2356, 2369-2405, 2459-2479, 2503-2510, 2527-2583, 2597-2648, 2698, 2711-2759, 2808-2878, 2941-3042
src/peft/tuners/__init__.py                              22      0   100%
src/peft/tuners/_buffer_dict.py                          61     40    34%   57-61, 64, 67, 70, 73, 76, 79, 83, 92-94, 98, 102, 106, 122-147, 150-157, 160
src/peft/tuners/adalora/__init__.py                      14      7    50%   27-37
src/peft/tuners/adalora/bnb.py                           76     76     0%   15-145
src/peft/tuners/adalora/config.py                        32      5    84%   57, 60, 70, 74, 78
src/peft/tuners/adalora/gptq.py                          32     27    16%   30-37, 40-68
src/peft/tuners/adalora/layer.py                        219    183    16%   31, 42-46, 49-77, 80-83, 99-106, 121-143, 149-155, 158, 165-186, 189-190, 204-212, 215, 218-220, 223-231, 234-250, 254-271, 276, 279-281, 284-333, 337-345, 349-358
src/peft/tuners/adalora/model.py                        162    138    15%   68-85, 94-102, 117-144, 155-217, 221-227, 231-236, 239-266, 269-297, 300-313, 336-355, 359
src/peft/tuners/adaption_prompt/__init__.py               4      0   100%
src/peft/tuners/adaption_prompt/config.py                25      9    64%   35-36, 41, 73-81
src/peft/tuners/adaption_prompt/layer.py                 46     38    17%   37-55, 67-128
src/peft/tuners/adaption_prompt/model.py                 84     67    20%   44-59, 63-95, 99-108, 112-113, 117-118, 122-128, 132-136, 140-146, 150-152, 156-163
src/peft/tuners/adaption_prompt/utils.py                 50     43    14%   30-32, 46-57, 66-116, 121
src/peft/tuners/boft/__init__.py                          4      0   100%
src/peft/tuners/boft/config.py                           30      3    90%   152, 154, 158
src/peft/tuners/boft/fbd/__init__.py                      0      0   100%
src/peft/tuners/boft/layer.py                           486    313    36%   60, 69, 94-97, 136-138, 153-154, 165-191, 229-232, 238-242, 245-252, 255-259, 269-271, 278, 284, 290-305, 309, 314-319, 326, 332-336, 372-378, 412-436, 498-532, 538-551, 561-583, 589-591, 593, 600, 613-615, 625, 650-651, 670-674, 686-784, 799-842, 848-872, 883-905, 908-980, 983-984
src/peft/tuners/boft/model.py                           165     85    48%   90, 110, 128, 147, 150-152, 155-159, 178-187, 192, 198-202, 204-207, 220, 224-230, 233-235, 238, 241-249, 255-256, 263-265, 277-302, 311-324, 344, 353
src/peft/tuners/bone/__init__.py                          4      0   100%
src/peft/tuners/bone/config.py                           24      2    92%   121, 125
src/peft/tuners/bone/layer.py                           125    102    18%   32-44, 60-83, 86, 89, 92-99, 102-106, 122-125, 140-164, 170-178, 188-221, 224-251, 254-255
src/peft/tuners/bone/model.py                           151    118    22%   96-97, 104, 116-134, 141-166, 169-187, 191-203, 207-212, 215-221, 224-226, 229, 232-240, 243-249, 253-259, 268-285, 294-307, 327, 336
src/peft/tuners/cpt/__init__.py                           3      0   100%
src/peft/tuners/cpt/config.py                            32     16    50%   76-98
src/peft/tuners/cpt/model.py                             84     72    14%   39-61, 75-82, 88-99, 102-121, 129-139, 161-200
src/peft/tuners/fourierft/__init__.py                     4      0   100%
src/peft/tuners/fourierft/config.py                      30     10    67%   188-205
src/peft/tuners/fourierft/layer.py                      104     82    21%   33-52, 55-79, 83-84, 87-92, 108-112, 127-149, 155-161, 164, 167-186, 189-190
src/peft/tuners/fourierft/model.py                      168    131    22%   62, 73-74, 81, 93-124, 127-152, 155-173, 177-205, 209-214, 217-223, 226-228, 235, 242-250, 258-264, 268-274, 283-299, 308-322, 341, 350
src/peft/tuners/hra/__init__.py                           4      0   100%
src/peft/tuners/hra/config.py                            27      3    89%   128, 132, 136
src/peft/tuners/hra/layer.py                            231    201    13%   33-48, 66-92, 95-102, 105, 108-115, 118-122, 139-142, 157-181, 187-195, 198-225, 228-252, 255-256, 271-274, 289-336, 342-359, 362-389, 392-436, 439-440
src/peft/tuners/hra/model.py                            153    120    22%   96-97, 104, 116-135, 143-168, 171-189, 193-208, 212-217, 220-226, 229-231, 234, 237-245, 248-254, 258-264, 273-290, 299-312, 332, 341
src/peft/tuners/ia3/__init__.py                          13      7    46%   26-36
src/peft/tuners/ia3/bnb.py                               67     67     0%   15-129
src/peft/tuners/ia3/config.py                            22      1    95%   112
src/peft/tuners/ia3/layer.py                            191    164    14%   31-52, 57-65, 68-70, 85-90, 105-132, 138-155, 158-184, 197-203, 207-214, 229-257, 263-280, 283-310, 317-319, 325-327
src/peft/tuners/ia3/model.py                            215    176    18%   79, 84-149, 153, 156-158, 170-190, 198-202, 205-229, 233-238, 241-247, 250-252, 259, 266, 283-289, 293-305, 323-351, 378, 385, 394-407, 415-447, 466-498
src/peft/tuners/ln_tuning/__init__.py                     3      0   100%
src/peft/tuners/ln_tuning/config.py                      13      2    85%   69-70
src/peft/tuners/ln_tuning/layer.py                       60     45    25%   33-38, 41, 51-61, 64-83, 86-92, 98-113, 116-117
src/peft/tuners/ln_tuning/model.py                       92     64    30%   70, 74-79, 84-90, 102-105, 113-118, 121-135, 138-142, 145, 148-150, 157, 164, 167-173, 182-197, 200, 205
src/peft/tuners/loha/__init__.py                          4      0   100%
src/peft/tuners/loha/config.py                           25      1    96%   139
src/peft/tuners/loha/layer.py                           180    143    21%   31-40, 44, 48-61, 69-76, 84-91, 116-152, 156-189, 192-215, 232-236, 241-243, 246-247, 265-269, 276-279, 289-290, 299-301, 305-316, 322-327, 331-361, 365, 369
src/peft/tuners/loha/model.py                            20      9    55%   105-116
src/peft/tuners/lokr/__init__.py                          4      0   100%
src/peft/tuners/lokr/config.py                           28      1    96%   151
src/peft/tuners/lokr/layer.py                           196    167    15%   39-49, 53, 72-95, 98-111, 114-127, 131-144, 172-222, 226-251, 254-277, 296-300, 305-307, 310-311, 331-335, 342-345, 355-356, 392-411, 415-416, 420-425
src/peft/tuners/lokr/model.py                            21     10    52%   106-118
src/peft/tuners/lora/__init__.py                         18     10    44%   42-57
src/peft/tuners/lora/aqlm.py                             46     34    26%   40-44, 48-71, 74-75, 89-100
src/peft/tuners/lora/awq.py                              53     39    26%   41-49, 52-75, 78-79, 87-108
src/peft/tuners/lora/bnb.py                             275    275     0%   14-538
src/peft/tuners/lora/config.py                           91     28    69%   117-120, 399-401, 415, 419, 423, 426, 430-438, 440-441, 444-445, 447, 461-467, 486-488
src/peft/tuners/lora/dora.py                             95     72    24%   27-28, 32-35, 39-63, 72-100, 103-104, 113-126, 129-130, 136-140, 147-172, 175-176, 181-182, 187-188
src/peft/tuners/lora/eetq.py                             53     46    13%   24-82, 90-104
src/peft/tuners/lora/eva.py                             333    285    14%   52-58, 62-74, 78, 81-103, 128-145, 149-168, 187-188, 192, 196-198, 209-212, 219-223, 230-238, 249-254, 272-282, 286, 302-486, 494-553, 613-655, 711-738
src/peft/tuners/lora/gptq.py                             49     39    20%   37-47, 59-82, 85-86, 100-114
src/peft/tuners/lora/hqq.py                             138    118    14%   46-51, 74-109, 115-139, 142, 155-188, 191-240, 243-244, 248-258
src/peft/tuners/lora/layer.py                           664    596    10%   42-103, 109-151, 154-171, 174-216, 219-251, 254-272, 275-297, 300, 303-304, 307-310, 313-320, 323-330, 334-357, 364-389, 418-432, 447-507, 513-527, 537-561, 564-607, 610-611, 628-632, 643-678, 681-692, 707-729, 735-741, 751-775, 782-805, 808-809, 821-857, 860-861, 878-884, 895-934, 937, 940-950, 954, 969-1027, 1033-1047, 1057-1094, 1097-1134, 1137-1138, 1144-1147, 1150, 1156-1159, 1162, 1171-1207
src/peft/tuners/lora/model.py                           411    352    14%   61-62, 141, 152-153, 160, 172-173, 184-237, 240-277, 280-298, 304-364, 368-373, 376-382, 385-387, 394, 401-409, 426-432, 437-469, 476-480, 484-490, 499-524, 533-592, 647-724, 743-794, 806-838, 847-860, 890, 899, 907-937
src/peft/tuners/lora/torchao.py                          78     63    19%   35-37, 41-44, 47-81, 84-114, 117-118, 127-146
src/peft/tuners/lora/tp_layer.py                        187    166    11%   55-95, 111-175, 178-222, 237-297, 303-317, 327-351, 354-355, 364-397
src/peft/tuners/lycoris_utils.py                        208    148    29%   69-79, 92-95, 121-141, 147-150, 153-160, 166-172, 175-182, 205, 209-214, 218, 234-272, 275-277, 281-283, 286-307, 310-312, 321-348, 355, 362, 382, 391, 408-414, 423-436
src/peft/tuners/mixed/__init__.py                         2      0   100%
src/peft/tuners/mixed/model.py                          190    155    18%   57, 66-74, 81, 89-100, 103-131, 134-153, 157-179, 183-188, 191-193, 196, 199-207, 210-216, 220-227, 236-275, 278, 287-310, 329, 338, 341
src/peft/tuners/multitask_prompt_tuning/__init__.py       3      0   100%
src/peft/tuners/multitask_prompt_tuning/config.py        21      2    90%   61-62
src/peft/tuners/multitask_prompt_tuning/model.py         48     40    17%   30-106, 109-120
src/peft/tuners/oft/__init__.py                           4      0   100%
src/peft/tuners/oft/config.py                            39      6    85%   182, 184, 188, 201-207
src/peft/tuners/oft/layer.py                            380    339    11%   39-40, 51-68, 91-116, 120, 123-127, 130-137, 140-144, 150-214, 220-231, 240-248, 252-261, 266-276, 282-296, 317-324, 339-373, 379-392, 402-416, 419-469, 472-473, 493-500, 507-560, 575-618, 624-648, 658-672, 675-743, 746-747
src/peft/tuners/oft/model.py                            165    131    21%   95, 106-107, 114, 126-150, 162-187, 190-208, 212-233, 237-242, 245-251, 254-256, 259, 262-270, 273-279, 283-289, 298-323, 332-345, 365, 374
src/peft/tuners/p_tuning/__init__.py                      3      0   100%
src/peft/tuners/p_tuning/config.py                       17      2    88%   59-60
src/peft/tuners/p_tuning/model.py                        34     28    18%   72-119, 122-130
src/peft/tuners/poly/__init__.py                          4      0   100%
src/peft/tuners/poly/config.py                           21      4    81%   96-101
src/peft/tuners/poly/layer.py                            89     72    19%   34-52, 55-85, 88-111, 123-127, 130-161, 164-165
src/peft/tuners/poly/model.py                           111     79    29%   37, 41, 52-63, 66-90, 93-95, 99-107, 114-119, 122-128, 131-133, 136, 139, 142-144, 147-153, 157-171, 176-181, 184-185, 188-189
src/peft/tuners/poly/router.py                           38     22    42%   28-31, 50-57, 60, 63-81
src/peft/tuners/prefix_tuning/__init__.py                 3      0   100%
src/peft/tuners/prefix_tuning/config.py                  10      2    80%   41-42
src/peft/tuners/prefix_tuning/model.py                   19     15    21%   57-72, 75-80
src/peft/tuners/prompt_tuning/__init__.py                 3      0   100%
src/peft/tuners/prompt_tuning/config.py                  23      8    65%   72-85
src/peft/tuners/prompt_tuning/model.py                   30     23    23%   63-86, 90-91
src/peft/tuners/tuners_utils.py                         499    287    42%   67-117, 171-179, 197, 230, 258, 290, 300, 307, 314, 323, 339, 352, 356, 360-362, 365-366, 373-396, 445, 463-470, 480-489, 493, 504-532, 536, 543, 557-559, 562-565, 584-588, 594-597, 600-605, 621, 628-630, 678-685, 693, 696, 714-720, 737-745, 780-787, 802-824, 840, 852, 854, 859, 897-941, 950, 965-971, 974, 988-1007, 1016-1024, 1033, 1042-1076, 1087-1104, 1113-1123, 1133-1175
src/peft/tuners/vblora/__init__.py                        4      0   100%
src/peft/tuners/vblora/config.py                         29      6    79%   186-196
src/peft/tuners/vblora/layer.py                         130    107    18%   32-55, 59, 72-100, 103-106, 127-134, 149-169, 172-179, 182-184, 187-214, 224-229, 232-249
src/peft/tuners/vblora/model.py                         198    157    21%   75, 78-80, 83, 95-96, 103, 114-147, 151-176, 179-196, 200-238, 242-247, 250-256, 259-261, 268, 275-283, 300-306, 310-316, 325-342, 351-364, 395, 404, 410-440, 446-447
src/peft/tuners/vera/__init__.py                         13      7    46%   26-36
src/peft/tuners/vera/bnb.py                             207    207     0%   14-409
src/peft/tuners/vera/config.py                           28      7    75%   149-158
src/peft/tuners/vera/layer.py                           151    127    16%   35-62, 66, 78-132, 135-138, 158-164, 179-202, 205-212, 222-256, 259-290, 293-294
src/peft/tuners/vera/model.py                           237    191    19%   59-69, 105, 113-142, 145-157, 160, 172-191, 198, 210-240, 244-269, 272-290, 295-363, 367-372, 375-381, 384-386, 389, 392-400, 403-409, 413-419, 429-446, 455-469, 500, 509
src/peft/tuners/xlora/__init__.py                         3      0   100%
src/peft/tuners/xlora/classifier.py                      88     72    18%   30-32, 36-38, 58-99, 113-122, 138-171, 179-188, 191-195
src/peft/tuners/xlora/config.py                          36     14    61%   79-102
src/peft/tuners/xlora/layer.py                          110     87    21%   41-45, 54-56, 65-81, 93, 101-129, 141, 149-174, 186, 194-223
src/peft/tuners/xlora/model.py                          207    160    23%   44-87, 106-153, 236-307, 310-314, 317-320, 324-388, 392-397, 402, 415, 422, 434, 439, 442, 449-450, 457-458, 465-466, 472-473, 480, 488-489, 495-496, 502-503, 509-510, 518-519
src/peft/utils/__init__.py                                5      0   100%
src/peft/utils/constants.py                              41     15    63%   23-32, 37-43, 54
src/peft/utils/hotswap.py                                61     61     0%   14-220
src/peft/utils/incremental_pca.py                       148    131    11%   57-69, 72-77, 80, 83-88, 100-123, 141-180, 198-206, 219-228, 241-299, 313-314, 330-338
src/peft/utils/integrations.py                           98     85    13%   28-41, 52-78, 86-113, 118-122, 131-152, 161-172
src/peft/utils/loftq_utils.py                           233    202    13%   36-48, 52-60, 64-86, 89-102, 105-112, 115-153, 157-169, 176-186, 191-238, 243-259, 271-309, 312-328, 367-410
src/peft/utils/merge_utils.py                            79     65    18%   32-34, 49-53, 68-72, 90-100, 117-125, 139-141, 155-160, 176-182, 205-214, 230-236, 259-268
src/peft/utils/other.py                                 356    290    19%   85-93, 114-180, 193-202, 207-213, 220-231, 236, 241, 246-265, 268-288, 294-302, 306-315, 323-349, 352-359, 369-380, 397-402, 413-415, 419-430, 435-444, 449-456, 460-504, 508-553, 557-562, 570-574, 583-590, 597-603, 610-633, 647-658, 676-680, 690-696, 705-721, 726
src/peft/utils/peft_types.py                             30      0   100%
src/peft/utils/save_and_load.py                         300    279     7%   40, 45-48, 71-279, 285-307, 314-326, 353-474, 484-486, 501-575
-----------------------------------------------------------------------------------
TOTAL                                                 12843   9913    23%

=============================================================================================================================== slowest 10 durations ===============================================================================================================================
0.24s call     tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_15_BOFT_Same

(5 durations < 0.005s hidden.  Use -vv to show these durations.)
============================================================================================================================= short test summary info ==============================================================================================================================
FAILED tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_15_BOFT_Same - assert not True
FAILED tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_16_BOFT_Different - RuntimeError: CUDA error: an illegal memory access was encountered

Note that only the first error is relevant; the second is most likely a side effect of the first, since an illegal memory access corrupts the CUDA context and subsequent CUDA calls in the same process then fail.

I'm using CUDA 12.4 in case it's relevant:

$ python -c "import torch;print(torch.version.cuda)"
12.4

@d-kleine
Copy link
Contributor

d-kleine commented Nov 26, 2024

You were right about resetting the local clone with git checkout v0.13.2. But I still see the same issue; here is what I checked beforehand:

(peft) C:\Users\dk\Desktop\peft>python -c "import torch;print(torch.version.cuda)"
12.4

(peft) C:\Users\dk\Desktop\peft>python -c "import torch; print(torch.cuda.device_count())"
1

(peft) C:\Users\dk\Desktop\peft>ninja --version
1.11.1.git.kitware.jobserver-1

(peft) C:\Users\dk\Desktop\peft>gcc --version
gcc (Rev1, Built by MSYS2 project) 14.2.0
Copyright (C) 2024 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

(peft) C:\Users\dk\Desktop\peft>python -c "import ninja; print(ninja.__version__)"
1.11.1.2

This is the output:

(peft) C:\Users\dk\Desktop\peft>pytest tests/test_custom_models.py -k "test_merge_layers_multi and boft"
============================================================================================================================ test session starts ============================================================================================================================
platform win32 -- Python 3.11.10, pytest-8.3.3, pluggy-1.5.0
rootdir: C:\Users\dk\Desktop\peft
configfile: pyproject.toml
plugins: cov-6.0.0
collected 3643 items / 3641 deselected / 2 selected

tests\test_custom_models.py ..                                                                                                                                                                                                                                         [100%]

============================================================================================================================= warnings summary ==============================================================================================================================
..\..\anaconda3\envs\peft\Lib\site-packages\accelerate\utils\other.py:220
  C:\Users\dk\anaconda3\envs\peft\Lib\site-packages\accelerate\utils\other.py:220: DeprecationWarning: numpy.core is deprecated and has been renamed to numpy._core. The numpy._core namespace contains private NumPy internals and its use is discouraged, as NumPy internals can change without warning in any release. In practice, most real-world usage of numpy.core is to access functionality in the public NumPy API. If that is the case, use the public NumPy API. If not, you are using NumPy internals. If you would still like to access an internal attribute, use numpy._core.multiarray.
    np.core.multiarray._reconstruct,

tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_13_BOFT_Same
  C:\Users\dk\anaconda3\envs\peft\Lib\site-packages\torch\utils\cpp_extension.py:382: UserWarning: Error checking compiler version for gcc: Command 'gcc' returned non-zero exit status 1.
    warnings.warn(f'Error checking compiler version for {compiler}: {error}')

tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_13_BOFT_Same
  C:\Users\dk\anaconda3\envs\peft\Lib\site-packages\torch\utils\cpp_extension.py:1964: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
  If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
    warnings.warn(

tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_13_BOFT_Same
  C:\Users\dk\Desktop\peft\src\peft\tuners\boft\layer.py:95: UserWarning: Failed to load the CUDA extension: Error building extension 'fbd_cuda', check if ninja is available.
    warnings.warn(f"Failed to load the CUDA extension: {e}, check if ninja is available.")

tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_13_BOFT_Same
tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_14_BOFT_Different
  C:\Users\dk\Desktop\peft\src\peft\tuners\boft\layer.py:96: UserWarning: Setting boft_n_butterfly_factor to 1 to speed up the finetuning process.
    warnings.warn("Setting boft_n_butterfly_factor to 1 to speed up the finetuning process.")

tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_13_BOFT_Same
tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_14_BOFT_Different
  C:\Users\dk\Desktop\peft\src\peft\tuners\boft\layer.py:95: UserWarning: Failed to load the CUDA extension: DLL load failed while importing fbd_cuda: Das angegebene Modul wurde nicht gefunden., check if ninja is available.
    warnings.warn(f"Failed to load the CUDA extension: {e}, check if ninja is available.")

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform win32, python 3.11.10-final-0 ----------
Name                                                  Stmts   Miss  Cover   Missing
-----------------------------------------------------------------------------------
src\peft\__init__.py                                      8      0   100%
src\peft\auto.py                                         69     32    54%   52, 73-130
src\peft\config.py                                      105     58    45%   48, 61-81, 96-119, 132-152, 163-173, 177-189, 197-214, 278
src\peft\helpers.py                                      52     52     0%   15-210
src\peft\import_utils.py                                 48     22    54%   23, 28-33, 38-44, 52, 58-69
src\peft\mapping.py                                      44     17    61%   74, 135, 172, 178-182, 186, 191-193, 216-229
src\peft\mixed_model.py                                 150    103    31%   58-73, 77-78, 120-133, 137, 141, 145, 152-170, 184-186, 194-199, 205, 211, 218-222, 251-261, 264-271, 295-305, 308-317, 334, 341, 344, 347, 351, 385-388, 391, 400, 442-479
src\peft\optimizers\__init__.py                           2      2     0%   15-18
src\peft\optimizers\loraplus.py                          36     36     0%   19-121
src\peft\peft_model.py                                 1154    980    15%   163-165, 186, 191, 196-211, 215-218, 265-424, 479-595, 598-645, 659, 662-665, 673-685, 691-735, 741-765, 778-780, 790, 798-800, 815-817, 823-825, 839-873, 907, 914-921, 923, 929-932, 938-942, 972, 1012, 1016-1026, 1039-1106, 1153-1242, 1264, 1272, 1287-1327, 1378-1398, 1418-1425, 1439-1491, 1504-1557, 1609-1610, 1624-1691, 1694-1712, 1715-1785, 1831-1833, 1852-1960, 1965-2033, 2036-2045, 2096-2116, 2136-2143, 2157-2210, 2223-2259, 2313-2333, 2353-2360, 2377-2433, 2447-2498, 2548, 2561-2609, 2658-2728, 2791-2892
src\peft\tuners\__init__.py                              20      0   100%
src\peft\tuners\_buffer_dict.py                          61     40    34%   57-61, 64, 67, 70, 73, 76, 79, 83, 92-94, 98, 102, 106, 122-147, 150-157, 160
src\peft\tuners\adalora\__init__.py                      14      7    50%   27-37
src\peft\tuners\adalora\bnb.py                           76     76     0%   15-145
src\peft\tuners\adalora\config.py                        30      5    83%   56, 59, 66, 70, 74
src\peft\tuners\adalora\gptq.py                          32     27    16%   30-37, 40-68
src\peft\tuners\adalora\layer.py                        219    183    16%   31, 42-46, 49-77, 80-83, 99-106, 121-143, 149-155, 158, 165-186, 189-190, 204-212, 215, 218-220, 223-231, 234-250, 254-271, 276, 279-281, 284-333, 337-345, 349-358
src\peft\tuners\adalora\model.py                        162    138    15%   68-85, 94-102, 117-144, 155-217, 221-227, 231-236, 239-266, 269-297, 300-313, 336-355, 359
src\peft\tuners\adaption_prompt\__init__.py               4      0   100%
src\peft\tuners\adaption_prompt\config.py                24      8    67%   35, 40, 72-80
src\peft\tuners\adaption_prompt\layer.py                 46     38    17%   37-55, 67-128
src\peft\tuners\adaption_prompt\model.py                 84     67    20%   44-59, 63-95, 99-108, 112-113, 117-118, 122-128, 132-136, 140-146, 150-152, 156-163
src\peft\tuners\adaption_prompt\utils.py                 50     43    14%   30-32, 46-57, 66-116, 121
src\peft\tuners\boft\__init__.py                          4      0   100%
src\peft\tuners\boft\config.py                           24      2    92%   128, 130
src\peft\tuners\boft\fbd\__init__.py                      0      0   100%
src\peft\tuners\boft\layer.py                           480    276    42%   65-71, 78, 130-132, 136-138, 153-154, 165-191, 229-232, 238-242, 245-252, 255-259, 273, 278, 284, 290-305, 309, 314-319, 326, 333-337, 373-379, 413-437, 502, 510-522, 539-552, 570, 582, 590-592, 594, 612, 626, 651-652, 671-675, 687-788, 803-842, 848-870, 881-903, 906-976, 979-980
src\peft\tuners\boft\model.py                           165     63    62%   90, 110, 155-159, 178-187, 192, 198-202, 204-207, 220, 224-230, 233-235, 238, 241-249, 255-256, 263-265, 294-300, 311-324, 353
src\peft\tuners\fourierft\__init__.py                     4      0   100%
src\peft\tuners\fourierft\config.py                      25      6    76%   178-188
src\peft\tuners\fourierft\layer.py                      104     82    21%   33-52, 55-79, 83-84, 87-92, 108-112, 127-149, 155-161, 164, 167-186, 189-190
src\peft\tuners\fourierft\model.py                      168    131    22%   62, 73-74, 81, 93-124, 127-152, 155-173, 177-205, 209-214, 217-223, 226-228, 235, 242-250, 258-264, 268-274, 283-299, 308-322, 341, 350
src\peft\tuners\hra\__init__.py                           4      0   100%
src\peft\tuners\hra\config.py                            21      2    90%   112, 116
src\peft\tuners\hra\layer.py                            231    201    13%   33-48, 66-92, 95-102, 105, 108-115, 118-122, 139-142, 157-181, 187-195, 198-225, 228-252, 255-256, 271-274, 289-336, 342-359, 362-389, 392-431, 434-435
src\peft\tuners\hra\model.py                            153    120    22%   96-97, 104, 116-135, 143-168, 171-189, 193-208, 212-217, 220-226, 229-231, 234, 237-245, 248-254, 258-264, 273-290, 299-312, 332, 341
src\peft\tuners\ia3\__init__.py                          13      7    46%   26-36
src\peft\tuners\ia3\bnb.py                               67     67     0%   15-129
src\peft\tuners\ia3\config.py                            18      1    94%   98
src\peft\tuners\ia3\layer.py                            180    157    13%   31-52, 57-65, 68-70, 85-90, 105-132, 138-155, 158-184, 197-202, 206-214, 229-257, 263-280, 283-310
src\peft\tuners\ia3\model.py                            213    174    18%   79, 84-147, 151, 154-156, 168-188, 196-200, 203-227, 231-236, 239-245, 248-250, 257, 264, 281-287, 291-303, 321-349, 376, 383, 392-405, 413-445, 464-496
src\peft\tuners\ln_tuning\__init__.py                     3      0   100%
src\peft\tuners\ln_tuning\config.py                      11      1    91%   61
src\peft\tuners\ln_tuning\layer.py                       60     45    25%   33-38, 41, 51-61, 64-83, 86-92, 98-113, 116-117
src\peft\tuners\ln_tuning\model.py                       92     64    30%   70, 74-79, 84-90, 102-105, 113-118, 121-135, 138-142, 145, 148-150, 157, 164, 167-173, 182-197, 200, 205
src\peft\tuners\loha\__init__.py                          4      0   100%
src\peft\tuners\loha\config.py                           19      0   100%
src\peft\tuners\loha\layer.py                           180    143    21%   31-40, 44, 48-61, 69-76, 84-91, 116-152, 156-189, 192-215, 232-236, 241-243, 246-247, 265-269, 276-279, 289-290, 299-301, 305-316, 322-327, 331-361, 365, 369
src\peft\tuners\loha\model.py                            20      9    55%   105-116
src\peft\tuners\lokr\__init__.py                          4      0   100%
src\peft\tuners\lokr\config.py                           20      0   100%
src\peft\tuners\lokr\layer.py                           181    153    15%   39-49, 53, 72-95, 98-111, 114-127, 155-201, 205-229, 232-255, 274-278, 283-285, 288-289, 309-313, 320-323, 333-334, 370-389, 393-394, 398-403
src\peft\tuners\lokr\model.py                            20      9    55%   106-117
src\peft\tuners\lora\__init__.py                         17     10    41%   37-52
src\peft\tuners\lora\aqlm.py                             46     35    24%   25, 40-44, 48-71, 74-75, 89-100
src\peft\tuners\lora\awq.py                              53     40    25%   26, 41-49, 52-75, 78-79, 87-108
src\peft\tuners\lora\bnb.py                             271    271     0%   14-530
src\peft\tuners\lora\config.py                           64     17    73%   321-323, 332, 336, 339, 343-348, 362-368, 372, 391-393
src\peft\tuners\lora\dora.py                             80     61    24%   27-28, 32-35, 39-63, 70-103, 106-107, 116-129, 132-133, 139-142, 149-174, 177-178
src\peft\tuners\lora\eetq.py                             53     46    13%   24-82, 90-104
src\peft\tuners\lora\gptq.py                             49     39    20%   37-47, 59-82, 85-86, 100-114
src\peft\tuners\lora\hqq.py                             135    124     8%   30-233, 237-247
src\peft\tuners\lora\layer.py                           630    570    10%   42-101, 107-147, 150-167, 170-212, 215-247, 250-268, 271-293, 296, 299-300, 303-306, 309-316, 319-326, 330-353, 360-385, 414-428, 443-503, 509-523, 533-557, 560-597, 600-601, 618-622, 633-668, 671-682, 697-719, 725-731, 741-765, 772-795, 798-799, 811-847, 850-851, 868-872, 883-920, 923-932, 947-1003, 1009-1023, 1033-1071, 1074-1111, 1114-1115, 1124-1157
src\peft\tuners\lora\model.py                           398    339    15%   61-62, 141, 152-153, 160, 172-173, 184-230, 233-270, 273-291, 297-355, 359-364, 367-373, 376-378, 385, 392-400, 417-423, 428-447, 454-458, 462-468, 477-502, 511-570, 625-702, 721-772, 784-816, 825-838, 868, 877, 885-915
src\peft\tuners\lora\tp_layer.py                        191    170    11%   55-95, 111-175, 178-224, 239-299, 305-319, 329-353, 356-357, 366-399
src\peft\tuners\lycoris_utils.py                        207    147    29%   69-78, 91-94, 120-140, 146-149, 152-159, 165-171, 174-181, 204, 208-213, 217, 233-271, 274-276, 280-282, 285-306, 309-311, 320-347, 354, 361, 381, 390, 407-413, 422-435
src\peft\tuners\mixed\__init__.py                         2      0   100%
src\peft\tuners\mixed\model.py                          190    155    18%   57, 66-74, 81, 89-100, 103-131, 134-153, 157-179, 183-188, 191-193, 196, 199-207, 210-216, 220-227, 236-275, 278, 287-310, 329, 338, 341
src\peft\tuners\multitask_prompt_tuning\__init__.py       3      0   100%
src\peft\tuners\multitask_prompt_tuning\config.py        20      1    95%   61
src\peft\tuners\multitask_prompt_tuning\model.py         48     40    17%   30-106, 109-120
src\peft\tuners\oft\__init__.py                           4      0   100%
src\peft\tuners\oft\config.py                            19      0   100%
src\peft\tuners\oft\layer.py                            185    154    17%   31-38, 42, 45-48, 51, 54, 79-112, 116, 131-175, 181-218, 221-233, 237-245, 249-258, 263-273, 276-312, 327-331, 336-343, 346-347, 362-366, 371-378, 381-382
src\peft\tuners\oft\model.py                             18      8    56%   98-108
src\peft\tuners\p_tuning\__init__.py                      3      0   100%
src\peft\tuners\p_tuning\config.py                       16      1    94%   59
src\peft\tuners\p_tuning\model.py                        34     28    18%   72-119, 122-130
src\peft\tuners\poly\__init__.py                          4      0   100%
src\peft\tuners\poly\config.py                           17      2    88%   86-87
src\peft\tuners\poly\layer.py                            89     72    19%   34-52, 55-85, 88-111, 123-127, 130-161, 164-165
src\peft\tuners\poly\model.py                           111     79    29%   37, 41, 52-63, 66-90, 93-95, 99-107, 114-119, 122-128, 131-133, 136, 139, 142-144, 147-153, 157-171, 176-181, 184-185, 188-189
src\peft\tuners\poly\router.py                           38     22    42%   28-31, 50-57, 60, 63-81
src\peft\tuners\prefix_tuning\__init__.py                 3      0   100%
src\peft\tuners\prefix_tuning\config.py                   9      1    89%   41
src\peft\tuners\prefix_tuning\model.py                   19     15    21%   57-72, 75-80
src\peft\tuners\prompt_tuning\__init__.py                 3      0   100%
src\peft\tuners\prompt_tuning\config.py                  22      7    68%   72-84
src\peft\tuners\prompt_tuning\model.py                   30     23    23%   63-86, 90-91
src\peft\tuners\tuners_utils.py                         467    247    47%   72-73, 82-99, 104, 108-117, 171-179, 197, 230, 258, 290, 300, 307, 314, 323, 339, 352, 356, 360-362, 365-366, 396, 443-444, 462-469, 477-486, 500, 509, 521-523, 526-529, 548-552, 558-561, 564-569, 585, 592-594, 642-649, 657, 660, 678-684, 701-709, 744-751, 766-788, 804, 816, 818, 823, 861-905, 920, 934-953, 962-970, 979, 988-1022, 1034, 1036, 1039-1048, 1059-1069, 1079-1121
src\peft\tuners\vblora\__init__.py                        4      0   100%
src\peft\tuners\vblora\config.py                         23      2    91%   174-175
src\peft\tuners\vblora\layer.py                         130    107    18%   32-55, 59, 72-100, 103-106, 127-134, 149-169, 172-179, 182-184, 187-214, 224-229, 232-249
src\peft\tuners\vblora\model.py                         198    157    21%   75, 78-80, 83, 95-96, 103, 114-147, 151-176, 179-196, 200-238, 242-247, 250-256, 259-261, 268, 275-283, 300-306, 310-316, 325-342, 351-364, 395, 404, 410-440, 446-447
src\peft\tuners\vera\__init__.py                          4      0   100%
src\peft\tuners\vera\config.py                           24      4    83%   147-153
src\peft\tuners\vera\layer.py                           151    127    16%   35-62, 66, 78-132, 135-138, 158-164, 179-202, 205-212, 222-256, 259-290, 293-294
src\peft\tuners\vera\model.py                           220    175    20%   58-68, 104, 112-140, 143-154, 157, 169-188, 195, 207-236, 240-265, 268-286, 290-327, 331-336, 339-345, 348-350, 353, 356-364, 367-373, 377-383, 393-410, 419-433, 464, 473
src\peft\tuners\xlora\__init__.py                         3      0   100%
src\peft\tuners\xlora\classifier.py                      88     72    18%   30-32, 36-38, 58-99, 113-122, 138-171, 179-188, 191-195
src\peft\tuners\xlora\config.py                          35     13    63%   79-101
src\peft\tuners\xlora\layer.py                          110     87    21%   41-45, 54-56, 65-81, 93, 101-129, 141, 149-174, 186, 194-223
src\peft\tuners\xlora\model.py                          207    160    23%   44-87, 106-153, 236-307, 310-314, 317-320, 324-388, 392-397, 402, 415, 422, 434, 439, 442, 449-450, 457-458, 465-466, 472-473, 480, 488-489, 495-496, 502-503, 509-510, 518-519
src\peft\utils\__init__.py                                4      0   100%
src\peft\utils\constants.py                              39     15    62%   21-30, 35-41, 52
src\peft\utils\integrations.py                           67     56    16%   28-41, 52-74, 82-109, 114-118
src\peft\utils\loftq_utils.py                           234    202    14%   37-49, 53-61, 65-87, 90-103, 106-113, 116-154, 158-170, 177-187, 192-239, 244-260, 272-310, 313-329, 368-411
src\peft\utils\merge_utils.py                            79     65    18%   32-34, 49-53, 68-72, 90-100, 117-125, 139-141, 155-160, 176-182, 205-214, 230-236, 259-268
src\peft\utils\other.py                                 346    279    19%   84-92, 113-167, 180-189, 194-200, 207-218, 223, 228, 232-234, 238-240, 243-263, 269-277, 281-290, 298-324, 327-334, 344-355, 372-377, 388-390, 394-405, 410-419, 424-431, 435-479, 483-526, 530-535, 543-547, 556-563, 570-576, 583-606, 620-631, 649-653, 663-669, 678-694
src\peft\utils\peft_types.py                             28      0   100%
src\peft\utils\save_and_load.py                         293    274     6%   39, 44-47, 70-276, 282-304, 331-487, 497-499, 514-588
-----------------------------------------------------------------------------------
TOTAL                                                 10813   8166    24%

=========================================================================================================================== slowest 10 durations ============================================================================================================================
11.85s call     tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_13_BOFT_Same
0.03s call     tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_14_BOFT_Different

(4 durations < 0.005s hidden.  Use -vv to show these durations.)
============================================================================================================== 2 passed, 3641 deselected, 8 warnings in 21.91s ==============================================================================================================

@BenjaminBossan
Copy link
Member Author

Okay, so for you these tests pass, which means the issue is most likely something specific to me. This would be good news, as it means others out there probably don't encounter it either. I'll investigate further what could be wrong on my end, thanks for helping with this.

@BenjaminBossan
Copy link
Member Author

Update: I cleared the cache in ~/.cache/torch_extensions for fbd_cuda but still got the same error. With logs enabled, I got the output shown below. As I'm not proficient with CUDA, I have no idea whether these logs are helpful.

Detected CUDA files, patching ldflags
Emitting ninja build file /home/name/.cache/torch_extensions/py311_cu124/fbd_cuda/build.ninja...
Building extension module fbd_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/3] gcc -MMD -MF fbd_cuda.o.d -DTORCH_EXTENSION_NAME=fbd_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include -isystem /home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include/torch/csrc/api/include -isystem /home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include/TH -isystem /home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include/THC -isystem /home/name/anaconda3/envs/peft/include/python3.11 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -c /home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda.cpp -o fbd_cuda.o 
[2/3] /usr/bin/nvcc --generate-dependencies-with-compile --dependency-output fbd_cuda_kernel.cuda.o.d -ccbin gcc -DTORCH_EXTENSION_NAME=fbd_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include -isystem /home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include/torch/csrc/api/include -isystem /home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include/TH -isystem /home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include/THC -isystem /home/name/anaconda3/envs/peft/include/python3.11 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_89,code=compute_89 -gencode=arch=compute_89,code=sm_89 --compiler-options '-fPIC' -std=c++17 -c /home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu -o fbd_cuda_kernel.cuda.o 
/home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu: In lambda function:
/home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu:66:132: warning: ‘T* at::Tensor::data() const [with T = double]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
   66 |     AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.type(), "forward_fast_block_diag1", ([&] {
      |                                                                                                                                    ^ 
/home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include/ATen/core/TensorBody.h:247:1: note: declared here
  247 |   T * data() const {
      | ^ ~~
/home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu:66:159: warning: ‘T* at::Tensor::data() const [with T = double]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
   66 |     AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.type(), "forward_fast_block_diag1", ([&] {
      |                                                                                                                                                               ^ 
/home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include/ATen/core/TensorBody.h:247:1: note: declared here
  247 |   T * data() const {
      | ^ ~~
/home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu: In lambda function:
/home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu:66:130: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
   66 |     AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.type(), "forward_fast_block_diag1", ([&] {
      |                                                                                                                                  ^ 
/home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include/ATen/core/TensorBody.h:247:1: note: declared here
  247 |   T * data() const {
      | ^ ~~
/home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu:66:156: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
   66 |     AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.type(), "forward_fast_block_diag1", ([&] {
      |                                                                                                                                                            ^ 
/home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include/ATen/core/TensorBody.h:247:1: note: declared here
  247 |   T * data() const {
      | ^ ~~
/home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu: In lambda function:
/home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu:66:138: warning: ‘T* at::Tensor::data() const [with T = c10::Half]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
   66 |     AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.type(), "forward_fast_block_diag1", ([&] {
      |                                                                                                                                          ^ 
/home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include/ATen/core/TensorBody.h:247:1: note: declared here
  247 |   T * data() const {
      | ^ ~~
/home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu:66:168: warning: ‘T* at::Tensor::data() const [with T = c10::Half]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
   66 |     AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.type(), "forward_fast_block_diag1", ([&] {
      |                                                                                                                                                                        ^ 
/home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include/ATen/core/TensorBody.h:247:1: note: declared here
  247 |   T * data() const {
      | ^ ~~
/home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu: In lambda function:
/home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu:97:139: warning: ‘T* at::Tensor::data() const [with T = double]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
   97 |     AT_DISPATCH_FLOATING_TYPES_AND_HALF(grad_output.type(), "backward_fast_block_diag", ([&] {
      |                                                                                                                                           ^ 
/home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include/ATen/core/TensorBody.h:247:1: note: declared here
  247 |   T * data() const {
      | ^ ~~
/home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu:97:170: warning: ‘T* at::Tensor::data() const [with T = double]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
   97 |     AT_DISPATCH_FLOATING_TYPES_AND_HALF(grad_output.type(), "backward_fast_block_diag", ([&] {
      |                                                                                                                                                                          ^ 
/home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include/ATen/core/TensorBody.h:247:1: note: declared here
  247 |   T * data() const {
      | ^ ~~
/home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu: In lambda function:
/home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu:97:137: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
   97 |     AT_DISPATCH_FLOATING_TYPES_AND_HALF(grad_output.type(), "backward_fast_block_diag", ([&] {
      |                                                                                                                                         ^ 
/home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include/ATen/core/TensorBody.h:247:1: note: declared here
  247 |   T * data() const {
      | ^ ~~
/home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu:97:167: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
   97 |     AT_DISPATCH_FLOATING_TYPES_AND_HALF(grad_output.type(), "backward_fast_block_diag", ([&] {
      |                                                                                                                                                                       ^ 
/home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include/ATen/core/TensorBody.h:247:1: note: declared here
  247 |   T * data() const {
      | ^ ~~
/home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu: In lambda function:
/home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu:97:145: warning: ‘T* at::Tensor::data() const [with T = c10::Half]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
   97 |     AT_DISPATCH_FLOATING_TYPES_AND_HALF(grad_output.type(), "backward_fast_block_diag", ([&] {
      |                                                                                                                                                 ^ 
/home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include/ATen/core/TensorBody.h:247:1: note: declared here
  247 |   T * data() const {
      | ^ ~~
/home/name/work/forks/peft/src/peft/tuners/boft/fbd/fbd_cuda_kernel.cu:97:179: warning: ‘T* at::Tensor::data() const [with T = c10::Half]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
   97 |     AT_DISPATCH_FLOATING_TYPES_AND_HALF(grad_output.type(), "backward_fast_block_diag", ([&] {
      |                                                                                                                                                                                   ^ 
/home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/include/ATen/core/TensorBody.h:247:1: note: declared here
  247 |   T * data() const {
      | ^ ~~
[3/3] gcc fbd_cuda.o fbd_cuda_kernel.cuda.o -shared -L/home/name/anaconda3/envs/peft/lib/python3.11/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/usr/lib64 -lcudart -o fbd_cuda.so
Loading extension module fbd_cuda...
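Since the symptom is the compiled kernel returning an all-zeros matrix, one way to debug locally is to compare the CUDA output against a known-good CPU reference: the kernel's job is essentially block-diagonal assembly. A dependency-free sketch of that reference (illustrative only, not the PEFT implementation — the function name and list-of-lists representation are made up here):

```python
def block_diag(blocks):
    """Assemble square blocks into one block-diagonal matrix (pure-Python lists).

    blocks: a list of k x k matrices, each given as nested lists.
    """
    n = sum(len(b) for b in blocks)
    out = [[0.0] * n for _ in range(n)]
    offset = 0
    for b in blocks:
        k = len(b)
        for i in range(k):
            for j in range(k):
                out[offset + i][offset + j] = b[i][j]
        offset += k
    return out

# Two 2x2 blocks -> one 4x4 block-diagonal matrix.
m = block_diag([[[1.0, 0.0], [0.0, 1.0]], [[2.0, 3.0], [4.0, 5.0]]])
# An all-zeros result here would mirror the reported bug.
assert any(v != 0.0 for row in m for v in row)
```

If the CUDA extension disagrees with a reference like this on the same input, the problem is in the kernel or its build, not in the surrounding BOFT layer code.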

@d-kleine
Copy link
Contributor

d-kleine commented Nov 27, 2024

On Windows, I think there is a problem on my system with ninja and gcc. Both are installed and set in the system environment variables, but somehow I can't get them to work properly with the tests, so the warning messages don't disappear. I don't know why the tests still pass for me here regardless.
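A quick, dependency-free way to check whether the extension loader can even find the build tools on PATH (just an illustrative check — which tools are required and where they live varies per system):

```python
import shutil

# Report which build tools the JIT extension build would find on PATH.
for tool in ("ninja", "gcc", "nvcc"):
    path = shutil.which(tool)
    print(f"{tool}: {path if path else 'NOT FOUND'}")
```

If any of these print NOT FOUND in the environment the tests run in, the build falls back with the "check if ninja is available" warning seen above.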

So I just ran the tests on WSL (using Ubuntu), and there I get the same errors as you, if that helps:

Log (Ubuntu)
================================================================================================================= test session starts ==================================================================================================================
platform linux -- Python 3.11.10, pytest-8.3.3, pluggy-1.5.0
rootdir: /mnt/c/Users/dk/Desktop/peft
configfile: pyproject.toml
plugins: cov-6.0.0
collected 3643 items / 3641 deselected / 2 selected

tests/test_custom_models.py FF/home/dk/miniconda3/envs/peft/lib/python3.11/site-packages/coverage/control.py:892: CoverageWarning: No data was collected. (no-data-collected)
self._warn("No data was collected.", slug="no-data-collected")
                                                                                                                                                                                                                 [100%]

======================================================================================================================= FAILURES =======================================================================================================================
__________________________________________________________________________________________ MultipleActiveAdaptersTester.test_merge_layers_multi_13_BOFT_Same ___________________________________________________________________________________________

a = (<tests.test_custom_models.MultipleActiveAdaptersTester testMethod=test_merge_layers_multi_13_BOFT_Same>,), kw = {}

  @wraps(func)
  def standalone_func(*a, **kw):
>       return func(*(a + p.args), **p.kwargs, **kw)

/home/dk/miniconda3/envs/peft/lib/python3.11/site-packages/parameterized/parameterized.py:620:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <tests.test_custom_models.MultipleActiveAdaptersTester testMethod=test_merge_layers_multi_13_BOFT_Same>, test_name = 'BOFT Same', tuner_method = 'boft', config_cls = <class 'peft.tuners.boft.config.BOFTConfig'>
config_kwargs_1 = {'boft_block_size': 2, 'init_weights': False, 'target_modules': ['lin0']}, config_kwargs_2 = {'boft_block_size': 2, 'init_weights': False, 'target_modules': ['lin0']}

>   ???
E   assert not True
E    +  where True = <built-in method allclose of type object at 0x7f436045f240>(tensor([-0.7747, -0.6177]), tensor([-0.7747, -0.6177]), atol=0.001, rtol=0.001)
E    +    where <built-in method allclose of type object at 0x7f436045f240> = torch.allclose

C:\Users\dk\Desktop\peft\tests\test_custom_models.py:1938: AssertionError
----------------------------------------------------------------------------------------------------------------- Captured stdout call -----------------------------------------------------------------------------------------------------------------
ninja: no work to do.
----------------------------------------------------------------------------------------------------------------- Captured stderr call -----------------------------------------------------------------------------------------------------------------
Using /home/dk/.cache/torch_extensions/py311_cu124 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/dk/.cache/torch_extensions/py311_cu124/fbd_cuda/build.ninja...
Building extension module fbd_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
Loading extension module fbd_cuda...
________________________________________________________________________________________ MultipleActiveAdaptersTester.test_merge_layers_multi_14_BOFT_Different ________________________________________________________________________________________

a = (<tests.test_custom_models.MultipleActiveAdaptersTester testMethod=test_merge_layers_multi_14_BOFT_Different>,), kw = {}

  @wraps(func)
  def standalone_func(*a, **kw):
>       return func(*(a + p.args), **p.kwargs, **kw)

/home/dk/miniconda3/envs/peft/lib/python3.11/site-packages/parameterized/parameterized.py:620:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
C:\Users\dk\Desktop\peft\tests\test_custom_models.py:1916: in test_merge_layers_multi
  ???
/home/dk/miniconda3/envs/peft/lib/python3.11/site-packages/torch/_compile.py:32: in inner
  return disable_fn(*args, **kwargs)
/home/dk/miniconda3/envs/peft/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py:632: in _fn
  return fn(*args, **kwargs)
/home/dk/miniconda3/envs/peft/lib/python3.11/site-packages/torch/random.py:46: in manual_seed
  torch.cuda.manual_seed_all(seed)
/home/dk/miniconda3/envs/peft/lib/python3.11/site-packages/torch/cuda/random.py:129: in manual_seed_all
  _lazy_call(cb, seed_all=True)
/home/dk/miniconda3/envs/peft/lib/python3.11/site-packages/torch/cuda/__init__.py:249: in _lazy_call
  callable()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

  def cb():
      for i in range(device_count()):
          default_generator = torch.cuda.default_generators[i]
>           default_generator.manual_seed(seed)
E           RuntimeError: CUDA error: an illegal memory access was encountered
E           CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
E           For debugging consider passing CUDA_LAUNCH_BLOCKING=1
E           Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

/home/dk/miniconda3/envs/peft/lib/python3.11/site-packages/torch/cuda/random.py:127: RuntimeError
=================================================================================================================== warnings summary ===================================================================================================================
../../../../../../home/dk/miniconda3/envs/peft/lib/python3.11/site-packages/accelerate/utils/other.py:220
/home/dk/miniconda3/envs/peft/lib/python3.11/site-packages/accelerate/utils/other.py:220: DeprecationWarning: numpy.core is deprecated and has been renamed to numpy._core. The numpy._core namespace contains private NumPy internals and its use is discouraged, as NumPy internals can change without warning in any release. In practice, most real-world usage of numpy.core is to access functionality in the public NumPy API. If that is the case, use the public NumPy API. If not, you are using NumPy internals. If you would still like to access an internal attribute, use numpy._core.multiarray.
  np.core.multiarray._reconstruct,

tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_13_BOFT_Same
/home/dk/miniconda3/envs/peft/lib/python3.11/site-packages/torch/utils/cpp_extension.py:1964: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform linux, python 3.11.10-final-0 ----------
Name                                                  Stmts   Miss  Cover   Missing
-----------------------------------------------------------------------------------
src/peft/__init__.py                                      8      8     0%   20-108
src/peft/auto.py                                         69     69     0%   15-172
src/peft/config.py                                      105    105     0%   14-278
src/peft/helpers.py                                      52     52     0%   15-210
src/peft/import_utils.py                                 48     48     0%   14-89
src/peft/mapping.py                                      44     44     0%   15-229
src/peft/mixed_model.py                                 150    150     0%   15-479
src/peft/optimizers/__init__.py                           2      2     0%   15-18
src/peft/optimizers/loraplus.py                          36     36     0%   19-121
src/peft/peft_model.py                                 1154   1154     0%   15-2892
src/peft/tuners/__init__.py                              20     20     0%   20-39
src/peft/tuners/_buffer_dict.py                          61     61     0%   10-160
src/peft/tuners/adalora/__init__.py                      14     14     0%   15-37
src/peft/tuners/adalora/bnb.py                           76     76     0%   15-145
src/peft/tuners/adalora/config.py                        30     30     0%   15-74
src/peft/tuners/adalora/gptq.py                          32     32     0%   14-68
src/peft/tuners/adalora/layer.py                        219    219     0%   15-358
src/peft/tuners/adalora/model.py                        162    162     0%   15-359
src/peft/tuners/adaption_prompt/__init__.py               4      4     0%   14-19
src/peft/tuners/adaption_prompt/config.py                24     24     0%   15-80
src/peft/tuners/adaption_prompt/layer.py                 46     46     0%   15-128
src/peft/tuners/adaption_prompt/model.py                 84     84     0%   15-163
src/peft/tuners/adaption_prompt/utils.py                 50     50     0%   14-121
src/peft/tuners/boft/__init__.py                          4      4     0%   15-20
src/peft/tuners/boft/config.py                           24     24     0%   18-130
src/peft/tuners/boft/fbd/__init__.py                      0      0   100%
src/peft/tuners/boft/layer.py                           480    480     0%   18-980
src/peft/tuners/boft/model.py                           165    165     0%   18-353
src/peft/tuners/fourierft/__init__.py                     4      4     0%   15-20
src/peft/tuners/fourierft/config.py                      25     25     0%   15-188
src/peft/tuners/fourierft/layer.py                      104    104     0%   15-190
src/peft/tuners/fourierft/model.py                      168    168     0%   14-350
src/peft/tuners/hra/__init__.py                           4      4     0%   15-20
src/peft/tuners/hra/config.py                            21     21     0%   15-116
src/peft/tuners/hra/layer.py                            231    231     0%   15-435
src/peft/tuners/hra/model.py                            153    153     0%   15-341
src/peft/tuners/ia3/__init__.py                          13     13     0%   15-36
src/peft/tuners/ia3/bnb.py                               67     67     0%   15-129
src/peft/tuners/ia3/config.py                            18     18     0%   15-98
src/peft/tuners/ia3/layer.py                            180    180     0%   15-310
src/peft/tuners/ia3/model.py                            213    213     0%   14-496
src/peft/tuners/ln_tuning/__init__.py                     3      3     0%   15-19
src/peft/tuners/ln_tuning/config.py                      11     11     0%   14-61
src/peft/tuners/ln_tuning/layer.py                       60     60     0%   15-117
src/peft/tuners/ln_tuning/model.py                       92     92     0%   14-205
src/peft/tuners/loha/__init__.py                          4      4     0%   15-20
src/peft/tuners/loha/config.py                           19     19     0%   15-119
src/peft/tuners/loha/layer.py                           180    180     0%   15-369
src/peft/tuners/loha/model.py                            20     20     0%   15-116
src/peft/tuners/lokr/__init__.py                          4      4     0%   15-20
src/peft/tuners/lokr/config.py                           20     20     0%   15-127
src/peft/tuners/lokr/layer.py                           181    181     0%   15-403
src/peft/tuners/lokr/model.py                            20     20     0%   15-117
src/peft/tuners/lora/__init__.py                         17     17     0%   15-52
src/peft/tuners/lora/aqlm.py                             46     46     0%   15-100
src/peft/tuners/lora/awq.py                              53     53     0%   14-108
src/peft/tuners/lora/bnb.py                             271    271     0%   14-530
src/peft/tuners/lora/config.py                           64     64     0%   15-393
src/peft/tuners/lora/dora.py                             80     80     0%   15-178
src/peft/tuners/lora/eetq.py                             53     53     0%   14-104
src/peft/tuners/lora/gptq.py                             49     49     0%   15-114
src/peft/tuners/lora/hqq.py                             135    135     0%   14-247
src/peft/tuners/lora/layer.py                           630    630     0%   14-1157
src/peft/tuners/lora/model.py                           398    398     0%   14-915
src/peft/tuners/lora/tp_layer.py                        191    191     0%   14-399
src/peft/tuners/lycoris_utils.py                        207    207     0%   14-435
src/peft/tuners/mixed/__init__.py                         2      2     0%   15-18
src/peft/tuners/mixed/model.py                          190    190     0%   14-341
src/peft/tuners/multitask_prompt_tuning/__init__.py       3      3     0%   15-19
src/peft/tuners/multitask_prompt_tuning/config.py        20     20     0%   15-61
src/peft/tuners/multitask_prompt_tuning/model.py         48     48     0%   15-120
src/peft/tuners/oft/__init__.py                           4      4     0%   15-20
src/peft/tuners/oft/config.py                            19     19     0%   15-117
src/peft/tuners/oft/layer.py                            185    185     0%   15-382
src/peft/tuners/oft/model.py                             18     18     0%   15-108
src/peft/tuners/p_tuning/__init__.py                      3      3     0%   15-19
src/peft/tuners/p_tuning/config.py                       16     16     0%   15-59
src/peft/tuners/p_tuning/model.py                        34     34     0%   17-130
src/peft/tuners/poly/__init__.py                          4      4     0%   15-20
src/peft/tuners/poly/config.py                           17     17     0%   15-87
src/peft/tuners/poly/layer.py                            89     89     0%   15-165
src/peft/tuners/poly/model.py                           111    111     0%   15-189
src/peft/tuners/poly/router.py                           38     38     0%   15-81
src/peft/tuners/prefix_tuning/__init__.py                 3      3     0%   15-19
src/peft/tuners/prefix_tuning/config.py                   9      9     0%   15-41
src/peft/tuners/prefix_tuning/model.py                   19     19     0%   17-80
src/peft/tuners/prompt_tuning/__init__.py                 3      3     0%   15-19
src/peft/tuners/prompt_tuning/config.py                  22     22     0%   15-84
src/peft/tuners/prompt_tuning/model.py                   30     30     0%   15-91
src/peft/tuners/tuners_utils.py                         467    467     0%   14-1121
src/peft/tuners/vblora/__init__.py                        4      4     0%   15-20
src/peft/tuners/vblora/config.py                         23     23     0%   15-175
src/peft/tuners/vblora/layer.py                         130    130     0%   15-249
src/peft/tuners/vblora/model.py                         198    198     0%   14-447
src/peft/tuners/vera/__init__.py                          4      4     0%   15-20
src/peft/tuners/vera/config.py                           24     24     0%   15-153
src/peft/tuners/vera/layer.py                           151    151     0%   15-294
src/peft/tuners/vera/model.py                           220    220     0%   15-473
src/peft/tuners/xlora/__init__.py                         3      3     0%   15-19
src/peft/tuners/xlora/classifier.py                      88     88     0%   14-195
src/peft/tuners/xlora/config.py                          35     35     0%   14-101
src/peft/tuners/xlora/layer.py                          110    110     0%   14-223
src/peft/tuners/xlora/model.py                          207    207     0%   14-519
src/peft/utils/__init__.py                                4      4     0%   21-55
src/peft/utils/constants.py                              39     39     0%   15-301
src/peft/utils/integrations.py                           67     67     0%   15-118
src/peft/utils/loftq_utils.py                           234    234     0%   18-411
src/peft/utils/merge_utils.py                            79     79     0%   15-268
src/peft/utils/other.py                                 346    346     0%   14-694
src/peft/utils/peft_types.py                             28     28     0%   19-87
src/peft/utils/save_and_load.py                         293    293     0%   14-588
-----------------------------------------------------------------------------------
TOTAL                                                 10813  10813     0%

================================================================================================================= slowest 10 durations =================================================================================================================
1.00s call     tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_13_BOFT_Same
0.01s setup    tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_13_BOFT_Same

(4 durations < 0.005s hidden.  Use -vv to show these durations.)
=============================================================================================================== short test summary info ================================================================================================================
FAILED tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_13_BOFT_Same - assert not True
FAILED tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_14_BOFT_Different - RuntimeError: CUDA error: an illegal memory access was encountered
=================================================================================================== 2 failed, 3641 deselected, 2 warnings in 16.21s ====================================================================================================

@BenjaminBossan
Member Author

So, I just ran the tests on WSL (using Ubuntu) and there, I have the same errors as you, if that helps you:

Great, thanks, so this appears to be a CUDA on Linux issue then (I also use Ubuntu). I assume that your previous test was on Windows?

Sorry to ping again @Zeju1997 but my CUDA knowledge is not deep enough to debug this issue, could you take a look?

@d-kleine
Contributor

d-kleine commented Nov 27, 2024

I assume that your previous test was on Windows?

Yes, all tests before the WSL using Ubuntu test (failing) were executed on Windows 11 (seemingly passing, despite gcc and ninja warnings), and I was using peft v0.13.2 for the tests on both platforms.

@d-kleine
Contributor

d-kleine commented Nov 28, 2024

I remember having fixed several issues with this RuntimeError: CUDA error: an illegal memory access was encountered error before. Typically, this error happens when inputs are not synchronized across devices (see pytorch/pytorch#21819, where there are several discussions on this issue). Therefore, I checked whether the inputs differ when using CPU or CUDA, starting from the first error FAILED tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_13_BOFT_Same - assert not True:

  • On CPU, the inputs did not differ (not raising the assertion)
  • On CUDA, the inputs were different (raising the assertion)
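This failure mode can be seen in isolation with a minimal sketch (not the PEFT test itself): an eager PyTorch op that mixes a CPU input with a CUDA weight fails loudly, whereas a custom kernel like fbd_cuda may instead read garbage or crash asynchronously, as in the illegal memory access above.

```python
import torch

# Minimal sketch (not the PEFT test itself): an op mixing a CPU input with
# a CUDA weight raises an explicit device-mismatch error in eager mode.
device_mismatch_error = None
if torch.cuda.is_available():
    w = torch.randn(10, 2, device="cuda")
    x = torch.arange(90, dtype=torch.float32).view(9, 10)  # accidentally left on CPU
    try:
        _ = x @ w
    except RuntimeError as e:
        device_mismatch_error = e  # "Expected all tensors to be on the same device..."
```

Custom CUDA extensions skip these eager-mode checks, which is what makes such device-placement bugs harder to trace back to their source.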

So I have made some changes so that the inputs will always be computed on the same devices with this logic:

class MultipleActiveAdaptersTester(unittest.TestCase):
    ...
    torch_device = infer_device() # changed: added

    def prepare_inputs_for_testing(self):
        X = torch.arange(90).view(9, 10).to(self.torch_device) # changed: moving to device
        return {"X": X}
    ...
    @parameterized.expand(MULTIPLE_ACTIVE_ADAPTERS_TEST_CASES)
    def test_merge_layers_multi(self, test_name, tuner_method, config_cls, config_kwargs_1, config_kwargs_2):
        torch.manual_seed(0)
        model = MLP(bias=tuner_method != "ia3").to(self.torch_device).eval() # changed: moving to device

        config_1 = config_cls(**config_kwargs_1)
        config_2 = config_cls(**config_kwargs_2)

        model = get_peft_model(model, config_1)

        dummy_input = self.prepare_inputs_for_testing()
        ....

This applies to all tests in the MultipleActiveAdaptersTester class: the model must be moved to the device (model = MLP(bias=tuner_method != "ia3").to(self.torch_device).eval()), but most importantly, each input must be moved to the same device too (X = torch.arange(90).view(9, 10).to(self.torch_device)).

With this logic, I was able to fix both errors.
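For reference, the infer_device helper used above can be thought of as roughly the following (a sketch only; the actual helper lives in the test utilities and may handle additional backends such as MPS or XPU):

```python
import torch

def infer_device() -> str:
    # Rough sketch of a device-inference helper: prefer CUDA when available,
    # otherwise fall back to CPU. The real test utility may cover more backends.
    return "cuda" if torch.cuda.is_available() else "cpu"

device = infer_device()
X = torch.arange(90).view(9, 10).to(device)
model_weight = torch.randn(10, 2, device=device)
# Both tensors now live on the same device, so X.float() @ model_weight is safe.
```

The key point is that the same device string is applied to both the model and every input, so the test exercises identical placement regardless of the machine it runs on.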

After this, there is another issue in test_multiple_active_adapters_merge_and_unmerge: torch.allclose raises an assertion failure here. Making the absolute tolerance less strict (e.g. from atol=1e-5 to atol=1e-4) fixed this issue too.

Log for WSL (using Ubuntu)
(peft) dk@Eclipse:/mnt/c/Users/dk/Desktop/peft$ pytest tests/test_custom_models.py -k "test_merge_layers_multi and boft"
========================================================================================================================= test session starts ==========================================================================================================================
platform linux -- Python 3.11.10, pytest-8.3.3, pluggy-1.5.0
rootdir: /mnt/c/Users/dk/Desktop/peft
configfile: pyproject.toml
plugins: cov-6.0.0
collected 3643 items / 3641 deselected / 2 selected

tests/test_custom_models.py ../home/dk/miniconda3/envs/peft/lib/python3.11/site-packages/coverage/control.py:892: CoverageWarning: No data was collected. (no-data-collected)
self._warn("No data was collected.", slug="no-data-collected")
                                                                                                                                                                                                                                 [100%]

=========================================================================================================================== warnings summary ===========================================================================================================================
../../../../../../home/dk/miniconda3/envs/peft/lib/python3.11/site-packages/accelerate/utils/other.py:220
/home/dk/miniconda3/envs/peft/lib/python3.11/site-packages/accelerate/utils/other.py:220: DeprecationWarning: numpy.core is deprecated and has been renamed to numpy._core. The numpy._core namespace contains private NumPy internals and its use is discouraged, as NumPy internals can change without warning in any release. In practice, most real-world usage of numpy.core is to access functionality in the public NumPy API. If that is the case, use the public NumPy API. If not, you are using NumPy internals. If you would still like to access an internal attribute, use numpy._core.multiarray.
  np.core.multiarray._reconstruct,

tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_13_BOFT_Same
/home/dk/miniconda3/envs/peft/lib/python3.11/site-packages/torch/utils/cpp_extension.py:1964: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform linux, python 3.11.10-final-0 ----------
(same 0% coverage table as in the failing run above)

========================================================================================================================= slowest 10 durations =========================================================================================================================
1.01s call     tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_13_BOFT_Same
0.02s call     tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_14_BOFT_Different
0.01s setup    tests/test_custom_models.py::MultipleActiveAdaptersTester::test_merge_layers_multi_13_BOFT_Same

(3 durations < 0.005s hidden.  Use -vv to show these durations.)
=========================================================================================================== 2 passed, 3641 deselected, 2 warnings in 17.34s ============================================================================================================

@BenjaminBossan
Member Author

Thanks a lot for investigating this further @d-kleine. The fixes you propose sound reasonable, although I wonder why the tolerance needs to be increased on GPU. Would you be interested in creating a PR with these changes?

@d-kleine
Contributor

d-kleine commented Nov 28, 2024

I wondered about that too and checked it with pytest -s tests/test_custom_models.py -k "test_multiple_active_adapters_merge_and_unmerge". Only a single value differs slightly, just enough to exceed the atol=1e-5 tolerance:

merged_combined_output:

tensor([
  [-1.9908e-05, -1.0824e+01],
  [-3.8504e-05, -1.0165e+01],
  [-4.0054e-05, -1.0125e+01],
  ...
  [-1.1893e-01, -2.1881e+00],
  [-4.9193e-01, -9.4532e-01], # Reference value
  [-1.1784e+00, -3.6786e-01]
], device='cuda:0')

combined_output:

tensor([
  [-1.9908e-05, -1.0824e+01],
  [-3.8504e-05, -1.0165e+01],
  [-4.0054e-05, -1.0125e+01],
  ...
  [-1.1893e-01, -2.1881e+00],
  [-4.9192e-01, -9.4534e-01],  # Slight difference here
  [-1.1784e+00, -3.6786e-01]
], device='cuda:0', grad_fn=<LogSoftmaxBackward0>)

So it needs to be increased from atol=1e-5 to atol=1e-4.

I don't know exactly why there is this slight difference - I assume it is due to floating-point precision on CUDA (see pytorch/pytorch#116966).
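The effect of the tolerance change can be reproduced with values modeled on the differing row above (illustrative numbers, not the exact test tensors):

```python
import torch

# Values modeled on the reference row above (illustrative, not the exact test data):
# the max absolute difference is ~2e-5, just above atol=1e-5 but well within atol=1e-4.
merged = torch.tensor([-0.49193, -0.94532])
combined = torch.tensor([-0.49192, -0.94534])

print(torch.allclose(merged, combined, atol=1e-5))  # False
print(torch.allclose(merged, combined, atol=1e-4))  # True
```

Note that torch.allclose checks |a - b| <= atol + rtol * |b| elementwise (with a default rtol of 1e-5), so a single element off by ~2e-5 is enough to fail the whole comparison at atol=1e-5.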

@BenjaminBossan
Member Author

Great, I think we can live with atol=1e-4. Do you want to create a PR for that?

@d-kleine
Contributor

Sure, will do later

Labels
bug Something isn't working