PeftModelForSequenceClassification.add_adapter() got an unexpected keyword argument 'low_cpu_mem_usage' #2246

Closed

TristanDonze opened this issue Dec 2, 2024 · 2 comments

TristanDonze commented Dec 2, 2024

Hello!

System Info

peft==0.13.2
transformers==4.46.3
python 3.12.5
VS Code notebook
MacBook Air (M1), macOS 15.2 beta

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder
  • My own task or dataset (give details below)

Reproduction

I copied and pasted the following tutorial from Hugging Face: P-tuning for sequence classification.
Here's the first part of the code:

from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    TrainingArguments,
    Trainer,
)
from peft import (
    get_peft_config,
    get_peft_model,
    get_peft_model_state_dict,
    set_peft_model_state_dict,
    PeftType,
    PromptEncoderConfig,
)
from datasets import load_dataset
import evaluate
import torch

model_name_or_path = "roberta-large"
task = "mrpc"
num_epochs = 20
lr = 1e-3
batch_size = 32
dataset = load_dataset("glue", task)
dataset["train"][0]
metric = evaluate.load("glue", task)
import numpy as np

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return metric.compute(predictions=predictions, references=labels)

if any(k in model_name_or_path for k in ("gpt", "opt", "bloom")):
    padding_side = "left"
else:
    padding_side = "right"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side=padding_side)
if getattr(tokenizer, "pad_token_id") is None:
    tokenizer.pad_token_id = tokenizer.eos_token_id


def tokenize_function(examples):
    # max_length=None => use the model max length (it's actually the default)
    outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
    return outputs

tokenized_datasets = dataset.map(
    tokenize_function,
    batched=True,
    remove_columns=["idx", "sentence1", "sentence2"],
)

tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, padding="longest")
peft_config = PromptEncoderConfig(task_type="SEQ_CLS", num_virtual_tokens=20, encoder_hidden_size=128)
model = AutoModelForSequenceClassification.from_pretrained(model_name_or_path, return_dict=True)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()

The line:

model = get_peft_model(model, peft_config)

throws the error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[9], line 2
      1 model = AutoModelForSequenceClassification.from_pretrained(model_name_or_path, return_dict=True)
----> 2 model = get_peft_model(model, peft_config)
      3 model.print_trainable_parameters()

File /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/peft/mapping.py:193, in get_peft_model(model, peft_config, adapter_name, mixed, autocast_adapter_dtype, revision)
    191 if peft_config.is_prompt_learning:
    192     peft_config = _prepare_prompt_learning_config(peft_config, model_config)
--> 193 return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](
    194     model, peft_config, adapter_name=adapter_name, autocast_adapter_dtype=autocast_adapter_dtype
    195 )

File /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/peft/peft_model.py:1378, in PeftModelForSequenceClassification.__init__(self, model, peft_config, adapter_name, **kwargs)
   1375 def __init__(
   1376     self, model: torch.nn.Module, peft_config: PeftConfig, adapter_name: str = "default", **kwargs
   1377 ) -> None:
-> 1378     super().__init__(model, peft_config, adapter_name, **kwargs)
   1380     classifier_module_names = ["classifier", "score"]
   1381     if self.modules_to_save is None:

File /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/peft/peft_model.py:165, in PeftModel.__init__(self, model, peft_config, adapter_name, autocast_adapter_dtype, low_cpu_mem_usage)
    163     self._peft_config = {adapter_name: peft_config}
    164     self.base_model = model
--> 165     self.add_adapter(adapter_name, peft_config, low_cpu_mem_usage=low_cpu_mem_usage)
    166 else:
    167     self._peft_config = None

TypeError: PeftModelForSequenceClassification.add_adapter() got an unexpected keyword argument 'low_cpu_mem_usage'
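
The failure doesn't depend on the dataset pipeline at all. For reference, here is a stripped-down sketch (same peft==0.13.2 environment, using only the calls from the script above) that hits the same TypeError:

from transformers import AutoModelForSequenceClassification
from peft import PromptEncoderConfig, get_peft_model

# Any sequence-classification model triggers it; roberta-large matches the report above.
model = AutoModelForSequenceClassification.from_pretrained("roberta-large")
peft_config = PromptEncoderConfig(task_type="SEQ_CLS", num_virtual_tokens=20, encoder_hidden_size=128)
model = get_peft_model(model, peft_config)  # raises: add_adapter() got an unexpected keyword argument 'low_cpu_mem_usage'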

Expected behavior

The script should run without errors and print the trainable-parameter summary, as in the tutorial.

BenjaminBossan (Member) commented

Yes, sorry, this was an oversight, but it has been fixed in #2156. The fix is not yet released, so you can either install PEFT from source or wait a little (the 0.14.0 release is planned for this week).
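
If you want the fix before the release, here is a minimal sketch of installing from the main branch and verifying which version is installed (run the pip command in a shell):

# pip install git+https://github.com/huggingface/peft.git
import peft
print(peft.__version__)  # a source install reports a dev version string, e.g. "0.14.0.dev0"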

TristanDonze (Author) commented

Ok, thanks for your answer :)
