Separate the Wheat from the Chaff: Model Deficiency Unlearning via Parameter-Efficient Module Operation
Code for the paper "Separate the Wheat from the Chaff: Model Deficiency Unlearning via Parameter-Efficient Module Operation".
conda create -n EXT-SUB python=3.10
conda activate EXT-SUB
pip install -r requirements.txt
Alternatively, you can simply copy our conda environment:
# You should edit the `prefix` parameter to your local conda path in `environment.yaml`.
conda env create -f environment.yaml
conda activate ext_sub
The parameter-efficient modules (PEMs) in this work are built on PEFT. Further information is available in the HuggingFace documentation.
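For reference, here is a minimal sketch of attaching a LoRA module to a base model with PEFT. The hyperparameters below are illustrative, not the values used by our training script:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative LoRA settings; the actual values are configured in the training script.
base_model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable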
Run the training bash script with your custom parameters: `model_name_or_path`, `data_path`, and `output_dir`.
cd training
bash train_peft.sh
⚠️ Note: We leveraged the training code from Alpaca, which resizes the model's embedding to add a pad token. The tokenizer saved after training therefore includes a pad token that is not present in the original model (only the PEMs were saved). To use the pad token at test time, either resize the embedding once more or substitute the pad token with an existing token, as shown below.
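For example, either workaround can be applied at test time (a minimal sketch; the tokenizer path is a placeholder for wherever you saved it after training):

from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
tokenizer = AutoTokenizer.from_pretrained("Your/Saved/Tokenizer/Path")  # placeholder path

# Option 1: resize the embedding again so the added pad token has an embedding row.
model.resize_token_embeddings(len(tokenizer))

# Option 2: substitute the pad token with an existing token instead.
tokenizer.pad_token = tokenizer.eos_token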
python ext_sub.py \
--input_path_1 Your/Expert/PEMs/Path \
--input_path_2 Your/Anti-Expert/PEMs/Path \
--alpha 1.0 \
--method ext-sub \
--output_path Your/Output/PEMs/Path
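For intuition, below is a rough, unofficial sketch of the kind of operation performed per pair of PEM weight tensors: the anti-expert PEM is first decomposed relative to the expert PEM, and only the extracted deficiency component is subtracted, weighted by `alpha`. The exact decomposition here is our simplification for illustration; see `ext_sub.py` and the paper for the actual implementation.

import torch

def ext_sub_sketch(expert: torch.Tensor, anti_expert: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Simplified extraction-before-subtraction on one pair of PEM tensors."""
    e, a = expert.flatten().float(), anti_expert.flatten().float()
    # Extract: split the anti-expert into a component parallel to the expert
    # (shared general ability) and an orthogonal component (the deficiency).
    parallel = (torch.dot(a, e) / torch.dot(e, e)) * e
    deficiency = a - parallel
    # Subtract only the deficiency component from the expert.
    return (e - alpha * deficiency).reshape(expert.shape)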
The resulting PEMs can be loaded for inference like any other PEFT adapter:

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftConfig, PeftModel
import torch

model_name_or_path = ""  # path to the PEM / adapter directory

config = PeftConfig.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, model_name_or_path)
model = model.cuda()
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path, use_fast=False)
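A quick generation check (the prompt format below is illustrative; match it to the template used during training):

import torch

prompt = "Below is an instruction that describes a task.\n\n### Instruction:\nWhat is the capital of France?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))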
We include our evaluation scripts in the `eval` folder:
- TruthfulQA: our revised TruthfulQA repository, with customized prompt and adapter loading
- HaluEval: `Ext-Sub/eval/halueval_eval.py`
- Toxicity:
  - response generation: `Ext-Sub/eval/toxic_eval_generation.py`
  - score evaluation: `Ext-Sub/eval/toxic_eval_score.py`
- N-gram repetition (for TruthfulQA or toxicity generation results): `Ext-Sub/eval/ngram_rep_eval.py`; a conceptual sketch of this statistic follows below
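As referenced above, here is a minimal sketch of an n-gram repetition statistic. This is our illustrative definition; the evaluation script may compute it differently:

from collections import Counter

def ngram_repetition_rate(text: str, n: int = 4) -> float:
    """Fraction of n-grams in `text` that occur more than once."""
    tokens = text.split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams)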
We have made our trained LoRA checkpoints available through Google Drive.
The base model can be obtained from the HuggingFace model hub: huggyllama/llama-7b and huggyllama/llama-13b.
Please remember to modify the `base_model_name_or_path` in the `adapter_config.json` file to the local path on your system.
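For example, the field can be updated programmatically (the adapter path below is a placeholder):

import json

config_path = "Your/Downloaded/PEMs/Path/adapter_config.json"  # placeholder path
with open(config_path) as f:
    config = json.load(f)
config["base_model_name_or_path"] = "/local/path/to/huggyllama/llama-7b"  # your local copy
with open(config_path, "w") as f:
    json.dump(config, f, indent=2)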
We also release our models and data on the HuggingFace hub under the Ext-Sub organization.
| Model | Training Data | Description |
|---|---|---|
| huggyllama/llama-7b | - | (untrained) raw llama-7b model |
| Ext-Sub/llama-7b_alpaca-gpt4_lora | Ext-Sub/alpaca-gpt4 | + (Expert) llama-7b trained on raw alpaca-gpt4 |
| Ext-Sub/llama-7b_alpaca-gpt4-untruthful_lora | Ext-Sub/alpaca-gpt4-untruthful | - (Anti-expert) llama-7b trained on generated untruthful alpaca-gpt4 |
| Ext-Sub/llama-7b_wizardlm_lora | Ext-Sub/wizardlm-70k | + (Expert) llama-7b trained on raw WizardLM |
| Ext-Sub/llama-7b_wizardlm_untruthful_lora | Ext-Sub/wizardlm-70k-untruthful | - (Anti-expert) llama-7b trained on generated untruthful WizardLM |
| Ext-Sub/llama-7b_toxic_lora | Ext-Sub/toxic_instruct | - (Anti-expert) llama-7b trained on generated toxic data |
| ----------- | ----------- | ----------- |
| huggyllama/llama-13b | - | (untrained) raw llama-13b model |
| Ext-Sub/llama-13b_alpaca-gpt4_lora | Ext-Sub/alpaca-gpt4 | + (Expert) llama-13b trained on raw alpaca-gpt4 |
| Ext-Sub/llama-13b_alpaca-gpt4-untruthful_lora | Ext-Sub/alpaca-gpt4-untruthful | - (Anti-expert) llama-13b trained on generated untruthful alpaca-gpt4 |
| Ext-Sub/llama-13b_toxic_lora | Ext-Sub/toxic_instruct | - (Anti-expert) llama-13b trained on generated toxic data |
Unlike the preference data used in DPO or PPO, our method favors contrasting positive and negative examples with larger differences. This makes generating negative samples more straightforward and convenient, since hard negatives are not required.
@inproceedings{hu2024separate,
title={Separate the wheat from the chaff: Model deficiency unlearning via parameter-efficient module operation},
author={Hu, Xinshuo and Li, Dongfang and Hu, Baotian and Zheng, Zihao and Liu, Zhenyu and Zhang, Min},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={38},
number={16},
pages={18252--18260},
year={2024}
}
This repository is released under the MIT license.