Separate the Wheat from the Chaff: Model Deficiency Unlearning via Parameter-Efficient Module Operation

✨ Overview

Code for the AAAI 2024 paper "Separate the Wheat from the Chaff: Model Deficiency Unlearning via Parameter-Efficient Module Operation": a model merging method for deficiency unlearning, compatible with HuggingFace PEFT (LoRA).

💻 Usage

🌈 Environment

conda create -n EXT-SUB python=3.10
conda activate EXT-SUB
pip install -r requirements.txt

or you can simply copy our conda environment:

# You should edit the `prefix` parameter to your local conda path in `environment.yaml`.
conda env create -f environment.yaml
conda activate ext_sub

The Parameter-Efficient Modules (PEMs) used in this work are built on PEFT. Further information is available in the HuggingFace documentation.

🔥 Train

Run the training bash script after setting the custom parameters model_name_or_path, data_path, and output_dir:

cd training
bash train_peft.sh

⚠️ Note: We leverage the training code from Alpaca, which resizes the model's embeddings to add a pad token. The tokenizer saved after training therefore contains a pad token that the original model lacks (only the PEMs are saved, not the resized embeddings). To use the pad token at test time, either resize the embeddings again or substitute an existing token for the pad token, as in the sketch below.
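A minimal sketch of both options, assuming the huggyllama/llama-7b base and its <unk> token; any existing special token would work for option 1:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b", use_fast=False)

# Option 1: reuse an existing token as the pad token (no embedding resize needed).
tokenizer.pad_token = tokenizer.unk_token

# Option 2: re-add a pad token and resize the embeddings once more,
# mirroring the Alpaca training code (requires the model to be loaded):
# tokenizer.add_special_tokens({"pad_token": "[PAD]"})
# model.resize_token_embeddings(len(tokenizer))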

🔨 PEMs Operation

python ext_sub.py \
  --input_path_1  Your/Expert/PEMs/Path \
  --input_path_2  Your/Anti-Expert/PEMs/Path \
  --alpha 1.0 \
  --method ext-sub \
  --output_path  Your/Output/PEMs/Path
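For intuition, the sketch below illustrates the underlying PEM arithmetic on a single LoRA-adapted weight matrix using plain subtraction of the composed deltas. This is a toy baseline, not the released operator: the paper's ext-sub method extracts the deficiency-specific component of the anti-expert module before subtracting (see ext_sub.py and the paper for the actual operation). Shapes and scaling here are arbitrary assumptions.

import torch

# LoRA parameterizes each weight update as delta_W = B @ A with rank r.
d_out, d_in, r = 4096, 4096, 8
A1, B1 = torch.randn(r, d_in), torch.randn(d_out, r)  # expert PEM factors
A2, B2 = torch.randn(r, d_in), torch.randn(d_out, r)  # anti-expert PEM factors

alpha = 1.0  # weight on the anti-expert module, as in ext_sub.py's --alpha

# Naive PEM subtraction: remove the anti-expert behavior from the expert.
delta_W = (B1 @ A1) - alpha * (B2 @ A2)

# ext-sub does not subtract the whole anti-expert delta; it first extracts
# the deficiency component and subtracts only that part.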

🚀 Load Model

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftConfig, PeftModel


model_name_or_path = ""  # path to your PEM (LoRA) checkpoint

config = PeftConfig.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    torch_dtype=torch.float16,
    device_map="auto",  # already places the model on GPU, so no extra .cuda() call is needed
)
model = PeftModel.from_pretrained(model, model_name_or_path)

tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path, use_fast=False)
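
Continuing the snippet above, the merged model can then be used like any other causal LM; a minimal generation example (the prompt and decoding settings are arbitrary):

inputs = tokenizer("Below is an instruction. Write a response.", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))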

📊 Evaluation

We include our evaluation scripts in the eval folder.

📁 Download

Google Drive

We have made our trained LoRA checkpoints available through Google Drive.

The base model can be obtained from the HuggingFace model hub: huggyllama/llama-7b and huggyllama/llama-13b. Please remember to modify the base_model_name_or_path in the adapter_config.json file to the local path on your system.
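For example, the following sketch rewrites that field in a downloaded checkpoint (both paths are placeholders):

import json

config_path = "path/to/downloaded_lora/adapter_config.json"  # placeholder
with open(config_path) as f:
    cfg = json.load(f)
cfg["base_model_name_or_path"] = "/local/models/llama-7b"  # your local base model path
with open(config_path, "w") as f:
    json.dump(cfg, f, indent=2)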

HuggingFace

We also release our models and data on the HuggingFace Hub under the Ext-Sub organization.

| Model | Training Data | Description |
| --- | --- | --- |
| huggyllama/llama-7b | - | (untrained) raw llama-7b model |
| Ext-Sub/llama-7b_alpaca-gpt4_lora | Ext-Sub/alpaca-gpt4 | + (Expert) llama-7b trained on raw alpaca-gpt4 |
| Ext-Sub/llama-7b_alpaca-gpt4-untruthful_lora | Ext-Sub/alpaca-gpt4-untruthful | - (Anti-expert) llama-7b trained on generated untruthful alpaca-gpt4 |
| Ext-Sub/llama-7b_wizardlm_lora | Ext-Sub/wizardlm-70k | + (Expert) llama-7b trained on raw WizardLM |
| Ext-Sub/llama-7b_wizardlm_untruthful_lora | Ext-Sub/wizardlm-70k-untruthful | - (Anti-expert) llama-7b trained on generated untruthful WizardLM |
| Ext-Sub/llama-7b_toxic_lora | Ext-Sub/toxic_instruct | - (Anti-expert) llama-7b trained on generated toxic data |
| huggyllama/llama-13b | - | (untrained) raw llama-13b model |
| Ext-Sub/llama-13b_alpaca-gpt4_lora | Ext-Sub/alpaca-gpt4 | + (Expert) llama-13b trained on raw alpaca-gpt4 |
| Ext-Sub/llama-13b_alpaca-gpt4-untruthful_lora | Ext-Sub/alpaca-gpt4-untruthful | - (Anti-expert) llama-13b trained on generated untruthful alpaca-gpt4 |
| Ext-Sub/llama-13b_toxic_lora | Ext-Sub/toxic_instruct | - (Anti-expert) llama-13b trained on generated toxic data |

🤔 Insight

Unlike the preference pairs used in DPO or PPO, our method favors contrasting positive and negative examples with larger differences. This makes generating negative samples more straightforward and convenient, since hard negatives are not required.

🔗 Cite

@inproceedings{hu2024separate,
  title={Separate the wheat from the chaff: Model deficiency unlearning via parameter-efficient module operation},
  author={Hu, Xinshuo and Li, Dongfang and Hu, Baotian and Zheng, Zihao and Liu, Zhenyu and Zhang, Min},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={16},
  pages={18252--18260},
  year={2024}
}

📜 License

This repository is released under the MIT License.
