
lora finetuning error #114

Open
MoYang94 opened this issue Apr 8, 2024 · 0 comments
MoYang94 commented Apr 8, 2024

When I run the following command, an error is reported.

[screenshot of the error message]
My command is as follows:

```
deepspeed --include localhost:0,1 --master_port 65500 \
  --module tevatron.retriever.driver.train \
  --deepspeed ../deepspeed/ds_zero3_config.json \
  --output_dir retriever-mistral-v0.1 \
  --model_name_or_path ../../e5-mistral-7b-instruct/ \
  --lora \
  --lora_target_modules q_proj,k_proj,v_proj,o_proj,down_proj,up_proj,gate_proj \
  --save_steps 50 \
  --dataset_path ../../data/train_demo.jsonl \
  --query_prefix "Query: " \
  --passage_prefix "Passage: " \
  --bf16 \
  --pooling eos \
  --append_eos_token \
  --normalize \
  --temperature 0.01 \
  --per_device_train_batch_size 8 \
  --gradient_checkpointing \
  --train_group_size 16 \
  --learning_rate 1e-4 \
  --query_max_len 32 \
  --passage_max_len 156 \
  --num_train_epochs 1 \
  --logging_steps 10 \
  --overwrite_output_dir \
  --gradient_accumulation_steps 4 \
  --lora_name_or_path ../../e5-mistral-7b-instruct
```
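For context, `--lora_target_modules` takes a comma-separated string naming the submodules that LoRA adapters are attached to. A minimal sketch of how a trainer would typically turn that flag into the list a PEFT `LoraConfig` expects (the variable names here are illustrative, not Tevatron's actual internals):

```python
# Hypothetical illustration: parsing the --lora_target_modules flag value
# into a list of module names (as PEFT's LoraConfig target_modules expects).
lora_target_modules = "q_proj,k_proj,v_proj,o_proj,down_proj,up_proj,gate_proj"

# Split on commas and strip stray whitespace around each name.
target_modules = [m.strip() for m in lora_target_modules.split(",")]

print(target_modules)
# → ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'down_proj', 'up_proj', 'gate_proj']
```

Note that the command passes both `--lora` (train new adapters) and `--lora_name_or_path` (load existing adapters), and `--lora_name_or_path` points at the full `e5-mistral-7b-instruct` model directory rather than a LoRA adapter checkpoint; whether that combination is valid depends on how Tevatron resolves the two flags.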
