The output from running evaluate.py is garbled #22

Open
mingtouyizu opened this issue Jul 9, 2024 · 2 comments

Comments

@mingtouyizu

[Image attachment: screenshot showing the garbled output produced by evaluate.py]

@gordonhu608
Collaborator

Thanks for your interest in our work. I have not encountered this issue before. Did you follow the correct procedure for preparing and downloading the model weights? See here:

BLIVA/README.md, lines 305 to 317 in d99de3a:

## Prepare Weight
1. BLIVA Vicuna 7B
Our Vicuna version of the model is released [here](https://huggingface.co/mlpc-lab/BLIVA_Vicuna). Download our model weight and specify its path in the model config [here](bliva/configs/models/bliva_vicuna7b.yaml#L8) at line 8.
The LLM we used is the v0.1 version of Vicuna-7B. To prepare Vicuna's weight, please refer to our instructions [here](PrepareVicuna.md). Then set the path to the Vicuna weight in the model config file [here](bliva/configs/models/bliva_vicuna7b.yaml#L21) at line 21.
2. BLIVA FlanT5 XXL (Available for Commercial Use)
The FlanT5 version of the model is released [here](https://huggingface.co/mlpc-lab/BLIVA_FlanT5). Download our model weight and specify its path in the model config [here](bliva/configs/models/bliva_flant5xxl.yaml#L8) at line 8.
The FlanT5 LLM weight will automatically be downloaded from Hugging Face when running our inference code.
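For reference, the two paths mentioned above are set in the model config YAML. Below is a minimal sketch of what the edited entries might look like; the key names (`finetuned`, `llm_model`) and paths are illustrative assumptions, so check the actual keys at lines 8 and 21 of bliva/configs/models/bliva_vicuna7b.yaml before editing.

```yaml
# bliva/configs/models/bliva_vicuna7b.yaml  (sketch; key names are assumptions)
model:
  arch: bliva_vicuna

  # line 8: path to the downloaded BLIVA Vicuna 7B checkpoint
  finetuned: "/path/to/bliva_vicuna7b.pth"

  # line 21: path to the prepared Vicuna-7B v0.1 weights
  llm_model: "/path/to/vicuna-7b-v0.1"
```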

@mingtouyizu
Author

Thanks, I've got it.
