Remove hf_auth_token use #1822

Open · wants to merge 1 commit into main
Conversation

Abhishek-Varma (Contributor)
-- This commit removes `--hf_auth_token` uses from vicuna.py.
-- It adds llama2 models based on daryl149's HF.

Signed-off-by: Abhishek Varma <[email protected]>
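
For context, a hypothetical sketch of the kind of change the commit message describes (not the actual vicuna.py diff; the flag and default names are assumptions):

```python
import argparse

parser = argparse.ArgumentParser(description="llama2/vicuna runner (illustrative)")
# Removed (hypothetical): the auth-token flag that gated meta-llama repos required.
# parser.add_argument("--hf_auth_token", type=str, default=None,
#                     help="Hugging Face auth token for gated repos")
parser.add_argument(
    "--hf_model_path",
    type=str,
    default="daryl149/llama-2-7b-hf",  # ungated mirror, so no token is needed
    help="Hugging Face repo to pull the tokenizer/config from",
)
args = parser.parse_args()
```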

@Abhishek-Varma (Contributor, Author)

Currently marking it as draft since 13B and 70B paths need testing.
CC: @powderluv

@powderluv (Contributor)

If we only download the mlir, we wouldn't hit the token, right?

@Abhishek-Varma (Contributor, Author)

> If we only download the mlir, we wouldn't hit the token, right?

I did try that, but during the run I saw that we would still hit the issue, because we're using a tokenizer to decode each generated token, and that tokenizer is instantiated from the HF repo we use.
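
A minimal sketch of why the token is still hit, assuming the transformers AutoTokenizer API; the token ids below are placeholders:

```python
import os
from transformers import AutoTokenizer

# Instantiating the tokenizer from the gated repo is what requires the auth
# token, even if the compiled MLIR itself is only downloaded, never regenerated.
tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    use_auth_token=os.environ.get("HF_AUTH_TOKEN"),  # fails for users without access
)

# Every token id produced by the compiled model is decoded back to text here.
generated_ids = [1, 15043, 29892, 3186]  # placeholder ids for illustration
print(tokenizer.decode(generated_ids, skip_special_tokens=True))
```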

@Abhishek-Varma (Contributor, Author)

> If we only download the mlir, we wouldn't hit the token, right?
>
> I did try that, but during the run I saw that we would still hit the issue, because we're using a tokenizer to decode each generated token, and that tokenizer is instantiated from the HF repo we use.

Even this would work, since we're blocking the IR generation anyway.
It would then essentially download only the tokenizer's config files from daryl149/llama-2-7b-hf, while we already have the MLIR generated from meta-llama/Llama-2-7b-chat-hf.
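
A minimal sketch of that split, assuming the transformers AutoTokenizer API; the MLIR filename is a hypothetical placeholder:

```python
from transformers import AutoTokenizer

# Tokenizer config files come from the ungated mirror, so no HF auth token is needed.
tokenizer = AutoTokenizer.from_pretrained("daryl149/llama-2-7b-hf")

# The IR was already generated from meta-llama/Llama-2-7b-chat-hf and is simply
# downloaded as a prebuilt artifact; the gated repo is never touched at runtime.
precompiled_mlir = "llama2_7b_chat_hf.mlir"  # hypothetical artifact name
```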

I verified it on CPU for llama2 7B.

With this PR we don't need to maintain config files for the tokenizer, but we're changing the base HF repo, and that would impact the workflow once the IR generation is given a green signal.

With the other PR, on the other hand, we only need to incur the overhead of maintaining the config files, keeping the rest of the infra the same.

Abhishek-Varma marked this pull request as ready for review · September 8, 2023, 14:23