
Evaluation Performance on LLaVA-Lora #375

Open
XindiWu opened this issue Oct 28, 2024 · 1 comment

Comments


XindiWu commented Oct 28, 2024

I'm evaluating the LLaVA-Lora version (https://huggingface.co/liuhaotian/llava-v1.5-7b-lora/discussions), but the performance seems unusually low. Do you know if this is supported in the lmms-eval pipeline?

kcz358 (Collaborator) commented Oct 29, 2024

I have never tested a LoRA version of LLaVA with lmms-eval. I remember there was some discussion about this in issue #241. Since llava-v1.5 is a relatively old model, I am not sure which environment or dependencies you will need to run it under lmms-eval.
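For context, a LoRA checkpoint stores only the low-rank adapter matrices, so if the evaluation harness loads the checkpoint without applying (or merging) the adapters, it effectively scores the base model, which can explain unusually low numbers. A minimal numeric sketch of the standard LoRA formulation (this is an illustration of the math, not this repo's loader; all names here are hypothetical):

```python
import numpy as np

# Standard LoRA: effective weight W_eff = W + (alpha / r) * B @ A,
# where A (r x d) and B (d x r) are the low-rank adapter matrices.
d, r, alpha = 8, 2, 4
rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))   # frozen base weight
A = rng.normal(size=(r, d))   # adapter down-projection
B = rng.normal(size=(d, r))   # adapter up-projection

W_merged = W + (alpha / r) * (B @ A)

x = rng.normal(size=d)
# Applying the adapter on the fly gives the same result as the merged weight,
# so merging before evaluation is mathematically equivalent ...
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))
assert np.allclose(W_merged @ x, y_adapter)
# ... whereas W @ x alone (adapters silently dropped) is a different model.
```

If the loader silently falls back to `W @ x`, the scores you see are those of the un-adapted base model, so it is worth verifying that the adapter weights are actually merged or attached before evaluation.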
