
Parameterization "v" Related Questions in LoRA Training #386

Open
JFrankLee opened this issue Jun 27, 2024 · 0 comments

Hello,

I have some questions about the `v_parameterization` option when training and using LoRA (Low-Rank Adaptation) with a 512-base model. Here are my observations:

- Whether or not I enable `v_parameterization` during LoRA training seems to have no effect when that LoRA is later loaded.
- When generating with the 512-base model alone (no LoRA loaded), `v_parameterization` must be disabled for it to work correctly. But once the LoRA is loaded into that same 512-base model, `v_parameterization` must be enabled; otherwise, the generated images are pure noise.
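For context on what I understand the flag to change: assuming `v_parameterization` switches the training target from epsilon-prediction to v-prediction (Salimans & Ho, 2022), a minimal NumPy sketch of the two targets (the function and variable names here are illustrative, not this repo's actual API):

```python
import numpy as np

def prediction_target(x0, noise, alphas_cumprod, timesteps, v_parameterization=False):
    """Training target for the denoiser at the sampled timesteps.

    epsilon-prediction: the model predicts the added noise.
    v-prediction:       the model predicts v = sqrt(a_bar)*eps - sqrt(1-a_bar)*x0.
    (Illustrative sketch only; not the repo's implementation.)
    """
    # a_bar: cumulative product of alphas, broadcast over image dims (B, C, H, W)
    a_bar = alphas_cumprod[timesteps].reshape(-1, 1, 1, 1)
    if v_parameterization:
        return np.sqrt(a_bar) * noise - np.sqrt(1.0 - a_bar) * x0
    return noise
```

If the base model was trained to predict epsilon but the sampler interprets its output as v (or vice versa), decoding produces noise, which may be related to what I am seeing.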
Could you please provide some insights or explanations for these observations?
Thank you for your assistance!
