From `peft/src/peft/tuners/lora/config.py`, lines 112 to 113 at commit 162d7e5:

> initialization scaled by the LoRA rank for linear and layers. Setting the initialization to False leads to
> completely random initialization and is discouraged. Pass `'loftq'` to use LoftQ initialization. Pass
The documentation for `False` is not clear. Presumably "completely random" means the arrays will be uninitialized and hence contain whatever happens to be at the relevant memory locations?
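For reference, this is how I understand the documented options would be passed (a sketch only; the `r` and `lora_alpha` values are arbitrary):

```python
from peft import LoraConfig

cfg_default = LoraConfig(r=8, lora_alpha=16)                                 # True (default) init
cfg_gaussian = LoraConfig(r=8, lora_alpha=16, init_lora_weights="gaussian")  # Gaussian, scaled by rank
cfg_random = LoraConfig(r=8, lora_alpha=16, init_lora_weights=False)         # the "completely random" case
# 'loftq' is also accepted, but it additionally expects a loftq_config.
```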
To explain further: the default implementation initializes the LoRA A parameter randomly and the LoRA B parameter to zeros. This makes LoRA an identity transform at initialization, which can help with training. When setting `init_lora_weights=False`, the LoRA B weight is instead also randomly initialized, resulting in a non-identity transform.
For real LoRA training you almost never want that, which is why we discourage it. However, the weights are not uninitialized memory as with `torch.empty`, which seems to be what you suspected; they are still drawn from a proper random distribution.
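To make the difference concrete, here is a minimal sketch using a toy module (the module name `proj`, the rank, and the layer size are illustrative, not anything prescribed by PEFT):

```python
import torch
from torch import nn
from peft import LoraConfig, get_peft_model

# Toy base model; LoRA is attached to its single linear layer via target_modules.
class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(16, 16)

    def forward(self, x):
        return self.proj(x)

def lora_b_params(model):
    return [p for n, p in model.named_parameters() if "lora_B" in n]

# Default init: lora_A is random, lora_B is zeros, so the LoRA update B @ A
# is zero and the wrapped model behaves exactly like the base model.
default_model = get_peft_model(Toy(), LoraConfig(r=8, target_modules=["proj"]))
assert all(torch.all(p == 0) for p in lora_b_params(default_model))

# init_lora_weights=False: lora_B is also randomly initialized (drawn from a
# proper random distribution, not uninitialized memory like torch.empty), so
# the wrapped model already differs from the base model before any training.
random_model = get_peft_model(
    Toy(), LoraConfig(r=8, target_modules=["proj"], init_lora_weights=False)
)
assert any(torch.any(p != 0) for p in lora_b_params(random_model))
```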