As people have mentioned before, LLaMA is very slow due to its huge size. So why don't we try a different model, for example "phi3" from Microsoft? For day-to-day use, LLaMA and phi3 won't differ much, but phi3 is lightweight and much faster than LLaMA: it has only 3.8B parameters. If we want to make this usable, LLaMA won't be a good choice.

Try `ollama run phi3` and you will get an idea.
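Besides the CLI, a locally running Ollama server also exposes phi3 over its REST API, which makes it easy to compare models from a script. A minimal sketch, assuming Ollama is installed and serving on its default port (11434); `build_generate_request` and `generate` are illustrative helper names, not part of any library:

```python
import json
import urllib.request

def build_generate_request(model, prompt):
    # Payload shape for Ollama's /api/generate endpoint;
    # stream=False returns the full completion in one JSON object.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="phi3", host="http://localhost:11434"):
    """Send a one-shot prompt to a locally running Ollama server."""
    payload = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires `ollama run phi3` (or `ollama pull phi3`) to have been run first.
    print(generate("Summarize the trade-off between model size and speed."))
```

Swapping `model="phi3"` for `"llama3"` in the same call is a quick way to benchmark the latency difference the comment above describes.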
Hey there @SakthiMahendran, thanks for the input on this. I really appreciate your suggestion. Any thoughts on fine-tuning phi3? Perhaps we can start from there. I have never fine-tuned it before, and I suppose the fine-tuning process is different from LLaMA's.