Replies: 1 comment
-
manyoso and I are the core developers of this project, and I don't think either of us is an expert at fine-tuning. My best recommendation is to check out the #finetuning-and-sorcery channel in the KoboldAI Discord; the people there are very knowledgeable about this kind of thing. The general workflow is to train a local model or a LoRA with HF transformers. If your GPU is not powerful, you are probably interested in QLoRA. Once you have a fine-tuned model, you can convert it to GGUF and quantize it using the tools in the llama.cpp repo.
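Since the thread doesn't spell the steps out, here is a minimal QLoRA sketch using HF transformers, peft, bitsandbytes, and datasets. The base model name, the hyperparameters, and the `train.jsonl` layout (an OpenAI-style `messages` list per line) are all assumptions for illustration, not project recommendations:

```python
# Minimal QLoRA fine-tuning sketch (assumptions: model name, hyperparameters,
# and a train.jsonl whose records carry an OpenAI-style "messages" list).
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE = "meta-llama/Llama-2-7b-hf"  # illustrative; any HF causal LM works

# Load the frozen base model in 4-bit (the "Q" in QLoRA) so it fits on a
# modest GPU; only the small LoRA adapter weights below get trained.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tok = AutoTokenizer.from_pretrained(BASE)
tok.pad_token = tok.eos_token  # many base tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(
    BASE, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
))

def tokenize(example):
    # Flatten one chat record into a single training string; adapt the
    # template to whatever prompt format your base model expects.
    text = "\n".join(m["content"] for m in example["messages"])
    return tok(text, truncation=True, max_length=512)

data = load_dataset("json", data_files="train.jsonl")["train"].map(
    tokenize, remove_columns=["messages"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="qlora-out",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-4,
        bf16=True,  # use fp16=True instead on pre-Ampere GPUs
        logging_steps=10,
    ),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
model.save_pretrained("qlora-adapter")  # saves only the adapter weights
```

After training, merge the adapter back into the base model (peft's `merge_and_unload()` does this) and then run the conversion and quantization tools in the llama.cpp repo to produce a quantized GGUF; the exact script and binary names vary between llama.cpp versions, so check that repo's README.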
-
Hello folks!
First, THANK YOU for all your work in the space. Providing an open-source, easily runnable model with great UX really lowers the barrier to entry for a lot of people who want to use GPT locally. Every single person who worked on this is amazing 😄
I'm interested in fine-tuning a model. I have a bunch of data in the generic format `System message\nUser message\nIdeal response`, which is pretty much what OpenAI's API docs say is needed for fine-tuning. The goal is that, because I have this data, the model can be slightly more accurate when given prompts similar to those in my tuning dataset.
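For concreteness, data in that shape maps directly onto the JSONL chat format used by OpenAI-style fine-tuning tooling. A minimal sketch, assuming a `raw_data.txt` where each example is three lines (system, user, ideal response) separated by blank lines; both file names are hypothetical:

```python
# Hypothetical converter from "System\nUser\nIdeal response" rows to JSONL
# chat records; raw_data.txt and train.jsonl are illustrative file names.
import json

with open("raw_data.txt") as src, open("train.jsonl", "w") as dst:
    for row in src.read().strip().split("\n\n"):  # blank line between examples
        system, user, ideal = row.split("\n", 2)
        dst.write(json.dumps({"messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
            {"role": "assistant", "content": ideal},
        ]}) + "\n")
```

The resulting `train.jsonl` matches the record layout assumed in the training sketch in the reply above.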
Is it possible to fine-tune a model in any way with gpt4all? If not, does anyone know of a similar open source project where it's possible or easy?
Many thanks!