Local LLM using Ollama without OpenAI #731
-
Hey @lindamathez 🌟, I'm here to help you with your technical issues and questions. Let's tackle this bug together and get everything up and running smoothly!

To configure `paperqa` to use local `Ollama` models instead of the OpenAI API, point the `llm`, `summary_llm`, and `embedding` settings at your local `Ollama` deployment. By following these steps, you should be able to use `paperqa` without any OpenAI models.
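A minimal sketch of that configuration, following the locally hosted model pattern from the paper-qa README (the model names and the default `api_base` of `http://localhost:11434` are assumptions for a stock local Ollama install):

```python
from paperqa import Settings, ask

# Route LLM calls through a local Ollama server instead of OpenAI.
local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3.2",
            "litellm_params": {
                "model": "ollama/llama3.2",
                "api_base": "http://localhost:11434",
            },
        }
    ]
}

answer = ask(
    "What does this paper conclude?",  # placeholder question
    settings=Settings(
        llm="ollama/llama3.2",
        llm_config=local_llm_config,
        summary_llm="ollama/llama3.2",
        summary_llm_config=local_llm_config,
        embedding="ollama/nomic-embed-text",
    ),
)
```

The `llm_config` and `summary_llm_config` dicts tell the underlying LiteLLM router to send those calls to the local server rather than to OpenAI.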
-
It finally worked by adding one more setting; a sketch of a likely candidate follows.
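The exact snippet was lost from the page, so the following is only a plausible candidate: in recent paper-qa releases the agent model has its own OpenAI default, so a fully local setup also has to override it via `AgentSettings` (the import path and field names below are assumptions based on that API):

```python
from paperqa import Settings
from paperqa.settings import AgentSettings

local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3.2",
            "litellm_params": {
                "model": "ollama/llama3.2",
                "api_base": "http://localhost:11434",
            },
        }
    ]
}

settings = Settings(
    llm="ollama/llama3.2",
    llm_config=local_llm_config,
    summary_llm="ollama/llama3.2",
    summary_llm_config=local_llm_config,
    embedding="ollama/nomic-embed-text",
    # The agent LLM defaults to an OpenAI model, so it must be
    # pointed at the local model too for a fully local setup.
    agent=AgentSettings(
        agent_llm="ollama/llama3.2",
        agent_llm_config=local_llm_config,
    ),
)
```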
-
Hi all,
I was trying to get started with `paperqa` using `Ollama` instead of the `OpenAI` API, as I have limitations on the use of OpenAI. I have downloaded `llama3.2` (for `llm` and `summary_llm`) and `nomic-embed-text` (for `embedding`) from `Ollama`. My goal is to use only local models and not to use any OpenAI models at all.
My environment looks like this:
My code looks like this:
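The original code block is missing here; a hypothetical reconstruction of the setup described above might look like this (model names taken from the surrounding text):

```python
from paperqa import Settings, ask

# Hypothetical reconstruction: llm, summary_llm, and embedding all point
# at locally downloaded Ollama models, with no OpenAI settings anywhere.
settings = Settings(
    llm="ollama/llama3.2",
    summary_llm="ollama/llama3.2",
    embedding="ollama/nomic-embed-text",
)

answer = ask("What does this paper conclude?", settings=settings)
```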
I wonder how to decouple `paperqa` from the `OpenAI` API. I added the `Ollama` information for the keywords `llm`, `summary_llm`, and `embedding`, but it still seems to be trying to access OpenAI. My error is the following:

It seems that the code is still trying to access `OpenAI` despite the fact that I've provided the path to the local deployment of my `Ollama` models.
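For what it's worth, a quick way to check which model settings are still at their OpenAI defaults (a sketch, assuming a recent paper-qa release where `Settings` nests the agent options under `.agent`):

```python
from paperqa import Settings

s = Settings(
    llm="ollama/llama3.2",
    summary_llm="ollama/llama3.2",
    embedding="ollama/nomic-embed-text",
)

# Any of these that still prints an OpenAI model name is a
# remaining dependency on the OpenAI API.
print(s.llm, s.summary_llm, s.embedding)
print(s.agent.agent_llm)  # the agent model has a separate default
```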
Many thanks for any hints. Cheers :)