bug: Ollama API incorrectly handled by GenAIScript (causes 404s) #814
Comments
Hey there, `OLLAMA_HOST` does not work for me either, but I used:

```sh
OLLAMA_API_BASE=https://<domain or ip>:<port>/v1
GENAISCRIPT_DEFAULT_MODEL="ollama:qwen2.5:7b-instruct-q8_0"
GENAISCRIPT_DEFAULT_SMALL_MODEL="ollama:qwen2.5:7b-instruct-q8_0"
```

This works for me.
Thanks for the details. We'll look into this.
part 1 fixed in 1.71.0 @sammcj @DimitriGilbert
@sammcj do you know models that don't work with the OpenAI compatibility layer?
Do you mean the OpenAI-compatible API? If so, all models work with it, but the native API is better as it supports all features, like hot model loading, setting the context size, etc.
I see. Thanks for the clarification. |
When following the introduction docs, using GenAIScript with any Ollama model fails.
This seems to stem from two issues:
The standard way of setting the Ollama host is through the `OLLAMA_HOST` environment variable, for example:
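A minimal sketch, assuming a local install on Ollama's default port (values are illustrative):

```sh
# Illustrative value; 11434 is Ollama's default port
export OLLAMA_HOST=http://localhost:11434
```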
When an application makes a call to the Ollama API, you would expect to see requests to `$OLLAMA_HOST/api/<API Method>`, for example:
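The endpoint names below are from Ollama's documented native API; the `curl` call is just an illustration:

```sh
# Native API calls all live under the /api prefix, e.g.:
#   POST $OLLAMA_HOST/api/generate
#   POST $OLLAMA_HOST/api/chat
#   GET  $OLLAMA_HOST/api/tags
curl "$OLLAMA_HOST/api/tags"   # lists locally available models
```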
However, when GenAIScript makes calls to Ollama, it appears to be hitting the base URL without the `/api` path:
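A reconstruction of the failing shape (not the verbatim log; the exact path may differ):

```sh
# Missing the /api prefix, so Ollama returns 404 Not Found
curl -i "$OLLAMA_HOST/generate"
```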
This should be `/api/generate`, for example:
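For instance (model name illustrative):

```sh
# The same call with the /api prefix succeeds against the native API
curl "$OLLAMA_HOST/api/generate" -d '{
  "model": "qwen2.5:7b-instruct-q8_0",
  "prompt": "Why is the sky blue?"
}'
```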
Looking at the traces generated from the failed requests, I see:
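Schematically, the failed requests look like OpenAI-style calls hitting the wrong path (a reconstruction, not the verbatim trace):

```sh
# OpenAI-style route hit without a /v1 (or /api) prefix -> 404 Not Found
curl -i "$OLLAMA_HOST/chat/completions"
```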
This suggests GenAIScript is not using the Ollama API, but instead the OpenAI-compatible API.
The Ollama API exists at `http(s)://ollama-hostname:port/api`. This is the recommended API to use, as it's the native API that supports all functionality.

Ollama also provides an OpenAI-compatible API with basic functionality at `http(s)://ollama-hostname:port/v1`, for applications that only support OpenAI. This is only recommended as a last resort for applications that aren't compatible with Ollama, and it does not provide all features; see https://github.com/ollama/ollama/blob/main/docs/openai.md
While using the OpenAI-compatible API endpoint is not ideal, it should work for basic generation tasks; however, the correct API path of `/v1` should be used, e.g.:
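A sketch of a working call against the compatibility endpoint (host and model illustrative):

```sh
# The OpenAI-compatible surface lives under /v1
curl "$OLLAMA_HOST/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5:7b-instruct-q8_0",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```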
Environment for reference:
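A representative setup consistent with the values discussed above (placeholders, not the exact environment from the report):

```sh
OLLAMA_HOST=http://localhost:11434
GENAISCRIPT_DEFAULT_MODEL="ollama:qwen2.5:7b-instruct-q8_0"
GENAISCRIPT_DEFAULT_SMALL_MODEL="ollama:qwen2.5:7b-instruct-q8_0"
```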