How to change the config file to support an OpenAI model? #9
If you are using OpenAI as the VLM model provider, you can follow the instructions in https://github.com/arkohut/pensieve?tab=readme-ov-file#using-ollama-for-visual-search and change the vlm part of the config:

    vlm:
      endpoint: https://api.openai.com
      modelname: <openai model name>
      force_jpeg: true
      prompt: Please describe the content of this image, including the layout and visual elements  # Prompt sent to the model
      token: <openai api key>

Please follow the rest of that section in the doc to enable the vlm plugin.
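For reference, here is a minimal sketch of the kind of request that config corresponds to, using the official openai Python client. This is not pensieve's actual plugin code; the model name, image path, and base_url are illustrative assumptions that stand in for the modelname, endpoint, and token fields above.

```python
import base64
from openai import OpenAI  # pip install openai

# Assumption: api_key is the same value as the `token` field,
# base_url matches the `endpoint` field, and "gpt-4o-mini" stands in
# for whatever you put in `modelname`.
client = OpenAI(api_key="<openai api key>", base_url="https://api.openai.com/v1")

with open("screenshot.jpg", "rb") as f:  # placeholder image path
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Please describe the content of this image, including the layout and visual elements"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```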
Thanks, can I use OpenAI for embedding? I tried:

and when I tried:
Sorry, right now it is not supported. I support the batch embed API from Ollama…
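For context, here is a minimal sketch of a batch call to Ollama's /api/embed endpoint, which accepts a list of inputs and returns one embedding per input. The model name and texts are placeholders, and this is not pensieve's internal code.

```python
import requests

# Assumptions: Ollama is running locally on its default port and the
# example embedding model has already been pulled (`ollama pull nomic-embed-text`).
resp = requests.post(
    "http://localhost:11434/api/embed",
    json={
        "model": "nomic-embed-text",  # example embedding model
        "input": ["first snippet of text", "second snippet of text"],
    },
    timeout=60,
)
resp.raise_for_status()
embeddings = resp.json()["embeddings"]  # list of vectors, one per input string
print(len(embeddings), len(embeddings[0]))
```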
I've noticed that in the README you said "Compatible with any OpenAI API models". Is there a guide for that?