
Support for locally deployed LLM using LM Studio #10624

Open
kellerj opened this issue Dec 7, 2024 · 4 comments

@kellerj

kellerj commented Dec 7, 2024

How do you specify the OpenAI endpoint URL? I would like to use a locally running LLM (using the OpenAI API syntax during initial experimentation and development). Thanks.

@tirumaraiselvan
Contributor

Hi @kellerj,

Which LLM are you using, and how are you deploying it locally? With Ollama or something like that?

@kellerj
Author

kellerj commented Dec 10, 2024

I was using LM Studio, which provides OpenAI-style endpoints. I've used it with some other tools by setting their environment variables to point to those localhost URLs rather than the ChatGPT endpoints. Ollama is also an option; I just haven't used it as much.
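
For concreteness, the setup being described is just pointing an OpenAI-compatible client at LM Studio's local server instead of api.openai.com. A minimal sketch in Python, assuming LM Studio's server is running on its default endpoint `http://localhost:1234/v1` with a model already loaded (the model name and API key below are placeholders):

```python
from openai import OpenAI

# Point the OpenAI SDK at LM Studio's local server instead of api.openai.com.
# LM Studio does not check the API key, but the SDK requires a non-empty value.
client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; use whichever model is loaded in LM Studio
    messages=[{"role": "user", "content": "Hello from a locally hosted LLM"}],
)
print(response.choices[0].message.content)
```

Tools built on the OpenAI SDK can often be redirected the same way without code changes, by setting `OPENAI_BASE_URL=http://localhost:1234/v1` in the environment.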

@seanparkross
Contributor

Forwarded to the team.

@tirumaraiselvan
Contributor

Supporting all major LLMs (including locally hosted ones) is definitely on the roadmap, but not in the short term.

At the moment, we only support OpenAI and Anthropic.

@tirumaraiselvan tirumaraiselvan changed the title Feedback for “Getting Started” Support for locally deployed LLM using LM Studio Dec 20, 2024