- It will be great!
- @ogencoglu @giovannicocco It works. Configure liteLLM to use the OpenAI-compatible endpoint, but point it at localhost. Here is an example:

```python
from litellm import completion

def get_llm_response():
    try:
        # Jan exposes an OpenAI-compatible API on localhost:1337
        response = completion(
            model="openai/llama3.2-1b-instruct",
            messages=[{"role": "user", "content": "Say Hello World!"}],
            api_base="http://localhost:1337/v1",
            api_key="dummy-key",
            temperature=0.7,
        )
        # Extract and return the response text
        return response.choices[0].message.content
    except Exception as e:
        return f"Error: {str(e)}"

if __name__ == "__main__":
    # Test the function
    result = get_llm_response()
    print(result)
```
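If you want tokens streamed back as they are generated, the same call also works with `stream=True`. A minimal sketch, assuming the same local Jan endpoint and model as above (`stream_llm_response` is just an illustrative name):

```python
from litellm import completion

def stream_llm_response():
    # Same local Jan endpoint and model as the example above; adjust the
    # model name to whatever you have loaded in Jan.
    response = completion(
        model="openai/llama3.2-1b-instruct",
        messages=[{"role": "user", "content": "Say Hello World!"}],
        api_base="http://localhost:1337/v1",
        api_key="dummy-key",
        stream=True,  # yield chunks as they arrive instead of one final message
    )
    for chunk in response:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
    print()

if __name__ == "__main__":
    stream_llm_response()
```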
- liteLLM support would be great and would reduce a lot of work on the Jan side as well. Any plans for that?