Clarifai: Fixed model name error and streaming #4170
base: main
Conversation
@ishaan-jaff
hey @mogith-pn, this doesn't solve the problem of async streaming not working
litellm/llms/clarifai.py (outdated)
)
## RESPONSE OBJECT
try:
    completion_response = response.iter_lines()
this should be aiter_lines, to return an async iterable
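For context, a minimal sketch of the sync/async distinction being pointed out, assuming the handler streams over an httpx response (as the aiter_lines suggestion implies; the URL is a placeholder):

```python
# Sketch only, not litellm's actual handler. httpx responses expose
# iter_lines() for sync clients and aiter_lines() for async clients.
import httpx

def sync_stream(url: str):
    with httpx.Client() as client:
        with client.stream("POST", url) as response:
            # Sync iterator: fine inside a regular function.
            for line in response.iter_lines():
                print(line)

async def async_stream(url: str):
    async with httpx.AsyncClient() as client:
        async with client.stream("POST", url) as response:
            # iter_lines() yields a sync iterator and cannot be consumed
            # here; aiter_lines() returns an async iterator usable with
            # `async for`.
            async for line in response.aiter_lines():
                print(line)
```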
My bad, changed it to aiter and ran the tests. It's running without any errors.
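For reference, a hypothetical smoke test along those lines (the model id is a placeholder and CLARIFAI_API_KEY is assumed to be set in the environment):

```python
import asyncio
import litellm

async def main():
    # With stream=True, acompletion returns an async iterable of chunks.
    response = await litellm.acompletion(
        model="clarifai/meta.Llama-2.llama2-7b-chat",  # placeholder model id
        messages=[{"role": "user", "content": "Hello!"}],
        stream=True,
    )
    async for chunk in response:
        print(chunk.choices[0].delta.content or "", end="")

asyncio.run(main())
```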
@krrishdholakia
Could you please take a look at it?
TIA :)
@@ -11129,7 +11129,7 @@ def handle_clarifai_completion_chunk(self, chunk):
         completion_tokens = len(encoding.encode(text))
         return {
             "text": text,
-            "is_finished": True,
+            "is_finished": False,
when is this ever finished? @mogith-pn
if you fake streaming, isn't the first chunk also the last one?
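For context, "fake" streaming generally means fetching the full completion once and yielding it as a single chunk; a hypothetical sketch of the point being made (not litellm's actual code):

```python
# The one chunk is both the first and the last, so it should be marked
# finished and carry a finish_reason.
def fake_stream(full_text: str):
    yield {
        "text": full_text,
        "is_finished": True,
        "finish_reason": "stop",
    }

for chunk in fake_stream("Hello from Clarifai"):
    print(chunk)
```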
> when is this ever finished? @mogith-pn
> if you fake streaming, isn't the first chunk also the last one?
This is a bit of a tricky situation. Previously, when I added the integration, this was working fine (with version 1.37.5). But now it's throwing the error below when is_finished is set to True.
Attaching the logs here.
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.
Provider List: https://docs.litellm.ai/docs/providers
Logging Details: logger_fn - None | callable(logger_fn) - False
Logging Details LiteLLM-Failure Call: []
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.
Provider List: https://docs.litellm.ai/docs/providers
Logging Details: logger_fn - None | callable(logger_fn) - False
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/workspaces/litellm-fork/litellm/utils.py in ?(self, chunk)
11817 traceback_exception = traceback.format_exc()
11818 e.message = str(e)
> 11819 raise exception_type(
11820 model=self.model,
KeyError: 'finish_reason'
During handling of the above exception, another exception occurred:
APIConnectionError Traceback (most recent call last)
/workspaces/litellm-fork/litellm/utils.py in ?(self)
11942 raise e
11943 else:
> 11944 raise exception_type(
11945 model=self.model,
/workspaces/litellm-fork/litellm/utils.py in ?(self, chunk)
11817 traceback_exception = traceback.format_exc()
11818 e.message = str(e)
> 11819 raise exception_type(
11820 model=self.model,
...
-> 9990 raise e
9991 else:
9992 raise original_exception
APIConnectionError: 'finish_reason'
Any idea why this happened? @krrishdholakia
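One plausible reading of the trace above: the chunk dict returned by handle_clarifai_completion_chunk never includes a finish_reason key, and something downstream indexes it directly instead of using .get(). A hypothetical minimal repro (not litellm's actual code):

```python
# Chunk shaped like handle_clarifai_completion_chunk's return value,
# which carries no "finish_reason" key.
chunk = {"text": "partial output", "is_finished": True}

try:
    finish_reason = chunk["finish_reason"]  # raises KeyError
except KeyError as e:
    print(f"KeyError: {e}")  # -> KeyError: 'finish_reason'

# A tolerant lookup would avoid the crash for chunks that omit the key:
finish_reason = chunk.get("finish_reason")
```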
Fixed model name error and streaming
Type
🐛 Bug Fix
✅ Test
Changes
[REQUIRED] Testing - Attach a screenshot of any new tests passing locally
If UI changes, send a screenshot/GIF of working UI fixes