
Nonstop answer from HTTP OpenAI-compatible model #3328

Open
hxt365 opened this issue Oct 25, 2024 · 0 comments
Labels: bug, fixed-in-next-release

Comments

hxt365 commented Oct 25, 2024

Describe the bug
I ran Tabby against an HTTP OpenAI-compatible chat model served remotely by vLLM. In the chat, Tabby kept streaming an answer that never stopped, as in the screenshot below.

[Screenshot: the chat answer keeps streaming without terminating]
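
For reference, a remote OpenAI-compatible chat model is wired up through Tabby's `~/.tabby/config.toml`. A minimal sketch of such a setup (the endpoint and model name below are illustrative placeholders, not values taken from this report):

```toml
# Chat model served over an OpenAI-compatible HTTP API (e.g. vLLM).
[model.chat.http]
kind = "openai/chat"
# Placeholder values; substitute your own vLLM deployment's details.
model_name = "meta-llama/Meta-Llama-3-8B-Instruct"
api_endpoint = "http://my-vllm-host:8000/v1"
api_key = "secret"
```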

Information about your version
0.18.0

Additional context
Log from Tabby:

```
tabby-1  | The application panicked (crashed).
tabby-1  | Message:  index out of bounds: the len is 0 but the index is 0
tabby-1  | Location: ee/tabby-webserver/src/service/answer.rs:173
tabby-1  |
tabby-1  | Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
```
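
The panic above is an out-of-bounds index on an empty collection. As a minimal standalone sketch (this is not Tabby's actual `answer.rs` code, and the trigger is an assumption): some OpenAI-compatible backends, vLLM included, can stream a chunk whose `choices` array is empty (for example a trailing usage-only chunk), and unconditionally indexing `choices[0]` then panics with exactly this message:

```rust
// Minimal sketch, not Tabby's actual code: an empty `choices` array
// in a streamed chat-completion chunk reproduces
// "index out of bounds: the len is 0 but the index is 0".

struct Choice {
    delta: String, // stand-in for the per-chunk content delta
}

struct ChatCompletionChunk {
    choices: Vec<Choice>,
}

fn main() {
    // Assumption: the backend emits a chunk with no choices,
    // e.g. a trailing usage-only chunk from vLLM.
    let chunk = ChatCompletionChunk { choices: vec![] };

    // Panicking pattern (commented out so the sketch runs):
    // let delta = &chunk.choices[0].delta;

    // Defensive pattern: skip chunks that carry no choices
    // instead of indexing unconditionally.
    if let Some(choice) = chunk.choices.first() {
        println!("append delta: {}", choice.delta);
    } else {
        println!("skipping chunk with empty choices");
    }
}
```

If that is indeed the failing pattern, guarding with `.first()` turns the crash into a skipped chunk; the `fixed-in-next-release` label suggests a fix along these lines has already landed.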