After downloading the models and running .\tabby.exe serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct, llama-server.exe crashes.
Tabby
tabby v0.18.0 and tabby v0.19.0-rc.1
tabby_x86_64-windows-msvc and tabby_x86_64-windows-msvc-vulkan
It turns out this fails even on the CPU build.
Environment
Ryzen 5 3500U with Vega 8
Windows 10
Additionally, I loaded the same GGUF models into gpt4all and they work, so the issue appears to be in the backend.
Here is the output that Tabby periodically produces.
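One way to narrow this down is to point a standalone llama.cpp llama-server build at the same GGUF file that Tabby downloaded; if that also crashes, the problem is in llama.cpp rather than in Tabby itself. This is only a diagnostic sketch: the model path below is a placeholder, not the exact location Tabby uses on this machine.

```shell
# Run an upstream llama.cpp llama-server directly against the cached GGUF.
# <path-to>\model.gguf is a placeholder -- substitute the actual file from
# Tabby's model cache directory.
.\llama-server.exe -m <path-to>\model.gguf --port 8081
```

If the standalone server loads the model and answers requests, that would support the suspicion that the crash is specific to the llama.cpp version bundled with these Tabby releases.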