Setup configuration inquiry #140

Hello, if I want to try setting this up on a Linux server, what are the approximate CPU and GPU requirements?

Comments
Deploying this service does not require high-end hardware; the service itself is not resource-intensive. Both the vectorization model and the diagnosis model can be called through an API.
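(Editorial note, not from the maintainers: a minimal sketch of what the API-based setup above could look like, assuming the embedding and diagnosis models are served behind an OpenAI-compatible HTTP endpoint. The URL, model names, and prompts below are placeholders, not this project's actual configuration.)

```python
# Illustrative only: assumes an OpenAI-compatible HTTP API serving both models.
# Endpoint, key, and model names are hypothetical placeholders.
import requests

API_BASE = "http://your-model-server:8000/v1"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# Vectorization (embedding) call: text in, vector out, no local GPU needed.
emb = requests.post(
    f"{API_BASE}/embeddings",
    headers=HEADERS,
    json={"model": "text-embedding-model", "input": "slow query on orders table"},
).json()["data"][0]["embedding"]
print(len(emb), "embedding dimensions")

# Diagnosis model call: plain chat-completion style request.
diag = requests.post(
    f"{API_BASE}/chat/completions",
    headers=HEADERS,
    json={
        "model": "diagnosis-model",
        "messages": [{"role": "user", "content": "Why is CPU usage spiking?"}],
    },
).json()["choices"][0]["message"]["content"]
print(diag)
```

With this kind of setup, the host running the service itself only needs modest CPU and RAM, since all heavy model inference happens on the remote API side.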
Hello, and if we want to deploy the models locally instead? For the local models the platform currently supports, what are the approximate CPU and GPU requirements?

It depends on which model you run. We previously fine-tuned Qwen locally; for anything up to 13B, a single 3090 or 4090 is enough.

For example, would a single 3090 or 4090 be enough for Llama2-13B?
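(A rough back-of-the-envelope check, not an answer from the maintainers: a 13B model in fp16 needs about 26 GB for the weights alone, which is already above the 24 GB of a single 3090/4090, so Llama2-13B would typically need int8/int4 quantization, weight offloading, or a second card. The sketch below shows the arithmetic.)

```python
# Back-of-the-envelope VRAM needed for the model weights alone.
# Ignores KV cache, activations and framework overhead, which add several GB
# in practice, so treat results near the limit as "does not fit".

GPU_VRAM_GB = 24  # a single RTX 3090 / 4090

def weight_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate size of the weights in GB: 1e9 params * N bytes = N GB."""
    return params_billion * bytes_per_param

models = [("Qwen-7B", 7), ("Llama2-13B", 13)]
dtypes = [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]

for name, params in models:
    for dtype, nbytes in dtypes:
        gb = weight_vram_gb(params, nbytes)
        verdict = "fits" if gb < GPU_VRAM_GB else "tight / does not fit"
        print(f"{name:12s} {dtype:4s}: ~{gb:4.1f} GB weights -> {verdict} on {GPU_VRAM_GB} GB")
```

Running it prints which model/precision combinations fit in 24 GB; keep in mind that the KV cache and runtime overhead consume several additional GB at inference time.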