Torch not compiled with CUDA enabled #31
Hey all, same for me. If I try to execute `python app.py`, which I got from https://huggingface.co/spaces/stabilityai/stablelm-tuned-alpha-chat/tree/main, I get the same error.
Is there any way to use the CPU instead of CUDA (or similar)? I tried it on a Mac with M1. Best regards,
Your M1 Mac does not have CUDA, which is a feature of Nvidia GPUs. From what I found out, theoretically you should use
The GGML project, for running LLMs on CPUs (including Mac support specifically!), has an initial example project that can run StableLM: https://github.com/ggerganov/ggml/tree/master/examples/stablelm There's also https://huggingface.co/cakewalk/ggml-q4_0-stablelm-tuned-alpha-7b/tree/main which supposedly works in llama.cpp
Hi, on a Mac M1 I get the error "Torch not compiled with CUDA enabled":
Traceback (most recent call last):
File "/start.py", line 6, in <module>
model.half().cuda()
File "/miniforge3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 749, in cuda
return self._apply(lambda t: t.cuda(device))
File "/miniforge3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
File "/miniforge3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
File "/dev/miniforge3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 664, in _apply
param_applied = fn(param)
File "/dev/miniforge3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 749, in <lambda>
return self._apply(lambda t: t.cuda(device))
File "/dev/miniforge3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 221, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
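The traceback shows the failure comes from the hard-coded `model.half().cuda()` call in the script. A minimal sketch of how one could pick a device instead of assuming CUDA is below; `pick_device` is a hypothetical helper (not part of the original repo), written as a pure function so the selection logic is clear. In real code the two flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`.

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the device string to use: prefer CUDA (Nvidia GPUs),
    then MPS (Apple Silicon, e.g. M1), otherwise fall back to CPU.
    The availability flags are passed in so the logic is testable
    without torch installed."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"


# Sketch of how app.py's model setup could use it (assumes torch is
# installed; half() mainly helps on GPU, so keep full precision on CPU):
#
#   import torch
#   device = pick_device(torch.cuda.is_available(),
#                        torch.backends.mps.is_available())
#   model = model.half().to(device) if device != "cpu" else model.to(device)
```

On an M1 Mac with a recent PyTorch build, `pick_device(False, True)` would select `"mps"`; on a machine with neither backend it falls back to `"cpu"`, which avoids the `AssertionError` above at the cost of much slower inference.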
Thanks