
Torch not compiled with CUDA enabled #31

Open
mikecastrodemaria opened this issue Apr 20, 2023 · 3 comments
Labels: question (Further information is requested)

Comments

mikecastrodemaria commented Apr 20, 2023

Hi, on a Mac M1 I get the error "Torch not compiled with CUDA enabled":

Traceback (most recent call last):
  File "/start.py", line 6, in <module>
    model.half().cuda()
  File "/miniforge3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 749, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/miniforge3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
    module._apply(fn)
  File "/miniforge3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
    module._apply(fn)
  File "/dev/miniforge3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 664, in _apply
    param_applied = fn(param)
  File "/dev/miniforge3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 749, in <lambda>
    return self._apply(lambda t: t.cuda(device))
  File "/dev/miniforge3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 221, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Thanks


reinies commented Apr 20, 2023

Hey all,

Same for me. If I try to execute "python app.py", which I got from https://huggingface.co/spaces/stabilityai/stablelm-tuned-alpha-chat/tree/main, I get:

stablelm-tuned-alpha-chat git:(main) ✗ python app.py
Starting to load the model to memory
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:34<00:00,  8.60s/it]
Traceback (most recent call last):
  File "/Users/rspies/work/projects/stablelm-tuned-alpha-chat/app.py", line 12, in <module>
    "stabilityai/stablelm-tuned-alpha-7b", torch_dtype=torch.float16).cuda()
                                                                      ^^^^^^
  File "/Users/rspies/work/projects/stablelm-tuned-alpha-chat/my_project_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 905, in cuda
    return self._apply(lambda t: t.cuda(device))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rspies/work/projects/stablelm-tuned-alpha-chat/my_project_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/Users/rspies/work/projects/stablelm-tuned-alpha-chat/my_project_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/Users/rspies/work/projects/stablelm-tuned-alpha-chat/my_project_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^
  File "/Users/rspies/work/projects/stablelm-tuned-alpha-chat/my_project_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 905, in <lambda>
    return self._apply(lambda t: t.cuda(device))
                                 ^^^^^^^^^^^^^^
  File "/Users/rspies/work/projects/stablelm-tuned-alpha-chat/my_project_env/lib/python3.11/site-packages/torch/cuda/__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Is there any way to use the CPU instead of CUDA (or something similar)?

I tried it on a Mac with M1.

Best regards,
Reinhard

@AlexanderFillbrunn

Your M1 Mac does not have CUDA, which is a feature of Nvidia GPUs. From what I found out, in theory you should use .to("mps") instead of .cuda() to target Apple's Metal backend, but this causes a different problem for me: the script crashes with the error described in pytorch/pytorch#99564.
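
For reference, a minimal sketch of that fallback approach, assuming PyTorch >= 1.12 (where the MPS backend was added) and the transformers library; the dtype choices are illustrative assumptions, not from the repo:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Pick the best available device: CUDA (Nvidia), then MPS (Apple Silicon), then CPU.
if torch.cuda.is_available():
    device, dtype = "cuda", torch.float16
elif torch.backends.mps.is_available():
    device, dtype = "mps", torch.float16
else:
    device, dtype = "cpu", torch.float32  # float16 is slow or unsupported for many CPU ops

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-tuned-alpha-7b")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-tuned-alpha-7b", torch_dtype=dtype
).to(device)
```

Note that as of this thread the MPS branch may still hit the crash from pytorch/pytorch#99564, in which case the CPU branch is the only one that works on an M1.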


mcmonkey4eva commented Apr 24, 2023

The GGML project, for running LLMs on CPUs (including specifically Mac support!), has an initial example project that can run StableLM: https://github.com/ggerganov/ggml/tree/master/examples/stablelm

There's also https://huggingface.co/cakewalk/ggml-q4_0-stablelm-tuned-alpha-7b/tree/main, which supposedly works in llama.cpp.

mcmonkey4eva added the question label Apr 24, 2023
twmmason reopened this Apr 25, 2023