
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half' #26

Open · RSKothari opened this issue Mar 18, 2024 · 4 comments

@RSKothari
    return super().forward(x_or_x_list)
  File "/Users/rkothari/Documents/Projects/compalg_horizon/src/extern/RoMa/roma/models/transformer/layers/block.py", line 105, in forward
    x = x + attn_residual_func(x)
  File "/Users/rkothari/Documents/Projects/compalg_horizon/src/extern/RoMa/roma/models/transformer/layers/block.py", line 84, in attn_residual_func
    return self.ls1(self.attn(self.norm1(x)))
  File "/Users/rkothari/anaconda3/envs/horizon/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/rkothari/anaconda3/envs/horizon/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/rkothari/anaconda3/envs/horizon/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 201, in forward
    return F.layer_norm(
  File "/Users/rkothari/anaconda3/envs/horizon/lib/python3.10/site-packages/torch/nn/functional.py", line 2546, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

Where is this error coming from?

@Parskatt (Owner)

It seems that the fp16 implementation of LayerNorm doesn't exist in your PyTorch build. Also, are you running on CPU? Could you tell me which torch version you're using?
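Until the missing kernel is available, a common workaround for this class of error is to compute LayerNorm in float32 on CPU and cast the result back. A minimal sketch (not RoMa's code; `safe_layer_norm` is a hypothetical helper written for illustration):

```python
import torch

def safe_layer_norm(x: torch.Tensor, ln: torch.nn.LayerNorm) -> torch.Tensor:
    # Some torch builds lack a CPU fp16 LayerNorm kernel and raise
    # 'RuntimeError: "LayerNormKernelImpl" not implemented for 'Half''.
    # Work around it by normalizing in float32 and casting back.
    if x.device.type == "cpu" and x.dtype == torch.float16:
        return ln(x.float()).to(torch.float16)
    return ln(x)

x = torch.randn(2, 8, dtype=torch.float16)  # fp16 tensor on CPU
ln = torch.nn.LayerNorm(8)                  # weights stay in float32
y = safe_layer_norm(x, ln)
```

This avoids the crash at the cost of doing the normalization itself in full precision.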

@j-baker commented Mar 22, 2024

I see this on 2.1.1 on CPU (MacOS). I've tried to use MPS as an executor that can do fp16, but there I get:

Input type (c10::Half) and bias type (float) should be the same
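The MPS error above means the input tensor is half precision while the layer's parameters are still float32; aligning the two dtypes clears it. A generic sketch of the fix (a plain `Linear` stands in for the actual RoMa layers, which is an assumption for illustration):

```python
import torch

layer = torch.nn.Linear(4, 4)                 # parameters in float32
x = torch.randn(1, 4, dtype=torch.float16)    # fp16 input

# Mismatched dtypes raise errors like
# "Input type (c10::Half) and bias type (float) should be the same".
# Align them by casting the input to the module's parameter dtype
# (alternatively, cast the module with layer.half()):
x = x.to(next(layer.parameters()).dtype)
y = layer(x)
```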

@RSKothari (Author)

@j-baker is right; this seems to occur when running fp16 on CPU (macOS). Any recommendations or alternatives to make it compatible? @Parskatt

@dgcnz (Contributor) commented Apr 17, 2024

This issue is temporarily fixed in #31. Use float32 until it is fully resolved:

roma_model = roma_outdoor(device=device, amp_dtype=torch.float32)
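The workaround above can be made device-aware: use float16 only when a CUDA device is available, and fall back to float32 elsewhere. A small sketch (`pick_amp_dtype` is a hypothetical helper, not part of RoMa; the `amp_dtype` parameter is taken from the snippet above):

```python
import torch

def pick_amp_dtype(device: torch.device) -> torch.dtype:
    # Half-precision LayerNorm is only reliably supported on CUDA;
    # on CPU (and MPS, given the dtype-mismatch error above) fall
    # back to float32 to avoid the missing-kernel RuntimeError.
    return torch.float16 if device.type == "cuda" else torch.float32

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# e.g. roma_model = roma_outdoor(device=device, amp_dtype=pick_amp_dtype(device))
```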

Vincentqyw added a commit to Vincentqyw/image-matching-webui that referenced this issue Apr 27, 2024