
@uniforms dependent on @index values fail #277

Open

leios opened this issue Nov 16, 2021 · 1 comment

Comments

leios (Contributor) commented Nov 16, 2021

I may have missed something important, but this code is currently failing on the CPU:

```julia
using KernelAbstractions

@kernel function f_test_kernel!(input)
    tid = @index(Global, Linear)
    @uniform a = tid
end

function f_test!(input; numcores = 4, numthreads = 256)
    if isa(input, Array)
        kernel! = f_test_kernel!(CPU(), numcores)
    else
        kernel! = f_test_kernel!(CUDADevice(), numthreads)
    end

    kernel!(input, ndrange=size(input))
end

f_test!(zeros(10))
```

It errors with `tid` not defined:

```
julia> wait(f_test!(zeros(10)))
ERROR: TaskFailedException
Stacktrace:
 [1] wait
   @ ./task.jl:322 [inlined]
 [2] wait
   @ ~/projects/KernelAbstractions.jl/src/cpu.jl:65 [inlined]
 [3] wait (repeats 2 times)
   @ ~/projects/KernelAbstractions.jl/src/cpu.jl:29 [inlined]
 [4] top-level scope
   @ REPL[3]:1
 [5] top-level scope
   @ ~/.julia/packages/CUDA/YpW0k/src/initialization.jl:52

    nested task error: UndefVarError: tid not defined
    Stacktrace:
     [1] cpu_f_test_kernel!(::KernelAbstractions.CompilerMetadata{KernelAbstractions.NDIteration.DynamicSize, KernelAbstractions.NDIteration.DynamicCheck, CartesianIndex{1}, CartesianIndices{1, Tuple{Base.OneTo{Int64}}}, KernelAbstractions.NDIteration.NDRange{1, KernelAbstractions.NDIteration.DynamicSize, KernelAbstractions.NDIteration.StaticSize{(4,)}, CartesianIndices{1, Tuple{Base.OneTo{Int64}}}, Nothing}}, ::Vector{Float64})
        @ ./none:0 [inlined]
     [2] overdub
        @ ./none:0 [inlined]
     [3] __thread_run(tid::Int64, len::Int64, rem::Int64, obj::KernelAbstractions.Kernel{CPU, KernelAbstractions.NDIteration.StaticSize{(4,)}, KernelAbstractions.NDIteration.DynamicSize, typeof(cpu_f_test_kernel!)}, ndrange::Tuple{Int64}, iterspace::KernelAbstractions.NDIteration.NDRange{1, KernelAbstractions.NDIteration.DynamicSize, KernelAbstractions.NDIteration.StaticSize{(4,)}, CartesianIndices{1, Tuple{Base.OneTo{Int64}}}, Nothing}, args::Tuple{Vector{Float64}}, dynamic::KernelAbstractions.NDIteration.DynamicCheck)
        @ KernelAbstractions ~/projects/KernelAbstractions.jl/src/cpu.jl:157
     [4] __run(obj::KernelAbstractions.Kernel{CPU, KernelAbstractions.NDIteration.StaticSize{(4,)}, KernelAbstractions.NDIteration.DynamicSize, typeof(cpu_f_test_kernel!)}, ndrange::Tuple{Int64}, iterspace::KernelAbstractions.NDIteration.NDRange{1, KernelAbstractions.NDIteration.DynamicSize, KernelAbstractions.NDIteration.StaticSize{(4,)}, CartesianIndices{1, Tuple{Base.OneTo{Int64}}}, Nothing}, args::Tuple{Vector{Float64}}, dynamic::KernelAbstractions.NDIteration.DynamicCheck)
        @ KernelAbstractions ~/projects/KernelAbstractions.jl/src/cpu.jl:130
     [5] (::KernelAbstractions.var"#33#34"{Nothing, Nothing, typeof(KernelAbstractions.__run), Tuple{KernelAbstractions.Kernel{CPU, KernelAbstractions.NDIteration.StaticSize{(4,)}, KernelAbstractions.NDIteration.DynamicSize, typeof(cpu_f_test_kernel!)}, Tuple{Int64}, KernelAbstractions.NDIteration.NDRange{1, KernelAbstractions.NDIteration.DynamicSize, KernelAbstractions.NDIteration.StaticSize{(4,)}, CartesianIndices{1, Tuple{Base.OneTo{Int64}}}, Nothing}, Tuple{Vector{Float64}}, KernelAbstractions.NDIteration.DynamicCheck}})()
        @ KernelAbstractions ~/projects/KernelAbstractions.jl/src/cpu.jl:22
```

In my case, I wanted to use `@uniform elem = input[tid]` as a separate element variable, which I expect to be a fairly common use case.
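For what it's worth, `@uniform` marks a value as uniform across the whole workgroup, so on the CPU the assignment appears to be hoisted outside the per-work-item loop, where `@index` is not in scope. If the goal is really a per-work-item element variable, a plain assignment (no `@uniform`) may be enough; the sketch below illustrates this assumption (the names `g_test_kernel!` and `g_test!` are hypothetical, not from the package):

```julia
using KernelAbstractions

# Sketch of a possible workaround: drop @uniform so the assignment
# stays inside the per-work-item code, where @index is defined.
@kernel function g_test_kernel!(output, input)
    tid = @index(Global, Linear)
    elem = input[tid]        # plain work-item-local variable
    output[tid] = 2 * elem
end

function g_test!(output, input; numcores = 4)
    kernel! = g_test_kernel!(CPU(), numcores)
    kernel!(output, input, ndrange = size(input))
end

wait(g_test!(zeros(10), ones(10)))
```

This sidesteps the question of whether `@uniform` should reject or support `@index`-dependent values, but it covers the per-element use case described above.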

@albertomercurio commented

I have the same issue.
