Add printf support #163

Open · wants to merge 2 commits into main

Conversation

charleskawczynski (Contributor)

This is my first pass at adding printf support. I'm still experimenting, but I may need help with this!
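
A minimal usage sketch for orientation (not taken from the PR diff): it assumes the new macro is exported as @printf and is used inside a @kernel body much like the existing @print; the kernel name, format string, and the event-based launch shown reflect the KernelAbstractions API of early 2021 and are illustrative only.

    using KernelAbstractions

    @kernel function kernel_printf()
        I = @index(Global)
        @printf("hello from index %d\n", I)   # macro this PR adds (assumed name/behavior)
    end

    # Launch on the CPU backend with a workgroup size of 4 (illustrative).
    kernel = kernel_printf(CPU(), 4)
    event  = kernel(ndrange = (4,))
    wait(event)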

Comment on lines 549 to 551:

    # @sprintf($sfmt, $(args...))
    @print(@sprintf($sfmt, $(args...)))
    # @print("test")
Collaborator

Suggested change:

    - # @sprintf($sfmt, $(args...))
    - @print(@sprintf($sfmt, $(args...)))
    - # @print("test")
    + @printf($sfmt, $(args...))

?

charleskawczynski (Contributor, author)

This results in a StackOverflowError

Collaborator

Maybe it should be Base.@printf?

charleskawczynski (Contributor, author)

Ah, yes. Needed Printf.@printf
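
For readers following the thread, a hedged sketch of why the unqualified call overflowed: inside the package's own @printf definition, a bare @printf in the generated expression refers back to the macro being defined, so expansion recurses without end, while the module-qualified Printf.@printf resolves to the stdlib macro. The name @ka_printf below is a hypothetical stand-in, not the package's actual code.

    using Printf

    # Stand-in forwarding macro (hypothetical name). Writing the inner call
    # unqualified would expand to this same macro over and over, giving a
    # StackOverflowError during macro expansion; Printf.@printf does not.
    macro ka_printf(fmt, args...)
        esc(:(Printf.@printf($fmt, $(args...))))
    end

    @ka_printf("value = %d\n", 42)   # prints: value = 42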

DilumAluthge (Collaborator)

Let's see if CI passes.

bors try

bors bot added a commit that referenced this pull request Jan 6, 2021
bors bot (Contributor) commented on Jan 6, 2021

try

Build failed:

DilumAluthge (Collaborator) commented on Jan 6, 2021

CI fails with: (link to CI log)

print test: Error During Test at /var/lib/buildkite-agent/builds/rtx2070-gpuci1-julia-csail-mit-edu/julialang/kernelabstractions-dot-jl/test/print_test.jl:35
  Got exception outside of a @test
  InvalidIRError: compiling kernel gpu_kernel_printf(Cassette.Context{nametype(CUDACtx),KernelAbstractions.CompilerMetadata{DynamicSize,DynamicCheck,Nothing,CartesianIndices{1,Tuple{Base.OneTo{Int64}}},NDRange{1,DynamicSize,StaticSize{(4,)},CartesianIndices{1,Tuple{Base.OneTo{Int64}}},Nothing}},Nothing,KernelAbstractions.var"##PassType#253",Nothing,Cassette.DisableHooks}, typeof(gpu_kernel_printf)) resulted in invalid LLVM IR
  Reason: unsupported dynamic function invocation (call to _cuprintf(::Val{fmt}, argspec...) where fmt in CUDA at /root/.julia/packages/CUDA/YeS8q/src/device/intrinsics/output.jl:38)
  Stacktrace:
   [1] overdub at /var/lib/buildkite-agent/builds/rtx2070-gpuci1-julia-csail-mit-edu/julialang/kernelabstractions-dot-jl/src/backends/cuda.jl:327
   [2] macro expansion at /var/lib/buildkite-agent/builds/rtx2070-gpuci1-julia-csail-mit-edu/julialang/kernelabstractions-dot-jl/test/print_test.jl:21
   [3] gpu_kernel_printf at /var/lib/buildkite-agent/builds/rtx2070-gpuci1-julia-csail-mit-edu/julialang/kernelabstractions-dot-jl/src/macros.jl:80
   [4] overdub at /root/.julia/packages/Cassette/158rp/src/overdub.jl:0
  Reason: unsupported dynamic function invocation (call to _cuprintf(::Val{fmt}, argspec...) where fmt in CUDA at /root/.julia/packages/CUDA/YeS8q/src/device/intrinsics/output.jl:38)
  Stacktrace:
   [1] overdub at /var/lib/buildkite-agent/builds/rtx2070-gpuci1-julia-csail-mit-edu/julialang/kernelabstractions-dot-jl/src/backends/cuda.jl:327
   [2] macro expansion at /var/lib/buildkite-agent/builds/rtx2070-gpuci1-julia-csail-mit-edu/julialang/kernelabstractions-dot-jl/test/print_test.jl:22
   [3] gpu_kernel_printf at /var/lib/buildkite-agent/builds/rtx2070-gpuci1-julia-csail-mit-edu/julialang/kernelabstractions-dot-jl/src/macros.jl:80
   [4] overdub at /root/.julia/packages/Cassette/158rp/src/overdub.jl:0
  Stacktrace:
   [1] check_ir(::GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget,CUDA.CUDACompilerParams}, ::LLVM.Module) at /root/.julia/packages/GPUCompiler/uTpNx/src/validation.jl:123
   [2] macro expansion at /root/.julia/packages/GPUCompiler/uTpNx/src/driver.jl:239 [inlined]
   [3] macro expansion at /root/.julia/packages/TimerOutputs/ZmKD7/src/TimerOutput.jl:206 [inlined]
   [4] codegen(::Symbol, ::GPUCompiler.CompilerJob; libraries::Bool, deferred_codegen::Bool, optimize::Bool, strip::Bool, validate::Bool, only_entry::Bool) at /root/.julia/packages/GPUCompiler/uTpNx/src/driver.jl:237
   [5] compile(::Symbol, ::GPUCompiler.CompilerJob; libraries::Bool, deferred_codegen::Bool, optimize::Bool, strip::Bool, validate::Bool, only_entry::Bool) at /root/.julia/packages/GPUCompiler/uTpNx/src/driver.jl:39
   [6] compile at /root/.julia/packages/GPUCompiler/uTpNx/src/driver.jl:35 [inlined]
   [7] cufunction_compile(::GPUCompiler.FunctionSpec; kwargs::Base.Iterators.Pairs{Symbol,Int64,Tuple{Symbol},NamedTuple{(:maxthreads,),Tuple{Int64}}}) at /root/.julia/packages/CUDA/YeS8q/src/compiler/execution.jl:310
   [8] check_cache(::Dict{UInt64,Any}, ::Any, ::Any, ::GPUCompiler.FunctionSpec{typeof(Cassette.overdub),Tuple{Cassette.Context{nametype(CUDACtx),KernelAbstractions.CompilerMetadata{DynamicSize,DynamicCheck,Nothing,CartesianIndices{1,Tuple{Base.OneTo{Int64}}},NDRange{1,DynamicSize,StaticSize{(4,)},CartesianIndices{1,Tuple{Base.OneTo{Int64}}},Nothing}},Nothing,KernelAbstractions.var"##PassType#253",Nothing,Cassette.DisableHooks},typeof(gpu_kernel_printf)}}, ::UInt64; kwargs::Base.Iterators.Pairs{Symbol,Int64,Tuple{Symbol},NamedTuple{(:maxthreads,),Tuple{Int64}}}) at /root/.julia/packages/GPUCompiler/uTpNx/src/cache.jl:40
   [9] gpu_kernel_printf at ./none:0 [inlined]
   [10] cufunction(::typeof(Cassette.overdub), ::Type{Tuple{Cassette.Context{nametype(CUDACtx),KernelAbstractions.CompilerMetadata{DynamicSize,DynamicCheck,Nothing,CartesianIndices{1,Tuple{Base.OneTo{Int64}}},NDRange{1,DynamicSize,StaticSize{(4,)},CartesianIndices{1,Tuple{Base.OneTo{Int64}}},Nothing}},Nothing,KernelAbstractions.var"##PassType#253",Nothing,Cassette.DisableHooks},typeof(gpu_kernel_printf)}}; name::String, kwargs::Base.Iterators.Pairs{Symbol,Int64,Tuple{Symbol},NamedTuple{(:maxthreads,),Tuple{Int64}}}) at /root/.julia/packages/CUDA/YeS8q/src/compiler/execution.jl:297
   [11] macro expansion at /root/.julia/packages/CUDA/YeS8q/src/compiler/execution.jl:109 [inlined]
   [12] (::KernelAbstractions.Kernel{CUDADevice,StaticSize{(4,)},DynamicSize,typeof(gpu_kernel_printf)})(; ndrange::Tuple{Int64}, dependencies::Nothing, workgroupsize::Nothing, progress::Function) at /var/lib/buildkite-agent/builds/rtx2070-gpuci1-julia-csail-mit-edu/julialang/kernelabstractions-dot-jl/src/backends/cuda.jl:187
   [13] test_printf(::CUDADevice) at /var/lib/buildkite-agent/builds/rtx2070-gpuci1-julia-csail-mit-edu/julialang/kernelabstractions-dot-jl/test/print_test.jl:32
   [14] top-level scope at /var/lib/buildkite-agent/builds/rtx2070-gpuci1-julia-csail-mit-edu/julialang/kernelabstractions-dot-jl/test/print_test.jl:45
   [15] top-level scope at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.5/Test/src/Test.jl:1115
   [16] top-level scope at /var/lib/buildkite-agent/builds/rtx2070-gpuci1-julia-csail-mit-edu/julialang/kernelabstractions-dot-jl/test/print_test.jl:36
   [17] include(::String) at ./client.jl:457
   [18] top-level scope at /var/lib/buildkite-agent/builds/rtx2070-gpuci1-julia-csail-mit-edu/julialang/kernelabstractions-dot-jl/test/runtests.jl:33
   [19] top-level scope at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.5/Test/src/Test.jl:1115
   [20] top-level scope at /var/lib/buildkite-agent/builds/rtx2070-gpuci1-julia-csail-mit-edu/julialang/kernelabstractions-dot-jl/test/runtests.jl:33
   [21] include(::String) at ./client.jl:457
   [22] top-level scope at none:6
   [23] eval(::Module, ::Any) at ./boot.jl:331
   [24] exec_options(::Base.JLOptions) at ./client.jl:272
   [25] _start() at ./client.jl:506
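
A hedged reading of the failure: CUDA.jl's device-side printf (_cuprintf(::Val{fmt}, argspec...), reached here through the KernelAbstractions CUDA backend) needs the format string as a compile-time constant carried in a Val type parameter; when the format only arrives as a runtime value, dispatch becomes dynamic and GPU codegen rejects the IR. For contrast, the form CUDA.jl handles statically looks like the sketch below (kernel name and format string are illustrative).

    using CUDA

    function gpu_hello(x)
        # The format string is a literal, so CUDA.jl can bake it into a
        # Val{fmt} type parameter at compile time and emit a static printf.
        @cuprintf("x = %g\n", x)
        return nothing
    end

    @cuda threads=1 gpu_hello(3.0)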
