Use PrecompileTools to warmup CUDA.jl #2325
base: master
Conversation
So IIUC it isn't worth using the actual PTX ISA or device capability here, because the inference caches are shared between CUDA subtargets and this workload will prime them. I considered whether we need a mechanism to ensure this doesn't actively use the CUDA toolkit (which would prevent use on a system without a GPU), but I think CI already covers that: CUDA.jl/.buildkite/pipeline.yml, lines 198 to 226 in 5da4d1d
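For readers unfamiliar with the approach, a minimal sketch of how a PrecompileTools workload can prime the compiler's inference caches without touching a GPU might look like the following. This is an illustration, not the PR's actual workload; in particular, using `CUDA.code_ptx` as the warmup entry point is an assumption.

```julia
using PrecompileTools

@setup_workload begin
    # A trivial kernel used only to drive compilation.
    kernel() = nothing

    @compile_workload begin
        # Hypothetical warmup: emit PTX for the kernel to devnull.
        # This exercises GPUCompiler's inference/codegen paths without
        # requiring the CUDA toolkit or a physical GPU at precompile time.
        CUDA.code_ptx(devnull, kernel, Tuple{})
    end
end
```

Because the inference caches are shared across CUDA subtargets, it should not matter which PTX ISA or device capability this sketch compiles for.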
Correct! Using JuliaGPU/GPUCompiler.jl#557 (comment), this improved TTFK from 12 s to 4 s.
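TTFK (time to first kernel) here means the wall-clock time from a fresh session to the completion of the first kernel launch. A hedged sketch of how one might measure it (requires a working GPU; the kernel name is illustrative):

```julia
using CUDA

# Trivial kernel; the first launch triggers device-side compilation,
# so timing it captures compilation latency plus launch overhead.
kernel() = nothing

ttfk = @elapsed CUDA.@sync @cuda kernel()
println("time to first kernel: $(ttfk) s")
```

Run in a fresh Julia process, this is the number the precompile workload is intended to shrink.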
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@            Coverage Diff            @@
##           master    #2325       +/-  ##
===========================================
- Coverage   73.37%   59.96%    -13.42%
===========================================
  Files         157      156         -1
  Lines       15197    14989       -208
===========================================
- Hits        11151     8988      -2163
- Misses       4046     6001      +1955

View full report in Codecov by Sentry.
Fails on 1.11:
No description provided.