Split batched solver compilation #1629
Conversation
Force-pushed from 259f2c1 to 8c25a83
This should have a huge impact. Excerpt from the HIP 5.14 debug build log:
core/solver/batch_dispatch.hpp (outdated)
#define GKO_BATCH_INSTANTIATE_STOP(macro, ...)                              \
    macro(__VA_ARGS__,                                                      \
          ::gko::batch::solver::device::batch_stop::SimpleAbsResidual);     \
    template macro( \
The template here (and in the other macros below) could be removed if the value/index type instantiation macros accepted a variable number of arguments.
That doesn't work until C++20. A macro declared with (arg, ...) requires at least two arguments before C++20.
In general, the idea looks good, but the pipelines are failing.
One argument against this approach is that readability and maintainability are seriously affected: the already complex batched code is even more complex and annoying to read now. Maybe we should see whether, instead of this split approach, we do what Jacobi does: have fewer cases as the default, and only provide full instantiations when necessary.
Force-pushed from 8c25a83 to 870ad69
IMO the Jacobi instantiation is more complex than what is here. The kernel and the instantiations are directly together, instead of being generated by CMake, which makes it easier to follow for me. But I agree that the batch system needs an overhaul in general.
Force-pushed from d04f06c to fa6d091
Force-pushed from fa6d091 to e59ab55
An alternative approach: https://github.com/ginkgo-project/ginkgo/tree/batch-optim
This seems to be quite orthogonal to this PR. With full optimizations enabled, there would be the same issue as before, so the fix from this PR is still needed. I don't see a reason why we should burden people that want the full optimizations enabled with those long compile times, for which we already have a fix available.
@MarcelKoch, can you please rebase this when you have some time, so we can try to get it merged?
Force-pushed from 48fe94b to 045ad1c
I think it will lead to issues with single mode.

// begin
GKO_INSTANTIATE_FOR_EACH_VALUE_TYPE(GKO_DECLARE_BATCH_BICGSTAB_LAUNCH_0);
It will not work with GINKGO_DPCPP_SINGLE_MODE=ON. We use the instantiation to provide the specialization that throws an unsupported exception on double precision. With GKO_BATCH_INSTANTION, it will be wrong: only the last one has the specialization, but the others will still be instantiated.
template macro {GKO_UNSUPPORTED;}
->
template first_...;
template second_...;
...
template last {GKO_UNSUPPORTED;}
You're right. Thanks for bringing this up. I changed the order of macro application, so now it should be fixed.
I would wait for the CI to finish before merging this (maybe also for the Intel SYCL pipelines), but it looks good to me otherwise.
get_num_regs(
    batch_single_kernels::apply_kernel<StopType, 9, true, PrecType,
                                       LogType, BatchMatrixType,
                                       ValueType>),
get_num_regs(
    batch_single_kernels::apply_kernel<StopType, 0, false, PrecType,
                                       LogType, BatchMatrixType,
                                       ValueType>));
I think the first one is everything in shared memory, the second one is nothing in shared memory.
const int max_threads_regs =
    ((max_regs_blk / static_cast<int>(num_regs_used)) / warp_sz) * warp_sz;
int max_threads = std::min(max_threads_regs, device_max_threads);
max_threads = max_threads <= max_bicgstab_threads ? max_threads
                                                  : max_bicgstab_threads;
Just a comment and something for me to do in the future: I think this whole logic needs to be simplified. It seems it is now also possible to set the maximum number of registers, similar to the launch bounds, with CUDA: https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#maximum-number-of-registers-per-thread
But of course, that might mean we cannot unify HIP and CUDA anymore; something we need to investigate.
LGTM. It is a bit hard to understand now though.
- adds header guard

Co-authored-by: Pratik Nayak <[email protected]>
Co-authored-by: Tobias Ribizel <[email protected]>
Force-pushed from 02b4f27 to bdf51dc
Quality Gate failed: failed conditions
This PR splits up the compilation of the batched solvers in order to reduce the compilation times. The instantiations of the kernel launches are split up depending on the number of vectors in shared memory. This is based on the same CMake mechanism as for the csr and fbcsr kernels.