[Core][Parallelization] Making explicitly schedule(runtime), with dynamic by default, in OMP loops in ParallelUtils #12923
base: master
Conversation
… in `ParallelUtils`
schedule(dynamic) by default in OMP loops in ParallelUtils
…ith dynamic schedule without conflicting the GIL
Are you sure this is needed? Because this is C++ code, I don't think the GIL presents a problem here.
Look at https://github.com/KratosMultiphysics/Kratos/actions/runs/12273173829/job/34243450170
And now it is failing when running tests: https://github.com/KratosMultiphysics/Kratos/actions/runs/12275201329/job/34250231555?pr=12923. I will define it in CMake.
@loumalouomega dynamic scheduling is used today, for example, in the builder and solver... without the need of releasing the GIL. Why is that different?
No idea, look at the outcome from the CI. We tested some functions and the improvement is significant. This was added in a recent version of pybind11: pybind/pybind11#4246
Okay, looks like the last change fixed the issue
@RiccardoRossi we can set it at runtime with this: https://www.openmp.org/spec-html/5.0/openmpse49.html and keep the current code and set the
Modified to be set at runtime, defaulting to dynamic
schedule(dynamic) by default in OMP loops in ParallelUtils
schedule(runtime), with dynamic by default, in OMP loops in ParallelUtils
Okay, looks like the runtime works
Right now if you have 4 tasks and 1000 items, you will do 250 on each... definitely suboptimal for dynamic scheduling...
The default is
Okay, fixed that issue. BTW, now the banner includes the parallelism information:
 |  /           |
 ' /   __| _` | __|  _ \   __|
 . \  |   (   | |   (   |\__ \
_|\_\_|  \__,_|\__|\___/ ____/
        Multi-Physics 10.1."0"-core/explicit-schedule-parallel-utili-d7754dadfa-Release-x86_64
Compiled for GNU/Linux and Python3.10 with Clang-14.0
Compiled with threading and MPI support. Threading support with OpenMP, scheduling dynamic.
Maximum number of threads: 20.
Running without MPI.
…t variable for scheduling type
@@ -206,7 +206,7 @@ class BlockPartition
     KRATOS_PREPARE_CATCH_THREAD_EXCEPTION

     TReducer global_reducer;
-    #pragma omp parallel for
+    #pragma omp parallel for schedule(runtime)
@loumalouomega as I am saying, take a look at line 154. It does not make sense to change this unless we change what happens there.
Also, to my understanding, the runtime behaviour potentially has a very high overhead due to the need of making a syscall to fetch an env variable.
Not sure if that matters... but at least we need to be aware of this.
We should use the benchmark to check whether it affects performance significantly.
The idea of the runtime option is to give flexibility; if you prefer, we can define it at compile time...
It is recommended by OpenMP itself:
https://www.openmp.org/wp-content/uploads/openmp-webinar-vanderPas-20210318.pdf
(And I found a master's thesis saying that it doesn't penalize performance: https://hpc.dmi.unibas.ch/wp-content/uploads/sites/87/2020/10/2019_akan_yilmaz_ma_thesisjune2019.pdf)
@loumalouomega aside from the comments on the opportunity of using OMP_SCHEDULE, did you take a look at what I am writing?
We are doing the chunking "by hand". If we don't change that, it makes no sense to use a different scheduling, as everyone will be working on its own chunk (as of now we do not have more chunks than threads!)
Okay... let me think about this...
In that case we may need to rethink the chunking (to be dependent on the CPU architecture)
@RiccardoRossi what do you suggest exactly? Because I have been studying this and our chunking conflicts with the OMP scheduling, and a priori the most efficient would be to let OMP do the chunking. The problem is that with that we lose the parallel_utilities design and reduction utilities.
# Check if the environment variable OMP_SCHEDULE is defined
if(DEFINED ENV{OMP_SCHEDULE})
  # Set the already defined one
  set(KRATOS_OMP_SCHEDULE $ENV{OMP_SCHEDULE})
OMP_SCHEDULE is a runtime env variable; it is an extremely bad idea to use it as a compilation switch (IMO).
I understand, but the idea is the following: during compilation, OMP_SCHEDULE sets KRATOS_OMP_SCHEDULE, which will be used as the default if OMP_SCHEDULE is not actually defined at runtime; but if OMP_SCHEDULE is defined at runtime, it will be taken into account. Do you understand me?
I agree with the chunk-size argument by @RiccardoRossi. My point (in #12924) was to first give a way to define dynamic scheduling in our for-each loop. This would let us fine-tune our parallelization in many cases where dynamic would be better, or at least not worse. As for having dynamic as the default now, I understand that would not work and chunk size would be an important blocker...
To clarify, it is NOT difficult to change the chunking algorithm (I guess it will be around 20 lines of code); I am simply saying that it needs to be done alongside the other changes.
…ock-based operations for improved performance and clarity
…uling for improved performance
📝 Description
Making explicitly `schedule(runtime)`, with `dynamic` by default, in OMP loops in `ParallelUtils`. I still need to add a benchmark and actually check that it is faster. Also updates the banner with the parallelism information. Fixes #12924
🆕 Changelog
`schedule(dynamic)` by default in OMP loops in `ParallelUtils`
`parallel_utilities` #12942