Do not try to compute initial solution for inactive multi-segment wells split across processors #5751
Conversation
jenkins build this please
For what it's worth, this PR allows me to run a field case as I'll nevertheless defer to those more familiar with this part of the code to review the PR as there may be aspects of the structure that I don't fully grasp.
Is there some more information regarding the symptom? Where does it crash exactly?
In current master we crash in That said, if we want to use the proposed guard, then we should at least amend it to
Or just
My main concern is that an if condition based on the inequality of these two variables is too broad for the targeted situation and might cover up other scenarios/bugs in the future (we are not running distributed parallel MS wells yet; that should be addressed by the development for running parallel MS wells). If we know it is because the well is SHUT, why do we not use that type of condition to make it clearer that it is due to the well being SHUT (at least something like ). And also, let us output some DEBUG information or throw if
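For illustration, a status-based guard along the lines suggested here might look roughly like the sketch below, placed inside the loop over wells. This is not the PR's code; it assumes the well object exposes its status via well_ecl.getStatus() with a Well::Status::SHUT enumerator as in opm-common, so adjust to the actual API if the names differ.

// Sketch only: make the guard's intent explicit by keying on the well status
// rather than on the size mismatch alone. The accessor well_ecl.getStatus()
// and the enum value Well::Status::SHUT are assumptions about the API.
if (well_ecl.getStatus() == Well::Status::SHUT &&
    static_cast<int>(ws.perf_data.size()) != n_activeperf)
{
    // The well is shut and its perforation data is not fully local to this
    // process, so skip computing an initial segment solution for it.
    continue;
}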
I agree that the case with distributed active wells needs to be handled by that development, hence the \todo message.
When allowing inactive wells (that are never open at any time during the simulation) to be split across processes, the two sizes can indeed differ. Checking for SHUT sounds dangerous, since I guess wells may open during a time step..?
Since this is not an error situation, I think we should avoid DEBUG messages and definitely avoid throws.
I can add a more explicit check for inactive wells, then (for now) throw for distributed wells. Does that sound ok?
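A rough sketch of what that plan could look like inside the initialisation loop follows; the helper name matches the diff shown further down, while the exception type and message are purely illustrative.

// Rough sketch of the proposed flow (requires <stdexcept> and <string>).
if (this->is_inactive_well(well_ecl.name())) {
    // Permanently inactive well, possibly split across processes: there is
    // nothing meaningful to initialise, so skip it.
    continue;
}
if (static_cast<int>(ws.perf_data.size()) != n_activeperf) {
    // An active multi-segment well distributed over several processes is not
    // supported yet, so fail loudly instead of computing a bogus initial
    // solution.
    throw std::logic_error("Distributed multi-segment wells are not yet supported: "
                           + well_ecl.name());
}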
… distributed multi-segment well
Yes, that is sensible. And we discussed it a little bit. Since we decided that some inactive wells can be distributed across processes, there should be a way/criterion to detect/decide which wells can be split. For those wells, since we cannot do much with them (like opening them), let us do minimal things with them; for example, if possible, do not initialize unneeded WellState information (you are the one who knows best regarding this issue). For the function, please let us know what you think of it.
opm/simulators/wells/WellState.cpp (Outdated)
// \todo{ Update the procedure below to work for actually distributed wells. }
if (static_cast<int>(ws.perf_data.size()) != n_activeperf)
if (this->is_inactive_well(well_ecl.name()))
    continue;
opm/simulators/wells/WellState.cpp (Outdated)
@@ -273,6 +273,7 @@ void WellState<Scalar>::init(const std::vector<Scalar>& cellPressures,
                              report_step,
                              wells_ecl);
     well_rates.clear();
+    this->inactive_well_names_ = schedule.getInactiveWellNamesAtEnd();
If schedule.getInactiveWellNamesAtEnd() is used to determine whether a well can be split across processes, I would suggest a more specific name than inactive_well_names_, to show that these wells will be shut all the time and cannot be opened during the simulation, for example permanently_inactive_well_names_, with the corresponding function named is_permanently_inactive_well.
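To make the suggestion concrete, here is a minimal sketch of the renamed member and accessor; the container type, constructor, and class name are assumptions for illustration, and the real WellState class of course holds far more state.

#include <string>
#include <unordered_set>
#include <utility>

// Sketch only: member and accessor under the suggested "permanently inactive"
// naming, backed by a simple set lookup.
class WellStateNamingSketch
{
public:
    // Would be populated once from Schedule::getInactiveWellNamesAtEnd().
    explicit WellStateNamingSketch(std::unordered_set<std::string> names)
        : permanently_inactive_well_names_(std::move(names))
    {}

    // True for wells that stay shut for the whole simulation and can therefore
    // be split across processes without computing an initial segment solution.
    bool is_permanently_inactive_well(const std::string& wname) const
    {
        return permanently_inactive_well_names_.count(wname) > 0;
    }

private:
    std::unordered_set<std::string> permanently_inactive_well_names_;
};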
b60eecb to 673d541
jenkins build this please
@bska, can you test whether the current version fixes the running of your case? I am happy with the current approach that has a more specific design to tackle the problem. You can review/merge as you wish.
> can you test whether the current version fixes the running of your case?
I've just completed a test of the field case I mentioned before. I can confirm that the case continues to run in parallel (mpirun -np 14) with this edition of the PR. In the current master sources the case does not run in parallel, but it does run in sequential mode.
> I am happy with the current approach that has a more specific design to tackle the problem.
It looks good to me too. At some point we may consider moving the Schedule::getInactiveWellNamesAtEnd() call to the WellState constructor, however. We call WellState<>::init() at least once for each report step, and I don't really expect getInactiveWellNamesAtEnd() to change, although I may be missing something.
In any case, this fixes a real problem on a real case, so I'll merge into master.
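For reference, that idea might look roughly like the following; the constructor signature is hypothetical (the actual WellState constructor takes different arguments), so treat this purely as a sketch of where the caching would move.

// Hypothetical sketch: cache the permanently inactive well names once at
// construction instead of re-querying them in every init() call.
template <class Scalar>
WellState<Scalar>::WellState(const Schedule& schedule /* , other arguments */)
{
    // Schedule::getInactiveWellNamesAtEnd() does not depend on the report
    // step, so a single call at construction time should suffice.
    this->permanently_inactive_well_names_ = schedule.getInactiveWellNamesAtEnd();
}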
@bska, can you rerun the test field case you were running with? Thanks!
Sure. Is there anything in particular you'd like me to look out for?
Nothing in particular, just check if the case runs through as expected. Thanks!
Cool. I'll just rebuild everything first to make sure I have a consistent set of binaries given the CMake changes that were just merged. |
@lisajulia: The model does indeed still run as
I think the concern only applies when we actually distribute the MS wells across processes.
I got slightly different timestepping behaviour between master and that PR, but not different enough that it's possible to say that one run is "better" than the other. Final TCPU is currently slightly higher with #5746 than in master as of #5756. On a sidenote, if
Ok thanks, I will take this setting into account for my PR #5746!
Do address the typo in the message as well (--enable-multisegment-wells=false).
6bdb801#diff-cdbb36d3d28bb6896b6aa7d316bc42496e4feb0bca83f210919e4826dc7f275dR327
No description provided.