With MPI and sufficiently large basis set sizes, storing a copy of the Cholesky vectors in each task can lead to out-of-memory errors. Would it be possible to add an option to store a single copy of the Choleskies in shared memory accessible to all tasks?
Could you provide some more information? Where do you see the problem? What size are the Choleskies? How much memory per node do you have? How many walkers are you using? An example script would be very helpful here, or at least a minimal problem which reproduces it.
Currently we DO store the Choleskies in shared memory (one copy of the full tensor plus the half-rotated alpha/beta tensors per node) using MPI-3 shared windows. At least we did; with all the infrastructure changes it is possible something was modified, or you may be hitting some other limit or issue.
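For anyone unfamiliar with the pattern being described: the idea is that ranks on the same node attach to one node-local allocation instead of each holding a private copy. In MPI-3 this is done with `MPI_Win_allocate_shared` / `MPI_Win_shared_query`. As a minimal, runnable stand-in (not the actual QMCPACK code), the same one-copy-per-node idea can be sketched with Python's `multiprocessing.shared_memory`; the sizes and names here are made up for illustration:

```python
from multiprocessing import Process, shared_memory

def rank_task(shm_name: str, n: int) -> None:
    # Each "rank" attaches to the existing node-local buffer
    # instead of allocating its own copy of the data.
    shm = shared_memory.SharedMemory(name=shm_name)
    try:
        total = sum(shm.buf[:n])      # read the shared "Cholesky" data
        shm.buf[n] = total % 256      # write a result slot for the parent to check
    finally:
        shm.close()

def demo() -> int:
    n = 8
    # One copy of the data, created once per "node" (hypothetical size).
    shm = shared_memory.SharedMemory(create=True, size=n + 1)
    try:
        shm.buf[:n] = bytes(range(n))
        p = Process(target=rank_task, args=(shm.name, n))
        p.start()
        p.join()
        return shm.buf[n]             # 0+1+...+7 = 28
    finally:
        shm.close()
        shm.unlink()

if __name__ == "__main__":
    print(demo())
```

With MPI-3 the structure is analogous: one rank per node allocates the window, the others query the base pointer and read from it directly, so the per-node memory cost is one copy regardless of the number of ranks.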
On a separate note we should print out some more detailed (dynamic) memory consumption information.