Describe the new feature or enhancement
Provide parallelization at the node level in simulations.
Describe your proposed implementation
The Numba backend has a for loop over nodes in the network here:
for i in range(weights.shape[0]):
Switching this loop to numba.prange would parallelize over nodes, and the switch would be an option passed in by the user/client of the backend class when instantiating it.
Describe possible alternatives
Numba supports some automatic parallelization via the gufunc mechanism, but it would require a different template structure from the current backends.
A Numba CUDA backend could do this as well, but it is not yet mature enough to be used.
Additional comments
This was asked about on the TVB mailing list this morning: https://groups.google.com/g/tvb-users/c/GovsAb-xc1k/m/vCTOQcd_FAAJ