Here is a wish list for the `belief_propagation` function:
- Customizable update sequences/schedules, including combinations of parallel and serial/sequential schedules. Ideally this would be designed to make it easy to do real-space parallel DMRG/TDVP/TEBD (see the schedule sketch after this list).
- A unified picture of using different gauges, such as arbitrary gauges (standard BP), the Vidal gauge, and orthogonal gauges via square-root BP. This would allow many aspects of other tensor network algorithms, such as DMRG/TDVP on a tree, to be encompassed within the BP function.
- Specialized functionality when run on a tree (or a subregion of a graph that is a tree) to ensure BP is done as efficiently as possible, i.e. in one iteration using a tree traversal (see the tree-traversal sketch after this list).
- The ability to do minimal message tensor updates from one region to another (i.e. update the message tensors along some path between regions), with applications to DMRG/TDVP. How do we handle that elegantly for both trees and non-trees? The goal would be to require only a minimal amount of self-consistency, and also to allow dropping parts of the cache for memory efficiency.
- Writing the BP cache to disk (this would be used for the write-to-disk feature that is important in DMRG to avoid running out of memory at large bond dimensions; see the serialization sketch after this list).
- A better format for partitioned tensor networks, as well as a simpler format for the BP cache.
- Make sure all of this functionality efficiently handles simpler cases like periodic MPS. That should fall out automatically if all of the issues above are addressed.
- Replace the specialized functionality we have for TTNs, like computing inner products or orthogonalizing, with BP (generalized to partitioned networks) and square-root BP. In principle we could entirely remove the `TreeTensorNetwork` type, given the correct abstractions in the `belief_propagation` function that analyze the graph structure and handle tree structures efficiently at runtime.
- Try out different BP schedules/update sequences. A good reference proposing a new schedule and comparing it to existing ones is https://arxiv.org/abs/1206.6837.
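To make the schedule idea concrete, here is a minimal, purely hypothetical sketch of how a mixed parallel/sequential schedule could be represented: as a sequence of "layers" of directed edges, where edges within a layer are updated in parallel and layers are applied in order. None of these names (`DirectedEdge`, `Schedule`, `run_schedule!`, `update_message!`) exist in ITensorNetworks.jl; they only illustrate one possible interface.

```julia
# Hypothetical sketch: a BP schedule as layers of directed edges.
# Edges within a layer are independent and could be updated in
# parallel; layers are applied sequentially.

# A directed edge (source => destination) in the partitioned network.
const DirectedEdge = Pair{Int,Int}

# A schedule: each inner vector is one parallel layer.
const Schedule = Vector{Vector{DirectedEdge}}

function run_schedule!(update_message!::Function, schedule::Schedule)
    for layer in schedule
        # Edges in a layer touch disjoint messages, so this inner loop
        # could be threaded (e.g. with Threads.@threads).
        for edge in layer
            update_message!(edge)
        end
    end
end

# Example on a 3-site chain 1-2-3: a fully sequential schedule ...
sequential = [[1 => 2], [2 => 3], [3 => 2], [2 => 1]]
# ... versus a two-layer schedule updating both directions in parallel.
parallel = [[1 => 2, 3 => 2], [2 => 1, 2 => 3]]

run_schedule!(e -> println("updating message ", e), sequential)
```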
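For the tree case, the key point is that a leaves-to-root sweep followed by a root-to-leaves sweep makes every message exact after a single pass. Below is a self-contained sketch of constructing such a one-pass schedule; it assumes the tree is given as plain adjacency lists and an arbitrary root, and `tree_schedule` is an illustrative name, not an existing function.

```julia
# Hypothetical sketch: a one-pass BP schedule on a tree.
# Upward (leaves-to-root) then downward (root-to-leaves) updates
# converge in a single sweep, matching an exact tree contraction.

function tree_schedule(adj::Dict{Int,Vector{Int}}, root::Int)
    # BFS from the root to record each vertex's parent and a
    # shallowest-first vertex ordering.
    parent = Dict(root => 0)
    order = [root]
    queue = [root]
    while !isempty(queue)
        v = popfirst!(queue)
        for w in adj[v]
            if !haskey(parent, w)
                parent[w] = v
                push!(order, w)
                push!(queue, w)
            end
        end
    end
    # Upward pass: child => parent edges, deepest vertices first.
    up = [v => parent[v] for v in reverse(order) if v != root]
    # Downward pass: parent => child edges, shallowest first.
    down = [parent[v] => v for v in order if v != root]
    return vcat(up, down)
end

# Example: the star graph 2-1-3 rooted at vertex 1.
adj = Dict(1 => [2, 3], 2 => [1], 3 => [1])
println(tree_schedule(adj, 1))  # [3=>1, 2=>1, 1=>2, 1=>3]
```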
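And for the write-to-disk item, a minimal sketch using Julia's `Serialization` standard library, assuming the cache is a dictionary of messages keyed by directed edge (plain arrays stand in for the actual message tensors here):

```julia
# Hypothetical sketch of writing a BP message cache to disk with the
# Serialization stdlib; a real implementation would store message
# tensors and likely a more structured cache format.
using Serialization

# Messages keyed by directed edge, as in the schedule sketch above.
cache = Dict((1 => 2) => rand(4, 4), (2 => 1) => rand(4, 4))

# Write the full cache to disk and drop it from memory ...
serialize("bp_cache.jls", cache)
cache = nothing  # free memory, e.g. before a large bond-dimension step

# ... then read it back when the messages are needed again.
cache = deserialize("bp_cache.jls")
```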
In https://github.com/mtfishman/ITensorNetworks.jl/tree/generalize_alternating_update I am working on generalizing `alternating_update` to non-tree graphs using BP as a contraction backend; addressing these issues will be useful for that PR. Issues around the gauge will be important, since picking the right gauge can allow us to use regular eigensolvers instead of generalized eigensolvers for DMRG on loopy networks. Designing this correctly will also allow us to avoid having two separate code paths for trees and non-trees; ideally those details can be handled inside the `belief_propagation` function.
Some of this is addressed by #111 and ITensor/NamedGraphs.jl#39.
@JoeyT1994 @b-kloss