
Alternatives

Tenet is strongly opinionated. We acknowledge that it may not suit all cases (although we try 🙂). If your case doesn't fit Tenet's design, you can try the following libraries:

Chain ansatz

[Figure: chain tensor-network topologies, panels labeled "Open" and "Periodic"]

In Tenet, the generic MatrixProduct ansatz implements this topology. Type parameters are used to select its functionality (State or Operator) and its boundary conditions (Open or Periodic).

Missing docstring for MatrixProduct. Check Documenter's build log for details.

Missing docstring for MatrixProduct(::Any). Check Documenter's build log for details.
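
Since the docstrings above failed to render, here is a minimal construction sketch; the vector-of-arrays constructor and the array layout shown are assumptions, not documented API:

using Tenet

# assumed constructor: one array per site, with open boundary conditions
ψ = MatrixProduct{State,Open}([rand(2, 2), rand(2, 2, 2), rand(2, 2)])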

Product ansatz

Contraction

Contraction path optimization and execution is delegated to the EinExprs library. An EinExpr is a lower-level representation of a Tensor Network in which the contraction path has been laid out as a tree. It is similar to a symbolic expression (i.e. Expr), but every node represents an Einstein summation expression (aka einsum).

EinExprs.einexprMethod
einexpr(tn::AbstractTensorNetwork; optimizer = EinExprs.Greedy, output = inds(tn, :open), kwargs...)

Search a contraction path for the given AbstractTensorNetwork and return it as a EinExpr.

Keyword Arguments

  • optimizer Contraction path optimizer. Check the EinExprs documentation for more info.
  • output Indices that won't be contracted. Defaults to the open indices.
  • kwargs Options to be passed to the optimizer.

See also: contract.

source
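
For example, searching a contraction path with the default Greedy optimizer (rand(TensorNetwork, n, regularity) is the random-network constructor referenced later in these docs):

using Tenet, EinExprs

tn = rand(TensorNetwork, 10, 3)                # 10 tensors, each with 3 indices
path = einexpr(tn; optimizer=EinExprs.Greedy)  # contraction path as an EinExpr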
Missing docstring for contract(::Tenet.TensorNetwork). Check Documenter's build log for details.

Type hierarchy

[Diagram: developer type-hierarchy graph (fragment: Ansatz --> Chain; dashed nodes mark abstract types)]

Tenet.jl

BSC-Quantic's Registry

Tenet and some of its dependencies are located in our own Julia registry. To install Tenet, first add our registry to your Julia installation using the Pkg mode in a REPL session:

using Pkg
 pkg"registry add https://github.com/bsc-quantic/Registry"

A Julia library for Tensor Networks. Tenet can be executed both in local environments and on large supercomputers. Its goals are:

  • Expressiveness Simple to use 👶
  • Flexibility Extend it to your needs 🔧
  • Performance Goes brr... fast 🏎️

A video of its presentation at JuliaCon 2023 can be seen here:

[Video: Tenet.jl presentation at JuliaCon 2023]

Features

  • Optimized Tensor Network contraction, powered by EinExprs
  • Tensor Network slicing/cutting
  • Automatic Differentiation of TN contraction, powered by EinExprs and ChainRules
  • 3D visualization of large networks, powered by Makie

Quantum Tensor Networks

Tenet.QuantumType
Quantum

Tensor Network with a notion of "causality". This leads to the notion of sites and directionality (input/output).

Notes

  • Indices are referenced by Sites.
source
Base.adjointMethod
adjoint(q::Quantum)

Returns the adjoint of a Quantum Tensor Network; i.e. the conjugate Tensor Network with the inputs and outputs swapped.

source
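
As a rough sketch of how this fits together (the constructor shown, taking a TensorNetwork plus a site-to-index assignment, and the site"..." syntax are assumptions based on the description above):

using Tenet

tn = TensorNetwork([Tensor(rand(2), (:i,))])  # one tensor, one open index
q = Quantum(tn, Dict(site"1" => :i))          # assumed: tag index :i as site 1
q′ = adjoint(q)                               # conjugate, inputs ↔ outputs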

Queries

Missing docstring for Tenet.inds(::Quantum; kwargs...). Check Documenter's build log for details.

Missing docstring for Tenet.tensors(::Quantum; kwargs...). Check Documenter's build log for details.

Connecting Quantum Tensor Networks

Missing docstring for inputs. Check Documenter's build log for details.

Missing docstring for outputs. Check Documenter's build log for details.

Missing docstring for lanes. Check Documenter's build log for details.

Missing docstring for ninputs. Check Documenter's build log for details.

Missing docstring for noutputs. Check Documenter's build log for details.

Missing docstring for nlanes. Check Documenter's build log for details.

Missing docstring for Socket. Check Documenter's build log for details.

Tenet.ScalarType
Scalar <: Socket

Socket representing a scalar; i.e. a Tensor Network with no open sites.

source
Tenet.StateType
State <: Socket

Socket representing a state; i.e. a Tensor Network with only input sites (or only output sites if dual = true).

source
Tenet.OperatorType
Operator <: Socket

Socket representing an operator; i.e. a Tensor Network with both input and output sites.

source
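
A hedged query sketch (assuming a socket function that classifies a Quantum network by its open sites):

socket(q)   # returns Scalar(), State() or Operator() depending on open sites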
Missing docstring for Base.merge(::Quantum, ::Quantum...). Check Documenter's build log for details.


References

  • Fishman, M.; White, S. R. and Stoudenmire, E. M. (2022). The ITensor Software Library for Tensor Network Calculations. SciPost Phys. Codebases, 4.
  • Gray, J. (2018). quimb: A python package for quantum information and many-body calculations. Journal of Open Source Software 3, 819.
  • Gray, J. and Kourtis, S. (2021). Hyper-optimized tensor network contraction. Quantum 5, 410.
  • Hauschild, J.; Pollmann, F. and Zaletel, M. (2021). The Tensor Network Python (TeNPy) Library. In: APS March Meeting Abstracts, Vol. 2021; p. R21–006.
  • Ramón Pareja Monturiol, J.; Pérez-García, D. and Pozas-Kerstjens, A. (2023). TensorKrowch: Smooth integration of tensor networks in machine learning, arXiv e-prints, arXiv–2306.
Tensor Networks

Base.sizeMethod
size(tn::AbstractTensorNetwork)
 size(tn::AbstractTensorNetwork, index)

Return a mapping from indices to their dimensionalities.

If index is set, return the dimensionality of index. This is equivalent to size(tn)[index].

source
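
For instance:

tn = TensorNetwork([Tensor(rand(2, 3), (:i, :j))])
size(tn)       # mapping of indices to sizes: :i => 2, :j => 3
size(tn, :j)   # 3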
Missing docstring for tensors(::Tenet.TensorNetwork). Check Documenter's build log for details.

Modification

Add/Remove tensors

Base.push!Method
push!(tn::AbstractTensorNetwork, tensor::Tensor)

Add a new tensor to the Tensor Network.

See also: append!, pop!.

source
Base.append!Method
append!(tn::TensorNetwork, tensors::AbstractVecOrTuple{<:Tensor})

Add a list of tensors to a TensorNetwork.

See also: push!, merge!.

source
Base.merge!Method
merge!(self::TensorNetwork, others::TensorNetwork...)
 merge(self::TensorNetwork, others::TensorNetwork...)

Fuse various TensorNetworks into one.

See also: append!.

source
Base.pop!Method
pop!(tn::TensorNetwork, tensor::Tensor)
 pop!(tn::TensorNetwork, i::Union{Symbol,AbstractVecOrTuple{Symbol}})

Remove a tensor from the Tensor Network and return it. If a Tensor is passed, the first tensor that satisfies egality (i.e. ≡ or ===) will be removed. If a Symbol or a list of Symbols is passed, remove and return the tensors that contain all the given indices.

See also: push!, delete!.

source
Base.delete!Method
delete!(tn::TensorNetwork, x)

Like pop! but return the TensorNetwork instead.

source
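
A small sketch of adding and removing tensors:

tn = TensorNetwork()
t = Tensor(rand(2, 2), (:i, :j))
push!(tn, t)   # add a tensor to the network
pop!(tn, :i)   # remove & return the tensors containing index :i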

Replace existing elements

Base.replace!Function
replace!(tn::AbstractTensorNetwork, old => new...)
 replace(tn::AbstractTensorNetwork, old => new...)

Replace the element in old with the one in new. Depending on the types of old and new, the following behaviour is expected:

  • If Symbols, it will correspond to an index renaming.
  • If Tensors, the first element that satisfies egality (≡ or ===) will be replaced.
source
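
For example (the replacement tensor is assumed to carry the same indices as the old one):

replace!(tn, :i => :k)                  # rename index :i to :k
replace!(tn, old_tensor => new_tensor)  # swap a tensor, matched by egality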

Slicing

Base.selectdimFunction
selectdim(tn::AbstractTensorNetwork, index::Symbol, i)

Return a copy of the AbstractTensorNetwork where index has been projected to dimension i.

See also: view, slice!.

source
Tenet.slice!Function
slice!(tn::AbstractTensorNetwork, index::Symbol, i)

In-place projection of index on dimension i.

See also: selectdim, view.

source
Base.viewMethod
view(tn::AbstractTensorNetwork, index => i...)

Return a copy of the AbstractTensorNetwork where each index has been projected to dimension i. It is equivalent to a recursive call of selectdim.

See also: selectdim, slice!.

source
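
For example, projecting two indices at once:

view(tn, :i => 1, :j => 2)   # like selectdim(selectdim(tn, :i, 1), :j, 2)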

Miscellaneous

Base.copyMethod
copy(tn::TensorNetwork)

Return a shallow copy of a TensorNetwork.

source
Missing docstring for Base.rand(::Type{TensorNetwork}, n::Integer, regularity::Integer). Check Documenter's build log for details.


Tensors

There are many jokes[1] about how to define a tensor. The definition we give here might not be the most rigorous one, but it is good enough for our use case (don't kill me please, mathematicians). A tensor $T$ of order[2] $n$ is a multilinear[3] map between $n$ vector spaces over a field $\mathcal{F}$.

\[T : \mathcal{F}^{\dim(1)} \times \dots \times \mathcal{F}^{\dim(n)} \to \mathcal{F}\]

In layman's terms, it is a linear function whose inputs are vectors and the output is a scalar number.

\[T(\mathbf{v}^{(1)}, \dots, \mathbf{v}^{(n)}) = c \in \mathcal{F} \qquad\qquad \forall i, \mathbf{v}^{(i)} \in \mathcal{F}^{\dim(i)}\]

Tensor algebra is a higher-order generalization of linear algebra, where scalar numbers can be viewed as order-0 tensors, vectors as order-1 tensors, matrices as order-2 tensors, ...

Letters are used to identify each of the vector spaces the tensor relates to. In computer science, you would intuitively think of tensors as "n-dimensional arrays with named dimensions".

\[T_{ijk} \iff \mathtt{T[i,j,k]}\]

The Tensor type

In Tenet, a tensor is represented by the Tensor type, which wraps an array and a list of symbols. As it subtypes AbstractArray, many array operations can be dispatched to it.

You can create a Tensor by passing an array and a list of Symbols that name indices.

julia> Tᵢⱼₖ = Tensor(rand(3,5,2), (:i,:j,:k))
3×5×2 Tensor{Float64, 3, Array{Float64, 3}}:
[:, :, 1] =
 0.716966  0.816131  0.292421  0.389926  0.993739
 0.747671  0.88229   0.811297  0.905784  0.0419479
 0.911713  0.455111  0.388715  0.623101  0.794524

[:, :, 2] =
 0.136287  0.341294  0.295501  0.387303  0.374442
 0.659483  0.830502  0.488166  0.762     0.175478
 0.625201  0.452687  0.128882  0.29749   0.623336
The dimensionality or size of each index can be consulted using the size function.

Base.sizeMethod
Base.size(::Tensor[, i])

Return the size of the underlying array or the dimension i (specified by Symbol or Integer).

source
julia> size(Tᵢⱼₖ)
(3, 5, 2)

julia> size(Tᵢⱼₖ, :j)
5

julia> length(Tᵢⱼₖ)
30

Operations

Contraction

Graphs.LinAlg.contractMethod
contract(a::Tensor[, b::Tensor]; dims=nonunique([inds(a)..., inds(b)...]))

Perform tensor contraction operation.

source
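
For example, contracting two tensors over their shared index :j (the default dims are the non-unique indices):

A = Tensor(rand(2, 3), (:i, :j))
B = Tensor(rand(3, 4), (:j, :k))
C = contract(A, B)   # sums over :j, leaving indices (:i, :k)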

Factorizations

LinearAlgebra.svdMethod
LinearAlgebra.svd(tensor::Tensor; left_inds, right_inds, virtualind, kwargs...)

Perform SVD factorization on a tensor.

Keyword arguments

  • left_inds: left indices to be used in the SVD factorization. Defaults to all indices of tensor except right_inds.
  • right_inds: right indices to be used in the SVD factorization. Defaults to all indices of tensor except left_inds.
  • virtualind: name of the virtual bond. Defaults to a random Symbol.
source
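
A usage sketch (that svd returns the factor tensors U, s and V in this order is an assumption here, not documented above):

using LinearAlgebra

T = Tensor(rand(2, 3, 4), (:i, :j, :k))
U, s, V = svd(T; left_inds=(:i, :j), virtualind=:bond)  # assumed return order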
LinearAlgebra.qrMethod
LinearAlgebra.qr(tensor::Tensor; left_inds, right_inds, virtualind, kwargs...)

Perform QR factorization on a tensor.

Keyword arguments

  • left_inds: left indices to be used in the QR factorization. Defaults to all indices of tensor except right_inds.
  • right_inds: right indices to be used in the QR factorization. Defaults to all indices of tensor except left_inds.
  • virtualind: name of the virtual bond. Defaults to a random Symbol.
source
LinearAlgebra.luMethod
LinearAlgebra.lu(tensor::Tensor; left_inds, right_inds, virtualind, kwargs...)

Perform LU factorization on a tensor.

Keyword arguments

  • left_inds: left indices to be used in the LU factorization. Defaults to all indices of tensor except right_inds.
  • right_inds: right indices to be used in the LU factorization. Defaults to all indices of tensor except left_inds.
  • virtualind: name of the virtual bond. Defaults to a random Symbol.
source
  • 1For example, recursive definitions like a tensor is whatever that transforms as a tensor.
  • 2The order of a tensor may also be known as rank or dimensionality in other fields. However, these terms can be misleading, since order has nothing to do with the rank of linear algebra nor with the dimensionality of a vector space. We prefer the word order.
  • 3Meaning that the relationships between the output and the inputs, and the inputs between them, are linear.

Transformations

In tensor network computations, it is good practice to apply various transformations to simplify the network structure, reduce computational cost, or prepare the network for further operations. These transformations modify the network's structure locally by permuting, contracting, factoring or truncating tensors.

These transformations are indispensable because they can drastically reduce the problem size of both the contraction path search and the contraction itself. This doesn't necessarily mean reducing the maximum rank of the Tensor Network, but more importantly, reducing the size (or rank) of the tensors involved.

Our approach is based on (Gray and Kourtis, 2021), and can also be found in quimb.

In Tenet, we provide a set of predefined transformations which you can apply to your TensorNetwork using the transform/transform! functions.

Tenet.transformFunction
transform(tn::TensorNetwork, config::Transformation)
 transform(tn::TensorNetwork, configs)

Return a new TensorNetwork where some Transformation has been performed into it.

See also: transform!.

source
Tenet.transform!Function
transform!(tn::TensorNetwork, config::Transformation)
 transform!(tn::TensorNetwork, configs)

In-place version of transform.

source
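
For instance, using the transformations documented below (the zero-argument constructors are a sketch assuming the documented keyword defaults):

tn = rand(TensorNetwork, 10, 3)
tn′ = transform(tn, Tenet.ContractSimplification())            # out-of-place
transform!(tn, [Tenet.DiagonalReduction(), Tenet.Truncate()])  # chain several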

Available transformations

Hyperindex converter

Tenet.HyperFlattenType
HyperFlatten <: Transformation

Convert hyperindices to COPY-tensors, represented by DeltaArrays. This transformation is always used by default when visualizing a TensorNetwork with plot.

See also: HyperGroup.

source
Tenet.HyperGroupType
HyperGroup <: Transformation

Convert COPY-tensors, represented by DeltaArrays, to hyperindices.

See also: HyperFlatten.

source

Contraction simplification

Tenet.ContractSimplificationType
ContractSimplification <: Transformation

Preemptively contract tensors whose result doesn't increase in size.

source
Example block output

Diagonal reduction

Tenet.DiagonalReductionType
DiagonalReduction <: Transformation

Reduce the dimension of a Tensor in a TensorNetwork when it has a pair of indices that fulfil a diagonal structure.

Keyword Arguments

  • atol Absolute tolerance. Defaults to 1e-12.
source
Example block output

Anti-diagonal reduction

Tenet.AntiDiagonalGaugingType
AntiDiagonalGauging <: Transformation

Reverse the order of tensor indices that fulfill the anti-diagonal condition. While this transformation doesn't directly enhance computational efficiency, it sets up the TensorNetwork for other operations that do.

Keyword Arguments

  • atol Absolute tolerance. Defaults to 1e-12.
  • skip List of indices to skip. Defaults to [].
source

Dimension truncation

Tenet.TruncateType
Truncate <: Transformation

Truncate the dimension of a Tensor in a TensorNetwork when it contains columns with all elements smaller than atol.

Keyword Arguments

  • atol Absolute tolerance. Defaults to 1e-12.
  • skip List of indices to skip. Defaults to [].
source
Example block output

Split simplification

Tenet.SplitSimplificationType
SplitSimplification <: Transformation

Reduce the rank of tensors in the TensorNetwork by decomposing them using the Singular Value Decomposition (SVD). Tensors whose factorization do not increase the maximum rank of the network are left decomposed.

Keyword Arguments

  • atol Absolute tolerance. Defaults to 1e-10.
source
Example block output

Visualization

Tenet provides a Package Extension for Makie support. You can just import a Makie backend and call GraphMakie.graphplot on a TensorNetwork.

GraphMakie.graphplotMethod
graphplot(tn::TensorNetwork; kwargs...)
 graphplot!(f::Union{Figure,GridPosition}, tn::TensorNetwork; kwargs...)
 graphplot!(ax::Union{Axis,Axis3}, tn::TensorNetwork; kwargs...)

Plot a TensorNetwork as a graph.

Keyword Arguments

  • labels If true, show the labels of the tensor indices. Defaults to false.
  • The rest of kwargs are passed to GraphMakie.graphplot.
source
graphplot(tn, layout=Stress(), labels=true)
Example block output
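
A self-contained version of the call above (CairoMakie as the backend and NetworkLayout as the source of the Stress layout are assumptions):

using CairoMakie             # loading a Makie backend activates the extension
using NetworkLayout: Stress
using Tenet

tn = rand(TensorNetwork, 20, 3)
graphplot(tn; layout=Stress(), labels=true)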