
[WIP] [BlockSparseArrays] (Truncated) SVD for Abstract(Block)(Sparse)Array #1572

Draft · wants to merge 14 commits into base: main
Conversation

lkdvos
Contributor

@lkdvos lkdvos commented Nov 7, 2024

This is a first draft of functionality extending singular value decompositions to block arrays. It features quite a few components that probably need further discussion.

svd(!)

The core functionality is centered around the typical LinearAlgebra.svd.

usv = svd(A) # returns factorization object
U, S, V = usv # returns components

For generic block arrays, the idea is that the rows of U and the columns of Vt have the same structure as the rows and columns of A, such that U * Diagonal(S) * Vt = A. For the columns of U and the rows of Vt, however, there is no structure that can be inferred, so these generically consist of a single block.
Additionally, implementation-wise it is probably most efficient to first cast to a single dense array, perform the SVD, and then restore the blocked structure. For that reason, the return type of the matrices is BlockedArray instead of BlockArray. This is implemented and works:

julia> a = mortar([rand(i, j) for i in [2, 2], j in [2, 2]])
2×2-blocked 4×4 BlockMatrix{Float64}:
 0.899443  0.24551    │  0.735501  0.367721
 0.60512   0.354016   │  0.854856  0.807514
 ─────────────────────┼────────────────────
 0.379235  0.371387   │  0.540997  0.597405
 0.983355  0.0194433  │  0.187833  0.497739

julia> svd(a)
NDTensors.BlockSparseArrays.SVD{Float64, Float64, BlockedMatrix{Float64, Matrix{Float64}, Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}}, BlockedVector{Float64, Vector{Float64}, Tuple{BlockedOneTo{Int64, Vector{Int64}}}}, BlockedMatrix{Float64, Matrix{Float64}, Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}}}
U factor:
2×1-blocked 4×4 BlockedMatrix{Float64}:
 -0.538016   0.185128  -0.809959   0.14224  
 -0.593322  -0.454957   0.177765  -0.63983  
 ───────────────────────────────────────────
 -0.408608  -0.407648   0.310857   0.755136 
 -0.43766    0.769782   0.464471  -0.0124688
singular values:
1-blocked 4-element BlockedVector{Float64}:
 2.2459630332217984 
 0.6638504762523649 
 0.3266654288265734 
 0.09879357698918856
Vt factor:
1×2-blocked 4×4 BlockedMatrix{Float64}:
 -0.635932  -0.223688   │  -0.537043  -0.507088
  0.743515  -0.379664   │  -0.495153  -0.24055
 -0.14178   -0.0350279  │  -0.576573   0.803888
  0.15058    0.896991   │  -0.366022  -0.19688
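The dense-cast strategy described above can be sketched as follows. This is a minimal standalone illustration assuming BlockArrays.jl, not the PR's actual implementation; the helper name `blocked_svd_sketch` is hypothetical.

```julia
using LinearAlgebra
using BlockArrays

# Sketch (hypothetical helper): cast to dense, factorize, then re-block.
# The row blocking of A is inherited by U, the column blocking by Vt, and the
# inner dimension is a single block.
function blocked_svd_sketch(A::AbstractBlockMatrix)
    F = svd(Matrix(A))                     # cast to dense and factorize
    k = length(F.S)
    U = BlockedArray(F.U, blocklengths(axes(A, 1)), [k])
    S = BlockedVector(F.S, [k])
    Vt = BlockedArray(F.Vt, [k], blocklengths(axes(A, 2)))
    return U, S, Vt
end
```

Reassembling `U * Diagonal(S) * Vt` then reproduces `A`, while `U` keeps the row blocking of `A`.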

If we want to keep this behavior of the block structure, the LinearAlgebra.SVD struct is however insufficient, as it requires the types of U and Vt to be the same. For BlockArray objects, the type of the axes is captured in the type parameters, so if the rows and columns of A are not described by axes of the same type, this fails.

SVD struct

In order to circumvent this problem, I copied most of the LinearAlgebra svd code and created a new SVD struct that does not have this restriction but otherwise functions identically. With this, I added new functions svd(!) that by default fall back to the LinearAlgebra implementation, but rewrap the result in the new struct.
Some questions here:

  • Do we want to overload LinearAlgebra.svd! instead of keeping them separate, only returning the custom SVD struct whenever required?
  • Should I rename the SVD struct in order to avoid confusion with the LinearAlgebra-defined one?
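For concreteness, the relaxed container described above could look something like the following. The field layout and iteration methods here are an illustrative sketch, not the PR's exact definitions:

```julia
using LinearAlgebra: Factorization

# Sketch: unlike LinearAlgebra.SVD{T,Tr,M}, U and Vt get independent type
# parameters, so mismatched row and column axis types of A are not a problem.
struct SVD{T,Tr,M<:AbstractArray{T},C<:AbstractVector{Tr},N<:AbstractArray{T}} <: Factorization{T}
    U::M
    S::C
    Vt::N
end

# Destructuring as `U, S, Vt = usv`, analogous to the LinearAlgebra version:
Base.iterate(F::SVD) = (F.U, Val(:S))
Base.iterate(F::SVD, ::Val{:S}) = (F.S, Val(:Vt))
Base.iterate(F::SVD, ::Val{:Vt}) = (F.Vt, Val(:done))
Base.iterate(F::SVD, ::Val{:done}) = nothing
```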

BlockSparse (Diagonal) SVD

Expanding this to BlockSparseArrays, I first added an implementation for BlockDiagonal, which is just a BlockSparseMatrix with a Diagonal{T,<:AbstractMatrix{T}} storage type.
In this case, the SVD happens block-by-block, and as a result we have extra information about the columns of U and the rows of Vt, as they should retain this blocked structure.
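The block-by-block computation can be sketched as below; this standalone version stores the diagonal blocks in a plain Vector as a stand-in for the PR's BlockDiagonal storage type, and the helper name is hypothetical:

```julia
using LinearAlgebra

# Sketch: SVD of a block-diagonal matrix, computed one diagonal block at a time.
function blockdiag_svd_sketch(diagblocks::Vector{<:AbstractMatrix})
    Fs = map(svd, diagblocks)
    Us = [F.U for F in Fs]     # columns of U retain the block structure
    Ss = [F.S for F in Fs]     # one group of singular values per block
    Vts = [F.Vt for F in Fs]   # rows of Vt retain the block structure
    return Us, Ss, Vts
end
```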

julia> a = BlockDiagonal([rand(T, i, j) for (i, j) in zip(m, n)])
typeof(axes) = Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}

Warning: To temporarily circumvent ...

2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, Diagonal{Matrix{Float64}, Vector{Matrix{Float64}}}, Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}}:
 0.583632  0.168925  │
 0.911104  0.842136  │
 ────────────────────┼────────────────────
                     │  0.324859  0.471626
                     │  0.188073  0.720221

julia> svd(a)
NDTensors.BlockSparseArrays.SVD{Float64, Float64, BlockSparseArray{Float64, 2, Matrix{Float64}, Diagonal{Matrix{Float64}, Vector{Matrix{Float64}}}, Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}}, BlockVector{Float64, Vector{Vector{Float64}}, Tuple{BlockedOneTo{Int64, Vector{Int64}}}}, BlockSparseArray{Float64, 2, Matrix{Float64}, Diagonal{Matrix{Float64}, Vector{Matrix{Float64}}}, Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}}}
U factor:
typeof(axes) = Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}

Warning: To temporarily circumvent ...

2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, Diagonal{Matrix{Float64}, Vector{Matrix{Float64}}}, Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}}:
 -0.415014  -0.909815  │
 -0.909815   0.415014  │
 ──────────────────────┼─────────────────────
                       │   0.603525   0.797344
                       │   0.797344  -0.603525
singular values:
2-blocked 4-element BlockVector{Float64}:
 1.3589527927192564 
 0.24841857379494714
 ───────────────────
 0.9259816883073599 
 0.15688225526219635
Vt factor:
typeof(axes) = Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}

Warning: To temporarily circumvent ...

2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, Diagonal{Matrix{Float64}, Vector{Matrix{Float64}}}, Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}}:
 -0.788218  -0.615396  │
 -0.615396   0.788218  │
 ──────────────────────┼─────────────────────
                       │   0.373678   0.927558
                       │   0.927558  -0.373678

Here, it might be beneficial to have S be either a BlockSparseVector or a BlockVector, depending on whether or not the full SVD is requested (not implemented yet).

BlockSparse SVD

Then, to also implement the generic block sparse case, I distinguish two possible situations: either the input is "permuted block-diagonal", as can arise in the context of tensors with nonzero flux, or it is a generic block sparse matrix.
There is a runtime check for this (based on ITensor/BlockSparseArrays.jl#3), which permutes the array to become BlockDiagonal if possible, and otherwise falls back to a dense implementation.

Here, the questions mostly concern whether we want this runtime check. I would guess this is something that can be statically inferred from the type of the axes, i.e. if the sparsity is due to symmetries, the matrix has to be block-diagonal in some form, while if not, it is presumably quite rare to have this specific structure. We could consider a trait to enable or disable this implementation.
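The runtime check can be sketched as follows. The names here are illustrative; `stored` plays the role of the PR's `block_stored_indices(A)`, represented as plain (block row, block column) tuples:

```julia
# A block sparse matrix is "permuted block-diagonal" iff every block row and
# every block column holds at most one stored block.
function try_blockdiagonal_perm(stored::Vector{Tuple{Int,Int}})
    I = first.(stored)
    J = last.(stored)
    (allunique(I) && allunique(J)) || return nothing  # no such permutation exists
    p = sortperm(I)               # order the stored blocks along the diagonal
    return I[p], J[p]             # block-row order and matching block-column permutation
end
```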

Truncated SVD

Finally, in order to also support truncation, I added an interface roughly based on the TensorKit implementation, featuring truncate(F::SVD) to truncate a given decomposition, as well as tsvd(A), which simply combines the decomposition and truncation steps. The second form could be useful in the future for algorithms that cannot decouple these two, such as Krylov methods.

The interface supports truncation via a number of different methods, based on keeping the first n values after sorting according to some criterion, as well as discarding values via a filtering mechanism. There are convenience constructors for the cases that are probably used most, but the system allows quite a lot of freedom. Additionally, because of the specialization on TruncationScheme, it is easily extensible in the future.
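As a sketch of that hierarchy (names follow the TensorKit-style interface described above; the exact definitions here are illustrative, not the PR's):

```julia
abstract type TruncationScheme end

struct TruncRank <: TruncationScheme
    n::Int                         # keep at most the n largest values
end
truncrank(n::Integer) = TruncRank(n)

struct TruncBelow{T} <: TruncationScheme
    ϵ::T                           # discard values below the threshold
end
truncbelow(ϵ::Real) = TruncBelow(ϵ)

# Select the indices of singular values to keep (S assumed sorted descending):
select(S::AbstractVector, t::TruncRank) = 1:min(t.n, length(S))
select(S::AbstractVector, t::TruncBelow) = findall(>=(t.ϵ), S)
```

New schemes then only need a struct and a `select` method, which is the extensibility mentioned above.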

julia> tsvd(a)
NDTensors.BlockSparseArrays.SVD{Float64, Float64, BlockedMatrix{Float64, Matrix{Float64}, Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}}, BlockedVector{Float64, Vector{Float64}, Tuple{BlockedOneTo{Int64, Vector{Int64}}}}, BlockedMatrix{Float64, Matrix{Float64}, Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}}}
U factor:
2×1-blocked 4×4 BlockedMatrix{Float64}:
 -0.672324   0.299543   0.533151   0.417138
 -0.265515  -0.723181   0.435688  -0.465496
 ──────────────────────────────────────────
 -0.412697   0.490917  -0.221498  -0.734589
 -0.554223  -0.382471  -0.690554   0.263986
singular values:
1-blocked 4-element BlockedVector{Float64}:
 2.156295710384511  
 0.841059553579276  
 0.4119978358023982 
 0.17695709459403608
Vt factor:
1×2-blocked 4×4 BlockedMatrix{Float64}:
 -0.472017  -0.55809   │  -0.605812  -0.314209
  0.212759  -0.81694   │   0.404206   0.35208
 -0.505012   0.143246  │  -0.17094    0.8338
 -0.690577   0.025016  │   0.663618  -0.286513

julia> tsvd(a; trunc=truncbelow(0.2))
NDTensors.BlockSparseArrays.SVD{Float64, Float64, BlockedMatrix{Float64, Matrix{Float64}, Tuple{BlockedOneTo{Int64, Vector{Int64}}, Base.OneTo{Int64}}}, Vector{Float64}, BlockedMatrix{Float64, Matrix{Float64}, Tuple{Base.OneTo{Int64}, BlockedOneTo{Int64, Vector{Int64}}}}}
U factor:
2×1-blocked 4×3 BlockedMatrix{Float64, Matrix{Float64}, Tuple{BlockedOneTo{Int64, Vector{Int64}}, Base.OneTo{Int64}}}:
 -0.672324   0.299543   0.533151
 -0.265515  -0.723181   0.435688
 ───────────────────────────────
 -0.412697   0.490917  -0.221498
 -0.554223  -0.382471  -0.690554
singular values:
3-element Vector{Float64}:
 2.156295710384511
 0.841059553579276
 0.4119978358023982
Vt factor:
1×2-blocked 3×4 BlockedMatrix{Float64, Matrix{Float64}, Tuple{Base.OneTo{Int64}, BlockedOneTo{Int64, Vector{Int64}}}}:
 -0.472017  -0.55809   │  -0.605812  -0.314209
  0.212759  -0.81694   │   0.404206   0.35208
 -0.505012   0.143246  │  -0.17094    0.8338

In order to make this work with BlockSparseArrays, I am missing the functionality for U[:, [1, 2, 4]] to slice the matrix while retaining as much of the block structure as possible:

julia> a = BlockDiagonal([rand(T, i, j) for (i, j) in zip(m, n)])
typeof(axes) = Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}

Warning: To temporarily circumvent a bug in printing BlockSparseArrays with mixtures of dual and non-dual axes, the types of the dual axes printed below might not be accurate. The types printed above this message are the correct ones.

2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, Diagonal{Matrix{Float64}, Vector{Matrix{Float64}}}, Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}}:
 0.545446  0.847428  │
 0.214882  0.472071  │
 ────────────────────┼────────────────────
                     │  0.988776  0.313793
                     │  0.394168  0.616615

julia> tsvd(a; trunc=truncbelow(0.2))
ERROR: Slicing not implemented for range of type `Vector{Int64}`.

As you can see, there is much to discuss. I attached my preliminary test file and added the code in the places I felt were relevant, but obviously I am completely open to suggestions, reorganisations, etc.

Attempt to find a permutation of blocks that makes `A` block-diagonal. If unsuccessful,
returns nothing, otherwise returns both the block-diagonal `B` as well as the permutation `I, J`.
"""
function try_make_blockdiagonal(A::AbstractBlockSparseMatrix)
Member


Consider changing to try_to_blockdiagonal as discussed above.

Member


Also, does this handle rectangular block matrices, i.e. when the block pattern itself is rectangular?

@mtfishman mtfishman changed the title (Truncated) SVD for Abstract(Block)(Sparse)Array [BlockSparseArrays] (Truncated) SVD for Abstract(Block)(Sparse)Array Nov 7, 2024
@mtfishman mtfishman changed the title [BlockSparseArrays] (Truncated) SVD for Abstract(Block)(Sparse)Array [WIP] [BlockSparseArrays] (Truncated) SVD for Abstract(Block)(Sparse)Array Nov 7, 2024

# Cast to block-diagonal implementation if permuted-blockdiagonal
function find_permute_blockdiagonal(A)
@show keys = map(Base.Fix2(getproperty, :n), collect(block_stored_indices(A)))
Member


I think this is the canonical way to convert Block(1, 1) to (1, 1), as an alternative to using field access:

Int.(Tuple(Block(1, 1)))

Member

@mtfishman mtfishman Nov 7, 2024


Though we could consider keeping the Block wrapper, i.e. Tuple(Block(1, 1)), and then the permutations are block permutations. Then below, instead of blocks(A)[tuple.(invperm(I), J)], it could be written as a slicing operation on the block sparse matrix A[...]. I slightly prefer that approach. In that case I would call the function something like try_to_blockdiagonal_blockperm to make it clear it is outputting a block permutation.

Contributor Author


I played around with this, but within the current constraints I don't think I can make that work. I really need a vector of blocks in this case, in other words linear slicing into the block sparse matrix. A[vec_of_blockinds] just cannot work, for the reasons we discussed previously: the slicing wants to return a BlockArray, which would then be a vector of matrices that all have different sizes.

My way around this is to perform linear index slicing into the blocks, which I think is fair to assume to return a vector of blocks (not wrapped into a BlockArray). I agree that it would be nicer to not have to do this, but I would also argue that it is not outside the allowed operations of the interface. blocks should return an AbstractArray containing the different blocks, and I should be allowed to index into that to obtain different blocks.
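The workaround can be illustrated as follows (a minimal sketch assuming BlockArrays.jl, using a dense BlockArray as stand-in for the block sparse case):

```julia
using BlockArrays

# Index into `blocks(a)` directly: this returns a plain Vector of blocks,
# rather than attempting a structured slice of the block matrix itself.
a = mortar([rand(2, 2) for _ in 1:2, _ in 1:2])
b = blocks(a)                                          # 2×2 array of Matrix blocks
sel = b[[CartesianIndex(1, 1), CartesianIndex(2, 2)]]  # Vector of the selected blocks
```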

Member


Hmm I'm not quite following, let's discuss this.

@lkdvos
Contributor Author

lkdvos commented Nov 11, 2024

In this second iteration, I implemented most of the comments as you suggested.

There are two main problems that hold this back currently:

  • BlockDiagonal * Adjoint and vice-versa don't work
  • blocks(::AbstractBlockSparseArray)[vec_of_cartesianinds] does not work

I'll handle these first separately, and then continue working here. I'll also split off the tsvd stuff before this PR is ready to merge.

@emstoudenmire
Collaborator

Looks interesting and relatively simple (in a good way I mean).

My main question about the truncation system is this: is the design flexible enough to support a "relative" cutoff, meaning that one first computes the sum of squares of the singular values and then discards only those singular values whose summed squares, divided by the total sum of squares, is less than the cutoff? I would guess this is trivial to add in the dense case – is it also easy to add for the block sparse case?

@lkdvos
Contributor Author

lkdvos commented Nov 18, 2024

Let me link the original inspiration: https://github.com/Jutho/TensorKit.jl/blob/master/src/tensors/truncation.jl

I think the main strengths of the design are that it is incredibly modular, and most of the pieces are designed to make it easy to hook into. For example, for truncating a fraction of the total weight, something like this could work:

struct TruncFraction{T} <: TruncationScheme
    fraction::T
end
function _truncate(usv::SVD, trunc::TruncFraction)
  total_weight = norm(usv.S)^2
  discarded_weight = zero(total_weight)
  n = length(usv.S)
  # discard singular values from the smallest up, as long as their summed
  # squares stay within the allowed fraction of the total weight
  while n > 1 && discarded_weight + usv.S[n]^2 <= trunc.fraction * total_weight
    discarded_weight += usv.S[n]^2
    n -= 1
  end
  keep = 1:n
  return SVD(usv.U[:, keep], usv.S[keep], usv.Vt[keep, :])
end

Making use of Julia's multiple dispatch, it's relatively straightforward to specialize truncate(::SVD{<:BlockDiagonal,...}, ...) as well.

I think there is no real benefit in trying to come up with code that works for both the dense and the block-diagonal case at the same time: the implementations are fundamentally different, and abstracting this even further might hurt more than it helps, but that is of course my opinion. That is also more or less why the design is the way it is, to make it very easy to add custom implementations, rather than attempting to provide implementations that work for everything.

@emstoudenmire
Collaborator

Thanks for the explanation and detailed example. I definitely agree with you that it would be better to split up the dense and block-sparse truncation implementations.

@mtfishman
Member

Should be moved to BlockSparseArrays.jl once it is configured properly.
