Add global dofs #1012

Draft · wants to merge 2 commits into master

38 changes: 36 additions & 2 deletions src/Dofs/DofHandler.jl
Member

Shouldn't celldofs[!] return these additional dofs, too?

Member Author

No, the global dofs in this PR have no coupling with the cells; such coupling can be added, for example, via AffineConstraints.

For "the other type of global dofs", I think we should go with something like master...kam/GlobalDofs, which allows you to do

dh = DofHandler(grid)
sdh1 = SubDofHandler(dh, domain1)
add!(sdh1, :u, Lagrange{RefTriangle,1}())
add!(sdh1, :g, GlobalInterpolation(n)) # Add n extra dofs that are connected to all cells in sdh1
sdh2 = SubDofHandler(dh, domain2)
add!(sdh2, :u, Lagrange{RefTriangle,1}())
close!(dh)

This covers the case where you only want the global dofs on some of the subdofhandlers.

Member

[...] the global dofs in this PR have no coupling with the cells [...]

Okay, then I do not understand the purpose of this feature. If you have an independent dof, then you effectively have two independent systems. Hence we could just have a separate dof handler with the additional dofs and assemble two separate systems. Can you point me to the detail I am missing?

[...] such coupling can be added, for example, via AffineConstraints.

I am not sure how this fits with the statement above. What I originally understood from the related discussion in the linked issue is that you just want some mechanism to add "simple constraints". Why should the dof handler manage this, and why can't we instead add some LagrangeMultiplierConstraint which is managed by the constraint handler? Or what exactly is the advantage of this design, where the dof handler (partially) manages these constraints?

Member Author (@KnutAM, Jul 3, 2024)

Okay, then I do not understand the purpose of this feature. If you have an independent dof, then you effectively have two independent systems.

By making the DofHandler aware of the total number of dofs, the constraint handler can rely on ndofs(dh) to return the number of rows and columns in the system matrix to which it should add constraints. In #1009 I first proposed passing this info directly to the ConstraintHandler, but this PR introduces an alternative solution that passes it via the DofHandler. With this design, one can do

dh = DofHandler(grid)
add!(dh, ...)
add_global_dofs!(dh, ...)
close!(dh)
ch = ConstraintHandler(dh)
add!(ch, ...)
close!(ch)
K = allocate_matrix(dh, ch)

whereas with my original suggestion you need

dh = DofHandler(grid)
add!(dh, ...)
close!(dh)
n, global_dof_info = setup_global_dofs(...) # User defined data structures
ndofs_total = ndofs(dh) + n
ch = ConstraintHandler(dh; ndofs_total) # New feature that we would need to add. 
add!(ch, ...)
close!(ch)
sp = SparsityPattern(ndofs_total, ndofs_total)
add_sparsity_entries!(sp, dh, ch)
K = allocate_matrix(sp)

Or what exactly is the advantage of this design, where the dof handler (partially) manages these constraints?

The DofHandler doesn't really manage these constraints; it is just aware of the extra dofs. Currently, my workaround is doing dh.ndofs += n after close!(dh), which tricks the setup into working but provides no info about the additional dofs, so I have to manage that myself. The main advantage of adding fields to the dofhandler is IMO that you can request these directly from the DofHandler, and that ndofs(dh) gives the correct number of degrees of freedom in the system.
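
For reference, a minimal sketch of that workaround (the field mutation pokes at DofHandler internals and is not a public API; the names n and λ_dof are illustrative):

dh = DofHandler(grid)
add!(dh, :u, Lagrange{RefTriangle, 1}())
close!(dh)

n = 1                  # number of extra "global" dofs
λ_dof = ndofs(dh) + 1  # bookkeeping kept entirely on the user side
dh.ndofs += n          # hack: ndofs(dh) now includes the extra dof,
                       # but the DofHandler records nothing about it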

Member Author (@KnutAM, Jul 3, 2024)

Why should the dof handler manage this, and why can't we instead add some LagrangeMultiplierConstraint which is managed by the constraint handler?

It might also be possible to do

dh = DofHandler(grid)
add!(dh, ...)
add_global_dofs!(dh, ...)
close!(dh)
ch = ConstraintHandler(dh)
add!(ch, ...)
add!(ch, LagrangeMultiplierConstraint(:lambda, n, AffineConstraintInfos)) # Extends with n number of dofs
close!(ch)
K = allocate_matrix(dh, ch)

One advantage I see with this setup is that it could potentially make it easier to "deactivate" some multipliers while keeping the sparsity pattern the same (nice for e.g. contact simulations).

A disadvantage I see is that it is less flexible than the suggestions above.

Member

So we are really talking about just adding some constraint $\sum_i u_i = 0$? Or why can you skip setup_global_dofs to build the connectivity in examples 1 and 3?

A disadvantage I see is that it is less flexible than the suggestions above.

I see this as an advantage, because it is clear what to do in all scenarios. The proposed design, in contrast, seems heavily underdesigned: it is absolutely unclear to me how the interaction between sparsity pattern, constraints, assembly, and distributed parallelization should work (and it is also not clear from going over the code). So maybe it is easier to have a special dof handler for such specific corner cases?

Member Author

So we are really talking about just adding some constraint $\sum_i u_i = 0$?

Yes, the case I hit was adding an extra dof. Then I add many AffineConstraints which constrain other dofs to this extra dof.
(The reason for having it in the system, as opposed to hard-coding it in the affine constraints, is that I need it for computing correct residuals and sensitivities later, and there are cases where I don't constrain it but still want the AffineConstraints to the extra dof.)
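
As a sketch of that use case (assuming this PR's add_global_dofs!/get_global_dofs together with Ferrite's existing AffineConstraint; the constrained dof indices below are purely illustrative):

dh = DofHandler(grid)
add!(dh, :u, Lagrange{RefTriangle, 1}())
Ferrite.add_global_dofs!(dh, :λ, 1)
close!(dh)

λ = only(Ferrite.get_global_dofs(dh, :λ))  # the single extra dof
ch = ConstraintHandler(dh)
# Constrain each dof in an (illustrative) set to the extra dof: u_i = 1.0*u_λ + 0.0
for i in (1, 5, 17)
    add!(ch, AffineConstraint(i, [λ => 1.0], 0.0))
end
close!(ch)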

The proposed design, in contrast, seems heavily underdesigned: it is absolutely unclear to me how the interaction between sparsity pattern, constraints, assembly, and distributed parallelization should work (and it is also not clear from going over the code).

The only difference, apart from the possibility to get information from the DofHandler, is that it increases the value of ndofs(dh), implying that the total equation system is larger. This allows you to add constraints to these extra dofs, which then affect e.g. the sparsity pattern as always. It does not change anything for the assembly. For distributed parallelization it should have no effect either; the effect comes if you tie two dofs together via AffineConstraints, which is no different with this PR than before.

So to summarize, an absolutely minimal change with this PR would be to add a field n_global_dofs::Int to the dofhandler (allowing you to add a given number of extra dofs, but without any naming etc. of these), and redefine ndofs(dh) = dh.ndofs + dh.n_global_dofs (or update dh.ndofs to this value during close!).
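
A toy illustration of that minimal variant (hypothetical type and field names, just to show the accounting):

# Hypothetical stripped-down handler: only a count of extra dofs, no names.
mutable struct MinimalDofHandler
    ndofs::Int          # dofs distributed over the grid during close!
    n_global_dofs::Int  # extra dofs appended after the grid dofs
end

ndofs(dh::MinimalDofHandler) = dh.ndofs + dh.n_global_dofs

dh = MinimalDofHandler(100, 2)
ndofs(dh)  # 102; constraints may now reference dofs 101 and 102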

@@ -96,6 +96,7 @@ end
mutable struct DofHandler{dim,G<:AbstractGrid{dim}} <: AbstractDofHandler
const subdofhandlers::Vector{SubDofHandler{DofHandler{dim, G}}}
const field_names::Vector{Symbol}
const global_dofs::OrderedDict{Symbol, UnitRange{Int}}
# Dofs for cell i are stored in cell_dofs at the range:
# cell_dofs_offset[i]:(cell_dofs_offset[i]+ndofs_per_cell(dh, i)-1)
const cell_dofs::Vector{Int}
@@ -131,7 +132,8 @@ close!(dh)
function DofHandler(grid::G) where {dim, G <: AbstractGrid{dim}}
ncells = getncells(grid)
sdhs = SubDofHandler{DofHandler{dim, G}}[]
- DofHandler{dim, G}(sdhs, Symbol[], Int[], zeros(Int, ncells), zeros(Int, ncells), false, grid, -1)
+ global_dofs = OrderedDict{Symbol, UnitRange{Int}}()
+ DofHandler{dim, G}(sdhs, Symbol[], global_dofs, Int[], zeros(Int, ncells), zeros(Int, ncells), false, grid, -1)
end

function Base.show(io::IO, mime::MIME"text/plain", dh::DofHandler)
@@ -151,6 +153,10 @@ function Base.show(io::IO, mime::MIME"text/plain", dh::DofHandler)
println(io, " ", repr(fieldname), ", ", field_type)
end
end
if length(dh.global_dofs) > 0
gdof_str = join([string(key, " (", length(r), ")") for (key, r) in dh.global_dofs], ", ")
println(io, " Global dofs: ", gdof_str)
end
if !isclosed(dh)
print(io, " Not closed!")
else
@@ -322,6 +328,29 @@ function add!(dh::DofHandler, name::Symbol, ip::Interpolation)
return dh
end

"""
add_global_dofs!(dh::DofHandler, name::Symbol, n::Int)

Add `n` global degrees of freedom to `dh`. The degree of freedom numbers can be obtained
after closing `dh` with [`get_global_dofs`](@ref).
"""
function add_global_dofs!(dh::DofHandler, name::Symbol, n::Int)
@assert !isclosed(dh)
haskey(dh.global_dofs, name) && throw(ArgumentError(string(":", name, " is already a global dof")))
dh.global_dofs[name] = (-n):(-1) # Range shifted in close
end

"""
get_global_dofs(dh::DofHandler, name::Symbol)

Get a `UnitRange{Int}` describing the global dofs associated with `name`.
Such dofs can be added with [`add_global_dofs!`](@ref).
"""
function get_global_dofs(dh::DofHandler, name::Symbol)
@assert isclosed(dh)
return dh.global_dofs[name]
end

"""
close!(dh::AbstractDofHandler)

@@ -388,7 +417,12 @@ function __close!(dh::DofHandler{dim}) where {dim}
facedicts,
)
end
- dh.ndofs = maximum(dh.cell_dofs; init=0)
+ n_dofs = maximum(dh.cell_dofs; init=0)
+ for (key, dofs) in dh.global_dofs
+     dh.global_dofs[key] = n_dofs .+ (1:length(dofs))
+     n_dofs += length(dofs)
+ end
+ dh.ndofs = n_dofs
dh.closed = true

return dh, vertexdicts, edgedicts, facedicts
2 changes: 1 addition & 1 deletion src/Ferrite.jl
@@ -12,7 +12,7 @@ using LinearAlgebra:
using NearestNeighbors:
NearestNeighbors, KDTree, knn
using OrderedCollections:
- OrderedSet
+ OrderedSet, OrderedDict
using SparseArrays:
SparseArrays, SparseMatrixCSC, nonzeros, nzrange, rowvals, sparse
using StaticArrays:
57 changes: 57 additions & 0 deletions test/test_grid_dofhandler_vtk.jl
@@ -102,6 +102,63 @@ end # of testset

close(csio)

@testset "DofHandler: global dofs" begin
grid = generate_grid(Quadrilateral, (5, 5))
ip = Lagrange{RefQuadrilateral, 1}()

# dh1: One standard field
dh1 = close!(add!(DofHandler(grid), :u, ip))

# dh1a: dh1 + one global dofs entry
dh1a = DofHandler(grid)
add!(dh1a, :u, ip)
Ferrite.add_global_dofs!(dh1a, :λ, 2)
@test_throws ArgumentError Ferrite.add_global_dofs!(dh1a, :λ, 3) # Field already existing
@test_throws AssertionError Ferrite.get_global_dofs(dh1a, :λ) # Not closed
close!(dh1a)
@test_throws AssertionError Ferrite.add_global_dofs!(dh1a, :x, 2) # Closed
@test ndofs(dh1a) == ndofs(dh1) + 2
@test Ferrite.get_global_dofs(dh1a, :λ) == (ndofs(dh1)+1):(ndofs(dh1)+2) # global dofs added at the end

showstring = sprint(show, MIME"text/plain"(), dh1a)
@test contains(showstring, "Global dofs: λ (2)")

# dh1b: dh1 + two global dofs entries, added before and after standard field.
dh1b = DofHandler(grid)
Ferrite.add_global_dofs!(dh1b, :λ1, 2)
add!(dh1b, :u, ip)
Ferrite.add_global_dofs!(dh1b, :λ2, 1)
close!(dh1b)
@test ndofs(dh1b) == ndofs(dh1) + 3
@test Ferrite.get_global_dofs(dh1b, :λ1) == (ndofs(dh1)+1):(ndofs(dh1)+2)
@test Ferrite.get_global_dofs(dh1b, :λ2) == (ndofs(dh1)+3):(ndofs(dh1)+3)

showstring = sprint(show, MIME"text/plain"(), dh1b)
@test contains(showstring, "Global dofs: λ1 (2), λ2 (1)")

# dh2: Use subdofhandlers, dh2a adds global fields in addition to the standard fields in dh2
domain1 = Ferrite.create_cellset(grid, x -> norm(x) < 0.75)
domain2 = setdiff!(Set(1:getncells(grid)), domain1)

(dh2, dh2a) = map((false, true)) do add_globals
dh = DofHandler(grid)
add_globals && Ferrite.add_global_dofs!(dh, :λ1, 1)
sdh1 = SubDofHandler(dh, domain1)
add!(sdh1, :s, ip)
add!(sdh1, :v, ip^2)
add_globals && Ferrite.add_global_dofs!(dh, :λ2, 2)
sdh2 = SubDofHandler(dh, domain2)
add!(sdh2, :v, ip^2)
add_globals && Ferrite.add_global_dofs!(dh, :λ3, 3)
close!(dh)
dh
end
@test ndofs(dh2a) == ndofs(dh2) + 6
@test Ferrite.get_global_dofs(dh2a, :λ1) == ndofs(dh2) .+ (1:1)
@test Ferrite.get_global_dofs(dh2a, :λ2) == ndofs(dh2) .+ (2:3)
@test Ferrite.get_global_dofs(dh2a, :λ3) == ndofs(dh2) .+ (4:6)
end

@testset "vtk tensor export" begin
# open files
checksums_file_tensors = joinpath(dirname(@__FILE__), "checksums2.sha1")