Matrix-valued interpolations #958
Conversation
Codecov Report

Attention: patch coverage is incomplete; overall coverage drops by 0.54% (32 new misses).

```diff
@@            Coverage Diff             @@
##           master     #958      +/-   ##
==========================================
- Coverage   93.71%   93.17%   -0.54%
==========================================
  Files          36       36
  Lines        5313     5366      +53
==========================================
+ Hits         4979     5000      +21
- Misses        334      366      +32
```
Very cool to be able to work directly with tensorial fields!

```julia
abstract type TensorInterpolation{TB, refshape, iporder} <: Interpolation{refshape, iporder, Nothing} end

struct TensorizedInterpolation{TB, refshape, iporder, SI <: ScalarInterpolation{refshape, iporder}} <: TensorInterpolation{TB, refshape, iporder}
    ip::SI
    function TensorizedInterpolation{TB}(ip::ScalarInterpolation{refshape, iporder}) where {TB <: AbstractTensor{<:Any, <:Any, <:Any}, refshape, iporder}
        return new{TB, refshape, iporder, typeof(ip)}(ip)
    end
end
```

where `TB` is the tensor type of the interpolated field. This would allow re-using e.g. the existing scalar interpolations directly, and it will also play nicely with mixed tensors.
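A hypothetical usage sketch of the suggested wrapper (the scalar interpolation and the tensor types shown here are assumptions for illustration, not part of this PR):

```julia
using Ferrite, Tensors

# One scalar interpolation backs both full and symmetric matrix-valued fields.
ip_scalar = Lagrange{RefTriangle, 1}()
ip_mat    = TensorizedInterpolation{Tensor{2, 2}}(ip_scalar)          # 2×2 matrix-valued field
ip_sym    = TensorizedInterpolation{SymmetricTensor{2, 2}}(ip_scalar) # symmetric 2×2 variant
```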
Indeed. This is one of the reasons why I would like to keep the API internal for now. I first need to wrap my head around a generic indexing logic, and I need to think of a way to properly generalize operations like curl and divergence so that they are useful by default and customizable if the user works with a slightly different definition.
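As an illustration of the kind of convention difference meant here: the divergence of a second-order tensor field $\boldsymbol{\sigma}$ can contract over either index,

$$(\nabla \cdot \boldsymbol{\sigma})_i = \frac{\partial \sigma_{ij}}{\partial x_j} \qquad \text{or} \qquad (\nabla \cdot \boldsymbol{\sigma})_i = \frac{\partial \sigma_{ji}}{\partial x_j},$$

and the two definitions coincide only for symmetric tensors.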
Hm, ok. I'm not sure if I fully understand the purpose? If this is only internal because we expect it to change, then it seems like it could be tested outside as well? I don't see any code that isn't just overloading with new types?
Or is the purpose to move functionality from FerriteViz to Ferrite?
```julia
# Helpers to convert between Tensors.jl and StaticArrays types so that
# ForwardDiff can differentiate through shape_value:
tosvec(v::Vec) = SVector((v...,))
tovec(sv::SVector) = Vec((sv...,))

val = shape_value(ipm, ξ, I)
grad = ForwardDiff.jacobian(sv -> tosmat(shape_value(ipm, tovec(sv), I)), tosvec(ξ))
```
I don't think this will do the right thing, unfortunately.
For these matrix interpolations it is probably better to calculate the gradients with respect to the scalar base interpolation and then just insert the values in the right places (with zeros elsewhere), as sketched below.
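A minimal sketch of that approach (assumptions on my part: the matrix base functions are numbered component-fastest within each scalar base function, `ip_scalar` is the scalar base interpolation, and the helper name is hypothetical):

```julia
using Ferrite, Tensors

# Gradient and value of the I-th matrix-valued base function, built from the
# scalar base interpolation: only the (i, j) component is nonzero.
function matrix_shape_gradient_and_value(ip_scalar, ξ::Vec{dim}, I::Int, vdim1::Int, vdim2::Int) where {dim}
    ncomp = vdim1 * vdim2
    Is   = (I - 1) ÷ ncomp + 1      # index of the underlying scalar base function
    comp = (I - 1) % ncomp + 1      # linear index of the nonzero matrix component
    i = (comp - 1) ÷ vdim2 + 1
    j = (comp - 1) % vdim2 + 1
    # Scalar gradient and value
    g = Tensors.gradient(x -> Ferrite.shape_value(ip_scalar, x, Is), ξ)
    v = Ferrite.shape_value(ip_scalar, ξ, Is)
    # Scatter: zeros everywhere except the (i, j) entry and its gradient slice
    V = zeros(typeof(v), vdim1, vdim2)
    G = zeros(typeof(v), vdim1, vdim2, dim)
    V[i, j] = v
    G[i, j, :] .= g
    return G, V
end
```

Compared to the `ForwardDiff.jacobian` route above, this only differentiates the scalar base function and fills in the structural zeros explicitly.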
Do you have any suggestion for how we can cover this case in testing (in addition to the vectorial case)?
I think @lijas' solution here was quite nice: just having the option of calling that function explicitly, as set up here.
Maybe combined with calling something like

```julia
for Is in 1:getnbasefunctions(ipm.ip)
    g, v = shape_gradient_and_value(ipm.ip, ξ, Is)   # scalar base function
    Im = (Is - 1) * vdim1 * vdim2                    # offset into the matrix-valued base functions
    for i in 1:vdim1, j in 1:vdim2
        Im += 1
        G, V = shape_gradient_and_value(ipm, ξ, Im)  # corresponding matrix-valued base function
        @test G[i, j, :] ≈ g
        @test V[i, j] ≈ v
    end
end
```
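(The index arithmetic above assumes that the matrix components are numbered fastest within each scalar base function, analogous to how the vectorized interpolations order their components; if the PR ends up with a different ordering, only the `Im` bookkeeping needs adjusting.)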
What do you mean by outside? If you propose to solve a matrix-valued problem, I currently do not have the time to implement it.
We use something very similar to this PR in FerriteViz, and it is beneficial for simplifying ZZ error estimators. In the future I also want to highlight how we can solve matrix-valued problems (e.g. DG for elasticity) with the API. However, Fredrik has repeatedly told me to make smaller PRs so they are more reviewable, which I can fully understand. So here it is. Does this explain it?
I meant that one can use this even without merging it into Ferrite (as opposed to, for example, changes to the fields of some structs). That means that while trying this out (since, as you say, the way to implement this is not yet settled), we don't need to merge it in as an experimental feature. But having the code in this PR as a basis for figuring out what the best way is, is great!
Foundation for #398. We also basically emulate this in FerriteViz.jl when visualizing stresses, and in the AMR error estimation when computing the stress error. However, I would like to keep this API internal for now.
TODO