
@unsafe_region applies to all TensorNetworks within the block, not just the one passed as an argument #230

Closed
jofrevalles opened this issue Nov 5, 2024 · 2 comments · Fixed by #233
Labels
bug (Something isn't working) · question (Further information is requested)

Comments

@jofrevalles
Member

jofrevalles commented Nov 5, 2024

Summary

I've noticed that the @unsafe_region macro, when applied to a specific TensorNetwork, actually affects all tensor networks within its block, even those not specified. This behavior is unexpected and can lead to silent errors.

Example

julia> using Tenet

julia> using Test

julia> tn = TensorNetwork([
           Tensor(ones(2, 2), [:a, :b]),
           Tensor(ones(2, 2), [:b, :c])
       ])
TensorNetwork (#tensors=2, #inds=3)

julia> tn2 = copy(tn)
TensorNetwork (#tensors=2, #inds=3)

julia> Tenet.@unsafe_region tn begin ### No errors thrown!
           tensor = Tensor(ones(3, 2), [:c, :d])  # :c has size 3 here but size 2 in tn
           push!(tn, tensor)
           push!(tn2, tensor)  # tn2 is not specified in @unsafe_region
           @test length(tensors(tn)) == 3
           pop!(tn, tensor)
       end
3×2 Tensor{Float64, 2, Matrix{Float64}}:
 1.0  1.0
 1.0  1.0
 1.0  1.0

In this code, the @unsafe_region macro is applied only to tn. Inside the block we also manipulate tn2, which is not specified in the macro. We would expect push!(tn2, tensor) to throw a DimensionMismatch error due to the inconsistent :c index. However, no error is thrown, indicating that @unsafe_region affects tn2 as well.
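For intuition, here is a minimal self-contained sketch of how a single global flag produces this leak. This is an assumed illustration, not Tenet's actual source (the MiniTN struct, safe_push!, and @unsafe_sketch names are invented): the macro receives a network argument but never consults it, so the checks are disabled for every network in the block.

# Hypothetical global flag standing in for Tenet's internal one
const UNSAFE = Ref(false)

struct MiniTN
    sizes::Dict{Symbol,Int}  # index label => dimension
end

function safe_push!(tn::MiniTN, inds, dims)
    if !UNSAFE[]  # while the flag is set, *every* network skips its checks
        for (i, d) in zip(inds, dims)
            haskey(tn.sizes, i) && tn.sizes[i] != d &&
                throw(DimensionMismatch("index $i: $(tn.sizes[i]) vs $d"))
        end
    end
    for (i, d) in zip(inds, dims)
        tn.sizes[i] = d
    end
    return tn
end

macro unsafe_sketch(tn, block)  # `tn` is accepted but ignored
    quote
        UNSAFE[] = true
        try
            $(esc(block))
        finally
            UNSAFE[] = false
        end
    end
end

With this sketch, @unsafe_sketch tn1 begin safe_push!(tn2, [:c], [3]) end silences the checks on tn2 exactly as in the example above.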

Possible solutions:

  1. Restrict the scope to the specified TensorNetworks:
  • Modify the @unsafe_region macro to ensure that it only affects the tensor networks explicitly listed.
  2. Change the macro to reflect global scope:
  • If the intended behavior is to apply to all tensor networks within the block, change the macro to not require specifying any tensor networks.

For example:

@unsafe_region begin # no tn passed as argument here
    # Code here
end

This would make it clear to users that all tensor networks are affected within the block.

jofrevalles added the bug and question labels on Nov 5, 2024
@mofeing
Member

mofeing commented Nov 6, 2024

Yeah, but let's talk today in the software meeting. Add it to the agenda, please.

@mofeing
Member

mofeing commented Nov 7, 2024

Summary of the meeting between @jofrevalles and me:

  • In the particular example above, a second @unsafe_region should be used for tn2.
  • There are cases where a new TN is created deep inside the call stack and cannot be marked as "unsafe" (e.g. calling adjoint(tn) where tn is unsafe). It is this kind of case that this issue considers a bug.

The solution we agreed on is to add a new unsafe::Ref{Bool} field to TensorNetwork, which controls the consistency checks in push!, and to remove the global flag. This way, when copying a TensorNetwork, the field is propagated and the copy knows that it is unsafe too.
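As a hedged sketch of that direction (the unsafe field name comes from the discussion above; the struct body and the check_consistency placeholder are illustrative, not the final Tenet API):

struct TensorNetworkSketch
    tensors::Vector{Any}
    unsafe::Ref{Bool}  # per-network flag replacing the global one
end

# Placeholder for the real index-consistency checks
check_consistency(tn::TensorNetworkSketch, tensor) = nothing

function Base.push!(tn::TensorNetworkSketch, tensor)
    tn.unsafe[] || check_consistency(tn, tensor)  # skip checks only when marked unsafe
    push!(tn.tensors, tensor)
    return tn
end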

This approach has a potential issue either way: if we copy the Ref as-is, both TNs share its value. If instead we create a new Ref and copy the value into it, the second TN won't know when to resume the consistency checks, because it won't be notified that the @unsafe_region has ended.

I'd bet on the second option, copying the Bool into a new Ref, and we can figure out how to solve that notification issue in @unsafe_region once we have the PR.
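For concreteness, the two copy semantics under the same sketch (function names are illustrative):

# Option 1: share the Ref; flipping the flag on either network affects both.
copy_sharing_ref(tn::TensorNetworkSketch) =
    TensorNetworkSketch(copy(tn.tensors), tn.unsafe)

# Option 2 (preferred above): seed a fresh Ref with the current value; the copy
# inherits "unsafe" but is not notified when the @unsafe_region ends.
copy_fresh_ref(tn::TensorNetworkSketch) =
    TensorNetworkSketch(copy(tn.tensors), Ref(tn.unsafe[]))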
