Sorry, just noticed this issue. The basic solution is simple: in the canonicalization, normalize the R matrices after each QR decomposition and keep track of the original norms. The question is how to incorporate this...
The easiest approach would be to add a switch to MPArray.normalize (something like normalize_norm) which performs the R normalization mentioned above. This would require us to remove the "divide by norm" pattern and might be a little slower. The naming would of course be easier if #31 was fixed.
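A minimal numpy sketch of the idea (not mpnum's actual API; plain square matrices stand in for the local MPA tensors, and the Frobenius norm of their product stands in for the MPA 2-norm): normalize R after every QR step and accumulate the scale factors in log space, so no intermediate quantity can overflow.

```python
import numpy as np

def qr_sweep_lognorm(tensors):
    """Left-canonicalize a chain of matrices by successive QR steps.

    After each decomposition the R factor is rescaled to unit Frobenius
    norm and the scale is accumulated in log space, so the overall norm
    is recovered as exp(log_norm) without ever forming a huge float.
    """
    r = np.eye(tensors[0].shape[0])
    log_norm = 0.0
    qs = []
    for t in tensors:
        q, r = np.linalg.qr(r @ t)
        scale = np.linalg.norm(r)   # Frobenius norm of R
        log_norm += np.log(scale)
        r = r / scale               # keep R at unit norm
        qs.append(q)
    return qs, r, log_norm
```

Because the Q factors are orthogonal and the final R has unit norm, `log_norm` equals the log of the Frobenius norm of the full product, however long the chain is.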
That being said, I certainly don't have any use for more than say 30 sites anyway...
At the moment, the result of `a / mp.norm(a)` may unexpectedly be zero or `NaN` for large MPArrays (or for small MPArrays with large tensor entries). The reason is that floats cannot exceed a certain value, so `mp.norm(a)` itself can overflow to infinity and the division collapses to zero or `NaN`.
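The limit is easy to hit: double precision tops out near 1.8e308, and a norm sums many squared entries, so it can overflow long before the individual entries do. A small demonstration (the value 1e200 is just an illustrative entry magnitude):

```python
import numpy as np

big = np.float64(1e200)              # well within float64 range
with np.errstate(over="ignore"):
    squared = big * big              # 1e400 exceeds the float64 maximum
print(np.finfo(np.float64).max)      # about 1.8e308
print(squared)                       # inf
print(np.float64(5.0) / squared)     # dividing by the overflowed value gives 0.0
```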
It would be nice to add a method which computes `a / mp.norm(a)` without running into floating-point overflow. Underflow can also happen.
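One way such a method could work, sketched again with plain numpy matrices in place of the MPA tensors (`normalized` is a hypothetical name, not part of mpnum): compute the log of the norm by a rescaled QR sweep, then divide each of the n site tensors by exp(log_norm / n). Neither the norm itself nor any quotient ever leaves the representable range, which also avoids the underflow case.

```python
import numpy as np

def normalized(tensors):
    """Return the tensors rescaled so that their matrix product has unit
    Frobenius norm, computing the norm only in log space.

    Hypothetical sketch: real MPArrays carry higher-rank local tensors;
    square matrices are used here to keep the example self-contained.
    """
    n = len(tensors)
    r = np.eye(tensors[0].shape[0])
    log_norm = 0.0
    for t in tensors:
        _, r = np.linalg.qr(r @ t)
        scale = np.linalg.norm(r)
        log_norm += np.log(scale)
        r = r / scale
    # exp(log_norm / n) stays moderate even when exp(log_norm) would overflow
    site_scale = np.exp(log_norm / n)
    return [t / site_scale for t in tensors]
```

Distributing the factor over the n sites is what makes this safe: each local tensor is divided only by the n-th root of the full norm.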