Update document #144
Conversation
I may work on the tutorial module while you finish the time evolution part. @jjren
Codecov Report

@@            Coverage Diff             @@
##           master     #144      +/-   ##
==========================================
- Coverage   85.27%   84.74%    -0.53%
==========================================
  Files         105      105
  Lines        9995    10140      +145
==========================================
+ Hits         8523     8593       +70
- Misses       1472     1547       +75
Another question: what is the difference between the three CI tests?
Oh, finally, I found it in
renormalizer/__init__.py (Outdated)

@@ -6,7 +6,7 @@
 import warnings

-reno_num_threads = os.environ.get("RENO_NUM_THREADS")
+reno_num_threads = os.environ.get("RENO_NUM_THREADS", 1)
The default behavior was changed on purpose in #132. I strongly oppose setting a default number of threads when importing renormalizer, because it is a self-centered feature and hurts the applicability of renormalizer.
When renormalizer is imported as a regular package by another project (such as TenCirChem), setting the number of threads by default can cause unexpected behavior. For example, TenCirChem itself may have set "MKL_NUM_THREADS" to a certain value for its own purposes, and an experienced user can also set "MKL_NUM_THREADS" directly without referring to the documentation or source code of renormalizer. In both cases the value will be silently overwritten by renormalizer when the package is imported, and there is no good way to prevent this behavior.
Also, the best practice for production-level calculations is to set the number of threads to 1, 2, or 4 depending on the details of the calculation. Setting the default number of threads to 1 is more of a development shortcut. In fact, in my development environment I hardcode the number of threads to 1 in __init__.py, but I never commit this change.
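For illustration, a minimal sketch of the non-intrusive behavior I have in mind (hypothetical code, not the current __init__.py; the exact list of environment variables is an assumption):

import os
import warnings

# Hypothetical sketch: touch the thread-related environment variables only when
# the user sets RENO_NUM_THREADS explicitly, so a host package such as
# TenCirChem (or the user) keeps control of MKL_NUM_THREADS etc.
reno_num_threads = os.environ.get("RENO_NUM_THREADS")
if reno_num_threads is not None:
    for var in ["MKL_NUM_THREADS", "NUMEXPR_NUM_THREADS", "OMP_NUM_THREADS"]:
        if var in os.environ and os.environ[var] != reno_num_threads:
            warnings.warn(f"{var}={os.environ[var]} is overridden by RENO_NUM_THREADS={reno_num_threads}")
        os.environ[var] = reno_num_threads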
This is also for consistency with popular computational software such as pyscf (pyscf/pyscf#540)
Got it. I will revert it.
solved.
renormalizer/model/basis.py (Outdated)

elif set(op_symbol.split(" ")) == set("x"):
    moment = len(op_symbol.split(" "))
    self._recursion_flag = 1
Is it better to use self._recursion_flag += 1, just as in BasisSHO, for deep recursion?
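For illustration, a toy sketch of the increment/decrement pattern (not the actual basis.py code; the class and matrix contents are made up):

import numpy as np

class ToyBasis:
    def __init__(self, nbas=4):
        self.nbas = nbas
        self._recursion_flag = 0

    def op_mat(self, op_symbol: str):
        symbols = op_symbol.split(" ")
        if len(symbols) > 1:
            # decompose "x x ... x" recursively into a product of single-"x" matrices
            self._recursion_flag += 1      # enter one more level of recursion
            try:
                mat = self.op_mat(symbols[0]) @ self.op_mat(" ".join(symbols[1:]))
            finally:
                self._recursion_flag -= 1  # restore the previous depth on the way out
            return mat
        # base case: a toy single-"x" matrix
        return np.diag(np.arange(self.nbas, dtype=float))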
ok. I will do it.
solved
logger.debug(f"mmax, percent: {mmax}, {percent}") | ||
|
||
if isinstance(compress_config, CompressConfig): | ||
mps.compress_config = compress_config |
What about the original mps.compress_config? Should we record it and then set it back after the optimization?
We control the bond dimension in gs.py with mmax, and the original mps.compress_config is not used. I guess it is the default one. So is it necessary to set it back?
The user may have set this value for future time evolution. pseudo-code:
mps.compress_config = ...
mps.optimize_config = ...
mps = optimize_mps(mps, mpo, procedure)
mps = excitation_mpo @ mps
mps = mps.evolve_with_rk(tau)
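For illustration, a hedged sketch of the record-and-restore pattern (a hypothetical wrapper, not the actual gs.py code; optimize_mps is used as in the pseudo-code above):

def optimize_with_temporary_config(mps, mpo, procedure, compress_config=None):
    original_config = mps.compress_config       # record the caller's setting
    if compress_config is not None:
        mps.compress_config = compress_config   # use the optimization-specific config
    try:
        mps = optimize_mps(mps, mpo, procedure) # run the ground-state sweeps
    finally:
        mps.compress_config = original_config   # hand the original back for later time evolution
    return mps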
I got it.
solved.
@@ -1190,9 +1199,21 @@ def func1(y):
        coef, ovlp_inv1=S_L_inv_list[imps+1],
        ovlp_inv0=S_L_inv_list[imps], ovlp0=S_L_list[imps])
    return func(0, y)
Is it better to wrap this routine as a function?
Did you mean that func1 is not necessary?
I mean L1203 to L1215, which has a lot of duplication in this commit
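For illustration, a generic sketch of what I mean (hypothetical code; the real signatures in mps.py differ): the site-dependent overlap arguments could be bound once instead of being repeated at every call site.

from functools import partial

def make_func1(func, coef, S_L_inv_list, S_L_list, imps):
    # bind the repeated arguments once; calling the result with y is then
    # equivalent to the one-line func1 above
    return partial(
        func, 0,
        coef=coef,
        ovlp_inv1=S_L_inv_list[imps + 1],
        ovlp_inv0=S_L_inv_list[imps],
        ovlp0=S_L_list[imps],
    )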
It is already a one-line function, although a bit long. I couldn't think of a better way to make it more elegant.
In general it looks good! We only need to resolve the problem of the default thread number.
Finally. 🐢
Great stuff, thanks! Just a reminder: the document still needs a lot of work.
Update @ 2023.4.8 @jjren

- Remove the m_max argument of _update_mps in mp.py. First, it duplicates mps.compress_config. More importantly, if we want to adaptively control the bond dimension by a truncation threshold in the two-site TDVP-PS or ground-state algorithm, the former implementation cannot achieve that. (The one thing I have not modified is the OFS algorithm, because one of its loss functions depends on m_max, so I added an assertion to make sure the bond dimension is fixed.)
- Add the scipy IVP solver as a companion to the Krylov solver in the TDVP-related evolution algorithms. This makes the evolution algorithms suitable for non-Hermitian operators (see the first sketch below).
- Add quadrature in the SineDVR basis using sympy expressions to support more operators (see the second sketch below). This feature is experimental and not fully tested.
- Bump the dependencies (numpy, scipy) in requirement.txt.
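For the non-Hermitian point, a minimal standalone sketch of what a general-purpose IVP solver buys (the toy matrix and tolerances are made up; this is not the renormalizer implementation):

import numpy as np
from scipy.integrate import solve_ivp

# Toy non-Hermitian effective Hamiltonian, e.g. H_eff = H - i * Gamma / 2.
rng = np.random.default_rng(0)
h = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))

def rhs(t, y):
    # Schroedinger-like equation dy/dt = -i H y; a Krylov (Lanczos) propagator
    # assumes Hermitian H, while a general-purpose IVP solver does not.
    return -1j * (h @ y)

y0 = np.zeros(8, dtype=complex)
y0[0] = 1.0
sol = solve_ivp(rhs, (0.0, 1.0), y0, rtol=1e-8, atol=1e-10)
psi_t = sol.y[:, -1]  # propagated state at t = 1.0

For the SineDVR quadrature, a generic sketch of the idea (not the actual BasisSineDVR code; the grid convention and function names are assumptions): an operator given as a sympy expression of the coordinate is approximated by a diagonal matrix evaluated on the DVR grid points.

import numpy as np
import sympy as sp

def dvr_operator_matrix(expr, x_sym, x0, length, n_grid):
    # sine-DVR grid points exclude the two endpoints of the interval
    grid = x0 + length * np.arange(1, n_grid + 1) / (n_grid + 1)
    f = sp.lambdify(x_sym, expr, modules="numpy")
    # DVR quadrature: the coordinate operator is (approximately) diagonal on the grid
    return np.diag(f(grid))

x = sp.Symbol("x")
v_mat = dvr_operator_matrix(sp.exp(-x**2) * sp.cos(x), x, -5.0, 10.0, 64)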