
Update document #144

Merged: 5 commits into master from doc, Jul 4, 2023

Conversation

@liwt31 (Collaborator) commented Oct 26, 2022

update @ 2023.4.8 @jjren

  • Remove the m_max argument of _update_mps in mp.py. First, it duplicates mps.compress_config. More importantly, if we want to adaptively control the bond dimension by a truncation threshold in the two-site TDVP-PS or ground state algorithm, the former implementation cannot achieve that. (The OFS algorithm is left unmodified because one of its loss functions depends on m_max, so I added an assertion to make sure the bond dimension is fixed.)

  • Add a SciPy IVP solver as a companion to the Krylov solver in the TDVP-related evolution algorithms. This makes the evolution algorithms suitable for non-Hermitian operators.

  • Add quadrature in the SineDVR basis using sympy expressions to support more operators. This feature is experimental and not fully tested.

  • Bump the dependencies (numpy, scipy) in requirement.txt.
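The threshold-based truncation mentioned in the first bullet can be illustrated with a plain SVD. This is a generic sketch under my own naming and threshold semantics, not renormalizer's actual CompressConfig API:

```python
import numpy as np

def truncate_by_threshold(mat, threshold=1e-6):
    """Keep only singular values whose normalized squared weight exceeds
    `threshold`; the retained bond dimension thus adapts to the state
    instead of being capped by a fixed m_max."""
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    weights = s**2 / np.sum(s**2)  # Schmidt weights, sorted descending
    m = int(np.count_nonzero(weights > threshold))
    return u[:, :m], s[:m], vt[:m, :], m

rng = np.random.default_rng(0)
# an exactly rank-3 matrix: only 3 significant singular values
mat = rng.normal(size=(8, 3)) @ rng.normal(size=(3, 8))
u, s, vt, m = truncate_by_threshold(mat, threshold=1e-10)
```

For this rank-3 input the retained dimension m adapts to 3 rather than being a preset cap, which is the behavior the adaptive two-site algorithms need.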

@liwt31 (Collaborator, Author) commented Oct 26, 2022

I may work on the tutorial module while you finish the time evolution part. @jjren

@codecov codecov bot commented Oct 28, 2022

Codecov Report

Patch coverage: 79.38% and project coverage change: -0.53 ⚠️

Comparison is base (87dc576) 85.27% compared to head (4a03f1d) 84.74%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #144      +/-   ##
==========================================
- Coverage   85.27%   84.74%   -0.53%     
==========================================
  Files         105      105              
  Lines        9995    10140     +145     
==========================================
+ Hits         8523     8593      +70     
- Misses       1472     1547      +75     
Impacted Files Coverage Δ
renormalizer/mps/mp.py 84.75% <42.85%> (-3.81%) ⬇️
renormalizer/spectra/base.py 95.65% <75.00%> (-4.35%) ⬇️
renormalizer/model/basis.py 88.77% <82.97%> (-0.55%) ⬇️
renormalizer/mps/gs.py 95.81% <86.66%> (-1.28%) ⬇️
renormalizer/mps/mps.py 90.52% <92.85%> (-0.03%) ⬇️
renormalizer/cv/spectra_cv.py 88.88% <100.00%> (+0.09%) ⬆️
renormalizer/cv/tests/test_H_chain.py 100.00% <100.00%> (ø)
renormalizer/cv/zerot.py 98.14% <100.00%> (ø)
renormalizer/lib/tests/test_krylov.py 100.00% <100.00%> (ø)
renormalizer/model/tests/test_basis.py 100.00% <100.00%> (ø)
... and 7 more

... and 2 files with indirect coverage changes

☔ View full report in Codecov by Sentry.

@jjren (Collaborator) commented Apr 8, 2023

Another question: what is the difference between the three CI tests? From my understanding, ci / test (pull request) is the PR branch merged with master, and ci / test (push) is the PR branch only. Then what is ci/circleci?

@jjren (Collaborator) commented Apr 8, 2023

Hi Weitang @liwt31, at the beginning of this PR you mentioned the Heisenberg tutorial bug, but I didn't find any changes to the Heisenberg tutorial. What I found is that the Jupyter notebook of the Heisenberg tutorial is supposed to move into doc/source/tutorial, but it doesn't. Did I do something wrong during the force push, or have you not yet moved the Jupyter notebook of the Heisenberg tutorial into doc/source/tutorial?

In the successful ci/circleci run, the log shows:

Running Sphinx v5.3.0
copying ../../example/1D-Heisenberg.ipynb
making output directory... done

How did this happen? I can't find a rule anywhere in the package that does this.

Oh, finally, I found it in /home/jjren/Code/Renormalizer/doc/source/conf.py!
The remaining questions are:

  • the bug in the Heisenberg model tutorial
  • pandoc seems not to have been installed successfully in the two ci / test runs
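For reference, such a copy step is usually just a few lines of Python inside conf.py. A hypothetical sketch of a helper (not the actual code in doc/source/conf.py, which may differ):

```python
import os
import shutil

def copy_notebooks(src_paths, dest_dir):
    """Hypothetical helper: copy tutorial notebooks into the Sphinx
    source tree so the documentation builder can pick them up."""
    copied = []
    for src in src_paths:
        if os.path.exists(src):  # skip notebooks that are not present
            shutil.copy(src, dest_dir)
            copied.append(os.path.basename(src))
    return copied
```

Sphinx executes conf.py as ordinary Python during the build, which is why a copy like `copying ../../example/1D-Heisenberg.ipynb` can happen without any rule elsewhere in the package.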

@liwt31 (Collaborator, Author) commented Apr 10, 2023

  • CI/CircleCI is based on the PR branch only. We are in the middle of switching from CircleCI to GitHub Actions.
  • I can't really remember the bug exactly. It was probably related to documentation generation rather than the content of the notebook.
  • CircleCI passes because in this branch the installation of pandoc is added to .circleci/config.yaml. The same lines should be added to .github/workflow/ci.yaml.

@@ -6,7 +6,7 @@
 import warnings
 
 
-reno_num_threads = os.environ.get("RENO_NUM_THREADS")
+reno_num_threads = os.environ.get("RENO_NUM_THREADS", 1)
liwt31 (Collaborator, Author):

The default behavior was changed on purpose in #132. I strongly oppose setting a default number of threads when importing renormalizer. The reason is that this is a self-centered feature and is bad for the applicability of renormalizer.

When renormalizer is imported as a regular package by another project (such as TenCirChem), setting the number of threads by default can cause unexpected behavior. For example, TenCirChem itself may have set "MKL_NUM_THREADS" to a certain value for its own purposes, and an experienced user can also set "MKL_NUM_THREADS" directly without consulting the documentation or source code of renormalizer. But the value will be silently overwritten by renormalizer when the package is imported, and there is not even a good way to prevent this behavior.

Also, the best practice for production-level calculations is to set the number of threads to 1, 2, or 4 based on the details of the calculation. Setting the default number of threads to 1 is more of a development shortcut. In fact, in my development environment I hardcode the number of threads to 1 in __init__.py, but I never commit that change.
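The non-overriding pattern argued for above can be sketched as follows. The helper name and the exact set of variables are illustrative, not renormalizer's actual __init__.py:

```python
import os

def apply_thread_setting():
    """Set thread-count variables only when the user opts in explicitly.

    With no default for RENO_NUM_THREADS, an unset value leaves any
    MKL_NUM_THREADS chosen by the user, or by a host package such as
    TenCirChem, untouched."""
    reno_num_threads = os.environ.get("RENO_NUM_THREADS")  # no default on purpose
    if reno_num_threads is not None:
        # only now is it safe to override the low-level thread settings
        for var in ("MKL_NUM_THREADS", "OMP_NUM_THREADS"):
            os.environ[var] = reno_num_threads
    return reno_num_threads
```

With this shape, an importing project keeps full control unless the user deliberately sets RENO_NUM_THREADS.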

liwt31 (Collaborator, Author):

This is also for consistency with popular computational software such as pyscf (see pyscf/pyscf#540).

jjren (Collaborator):

Got it. I will revert it.

jjren (Collaborator) commented Jul 3, 2023:

solved.

elif set(op_symbol.split(" ")) == set("x"):
moment = len(op_symbol.split(" "))
self._recursion_flag = 1
liwt31 (Collaborator, Author):

Is it better to use self._recursion_flag += 1, as in BasisSHO, to support deep recursion?
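The counter-based alternative suggested here can be sketched with a toy class (hypothetical, mirroring the idea rather than the actual BasisSHO code):

```python
class RecursiveOp:
    """Toy illustration of counting recursion depth with += 1 / -= 1
    instead of setting a boolean-style flag to 1."""

    def __init__(self):
        self._recursion_flag = 0

    def evaluate(self, op_symbol):
        self._recursion_flag += 1  # a counter survives arbitrarily deep recursion
        try:
            if " " in op_symbol:
                # recurse on the remaining factors, e.g. "x x x" -> "x", "x x"
                head, rest = op_symbol.split(" ", 1)
                return 1 + self.evaluate(rest)
            return 1
        finally:
            self._recursion_flag -= 1  # restore on unwind, even on error

op = RecursiveOp()
depth = op.evaluate("x x x")  # depth 3, and the flag returns to 0 afterwards
```

Unlike assigning `self._recursion_flag = 1`, the increment/decrement pair keeps the flag meaningful at every recursion level.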

jjren (Collaborator):

OK, I will do it.

jjren (Collaborator):

solved

logger.debug(f"mmax, percent: {mmax}, {percent}")

if isinstance(compress_config, CompressConfig):
mps.compress_config = compress_config
liwt31 (Collaborator, Author) commented Apr 10, 2023:

What about the original mps.compress_config? Should we record it and then set it back after the optimization?

jjren (Collaborator):

We control the bond dimension in gs.py with mmax, and the original mps.compress_config is not used. I guess it is the default one, so is it necessary to set it back?

liwt31 (Collaborator, Author):

The user may have set this value for later time evolution. Pseudo-code:

mps.compress_config = ...
mps.optimize_config = ...
mps = optimize_mps(mps, mpo, procedure)
mps = excitation_mpo @ mps
mps = mps.evolve_with_rk(tau)
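Recording and restoring the attribute can be done with a small context manager. This is a generic sketch; the Mps stand-in class is hypothetical, not renormalizer's MPS class:

```python
from contextlib import contextmanager

@contextmanager
def preserve_attr(obj, name):
    """Restore the original value of an attribute on exit, even if an
    exception is raised in between."""
    original = getattr(obj, name)
    try:
        yield obj
    finally:
        setattr(obj, name, original)

class Mps:  # hypothetical stand-in for the real MPS class
    pass

mps = Mps()
mps.compress_config = "user_config"  # set by the user for later time evolution
with preserve_attr(mps, "compress_config"):
    mps.compress_config = "optimizer_config"  # used only during optimization
restored = mps.compress_config  # back to the user's setting
```

This way the optimizer can use its own compression settings internally while the user's configuration survives for the subsequent evolution.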

jjren (Collaborator):

I got it.

jjren (Collaborator):

solved.

@@ -1190,9 +1199,21 @@ def func1(y):
coef, ovlp_inv1=S_L_inv_list[imps+1],
ovlp_inv0=S_L_inv_list[imps], ovlp0=S_L_list[imps])
return func(0, y)

liwt31 (Collaborator, Author):

Is it better to wrap this routine as a function?

jjren (Collaborator):

Do you mean func1 is not necessary?

liwt31 (Collaborator, Author):

I mean L1203 to L1215, which have a lot of duplication in this commit.

jjren (Collaborator):

It is already a one-line function, though a bit long. I couldn't think of a better way to make it elegant.

@liwt31 (Collaborator, Author) commented Apr 10, 2023

In general this looks good! We only need to resolve the problem of the default thread number.

@jjren (Collaborator) commented Jul 3, 2023

Finally. 🐢

@liwt31 (Collaborator, Author) commented Jul 4, 2023

Great stuff, thanks! Just a reminder: the documentation still needs a lot of work.

@liwt31 liwt31 merged commit 977eb2a into master Jul 4, 2023
@liwt31 liwt31 deleted the doc branch July 4, 2023 02:02