
Hello, I encountered some problems while performing finite temperature calculations. Could you help me resolve them? #191

Open
willOcean1 opened this issue Dec 9, 2024 · 3 comments


@willOcean1

I want to calculate entropy and the Wilson loop. For entropy, I compared the results obtained using the exact diagonalization method with those from the cTPQ method. The results from these two methods are almost identical.

(Note: Initially, I tried calculating the entropy using the script provided in samples/tutorial_2.1/Finite.py. However, the results seemed incorrect: the entropy takes a large negative value as the temperature approaches zero. What could be causing this? The formula used is S = (ln Z + ⟨H⟩/T) / N_sites.)
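For reference, that formula can be checked directly against a full spectrum; below is a minimal sketch with a toy two-site Heisenberg spectrum (not the actual model in this issue). It also shows the ground-state-energy shift needed to keep the exponentials finite at low temperature; omitting such a shift is one common source of pathological low-temperature entropies.

```python
import numpy as np

def entropy_per_site(evals, T, n_sites):
    """S = (ln Z + <H>/T) / N_sites from the full eigenvalue spectrum.

    Shifting the spectrum by the ground-state energy keeps exp() from
    under/overflowing; the shift cancels between ln Z and <H>/T.
    """
    e0 = evals.min()
    w = np.exp(-(evals - e0) / T)        # shifted Boltzmann weights
    Z = w.sum()                          # shifted partition function
    e_mean = (evals * w).sum() / Z       # <H>, unshifted
    lnZ = np.log(Z) - e0 / T             # undo the shift in ln Z
    return (lnZ + e_mean / T) / n_sites

# Two-site spin-1/2 Heisenberg dimer: singlet at -3/4, triplet at +1/4.
evals = np.array([-0.75, 0.25, 0.25, 0.25])
S_high = entropy_per_site(evals, T=100.0, n_sites=2)  # ~ ln(2) per site
S_low = entropy_per_site(evals, T=0.01, n_sites=2)    # ~ 0, unique ground state
```

With this shift the entropy goes smoothly to zero (unique ground state) instead of diverging as T → 0.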

For the Wilson loop (W_p = Sx*Sy*Sz*Sx*Sy*Sz), I first calculated the result at zero temperature, and then used both the exact diagonalization method and the cTPQ method to calculate it at finite temperatures. At very low temperatures, the results do not match the zero-temperature result, and the results from exact diagonalization and cTPQ also do not completely overlap.
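For context, the exact-diagonalization value at temperature T is ⟨W_p⟩ = Tr(W_p e^{-H/T}) / Tr(e^{-H/T}). A minimal sketch, with a toy single-spin operator standing in for W_p (placeholder matrices, not the 12-site model used here):

```python
import numpy as np

def thermal_expectation(H, O, T):
    """<O>_T = Tr(O e^{-H/T}) / Tr(e^{-H/T}) via full diagonalization."""
    evals, evecs = np.linalg.eigh(H)
    w = np.exp(-(evals - evals.min()) / T)   # shifted Boltzmann weights
    # Diagonal matrix elements of O in the energy eigenbasis.
    O_diag = np.einsum('ki,kl,li->i', evecs.conj(), O, evecs).real
    return (O_diag * w).sum() / w.sum()

# Toy stand-in: H = Sz, O = Sz for a single spin-1/2.
Sz = np.diag([0.5, -0.5])
m_low = thermal_expectation(Sz, Sz, T=0.01)      # ~ -0.5 (ground state)
m_high = thermal_expectation(Sz, Sz, T=1000.0)   # ~ 0 (equal weights)
```

At T → 0 this reduces to the ground-state expectation value, provided the ground state is non-degenerate; a near-degeneracy can make the low-T thermal average deviate from the pure zero-temperature result.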

This is the Wp calculated at zero temperature.

This is the Wp calculated using cTPQ and exact diagonalization.
My final question is: when I obtain results with cTPQ and simply take the average over samples (divide by NumAve), is this reliable?
CG_12sites.zip

Full_12sites.zip

TPQ_12sites.zip

@tmisawa
Contributor

tmisawa commented Dec 11, 2024

For the cTPQ method, the bootstrap method is a standard way to evaluate the average value and error bars.
For details of the bootstrap sampling, please refer to section 3.5.1 in the following paper:

https://www.sciencedirect.com/science/article/pii/S001046552400016X?via%3Dihub

In tutorial 2.4.1, we provide a script for performing the bootstrap method (BS_TPQ.py).
Within error bars, we expect the results of the cTPQ method to be consistent with those of full diagonalization.
As shown in Fig. 8 of the above paper, we confirmed that both methods are consistent.
The numerical data for Fig. 8 are available in the following repository:

https://isspns-gitlab.issp.u-tokyo.ac.jp/hphi-dev/hphi-paper2023/-/tree/main/data/Fig8?ref_type=heads
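A generic bootstrap-resampling sketch in the spirit of BS_TPQ.py (an illustration with hypothetical array names, not the tutorial script itself):

```python
import numpy as np

def bootstrap(samples, n_boot=1000, rng=None):
    """Bootstrap mean and error bar over independent cTPQ runs.

    samples: (n_samples, n_steps) array of an observable, one row per
    random initial vector. Returns (mean, err), each of shape (n_steps,).
    """
    rng = np.random.default_rng(rng)
    n = samples.shape[0]
    idx = rng.integers(0, n, size=(n_boot, n))   # resample whole runs
    means = samples[idx].mean(axis=1)            # (n_boot, n_steps)
    return means.mean(axis=0), means.std(axis=0, ddof=1)

# Synthetic example: 20 runs of a 5-step observable fluctuating around 1.0.
data = 1.0 + 0.1 * np.random.default_rng(0).standard_normal((20, 5))
mean, err = bootstrap(data, rng=1)
```

Note that for ratio observables such as ⟨A⟩ = Σᵢ⟨A⟩ᵢZᵢ / ΣᵢZᵢ, the numerator and denominator should be resampled together before taking the ratio, rather than bootstrapping the per-sample ratios.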

@willOcean1
Author

Hello, I am confused about some parts of the script you provided, for example in the function CalcBasic. For the case that is not the 0th cTPQ sample and not the 0th iteration, why is tot_Z accumulated as tot_Z = tot_Z * IPL_Z / Norm[0][k], i.e. a cumulative product? Why isn't it tot_Z = IPL_Z / Norm[0][k], which looks closer to the formula in the image?

In the Norm_rand#.dat file of HPhi, the norms of various cTPQ states are stored, and their squared norms are equivalent to the partition function. Can I use these to calculate entropy (simply calculating the entropy of multiple cTPQs without performing Bootstrap)?

I first store the squared norms of all cTPQ states in a matrix Norm of size NumAve * Lanczos_Max. Then, within two nested loops over i and j, I calculate the entropy using the formula:
Ent[i][j] = ln(Norm[i][j]) + Ene[i][j] * InvTemp[i][j] / Nsites + ln(lambda)
(where lambda is the number of local spin degrees of freedom).

By this method, can I correctly calculate the entropy?

Thank you very much for your help!

@tmisawa
Contributor

tmisawa commented Jan 5, 2025

Sorry for the delayed response.
First of all, please note that what is output in Norm_rand#.dat is not the norm itself but the relative norm. Since the norm itself can become very large, we have chosen to output the relative norm instead.
For the relationship between the relative norm and the norm, please refer to the explanation around Eq. (49) in the paper.

For cases where it's not the 0th cTPQ and not the 0th iteration, why is tot_Z calculated as tot_Z = tot_Z * IPL_Z / Norm[0][k] with a cumulative product?

This is because IPL_Z = Norm[cnt_samp][k] is the relative norm.
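In other words, since each step stores only the relative norm, the full squared norm — and hence Z — is recovered by accumulating a running product. A minimal sketch with hypothetical names (BS_TPQ.py additionally divides by the 0th sample's norm to keep the numbers finite; working with ln Z is another way to avoid the overflow):

```python
import numpy as np

def log_partition(rel_norms):
    """ln Z_k from the relative norms in Norm_rand#.dat (one sample).

    Each cTPQ step renormalizes the state, so the full norm after step k
    is the *product* of the relative norms up to k, and Z_k is its
    square; hence the cumulative product (here, a cumulative sum of logs).
    """
    return 2.0 * np.cumsum(np.log(rel_norms))

lnZ = log_partition(np.array([2.0, 1.5, 1.2]))
# exp(lnZ) gives [4.0, 9.0, 12.96]: running products 2, 3, 3.6, squared.
```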

In the Norm_rand#.dat file of HPhi, the norms of various cTPQ states are stored, and their squared norms are equivalent to the partition function. Can I use these to calculate entropy (simply calculating the entropy of multiple cTPQs without performing Bootstrap)?

Again, please note that it is the relative norm that is output in Norm_rand#.dat. To obtain the norm itself, it is necessary to multiply the values in Norm_rand#.dat cumulatively.
Also, please note that calculating entropy from a single sample may induce large fluctuations. Depending on the system size, averaging over more than 10 samples is necessary to obtain a reliable mean value.
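Under those two caveats, a per-sample entropy calculation might be sketched as follows (hypothetical names; it assumes `energy` holds the total energy and that the stored values are relative norms, so the normalization conventions should be checked against the HPhi manual):

```python
import numpy as np

def sample_entropy(rel_norms, energy, inv_temp, n_sites, local_dim=2):
    """Entropy per site along one cTPQ run.

    rel_norms, energy, inv_temp: 1-D arrays over cTPQ steps (columns of
    Norm_rand#.dat / SS_rand#.dat). The squared norm is rebuilt from the
    cumulative product of relative norms; ln(local_dim) enters because
    the initial random vector spans the full Hilbert space, so that
    Z(beta) = local_dim**n_sites * <psi(beta)|psi(beta)> on average.
    """
    ln_norm_sq = 2.0 * np.cumsum(np.log(rel_norms))   # ln ||psi(beta)||^2
    return (ln_norm_sq + energy * inv_temp) / n_sites + np.log(local_dim)

# Free-spin check: H = 0 keeps all relative norms at 1 and the energy at 0,
# so the entropy per site is ln(2) at every temperature.
ent = sample_entropy(np.ones(3), np.zeros(3), np.linspace(0.1, 0.3, 3), n_sites=4)
```

Averaging this over the rows (samples) gives the simple mean mentioned above; bootstrapping over the rows gives the error bar.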
