Hello, I encountered some problems while performing finite temperature calculations. Could you help me resolve them? #191
For the cTPQ method, the bootstrap method is a standard way to evaluate the average value and error bars (see https://www.sciencedirect.com/science/article/pii/S001046552400016X?via%3Dihub). In tutorial 2.4.1, we provide a script for performing the bootstrap method.
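For concreteness, here is a minimal sketch of that bootstrap averaging, assuming the per-sample observables at one fixed temperature have already been collected into a NumPy array. All names and numbers below are illustrative and not taken from the tutorial script, which additionally reweights each sample by its partition function before averaging.

```python
import numpy as np

rng = np.random.default_rng(12345)

def bootstrap_mean(samples, n_boot=1000):
    """Bootstrap estimate of the mean and its error bar over cTPQ samples."""
    n = len(samples)
    boot_means = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample with replacement
        boot_means[b] = samples[idx].mean()
    return boot_means.mean(), boot_means.std()

# Hypothetical energies from NumAve = 5 independent cTPQ initial vectors
energies = np.array([-4.12, -4.05, -4.20, -4.15, -4.09])
mean_E, err_E = bootstrap_mean(energies)
print(f"<H> = {mean_E:.4f} +/- {err_E:.4f}")
```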
Hello, I am confused about some parts of the code you provided, for example in the function def CalcBasic. For cases where it is not the 0th cTPQ sample and not the 0th iteration, why is tot_Z accumulated as a cumulative product, tot_Z = tot_Z * IPL_Z / Norm[0][k]? Why is it not calculated as tot_Z = IPL_Z / Norm[0][k], which seems closer to the formula in the image?

In the Norm_rand#.dat file of HPhi, the norms of the various cTPQ states are stored, and their squared norms correspond to the partition function. Can I use these to calculate the entropy directly (simply calculating the entropy from multiple cTPQ samples without performing the bootstrap)? I first use a matrix Norm to store the squared norms of all cTPQ states, with size NumAve * Lanczos_Max. Then, within two nested loops, I calculate the entropy using the formula shown in the attached image.

By this method, can I correctly calculate the entropy? Thank you very much for your help!
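For concreteness, the direct (non-bootstrap) estimate described in the question might look roughly like the sketch below. The variable names, the ln(dim) offset coming from the random-vector average of the trace, and the single-sample treatment are all assumptions made for illustration, not a confirmed reproduction of the tutorial script.

```python
import numpy as np

def entropy_per_site(norm2, energy, temps, n_sites, dim):
    """Entropy per site from one cTPQ sample (rough sketch).

    norm2  : squared norms at each step, length Lanczos_Max (one row of Norm)
    energy : <H> at each step
    temps  : temperature at each step
    dim    : Hilbert-space dimension, entering via Z ~ dim * <psi|e^{-beta*H}|psi>
    """
    # Partition function: cumulative product of the stored (relative) squared norms
    lnZ = np.log(dim) + np.cumsum(np.log(norm2))
    # S / N_sites = (ln Z + <H>/T) / N_sites
    return (lnZ + energy / temps) / n_sites
```

The averaging over the NumAve samples (for example, averaging Z and the energy over samples before forming S, rather than averaging S itself) is left out here; that step is exactly where the bootstrap enters.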
Sorry for the delayed response.
This is because the norm written at each step is relative to the state after the previous normalization, so the partition function at step k is recovered by the cumulative product over all earlier steps, which is what the update tot_Z = tot_Z * IPL_Z / Norm[0][k] does.
Again, please note that it is the relative norm that is output in Norm_rand#.dat.
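A toy illustration of that point (numbers invented, not HPhi output):

```python
import numpy as np

# Invented relative squared norms for one cTPQ sample at steps k = 0, 1, 2, 3
rel_norm2 = np.array([1.0, 0.95, 0.90, 0.88])

# Z_k (up to the Hilbert-space-dimension prefactor) needs all earlier factors,
# which is presumably why tot_Z is updated multiplicatively in the script
Z_over_dim = np.cumprod(rel_norm2)
print(Z_over_dim)   # [1.      0.95    0.855   0.7524]
```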
I want to calculate entropy and the Wilson loop. For entropy, I compared the results obtained using the exact diagonalization method with those from the cTPQ method. The results from these two methods are almost identical.
(Note: Initially, I tried calculating the entropy using the script provided in samples/tutorial_2.1/Finite.py. However, the results seemed incorrect: the entropy takes a large negative value as the temperature approaches zero. What could be causing this issue? The formula used is S = (ln Z + <H>/T) / N_sites.)
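As a cross-check on that formula, the same quantity can be computed directly from the full spectrum given by exact diagonalization. Here is a minimal sketch; the eigenvalues in the example call are an invented two-level toy, not data from the attached runs.

```python
import numpy as np

def entropy_from_spectrum(eigvals, T, n_sites):
    """S/N_sites = (ln Z + <H>/T) / N_sites from the full energy spectrum."""
    # Shift by the ground-state energy for numerical stability of the exponentials
    e0 = eigvals.min()
    w = np.exp(-(eigvals - e0) / T)          # Boltzmann weights (shifted)
    Z = w.sum()
    e_avg = (eigvals * w).sum() / Z          # <H> at temperature T
    lnZ = np.log(Z) - e0 / T                 # undo the shift in ln Z
    return (lnZ + e_avg / T) / n_sites

# Invented two-level spectrum, just to show the call
print(entropy_from_spectrum(np.array([-1.0, 1.0]), T=0.5, n_sites=1))
```

With this form the entropy stays non-negative as T approaches zero, which makes it a useful reference against the negative values mentioned above.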
For the Wilson loop (W_p = Sx·Sy·Sz·Sx·Sy·Sz), I first calculated the result at zero temperature and then used both exact diagonalization and the cTPQ method to calculate it at finite temperatures. At very low temperatures, the finite-temperature results do not match the zero-temperature result, and the exact-diagonalization and cTPQ results do not completely overlap with each other either.
This is the Wp calculated at zero temperature.
This is the Wp calculated using cTPQ and exact diagonalization.
My final question is: when I obtain results with cTPQ and simply take the arithmetic average over samples (dividing by NumAve), is this reliable?
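For reference, a sketch of the difference between the plain average over NumAve and a partition-function-weighted average; the weighting scheme shown is an assumption based on the cTPQ literature cited above, not a statement about what HPhi or the tutorial script actually does.

```python
import numpy as np

# Invented per-sample values at one temperature: partition functions Z_i
# and expectation values <O>_i from NumAve = 4 cTPQ runs
Z = np.array([0.80, 0.75, 0.82, 0.78])
O = np.array([0.51, 0.48, 0.52, 0.50])

plain_average    = O.mean()                  # simple division by NumAve
weighted_average = (Z * O).sum() / Z.sum()   # each sample weighted by its Z

print(plain_average, weighted_average)
```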
CG_12sites.zip
Full_12sites.zip
TPQ_12sites.zip