
Linear interpolation for hybridization #177

Open · wants to merge 1 commit into base `unstable`

Conversation


@ec147 ec147 commented Jul 29, 2024

  • Linear interpolation for the evaluation of the hybridization function instead of 0th order interpolation.
  • Addition of a parameter n_tau_delta for the imaginary-time grid of $\Delta(\tau)$. The default value is -1, a convention meaning "set it to n_tau".

Right now, the linear interpolation is disabled so that the tests pass. I am not sure how you want to set it up: do you want to keep the linear interpolation as an option, or completely replace the 0th-order interpolation (in which case all the reference files for the tests would need to be recomputed)?
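The difference between the two schemes can be sketched as follows. This is a toy Python illustration, not the actual cthyb code; the function names, the uniform grid, and the sample $\Delta(\tau)$ are all hypothetical:

```python
import numpy as np

# Toy sketch (assumption: a tabulated Delta(tau) on a uniform grid of
# n_tau_delta + 1 points over [0, beta]); not the solver's real API.

def delta_0th(tau, tau_grid, delta_vals):
    """0th-order interpolation: return the value at the grid point below tau."""
    n = len(tau_grid) - 1
    beta = tau_grid[-1]
    idx = min(int(tau / beta * n), n - 1)
    return delta_vals[idx]

def delta_linear(tau, tau_grid, delta_vals):
    """Linear interpolation between the two grid points bracketing tau."""
    n = len(tau_grid) - 1
    beta = tau_grid[-1]
    idx = min(int(tau / beta * n), n - 1)
    t0, t1 = tau_grid[idx], tau_grid[idx + 1]
    w = (tau - t0) / (t1 - t0)
    return (1.0 - w) * delta_vals[idx] + w * delta_vals[idx + 1]
```

For a $\Delta(\tau)$ with a steep derivative at small $\tau$, the linear scheme reduces the evaluation error at a fixed grid size, at the cost of one extra table lookup per call.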

@Wentzell
Member

Thank you @ec147 for this Pull Request!

Please add a description of the Feature and the changes to this PR.

Also, can you please adjust the git history to group the changes into logical chunks with useful commit messages, instead of generic messages like "Add files via upload"?

@Wentzell
Member

Dear @ec147,

Can you comment on why you decided to go for linear interpolation here?
Was there a particular problem that you wanted to address?

Also, what is your motivation to add an additional parameter n_tau_delta ?

@ec147
Author

ec147 commented Oct 1, 2024

Sure. Currently, several things happen at the same time when you increase n_tau: you decrease the binning error of $G(\tau)$, you decrease the number of samples in each bin of $G(\tau)$, and you decrease the interpolation error in the evaluation of $\Delta(\tau)$ during the run.

These are three completely different things. The idea of this commit is to provide more flexibility for convergence studies and to disentangle these effects. Typically, the way I converge my results is the following: I start with an arbitrary value of n_tau and converge wrt the number of sweeps, in order to control the noise of the Green's function and the quality of my Fourier transform. Then I converge wrt n_tau, in order to control the binning error of $G(\tau)$, keeping the ratio n_cycles / n_tau fixed such that I still have the same number of samples in each bin. Finally, I converge wrt n_tau_delta, to control the interpolation error of $\Delta(\tau)$.

In my system at least, the value I have to choose for n_tau_delta is much higher than n_tau, so forcing the user to have the same value for both requires more sweeps and thus more computation time in order to keep the value n_cycles / n_tau fixed.
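The fixed-ratio bookkeeping in that scan can be made concrete. The sketch below is illustrative only: the parameter names mirror the solver parameters, but the helper itself is hypothetical:

```python
# Hypothetical helper: scale n_tau while keeping n_cycles / n_tau fixed,
# so that each bin of G(tau) keeps the same number of samples.

def scale_binning(n_tau, n_cycles, factor):
    ratio = n_cycles / n_tau
    new_n_tau = int(n_tau * factor)
    new_n_cycles = int(round(ratio * new_n_tau))
    return new_n_tau, new_n_cycles
```

With a decoupled n_tau_delta, refining the $\Delta(\tau)$ grid no longer forces this rescaling, which is exactly the computation-time saving described above.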

Finally, the choice of linear interpolation is simply to help during the convergence process of n_tau_delta. I was also very curious about the impact of this interpolation error, and was worried that a 0th-order interpolation was simply not enough and could give faulty results, especially since $\Delta(\tau)$ can have a very high derivative at low imaginary times. It turns out it is not that bad (in my system at least), but convergence wrt n_tau_delta takes a bit longer.
