Draft: improve hotstart by not using the auxiliary variable #156

Draft · wants to merge 3 commits into master

Conversation

@jacopok (Contributor) commented Nov 27, 2024

This is the first draft of a way to improve the hotstart transformation, solving #155.

It does not use an auxiliary variable; instead it implements a single transformation, computed from the marginal distributions of the posterior samples and represented as a cubic "PCHIP" spline of the marginal CDFs, which is guaranteed to be monotonic.

This is what the new transform looks like when computed for a posterior guess with width $10^{-2}$ and centered around 0.3.

[Figure: new_transform]

The auxiliary logweight is computed from the derivative of the spline, which is analytic.
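
For concreteness, here is a minimal sketch of this construction (the names and the sample-generation step are illustrative, not the PR's actual code): a monotonic PCHIP interpolant of a marginal CDF built from posterior samples, with the log-weight taken from the spline's analytic derivative.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from scipy.stats import norm

# hypothetical 1D posterior guess in unit-cube coordinates:
# narrow Gaussian with width 1e-2 centered at 0.3, as in the figure above
samples = norm(loc=0.3, scale=1e-2).rvs(size=10_000, random_state=42)

# empirical marginal CDF evaluated at sample quantiles
cdf_values = np.linspace(0.0, 1.0, 101)
knots = np.quantile(samples, cdf_values)

# PCHIP preserves monotonicity, so the inverse CDF is a valid transform
inverse_cdf = PchipInterpolator(cdf_values, knots)
d_inverse_cdf = inverse_cdf.derivative()  # analytic derivative of the spline

def transform(u):
    """Map a unit-cube coordinate to the zoomed parameter value."""
    return inverse_cdf(u)

def aux_logweight(u):
    """Log-Jacobian of the transform, log |d transform / du|."""
    return np.log(d_inverse_cdf(u))
```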

The for loop in the get_aux_splines function is there to ensure that the compression is always one: if the tails of the posterior-guess PDF approach zero, we do not need to capture that, and can use only the "interior" part of the PDF, where its value is greater than the effective PDF outside the compression region.
Essentially, the idea is to avoid a very low-probability region at the edges of the posterior guess.
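
A hedged sketch of how such a trimming loop might look (this is one reading of the idea, not the PR's get_aux_splines itself; all names are illustrative):

```python
import numpy as np

def trim_support(grid, pdf_values, max_steps=100):
    """Shrink [grid[lo], grid[hi]] until the guess PDF at both edges
    exceeds the uniform density over the remaining support, avoiding
    very low-probability regions at the edges of the posterior guess."""
    lo, hi = 0, len(grid) - 1
    for _ in range(max_steps):
        flat_density = 1.0 / (grid[hi] - grid[lo])
        if pdf_values[lo] < flat_density and lo < hi - 1:
            lo += 1
        elif pdf_values[hi] < flat_density and lo < hi - 1:
            hi -= 1
        else:
            break
    return grid[lo:hi + 1], pdf_values[lo:hi + 1]
```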

I think there is still some tuning to do and for sure some documentation to write; I wanted to start the PR early to make it clear what I'm doing.

@JohannesBuchner (Owner)

Thanks for looking into this.

We should have a few benchmarks to check that no biases are introduced, and to measure the speed-up.

I would suggest using the ones from https://github.com/JohannesBuchner/space-of-inference-spaces/blob/main/image/problems.py; a possible harness is sketched after the list below.

  • Asymmetric Gaussian (get_asymgauss(4), get_asymgauss(16))
  • Funnel (get_stdfunnel4(1, gamma=0.95), get_stdfunnel4(9, gamma=0.95))
  • Beta distribution (get_beta(2), get_beta(10))
  • Rosenbrock (get_rosenbrock(2), get_rosenbrock(20))
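
A possible benchmark harness, assuming each generator in problems.py returns an object exposing paramnames, loglike, and transform (these attribute names are assumptions; check the actual interface in that repository):

```python
import ultranest
from problems import get_asymgauss, get_stdfunnel4, get_beta, get_rosenbrock

cases = [
    get_asymgauss(4), get_asymgauss(16),
    get_stdfunnel4(1, gamma=0.95), get_stdfunnel4(9, gamma=0.95),
    get_beta(2), get_beta(10),
    get_rosenbrock(2), get_rosenbrock(20),
]

for problem in cases:
    # baseline run without hotstart; repeat with the hotstart transform
    # to measure the speed-up and check for biases in logZ
    sampler = ultranest.ReactiveNestedSampler(
        problem.paramnames, problem.loglike, problem.transform)
    result = sampler.run()
    print(result['logz'], '+/-', result['logzerr'], 'ncall:', result['ncall'])
```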

@jacopok (Contributor, Author) commented Nov 27, 2024

Definitely! There is a large degree of freedom in choosing which posterior guess to use; I think we should compare

  • regular sampling
  • hotstart sampling with a "good" guess
  • hotstart sampling with a "bad" guess

but it's tricky to define what those mean in general. For the spike-slab case, I'd use Gaussians with variances comparable to the spike ("bad") and the slab ("good").

@JohannesBuchner (Owner)

Yeah, I think we should aim for much-improved performance for a Gaussian-ish posterior shape, and no bias (with arbitrary performance) in all other cases.

The former should be quite easy in high-information-gain analyses, such as asymgauss.

An "optimistic" guess may be based on 3 sigma, while a "cautious" guess could be based on 10-sigma-equivalent quantiles of the marginals (or the full interval, if the zoom interval is not much smaller than the full interval).
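
A sketch of that quantile heuristic (an illustrative helper, not part of the PR; the fallback threshold is an assumption):

```python
import numpy as np
from scipy.stats import norm

def guess_interval(samples, k_sigma, min_shrinkage=0.5):
    """Central k-sigma-equivalent quantile interval of 1D samples
    in unit-cube coordinates."""
    tail = norm.cdf(-k_sigma)  # Gaussian tail mass beyond k sigma
    lo, hi = np.quantile(samples, [tail, 1.0 - tail])
    if hi - lo > min_shrinkage:
        return 0.0, 1.0  # zoom interval not much smaller: use full interval
    return lo, hi

samples_1d = np.random.default_rng(0).normal(0.3, 0.01, size=10_000)
optimistic = guess_interval(samples_1d, 3)    # "optimistic", 3 sigma
cautious = guess_interval(samples_1d, 10)     # "cautious", 10 sigma
```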
I guess one could also try taking the posterior from a Metropolis-Hastings MCMC run or from an optimizer run:

```python
from snowline import ReactiveImportanceSampler

sampler = ReactiveImportanceSampler(paramnames, loglike, transform)
sampler.laplace_approximate(num_global_samples=1000, verbose=True)
```

which stores the Laplace approximation into `sampler.invcov`, `sampler.cov`, `sampler.optu`, `sampler.optp`, and `sampler.optL`. But yeah, such heuristics will not always give good results.

@jacopok (Contributor, Author) commented Dec 10, 2024

Quick progress update: after some trial and error I agree that a KDE is better.
I have an implementation that constructs splines for the cumulative distribution and the density, also parameterized by the fraction of prior mass one wants the guess to "take up", ranging from 0 to 1; here is what it looks like:

[Figure: density_cumulative_by_original_fraction]

Now I need to find a way to have two interpolants, one for the inverse cumulative distribution and one for the probability density, which correspond exactly (since the inverse of a generic cubic spline is no longer a cubic spline). It needs to be fast, since it will be evaluated at every iteration; maybe a root solver is OK, but I'd prefer something more direct.

Edit: or just use a piecewise-linear interpolant for the cumulative and a piecewise-constant one for the derivative; we don't need more precision than that!
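
A minimal sketch of that piecewise-linear route (the knot values are made up for illustration): np.interp with the axes swapped gives the exact inverse of a piecewise-linear CDF, and the derivative of the inverse is piecewise constant, so the two interpolants correspond exactly.

```python
import numpy as np

# made-up knots for illustration: parameter values and the CDF at them
knots = np.array([0.0, 0.25, 0.29, 0.31, 0.35, 1.0])
cdf = np.array([0.0, 0.05, 0.30, 0.70, 0.95, 1.0])

def inverse_cdf(u):
    """Exact inverse of the piecewise-linear CDF (axes swapped in interp)."""
    return np.interp(u, cdf, knots)

def log_jacobian(u):
    """Piecewise-constant log-derivative of the inverse CDF:
    the slope of the linear segment containing u."""
    i = np.clip(np.searchsorted(cdf, u, side="right") - 1, 0, len(cdf) - 2)
    return np.log((knots[i + 1] - knots[i]) / (cdf[i + 1] - cdf[i]))
```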

@JohannesBuchner (Owner)

I guess another parameter one could consider is a "temperature", as is common in simulated annealing, for flattening the posterior distribution: $\pi' = \pi^\beta$ where $\beta \in [0, 1]$; or alternatively by convolution (e.g. broadening the Gaussians of the KDE).
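
Both variants are easy to sketch on top of a Gaussian KDE (illustrative code; `beta` and `broaden` are assumed knob names, not an existing API): tempering reweights the samples so the KDE targets $\pi^\beta$, while broadening scales the bandwidth, i.e. convolves with a wider Gaussian.

```python
import numpy as np
from scipy.stats import gaussian_kde

def flattened_kde(samples, beta=1.0, broaden=1.0):
    """Posterior guess flattened by tempering (pi -> pi^beta) and/or
    by broadening the KDE bandwidth (convolution with a wider Gaussian)."""
    # tempering: samples drawn from pi, reweighted by pi^(beta - 1),
    # target pi^beta
    logpdf = gaussian_kde(samples).logpdf(samples)
    weights = np.exp((beta - 1.0) * (logpdf - logpdf.max()))
    kde = gaussian_kde(samples, weights=weights)
    # broadening: scale the automatically chosen bandwidth factor
    kde.set_bandwidth(kde.factor * broaden)
    return kde
```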

@JohannesBuchner (Owner)

I guess it is the user's responsibility to prepare the auxiliary probability distribution, though we can give some suggestions in examples.
