Comparison to ngmix.example.metacal.metacal.py (#36)
And for fun, I did a timing comparison :-) autometacal is about 65x faster than ngmix.
@EiffL if I understood correctly, the notebook should reproduce these exact results? I'm not getting them; I might be missing something.
hummmm which notebook?
If you mean running the script in this PR, #37: you should get different results from mine, because my R matrix is wrong, since I don't have the cubic interpolation code.
Hey @EiffL, do you think we could have a consistent definition of ResamplingType in galflow? The way I usually do this is to define a hidden variable that's used everywhere throughout the package and, if need be, set it externally when doing experiments. ResamplingType is used in many places, and at times it is tough to keep track of whether I've set them all up properly. In any case, with our cubic interp, the results are:
...which are closer, but not there yet.
humm, the simple way you would typically do this is just by choosing a convention and making it the default in all functions, so that you don't have to think about it.
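The pattern being suggested can be sketched as follows. This is a hypothetical illustration, not galflow's actual API: the names `_DEFAULT_RESAMPLING`, `set_default_resampling`, and `resample` are made up for the example.

```python
# Sketch of "one convention, one default": a single module-level default
# that every public function falls back to, overridable for experiments.
# All names here are illustrative, not galflow's real API.

_DEFAULT_RESAMPLING = "cubic"  # the package-wide convention, chosen once


def set_default_resampling(kind):
    """Override the shared default externally, e.g. when running experiments."""
    global _DEFAULT_RESAMPLING
    _DEFAULT_RESAMPLING = kind


def resample(image, resampling_type=None):
    """Every public function defaults to the shared convention."""
    kind = resampling_type if resampling_type is not None else _DEFAULT_RESAMPLING
    # ...actual interpolation would happen here; we just report the choice.
    return kind


print(resample(None))             # cubic
set_default_resampling("quintic")
print(resample(None))             # quintic
```

With this, no call site needs to pass ResamplingType explicitly, and a single `set_default_resampling` call switches the whole package for an experiment.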
Hummmm interesting..... and slightly worrying ^^' I don't know why we would be getting differing results..... |
This would mean that either our shear values and/or our gradients are wrong :-/
Yes, exactly... so, in many functions as
Is this it, or do we actually need quintic? Because we have improved by an order of magnitude...
Yeah, it could be that we need quintic... There is one way to know, I guess. We can use our code but compute finite differences instead of the autodiff response (like what you are doing in your comparison notebooks). If the bias stays roughly the same, it most likely comes from the interpolation order. If the bias disappears, it means that our gradients are wrong.
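The finite-difference check proposed above can be sketched like this. `measure_e` and `make_sheared` are stand-ins for the real measurement and metacal image pipeline (assumptions for the example, not autometacal's API); the point is only to show the central-difference response in place of the autodiff one.

```python
# Central-difference response matrix: R_ij ≈ [e_i(+dg_j) - e_i(-dg_j)] / (2 dg).
# measure_e and make_sheared are hypothetical stand-ins for the real pipeline.
import numpy as np


def finite_diff_response(measure_e, make_sheared, dg=0.01):
    """Estimate the 2x2 shear response by finite differences instead of autodiff."""
    R = np.zeros((2, 2))
    for j in range(2):
        g_plus, g_minus = [0.0, 0.0], [0.0, 0.0]
        g_plus[j], g_minus[j] = dg, -dg
        e_p = np.asarray(measure_e(make_sheared(g_plus)))
        e_m = np.asarray(measure_e(make_sheared(g_minus)))
        R[:, j] = (e_p - e_m) / (2.0 * dg)
    return R


# Toy check: if the measured ellipticity responds linearly as e = 0.7 * g,
# the recovered response should be 0.7 * identity.
R = finite_diff_response(lambda img: img,
                         lambda g: [0.7 * g[0], 0.7 * g[1]],
                         dg=0.01)
print(R)  # ≈ 0.7 * identity
```

If this estimate, plugged into the same calibration, reproduces ngmix's bias, the autodiff gradients are off the hook and the interpolation order is the suspect.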
Thus spake The Code:
Stepsize in |
w/ stepsize = 0.002, we get closer to autometacal, as we expected from previous investigations:
Ok, so, this is great, what can we conclude from this test?
So, the residual m bias can come from two things:
oh oh oh, interesting :-) Does that mean quintic in TensorFlow? It's just a bit surprising that the finite diff doesn't converge to the same value :-/ hummm
After a more careful check of interpolations, the current result is (for an unrealistically high SNR (~1e7) galaxy - just for the sake of unit testing!)
Finite differences in autometacal are really close to ngmix (all of them using the Bernstein & Gruen 2014 'quintic' interpolation). Now, into the
They differ by 0.0038% (note the truncation due to single precision).
...corresponding to a difference of 0.005%. Now, with realistic noisy (SNR ~ 100, galsim definition) galaxies:
Finally, as it is, autometacal gives marginally better results (~3%) in the realistic-SNR example (the residual m is slightly smaller).
Nice results @andrevitorelli! So, the thing to look at is not necessarily the % difference between the m values, but whether or not you detect a significant non-zero m. In the case of ngmix high SNR, you have For the snr=100 sample you just cannot conclude anything: the m measurement is one order of magnitude smaller than the error bar, so it's all consistent with zero m, but only at the 10^-1 level. You need a looooot more samples to reduce the error bars in this case.
Given that your finite-diff implementation seems consistent with ngmix, this would seem to indicate that there is a small problem somewhere in a gradient. To make sure, you can further reduce the error bars in your high-SNR test by increasing the sample size.
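A back-of-envelope for "a lot more samples": with the standard shape-noise scaling, the error on m shrinks as sigma_m ≈ sigma_e / (g_true * sqrt(N)). The numbers below (sigma_e = 0.2 per galaxy, |g| = 0.02) are illustrative assumptions, not values from this thread.

```python
# How many galaxies are needed to pin sigma_m down to a target, assuming
# sigma_m = sigma_e / (g_true * sqrt(N)). sigma_e and g_true are
# illustrative placeholder values, not numbers from this comparison.
import math


def n_galaxies_needed(target_sigma_m, sigma_e=0.2, g_true=0.02):
    """Sample size N at which the m error bar reaches target_sigma_m."""
    return (sigma_e / (g_true * target_sigma_m)) ** 2


n = n_galaxies_needed(1e-3)
print(f"{n:.3g}")  # on the order of 1e8 galaxies for sigma_m ~ 1e-3
```

This is why the snr=100 run at the current sample size can only bound m at the 10^-1 level: the error bar, not the estimator, is the limiting factor.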
This is solved, right? We can close it? |
Yes, def. |
This PR #37 adds a script that runs autometacal within the ngmix example script for direct comparison against ngmix's finite differences.
With default parameters I'm getting:
So we see a discrepancy in `m`, but we keep in mind that: `tests/test_tf_ngmix.py`
I would attribute the difference in multiplicative bias to the difference in response matrix, and ultimately, at this stage, to our faulty interpolation gradients.
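For reference, the multiplicative and additive biases discussed in this thread are conventionally extracted by fitting g_measured = (1 + m) * g_true + c over the sample. This is a generic sketch of that fit, not the PR's actual script; the toy data values are made up.

```python
# Extract (m, c) from the standard linear bias model
# g_measured = (1 + m) * g_true + c, via a least-squares line fit.
import numpy as np


def fit_m_c(g_true, g_meas):
    """Return (m, c) from a degree-1 polynomial fit; slope = 1 + m."""
    slope, intercept = np.polyfit(g_true, g_meas, 1)
    return slope - 1.0, intercept


# Toy data with a known injected bias of m = 3e-3, c = 1e-4.
g_true = np.array([-0.02, -0.01, 0.0, 0.01, 0.02])
g_meas = 1.003 * g_true + 1e-4
m, c = fit_m_c(g_true, g_meas)
print(m, c)  # m ≈ 3e-3, c ≈ 1e-4
```

Under this model, a wrong response matrix rescales g_measured and shows up directly as a shift in m, which is why the interpolation gradients feeding R are the natural suspect here.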
@andrevitorelli could you review the code in this PR #37 and try to run this script with your cubic interpolation code?