Is your feature request related to a problem or opportunity? Please describe.
In its current form (v0.2), MPoL uses `float64` (or `complex128`) tensor types everywhere. This is because very early in MPoL development, I made the decision for core modules like `BaseCube` to use tensors of this type, and all of the downstream code then builds on tensors of this type. If I recall correctly, I chose `float64` because I had some divergent optimisation loops with `float32` and thought loss of precision was at fault, given the large dynamic range of astronomical images. With a few years of understanding between now and then, it seems more likely that the optimisation simply went awry because of a bad learning rate and a finicky network architecture (e.g., no softplus or ln pixel mapping), but I never got to the bottom of the issue.
Describe the solution you'd like
- In a test branch, create an MPoL version that runs with `float32` and `complex64` types.
- Evaluate whether 'modern' MPoL can successfully run to completion in single precision, and what speed-up (if any) this affords over `float64` (a rough timing sketch follows below).
- If single precision improves performance, do not enforce `float64` in MPoL objects (see the dtype-default sketch below).
- Single precision would also allow MPoL to run on Apple MPS, which does not support `float64`.
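As a rough starting point for the timing comparison, a minimal sketch like the one below could be used. It times a plain dense matmul at both precisions rather than anything MPoL-specific; the `benchmark` helper, array sizes, and iteration counts are purely illustrative, not part of MPoL. It also illustrates the MPS point: `float64` tensors cannot be allocated on that backend.

```python
import time
import torch

def benchmark(dtype, device="cpu", n=4096, iters=10):
    # Rough throughput proxy: time a dense matmul at a given precision.
    # This is NOT MPoL's forward pass, just an illustrative stand-in.
    a = torch.randn(n, n, dtype=dtype, device=device)
    b = torch.randn(n, n, dtype=dtype, device=device)
    _ = a @ b  # warm-up
    start = time.perf_counter()
    for _ in range(iters):
        _ = a @ b
    return time.perf_counter() - start

for dtype in (torch.float64, torch.float32):
    print(dtype, f"{benchmark(dtype):.3f} s")

# On Apple silicon, float64 is not supported by the MPS backend,
# so only float32 would be expected to work there:
if torch.backends.mps.is_available():
    torch.randn(8, 8, dtype=torch.float32, device="mps")   # ok
    # torch.randn(8, 8, dtype=torch.float64, device="mps") # raises TypeError
```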
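On the "do not enforce `float64`" point, one possible pattern (a sketch only, not MPoL's actual `BaseCube`/`ImageCube` API) is to let modules pick up torch's default dtype rather than hard-coding one, so users can opt into single precision with `torch.set_default_dtype`:

```python
import torch
from torch import nn

class ToyCube(nn.Module):
    """Hypothetical cube-like module; not MPoL's actual BaseCube/ImageCube."""

    def __init__(self, npix, dtype=None):
        super().__init__()
        # Fall back to torch's global default instead of forcing float64.
        dtype = dtype if dtype is not None else torch.get_default_dtype()
        self.cube = nn.Parameter(torch.zeros(npix, npix, dtype=dtype))

    def forward(self):
        # Softplus pixel mapping keeps the image non-negative.
        return torch.nn.functional.softplus(self.cube)

torch.set_default_dtype(torch.float32)  # opt into single precision globally
cube = ToyCube(npix=128)
print(cube.cube.dtype)  # torch.float32
```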