Source and Scene: High-level science classes #65
Replies: 11 comments 21 replies
-
Hi Louis, just to get some points in there... probably more thoughts in due course...

**Anisoplanatism**

Direction-dependent aberrations are in general hard to deal with, but I think it is not as bad as you fear! Resolved structure like disks etc. will never extend over sufficiently wide fields of view for this to matter. The case where we will deal with wide fields of view is with point sources - we may sometimes look at wide-ish binaries (e.g. some of the HST brown dwarfs), or at fields of many stars. But in general, I think high angular resolution imaging will only encounter a single set of aberrations, where the use case is simultaneous phase retrieval and image deconvolution / point-source astrometry. On the other hand, where we will want to use pointing-dependent aberrations is in wide-angle astrometry (Toliman, or other foreseeable stuff with JWST) or pure phase retrieval (HST/JWST technical stuff).

**Typical Use Cases**

I can see us mainly wanting to look at a few things to do high angular resolution imaging. We will also sometimes want to do pure engineering work, e.g. phase retrieval.

**Distortion**

The best way to think about distortion is that in general a world coordinate system provides an invertible transformation between pixel coordinates and on-sky coordinates.

**Regularization**

The general way you do image reconstruction is to have a loss function `objective(pixels) = goodness_of_fit(pixels) + alpha * regularizer(pixels)`, where `alpha` sets the strength of the regularization.
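For concreteness, a minimal JAX sketch of that objective might look like the following - the chi-squared data term and the smoothness regularizer here are illustrative choices, not existing ∂Lux code:

```python
import jax
import jax.numpy as jnp

def goodness_of_fit(pixels, data, sigma):
    # Simple chi-squared data term
    return jnp.sum(((pixels - data) / sigma) ** 2)

def regularizer(pixels):
    # Example: quadratic smoothness penalty on neighbouring pixels
    return (jnp.sum(jnp.diff(pixels, axis=0) ** 2)
            + jnp.sum(jnp.diff(pixels, axis=1) ** 2))

def objective(pixels, data, sigma, alpha=1e-2):
    return goodness_of_fit(pixels, data, sigma) + alpha * regularizer(pixels)

# Being a pure JAX function, the gradient with respect to the image comes for free:
grad_fn = jax.grad(objective)
```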
-
I agree with most of what has been said but will also throw in my two cents.
-
Yeah, I think we still want to have the `OpticalSystem` as the main object we interact with, or possibly create a new class. This would allow us to basically keep the same structure that Adam made. This would also put us more in line with the overall architecture used for webbpsf, which is probably not a bad idea in the long run!
-
Hi all,

Here is a rough sketch of the class structure I have in mind (pseudocode):

```python
(abstract) class Source(eqx.Module):
    spectrum : f64[2, n]  # n is the number of wavelengths in the discrete spectrum.
    position : f64[2, 1]  # the x, y position. Silly question, but should there be a z?
    resolved : bool       # answers the question: is this source resolved?

    def __init__(self, spectrum : f64[1, n], position : f64[2, 1]):
        ...

    (abstract) def _regularise(self, pixels):
        # Not sure what is needed to calculate the regularisation.
        # @benjaminpope, I figure you will have a better idea where this
        # should go and what it will need.
        ...

    (abstract) def __call__(self, params_dict):
        # I'm in favour of generating the wavefronts at the source
        # (i.e. replacing CreateWavefront); @LouisDesdoigts may have
        # something to say about this though.
        ...


class Point(Source):
    def _regularise(self, pixels):
        # call specific @classmethod regulariser.
        ...

    def __call__(self, parameters):
        ...


class Smudge(Source):  # This is an extended source; 'Smudge' is a joke.
    def _regularise(self, pixels):
        ...

    def __call__(self, parameters):
        ...
```

From @LouisDesdoigts' original post I see there are many ways that the regularisations can be calculated. I was thinking that these methods could all be implemented under an (abstract) `Regulariser` class, something like:

```python
(abstract) class Regulariser(eqx.Module):
    ...

class MaximumEntropy(Regulariser):
    def __init__(self, ...):
        ...

    def __call__(self, ...):
        ...
```

etc. I do not know enough about regularisation to comment on any shared behaviour or useful functionality at this point, but if someone could comment a list of useful sources I am happy to implement a few of these, provided implementations do not already exist. This seems to tie in with @benjaminpope's grand scheme of mixing neural network layers with optical ones willy-nilly. I'll be messaging again once I start working on this.

Regards,
Jordan
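To make that idea concrete, a minimal runnable version of a regulariser-as-Module could look like the sketch below - the field names and the choice of penalty are assumptions for illustration, not existing ∂Lux API:

```python
import equinox as eqx
import jax.numpy as jnp

class Regulariser(eqx.Module):
    """Base class: a callable penalty on an image, scaled by a strength."""
    strength: float

    def __call__(self, pixels):
        raise NotImplementedError

class MaximumEntropy(Regulariser):
    def __call__(self, pixels):
        # Negative-entropy penalty on a positive image; the small constant
        # avoids log(0) for empty pixels.
        return self.strength * jnp.sum(pixels * jnp.log(pixels + 1e-12))

# Usage sketch: penalty = MaximumEntropy(1e-3)(image)
```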
-
Yeah, I don't think …

These might be used in combination, too, as a weighted sum! So I think we need to be able to access the pixel values, but not necessarily that these regularizers be built in - or alternately, that the sources by default have multiple regularizer methods. And as Adam says, we may want to also access other parameters for design problems, where we regularize phase values etc. in various ways.
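As a sketch of the weighted-sum idea (assuming each regularizer is just a function of the pixel values; the specific penalties and weights are illustrative):

```python
import jax.numpy as jnp

def total_variation(pixels):
    # Penalise sharp pixel-to-pixel jumps
    return (jnp.sum(jnp.abs(jnp.diff(pixels, axis=0)))
            + jnp.sum(jnp.abs(jnp.diff(pixels, axis=1))))

def maximum_entropy(pixels):
    # Entropy-style penalty on a positive image
    return jnp.sum(pixels * jnp.log(pixels + 1e-12))

def combined_regularizer(pixels, w_tv=1e-2, w_maxent=1e-3):
    # Weighted sum of several penalties on the same pixel values
    return w_tv * total_variation(pixels) + w_maxent * maximum_entropy(pixels)
```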
-
Hi all. The code is … once the … Please reply to this comment with suggested changes, fixes, additions etc. Please note there was some incomplete code regarding the weights for the combined spectra of binaries. I wasn't sure if we cared about this functionality, and the implementation is non-trivial.
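One possible way to handle those weights (purely a sketch, not the incomplete code referred to above - the helper name and the contrast parameterisation are assumptions):

```python
import jax.numpy as jnp

def combine_binary_spectra(spectrum_a, spectrum_b, contrast):
    """Hypothetical helper: weight two spectra (each shape [n_wavelengths])
    by a flux contrast (flux_B / flux_A), returning per-source weights that
    sum to one over the whole system."""
    # Normalise each spectrum so it integrates to unit flux
    a = spectrum_a / jnp.sum(spectrum_a)
    b = spectrum_b / jnp.sum(spectrum_b)
    # Scale component B by the contrast, then renormalise the pair
    weights = jnp.stack([a, contrast * b])
    return weights / jnp.sum(weights)
```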
-
Just throwing a small plug here for #70 - I think this is something that would be useful, and could inform/help guide how we build these new classes.
-
Hi guys, there is also the possibility that a single PSF is generated based on the optical transmission of the … As always, let me know your thoughts. Jordan
-
So if you have 2 sources with different positions, the positions are encoded by a phase slope across their wavefronts.
-
Well, 1 λ of slope across the pupil will shift the PSF centre by λ/D. So it's not actually that small in phase terms, but it's small in nm. (This is how we position the binaries already in the Fisher information example.)
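A toy (non-∂Lux) Fourier-optics sketch of exactly this: one wave of phase slope across the pupil diameter moves the PSF peak by one λ/D, which at 4× oversampling is 4 pixels. The grid sizes here are arbitrary choices for illustration:

```python
import jax.numpy as jnp

npix = 512        # size of the padded pupil array
pupil_pix = 128   # pupil diameter in pixels -> PSF sampled at (λ/D)/4 per pixel

x = (jnp.arange(npix) - npix // 2) / (pupil_pix / 2)  # pupil coords in radius units
X, Y = jnp.meshgrid(x, x)
aperture = ((X**2 + Y**2) <= 1.0).astype(float)

def psf_with_tilt(tilt_waves):
    # tilt_waves = number of waves of phase slope across the pupil *diameter*
    phase = 2 * jnp.pi * tilt_waves * X / 2.0   # X/2 spans [-0.5, 0.5] over the diameter
    wavefront = aperture * jnp.exp(1j * phase)
    return jnp.abs(jnp.fft.fftshift(jnp.fft.fft2(jnp.fft.ifftshift(wavefront)))) ** 2

# Peak column with and without the tilt: differs by 4 pixels = λ/D at 4x oversampling
shift = (jnp.argmax(psf_with_tilt(1.0).max(axis=0))
         - jnp.argmax(psf_with_tilt(0.0).max(axis=0)))
```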
-
Yeah, so just to jump back in here, I don't think the issues you've brought up @Jordan-Dennis will be an issue for us.

Firstly, as @benjaminpope mentioned, all point source positions can be parameterised purely by their offset angle from the pointing axis of the telescope, i.e. relative to the optical axis. Every point within an optical system along the optical axis has some linear relationship between its input offset angle and the corresponding angle that it is deflected away from that axis, i.e. a high field-of-view optical system will 'deflect' some input source by a smaller angle than a small field-of-view optical system. That angle is determined by the properties of all the preceding optical elements, but no matter the optical system, that shift is exclusively parameterised by the phase slope across the aperture, and hence the angular offset from the optical axis.

This is great for us because it means that source and scene are entirely agnostic to the optical system observing them. Source and scene can store and parameterise on-sky properties arbitrarily and only need to spit out a series of angular positions, which can then be turned into angular positions relative to the optical axis defined by the telescope pointing. This is all you need to create and propagate the wavefront, allowing source/scene to be entirely ignorant of the optical system.

A similar logic applies to the optical system and detector. The optical system outputs an array of flux values entirely parameterised by the pixel scale, since all PSFs are produced relative to the paraxial axis. No need to be aware of anything else going on upstream.

This simply means that the telescope class does the heavy lifting, transforming information between these distinct classes that are agnostic to each other. It takes input angular positions and relative spectral fluxes and individually propagates these through its optical system, and then passes them to the detector. This means it does all the formatting of PSFs in terms of relative brightnesses, 'observing strategies', etc.

Let me know if that makes sense, or if I've perhaps misunderstood some nuance of your original comment, but source and scene should be relatively simple in my understanding. I think they only need to take a series of arbitrarily parameterised on-sky sources and spit out relative Cartesian position angles and spectral fluxes ✌️
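As a toy illustration of that hand-off (the function name and arguments are made up, not existing ∂Lux code), converting catalogued positions into offsets from the optical axis could be as simple as a flat-sky subtraction:

```python
import jax.numpy as jnp

def sky_to_offsets(source_ra, source_dec, pointing_ra, pointing_dec):
    """Small-angle (flat-sky) conversion of on-sky positions (radians) into
    (x, y) angular offsets from the telescope's optical axis."""
    x = (source_ra - pointing_ra) * jnp.cos(pointing_dec)
    y = source_dec - pointing_dec
    return jnp.stack([x, y], axis=-1)
```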
-
So after our big science planning zoom call on Friday I now have a much better idea of where we want to take the package for JWST. I believe we would be best served by creating two new class types:

`Source`: This class would contain and parameterise the astrophysical source object, and determine the control flow of its modelling. I.e. there are two (semi) distinct methods used to model an unresolved point source versus a resolved source with an extended distribution (proto-planetary disks, etc.). Similarly, there are multiple levels of detail we could want here. Extended objects over a large field of view may need to be modelled anisoplanatically (changing with on-sky position) depending on the optical system. This can actually complicate our life a decent amount with a term I'm going to call 'the curse of the forwards model': if we have multiple source objects, some resolved and some unresolved, within a small on-sky separation, then we can no longer apply the same computational control flow for an entire detector/image. This is where the `Source` object would allow each specific set of objects to call different sub-routines based on their type. I.e. we may want a different regularisation term for proto-planetary disks vs a Wolf-Rayet star.

At this point I can only loosely suggest some class attributes without knowing how we will integrate this into the wider ∂Lux infrastructure, but I suspect @benjaminpope would be the most knowledgeable on what we would want here - off the top of my head:

This class would slightly change the `OpticalSystem.weights` class attribute, transforming it into its more physical interpretation, the filter. Similarly it would allow us to choose the computational cost of our models by having low-importance objects use degraded source spectra with fewer wavelengths than the science target. Similarly, we could specify oversampling rates for important targets etc.

`Scene`: This would store a list of the `Source` objects and determine the control flow for how each is modelled (i.e. image plane convolution for extended sources, current methods for unresolved sources). I envision this class containing the code used to perform convolutions, regularisation, parametric transit models (maybe?) - you get the idea, all the not-PSF modelling code needed to create a final image or parameterise any number of external sources. This class would call the `OpticalSystem` to generate PSFs and then would handle pasting together the PSFs correctly (say we model some with oversampling and some without), and passing them to a potential detector class that maybe will be teased out of the current `OpticalSystem` (more info on this below).

One thing I will say though is that I really hate the term 'Scene'. I'm not entirely sure why, but @Jordan-Dennis agrees with me 😂. I think it feels way too amorphous, but something like `AstrophysicalScene` is too wordy etc... Open to any and all suggestions!

I think this is a really important step, as this will begin to be where we really start to become not just a better PSF modelling engine than poppy based on the same principles, but a full end-to-end astrophysical autodiff forwards modelling framework. We should, in theory, be able to model ANY set of data from ANY telescope with any arbitrary set of sources and parameterisations of telescopes, transit models, regularisations etc.

These additions would open some questions too about what code lives where - does the distortion mapping happen in the `OpticalSystem` or the `Scene` object? If the `Source` object contains the RA, Dec information and the `OpticalSystem` contains the `positions` attribute, how do we coherently mix these two together? Perhaps a base optical system class that only contains the core functionality to be used by the `Scene` object. The current `OpticalSystem` would then inherit from this, and we would use a new class type that it inherits from that doesn't contain `positions`, but rather `pointing` in RA, Dec - helping drive the distortion mapping etc.

This does also partly open the question as to whether maybe we should separate the detector modelling code (which is pretty basic at this point) into its own class type to store distortion functions and whatnot that can be driven by any of the different class types.

Anyway, this is the grand vision that I have in my head. I'm very open to suggestions and am aware that we probably only want to build a very reduced-functionality version to do our specific science first. Having a coherent wider picture of how all this will paste together I think will save us a lot of headaches in the long run. I envision the `dLux/base.py` script then containing these classes (a rough illustrative skeleton of one possible arrangement follows at the end of this post):

Let me know if anything isn't clear here or whatever thoughts you may have!
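To make the class split described above concrete, here is a very rough skeleton of how the pieces might fit together - every name, field and method signature here is a hypothetical illustration, not the actual ∂Lux API:

```python
import equinox as eqx
import jax.numpy as jnp

class Source(eqx.Module):
    """An astrophysical source, parameterised purely on-sky."""
    position: jnp.ndarray   # (RA, Dec) in radians
    spectrum: jnp.ndarray   # (wavelengths, relative weights)
    resolved: bool

class Scene(eqx.Module):
    """A collection of Sources plus the not-PSF modelling code."""
    sources: list

    def angular_offsets(self, pointing):
        # Naive flat-sky subtraction of the telescope pointing
        # (see the conversion sketch earlier in the thread).
        return [src.position - pointing for src in self.sources]

class Detector(eqx.Module):
    """Pixel-level effects: distortion, jitter, flat field, etc."""
    def __call__(self, image):
        return image  # placeholder

class Telescope(eqx.Module):
    """Does the heavy lifting: ties Scene, optics and Detector together."""
    scene: Scene
    optics: object          # e.g. the existing OpticalSystem
    detector: Detector
    pointing: jnp.ndarray   # (RA, Dec) of the optical axis

    def model(self):
        image = 0.0
        offsets = self.scene.angular_offsets(self.pointing)
        for src, offset in zip(self.scene.sources, offsets):
            # hypothetical call: propagate one source through the optics
            image = image + self.optics.propagate(src.spectrum, offset)
        return self.detector(image)
```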