Way to compare multiple shadows at different times #20

Open
GalenReich opened this issue Jul 11, 2024 · 3 comments
Labels
enhancement New feature or request

Comments

@GalenReich
Collaborator

By comparing multiple images of the same location taken at different times, it is possible to partially disambiguate the location by overlaying the resulting rings.

It would be good if the Shadow Finder tool had some way of supporting this type of search.

Any PR to close this issue should consider the most user-friendly way to support this change in the existing Notebook or in a new Notebook.

At a minimum it should support shadow measurements for two times, but supporting an arbitrary number of inputs would be interesting!

An Example

Here is an example made by running the tool twice and overlaying (lighten only) the resulting outputs (a rough script for this overlay step is sketched at the end of this example):

[image: two outputs overlaid with lighten only]

Similarly, overlaying (darken only) shows only the matching locations:

[image: two outputs overlaid with darken only]

This works in both UTC and Local modes, though with local times the locations are less intuitive:

[image: overlay of two outputs using local times]
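For concreteness, a rough sketch of the overlay step described above, assuming the tool has been run twice and the two outputs saved as images in which the matching ring is brighter than the background (the filenames here are made up):

```python
from PIL import Image, ImageChops

# Two ShadowFinder outputs for the same area at different times (made-up names).
a = Image.open("shadowfinder_0900utc.png").convert("RGB")
b = Image.open("shadowfinder_1500utc.png").convert("RGB")

# "Lighten only": keep the lighter pixel at each position, so both rings stay visible.
ImageChops.lighter(a, b).save("overlay_lighten.png")

# "Darken only": keep the darker pixel, so the bright ring pixels survive only
# where the two rings overlap, i.e. the candidate locations.
ImageChops.darker(a, b).save("overlay_darken.png")
```

Working on the underlying arrays (as discussed in the comments below) would of course be cleaner than round-tripping through rendered images.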

GalenReich added the enhancement label on Jul 11, 2024
@Roelandgit

Hi,
I am interested in looking into this, but I have a few questions to make sure I am understanding everything correctly.
To me, this seems like applying a logical_and to two or more location_likelihoods arrays, so that only locations that are possible for every input remain (a rough sketch of what I mean is at the end of this comment). Is this correct?
Looking at the code, my understanding is that in the location_likelihoods array, values close to 0 mean a good fit/match, and values far from 0 are improbable or impossible (NaN). Taking absolute values treats over- and undershoot the same, and applying the colormap with a cutoff at 0.2 then highlights only the likely locations. Is this correct?
What is the rationale for this representation, and for this cutoff?
To me it seems somewhat counterintuitive; data scaled between 0 and 1 would make more sense to me, though I understand and agree with the relative-difference calculation that underlies the array.
I also think the choice of representation for the likelihood data will be important when addressing the other issues related to uncertainty/measurement errors in the time/length inputs.
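A minimal sketch of what I mean, assuming each run produces a NumPy array with the semantics described above (small absolute values = good fit, NaN = impossible); the names here are made up:

```python
import numpy as np

def combine_as_masks(likelihood_arrays, cutoff=0.2):
    """Combine per-time relative-difference grids into one boolean mask.

    A cell survives only if every input is within the cutoff (NaN cells
    compare as False, so impossible locations drop out automatically).
    """
    masks = [np.abs(arr) <= cutoff for arr in likelihood_arrays]
    return np.logical_and.reduce(masks)

# e.g. combined = combine_as_masks([likelihoods_morning, likelihoods_afternoon])
```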

@GalenReich
Collaborator Author

Hi, thanks for the question! There are definitely elements of the code-base that are poorly named and could use a refactor!

At the moment, the rationale is that the relative difference (stored in the poorly-named location_likelihoods object) represents the relative change between the predicted shadow length and the measured shadow length. A value of 0 means there was no difference between the predicted and measured shadows, so that location is a highly likely match. A value of 0.2 means there is a 20% difference in shadow lengths, and this is used as a soft boundary for matching.
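As a toy illustration of that idea (not the actual ShadowFinder code; the names and numbers are made up):

```python
import numpy as np

def relative_shadow_difference(predicted_lengths, measured_length):
    """Relative difference between predicted and measured shadow lengths.

    0 means a perfect match; 0.2 means the predicted shadow is 20% longer
    or shorter than the one measured in the image.
    """
    return (predicted_lengths - measured_length) / measured_length

diff = relative_shadow_difference(np.array([1.0, 1.1, 1.5]), measured_length=1.0)
likely = np.abs(diff) <= 0.2   # soft 20% boundary -> [ True,  True, False]
```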

If the data were binary, combining the sets would be a simple logical_and. Here we still have continuous data, so a different metric is needed. The images above are achieved by taking either max(value1, value2) or min(value1, value2) at each location. Other metrics may be more sensible/balanced.
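In NumPy terms, roughly (toy numbers, made-up names):

```python
import numpy as np

# Toy relative-difference grids from two separate runs.
values1 = np.array([0.05, 0.10, 0.60])
values2 = np.array([0.90, 0.15, 0.05])

# max: a cell is only as good as its worst match, so low values survive
# only where both runs agree (only the matching locations remain).
strict = np.maximum(values1, values2)   # [0.90, 0.15, 0.60]

# min: a cell is as good as its best match, showing the union of both rings.
union = np.minimum(values1, values2)    # [0.05, 0.10, 0.05]
```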

I hope that helps, feel free to ask any follow-up questions too!

@Roelandgit

Thanks for the quick reply, and happy to note I'm understanding OK.
I've submitted a pull request #25.

It is not ready yet, but because it also contains quite a bit of refactoring, I would like feedback before proceeding further. Tests are passing, and I also added a test of my own.

I'm not sure whether opening the PR at this stage is the intended approach, so please let me know if another workflow is preferred. If there is general agreement on the approach, I can continue. I haven't spent any time on the Colab and CLI parts yet, so those are TODO anyway.
