
Shadowfinder 2.0b #24

Open
GaborFriesen opened this issue Nov 7, 2024 · 6 comments
@GaborFriesen

Now we use object height/shadow length at a known time to identify a list of potential locations.
If you have a shadow at 2 different times, with angle A and angle B, then that change in angle for these times corresponds to a list of potential locations.
Having a section that would allow for an input like this could increase the potential use cases for the shadow finder.

In other words, the current inputs are:
Object height + shadow length + known time = locations.
I propose adding a new section with the inputs:
Shadow angle 1 + time 1 + shadow angle 2 + time 2 (essentially: shadow angle difference + time difference) = locations.
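To illustrate the idea, here is a minimal sketch of how an angle-difference lookup could work. This is not the ShadowFinder API; the function name and the precomputed azimuth grids are assumptions for the sake of the example.

```python
# Sketch of the proposed angle-difference matching (hypothetical helper, not
# the actual ShadowFinder API). Assumes we can precompute the sun's azimuth
# for every grid cell at the two times; we then keep cells whose azimuth
# *change* matches the observed change in shadow angle.
import numpy as np

def match_by_angle_difference(azimuth_t1, azimuth_t2, observed_diff, tolerance=1.0):
    """Return a boolean mask of grid cells whose sun-azimuth change between
    the two times matches the observed shadow-angle difference.

    azimuth_t1, azimuth_t2 : 2D arrays of sun azimuths (degrees) per lat/lon cell
    observed_diff          : shadow angle at time 2 minus angle at time 1 (degrees)
    tolerance              : allowed mismatch in degrees
    """
    # Wrap both differences into [-180, 180) so e.g. 350° and -10° compare equal
    grid_diff = (azimuth_t2 - azimuth_t1 + 180.0) % 360.0 - 180.0
    observed = (observed_diff + 180.0) % 360.0 - 180.0
    return np.abs(grid_diff - observed) <= tolerance

# Toy example with synthetic azimuth grids (2x2 "Earth"):
az1 = np.array([[100.0, 110.0], [120.0, 130.0]])
az2 = np.array([[115.0, 140.0], [125.0, 160.0]])
mask = match_by_angle_difference(az1, az2, observed_diff=15.0, tolerance=2.0)
# Only the cell whose azimuth moved by ~15 degrees survives.
```

Note that this only uses the *difference* between the two observations, so the absolute shadow direction (e.g. relative to geographic north) never needs to be known.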

@GaborFriesen

Visualisation:
[Image: Theory]

@JackCollins91

I'm interested in taking this on. I can at least make an initial assessment.

@GaborFriesen

That's all I could ask for. Thanks a lot for checking it out!

@GaborFriesen

Also, just to be clear @JackCollins91: the focus of the inputs should be on the shadow angle difference. If you don't know the location (which is exactly when you'd use this), you won't really know the exact angle relative to, e.g., geographic north.

@JackCollins91

JackCollins91 commented Nov 30, 2024

Hi @GaborFriesen and @GalenReich ,
I've looked at the code implementation and had a think about this use case.
First off, I'd like to propose that we contact someone with deeper knowledge of chronolocation math/physics, because there might be a more sophisticated way of solving this issue (I'm just a programmer, not a mathematician or astrophysicist).

But looking over the code, I can propose the following.

Currently, our implementation for determining possible locations from shadow/object height is nice and simple: for the given date-time, we calculate the angle of the sun at every location on Earth (on a grid), calculate the angle of the sun implied by the shadow/object lengths, and then match up which locations on Earth would have had that angle (plus/minus a tolerance, because not knowing the exact time, the flatness of the ground, etc. introduces some uncertainty).
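The matching step described above can be sketched roughly like this (simplified; the real ShadowFinder code computes sun positions with an astronomy library, and the function name here is illustrative only):

```python
# Minimal sketch of the "match altitude against a grid" step. The sun
# altitude grid is assumed to have been precomputed for the given date-time.
import numpy as np

def candidate_mask(sun_altitude_grid, object_height, shadow_length, tolerance=0.5):
    """Match grid cells whose sun altitude agrees with the photographed shadow.

    sun_altitude_grid : 2D array of sun altitudes (degrees) per lat/lon cell
    tolerance         : plus/minus slack (degrees) for time/ground uncertainty
    """
    # Altitude implied by the shadow: tan(altitude) = object_height / shadow_length
    implied = np.degrees(np.arctan2(object_height, shadow_length))
    return np.abs(sun_altitude_grid - implied) <= tolerance

# Toy 2x2 grid; an equal object and shadow length implies a ~45 degree sun.
grid = np.array([[44.0, 45.1], [46.0, 30.0]])
mask = candidate_mask(grid, object_height=1.0, shadow_length=1.0, tolerance=0.5)
```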

Ok, so we want to be able to narrow down the possible locations if we know the sun's angle in multiple photos, spanning a few time steps, of the same location.

The simple answer, I think, is that this is just the intersection of the sets of possible locations given angles A and B. For example, picture A gives us possible locations Dubai, Berlin, and New York; picture B gives us an angle consistent with Dubai, Moscow, and Mumbai. The only location in common is Dubai. Phrased mathematically, we keep the lat/longitude cells that have a high probability in both cases (and drop grid cells with a high probability in only one).
Does this make sense?
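On boolean lat/lon masks this set intersection is just an elementwise AND, e.g.:

```python
# Intersection of two per-photo candidate masks: a cell survives only if it
# is plausible for *both* observations. (Toy 2x2 grids for illustration.)
import numpy as np

mask_a = np.array([[True, True], [False, True]])   # plausible cells from photo A
mask_b = np.array([[True, False], [False, True]])  # plausible cells from photo B
combined = mask_a & mask_b                          # only cells plausible in both
```

If the grids hold probabilities rather than booleans, the equivalent operation would be an elementwise product followed by a threshold.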

I understand @GaborFriesen would like to stress that the change in angle may be what uniquely identifies a location, beyond simply repeating the chronolocation. For this, I think I'd need guidance from someone who can provide an equation that chronolocates based on change in angle.

Again, I think it might need a physicist or someone who really understands the math for chronolocation to advise on this - this isn't just a programming task.

But if my solution above sounds ok, I would propose to implement it like this for the sake of good design:

We can add a function that takes a list of dictionaries; each dictionary must contain key-values for the object height, shadow length, and date-time used in a normal shadow-find operation. Another parameter would allow the operation to parallelize across cores on the machine. We then run the list of shadow-finding operations (optionally in parallel for faster run time) and return the intersection of all the sets of lat/longs.

This way, the user can input not just two but any number of cases. The operation can be parallelized, which is probably important because this can be a long-running computation. Finally, the operation can be broken up so that expert users who want the raw list of output lats/longs can access it, while more casual users get the whole functionality delivered as a single function.
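A rough sketch of what that API could look like (names are illustrative, not the actual ShadowFinder interface; the single-case function is a stand-in stub so the combination logic can be demonstrated):

```python
# Hypothetical multi-observation runner: run one shadow-find per case
# (optionally in parallel) and intersect the resulting candidate masks.
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
import numpy as np

def find_shadow_mask(case):
    """Stand-in for a single shadow-find run; returns a boolean lat/lon mask.
    A real version would compute sun angles from the case's height/length/time."""
    return np.array(case["mask"])

def multi_shadow_find(cases, parallel=False):
    if parallel:
        with ThreadPoolExecutor() as pool:
            masks = list(pool.map(find_shadow_mask, cases))
    else:
        masks = [find_shadow_mask(c) for c in cases]
    # Intersect all per-case masks into a single set of candidate cells
    return reduce(np.logical_and, masks)

cases = [
    {"mask": [[True, True], [True, False]]},
    {"mask": [[True, False], [True, True]]},
    {"mask": [[True, True], [False, True]]},
]
result = multi_shadow_find(cases, parallel=True)
```

Threads are used here for simplicity; if the per-case computation is CPU-bound, a `ProcessPoolExecutor` (or an existing parallelization dependency) would be the natural swap.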

I could deliver this along with some unit tests. If there's no parallelization package in use already, that might need to be a new dependency.

Let me know your thoughts.

@JackCollins91

Just made PR #26 to address this issue - but the way I implemented the solution is, of course, open to feedback. Let me know what you think.
[Figure: example_multi_shadow_find_figure]
