Proposal: Use images as unit of observation instead of sampled points #81

jayqi opened this issue Sep 12, 2024 · 0 comments

Status quo

Currently, the "2. Match an image to each point" step (assign_images.py) keeps the sampled points as the unit of observation in its output.

This is reflected by a few design choices we have:

  • The output geodataframe/GeoPackage file has the points as the rows.
  • We attempt to match as many sampled points as possible to images.
    • We don't allow multiple sampled points to map to the same image. If an image has already been "claimed," a point will try to find another nearby (but slightly farther) image.
  • The primary geometry data of the Point features are still the geolocation of the sampled points and not the geolocation of the images.
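To make the status-quo behavior concrete, here is a minimal sketch of the greedy point-centric matching described above. It is a hypothetical simplification, not the actual assign_images.py code: the function name, plain (x, y) tuples, and Euclidean distance are all illustrative assumptions.

```python
from math import hypot

def assign_images(points, images):
    """Greedy point-centric matching (simplified sketch of the
    status quo): each sampled point claims the nearest image that
    no other point has claimed yet."""
    claimed = set()
    matches = {}  # point index -> image index (or None if exhausted)
    for pi, pt in enumerate(points):
        # Rank candidate images by distance to this point.
        ranked = sorted(
            range(len(images)),
            key=lambda ii: hypot(images[ii][0] - pt[0], images[ii][1] - pt[1]),
        )
        matches[pi] = None
        for ii in ranked:
            if ii not in claimed:
                claimed.add(ii)
                matches[pi] = ii
                break
    return matches

# Two points share the same closest image (index 0); the second
# point falls back to a slightly farther image (index 1).
points = [(0.0, 0.0), (0.1, 0.0)]
images = [(0.05, 0.0), (1.0, 0.0)]
print(assign_images(points, images))  # → {0: 0, 1: 1}
```

The fallback in the inner loop is exactly the "claimed" behavior the second bullet describes: the output stays point-per-row even when two points really want the same image.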

Proposed change

I propose that the output should instead have the images as the unit of observation, with the geolocation of the images as the primary geometry of the geospatial dataset.

  • The sampled points are kind of imaginary. We provided roads that we want to analyze, and from those roads we sampled points, but there isn't actually any data associated with those points. The real data is associated with the imagery and physically located at the images' geolocations.
  • If multiple points have the same closest image, we probably just care about that one image. It doesn't make sense to settle for a farther-away image solely to preserve a one-to-one mapping between points and images.

We should think of the "matching" step as more like a "spatial query": given a dataset of street-level imagery, we are querying a subset of that imagery based on the intersection with a set of evenly spaced points along roads we care about.
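Under the proposal, the matching step becomes something like a nearest-image query with deduplication. Again, this is a hedged sketch under the same assumptions as above (plain tuples, Euclidean distance); the max_dist cutoff is an illustrative parameter, not something the issue specifies.

```python
from math import hypot

def query_images(points, images, max_dist=50.0):
    """Image-centric 'spatial query' (sketch of the proposal):
    return the images that are nearest to at least one sampled
    point, deduplicated, geolocated at the image positions."""
    selected = set()
    for pt in points:
        if not images:
            break
        dists = [hypot(img[0] - pt[0], img[1] - pt[1]) for img in images]
        ii = min(range(len(images)), key=dists.__getitem__)
        if dists[ii] <= max_dist:
            selected.add(ii)  # duplicate selections collapse naturally
    # Output rows are images, not points.
    return [images[ii] for ii in sorted(selected)]

# Both points select image 0; it appears exactly once in the output,
# and no farther image is pulled in as a substitute.
pts = [(0.0, 0.0), (0.1, 0.0)]
imgs = [(0.05, 0.0), (1.0, 0.0)]
print(query_images(pts, imgs))  # → [(0.05, 0.0)]
```

Note how this flips the two status-quo behaviors: duplicates collapse instead of triggering a fallback, and the returned geometries are the images' geolocations rather than the sampled points'.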

This change would have the following interactions or implications with these open issues:
