Support adaptive resampling for any transform #134
Have you seen this work by Jason Davies? https://www.jasondavies.com/maps/tile/
Interesting. Thanks for the link; it's very helpful.
I hadn't considered this. In screen space, the view area polygon is a rectangle which makes intersection tests really easy. Inverting the screen polygon sacrifices that optimization opportunity.
I dug into the code to see if I could understand how this works. It's been minified, so unless the source is available (I couldn't find a link anywhere) it's a little hard to read, but I get the gist of the recursive tile-finding algorithm. I'm going to look into using a similar method for the mercator tile project I'm playing with, but I still think there might be a use case for 'un-projecting' screen geometries. For example, in this notebook I'm attempting to show how a rectangular area in an orthographic projection is distorted when shown on a mercator projection. To do this I'm pretty sure I would need to invert the geometry of the screen polygon, but without resampling this comes out wrong. It would be possible to implement custom resampling within the inverse transform, but being able to leverage a generalized version of d3's resampling would make the task much easier.
I've had a chance to think about this some more and play around. I implemented tile visibility detection in a notebook. It works and is reasonably performant, but I've had to do a crude form of resampling in order to transform tile polygons into spherical polygons (since the north and south edges of a tile aren't necessarily great arcs). Looking at Jason's solution I see a variant of the same idea.

I don't think this can be avoided. Ultimately we're doing collision detection between the screen polygon and a tile polygon. We need to get them into the same coordinate system in order to compute their intersection, and there's no coordinate system in which both of those entities are rectangles, so at least one of them is going to have to be resampled to preserve its shape. I'll keep thinking about what a reusable API for adaptive resampling would look like. For this use case, the surface could be as simple as a …
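The crude densification step described above might look something like this sketch (not the notebook's actual code; `tileRing` and the step count are made-up names): convert a slippy-map tile (x, y, z) to a lon/lat ring, inserting extra points along the north and south edges so the constant-latitude parallels are approximated by many short segments rather than treated as single great arcs.

```javascript
// Standard slippy-map tile corner -> lon/lat.
function tile2lon(x, z) { return (x / 2 ** z) * 360 - 180; }
function tile2lat(y, z) {
  const n = Math.PI * (1 - (2 * y) / 2 ** z);
  return (Math.atan(Math.sinh(n)) * 180) / Math.PI;
}

// Build a closed lon/lat ring for the tile, inserting `steps` extra points
// along the top and bottom edges (the non-great-arc parallels).
function tileRing(x, y, z, steps = 8) {
  const ring = [];
  const latN = tile2lat(y, z), latS = tile2lat(y + 1, z);
  const lonW = tile2lon(x, z), lonE = tile2lon(x + 1, z);
  for (let i = 0; i <= steps; ++i)   // north edge, west -> east
    ring.push([lonW + ((lonE - lonW) * i) / steps, latN]);
  for (let i = 0; i <= steps; ++i)   // south edge, east -> west
    ring.push([lonE + ((lonW - lonE) * i) / steps, latS]);
  ring.push(ring[0]);                // close the ring
  return ring;
}
```

The east and west edges are meridians, which are already great arcs, so only the horizontal edges need densifying here; an adaptive version would subdivide based on error rather than a fixed step count.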
(We can't comment on notebooks yet, so first let me tell you here how much I love this notebook!) Two options I wanted to develop in my prototype (which was far from being as advanced as yours, and I will lay to rest as I switch to your approach):
(For the visibility test, I was using ….) Also, I've removed ….
Thanks! Those are both great ideas, and I like the cleanup you've done in the notebook you linked.

Along the same lines, I was considering trying something where tiles from different levels (i.e. different z) could be returned together, because some projections call for different levels of detail in different areas. For example, on the orthographic projection it's wasteful to download z=4 tiles for the whole globe, because the tiles near the poles cover such small areas. You could use z=3 or z=2 tiles in the polar regions instead, by (as you said) halting when the projected area is smaller than a target size. I'm not sure this would work for vector tiles (since boundaries might not line up on tiles of different resolutions), but it would be great for raster tiles, where you don't want to download more pixels than you need.

Regarding …
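A hypothetical sketch of that halting rule (`selectTiles`, `screenArea`, and `targetArea` are all made-up names, and the visibility test is omitted): starting from z=0, keep a tile once its projected screen area is at or below the target, otherwise split it into its four children and recurse.

```javascript
// screenArea(x, y, z): projected screen area of a tile, supplied by caller.
// Tiles whose projected area exceeds targetArea are split into children,
// so regions where the projection shrinks tiles end up at lower zooms.
function selectTiles(screenArea, targetArea, maxZoom) {
  const out = [];
  (function recurse(x, y, z) {
    if (z >= maxZoom || screenArea(x, y, z) <= targetArea) {
      out.push([x, y, z]);
      return;
    }
    recurse(2 * x, 2 * y, z + 1);
    recurse(2 * x + 1, 2 * y, z + 1);
    recurse(2 * x, 2 * y + 1, z + 1);
    recurse(2 * x + 1, 2 * y + 1, z + 1);
  })(0, 0, 0);
  return out;
}
```

With an area function that varies by latitude (as under orthographic), the recursion naturally bottoms out earlier near the poles, yielding the mixed-zoom set described above.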
Here's a proposal for a solution.
Very cool stuff, Fil. Quick question:

Is the second part of this statement a reference to specified behavior in RFC 7946? Or is it merely a common convention? I couldn't find any explicit statement in the spec about how sparse lines should be interpolated.
This part:
d3-geo and TopoJSON use spherical coordinates rather than Cartesian coordinates.
Yes, and if there was any doubt, the next paragraph closes the subject, in a flat-Earth way:
From what I understand, things are evolving: PostGIS introduced a geography (spherical) option for ST_Centroid just last September (2017! in version 2.4.0). Users of GIS often rely on folk recipes like "choose an appropriate projection before you calculate the centroid". See https://bl.ocks.org/Fil/608b3ca94a008f2ae01c6273fafb25e7

I like D3 because it really tries not to cut corners in that area (though I think it still does in a few places, for example in https://github.com/d3/d3-geo-projection/blob/master/src/project/clockwise.js).

(This discussion is moving a bit far from the original topic -- maybe the issue should be closed, but the discussion is needed.)
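To see the flat-Earth trap in a toy two-point example (not PostGIS's or d3-geo's actual implementation; both function names are made up): averaging coordinates fails for points straddling the antimeridian, while averaging 3-D unit vectors does not.

```javascript
const rad = Math.PI / 180, deg = 180 / Math.PI;

// Flat-Earth centroid: arithmetic mean of lon/lat pairs.
function planarCentroid(points) {
  const n = points.length;
  return points.reduce(([sx, sy], [x, y]) => [sx + x / n, sy + y / n], [0, 0]);
}

// Spherical centroid: average the points as 3-D unit vectors,
// then convert the summed vector back to lon/lat.
function sphericalCentroid(points) {
  let x = 0, y = 0, z = 0;
  for (const [lon, lat] of points) {
    x += Math.cos(lat * rad) * Math.cos(lon * rad);
    y += Math.cos(lat * rad) * Math.sin(lon * rad);
    z += Math.sin(lat * rad);
  }
  return [Math.atan2(y, x) * deg, Math.atan2(z, Math.hypot(x, y)) * deg];
}

const pts = [[-170, 0], [170, 0]];
planarCentroid(pts);    // [0, 0] -- the far side of the Earth
sphericalCentroid(pts); // longitude ±180 -- on the antimeridian, as expected
```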
I've been working on a project that requires fetching the appropriate OpenStreetMap tiles to cover the visible region of a non-Mercator map. This involves "un-projecting" the rectangle `[[0, 0], [width, height]]` (which represents the canvas area) using the selected projection, then re-projecting it into Mercator space and using the bounding box there to fetch the right map tiles (with the aid of d3-tile). Here's some code as an example:
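The snippet itself isn't preserved here, so the following is only a rough self-contained sketch of the flow just described, with a toy equirectangular projection standing in for the real d3 projection and plain Web-Mercator tile math standing in for d3-tile:

```javascript
const width = 960, height = 500;

// Toy projection invert: screen (x, y) -> (lon, lat), covering ±180°/±60°.
const toyInvert = ([x, y]) => [
  ((x - width / 2) / width) * 360,
  ((height / 2 - y) / height) * 120,
];

// lon/lat -> fractional Web-Mercator tile coordinates at zoom z.
function mercatorTile([lon, lat], z) {
  const k = 2 ** z, phi = (lat * Math.PI) / 180;
  return [
    ((lon + 180) / 360) * k,
    ((1 - Math.log(Math.tan(phi) + 1 / Math.cos(phi)) / Math.PI) / 2) * k,
  ];
}

// Un-project the four screen corners, then take their Mercator bounding box.
const screen = [[0, 0], [width, 0], [width, height], [0, height]];
const corners = screen.map(toyInvert).map((p) => mercatorTile(p, 2));
const xs = corners.map((p) => p[0]), ys = corners.map((p) => p[1]);
const bbox = [
  Math.floor(Math.min(...xs)), Math.floor(Math.min(...ys)),
  Math.ceil(Math.max(...xs)), Math.ceil(Math.max(...ys)),
]; // tile x/y range to fetch at z = 2
```

Note that sampling only the four corners is exactly where the problem below bites: for a curved projection boundary, the corners alone under-estimate the region, which is why resampling the screen polygon matters.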
The problem with this is that transforms don't have built-in resampling like projections do, so the lines in the `screen` polygon get transformed into great arcs, even though the edges of the screen in projection space are actually curves in spherical space.

I dug into the resampling code and I think I understand how it works. Now I'm mulling over whether it would be possible to generalize this code so that any transform could use it, and then expose it as `d3.geoResample` or something.

Some of this function's assumptions would break for transforms that aren't from (lambda, phi) to (x, y). For instance, the midpoint of the line from v1 to v2 is only (v1 + v2)/|v1 + v2| if v1 and v2 are Cartesian unit vectors. And the distance from the projected intermediate point (x2, y2) to that line is a Euclidean distance, which would be incorrect for an arc in spherical coordinates.
To generalize this, we might need to create a first-class concept of a coordinate system, which provides methods like measure(p1, p2) and midpoint(p1, p2). Then the resampling function could use those methods to do its calculations, rather than hard-coding the appropriate operations for (lambda, phi) -> (x, y).
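As a sketch of what that interface might look like (all names here are illustrative, not a proposed d3 API): a coordinate system supplies midpoint() and measure(), and a generic resampler subdivides wherever the transformed midpoint strays too far from the midpoint of the transformed chord.

```javascript
// A coordinate system as a pair of methods. This one is planar:
// arithmetic midpoint and Euclidean distance. A spherical system would
// use the normalized vector sum and great-circle distance instead.
const planar = {
  midpoint: ([x0, y0], [x1, y1]) => [(x0 + x1) / 2, (y0 + y1) / 2],
  measure: ([x0, y0], [x1, y1]) => Math.hypot(x1 - x0, y1 - y0),
};

// Recursively resample the transform of segment [a, b]. `input` and
// `output` are coordinate systems; subdivision stops when the transformed
// midpoint is within `tolerance` of the chord's midpoint.
function resample(transform, input, output, tolerance, a, b, depth = 16) {
  const pa = transform(a), pb = transform(b);
  const m = input.midpoint(a, b), pm = transform(m);
  const error = output.measure(pm, output.midpoint(pa, pb));
  if (depth === 0 || error <= tolerance) return [pa, pb];
  const left = resample(transform, input, output, tolerance, a, m, depth - 1);
  const right = resample(transform, input, output, tolerance, m, b, depth - 1);
  return left.concat(right.slice(1)); // drop the duplicated midpoint
}
```

Running it on the parabola (x, y) -> (x, x²) with the planar system on both sides subdivides [0, 1] until the midpoint error drops below the tolerance; swapping in spherical midpoint/measure implementations would recover behavior like d3's projection resampling (which additionally uses perpendicular distance to the chord).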
But generalizing this might get complicated quickly. There's an infinity of possible coordinate systems, and many of them don't make any sense. Would a generalized resampling function have to support them all? And even ignoring these problems, introducing the notion of a coordinate system would mean new APIs. You'd need to be able to retrieve the input and output coordinate systems from a transform -- which means you'd also have to tell a transform what its coordinate systems are when you create it. Pretty quickly this starts to sound like complexity on the same scale as the graphics pipeline (#78).
I'm not sure what the right solution for this is (or if it's even feasible); I just wanted to share my thoughts. I think the use cases for resampling on arbitrary transforms are pretty cool. It would allow you to do something akin to `projection.invert` but on whole geometries rather than just points, which permits working with pre-projected vector geometries (rasters are already possible by iterating over pixels in 2D and calling `projection.invert` on each point), as well as reverse-projecting viewports and user selections (no matter the shape, and no matter the projection).