jillesvangurp · 3 months ago
Nice approach. It reminds me of one I saw used to resolve coordinates to countries. Instead of loading all country polygons, the team created a bitmap and used colors to map each pixel to a country code. The bitmap wasn't super large and compressed pretty nicely in PNG format. This worked well enough, and it reduced the country lookup to simply reading the color at a coordinate. Neat trick. And you could probably detect that you are dealing with an edge case by looking at neighboring pixels, and fall back to something more expensive if you hit one.
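
A minimal sketch of that trick, assuming a hypothetical equirectangular bitmap whose pixel values index a palette of country codes (the real data layout was surely different):

```typescript
// Hypothetical layout: an equirectangular image where each pixel
// stores an index into a palette mapping colors to country codes.
type CountryBitmap = {
  width: number;       // pixels spanning longitude -180..180
  height: number;      // pixels spanning latitude 90..-90
  pixels: Uint16Array; // one palette index per pixel
  palette: string[];   // palette index -> ISO 3166-1 alpha-2 code
};

function countryAt(bmp: CountryBitmap, lat: number, lon: number): string {
  // Equirectangular projection onto the pixel grid, clamped to the edges.
  const x = Math.max(0, Math.min(bmp.width - 1, Math.floor(((lon + 180) / 360) * bmp.width)));
  const y = Math.max(0, Math.min(bmp.height - 1, Math.floor(((90 - lat) / 180) * bmp.height)));
  return bmp.palette[bmp.pixels[y * bmp.width + x]];
}

// If any neighboring pixel disagrees, we are near a border and should
// fall back to something more expensive (e.g. a real point-in-polygon test).
function isNearBorder(bmp: CountryBitmap, lat: number, lon: number): boolean {
  const here = countryAt(bmp, lat, lon);
  const dLat = 180 / bmp.height;
  const dLon = 360 / bmp.width;
  return [[-dLat, 0], [dLat, 0], [0, -dLon], [0, dLon]]
    .some(([dy, dx]) => countryAt(bmp, lat + dy, lon + dx) !== here);
}
```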

And of course with edge cases, there are lots of them, but mostly it's fine. One case that comes to mind is the border town of Baarle-Nassau, on the border between the Netherlands and Belgium. This village has some of the weirdest borders in the world: there are Dutch enclaves inside Belgian exclaves. In some cases the border runs through houses, and you can enter a building in one country and leave it in another. Some of the exclaves are just a few meters across. There are a few more examples like this around the world.

Another issue is the fractal nature of polygons. I once found a polygon for New Zealand that was around 200MB and broke my attempts to index it. That doesn't matter for resolving country codes, of course, because it is an island. But it's one reason I eventually implemented the Douglas-Peucker simplification algorithm mentioned in the article.
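
For reference, the recursive form of Douglas-Peucker fits in a few lines. A planar sketch (geographic data strictly speaking wants a geodesic distance, but for simplification this is usually close enough):

```typescript
type Point = { x: number; y: number };

// Perpendicular distance from p to the line through a and b.
function perpDist(p: Point, a: Point, b: Point): number {
  const dx = b.x - a.x, dy = b.y - a.y;
  const len = Math.hypot(dx, dy);
  if (len === 0) return Math.hypot(p.x - a.x, p.y - a.y);
  return Math.abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

// Douglas-Peucker: keep the farthest point if it deviates more than
// epsilon from the chord between the endpoints, and recurse on both halves.
function simplify(points: Point[], epsilon: number): Point[] {
  if (points.length < 3) return points;
  let maxDist = 0, index = 0;
  for (let i = 1; i < points.length - 1; i++) {
    const d = perpDist(points[i], points[0], points[points.length - 1]);
    if (d > maxDist) { maxDist = d; index = i; }
  }
  if (maxDist <= epsilon) return [points[0], points[points.length - 1]];
  const left = simplify(points.slice(0, index + 1), epsilon);
  const right = simplify(points.slice(index), epsilon);
  return left.slice(0, -1).concat(right); // drop duplicated split point
}
```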

westnordost · 3 months ago
The bitmap approach you describe allows for immediate (i.e. O(1)) lookup of region by coordinate, which is pretty neat. Space-efficiency-wise, a bitmap (plus an index that maps color to country) might not be the most efficient data structure, though: there are more than 256 countries, so you already need 16 bits per pixel instead of 8. Then there's the additional complexity that, if you actually want the bitmap to be viewable by humans, you need to make sure that at least the colors of neighbouring countries are sufficiently distinct.

Anyway, a Kotlin library I wrote uses a similar technique to make requests for the majority of locations immediate, while also handling the edge cases - i.e. when querying a location near a border.

https://github.com/westnordost/countryboundaries (also available in Rust)

What it does is slice up the input geometry (e.g. GeoJSON) into many small cells in a raster. So when querying for a location, one doesn't need to do point-in-polygon checks against potentially huge polygons, just against the little slices that fall in the cell one is querying. And of course, if a country completely covers a cell, no point-in-polygon check is needed at all. All this slicing is done in a preprocessing step, so the actual library consumes a serialized data structure that is already in this sliced-up format.
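
The lookup then reduces to roughly this (a hypothetical sketch; the library's actual data model and serialization format differ):

```typescript
// A cell is either fully covered by one country, or holds small
// polygon slices clipped to the cell during preprocessing.
type Cell =
  | { kind: "full"; country: string }
  | { kind: "sliced"; slices: { country: string; polygon: [number, number][] }[] };

type Grid = { cols: number; rows: number; cells: Cell[] };

// Standard ray-casting point-in-polygon test ([lat, lon] vertices).
function pointInPolygon(lat: number, lon: number, poly: [number, number][]): boolean {
  let inside = false;
  for (let i = 0, j = poly.length - 1; i < poly.length; j = i++) {
    const [latI, lonI] = poly[i], [latJ, lonJ] = poly[j];
    if ((lonI > lon) !== (lonJ > lon) &&
        lat < ((latJ - latI) * (lon - lonI)) / (lonJ - lonI) + latI) {
      inside = !inside;
    }
  }
  return inside;
}

function countryFor(grid: Grid, lat: number, lon: number): string | undefined {
  const col = Math.max(0, Math.min(grid.cols - 1, Math.floor(((lon + 180) / 360) * grid.cols)));
  const row = Math.max(0, Math.min(grid.rows - 1, Math.floor(((90 - lat) / 180) * grid.rows)));
  const cell = grid.cells[row * grid.cols + col];
  if (cell.kind === "full") return cell.country; // fast path: no geometry test
  // Slow path: point-in-polygon, but only against the small slices
  // clipped to this cell, never against a full country polygon.
  for (const s of cell.slices) {
    if (pointInPolygon(lat, lon, s.polygon)) return s.country;
  }
  return undefined;
}
```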

I needed it to be fast because my app displays a lot of POIs on the map, with logic that depends on which country/state each POI is located in.

jillesvangurp · 3 months ago
There are 249 countries with an ISO code, so 8 bits might be enough. It's not that bad. But even at 32 bits per pixel it would probably be fine, and you could cram in some more data.

There are many similar things of course, but nothing that was multiplatform, which I needed. I actually created a multiplatform Kotlin library for working with language and country codes a few months ago: https://github.com/jillesvangurp/ko-iso

It seems we have some shared interests. I'll check out your library.

What you describe is a nice strategy for indexing things. I've done some similar things. Another library I maintain (jillesvangurp/geogeometry) allows you to figure out which map tiles cover a polygon. Map tiles are nice because they are basically quad tree paths. I have a similar algorithm that does the same with geohashes. You could use both for indexing geospatial stuff.
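
The tile/quadtree correspondence is easy to see in code. A sketch of the standard slippy-map (Web Mercator) tiling, where each zoom level appends one base-4 digit to the quadkey:

```typescript
// Convert a coordinate to slippy-map tile indices at a given zoom.
// (Web Mercator is only defined up to roughly +/- 85.05 deg latitude.)
function latLonToTile(lat: number, lon: number, zoom: number): { x: number; y: number } {
  const n = 2 ** zoom;
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x, y };
}

// A tile address as a quadtree path: one base-4 digit per zoom level.
function quadKey(x: number, y: number, zoom: number): string {
  let key = "";
  for (let z = zoom; z > 0; z--) {
    const mask = 1 << (z - 1);
    key += ((x & mask ? 1 : 0) + (y & mask ? 2 : 0)).toString();
  }
  return key;
}
```

Children of a tile share its quadkey as a prefix, so prefix matching gives you hierarchical spatial queries, which is the same idea geohashes are built on.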

Slicing up the polygons sounds interesting. I've been meaning to have a go at intersect/union type operations on geometries. I recently added a boolean intersects check for whether two geometries intersect each other; I already had a containment check.

tallytarik · 3 months ago
I remember seeing this technique in a video by Sebastian Lague: https://youtu.be/sLqXFF8mlEU?t=787

Really cool

som · 3 months ago
Great approach.

Worth noting that the coordinates in the 90kb (gz) `coord2state.min.js` are stored with 6 decimal places, which suggests an accuracy (<1m) that may not be present in the simplified data.

Before increasing tolerance to decrease file size, you could consider lowering this decimal precision to 5, 4, or even 3 decimals, given the "country, state, or city" requirement.
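
For scale, each decimal place is roughly a factor of 10 in ground precision: 5 decimals is about 1.1m, 4 about 11m, and 3 about 110m of longitude at the equator. A sketch of rounding GeoJSON-style coordinate arrays before serializing (a hypothetical helper, not part of coord2state):

```typescript
// GeoJSON coordinates are arbitrarily nested arrays of numbers.
type Coords = number | Coords[];

// Round every coordinate to the given number of decimal places.
// 5 decimals ~ 1.1 m, 4 ~ 11 m, 3 ~ 110 m of longitude at the equator.
function roundCoords(coords: Coords, decimals: number): Coords {
  const f = 10 ** decimals;
  if (typeof coords === "number") return Math.round(coords * f) / f;
  return coords.map((c) => roundCoords(c, decimals));
}

// e.g. roundCoords([[-75.163526, 39.952583]], 4)
//   -> [[-75.1635, 39.9526]]
```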

I also like the idea of using a heavily cached, heavily compressed image that is perfect for the >95% of the country that isn't within a pixel of a border, with a subsequent request for a heavily cached vector tile covering any lat/lng within your 1px tolerance.

alexmolas · 3 months ago
I used to work at a logistics company where we had to map latitude and longitude to specific addresses. One of the first things I learnt was to avoid storing coordinates at 6 decimals of precision. Also, this XKCD was shared a lot: https://xkcd.com/2170/
fergonco · 3 months ago
That XKCD is very funny. BTW:

> You are pointing to Waldo on a page... on a specific date. Because of tectonic plate movement.

tantalor · 3 months ago
You could save a bunch of space by encoding the data in a compact binary format and then loading it into a Float16Array.

In a .js file, each character is UTF-16 (2 bytes). Your current encoding uses 23 characters per coordinate, or 46 bytes.

Using 16-bit floats for lat/lon gives you accuracy down to 1 meter. You would need 4 bytes per coordinate. So that's a reduction by 91%.

You can't store raw binary bytes in a .js file, so it would need to be a separate file. Or you can use base64 encoding (33% bigger than raw binary) in a .js file (more like 6 bytes per coordinate).

(Edited to reflect .min.js)

pixelesque · 3 months ago
> Using 16-bit floats for lat/lon gives you accuracy down to 1 meter.

Not for longitude it doesn't, once absolute values exceed 128: for example, the next representable float16 after 132.0 is 132.125.

float16 precision for values above 16 is already pretty poor.

Converting that discrepancy (132.125 - 132.0 = 0.125 degrees) to distance gives on the order of 10 km.

Did you maybe mean fixed-point? (But even then 16 bits isn't enough precision for 1m.)
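
For 1m you'd need 32 bits either way. Fixed-point with degrees scaled by 1e5 gives roughly 1.1m precision and fits a signed 32-bit integer comfortably. A sketch:

```typescript
// Fixed-point encoding: degrees scaled by 1e5 (~1.1 m of longitude at
// the equator). Max magnitude is 180 * 1e5 = 18,000,000, well under
// the Int32 limit of 2^31 - 1.
const SCALE = 1e5;

function encode(coords: [number, number][]): Int32Array {
  const out = new Int32Array(coords.length * 2);
  coords.forEach(([lat, lon], i) => {
    out[i * 2] = Math.round(lat * SCALE);
    out[i * 2 + 1] = Math.round(lon * SCALE);
  });
  return out;
}

function decode(buf: Int32Array): [number, number][] {
  const out: [number, number][] = [];
  for (let i = 0; i < buf.length; i += 2) {
    out.push([buf[i] / SCALE, buf[i + 1] / SCALE]);
  }
  return out;
}
```

That's 8 bytes per coordinate pair instead of 4, but with none of the float16 precision cliffs; delta-encoding consecutive vertices would shrink it further, since adjacent boundary points are close together.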

tantalor · 3 months ago
Good catch, I didn't consider that.
netsharc · 3 months ago
> In a .js file, each character is UTF-16 (2 bytes).

What? I'd like to challenge this. The in-memory representation of a character may be UTF-16, but the file on disk can be UTF-8. Also UTF-16 doesn't mean "2 bytes per character": https://stackoverflow.com/a/27794229

The file https://github.com/AZHenley/coord2state/blob/main/dist/coord... doesn't use anything other than 1-byte ASCII characters.

tantalor · 3 months ago
Yeah you're probably right, I guessed at that.

Thanks for the correction

codingdave · 3 months ago
> I set up an experiment that compares the original geometry with the simplified geometry by testing 1,000,000 random points within the US.

I'd be curious whether the reliability is different if, instead of random locations, you limited it to locations with some level of population density. A lot of the USA is rural, so that random set is not going to correlate well with where people actually are. It probably matters more the farther east you go as well, since population centers overlap borders more as you get to the eastern seaboard.

azhenley · 3 months ago
Good thinking. I discuss population density, cities near borders, and narrow borders in the last section.
bigiain · 3 months ago
Another possible suggestion: choose random points that are within a set radius of points along the borders. Perhaps first choose a random selection of points on the border, then choose random points within a circle (or just a square with a set delta in lat/long) around each, so they are "nearby to the border", and then measure your error rates for those points at various boundary simplification tolerances. That would remove the "middle of the state" random points where the border tolerance inevitably makes no difference.
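
Something like this sketch, assuming the borders are already available as vertex arrays:

```typescript
// Bias test points toward borders: pick a random border vertex, then
// jitter it by up to +/- delta degrees in both latitude and longitude.
function samplePointsNearBorder(
  borderVertices: [number, number][], // [lat, lon] pairs along the border
  delta: number,                      // max offset in degrees
  count: number
): [number, number][] {
  const points: [number, number][] = [];
  for (let i = 0; i < count; i++) {
    const [lat, lon] =
      borderVertices[Math.floor(Math.random() * borderVertices.length)];
    points.push([
      lat + (Math.random() * 2 - 1) * delta,
      lon + (Math.random() * 2 - 1) * delta,
    ]);
  }
  return points;
}
```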
madcaptenor · 3 months ago
As a native Philadelphian, I immediately see why you need good resolution here: at 0.1 degrees of resolution you could very well have assigned my birthplace to New Jersey. If I'm not mistaken, New York and Philadelphia are the largest cities where you might have a problem. Chicago's on a state line, but the Illinois-Indiana border is straight.
m2fkxy · 3 months ago
> A side effect of the geometry simplification is that there are some very small gaps between states. Based on your use case, you'll need to handle the case of the point not being within any state borders. In these rare cases, you could fall back to a different method, such as distance checking centroid points, adding an epsilon to all state borders, or simply asking the user. (The user may also be in another country or in the ocean...)

This is a common topic and easily dealt with by working with topology-informed geometries; most simplification algorithms support topology handling between different features. For instance, TopoJSON can be used.

wodenokoto · 3 months ago
This sounds like one of those "easy if you've learned it" things. I dabble with GIS at work, so in some sense I am a pro at this, and I don't know how topology easily deals with this.

But I’d like to know!

m2fkxy · 3 months ago
That's true. I have a bias from having part of my formal education quite focused on geospatial topics. Seeing non-geospatial folks reinvent wheels taught in GIS 101 both makes me smile and makes me grimace, thinking that we must have been doing something wrong if basic tools and aspects of the trade are not more widely known.

You can look into TopoJSON here: https://github.com/topojson/topojson And a good general introduction to topology in a GIS setting can be found in the QGIS documentation: https://docs.qgis.org/3.40/en/docs/gentle_gis_introduction/t...
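
In code, topology-aware simplification looks roughly like this (using the topojson-server, topojson-simplify, and topojson-client packages; the 0.05 quantile threshold is just illustrative):

```typescript
import { topology } from "topojson-server";
import { presimplify, simplify, quantile } from "topojson-simplify";
import { feature } from "topojson-client";

// statesGeoJson: your input FeatureCollection of state polygons.
declare const statesGeoJson: any;

// Build a topology: shared borders are stored once, as shared arcs.
const topo = topology({ states: statesGeoJson });

// Weight each vertex by visual importance, then drop the least
// important ones (tune the quantile for more aggressive results).
// Because neighbours share arcs, the simplified borders still match
// exactly: no slivers, no gaps.
const weighted = presimplify(topo);
const simplified = simplify(weighted, quantile(weighted, 0.05));

// Back to GeoJSON for the point-in-polygon lookups.
const simplifiedStates = feature(simplified, simplified.objects.states);
```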

cyberax · 3 months ago
Use Nominatim: https://nominatim.org/

It can be self-hosted, with continuous replication. There's also Photon, which is a cut-down version of it: https://photon.komoot.io

tallytarik · 3 months ago
We self-host Nominatim as part of the iplocate.io pipeline. It works great, but the requirements are pretty heavy for something you want to host casually.

An in-between for OP could be something like opencagedata.com, which is still a third-party API but an order of magnitude less expensive than Google. (not affiliated but have previously explored the service)

cyberax · 3 months ago
Komoot's public Photon instance is also available (I linked it). It has a rate limit of 1 request per second, but that should be enough for personal use.
Centrino · 3 months ago
The right term for what you are doing is "reverse geocoding".
esalman · 3 months ago
In some industries, a 0.7% error rate for a simple reverse geocoding application would not be acceptable.
urschrei · 3 months ago
> A side effect of the geometry simplification is that there are some very small gaps between states. Based on your use case, you'll need to handle the case of the point not being within any state borders. In these rare cases, you could fall back to a different method, such as distance checking centroid points, adding an epsilon to all state borders, or simply asking the user. (The user may also be in another country or in the ocean...)

If your pre-simplification input geometries form a coverage[0], you can use e.g. ST_CoverageSimplify[1] or coverage.simplify[2] to simplify them without introducing gaps.

[0] http://lin-ear-th-inking.blogspot.com/2022/07/polygonal-cove...

[1] https://postgis.net/docs/ST_CoverageSimplify.html

[2] https://shapely.readthedocs.io/en/2.1.0/reference/shapely.co...