The technical approach is also fascinating: serving static Parquet files from R2 instead of running a traditional database keeps costs low while keeping everything queryable with plain SQL through DuckDB. Offloading the heavy computation to GitHub Actions is a clever way to handle the large-scale processing efficiently.
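For anyone who hasn't tried this pattern, it takes surprisingly little code. A rough sketch (the bucket URL and column names here are made up, not the project's actual layout):

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs")  # HTTP(S)/S3 support for remote files
con.execute("LOAD httpfs")

# Aggregate directly over a Parquet file sitting on R2; DuckDB only fetches
# the byte ranges it needs (footer plus the relevant row groups/columns),
# so there's no server and no full download.
rows = con.execute(
    """
    SELECT region, COUNT(*) AS trip_count
    FROM read_parquet('https://<your-bucket>.r2.dev/data/latest.parquet')  -- hypothetical URL
    GROUP BY region
    ORDER BY trip_count DESC
    """
).fetchall()
print(rows)
```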
It'll be interesting to see how this evolves, especially with the potential addition of traffic-aware travel times. Great work!
Would be interesting to see how it compares to static analysis tools like mypy or linters: does it catch edge cases they might miss? The toy case below is the kind of thing I have in mind.
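This is my own illustrative snippet, not anything from the project: mypy stays silent because `json.loads` returns `Any`, yet the code fails at runtime.

```python
import json

def total_cents(payload: str) -> int:
    # json.loads is annotated as returning Any, so nothing below gets checked
    data = json.loads(payload)
    return sum(item["cents"] for item in data["items"])

# A default mypy run reports no errors, but this call raises TypeError
# because "cents" arrives as a string rather than an int.
print(total_cents('{"items": [{"cents": "100"}]}'))
```

Nice work!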