This would be an interesting additional layer for Google Maps search, which I often find to be lacking. For example, I was recently travelling in Gran Canaria and looking for places selling artisan coffee in the south (spoiler: there was only one, in a hotel, and it took me almost half an hour to even find it). Searching for things like "pourover" and "v60" is usually my go-to signal, but unless the cafe mentions this in its description or it's mentioned in reviews, it's hard to find. I don't think they even index the text in the photos customers take (which will often include the coffee menu behind the cashier).
Yeah, that can be somewhat of a problem in bigger cities ;-) It's pretty common for people to have taken a photo of the menu in cafes, but as mentioned, it seems Google isn't ingesting or surfacing that information for text search.
GitHub of the person who prepared the data. I am curious how much compute was needed for NY. I would love to do it for my metro but I suspect it is way beyond my budget.
(The commenters below are right. It is the Maps API, not compute, that I should worry about. Using the free tier, it would have taken the author years to download all tiles. I wish I had their budget!)
The linked article mentions that they ingested 8 million panos - even if they're scraping the dynamic viewer, that's $30k just in Street View API fees (the static image API would probably be at least double that due to the low per-call resolution).
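Back-of-envelope, if you treat it as one billable request per pano and plug in whatever the current per-1,000-request rate is (both are assumptions; a real scrape may need several tile requests per pano):

    # Rough Street View download cost for a NYC-scale scrape.
    # ASSUMPTIONS: one billable request per panorama, and a placeholder
    # per-1,000-request rate -- check Google's current pricing page.
    N_PANOS = 8_000_000
    USD_PER_1000_REQUESTS = 7.00  # placeholder, not a quoted price

    total_usd = N_PANOS / 1000 * USD_PER_1000_REQUESTS
    print(f"~${total_usd:,.0f} for {N_PANOS:,} panos")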
OCR I'd expect to be comparatively cheap, if you weren't in a hurry - a consumer GPU running PaddlePaddle server can do about 4 MP per second. If you spent a few grand on hardware that might work out to 3-6 months of processing, depending on the resolution per pano and size of your model.
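For reference, a minimal sketch of running PaddleOCR over a single pano tile (the file name is made up, and the result format varies a bit between PaddleOCR versions), plus the rough throughput math:

    from paddleocr import PaddleOCR

    ocr = PaddleOCR(use_angle_cls=True, lang="en")    # downloads models on first run
    result = ocr.ocr("pano_tile_0001.jpg", cls=True)  # hypothetical tile image
    for box, (text, score) in result[0]:
        print(text, score)

    # Throughput sanity check with assumed numbers: at ~4 MP/s and ~8 MP of
    # imagery OCR'd per pano, 8M panos is 8e6 * 8 / 4 ~ 16M seconds ~ 185 GPU-days,
    # i.e. roughly the 3-6 month range depending on resolution and model size.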
Their write up (linked at top of page below main link, and in a comment) says:
> "media artist Yufeng Zhao fed millions of publicly-available panoramas from Google Street View into a computer program that transcribes text within the images (anyone can access these Street View images; you don’t even need a Google account!)."
Maybe they used multiple IPs / devices and didn't want to mention doing something technically naughty to get around Google's free limits, or maybe they somehow didn't hit a limit doing it as a single user? Either way, it doesn't sound like they had to pay if they only mention not needing an account.
(Or maybe they just thought people didn't need to know that they had to pay, and that readers would just want the free access to look up a few images, rather than a whole city's worth?)
I just hashed out the details with Claude. Apparently it would cost me ~8k USD to retrieve all Taipei street images from the Google Maps API at 3 m density. Expensive, but not impossible.
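The arithmetic behind a number like that is just road length divided by sampling interval, times the per-request rate. A sketch with placeholder figures (the road length and rate below are assumptions, not actual Taipei or Google numbers):

    # Very rough pano count and cost for a metro-scale scrape.
    # ALL VALUES ARE PLACEHOLDERS -- plug in real road length and current pricing.
    road_km = 3_000        # assumed drivable road length for the metro
    spacing_m = 3          # one pano every 3 m, as in the parent comment
    usd_per_1000 = 7.00    # placeholder per-1,000-request rate

    panos = road_km * 1000 / spacing_m
    print(f"{panos:,.0f} panos, ~${panos / 1000 * usd_per_1000:,.0f}")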
Tangent: there are these videos on YT of people walking through cities; the ones I like in particular are through Tokyo/Japan. I was thinking it would be cool to build a 3D map from that. It's possible, but not my field, and I think some companies have done it too. There's a lot of data in those videos - maybe even free robot training data (walking through a crowd, like a delivery robot would).
I believe it's a combo of SLAM/photogrammetry/VIO, but you don't have an IMU, so that part would have to be estimated from the video. Maybe the flickering of the lights between frames could help, though that's probably too fast.
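Not claiming this is how any real pipeline works - just a minimal monocular visual-odometry sketch in OpenCV to show what estimating camera motion from the video alone (no IMU) looks like; the file name and intrinsics are made up, and a real system would add mapping, loop closure, and scale recovery:

    import cv2
    import numpy as np

    # Guessed camera intrinsics -- without calibration, translation scale is unknown.
    K = np.array([[1000.0, 0.0, 960.0],
                  [0.0, 1000.0, 540.0],
                  [0.0, 0.0, 1.0]])

    cap = cv2.VideoCapture("tokyo_walk.mp4")          # hypothetical video file
    orb = cv2.ORB_create(4000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    ok, prev = cap.read()
    prev_kp, prev_des = orb.detectAndCompute(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), None)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        kp, des = orb.detectAndCompute(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)
        matches = matcher.match(prev_des, des)
        pts1 = np.float32([prev_kp[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp[m.trainIdx].pt for m in matches])
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        # R, t are the relative rotation and unit-scale translation between frames;
        # a full SfM/SLAM pipeline would chain these, triangulate points, and run
        # bundle adjustment to get a consistent 3D map.
        prev_kp, prev_des = kp, des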
There was a guy a long time ago who did YT videos of the tech markets in Tokyo, and it was really surprising that some of the best places to get parts for smartphones or robots were completely nondescript buildings in the heart of the city. He specifically went to places that most people wouldn't know about unless they had great local information.
If someone were to do what you're saying, it would be a huge win for people visiting and being able to find these places. I would love to see this.
Similarly, it would be great to have a tool that does it with stills, like reconstructing a floor plan from real estate photos. Even if it were partially manual, it would be pretty handy.
The pudding.cool article has a link labeled "View the map of “F*ck”" but it leads to a search for "fuck" instead. If you search for "F*ck", you find gems such as "CONTRACTOR F CK-UP" https://www.alltext.nyc/panorama/KhzY08H72wV2ldXamZU5HA?o=76... (Strategically placed pole obscuring the word.)
Nice! If you want to email hn@ycombinator.com we could send you a repost invite for https://news.ycombinator.com/item?id=44664046 - but please wait a while first. The trick is to let enough time go by for the hivemind caches to clear. Then everything old becomes new again :) - usually 2-3 months is a good interval...
This is a super cool project. But it would be 10x cooler if they had generated CLIP or some other embeddings for the images, so you could search for text but also do semantic vector search like "people fighting", "cats and dogs", "red tesla", "clown", "child playing with dog", etc.
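A minimal sketch of what that could look like with an off-the-shelf CLIP model (image paths and the query are made up; in practice you'd precompute and index the image embeddings once):

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Hypothetical pano crops; embed once, store, then rank against any text query.
    images = [Image.open(p) for p in ["pano_001.jpg", "pano_002.jpg"]]
    with torch.no_grad():
        img_emb = model.get_image_features(**processor(images=images, return_tensors="pt"))
        txt_emb = model.get_text_features(**processor(text=["red tesla"], return_tensors="pt", padding=True))

    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    scores = (img_emb @ txt_emb.T).squeeze(1)   # cosine similarity per image
    print(scores.argsort(descending=True))      # best-matching panos first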
Could easily see myself coming back to this.
│
└── Dey well; Be well
https://github.com/yz3440
It's the Google Maps API costs that will sink your project if you can't get them waived as art:
https://mapsplatform.google.com/pricing/
Not sure how many panoramas there are in New York or your metro, but if it's over the free tier you're talking thousands of dollars.
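To put the free tier in perspective (the monthly credit and per-1,000 rate below are assumptions to check against that pricing page - this is roughly where the "years to download all tiles" figure comes from):

    # How long a full-city scrape would take while staying inside the free tier.
    # ASSUMED numbers -- verify the current credit and rates on the pricing page.
    free_credit_usd_per_month = 200.0
    usd_per_1000 = 7.00
    n_panos = 8_000_000          # NYC-scale figure from the article

    free_panos_per_month = free_credit_usd_per_month / usd_per_1000 * 1000
    print(f"~{n_panos / free_panos_per_month / 12:.0f} years within the free credit")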
I'm wondering more about the data - did they use Google's API, or work with Google to get the data?
ex. https://youtu.be/ohlzQNCpT7M?si=zH764fDlHqPKyjin&t=537 ex. https://www.youtube.com/watch?v=UZi2GeEGdvM
edit: although this is not what you're describing, this is literally using a 360 camera
Apple's RoomPlan is pretty legit at measuring walls/objects in a room, but it also requires being in the room and moving the device around.
All Text in NYC - https://news.ycombinator.com/item?id=42367029 - Dec 2024 (4 comments)
All text in Brooklyn - https://news.ycombinator.com/item?id=41344245 - Aug 2024 (50 comments)
https://london.publicinsights.uk