zerof1l · 8 months ago
This article highlights something interesting... it is quite common to get at least a /64 IPv6 block from a hosting provider or ISP, yet most rate limiting and IP blocking is done per single IP. Sounds like when dealing with IPv6, an entire /64 block should be rate-limited or blocked.
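A minimal sketch of what keying the limiter on the /64 could look like, using Python's `ipaddress` module (the function name is mine):

```python
import ipaddress

def rate_limit_key(addr: str) -> str:
    """Bucket key an address should be rate-limited under: IPv4
    addresses individually, IPv6 addresses by their /64 prefix,
    since one client typically controls the whole /64."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 6:
        return str(ipaddress.ip_network((addr, 64), strict=False))
    return str(ip)
```

All hosts in 2001:db8::/64 then share a single bucket, while IPv4 clients keep per-address buckets.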
vladvasiliu · 8 months ago
Even companies that not only should know better but are actually paid to handle things like this get it hilariously wrong.

The company I work for is a client of a big-ass CDN you've heard of (not the one whose ceo hangs around these parts). Yet, they somehow think it's fine to notify me of "new connections from an unusual IP" when I connect from the same /64 block of ipv6.

selcuka · 8 months ago
It's easier to reuse existing code and simply bump the number of digits required to store the IP address.
bscphil · 8 months ago
I'd be rather surprised if IPv6 hasn't done some damage to the idea of IP blocking on the whole. It's possible, even as a residential Internet user, to request a /56 or /48 automatically with DHCPv6 Prefix Delegation. I have a /56 with Comcast. That's potentially up to 65,536 /64 blocks for a /48 (a /56 is still 256 of them), just from a residential user, so if you're going to attempt IP filtering for IPv6, it's got to be a lot smarter than swapping out your single-IP blocking for /64 blocking.
Guvante · 8 months ago
It is already pretty common to start with IP blocking but upgrade to blocks when the bad behavior continues.

Assuming a /64 as a starting point is an easy win and bumping it up with repeat offenders seems pretty easy in the grand scheme of things.

edelbitter · 8 months ago
It actually makes things easier for both the blocking and the allocating (e.g. VPS host) side.

Once the "oh no, we can't afford that many unique allocations" excuse is gone, algorithms that enforce quotas for every prefix size at the same time (with no excuses for CGNAT weirdness) stop being too ruthless.

You can distribute your addresses as needed, and I can track successful and failing attempts - at whatever distribution scheme you use. E.g. group your "unverified" or "trial" accounts at a larger prefix size, so they get each other blocked - but not your paying customers.
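A hedged sketch of quotas enforced at several prefix sizes simultaneously, as described above (the quota numbers are purely illustrative):

```python
import ipaddress
from collections import Counter

# Hypothetical per-prefix-size quotas, all enforced at once: a single
# /64 gets a small budget, its containing /48 a larger one, and so on.
QUOTAS = {64: 100, 48: 1_000, 32: 10_000}

counts = Counter()  # (prefix_len, network) -> requests seen so far

def admit(addr: str) -> bool:
    """Admit a request only if every enclosing prefix still has quota."""
    nets = {p: ipaddress.ip_network((addr, p), strict=False) for p in QUOTAS}
    if any(counts[(p, n)] >= QUOTAS[p] for p, n in nets.items()):
        return False
    for p, n in nets.items():
        counts[(p, n)] += 1
    return True
```

Grouping trial accounts into one larger prefix then means they exhaust that prefix's budget together without touching the prefixes your paying customers sit in.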

bigstrat2003 · 8 months ago
How are you getting a /56 from Comcast? I can only request up to a /60 from them; if I ask for anything larger, I still get a /60 rather than whatever I requested.
stackskipton · 8 months ago
Rate limiting on /64 for IPv6 is well known and I know Google does it for other services. Sounds like this was not properly updated when they put IPv6 into play.
gumbojuice · 8 months ago
I'm on a relatively large Indian ISP, and my home network gets an IPv6 network assigned, which is directly routable. Didn't think about it until tailscale told me it was connecting over a direct IPv6 connection and I wondered how that was possible. Sounds like 90s network rampage may be back here.
chgs · 8 months ago
Well yes, natting is not normal on ipv6 - that’s a major feature.
icedchai · 8 months ago
Direct connections are a good thing and how the Internet is supposed to work. NAT is the only reason IPv4 has lasted this long.
rlpb · 8 months ago
Blocking inbound connections using connection tracking is orthogonal to NAT. It's just that NAT implies the former by default due to its nature.
ajsnigrutin · 8 months ago
The problem here is that larger networks (e.g. student wifi at some university) also use a single /64 for maybe even hundreds of students connected at the same time. Hold a lecture, tell the students to go to github to download some tool, and the first 10 will succeed while the rest get rate limited.

The same is true now with NAT (where they're all behind a single ip or a very small pool of IPs), but IPv6 should make these things better.

johncolanduoni · 8 months ago
Even that isn’t sufficient, as it’s very easy to get ahold of /48 blocks. To do a good job of this, you need to actually break things down by ASN and look at their policies for handing out IP addresses to figure out what granularity to use.
chgs · 8 months ago
Effectively a /64 is the new /32.

Your isp should really be giving you a /56 or /48.

benlivengood · 8 months ago
I had assumed that most people would block by /64. Probably safest to count distinct abusive/noisy IPv6s per /64 and block/throttle when it hits a threshold.

Ratio of abuse traffic per IPv6 from a /64 might also make a good threshold.
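A rough sketch of the distinct-offenders-per-/64 idea (the threshold is illustrative):

```python
import ipaddress
from collections import defaultdict

ABUSE_THRESHOLD = 3  # hypothetical: distinct noisy hosts before the /64 is blocked

seen_offenders = defaultdict(set)  # /64 network -> set of offending addresses
blocked_prefixes = set()

def report_abuse(addr: str) -> None:
    """Record an abusive address; block its whole /64 once enough
    distinct hosts in it have misbehaved."""
    prefix = ipaddress.ip_network((addr, 64), strict=False)
    seen_offenders[prefix].add(addr)
    if len(seen_offenders[prefix]) >= ABUSE_THRESHOLD:
        blocked_prefixes.add(prefix)

def is_blocked(addr: str) -> bool:
    return ipaddress.ip_network((addr, 64), strict=False) in blocked_prefixes
```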

AtomicByte · 8 months ago
Yes that does happen to be what is commonly done
markasoftware · 8 months ago
[BuyVM](https://my.frantech.ca/cart.php?gid=37), a popular host for shady operators, gives a /48 even with their cheapest plans ($2/month, though only $7/month is in stock right now)
madars · 8 months ago
A bit more context: BuyVM is a legitimate business, popular with hobbyists, and has features that are hard to get elsewhere (e.g., BGP sessions). They do take a pro-free speech stance but they are a far cry from bulletproof hosting ("shady operators"). An imperfect comparison at a massively bigger scale would be Cloudflare's prominence in certain contexts.
punnerud · 8 months ago
What if the user only gets a single address? How do you separate the two? Seems like there's a need for the provider to share whether it is handing out blocks or single addresses…
stackskipton · 8 months ago
Say what? IPv6 was designed so that the first 64 bits are the network and the last 64 bits are the host.

Since a /64 is the smallest standard network in IPv6, most providers hand out a /64 when you ask for a public IPv6 address, because A) most rate limiting uses /64 and B) IPv6 has so many IPs that no one cares.

Vultr has at least one /32 I was able to find (2001:19F0::/32), which if you cut it into /64s comes out to ~4.3 billion different networks, the same as the total number of IPv4 addresses that exist.

ARIN will hand anyone who asks a /48 IPv6 subnet, which is 65,536 unique /64 networks, and getting a larger prefix is not hard.
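The arithmetic checks out: the number of /64 networks inside a prefix of length n is 2^(64 - n):

```python
# Number of /64 networks inside a prefix of length n: 2 ** (64 - n)
assert 2 ** (64 - 48) == 65_536          # a /48 holds 65,536 /64s
assert 2 ** (64 - 56) == 256             # a /56 holds 256 /64s
assert 2 ** (64 - 32) == 4_294_967_296   # a /32 holds ~4.3 billion /64s,
                                         # the size of the entire IPv4 space
```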

bsamuels · 8 months ago
IP rate limiting hasn't been a serious misuse prevention tool for 15-20 years
demosthanos · 8 months ago
Can you elaborate? As one tool among many it seems to me to be a perfectly serviceable tool in the toolbox, with a sufficiently high rate limit to account for shared IPs.
jeffbee · 8 months ago
It must be a daunting chore to maintain all the legacy pages. The amount of now-years-old stuff that long-standing sites have to maintain, or choose to maintain, is shockingly high, and testing the combination of all that stuff is impossible.

If you want an example of how diverse in age these apps are, dig around in the Gmail settings panel. Eventually you will land on a popup that uses the original Gmail look and feel, from 2004.

xivzgrev · 8 months ago
Bug bounty program appears to be an efficient spend. For a few thousand dollars they mobilize unpaid people looking for extreme edge cases and then surface these issues. It would’ve cost way more to pay an employee to search for this.
rkagerer · 8 months ago
Yeah, $5k seems awfully cheap considering the effort entailed and the potential impact of a bug this big.
TorKlingberg · 8 months ago
The main cost of running a bug bounty program is developer time spent triaging submissions from all the people who just run an automated scanner against your website and submit everything it outputs.
mmsc · 8 months ago
Depends on the company. Also, it can be a good way to say to management, "look, this old deprecated shit needs to be replaced because it's insecure; maintenance is a security issue."
paxys · 8 months ago
Which is exactly why companies are aggressive about deprecating old products and services. "But why can't they just leave them running and not touch it?" Because every such service eventually becomes a security hole. The only secure code is no code.
okanat · 8 months ago
While your argument seems to make sense on the surface, it fails in deeper inspection.

What security implications did Google Reader have? I do understand that keeping older APIs and endpoints around for authentication and authorization is indeed dangerous. However, if your architecture allows mere clients of that authorization infra to be exploited, I think the problem isn't keeping the products running. You designed something inherently insecure.

raxxorraxor · 8 months ago
There still is a standard password recovery flow with mail/capability URL that is reasonably safe and hasn't changed too much in a decade.

It is the bullshit some security advisories brought us that introduced new dangers. By sharing telephone numbers for example...

These threats are also worse than losing an account in many cases, because now the data can easily be correlated, which has proliferated through a lot of 2FA bullshit.

fer · 8 months ago
> It must be a daunting chore to maintain all the legacy pages

I always wonder who's the one maintaining the "poke" feature in Facebook.

bix6 · 8 months ago
I thought it was gone but I’m reading it was just hidden but re-added to the UI in 2024. That was always my favorite feature haha.
reaperducer · 8 months ago
> It must be a daunting chore to maintain all the legacy pages. The amount of now-years-old stuff that long-standing sites have to maintain, or choose to maintain, is shockingly high, and testing the combination of all that stuff is impossible.

One company I worked for used interns and new hires for that. One of the early tasks assigned to the intern pool was to comb the web sites for outdated information, or things that no longer conformed to the current brand book. The list then went somewhere else so the pages could be updated or deleted.

The major benefit of this was giving the new people an overview of what we do, why we do it, and a slice of the history of the products.

bornfreddy · 8 months ago
On the other hand they had no idea if the information was valid or wildly outdated. But better something than nothing I guess. :-)
belter · 8 months ago
> It must be a daunting chore to maintain all the legacy pages.

Clearly $350 billion revenue in 2024 is not enough...

Magmalgebra · 8 months ago
Something that can be hard to appreciate if you haven't managed this sort of project is that it can be surprisingly hard to throw money at the problem.

If you try to hire at your regular "bar" for skill for boring work like this - people will often quit. This is one of the reasons many companies' integrations are lacking despite it being a strategic interest - integration work is miserable and doesn't help your career.

Hiring below the skillbar at the same pay, is dangerous and often doesn't actually work out - if it was that easy someone more skilled probably would have fixed this a while ago.

So you try to pay more for the miserable work - but hold on, now you have to pay out of band salaries, and legal tells you that opens you to massive liabilities.

Ok - maybe you can just level them differently? No, HR will tell you that will mess with all your internal level processes - which are key to running the company. They're going to add a lot of additional overhead tracking these "fake" leveling bands and dealing with the consequences.

None of this means the problem is literally unsolvable, but it now requires a huge amount of time and effort from people near the top of the company, whom everyone would much rather have spend their time making the company better.

All of this to say - sure you could solve this problem, but it's actually much more complex than adding some line items to a budget.

Source: have watched many big companies try and fail for years to staff unsexy work like this.

staticshock · 8 months ago
In addition to having the money, Google also needs the incentive to spend that money on such projects. If the perceived return on capital is low (or negative!), the incentive is simply not there.
0xbadcafebee · 8 months ago
Google's main search page is the slowest page & UI I have found on the internet today (not accounting for bandwidth limits). Even on modern devices it lags at text entry and even rearranges characters in the text box so you have to wait 10+ seconds for it to finish loading or it will go haywire. The shopping and other pages are actually worse. So it appears you're right, $350B isn't enough money to maintain a web page in 2025.
xnx · 8 months ago
> Eventually you will land on a popup that uses the original Gmail look and feel, from 2004.

Indeed, I recently ran into a Google page that served up the old (~2013) Catull logo.

oxguy3 · 8 months ago
I was recently editing the Wikipedia page for Google Bookmarks (2005-2021). I wanted to add a logo to the page, but I was having a lot of trouble finding a high-quality copy of the logo anywhere. Eventually I figured out that Google's old URL scheme for product logos was very guessable, and they had never taken it down: https://www.google.com/intl/en-US/images/logos/bookmarks_log...

They'll probably never stop serving those old URLs because who KNOWS where they might still be in use. One of surely a million examples of weird little legacy things Google is stuck with.

jeffbee · 8 months ago
I tried to guess what it might be ... I went to check moon.google.com, one of the older apps/jokes that I can recall still running. It seems that they got someone to update moon.google.com with a more recent look and feel, and dozens of moons instead of just the one.

And people say Google abandons products.

ctkhn · 8 months ago
There are major things at some large enterprises that are given the same level of support. Friend works on an internal link shortener app that is heavily used at their mega corporation and it gets maybe one ticket every other sprint just for upgrading node versions etc even though its monitoring is down.
atum47 · 8 months ago
I did something similar way back when I was trying to find the phone number for a person, using Facebook.

When recovering a password, Facebook would give you most of the digits of the phone number, so I wrote them down in a vcard file and imported it on my phone to just look at the pictures. It worked surprisingly well.

VladVladikoff · 8 months ago
There is also a similar hole with Google profile photos and other Google apps. For example if you see a review by John Smith on Google maps, you add emails on Google Hangouts, guess a bunch of variations like johnsmith@gmail.com, smithjohn@gmail.com etc and see the profile photos to compare the match.
dheera · 8 months ago
This is why I don't use a real phone number with any of these services. They don't need my phone number to operate either.
shwouchk · 8 months ago
g has been demanding a valid phone for years, as have most other major providers. if you lose the number you sign up with, you can potentially get locked out of the account. whats your mo?
cosmojg · 8 months ago
What do you use instead of a real phone number? How do you get past mandatory phone number verification for services which require it?
VladVladikoff · 8 months ago
I’m mostly impressed that he can throw 40k requests per second at a server for a prolonged period and not somehow spike the resources enough to set off some alarms.
kevindamm · 8 months ago
It is possible that it did throw an alarm but the behavior ceased soon enough afterwards that it didn't escalate to alert-level paging, or that, even if it did, those resources were back to normal within the few minutes it took to open the laptop, password password OTP, link-following and graph-referencing, annnd oh it's already coming back down before the status update is drafted.

And 40kqps isn't really much at the scale of Focus (or most of Google's APIs), so I could easily see it going under the radar, especially with each request using a different IP addr spread across an IPv6 /64.

The gap worth noticing here isn't monitoring, though; it's the complete lack of rate limiting on the js_disabled flow when it's given a token borrowed from an earlier js_enabled flow.

userbinator · 8 months ago
For comparison, Google apparently processes about 160k search queries per second.
amelius · 8 months ago
maybe he used a botnet for that? i.e. different IP address for every request or somewhere in between
helsinki · 8 months ago
These bug bounties pay peanuts. Sad.
RankingMember · 8 months ago
They're only hanging themselves by cutting the bounties like this.
yapyap · 8 months ago
It seems that these services forget what happens when white hats stop reporting this because of that :/

Corporate greed sucks for all

garrettgarcia · 8 months ago
Anything under $100k for this is pathetic.
gcanyon · 8 months ago
> This request allows us to check if a Google account exists with that phone number as well as the display name "John Smith".

Shouldn't the rate limit be set here, related to the display name "John Smith"? You get 5 "John Smiths" for free in the first minute, then 5 more in the first hour, then 5 more in each day going forward. With the same million phone number combos you'd need roughly half a lifetime (10,000 days) to get the hit on average.
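One way to sketch that escalating per-name quota (the tier numbers and names are illustrative, not Google's actual scheme):

```python
import time
from collections import defaultdict

# Cumulative tiers: 5 lookups allowed within the first minute, 10 within
# an hour, 15 within a day. All numbers are illustrative.
TIERS = [(60, 5), (3600, 10), (86400, 15)]  # (window in seconds, limit)

lookups = defaultdict(list)  # display name -> timestamps of allowed lookups

def allow_lookup(display_name, now=None):
    """Allow a recovery lookup for this display name only if every
    tier's quota still has room; record the lookup if allowed."""
    now = time.time() if now is None else now
    history = lookups[display_name]
    if any(sum(1 for t in history if now - t < window) >= limit
           for window, limit in TIERS):
        return False
    history.append(now)
    return True
```

Keying on the target name rather than the source IP means an attacker can't dodge the limit by rotating addresses; they'd have to rotate targets instead.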

gcanyon · 8 months ago
> Vendor confirms that the No-JS username recovery form has been fully deprecated

It seems like there are probably people out there with JS disabled for whatever reason who still might need to recover their usernames?