This article highlights something interesting... it is quite common to get at least a /64 IPv6 block from a hosting provider or ISP, yet most rate limiting and IP blocking is done per single IP. Sounds like when dealing with IPv6, an entire /64 block should be rate-limited or blocked.
Even companies that not only should know better, but are actually paid to handle things like this, get it hilariously wrong.
The company I work for is a client of a big-ass CDN you've heard of (not the one whose CEO hangs around these parts). Yet they somehow think it's fine to notify me of "new connections from an unusual IP" when I connect from the same /64 block of IPv6.
I'd be rather surprised if IPv6 hasn't done some damage to the idea of IP blocking on the whole. It's possible, even as a residential Internet user, to request a /56 or /48 automatically with DHCPv6 Prefix Delegation. I have a /56 with Comcast; that's 256 /64 blocks, and a /48 would be 65,536 - all from a single residential user. So if you're going to attempt IP filtering for IPv6, it's got to be a lot smarter than swapping out your single-IP blocking for /64 blocking.
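The arithmetic is easy to sanity-check with the standard library (a minimal sketch; the documentation prefix 2001:db8:: stands in for a real delegation):

```python
import ipaddress

# A delegated prefix contains 2 ** (64 - prefixlen) /64 networks.
for prefix in ("2001:db8::/64", "2001:db8::/56", "2001:db8::/48"):
    net = ipaddress.ip_network(prefix)
    print(f"{prefix} -> {2 ** (64 - net.prefixlen)} /64 blocks")
# 2001:db8::/64 -> 1, /56 -> 256, /48 -> 65536
```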
It actually makes things easier for both the blocking side and the allocating side (e.g., a VPS host).
Once the "oh no, we can't afford that many unique allocations" excuse is gone, algorithms that enforce quotas at every prefix size at the same time (with no special exceptions for CGNAT weirdness) stop being too ruthless.
You can distribute your addresses as needed, and I can track successful and failed attempts at whatever distribution scheme you use. E.g., group your "unverified" or "trial" accounts under a larger prefix, so they get each other blocked - but not your paying customers.
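A minimal sketch of enforcing quotas at several prefix sizes at once (illustrative Python, IPv6-only for brevity; the limits are made up, and real code would add time windows or decay):

```python
import ipaddress
from collections import Counter

# Made-up budgets: tight for a single address, looser for bigger aggregates.
LIMITS = {128: 10, 64: 50, 56: 200, 48: 1000}

failures = Counter()  # network (at each prefix length) -> failed attempts

def record_failure_and_check(addr: str) -> bool:
    """Record one failed attempt; return True if the client is over
    quota at any aggregation level."""
    ip = ipaddress.ip_address(addr)
    over = False
    for plen, limit in LIMITS.items():
        net = ipaddress.ip_network(f"{ip}/{plen}", strict=False)
        failures[net] += 1
        if failures[net] > limit:
            over = True  # this address's /plen aggregate is over budget
    return over
```

One request debits every enclosing bucket at once, so neither a single address nor a whole /48 can exceed its budget - which is what lets trial accounts grouped under one prefix block each other without touching paying customers elsewhere.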
Rate limiting on /64 for IPv6 is well known and I know Google does it for other services. Sounds like this was not properly updated when they put IPv6 into play.
I'm on a relatively large Indian ISP, and my home network gets an IPv6 network assigned, which is directly routable. I didn't think about it until Tailscale told me it was connecting over a direct IPv6 connection and I wondered how that was possible. Sounds like the 90s network rampage may be back here.
The problem here is that larger networks (e.g., student Wi-Fi at a university) also use a single /64 for maybe hundreds of students connected at the same time. Hold a lecture, tell the students to go to GitHub to download some tool, and the first 10 will succeed while the rest get rate limited.
The same is true now with NAT (where they're all behind a single IP or a very small pool of IPs), but IPv6 should make these things better.
Even that isn’t sufficient, as it’s very easy to get ahold of /48 blocks. To do a good job of this, you need to actually break things down by ASN and look at their policies for handing out IP addresses to figure out what granularity to use.
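A sketch of what ASN-aware granularity could look like; the policy table and ASNs here are hypothetical placeholders (documentation prefixes and reserved ASNs), and a real version would be fed from BGP/WHOIS data:

```python
import ipaddress

# Hypothetical policy table: prefix -> (ASN, prefix length that network
# delegates to a single customer). Populate from BGP/WHOIS in practice.
ASN_POLICY = {
    "2001:db8::/40": ("AS64496", 48),       # a VPS host handing out /48s
    "2001:db8:ff00::/40": ("AS64497", 56),  # a residential ISP doing /56s
}

def blocking_granularity(addr: str) -> ipaddress.IPv6Network:
    """Pick the block size that corresponds to one customer of the ASN."""
    ip = ipaddress.ip_address(addr)
    for prefix, (_asn, customer_plen) in ASN_POLICY.items():
        if ip in ipaddress.ip_network(prefix):
            return ipaddress.ip_network(f"{ip}/{customer_plen}", strict=False)
    # Unknown origin: fall back to the conventional /64.
    return ipaddress.ip_network(f"{ip}/64", strict=False)
```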
I had assumed that most people would block by /64. Probably safest to count distinct abusive/noisy IPv6s per /64 and block/throttle when it hits a threshold.
Ratio of abuse traffic per IPv6 from a /64 might also make a good threshold.
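Both heuristics - distinct abusers per /64 and the abuse ratio - are cheap to track together (an illustrative sketch; the thresholds are invented):

```python
import ipaddress
from collections import defaultdict

DISTINCT_ABUSERS_LIMIT = 20  # invented thresholds
ABUSE_RATIO_LIMIT = 0.5
MIN_SAMPLE = 100  # don't judge a /64 on a handful of requests

stats = defaultdict(lambda: {"total": 0, "abusive": 0, "abusers": set()})

def note_request(addr: str, abusive: bool) -> bool:
    """Record one request; return True once the whole /64 should be throttled."""
    block = ipaddress.ip_network(f"{addr}/64", strict=False)
    s = stats[block]
    s["total"] += 1
    if abusive:
        s["abusive"] += 1
        s["abusers"].add(addr)
    over_count = len(s["abusers"]) >= DISTINCT_ABUSERS_LIMIT
    over_ratio = (s["total"] >= MIN_SAMPLE
                  and s["abusive"] / s["total"] >= ABUSE_RATIO_LIMIT)
    return over_count or over_ratio
```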
[BuyVM](https://my.frantech.ca/cart.php?gid=37), a popular host for shady operators, gives you a /48 even on their cheapest plans ($2/month, though only the $7/month plan is in stock right now).
A bit more context: BuyVM is a legitimate business, popular with hobbyists, and has features that are hard to get elsewhere (e.g., BGP sessions). They do take a pro-free speech stance but they are a far cry from bulletproof hosting ("shady operators"). An imperfect comparison at a massively bigger scale would be Cloudflare's prominence in certain contexts.
What if the user only gets one address - how do you separate the two?
Seems like there's a need for the provider of a larger block to share whether it's being handed out as blocks or as single addresses…
Say what? IPv6 was designed so that the first 64 bits are the network and the last 64 bits are the host.
Since /64 is the smallest network in IPv6, most providers hand out a /64 when you ask for an IPv6 public address, because A) most rate limiting uses /64 and B) IPv6 has so many IPs that no one cares.
Vultr has at least one /32 I was able to find (2001:19F0::/32), which if you cut it into /64s comes out to ~4.3 billion different networks - the same as the number of IPv4 addresses that exist.
ARIN will hand anyone who asks a /48 IPv6 subnet, which is 65,536 unique /64 networks, and getting a larger allocation is not hard.
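The subnet counts are quick to verify (a small check with Python's standard library):

```python
import ipaddress

vultr = ipaddress.ip_network("2001:19f0::/32")
# Number of /64 networks inside the /32: 2 ** (64 - 32).
print(2 ** (64 - vultr.prefixlen))  # 4294967296, the size of the IPv4 space
# A /48 likewise contains 2 ** (64 - 48) = 65536 /64s.
```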
Can you elaborate? It seems to me to be a perfectly serviceable tool in the toolbox, one among many, with a sufficiently high rate limit to account for shared IPs.
It must be a daunting chore to maintain all the legacy pages. The amount of now-years-old stuff that long-standing sites have to maintain, or choose to maintain, is shockingly high, and testing the combination of all that stuff is impossible.
If you want an example of how diverse in age these apps are, dig around in the Gmail settings panel. Eventually you will land on a popup that uses the original Gmail look and feel, from 2004.
The bug bounty program appears to be an efficient spend. For a few thousand dollars they mobilize unpaid people looking for extreme edge cases, who then surface these issues. It would've cost way more to pay an employee to search for this.
The main cost of running a bug bounty program is developer time spent triaging submissions from all the people who just run an automated scanner against your website and submit everything it outputs.
Depends on the company. Also, it can be a good way to say to management, "look, this old deprecated shit needs to be replaced because it's insecure; maintenance is a security issue."
Which is exactly why companies are aggressive about deprecating old products and services. "But why can't they just leave them running and not touch it?" Because every such service eventually becomes a security hole. The only secure code is no code.
While your argument seems to make sense on the surface, it fails on deeper inspection.
What security implications did Google Reader have? I do understand that keeping older APIs and endpoints for authentication and authorization is indeed dangerous. However, if your architecture causes mere clients of that authorization infra to be exploited, I think the problem isn't keeping the products running - you designed something inherently insecure.
There is still a standard password-recovery flow with a mailed capability URL that is reasonably safe and hasn't changed much in a decade.
It is the bullshit some security advisories brought us that introduced new dangers - making us share telephone numbers, for example...
These threats are also worse than losing an account in many cases, because now the data can easily be correlated, which has proliferated through a lot of 2FA bullshit.
One company I worked for used interns and new hires for that. One of the early tasks assigned to the intern pool was to comb the web sites for outdated information, or things that no longer conformed to the current brand book. The list then went somewhere else so the pages could be updated or deleted.
The major benefit of this was giving the new people an overview of what we do, why we do it, and a slice of the history of the products.
Something that can be hard to appreciate if you haven't managed this sort of project is that it can be surprisingly hard to throw money at the problem.
If you try to hire at your regular "bar" for skill for boring work like this, people will often quit. This is one of the reasons many companies' integrations are lacking despite being a strategic interest - integration work is miserable and doesn't help your career.
Hiring below the skill bar at the same pay is dangerous and often doesn't actually work out - if it were that easy, someone more skilled probably would have fixed this a while ago.
So you try to pay more for the miserable work - but hold on, now you have to pay out of band salaries, and legal tells you that opens you to massive liabilities.
Ok - maybe you can just level them differently? No, HR will tell you that will mess with all your internal level processes - which are key to running the company. They're going to add a lot of additional overhead tracking these "fake" leveling bands and dealing with the consequences.
None of this means the problem is literally unsolvable, but it now requires a huge amount of time and effort from people near the top of the company, whose time everyone would much rather see spent on making the company better.
All of this to say - sure you could solve this problem, but it's actually much more complex than adding some line items to a budget.
Source: have watched many big companies try and fail for years to staff unsexy work like this.
In addition to having the money, Google also needs the incentive to spend that money on such projects. If the perceived return on capital is low (or negative!), the incentive is simply not there.
Google's main search page is the slowest page & UI I have found on the internet today (not accounting for bandwidth limits). Even on modern devices it lags at text entry and even rearranges characters in the text box so you have to wait 10+ seconds for it to finish loading or it will go haywire. The shopping and other pages are actually worse. So it appears you're right, $350B isn't enough money to maintain a web page in 2025.
I was recently editing the Wikipedia page for Google Bookmarks (2005-2021). I wanted to add a logo to the page, but I was having a lot of trouble finding a high-quality copy of the logo anywhere. Eventually I figured out that Google's old URL scheme for product logos was very guessable, and they had never taken it down: https://www.google.com/intl/en-US/images/logos/bookmarks_log...
They'll probably never stop serving those old URLs because who KNOWS where they might still be in use. One of surely a million examples of weird little legacy things Google is stuck with.
I tried to guess what it might be ... I went to check moon.google.com, one of the older apps/jokes that I can recall still running. It seems that they got someone to update moon.google.com with a more recent look and feel, and dozens of moons instead of just the one.
There are major things at some large enterprises that are given the same level of support. A friend works on an internal link-shortener app that is heavily used at their mega-corporation, and it gets maybe one ticket every other sprint, just for upgrading Node versions etc., even though its monitoring is down.
I did something similar way back when I was trying to find the phone number for a person, using Facebook.
When recovering a password, Facebook would give you most of the digits of the phone number, so I wrote them into a vCard file and imported it on my phone to just look at the pictures. It worked surprisingly well.
There is also a similar hole with Google profile photos and other Google apps. For example, if you see a review by John Smith on Google Maps, you add emails in Google Hangouts, guess a bunch of variations like johnsmith@gmail.com, smithjohn@gmail.com, etc., and compare the profile photos to find a match.
Google has been demanding a valid phone number for years, as have most other major providers. If you lose the number you signed up with, you can potentially get locked out of the account. What's your MO?
I’m mostly impressed that he can throw 40k requests per second at a server for a prolonged period and not somehow spike the resources enough to set off some alarms.
It is possible that it did throw an alarm, but the behavior ceased soon enough afterwards that it didn't escalate to alert-level paging - or, even if it did, those resources were back to normal within the few minutes it took to open the laptop, password, password, OTP, link-following and graph-referencing annnd oh, it's already coming back down before the status update is drafted.
And 40k qps isn't really much at the scale of Focus (or most of Google's APIs), so I could easily see it going under the radar, especially with each request using a different IP address spread across an IPv6 /64.
The gap worth noticing here isn't monitoring, though; it's the zero rate limiting on the js_disabled flow using a token borrowed from an earlier js-enabled flow.
> This request allows us to check if a Google account exists with that phone number as well as the display name "John Smith".
Shouldn't the rate limit be set here, keyed to the display name "John Smith"? You get 5 "John Smiths" for free in the first minute, then 5 more in the first hour, then 5 more in each day going forward. With the same million phone-number combos you'd need tens of thousands of days - multiple lifetimes - to get a hit on average.
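A tiered budget like that is short to express (an illustrative Python sketch; the windows and quotas are the ones suggested above, cumulative across tiers):

```python
import time
from collections import defaultdict, deque

# Cumulative quotas: 5 in any minute, 5 more within the hour (10 total),
# 5 more within the day (15 total).
TIERS = [(60, 5), (3600, 10), (86400, 15)]  # (window seconds, max lookups)

history = defaultdict(deque)  # display name -> timestamps of allowed lookups

def allow_lookup(display_name, now=None):
    now = time.time() if now is None else now
    q = history[display_name]
    while q and now - q[0] > TIERS[-1][0]:  # drop entries older than a day
        q.popleft()
    for window, quota in TIERS:
        if sum(1 for t in q if now - t <= window) >= quota:
            return False  # over budget at this tier
    q.append(now)
    return True
```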
Assuming a /64 as a starting point is an easy win and bumping it up with repeat offenders seems pretty easy in the grand scheme of things.
Your ISP should really be giving you a /56 or /48.
I always wonder who's the one maintaining the "poke" feature in Facebook.
Clearly $350 billion revenue in 2024 is not enough...
Indeed, I recently ran into a Google page that served up the old (~2013) Catull logo.
And people say Google abandons products.
Corporate greed sucks for all
2023: https://qbix.com/blog/2023/06/12/no-way-to-prevent-this-says...
2021: https://qbix.com/blog/2023/06/12/no-way-to-prevent-this-says...
Which is funnier?
Should probably be https://qbix.com/blog/2021/01/25/no-way-to-prevent-this-says...
It seems like there are probably people out there with JS disabled for whatever reason who still might need to recover their password?