> [...] if you want to get a /24 block from RIPE NCC when you sign up as a member, then you are currently looking at a 2 month wait for a recycled IPv4 /24 block.
That's a rather optimistic view of the situation. The next member who will get a block has already been waiting for 2 months and it's unclear when they will get one. It stands to reason that members applying now would have to wait (potentially significantly) more than 2 months.
About 10 years ago, IBM used to use the 9.0.0.0/8 space in basically exactly the same way as one would use 10.0.0.0/8, for internal-only networking. Each workstation got its own 9.x.x.x IP, but it wasn't routable from outside.
Why would that be relevant here (or in the sibling comment about Apple)? Last I checked, except for 9.9.9.0/24 (used by Quad9), IBM is indeed the assignee for 9.0.0.0/8 from back in 1992. Apple got 17.0.0.0/8 back in 1990. Back in the day a lot of big entities got whole /8 blocks (including of course a lot of the USG, but private corps as well). Many of them are still around and fully active, while others are defunct (Halliburton had a /8 and that went back to ARIN, then out to registries) and/or have shifted (like IIRC Amazon now has 3.0.0.0/8, but that was General Electric originally). That's not squatting, that's just making use of what they have.
> I hope they stopped doing that, but I doubt it.
Why should they stop? Ideally we'd have had at least 64-bit, or better yet 128-bit, addressing from the beginning, in a nicer form than IPv6 ended up with, and then every single one of us could have millions of IPs if we wished. That isn't how it ended up, but that doesn't mean those who got them shouldn't use them. I make use of my minuscule bit of public IPv4 for my own stuff.
Of a highly constrained resource, they're using a tiny fraction of what they've been given. That's a weird definition of "using what they have."
If I asked for a class C for my business running a local corner store, I'd be looked at like I was crazy.
IBM gets 16 million public IPs and it's cool?
Yeah, I know you can't perfectly use an IP space, but with 128 offices, IBM could give each office an allocation of around a hundred thousand IP addresses (and that's rounding down by over 20%). But even if it were 10,000, that's still absurd.
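For what it's worth, the arithmetic behind that estimate looks like this (a quick sketch; the 128-office figure is the comment's own assumption, not an official count):

```python
# A /8 covers 2^(32-8) addresses; split across the assumed 128 offices.
total_ips = 2 ** 24                # 16,777,216 addresses in 9.0.0.0/8
offices = 128                      # assumption from the comment above
per_office = total_ips // offices
print(per_office)                  # 131072 -> "around a hundred thousand"
                                   # after rounding down by roughly 24%
```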
We have run out of freely allocatable IPv4 and equipment isn't catching up to IPv6 - it's very relevant here.
Neither Apple nor IBM actually needs that many publicly routable IPs. IBM would be smart to sell them off. Apple is probably going to sit on them. (I used to work at IBM and that 9 block was very confusing to me, considering that IBM isn't even that big of a DC operator these days.)
IBM owned 9/8 so this is a legitimate use of address space. All hosts should have globally unique addresses, even if you want to use NAT to hide various things. IBM does multiple acquisitions per year. Imagine merging two corporate networks that both use 10/8; it's a nightmare.
> Imagine merging two corporate networks that both use 10/8; it's a nightmare.
This is a nightmare even inside companies. Two teams set up a default VPC, and one day you go to peer them and find that the IP ranges conflict. At my last job, I ended up using Netbox to manage our private IP ranges alongside our public IP ranges. (In theory, it would be nice if cloud providers offered this feature. "8 other VPCs on this account also use 10.0.0.0/8. Are you sure you want to be the 9th?")
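Absent such a warning from the provider, a small script can catch the conflict before peering. A minimal sketch using only the standard library (the VPC names and CIDRs are made up; in practice you'd pull them from the provider's API or from an IPAM tool like Netbox):

```python
from ipaddress import ip_network
from itertools import combinations

# Hypothetical inventory of VPC CIDRs; in practice, pull these from the
# cloud provider's API or an IPAM tool such as Netbox.
vpcs = {
    "team-a-default": "10.0.0.0/16",
    "team-b-default": "10.0.0.0/16",
    "shared-services": "10.1.0.0/16",
}

# Compare every pair of VPCs and report overlapping ranges before peering.
for (name_a, cidr_a), (name_b, cidr_b) in combinations(vpcs.items(), 2):
    if ip_network(cidr_a).overlaps(ip_network(cidr_b)):
        print(f"conflict: {name_a} ({cidr_a}) overlaps {name_b} ({cidr_b})")
```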
I do not know IBM’s current practice but in the 1990s acquisitions continued to use their internal networking for quite a long time, just interconnecting the networks as necessary and announcing routes internally.
9.x addresses only started being used widely inside IBM around 1992 as the internal multi-protocol network rolled out (combining RSCS over SNA and TCP/IP). As APPC-connected devices gave way to TCP/IP-connected devices, allocations shot upward; IIRC each major campus was a /16.
Advantis/IBM Global Network ran the 9 network on the same physical and logical circuits as the public networks they managed, leading me to bypass the IBM firewall unintentionally multiple times as the filters they used broke. This may be one of the reasons RFC1918 addresses were discouraged (at least through 12/2001 when I left).
The acquisition's IPs might not conflict with IBMs, but surely they conflict with those of the other acquisitions? Is there any benefit after the first acquisition?
HP did the same for 15.0.0.0/8 and 16.0.0.0/8 until the HP/HPE split at which point I think they couldn't figure out who should get the address space. As 2 x /8 is pretty valuable, they sold off chunks of it and are presumably still doing so.
Ironically, having such addresses was sort of useful when companies got acquired and teams got shifted around. Starting to use an acquired company's network that was never designed with "what if we get acquired and have to play nice with others" in mind causes all sorts of routing pain.
GE did the same with 3.0.0.0/8 until they sold it to Amazon. When Amazon started actually using those addresses, it created a situation known internally as the “threepocalypse”.
Am I the only one alarmed that WD maintains a public registry (via DNS) of MyCloud device UUIDs, their public IP, and their private IP? How many of those are on networks with exploitable routers?
Like, you have an external entrypoint and a target internal IP that you know will contain a trove of potentially interesting data.
I agree that's a ridiculous privacy issue. Definitely a case of accepting poor security to provide a minor convenience (accessing your data from anywhere on the internet).
Didn't they actually recently recommend people disable direct Internet access and UPnP? I believe it was after a vulnerability was discovered in one of their legacy products.
Interesting work, but IMHO anything that extends the life of IPv4 does active harm. I'd prefer if these addresses stay out of the pool so scarcity increases and forces people to upgrade.
IPv4 is fundamentally too small, period. There are already more people and computers on Earth than possible IPv4 addresses even if it were perfectly optimally used. It leads us further down a path in which everything is behind increasingly starved NATs, making point to point connectivity more and more difficult. Now we are seeing NATs in front of carrier-grade NAT and other madness.
... and no, NAT is not a security feature. You can, and almost always do, have a firewall in front of IPv6. If you really want NAT there is IPv6 NAT, but it lets all mappings be 1:1, eliminating the port-exhaustion madness and making P2P always work. All internal IPs get their own external IP, but those can be random and rotated if you want.
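To illustrate the 1:1 idea, here is a conceptual sketch of prefix translation (roughly the NPTv6 concept, minus the checksum-neutral adjustment a real implementation performs; both prefixes are made up): the host's interface identifier is kept and only the network prefix is swapped, so every internal host gets its own distinct external address and no port multiplexing is needed.

```python
from ipaddress import IPv6Address, IPv6Network

# Conceptual 1:1 prefix translation: keep the host bits, swap the prefix.
internal_net = IPv6Network("fd00:1234::/64")      # made-up internal ULA prefix
external_net = IPv6Network("2001:db8:abcd::/64")  # made-up global prefix

def translate(addr: str) -> IPv6Address:
    host_bits = int(IPv6Address(addr)) & int(internal_net.hostmask)
    return IPv6Address(int(external_net.network_address) | host_bits)

print(translate("fd00:1234::1a2b"))  # -> 2001:db8:abcd::1a2b, unique per host
```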
Yeah I admire all of the efforts people are proposing, but they are bailing out a sinking ship with a spoon.
Let's say we claw back /8 networks somehow and let's say we can free up one of those /8 networks per year. Pretty optimistic.
The allocation rate for /8 networks from IANA, even after conservation measures were put in place, was still 5 /8 networks per year. Even if we conserved IP addresses five times better than that, this would still merely freeze the current situation for a few years until we run out once again.
There is just no amount of crumb picking that can fix the fact that 32 bits are woefully inadequate.
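Rough numbers, taking the comments' own assumptions (reclaim one /8 per year, historical demand of five /8s per year):

```python
# Back-of-the-envelope using the assumptions above.
slash8 = 2 ** 24                        # 16,777,216 addresses per /8
reclaimed_per_year = 1 * slash8         # optimistic clawback rate
demand_per_year = 5 * slash8            # historical IANA allocation rate
demand_with_5x_conservation = demand_per_year / 5
# Reclamation only just keeps pace with heavily conserved demand; the total
# pool is still capped at 2^32, so this buys years, not a solution.
print(reclaimed_per_year, demand_with_5x_conservation, 2 ** 32)
```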
I don't find the total-population argument very strong, because all that matters is the population of people who want to communicate with my service. If the people who can't reach my service also don't want to reach it, and I see no need for them to reach it, why do we both need the same protocol?
Something I have observed is that sites that tend to attract DDoS attacks tend not to use IPv6 (note that reddit and HN do not have AAAA records, though I don't know the actual reason for this). I've even seen the heavily attacked sites that I know are using paid Cloudflare or Sucuri services to not have AAAA records, and I wonder if that's a decision or recommendation from the service providers. So, elimination of IPv4 may mean that sites can more easily and cheaply be knocked off the Internet.
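That observation is easy to spot-check with the standard library (a quick sketch; results reflect whatever the sites publish, and the resolver you use, at the time you run it):

```python
import socket

# Spot-check which sites publish AAAA (IPv6) records.
for host in ["news.ycombinator.com", "reddit.com", "google.com"]:
    try:
        infos = socket.getaddrinfo(host, 443, socket.AF_INET6)
        print(host, "AAAA:", sorted({info[4][0] for info in infos}))
    except socket.gaierror:
        print(host, "no AAAA record (or the lookup failed)")
```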
As for point one: I'm not talking about client/server access to services. I'm talking about the capacity for endpoints to talk to each other. IPv4 would be fine if we want a fully centralized computing infrastructure where everything is only a thin client, but that's a future with zero privacy or personal freedom.
I don't think there's anything special about IPv4 in terms of DDOS mitigation. What you're probably seeing is an artifact of focus and investment. IPv4 is still the lowest common denominator standard. Virtually everyone can talk to an IPv4 endpoint. As a result the DDOS protection services still mostly use IPv4 endpoints because it reduces the amount of attack surface they have to protect. If they were dual-stack they would have to deal with BGP black holing on what amounts to two BGP networks instead of just one.
DDOS is something that desperately needs a more comprehensive solution, but it's a hard problem to solve. Right now the solution is for DDOS protection services to run bastions with enough bandwidth to absorb attacks, but that's a solution that constricts innovation tremendously. I feel like a permanent solution would require cryptography to be designed into the entire network so that you could do things like rate limit packets to your host for people who didn't present a certificate. That would require a deep redesign of the entire network though, and that's not going to happen.
One minor philosophical question. If you are using AWS PrivateLink because your VPC is not connected to the internet, are you really squatting anything? You just aren't using the public internet. This means that you own the entire address space and can decide what you want to do with it.
Of course it still may make sense to stick to ranges you own in case you need to peer your VPC with someone else, but I don't see much difference between using some random batch of IPs that you don't "own" on the public internet vs any block reserved for internal use. Either can conflict with someone that you want to merge with.
That's not really what the author was getting at. The VPC endpoints just provide a way (via Certificate Transparency logs) for the author to discover DNS names that they can then check to determine what IP addresses are being used in private networks.
They found a number of AWS users that are treating publicly routable IP space as their own private IP space. If someone were to ever offer a public service in that IP space, the company/network using it as private IPs would not be able to access the public service.
The author is trying to understand how prevalent this is, and how much trouble the owners of these IP ranges would run into if they decided to host a public service on them.
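A hedged sketch of that last step, for a name discovered via CT logs (the hostname here is hypothetical): resolve it and check whether the answer sits in RFC 1918 space or in someone else's publicly routable space.

```python
import socket
from ipaddress import ip_address

def classify(hostname: str) -> str:
    # Resolve a hostname discovered via CT logs and classify the answer.
    addr = ip_address(socket.gethostbyname(hostname))
    if addr.is_private:
        return f"{hostname} -> {addr}: RFC 1918 / reserved space"
    return f"{hostname} -> {addr}: publicly routable space (check the allocation)"

print(classify("internal-api.example.com"))  # hypothetical name from a CT log
```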
I agree with you in general. If you do expect to be able to connect to the public internet and map an endpoint over a public address you are aiming a gun at your foot. However the point I was trying to make was about this quote:
> This is useful since it can remove the need for some servers to have any outbound internet access at all.
My point is that if you are not connected to the public internet at all I don't see why you should be expected to follow the rules of the public internet (who owns what). You can use whatever rules you want for your own private network.
This is useful threat intel as well, b/c many firms use source IP addresses in policy constraints and log monitoring. However, it's trivial to masquerade as a target IP address range in a private VPC, and overlap could indicate that someone is up to some tomfoolery.
(FWIW, CloudTrail will include source VPC and/or VPC endpoint information when the request comes through an endpoint. This will help detect those requests.)
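For example, a quick pass over exported CloudTrail JSON could look like the sketch below. `sourceIPAddress` and `vpcEndpointId` are fields CloudTrail records carry, but treat the exact filtering as an assumption to adapt:

```python
import json
from ipaddress import ip_address

# Flag CloudTrail records that came through a VPC endpoint but carry a
# source address outside private space, i.e. possible squatting on
# publicly routable ranges.
with open("cloudtrail-events.json") as f:      # hypothetical exported log file
    records = json.load(f).get("Records", [])

for r in records:
    if "vpcEndpointId" not in r:
        continue
    try:
        src = ip_address(r.get("sourceIPAddress", ""))
    except ValueError:
        continue  # sourceIPAddress can be a service principal, not an IP
    if not src.is_private:
        print("check this one:", r["vpcEndpointId"], src, r.get("eventName"))
```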
I believe the networking term is "bogon". Basically you're using space in a way that isn't intended. Mostly I've seen it when people try to use RFC1918 space on public networks, probably because of misconfiguration; most routers/FWs will ignore these. This is sort of the inverse.
Not really, bogon is generally used for either unallocated space or "Martian" packets (packets with a source in private space). This space is unannounced, but not unallocated, therefore it doesn't show up on the bogon list.
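The distinction is easy to see programmatically; a small sketch (the sample addresses are illustrative only, and a real bogon check also needs the unallocated ranges from a maintained list):

```python
from ipaddress import ip_address

# RFC 1918 / reserved space vs. allocated-but-unannounced public space.
for sample in ["10.1.2.3", "192.168.0.1", "9.20.30.40"]:
    addr = ip_address(sample)
    if addr.is_private or addr.is_reserved:
        print(sample, "-> private/reserved: classic bogon material")
    else:
        print(sample, "-> allocated public space; may simply be unannounced")
```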
Yes, I didn't understand this as squatting either, and it made me question whether I understood the post, as it's a topic I admit to not being deeply knowledgeable about.
I think in general the author's definition of squatting is reasonable. I see it to mean "living on land you don't own" or more directly "using IP addresses that you don't own".
My point is about fully private networks that aren't connected to the internet. I would argue that in this case you do own all of the addresses, even if someone else owns them on the public internet.
A couple of years ago Amazon bought four million IP addresses (44.192.0.0/10) for $108 million.
AMPRnet sold them a quarter of the IP addresses that were allocated for amateur radio. They got a /8 back in the 1980s. A small number of addresses were used for ham radio networks, but the AMPRnet addresses were generally not routed between the internet and the radio networks.
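The per-address math, using the figures from the comments above:

```python
# 44.192.0.0/10 covers 2^(32-10) addresses; price per the parent comment.
block_size = 2 ** (32 - 10)              # 4,194,304 addresses
price_usd = 108_000_000
print(block_size, round(price_usd / block_size, 2))  # ~25.75 USD per address
```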
In the US it's hard for many home users to get static IPv6 allocations, but you can easily get an IPv4 block for $10/month or whatever. So it's the same issue: if you need static IPv6 you have to go to very expensive service tiers, given the supposed "shortage" of IPv6. The reality is, I think IPv6 is just a pain up and down to deal with, and they haven't sorted out all the tooling for static IPs.
Some nice data on the prices of IPv4 addresses: https://auctions.ipv4.global/prior-sales
I hope they stopped doing that, but I doubt it.
Because it shows how wastefully these companies operate with resources others need.
MIT Student Radio WTBS 1964-65.
https://www.youtube.com/watch?v=PI2Xx3XSTFw
WTBS "The Ghetto": Soul-Music Radio Show. Created by Black MIT students in 1970, this radio program gained popularity in the Cambridge/Boston area.
https://www.blackhistory.mit.edu/story/wtbs-ghetto
Promo for MIT BSU's "The Ghetto" (WTBS 88.1 FM)
https://www.youtube.com/watch?v=6wUcHb6FMY8
edit: Just did a quick WHOIS. They still have the /16 even though the university doesn't exist any more (merged with another). Crazy.
Turn IPv4 off for one minute a day.
Next month, increase it to two minutes, and so on.
IPv6 adoption will _soar_.
Here's the team cymru bogon list, for instance: https://team-cymru.com/community-services/bogon-reference/bo...
Sure, some companies have large blocks but that's nothing compared to that.
A certain popular western European ISP still offers IPv4 addresses cheaper than IPv6 ones (still at a high price, though).