I know at least one company that, after complaining bitterly over the weekend, freed up more than 50% of their IPv4 addresses today after a quick audit and change.
Seeing something like that makes me think that AWS is completely justified in bumping the price on IPv4 addresses. People used IPv4 indiscriminately and didn't care because AWS ensured that their customers would always have enough addresses available.
Not exactly. With most AWS services you can't release the IPv4 addresses. You automagically get three IPv4 addresses assigned when you create a load balancer, even if you want that load balancer to be IPv6-only.
And their native support for IPv6 within their services is hit-and-miss at best.
Yeah, this is my only real complaint with this. Internally their IPv6 support is pretty garbage, which is kind of fine since your internals shouldn't be exposed externally anyway; it's at least in theory possible to have an internal IPv4 network that exposes traffic to the public over IPv6 using something like a load balancer.
Load balancers are already somewhat expensive: the base cost for each load balancer is $16.43 a month before bandwidth. Three IP addresses at 12 cents per day each, over 30 days, is another $10.80 a month. In other words, load balancers just had their base price increase by 65%.
I recently started working for a startup. My main responsibility is to develop networking features for our bare-metal cloud. We went IPv6-by-default, but we soon discovered that the biggest issue is "not" the setup side. IPv6 setup is actually quite straightforward if you're starting from scratch. The biggest problem with IPv6 is that the ecosystem is not ready for it, at all. You cannot even use GitHub without a proxy!
Hence, we had to start implementing IPv4 support immediately, because developer VMs that only have IPv6 are almost useless.
GitHub is one of the most idiotic IPv4-exclusive services. Microsoft and Azure have all the knowledge and equipment to make IPv6 available to practically any site, but GitHub seems afraid to ask. They had IPv6 for a short while and later turned it off.
Zscaler is worse on a practical level. I have to disable IPv6 system-wide, or else I can't access internal services (it only routes IPv4). The crazy thing is that they call out VPNs for being archaic, but force users to use an even more archaic technology.
Luckily that does not seem to be an issue here. You only have to pay for a public IPv4 address; you still have a full IPv4 stack and are able to make outbound connections via NAT.
I recently tried to deploy GitLab from scratch on an IPv6-only network, and the initial experience was anything but smooth. I was met with an exception right in the console during the initial setup. GitLab attempted to obtain a Let's Encrypt certificate and immediately failed, as it doesn't listen on IPv6 addresses by default (see the sketch below). A year ago, we (at work) faced similar issues when trying to deploy GlusterFS on an IPv6-only network, and it also failed. (I pushed for v6-only; my manager was not happy.) It's evident that while IPv6 may be the future, the present ecosystem isn't fully prepared to support it.
For years, I have wanted to use Docker with IPv6 only, and I am really thinking about learning Go so I can write my own IPv6-only driver.
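To make the "doesn't listen on IPv6 by default" failure mode concrete, here's a minimal Go sketch (not GitLab's actual code, just an illustration of the socket-level difference) showing an IPv4-only listener next to one bound to the IPv6 wildcard, which on most systems also accepts IPv4:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // "tcp4" restricts the socket to IPv4; a service configured like
        // this is unreachable from an IPv6-only network.
        v4only, err := net.Listen("tcp4", "0.0.0.0:8080")
        if err != nil {
            panic(err)
        }
        defer v4only.Close()

        // "tcp" on the IPv6 wildcard usually yields a dual-stack socket
        // (unless the OS forces IPV6_V6ONLY), reachable over v4 and v6.
        dual, err := net.Listen("tcp", "[::]:8081")
        if err != nil {
            panic(err)
        }
        defer dual.Close()

        fmt.Println("v4-only:", v4only.Addr(), "dual-stack:", dual.Addr())
    }

Anything set up the first way is simply unreachable from an IPv6-only client, no matter how the rest of the stack is configured.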
Yeah, it's a real shit show when you get down to actually trying to utilize IPv6 in any scenario that needs legacy IPv4 access in a straightforward way.
I'm somewhat happy in that I've moved away from being way down at the low-level ISP/network side of things, so I may be missing something, but I don't see how we are ever going to elegantly transition away from IPv4 addresses. Everything just seems hacky and fragile in terms of trying to run a "pure" IPv6 environment, and be connected to the rest of the Internet.
I think that ISP-wide single-stack IPv6 deployments are the key. They throw the legacy IPv4 internet behind a huge NAT, while letting the IPv6 internet function natively. There is an IPv6 address range that represents the totality of the legacy IPv4 addresses, and IPv4 addresses are translated into it (sketched below), so from the IPv6 side of the NAT, every IPv4 service looks like it has an IPv6 address. From the IPv4 side, it looks like your standard carrier-grade NAT, with huge numbers of users sharing IPv4 addresses.
That should simplify the network over dual-stack deploys, plus it makes providing services in "native" IPv6 the more attractive choice over NATted-to-death IPv4.
There are already some ISPs doing this. In Japan, where I live, one of the big three, NTT Docomo, transitioned to a deployment like this just last year.
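For the curious, the "IPv6 range that represents the IPv4 internet" mentioned above is typically the NAT64 well-known prefix 64:ff9b::/96 (RFC 6052), with the 32-bit IPv4 address embedded in the last four bytes. A rough Go sketch of just the address mapping (a real translator also rewrites headers and checksums, which is omitted here):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // toNAT64 embeds an IPv4 address in the NAT64 well-known prefix 64:ff9b::/96.
    func toNAT64(v4 netip.Addr) netip.Addr {
        out := netip.MustParseAddr("64:ff9b::").As16()
        oct := v4.As4()
        copy(out[12:], oct[:]) // the last 32 bits carry the IPv4 address
        return netip.AddrFrom16(out)
    }

    func main() {
        fmt.Println(toNAT64(netip.MustParseAddr("192.0.2.1"))) // 64:ff9b::c000:201
    }

DNS64 does the matching trick on the name side, synthesizing AAAA records under that prefix for names that only have A records.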
GitLab supports IPv6, just not out of the box.
My private GitLab instance is v6-only. I had problems with updates in the beginning because the official repo was not on v6, but I think they fixed that some years ago. (I am not a GitLab expert, but I think my type of install is called Omnibus.)
Back when I had Comcast (and thus native IPv6 at home), this was a great way to expose a web server at home without resorting to either weird port forwarding or setting up a proxy + SNI. Both of those work, but this is super clean.
Even using something like Hurricane Electric, this is still a nice solution to get access to services hosted on a residential connection. Feels a lot cleaner to me than weird reverse tunnelling solutions.
Are there any plans for SSH tunneling without using cloudflared at the client side?
Also: supporting both a SSH and an HTTP tunnel on the same A record would be nice
I'm using Zero Trust Tunnel for some web apps I host in my home, but I'm trying to think if the older service (IPv4 to IPv6) you describe would be useful for anything, like ssh'ing into my home from an external VPS.
Would the earlier product be used for something like a router, which can't run the Tunnel service?
Does this end up being similar to, say, HAProxy doing domain-based load balancing to IPv6 endpoint(s)? I assume you have loads of customers on any single IPv4 ingress address, right?
It's really unnecessary not to use IPv6 addresses: 2^128 addresses and the many features it offers, like unicast etc. IPv6 makes using a server as a middleman for some (IPv4-only) applications completely obsolete.
But a big problem is that a lot of devices still get no IPv6 autoconfiguration at all (e.g. no default gateway or no global address configured). Especially Android devices and, from experience, also Windows; Linux depends on the distro. Changing routing settings on Android devices from IPv4 to IPv6 often does not work or, strangely, is not offered by the ISP.
And there are other problems, like routers having incoming and outgoing IPv6 connections enabled by default, which is good, but router advertisements blocked by default, which is bad, since that leaves the OS no way to get the prefix and construct global addresses automatically (see the sketch below). Most users today have little to no knowledge about networking and computers in general, so autoconfiguration is a must.
That leads to IPv6-only servers being unreachable, and thus buying IPv4 addresses makes a lot of sense at this point.
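To illustrate the "prefix plus autoconfiguration" point: with SLAAC, a host takes the /64 prefix from a router advertisement and appends an interface identifier of its own. A simplified Go sketch using the classic modified-EUI-64 identifier (modern stacks usually prefer random or stable-privacy identifiers instead, per RFC 7217/4941):

    package main

    import (
        "fmt"
        "net"
        "net/netip"
    )

    // slaacAddr combines an advertised /64 prefix with a modified-EUI-64
    // interface identifier derived from the MAC address.
    func slaacAddr(prefix netip.Prefix, mac net.HardwareAddr) netip.Addr {
        a := prefix.Addr().As16()
        // Split the MAC, insert ff:fe in the middle, flip the universal/local bit.
        iid := []byte{mac[0] ^ 0x02, mac[1], mac[2], 0xff, 0xfe, mac[3], mac[4], mac[5]}
        copy(a[8:], iid)
        return netip.AddrFrom16(a)
    }

    func main() {
        prefix := netip.MustParsePrefix("2001:db8:1:2::/64") // learned from a router advertisement
        mac, _ := net.ParseMAC("52:54:00:12:34:56")
        fmt.Println(slaacAddr(prefix, mac)) // 2001:db8:1:2:5054:ff:fe12:3456
    }

Without the router advertisement there is no prefix to combine with, which is exactly why blocking RAs breaks autoconfiguration.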
Building IPv6 autoconf into the protocol was a mistake. DHCPv6 is better.
The problem is that when you autoconf on a local network you usually want more than just a route and basic DNS. Trying to do it in the IP protocol is a bad idea since the IP protocol is intended to almost never change. It belongs in a protocol that's less tightly bound to the IP stack that can be more easily extended like DHCP.
DHCP can also integrate with things like local DNS, while this is much harder to do with IPv6 RA and SLAAC.
SLACC is something that sounded good on paper but doesn't adequately capture the entire problem domain.
IPv6 in general needs to just deprecate all the parts of the protocol that are anything but just making IP addresses larger. Everything else is "second system effect" cruft that tends to impede adoption by adding complexity or adding features in the wrong place in the stack.
For clients, IMHO, SLAAC is fine and means I don't have to maintain and run a DHCP service anymore. One less thing that can fail, while SLAAC only fails me if the router's IPv6 link inside the given network goes down.
Servers on the other hand, I will provision with a static IP subnet on deploy, as part of the PXE install or configuration management process, depending on the environment. They will have an ephemeral address during the install, but then query for and persist their allotted address before rebooting into the installed environment as part of their post-install.
I guess we agree that we need a single source of truth for which physical device has which IP (range) in its possession at any time. DNS is a classic way to do that, but there are other solutions, from ITIL-style CMDBs to simple config-management git repos. And of course the latter doesn't mean that we don't also update DNS based on IP assignment; DHCP is not the only tool that can be made to interface with a DNS service.
> SLACC is something that sounded good on paper but doesn't adequately capture the entire problem domain.
Good news! Nothing in SLACC (sic) prevents you from using DHCPv6.
But now, since we have SLAAC as well, you get auto-magic working with simple link-layer connectivity without having to bother with extra infrastructure. If you need extra functionality, you have the option (not necessity).
As long as ISPs are unwilling to actually work on the problem of letting their customers use IPv6, applications/services will continue to be uninterested in exposing IPv6.
The worst foot-draggers are major sites like Github and cloud infrastructure. Google only got IPv6 in GKE this year in most regions.
The other big foot-draggers are corporate networks. Even if the ISP supports v6, many corporate networks do not, because two generations of IT professionals learned how to do networking entirely through the lens of NAT as a requirement and don't understand how to do things without it. I've seen many IT people's brains just melt at the idea of things having just one address. In reality it simplifies things dramatically, but sometimes getting people to grasp a simpler solution is actually harder than getting them to grasp a complex one.
I live in the USA and have had IPv6 at home for over a decade (and have used three different ISPs in that time). Many mobile networks are IPv6-first.
Nowadays, you shouldn't be allowed to advertise "internet access" if ipv6 isn't supported.
Ipv6 is the current protocol. And some sites don't have ipv4.
(Amazon charging an extra for ipv4 is another sign that ipv4 should be a protocol for particular use cases, not for "the internet")
And it should be the same for software and connected hardware.
No ipv6? That's not a product that works over the internet.
On a personal note, what I host only works over ipv6, since my ISP has stable ipv6 but not ipv4, and for the convenience of configuration.
And even cheapo internet plans on mobile and landline support ipv6 by default nowadays. (The government pushed for it)
All major ISPs have had native ipv6 for customers in the US for at least 5 years. Not some funky bastardized implementation but native full ipv6.
Ipv6 is overly complicated and has been riddled with bugs for 30 years now. As long as ipv4 is an option many are going to choose to completely disable it. Some of the security concerns cannot be effectively filtered at all. There are numerous examples of these vulnerabilities from even just the last few years.
It’s hard for teams of engineers to secure properly much less a home user.
I completely disable ipv6 even with a deep understanding of it.
Absolutely, a very good point indeed. But I meant more specific applications, like end-to-end messaging. Surely 'obsolete' is a bit of an overstatement. In the end it depends on how one looks at things and wants them done.
IPv4 addresses have always had a cost (sort of, though they've gone from pennies per IP to $60+ per). I get the feeling Amazon was happy to eat the cost to reduce friction in deploying EC2 instances but now they've hit maximum saturation and now they can just add another charge to the pile that 99.99% of users will never notice.
> I get the feeling Amazon was happy to eat the cost to reduce friction in deploying EC2 instances [...] and now they can just add another charge to the pile that 99.99% of users will never notice.
This always leaves me puzzled about the concept of "free markets." How can smaller entities compete when these massive conglomerates can perpetually introduce loss leaders or subsidize pricing in new sectors using profits from their existing businesses? This strategy effectively shields them and reduces competition.
My initial thought is that it should be illegal for companies to invest in sectors unrelated to where they generated their profits. However, I recognize this could lead to numerous unintended consequences.
I'm not sure I agree with the premise of the question. Not charging for IPs was less likely to be some massive subsidization plan to create a loss leader and capture market and was more likely just what everyone else was doing - ignoring charging for the few pennies because the juice wasn't worth the squeeze and they've got better ways to spend time trying to make real money. Now prices are getting very high and that's no longer true.
It’s a case by case basis. You can’t set a rule and say that every time an established leader eats a cost that it’s right or wrong.
In this case, AWS has had plenty of competition via other cloud services like Azure and Google Cloud as well as other hosting options. The fact that they ate this cost was immaterial and I don’t see any issue with it.
Even with all the competition, the alternatives still kind of pale in comparison so it’s definitely not a competition problem.
Well since aws has been hugely profitable, you can easily argue it was simply priced in. I guess they did buy a lot of their ips when they were much cheaper, but it's similar to buying and holding land in many ways.
I had heard it was mostly due to behind-the-scenes implementation details anyway; they likely finally resolved those.
> My initial thought is that it should be illegal for companies to invest in sectors unrelated to where they generated their profits. However, I recognize this could lead to numerous unintended consequences.
Main consequence would be forcing companies to go bankrupt, instead of pivoting to new areas, when their current market becomes obsolete/commoditized.
This would be impossible to ban, and even small businesses do it. Imagine if the corner store charged you to park in the car park, charged you to stand in the store, charged you for every staff interaction, if you open the fridge they charge you a small amount for the power you consumed, etc.
IPv4 addresses were once as insignificant as any of these costs; now they aren't, so they are charging for it.
Speaking of IPv4 addresses, it's far worse than "free markets". It's rent-seeking by internet early adopters (specifically the US). A new Indian ISP doesn't have much choice. IMO it's a good thing, because AWS users will waste fewer IPv4 addresses.
I run quite a few small production AWS accounts for clients, and this is a big increase to their bill. If you use a lot of t4g.nano instances, the IPs cost more than the machines. I think the large customers that are 99% of the revenue won't care, but the bottom 50% of customers will notice.
Most customers will have very few public IPs. If your architecture is based around dozens (or even more) of public IPv4 addresses then you need to rethink your design because cost isn’t the only risk you’re exposing.
Now that you mention this, it's interesting for me to consider in retrospect. At least one prior employer had me architect solutions that were designed around scaling up using very small instances in large numbers during peak load.
In theory those are deployed in the private address space behind a load balancer, but getting any actual information on the production deployment was like pulling teeth.
At AWS or in general? To my knowledge, existing assignments aren't incurring any annual fees (or if so, not more than IPv6).
There's just a secondary market for v4 these days, but that's also a one-time cost, as far as I know.
In other words, either AWS is charging a recurring fee for an asset they purchase at a one-time flat fee (which is great if you use a service for less than the year or so it takes to amortize, and not so much afterwards), or I missed a development in the IPv4 exhaustion saga.
I assume it’s a one time fee for AWS and pay forever for all of us that can’t afford to buy a block.
I thought it was closer to $40 per IP last time I looked. AWS charging $3.60 per month looks pretty lucrative either way since the payback is only 1-1.5 years.
It's pretty galling for AWS to ask their customers "to be a bit more frugal with your use of public IPv4 addresses and to think about accelerating your adoption of IPv6 as a modernization" when they themselves have been dragging their feet in IPv6 adoption, and in many cases are still blocking or at least making it unnecessarily difficult to use IPv6.
Well, ask yourself: if you're Amazon and you have the choice to spend money getting ipv6 working properly, or you can make money selling v4 addresses without any risk of customers jumping ship, what would you do?
Actually, changing cloud services is in the sweet spot of being both manageable and complex enough that you won't do it every other day.
So if companies are fed up for long enough (or if engineers find it so complicated or costly that they learn to do cloud with something else), they will change. And when they change, they will never come back. Amazon would be just like AOL or Yahoo (or Jenkins or SVN) or any forgotten giant.
Additionally, many companies don't rely on any cloud service yet. One day, when they eventually adopt one, they won't pick what has annoyed others.
AWS is already more expensive than the services of smaller companies (OVH and the like). So beware.
This change on its own won't make much difference, but they shouldn't be too pushy.
A significant number of AWS customers are "internal", and (believe it or not) the cost of resources does come up in design meetings at Amazon. This change might actually light a fire under those teams to actually start supporting IPv6 properly.
Amazon really needs to put a ton of work into making v6 work for everyone on the server side or this is a very big price increase on the low end.
If there were a compelling path where you do the devops work and then everything's fine, I wouldn't mind this at all. The reality is a ton of stuff is IPv4-only (CloudFront origins, ALBs require IPv4, etc. etc.).
They realistically need free NAT or free 6to4 as a transition plan.
This has been driving me crazy for years now. AWS still doesn’t have complete IPv6 support, in 2023. They are front and center to IPv4 exhaustion yet seem unconcerned.
Even Amazon can't make every single ISP in the world provide IPv6 connectivity, which would be required to actually deprecate IPv4 on the server side (or at least at the load balancer or other type of HTTP reverse proxy).
They should offer a NAT64/DNS64 gateway. It would work like the other gateways and give IPv4 access to IPv6 hosts. It would probably be expensive, like the NAT Gateway.
The last time I tried to set up IPv6 with my VPC, it was an absolute nightmare. Maybe I'm not devops-y enough, who knows. But all three of my earnest efforts to use IPv6 have gone pretty badly.
Has anyone successfully used AWS's IPv6 offerings to stand up a VPC/ECS/ALB/RDS using secure best practices without friction? What tutorials did you follow? I'm all ears.
This explains a lot. I wanted to be a good citizen and use IPv6 exclusively internally and keep IPv4 at the edge, then I found I couldn’t create a database without a bunch of IPv4 settings I hadn’t configured.
My IPv4 server has 127.0.0.1/8, 10.64.78.37/32, 172.17.2.1/16, and a public IP hidden somewhere. The 172.16/12 networks I see are usually Docker doing Docker things, but I'm still left dealing with three different IP addresses.
Not that it matters much, because they all just appeared on the right interfaces and started working.
You may need to know some basic things about IPv6 for your firewall ("fe80::/10 means link-local"), but the same is true for IPv4 ("10.0.0.0/8 means private network"). I think they're equally difficult to manage, but I can understand how daunting it may look to someone who's been taught networking by outdated textbooks that lack IPv6, like so many other people.
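As a small aside, the rule-of-thumb classification above is something standard libraries already expose; for example, Go's net/netip can tell link-local, private, and global addresses apart for both families (the addresses below are just documentation/RFC 1918 examples):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        for _, s := range []string{
            "fe80::1ff:fe23:4567:890a", // link-local, like the fe80:: line ifconfig shows
            "10.64.78.37",              // RFC 1918 private IPv4
            "2001:db8::1",              // global unicast (documentation prefix)
        } {
            a := netip.MustParseAddr(s)
            fmt.Printf("%-26s link-local=%v private=%v global=%v\n",
                s, a.IsLinkLocalUnicast(), a.IsPrivate(), a.IsGlobalUnicast())
        }
    }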
https://github.com/orgs/community/discussions/10539 is full of people voicing their grievances but I don't think Github is paying this issue any attention anymore.
Luckily, almost all providers of IPv6-only networks also offer NAT64 or similar NAT mechanisms to make IPv4 addresses reachable.
All those engineers left or retired and have been replaced by outsourcers and H1Bs.
Nobody at Azure can even spell IPv6.
Turning it on is difficult and then it breaks everything, including unrelated services in other peered vnets.
It’s just shameful how much the engineering skill has degraded in Redmond.
There ain't no such thing as a free lunch.
Some of the ecosystem must be ready for it, and IPv6 support can be just another requirement when choosing among solutions.
Also, you can have a reverse proxy and a cloud behind NAT64 to run servers on ipv4, but access them with ipv6.
If you need more than web traffic, you can use our Tunnel service.
(Now I only have IPv4, so I just use Tunnel).
You get A and AAAA records by default.
Some countries are doing better than others (https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...), but still, ISPs are really dragging their feet...
Not really, proxying also provides user privacy, and enables DDoS protection (this is especially an issue in the video game world).
So, what could be an alternative solution?
For RDS, you have to set up your instance as dual stack explicitly even if you’re deploying it into an IPv6 subnet.
I was planning to deploy an internal developer platform (think local PaaS) using Lambdas behind an API Gateway. No IPv6 there?
Guess they will improve soon, as Amazon starts charging.
For example, when I do an ifconfig, I get 3 ip6 addresses but 1 ip4 address.
'?' indicates a unique value, 'x' means values that match between the IP addresses. That alone indicates the complexity IPv6 adds when setting up a server.
inet6 ????::????:????:????:???? prefixlen 64 scopeid 0x20<link>
inet6 xxxx:xxx:xxxx:xxxx::???? prefixlen 128 scopeid 0x0<global>
inet6 xxxx:xxx:xxxx:xxxx:????:????:????:???? prefixlen 64 scopeid 0x0<global>