To be fair, I have a bunch of instances with public IP addresses just so I can SSH into them easily. This situation made me look into how I'd get into these instances otherwise, and indeed you can set up an appropriate endpoint and then run "aws ec2-instance-connect" to open a tunnel that gets you an SSH connection without needing the public IP. Just like that, my need for public IPs goes down pretty drastically.
On the other hand, on my first try I couldn't get "instance-connect" to work, and it turned out I needed a different package, "awscliv2", which I had no idea existed. I've been using "awscli" for the longest time and didn't know there was an alternative, more up-to-date package available. What a mess.
Also, the new one apparently does a bunch of weird Docker magic in the background instead of just being a normal Python program, so I'm not sure what to think, but I guess it works. If anyone knows a leaner way to open an instance-connect tunnel, I'd love to know.
Don't overlook SSM <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/session-...>, which requires neither sshd nor public access to get onto a machine, and you can opt in to a bunch of audit logging if that's your jam. It also has some "ansible-y" behavior for running playbooks against groups of instances, but I haven't had good enough experiences with that process to recommend it. As a small bonus, you can also hop onto an instance from the AWS Console when using SSM, since it is websocket based and not "ssh from the browser".
I really like SSM, and specifically, for Windows RDP, the port forwarding feature. When combined with SSO, proper policies, and tagging, you can do away with SSH keys, bastion hosts, and VPNs.
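For reference, assuming the SSM agent is running on the instance and your IAM role allows `ssm:StartSession`, the interactive shell and the RDP port-forward look roughly like this (the instance ID is a hypothetical placeholder):

```shell
# interactive shell on the instance: no sshd, no public IP required
aws ssm start-session --target i-0abc123example

# forward local port 13389 to the instance's RDP port 3389,
# then point an RDP client at localhost:13389
aws ssm start-session --target i-0abc123example \
    --document-name AWS-StartPortForwardingSession \
    --parameters '{"portNumber":["3389"],"localPortNumber":["13389"]}'
```

Both commands also need the session-manager-plugin installed locally alongside the AWS CLI.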
The traditional way to achieve this is an SSH bastion. You have one tiny server running that has a publicly reachable IP and can also reach the other servers (usually via the private network). You SSH into the bastion, then SSH from there into your actual server (or use port forwarding to make an SSH connection via the bastion, which some tooling supports out of the box).
Considering the cheapest EC2 instance type is now cheaper than an IPv4 address, this is easy to justify if the AWS-specific options don't fit your use case.
Bastions are a good option. The ssh client supports jumping through them using the -J (jump) flag, or automatically by adding a 'ProxyJump' entry to your ~/.ssh/config.
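A minimal sketch of that setup, with hypothetical hostnames and user:

```
# ~/.ssh/config
Host bastion
    HostName bastion.example.com    # the one host with a public IP
    User admin

Host app-*                          # private servers reachable only via the bastion
    ProxyJump bastion

# 'ssh app-1' now tunnels through the bastion automatically;
# the equivalent one-off form is: ssh -J admin@bastion.example.com admin@app-1
```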
See, you’re already acting as if you’re being selfish.
You’re not.
Having an IP per endpoint that is conveniently globally routable from any other endpoint is the entire purpose of the Internet!
It’s not some sort of greed or abuse of privilege! It’s the reason for the thing to exist!
This is like going to a shopping centre that has been growing exponentially along with the local population but refuses to buy more shopping carts. You can’t feel guilty for using a shopping cart “just” for your quick snack shopping, as if that were a greedy move taking it away from more deserving people with “real” grocery shopping to do.
Stop thinking like this.
Seriously, STOP!
You’re the victim here.
You’re the victim of Amazon’s greed and lock-in.
You’re the victim of the lack of foresight for the most predictable resource exhaustion in the history of the world.
You’re the victim of a problem that has had a solution for two decades, a solution now included for free(!) in every network device being made, but turned off by lazy administrators who can’t be bothered to avert slow-moving catastrophes.
Are you using a docker image to run AWS CLI or something? It shouldn’t have anything to do with docker at all if you’re just running the bare ‘aws’ command.
aws ec2-instance-connect doesn't use docker; it just pushes a short-lived SSH key into the instance metadata that you can use to connect. You can connect via a public IPv4 address, IPv6, or a VPC private endpoint.
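For what it's worth, the bare flow with no Docker involved looks roughly like this (instance ID, zone, IP, and key path are hypothetical placeholders); recent AWS CLI v2 releases also wrap both steps in a one-shot `aws ec2-instance-connect ssh` subcommand:

```shell
# push a short-lived public key (valid for about 60 seconds) to the instance
aws ec2-instance-connect send-ssh-public-key \
    --instance-id i-0abc123example \
    --availability-zone us-east-1a \
    --instance-os-user ec2-user \
    --ssh-public-key file://~/.ssh/id_ed25519.pub

# connect with the matching private key inside that window
ssh -i ~/.ssh/id_ed25519 ec2-user@10.0.1.23
```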
I don't really know any details, but all I can tell you is that this is the output of running it in a fresh environment (I masked out the instance details and snipped some output):
And now I am logged into the machine. You can clearly see that it downloaded an AWS Docker image. Exit from ssh, back to the local terminal, and now there is an image on my machine:
$ docker images | grep aws-cli
amazon/aws-cli latest 817d1061df76 3 hours ago 384MB
So I don't know if I agree that aws ec2-instance-connect doesn't use docker. And I don't necessarily have a problem that it does, it just surprised me a bit. (My guess: the third-party "awscliv2" wrapper package may simply be running the official amazon/aws-cli Docker image under the hood, rather than the AWS CLI itself needing Docker.)
We [1] just went through a process to remove public ipv4 IPs for some of our internal facing EC2 instances.
Not having mature IPv6 support for VPC networking, even though regrettable, is understandable, but not supporting it for public endpoints like {s3,ec2,lambda,sqs,...}.{region}.amazonaws.com is a real head-scratcher.
Perhaps AWS could run these endpoints through CloudFront to get IPv6 reachability.
There's also the issue of AWS actually running out and not being able to take on more customers, or at least larger customers. It is solely for the benefit of AWS, not its customers, in any case. If it were about the customers, NAT gateways would have been reduced in price at the same time.
That being said, I do know companies that have freed up 80% or more of their IPv4 allocation with no service impact. It was simply not an issue previously and AWS made it easier to just allocate more public IP addresses.
They at least hit the right price to ensure that people care, without making it unreasonable if you do need an IPv4 allocation.
IP addresses have an actual market value now. They cost 50 cents a month to lease.
AWS is charging $3.60 per month, which isn't orders of magnitude off when you consider that AWS probably has a poor utilization rate (it can only advertise /24s) and profit margins to consider.
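As a sanity check on that figure: the announced rate was $0.005 per public IPv4 address per hour, which over a 30-day month comes out to exactly the price quoted:

```shell
# $0.005/hour * 24 hours * 30 days = $3.60/month
awk 'BEGIN { printf "%.2f\n", 0.005 * 24 * 30 }'
```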
IP addresses were intended to be a public good. The fact that AWS has monopolized so many of them, and now that it owns a significant fraction of them is deciding to charge for them, is ridiculous.
I'm not sure what AWS services not supporting IPv6 has to do with charging for public IPv4 usage. Private IPv4 is still free, so you can access services over VPC endpoints. NAT gateways cost the same and can access services over the public IPv4 network.
AWS also added IPv6 NAT64 gateway so it should be possible to run IPv6-only internally and still access AWS services and the rest of the Internet.
This would be acceptable if this internal route were already configured for free. However, VPC endpoints have their own pricing[1], at which point one must carefully consider whether paying for the IPv4 address is the better option.
Interestingly, S3 and DynamoDB access from privately addressed IPv4 VPCs is free, since it's implemented as gateway endpoints (route-table entries) rather than billed interface endpoints, and although rather constraining, you can definitely build things with that.
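That free S3/DynamoDB path is the gateway type of VPC endpoint, which adds route-table entries and carries no hourly charge (unlike interface/PrivateLink endpoints). A sketch of creating one, with hypothetical VPC and route-table IDs:

```shell
# gateway endpoints are free; only S3 and DynamoDB offer them
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc123example \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0def456example
```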
Case in point: I had some internal instances that very occasionally want to pull files from a partner's endpoint. Previous solution: NAT gateway. New solution: a DynamoDB request table streamed to AWS Lambda, with results landing in S3. Caveat programmer: only suitable if the task is not latency sensitive.
DynamoDB/Lambda are not everyone's cup of tea, but it's nice to have options.
Do the other cloud providers give you IPv4 endpoints for free? I didn't think that was the case (i.e., I thought AWS was just falling into line with the others on charging).
As for the "critical services not supporting IPv6" in the post I was quite surprised that it's such a short list of things that don't work - I would have expected more. They've obviously made a ton of progress.
Hetzner is charging €0.50/month, compared to AWS's $3.60/month. Which I guess is in line with the usual price difference between the two, but usually AWS does a better job of justifying the price difference.
I get the feeling this is far from an exhaustive list. I think this person just mentioned two things that they had run into and could say didn’t support IPv6 without having to look it up.
But did they tell you to "think about accelerating your adoption of IPv6 as a modernization and conservation measure" when they announced the charges? Because AWS did.
Oracle Cloud does. Linode did, I don't know if they still do. I believe Vultr may. It would take some time for someone to research all cloud providers.
Linode still does. I’m running a business on it (and moving personal stuff over from DigitalOcean). The pricing/performance value proposition on Linode has been fantastic.
One day network admins will be delighted to finally turn off IPv4 on their stacks, and router manufacturers will be happy to support only IPv6. It is definitely coming within my lifetime.
So, these intermediate steps are just cooking the final dish.
I don't think that will ever happen... Too many big businesses are squatting on their IPv4 allocations to ever want to spend the money to implement IPv6. They don't need to worry about allocations running out or being charged for IPs, because they've got theirs.
They would need to be forced to move by customers, probably through an ISP going IPv6-only. And that probably won't ever happen, as the customers of those ISPs would drop them for one that does support IPv4, because they expect access to the same big businesses mentioned earlier.
I think the whole thing has a lot of inertia against moving to IPv6, and there's no incentive to force it.
The agent is Apache 2 if one wanted to build, enhance, or audit what it does: https://github.com/aws/amazon-ssm-agent#readme as is the local binary that awscli uses for the websocket handshaking: https://github.com/aws/session-manager-plugin#readme
Just in case some people are in the same boat, here's a simple process [2] to remove public IPv4 addresses from existing EC2 instances (no need to shut down / reboot) -
1) Create a new Elastic IP with an auto-assigned IPv4 address - https://us-west-1.console.aws.amazon.com/ec2/home?region=us-...:
2) Associate this new Elastic IP with an existing EC2 instance. This will replace the instance's existing public IPv4 address.
3) Create a new network interface - https://us-west-1.console.aws.amazon.com/ec2/home?region=us-...:
4) Attach this new network interface to the EC2 instance. Now the EC2 instance has two network interfaces, and thus two private IPv4 addresses.
5) Disassociate the new Elastic IP & release it - https://us-west-1.console.aws.amazon.com/ec2/home?region=us-...:
By this point, the EC2 instance no longer has a public IPv4 address.
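If the console clicking gets tedious, the same steps can be sketched with the CLI; all IDs below are hypothetical placeholders for the values each command returns:

```shell
# 1) allocate a new Elastic IP (note the AllocationId in the output)
aws ec2 allocate-address --domain vpc

# 2) associate it with the instance, replacing the auto-assigned public IP
aws ec2 associate-address --instance-id i-0abc123example \
    --allocation-id eipalloc-0111example

# 3) create a second network interface in the instance's subnet
aws ec2 create-network-interface --subnet-id subnet-0222example

# 4) attach it to the instance
aws ec2 attach-network-interface --network-interface-id eni-0333example \
    --instance-id i-0abc123example --device-index 1

# 5) disassociate and release the Elastic IP
aws ec2 disassociate-address --association-id eipassoc-0444example
aws ec2 release-address --allocation-id eipalloc-0111example
```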
---
[1] https://www.listennotes.com/
[2] https://stackoverflow.com/questions/38533725/can-i-remove-th...
[1] https://aws.amazon.com/privatelink/pricing/