You should always verify that SSH password auth is actually off; run
ssh -v myserver : 2>&1 | grep continue
and make sure the "Authentications that can continue:" line lists only "publickey"!
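A tiny sketch of what you're looking for, run against a captured debug line so it's self-contained (the sample line and hostname are assumptions, not real output from your server):

```shell
# Real check:  ssh -v myserver : 2>&1 | grep continue
# Simulated ssh -v debug line so this snippet runs standalone:
line='debug1: Authentications that can continue: publickey'
methods=${line##*continue: }    # strip everything up to "continue: "
if [ "$methods" = "publickey" ]; then
  echo "OK: publickey only"
else
  echo "WARNING: server also offers: $methods"
fi
```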
(A surprising number of VPSs will re-enable passwords in a .d config file. And really, even if you've checked for that, the extra 10 seconds to make sure is worth it.)
And check from the server itself. Some versions won't run this command unless a privilege-separation directory exists (e.g. mkdir /run/sshd) when sshd isn't set to start automatically.
sshd -T | grep -i pas
passwordauthentication yes
permitemptypasswords yes
I permit this intentionally for my own reasons on my publicly accessible servers.
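That check can be scripted too. A sketch, run against simulated output so it's self-contained (on a real box you'd pipe in actual sudo sshd -T output instead of the sample here):

```shell
# On a real box:  sudo sshd -T | grep -i pas
# Simulated effective config so this snippet runs standalone:
effective='passwordauthentication no
permitemptypasswords no'
if printf '%s\n' "$effective" | grep -Eqx '(passwordauthentication|permitemptypasswords) yes'; then
  echo "INSECURE: password auth still enabled"
else
  echo "OK: passwords disabled"
fi
```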
Set up something like

(sleep 5m && sudo ufw disable) &

as an emergency recovery mechanism before you first run
sudo ufw enable
That way, if you've screwed up and locked yourself out with your new firewall rules, you can just wait 5 minutes and log back in (instead of paying for remote hands at your datacenter, or blowing away your vps and rebuilding from scratch).
Remember to re-enable the firewall, or cancel the pending job, once everything works for you.
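The at(1) variant of that escape hatch might look like the sketch below. The timings and job text are assumptions, and DRY_RUN=1 makes it only print each step instead of touching the firewall (set it to 0 on a real box with ufw and atd installed):

```shell
# Escape hatch with at(1) instead of a backgrounded sleep.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run sudo sh -c 'echo "ufw disable" | at now + 5 minutes'  # schedule the bailout first
run sudo ufw enable                                       # then turn the firewall on
# After confirming you can still SSH in:
run sudo atq            # find the pending job number...
# run sudo atrm <job>   # ...and cancel the bailout
```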
Alternatively, ensure you have console access via the VPS provider's web console/terminal. Then you can safely stop/start sshd, VPN daemons, bork up firewall rules, etc.
On some VPS providers you can also instantly reboot into a full-blown rescue OS that runs in memory, then mount and chroot into whatever disks need to be fixed.
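The rescue-mode recovery usually looks something like the following sketch, shown as comments since every step needs root on the rescue system. The device name /dev/vda1 is an assumption; check lsblk first.

```shell
# From the rescue OS:
#   mount /dev/vda1 /mnt                                      # root fs of the broken VPS
#   for d in dev proc sys; do mount --bind /$d /mnt/$d; done  # so tools work inside
#   chroot /mnt /bin/bash
#   # fix /etc/ssh/sshd_config, firewall rules, etc., then exit and reboot
```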
That's true, but I prefer to ingrain habits that'll work everywhere, instead of relying on things like VPS console access, which works fine on your EC2 instance or DO droplet but won't work out so well when it's your home server or colo'd box that you're trying to remotely secure from a hotel room while traveling.
But as @wink points out, these days you also need to ensure you've actually got `at` available, which is not guaranteed, especially with cut-down distros like Alpine.
And check that `at` works before relying on it for this. Maybe I'm still scarred from that one default install years ago where it was broken, or left out entirely. Unlike cron, I don't see it as a given these days.
For SSH, all you need is to allow only pubkey authentication. The other stuff, like Fail2ban, isn't necessary. If you want cleaner logs, you can change the port number too.
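In sshd_config terms, a minimal sketch of that (these are real sshd options, but treat the fragment as a starting point and verify the effective result with sshd -T):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
PermitEmptyPasswords no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
#Port 2222   # optional: cuts log noise, but is not a security boundary
```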
Better yet, use outbound-only SSH (via a tool from the VPS provider, like AWS Session Manager, or a third party).
Note that Docker ports which are not bound to a loopback address (i.e., -p 5432:5432 instead of -p 127.0.0.1:5432:5432) will be accessible from the outside. This applies even if you've configured UFW to block that specific port, because Docker manages its own iptables rules. https://docs.docker.com/engine/network/packet-filtering-fire...
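The fix is just the bind address in the port mapping. In compose terms it might look like this (the postgres image and port number carry over from the example above; service name is an assumption):

```yaml
# docker-compose.yml sketch
services:
  db:
    image: postgres
    ports:
      - "127.0.0.1:5432:5432"  # loopback only: unreachable from outside the host
      # - "5432:5432"          # binds 0.0.0.0 and bypasses UFW via Docker's iptables rules
```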
As far as Fail2ban goes, using it to lock the door is good, but removing the door entirely is better.
Fail2ban is useful for limiting failed access attempts, but closing the SSH port altogether limits attack pathways to only trusted parties in the first place — assuming SSH isn’t meant to be publicly accessible.
There are many modern technology options for enabling private access without needing to open firewall ports, many are listed at https://zerotrustnetworkaccess.info
Of these, mesh overlay networks appear to be gaining the most traction lately, especially among the HN crowd.
I would have appreciated the rationale behind setting 'UsePAM' to 'no'. I assume it's because, with password auth disabled, it's not necessary, and better to disable something that you don't need that would otherwise add to the attack surface?
Related discussion of Docker punching through UFW: https://www.reddit.com/r/docker/comments/18m1k0b/holy_sht_a_...
For completeness' sake: if someone is using SELinux in Enforcing mode, then UsePAM will likely need to be yes to avoid breaking sshd's mandatory access rules.