Then, for each service you're hosting on that server, do a search for:
Service/Program name + STIG/Benchmark
There's tons of work already done by the vendors in conjunction with the DoD (and CIS) to create lists of potentially vulnerable settings that can be corrected before deploying the server.
Along with this, you can usually find scripts and/or Ansible playbooks that will do most of the hardening for you, though it's a good idea to understand what you do and do not need done.
Check out online resources such as NIST's cybersecurity publications.
Basic things include disabling unnecessary services, disabling password authentication, setting up and verifying the firewall, configuring SELinux, and so on.
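A rough sketch of those basics on a systemd/firewalld/SELinux box (the service name and drop-in path are just examples, and this assumes a recent OpenSSH that reads /etc/ssh/sshd_config.d/):

```shell
# Disable a service you don't need (cups is just an example)
sudo systemctl disable --now cups

# Disable SSH password authentication, then reload sshd
echo 'PasswordAuthentication no' | sudo tee /etc/ssh/sshd_config.d/50-no-passwords.conf
sudo systemctl reload sshd

# Firewall: check what is exposed, remove what you don't need
sudo firewall-cmd --list-all
sudo firewall-cmd --permanent --remove-service=dhcpv6-client
sudo firewall-cmd --reload

# Verify SELinux is enforcing
getenforce
```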
That's why "availability" is a core tenet of security (according to some cybersecurity course I took). It is easy to prevent unauthorized access to data if you have no requirements on authorized access.
Ubuntu has a set of scripts you can run to harden a new server (not advisable on a server that has already been configured for something). You need an Ubuntu Pro subscription to access them but you can get a free trial and then cancel it after you've finished.
I did this process for a customer recently and it was pretty straightforward and much much more thorough (over 100 configuration changes) than just tweaking SSH and fail2ban.
I expect other commercially-oriented distros offer something similar.
Also, move SSH to a different, higher port. Since SSH isn't exactly for noobs, changing the port is easy enough to work with, and that alone already reduces port scans and whatnot.
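If you go that route, it's a one-line change (the port number here is just an example); on SELinux systems you also have to label the new port before sshd can bind it:

```shell
# /etc/ssh/sshd_config
# Port 2222

# On SELinux systems, allow sshd to bind the new port, then restart:
sudo semanage port -a -t ssh_port_t -p tcp 2222
sudo systemctl restart sshd
```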
I recently set up Guacamole (web-based VNC/RDP/SSH) with TOTP and was able to close external SSH access. Now everything I run can sit behind a single reverse proxy, no extra ports.
SSH - change the port, disable root login, disable password login, set up SSH keys using an SK key type (a YubiKey in my case)
nftables - I use https://github.com/etkaar/nftm to keep things quick and simple. I like the fact that it will convert DNS entries to IPs; I then just use dynamic DNS update clients on all my endpoints
WireGuard for access to services other than SSH (in some cases port 443 will be open if it's a web server or proxy)
rsyslog to forward auth logs to my central syslog server
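The SSH and rsyslog items above boil down to a few lines; generating a hardware-backed SK key for a YubiKey looks roughly like this (the key path and the syslog hostname are placeholders):

```
# On your workstation: generate an SK-backed key (touch the YubiKey when prompted)
ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk

# /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no

# /etc/rsyslog.d/10-forward-auth.conf -- forward auth logs over TCP (@@)
auth,authpriv.* @@syslog.example.internal:514
```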
That does not do much in practice. When a user account is compromised, a simple alias dropped into .bashrc can capture the sudo password.
The better recommendation is to explicitly limit which user accounts can log in, so that no test or service account with temporary credentials can accidentally log in via SSH.
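In sshd_config that's an explicit allow-list (the account and group names here are examples):

```
# /etc/ssh/sshd_config -- only these accounts may log in over SSH
AllowUsers alice bob

# or manage it via a dedicated group instead:
AllowGroups ssh-users
```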
I think the point is that root is a universal user found on all Linux systems, whereas regular users have all kinds of names. It narrows down the variables to brute-force, so simply removing the ability to use root means attackers have to guess both a username and a password.
Security by obscurity is no security. Use something like fail2ban to prevent brute force.
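A minimal fail2ban jail for sshd (the thresholds are examples, tune them to taste):

```
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```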
When you use a secure password and/or key, this also does not matter much.
Moving ports does help. It is not a sure thing, but used in conjunction with other security mechanisms it can help get rid of the low-hanging fruit of scriptkiddies and automated scans.
But scriptkiddies and automated scans are not a security threat. If they were a legitimate threat to your server, you have bigger problems.
All it does is reduce log chatter.
Anyone actually wanting in would port scan, then try to connect to each port, and quickly identify an SSH port.
Imagine that the xz exploit actually made it into your server, so your sshd was vulnerable. Having it on another port does seem helpful then. In fact I sometimes think of putting mine on a random secret address in the middle of a /64 IPv6 range, but I haven't done that yet.
It occurs to me that the xz exploit and similar cases are a good reason not to run the latest software. It affected Debian Sid but not the stable releases. I'm glad I only run the stable ones.
Maybe I'm missing something but how is the host ip known? The server has a maybe-known range of addresses, but I don't announce which address has an sshd listening. There are 2**64 addresses in the range, so scanning in 1 second doesn't sound feasible.
I've never seen an attack that scans all ports. Normally it just checks open ports and then tries common credentials and exploits. If that fails it moves on to the next IP.
Changing the default port on SSH probably isn't going to do much, as SSH is already pretty secure. However, it is a good rule of thumb to change defaults.
The XZ backdoor was never exploited in the wild, so it is hard to say what would have been effective.
The important thing to note is changing the defaults on systems. Defaults are bad because they make it easy to take over a large number of systems at once. Even right now there are bots testing common ports for weaknesses.
Just have 2 ipv4 assigned to your server. Have 1 for all your services, and run ssh on the other allowing root login with the password "admin".
A random IPv6 address in the same subnet as your server is just obscurity.
The XZ exploit would be functionally similar to allowing root login using the password "admin".
Would doing that on a different port be secure? No? Then a different port is not security, it's obscurity.
Obscurity is just going to trip you up at some point; all it really buys you is less log chatter.
And yes, running LTS/stable is a sensible choice for servers.
It defends against the lowest level of automation. And if that is a legit threat in your model, you are going to have a bad time.
It's just going to trip you up at some point
Still does nothing when scanning the entire IPv4 address space is achievable so quickly. You can also use services like Shodan to find vulnerable services on any port.
Use SSH keys and keep everything upgraded. Make management services (SSH, RDP, admin interfaces) accessible only via VPN (WireGuard). Only expose ports 80 and 443 to the internet, and only if necessary.
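A minimal WireGuard server config to put management services behind (keys, addresses, and the port are placeholders):

```
# /etc/wireguard/wg0.conf -- server side
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# your laptop / admin machine
PublicKey = <laptop-public-key>
AllowedIPs = 10.8.0.2/32
```

With that up, firewall SSH so it only accepts connections from 10.8.0.0/24, and expose only 51820/udp (plus 80/443 if you're serving web traffic) publicly.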