I mean, surely a Mac Studio with an M4 Max must be the best, right? It's an entire CPU generation ahead and it's maximum! Of course, it's not... the M3 Ultra is the best.
Naming things is hard.
You obviously don't put your environment-setup user in your app. That would be utterly stupid.
MS needs to advertise this feature more, because I'd never heard of it and assumed all the files on the PC were toast!
Of course, the fact that a script on Windows can be accidentally run and then quietly encrypt all the user's files in the background is another matter entirely!
So the last time I looked, unit costs were low, sure. But the all-in costs were high.
Currently, given the extremely low (and dropping YoY) cost of storing cold data at rest, the essentially free cost of ingest, and the high cost of retrieving cold data which I almost never have to do, the ROI is wildly positive. For me.
And since all of these things (how many providers, which providers, which storage classes, how long to retain the data, etc.) are fine-tunable, you can basically do your own ROI math, then pick the parameters that work for you.
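For illustration, here's a back-of-the-envelope sketch of that ROI math. All prices are hypothetical placeholders, not real provider rates; substitute whatever your chosen providers and storage classes actually charge:

    # Rough sketch of the ROI math above. All prices are made-up placeholders;
    # plug in your own providers' cold-storage, retrieval, and ingest rates.
    ARCHIVE_PRICE_PER_GB_MONTH = 0.001   # $/GB-month, cold tier (placeholder)
    RETRIEVAL_PRICE_PER_GB = 0.02        # $/GB retrieved (placeholder)
    INGEST_PRICE_PER_GB = 0.0            # ingest assumed free, as noted above

    def yearly_cost(total_gb, restores_per_year, restore_fraction=1.0):
        """Estimated yearly cost of parking `total_gb` in cold storage and
        restoring `restore_fraction` of it `restores_per_year` times."""
        storage = total_gb * ARCHIVE_PRICE_PER_GB_MONTH * 12
        retrieval = total_gb * restore_fraction * RETRIEVAL_PRICE_PER_GB * restores_per_year
        ingest = total_gb * INGEST_PRICE_PER_GB
        return storage + retrieval + ingest

    # e.g. 2 TB of backups, a full restore expected about once a decade:
    print(yearly_cost(2000, restores_per_year=0.1))   # ~$28/year at these rates

Storage dominates when restores are rare, which is why the high cost of retrieving cold data barely moves the total for this use case.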
Are you sure about that? Many ransomware attackers do recon for some time to find the backup systems and then render those unusable during the attack. In your case your cloud credentials (with delete permissions?) must be present on your live source systems, leaving the cloud backups vulnerable to overwrite or deletion.
There are immutable options in the bigger cloud storage services but in my experience they are often unused, used incorrectly, or incompatible with tools that update backup metadata in-place.
I’ve encountered several tools/scripts that mark a file as immutable for 90 days the first time it is backed up, but don't extend that date correctly on the next incremental, leaving older but still critical data vulnerable to ransomware.
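As a concrete sketch of doing it right with S3 Object Lock (the bucket name, keys, and 90-day window below are placeholders; the same idea applies to other providers' immutability features): the backup tool has to push the retain-until date forward for every object the latest backup still references, not just for freshly uploaded ones.

    # Sketch: re-extend Object Lock retention on every backup run, not only on first upload.
    # Assumes a bucket created with Object Lock enabled and credentials allowed to call
    # PutObjectRetention; bucket/key names and the 90-day window are placeholders.
    from datetime import datetime, timedelta, timezone

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    def extend_retention(bucket, key, days=90):
        """Push the object's retain-until date `days` into the future (never shorten it)."""
        new_until = datetime.now(timezone.utc) + timedelta(days=days)
        try:
            current = s3.get_object_retention(Bucket=bucket, Key=key)["Retention"]
            if current.get("RetainUntilDate") and current["RetainUntilDate"] >= new_until:
                return  # already locked far enough out
        except ClientError:
            pass  # no retention set on this object yet
        s3.put_object_retention(
            Bucket=bucket,
            Key=key,
            Retention={"Mode": "COMPLIANCE", "RetainUntilDate": new_until},
        )

    # Called for *every* object the newest incremental still depends on, so
    # old-but-still-live chunks stay immutable too:
    for key in ["backups/chunks/0001", "backups/chunks/0002"]:   # placeholder keys
        extend_retention("my-backup-bucket", key)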
Think about it:
Other than that, why serve gzip anyway? I would leave out the Content-Length header, throttle the connection, set the MIME type to something random (hell, just octet-stream), and redirect to '/dev/random'. I don't get the 'zip bomb' concept; all you are doing is compressing zeros. Why not compress '/dev/random'? You'll get a much larger file, and if the bot receives it, it'll have a lot more CPU cycles to churn.
Even the OP article states that, after creating the '10GB.gzip', 'The resulting file is 10MB in this case.'
Is it because it sounds big?
Here is how you don't waste time with 'zip bombs':
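(A minimal stand-alone sketch of that throttle-and-stream-junk idea, using only Python's standard library; the port, chunk size, and delay are arbitrary choices, not anything from the original post:)

    # Drip-feed crawlers an endless, rate-limited stream of random bytes with no
    # Content-Length, so the client can't tell when the body ends.
    import os
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CHUNK = 1024        # bytes per write
    DELAY = 1.0         # seconds between writes -> roughly 1 KB/s

    class TarpitHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            # Deliberately no Content-Length header.
            self.end_headers()
            try:
                while True:                      # stream junk until the client hangs up
                    self.wfile.write(os.urandom(CHUNK))
                    self.wfile.flush()
                    time.sleep(DELAY)
            except (BrokenPipeError, ConnectionResetError):
                pass                             # client gave up

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), TarpitHandler).serve_forever()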
The compression ratio is the whole point... if you can send something small for next to no $$ which causes the receiver to crash due to RAM, storage, compute, etc constraints, you win.
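To put numbers on that: gzip turns a buffer of zeros into roughly a thousandth of its size, while random data barely compresses at all, so a '/dev/random' payload costs the sender as much bandwidth as it costs the receiver. A quick check (exact sizes vary a little by gzip level, but the orders of magnitude hold):

    # Why zeros are the payload: ~1000:1 for zeros, ~1:1 (or slightly worse) for random bytes.
    import gzip
    import os

    size = 10 * 1024 * 1024                # 10 MB of input, small enough to run quickly

    zeros = bytes(size)
    noise = os.urandom(size)

    print("zeros:", len(gzip.compress(zeros)), "bytes compressed")   # ~10 KB
    print("noise:", len(gzip.compress(noise)), "bytes compressed")   # ~10 MB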