Readit News
onethumb commented on I use zip bombs to protect my server   idiallo.com/blog/zipbomb-... · Posted by u/foxfired
b2ccb2 · 4 months ago
Hilarious, because the author, and the OP author, are literally zipping `/dev/zero`. While they realize that it "doesn't take disk space nor RAM", I feel like the penny hasn't dropped for them.

Think about it:

  $ dd if=/dev/zero bs=1 count=10M | gzip -9 > 10M.gzip
  $ ls -sh 10M.gzip 
  12K 10M.gzip
Other than that, why serve gzip anyway? I would skip the Content-Length header, throttle the connection, set the MIME type to something random (hell, just octet-stream), and stream '/dev/random' instead.

I don't get the 'zip bomb' concept: all you are doing is compressing zeros. Why not compress '/dev/random'? You'll get a much larger file, and if the bot receives it, it'll have a lot more CPU cycles to churn through.

Even the OP article states that, after creating the '10GB.gzip', 'The resulting file is 10MB in this case.'

Is it because it sounds big?

Here is how you don't waste time with 'zip bombs':

  $ time dd if=/dev/zero bs=1 count=10M | gzip -9 > 10M.gzip
  10485760+0 records in
  10485760+0 records out
  10485760 bytes (10 MB, 10 MiB) copied, 9.46271 s, 1.1 MB/s

  real    0m9.467s
  user    0m2.417s
  sys     0m14.887s
  $ ls -sh 10M.gzip 
  12K 10M.gzip

  $ time dd if=/dev/random bs=1 count=10M | gzip -9 > 10M.gzip
  10485760+0 records in
  10485760+0 records out
  10485760 bytes (10 MB, 10 MiB) copied, 12.5784 s, 834 kB/s

  real    0m12.584s
  user    0m3.190s
  sys     0m18.021s

  $ ls -sh 10M.gzip 
  11M 10M.gzip

onethumb · 4 months ago
The whole point is for it to cost less (i.e., smaller size) for the sender and cost more (i.e., larger size) for the receiver.

The compression ratio is the whole point... if you can send something small for next to no $$ that causes the receiver to crash due to RAM, storage, compute, etc. constraints, you win.
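
A minimal sketch of that asymmetry, reusing the dd/gzip approach from the comment above (bs=1M rather than bs=1 just to keep generation fast; sizes are approximate and will vary a bit by gzip version):

  $ # ~10 GB of zeros compress down to roughly the 10 MB the article mentions:
  $ dd if=/dev/zero bs=1M count=10240 2>/dev/null | gzip -9 > 10G.gzip
  $ ls -sh 10G.gzip
  10M 10G.gzip
  $ # ...but the receiver has to inflate the full ~10 GB of it:
  $ zcat 10G.gzip | wc -c
  10737418240

The sender's cost is the ~10 MB stored and sent; the receiver's cost is whatever it does with 10 GB of zeros in RAM or on disk.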

onethumb commented on Colossus for Rapid Storage   cloud.google.com/blog/pro... · Posted by u/alobrah
nodesocket · 5 months ago
ha, ha, fair. Ultra. To be fair, I own a MacBook Pro M1 Max and Mac Mini M4 Pro and follow Apple products closely.
onethumb · 5 months ago
Yep, I love Apple, follow them closely, own a Mac Studio with an M3 Ultra and a MacBook Pro with an M4 Max, and it's still confusing. :)

I mean, surely a Mac Studio with an M4 Max must be the best, right? It's an entire CPU generation ahead and it's maximum! Of course, it's not... the M3 Ultra is the best.

Naming things is hard.

onethumb commented on Colossus for Rapid Storage   cloud.google.com/blog/pro... · Posted by u/alobrah
nodesocket · 5 months ago
Fair, how about instead of S3 Express they call it S3 Max (One Zone). It doesn't take a rocket scientist to come up with good product names, just copy Apple. Though I suppose that's what happens when engineers are left to do the marketing. :-)
onethumb · 5 months ago
If Apple's so great at naming things, tell me (without looking) which is bigger/better/faster for their CPUs: Max or Ultra?
onethumb commented on Archival Storage   blog.dshr.org/2025/03/arc... · Posted by u/rbanffy
renewiltord · 6 months ago
And when you're moving providers you use your application credentials to do that? That makes no sense. This is nonsensical engineering. You'd use your environment credentials to alter the environment.
onethumb · 6 months ago
I'm not "engineering" anything - I'm just stopping a service. I close the account, or disable billing, or whatever that step requires. I don't even read the data back out or anything - just cancel. Doesn't really require "engineering".
onethumb commented on Archival Storage   blog.dshr.org/2025/03/arc... · Posted by u/rbanffy
renewiltord · 6 months ago
You don't set the lifecycle rule at runtime. You set it at environment setup time. The credentials that put your object don't have to have the power to set lifecycle permissions.

You obviously don't put your environment setup user in your app. That would be utterly retarded.
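
For what it's worth, a rough sketch of that separation on AWS (bucket and user names, and the lifecycle.json contents, are placeholders; the idea is just that the lifecycle rule is applied once by a setup-time role, while the runtime credentials can only put and get objects):

  $ # Setup-time role configures the bucket's lifecycle once...
  $ aws s3api put-bucket-lifecycle-configuration --bucket my-backups \
      --lifecycle-configuration file://lifecycle.json
  $ # ...while the app/runtime user gets a policy with no delete and no
  $ # lifecycle permissions at all:
  $ aws iam put-user-policy --user-name backup-writer --policy-name put-only \
      --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": ["s3:PutObject", "s3:GetObject"],
          "Resource": "arn:aws:s3:::my-backups/*"
        }]
      }'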

onethumb · 6 months ago
Not useful for me at environment setup time because I never want any of my data deleted. The only time I'd delete is if I decide to abandon that cloud provider.
onethumb commented on Archival Storage   blog.dshr.org/2025/03/arc... · Posted by u/rbanffy
firecall · 6 months ago
I discovered recently that Microsoft OneDrive will detect a ransomware attack and provide you with the option to restore your data to a point before the attack!

MS needs to advertise this feature more, because I'd never heard of it and assumed all the files on the PC were toast!

Of course, the fact that a script on Windows can be accidentally run and then quietly encrypt all the users files in the background is another matter entirely!

onethumb · 6 months ago
Most cloud providers do this now. Encryption operations like this are relatively easy to detect.
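
One cheap heuristic along those lines (purely an illustration; no claim that this is what any provider actually runs): encrypted files look like random data, so they stop compressing, and a sudden drop in compressibility across many newly changed files is a strong ransomware signal.

  $ # Compressibility as an entropy proxy (filenames are illustrative):
  $ gzip -c report.txt | wc -c        # plain text shrinks to a fraction of its size
  $ gzip -c report.txt.enc | wc -c    # an encrypted copy stays roughly the same size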
onethumb commented on Archival Storage   blog.dshr.org/2025/03/arc... · Posted by u/rbanffy
creer · 6 months ago
I meant more the monetary cost. Nominally, cloud storage for one unit of storage and one unit of time is perfectly "fine". Except that it adds up: more storage, indefinitely held, multiple hosts. Data that needs to be copied from one provider to another incurs costs from both. Add to this routine retrieval costs, if you "live this way", and routine test retrieval costs if you know what's good for you.

So last time I looked, unit costs were low - sure. But all-included costs were high.

onethumb · 6 months ago
Certainly some of this simply comes down to "how valuable is my data?".

Currently, given the extremely low (and dropping YoY) cost of storing cold data at rest, the essentially free ingest, and the high cost of retrieving cold data (which I almost never have to do), the ROI is wildly positive. For me.

And since all of these things (how many providers, which providers, which storage classes, how long to retain the data, etc.) are fine-tunable, you can basically do your own ROI math, then pick the parameters that work for you.
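
To make "do your own ROI math" concrete, a back-of-the-envelope sketch; every number below is a placeholder to be replaced with your providers' published rates, not an actual price:

  $ # Illustrative only: 2 TB held at 3 providers at a made-up $0.001/GB-month
  $ # cold-storage rate (ingest assumed free) -> monthly storage cost in USD:
  $ echo "2 * 1000 * 0.001 * 3" | bc
  6.000
  $ # vs. a rare full retrieval from one provider at a made-up $0.02/GB fee:
  $ echo "2 * 1000 * 0.02" | bc
  40.00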

onethumb commented on Archival Storage   blog.dshr.org/2025/03/arc... · Posted by u/rbanffy
renewiltord · 6 months ago
You can set a lifecycle rule. You don't need credentials to delete.
onethumb · 6 months ago
Only if you allow permissions to set a lifecycle rule...
onethumb commented on Archival Storage   blog.dshr.org/2025/03/arc... · Posted by u/rbanffy
onethumb · 6 months ago
There are no delete credentials, and the WORM option is enabled when a provider supports it. I can ~always get back to point-in-time.
onethumb · 6 months ago
No delete credentials present a cost issue when moving from a provider... I've accidentally left data behind after I thought I'd deleted it. Worth the risk, and learned my lesson.
onethumb commented on Archival Storage   blog.dshr.org/2025/03/arc... · Posted by u/rbanffy
tatersolid · 6 months ago
> This protects me against…malware attacks

Are you sure about that? Many ransomware attackers do recon for some time to find the backup systems and then render those unusable during the attack. In your case, your cloud credentials (with delete permissions?) must be present on your live source systems, rendering the cloud backups vulnerable to overwrite or deletion.

There are immutable options in the bigger cloud storage services but in my experience they are often unused, used incorrectly, or incompatible with tools that update backup metadata in-place.

I’ve encountered several tools/scripts that mark a file as immutable for 90 days the first time it is backed up, but don't extend that date correctly on the next incremental, leaving older but still critical data vulnerable to ransomware.

onethumb · 6 months ago
There are no delete credentials, and the WORM option is enabled when a provider supports it. I can ~always get back to point-in-time.
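
As a concrete (if AWS-specific) sketch of the "WORM option" part: S3 Object Lock in compliance mode means that even credentials which do hold delete permissions can't remove an object version until its retention period expires. The bucket name and retention period below are made up:

  $ # Object Lock needs versioning enabled on the bucket first:
  $ aws s3api put-bucket-versioning --bucket my-archive \
      --versioning-configuration Status=Enabled
  $ # Then a default compliance-mode retention applies to every new version:
  $ aws s3api put-object-lock-configuration --bucket my-archive \
      --object-lock-configuration '{
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}}
      }'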

u/onethumb

Karma: 989 · Cake day: April 21, 2007
About
Don MacAskill. Co-founder, CEO & Chief Geek at SmugMug. CEO & Chief Geek at Flickr. Co-founder of Raine, Logan & Audrey. Lover of Liz.

[ my public key: https://keybase.io/don; my proof: https://keybase.io/don/sigs/HGac8zgqkG4zQFQzJp7QcUZIxWsqghqnP5459hwqrvQ ]
