simonw · a year ago
Wrote some notes on this here: https://simonwillison.net/2024/Nov/22/amazon-s3-append-data/

Key points:

- It's just for the "S3 Express One Zone" bucket class, which is more expensive (16c/GB/month compared to 2.3c for S3 standard tier) and less highly available, since it lives in just one availability zone

- "With each successful append operation, you create a part of the object and each object can have up to 10,000 parts. This means you can append data to an object up to 10,000 times."

That 10,000 parts limit means this isn't quite the solution for writing log files directly to S3.

jiggawatts · a year ago
Wow, I'm surprised it took AWS this long to (mostly) catch up to Azure, which had this feature back in 2015: https://learn.microsoft.com/en-us/rest/api/storageservices/u...

Azure supports 50,000 parts, zone-redundancy, and append blobs are supported in the normal "Hot" tier, which is their low-budget mechanical drive storage.

Note that both the 10K and 50K part limits mean you can use a single blob to store a day's worth of logs and flush every minute (1,440 parts). Conversely, hourly blobs can support flushing every second (3,600 parts). Neither supports daily blobs with per-second flushing for a whole day (86,400 parts). Spelled out, the arithmetic (a trivial sketch):
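
    # Parts consumed per blob at a given flush cadence, vs. the caps above.
    SECONDS_PER_DAY = 86_400

    daily_blob_minute_flush = SECONDS_PER_DAY // 60  # 1,440  -> fits 10K and 50K
    hourly_blob_second_flush = 3_600                 # 3,600  -> fits both caps
    daily_blob_second_flush = SECONDS_PER_DAY        # 86,400 -> exceeds both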

Typical designs involve a per-server log, per hour. So the blob path looks like:

    "{account}/{path}/{year}/{month}/{day}/{hour}_{servername}.txt"
This seems insane, but it's not a file system! You don't need to create directories, and you're not supposed to read these using VIM, Notepad, or whatever.

The typical workflow is to run a daily consolidation into an indexed columnstore format like Parquet, or send it off to Splunk, Log Analytics, or whatever...

sofixa · a year ago
> Wow, I'm surprised it took AWS this long to (mostly) catch up to Azure, which had this feature back in 2015:

Microsoft had the benefit of starting later and learning from Amazon's failures and successes. S3 dates from 2006.

That being said, both Microsoft and Google learned a lot, but also failed at learning different things.

GCP has a lovely global network, which makes multi-region easy. But they spent way too much time on GCE and lost the early advantage they had with Google App Engine.

Azure severely lacks in security (check out how many critical cross-tenant security vulnerabilities they've had in the past few years) and reliability (how many times have there been various outages due to a single DC in Texas failing; availability zones still aren't the default there).

ak217 · a year ago
Microsoft did this by sacrificing other features of object storage that S3 and GS had from the beginning, primarily performance, automatic scaling, unlimited provisioning, and cross-sectional (region-wide) bandwidth. Azure blob storage did not have parity on those features back in 2015, and as a result data platform applications could not be implemented on top of it. Since then they've fixed some of these, but there are still areas where Azure lacks scaling features that are taken for granted on AWS and GCP.
cedilla · a year ago
If I need to consolidate anyway, is this really a win for this use case? I could just upload with {hour}_{minute}.txt instead of appending every minute, right?
zaphirplane · a year ago
In all fairness, shipping unreliable features for unreliable services is a lot easier.
kochie · a year ago
AWS ranks features based on potential income from customers. Normally there’s a fairly big customer PFR needed to get a service team to implement a new feature.
omeid2 · a year ago
Not directly, but enough to write once every hour for more than a year!
simonw · a year ago
Yeah, or I guess log rotation will work well - you can append to one key 10,000 times and then switch to a new key name.
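
A minimal rotation sketch (hypothetical helper; assumes you track the append count client-side):

    # Hypothetical: rotate to a fresh key once the 10,000-append cap is hit.
    class RotatingKey:
        MAX_APPENDS = 10_000

        def __init__(self, prefix: str):
            self.prefix = prefix
            self.index = 0
            self.count = 0

        def next_key(self) -> str:
            """Return the key to append to, rotating when the cap is reached."""
            if self.count >= self.MAX_APPENDS:
                self.index += 1
                self.count = 0
            self.count += 1
            return f"{self.prefix}/part-{self.index:05d}.log"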
santiagobasulto · a year ago
This will require some serious buffering.
electroly · a year ago
The original title is "Amazon S3 Express One Zone now supports the ability to append data to an object" and the difference is extremely important! I was excited for a moment.
teractiveodular · a year ago
For comparison, while GCS doesn't support appends directly, there's a hacky but effective workaround in that you can compose existing objects together into new objects, without having to read & write the data. If you have existing object A, upload new object B, and compose A and B together so that the resulting object is also called A, this effectively functions the same as appending B to A.

https://cloud.google.com/storage/docs/composite-objects#appe...
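
Roughly, with the Python client (bucket and object names are made up):

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("my-bucket")  # hypothetical bucket name

    # Upload the new chunk as its own object B...
    bucket.blob("B").upload_from_string("new data\n")

    # ...then compose A + B server-side back into an object named A.
    # No object data is downloaded or re-uploaded, so this acts like an append.
    bucket.blob("A").compose([bucket.blob("A"), bucket.blob("B")])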

pclmulqdq · a year ago
Colossus allows appends, so it would make sense that there's a backdoor way to take advantage of that in GCS. It seems silly to me that Google didn't just directly allow appends given the architecture.
CharlieDigital · a year ago
There are some limitations[0] to work around (you can only compose 32 objects at a time and it doesn't auto-delete composed parts), but I find this approach super useful for data ingest and ETL processing flows while being quite easy to use. See the sketch after the footnote.

[0] https://chrlschn.dev/blog/2024/07/merging-objects-in-google-...
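
A sketch of working around both limits (hypothetical helper; assumes the destination object already exists). Each compose call can fold in at most 31 parts, since the destination itself counts as one of the 32 sources, and the consumed parts are deleted afterwards:

    from google.cloud import storage

    def compose_and_clean(bucket, dest_name, part_names, batch=31):
        """Fold parts into dest in compose-sized batches, then delete them."""
        dest = bucket.blob(dest_name)
        for i in range(0, len(part_names), batch):
            chunk = part_names[i:i + batch]
            # dest + up to 31 parts stays within the 32-source compose limit.
            dest.compose([dest] + [bucket.blob(n) for n in chunk])
        # Compose doesn't delete its sources, so clean up manually.
        for name in part_names:
            bucket.blob(name).delete()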

vdm · a year ago
thinkharderdev · a year ago
It's similar but not really the same thing. It has to be set up front by initiating a multi-part upload. The parts are still technically accessible as S3 objects, but through a different API. And the biggest limitation is that each part has to be >5MB (except for the final part).
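
For contrast, the multipart flow looks roughly like this in boto3 (bucket/key are placeholders):

    import boto3

    s3 = boto3.client("s3")
    bucket, key = "my-bucket", "big-object"  # placeholder names

    # Has to be opened up front; there's no appending to an ordinary object.
    mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)

    # Every part except the last must be at least 5 MB.
    part = s3.upload_part(
        Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
        PartNumber=1, Body=b"x" * (5 * 1024 * 1024),
    )

    # The object only becomes readable via GetObject once this completes.
    s3.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
        MultipartUpload={"Parts": [{"ETag": part["ETag"], "PartNumber": 1}]},
    )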
new_user_final · a year ago
It's a totally different thing and requires a special way of initiating the multi-part upload.
sureIy · a year ago
It's crazy to me that anyone would still consider S3 after R2 was made available, given the egress fees. I regularly see people saving thousands or even hundreds of thousands by switching to R2.
JonoBB · a year ago
For the most part I agree, but we have found that R2 does not handle large files (hundreds of GB or larger) very well. It will often silently fail with nothing being returned, so it’s not possible to handle it gracefully.
remus · a year ago
Depends a bit on your use case. If you've got lots of other infra on AWS and you don't need to store that much data then the downside of using another platform can outweigh the cost savings.
compootr · a year ago
doesn't s3 have 'free' (subsidized!) transfer to other products like ec2 though? it might look better to businesspeople that "well, we're doing this processing on ec2, why not keep our objects in s3 since it's a bit cheaper!"
mjlee · a year ago
S3 has free data transfer within the same region.
dragonwriter · a year ago
> It's crazy to me that anyone would still consider S3 after R2 was made available, given the egress fees.

If your compute is on AWS, using R2 (or anything outside of AWS) for object storage means you pay AWS egress for “in-system” operations rather than at the system boundary, which is often much more expensive (plus, you also probably add a bunch of latency compared to staying on AWS infra.) And unless you are exposing your object store directly externally as your interface to the world, you still pay AWS egress at the boundary.

Now, if all you use AWS for is S3, R2 may be, from a cost perspective, a no brainer, but who does that?

YetAnotherNick · a year ago
In most cases S3 data is not directly exposed to the client. If the middleware is EC2, then you need to pay the same egress fee, but you will have lower latency with S3, as EC2 shares the same datacenter as S3.
lijok · a year ago
People still use S3 because doing business with Cloudflare is a liability.
bobnamob · a year ago
Elaborate?
mxuribe · a year ago
Genuinely curious what you meant by this?
crest · a year ago
Just wait until Cloudflare decides you can afford to be on a different plan.
ChrisArchitect · a year ago
Please fix title: Amazon S3 Express One Zone now supports the ability to append data to an object
supermatt · a year ago
This doesn't seem very useful for many cases, given that you NEED to specify a write offset in order for it to work. So you either need to track the size (which becomes more complex if you have multiple writers), or first request the size every time you want to do a write and then race for it using the checksum of the current object... Urghhh.
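
A minimal sketch of that dance in boto3, assuming the WriteOffsetBytes parameter on PutObject (directory buckets only; names are placeholders):

    import boto3

    s3 = boto3.client("s3")
    bucket = "my-logs--usw2-az1--x-s3"  # placeholder directory-bucket name
    key = "app.log"

    # Look up the current size, then try to append at exactly that offset.
    size = s3.head_object(Bucket=bucket, Key=key)["ContentLength"]

    # If another writer appended in the meantime, this call is rejected
    # and you have to re-read the size and retry.
    s3.put_object(
        Bucket=bucket, Key=key,
        Body=b"one more record\n",
        WriteOffsetBytes=size,
    )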
thecleaner · a year ago
I don't understand the bashing in the comments. I imagine this is a tough distributed systems challenge (as with anything S3). Of course AWS is charging more since they've cracked it.

Does anybody know if appending still has that 5TB file limit?

taeric · a year ago
I'm curious on the different use cases for this? Firehose/kinesis whatever the name seems to have the append case covered in ways that I would think has fewer foot guns?