ary · 2 years ago
This is really interesting and something I've been thinking about for a while now. The SEMANTICS[1] doc details what is and isn't supported from a POSIX filesystem API perspective, and this stands out:

  Write operations (write, writev, pwrite, pwritev) are not currently supported. In the future, Mountpoint for Amazon S3 will support sequential writes, but with some limitations:
  
     Writes will only be supported to new files, and must be done sequentially.
     Modifying existing files will not be supported.
     Truncation will not be supported.
The sequential-write requirement is the part I've been mulling over; I'm not convinced it's actually required by S3. Last year I discovered that S3 can do transactional I/O via multipart upload[2] operations combined with the CopyObject[3] operation. This should, in theory, allow for out-of-order writes, reuse of existing partial objects, and file appends.

[1] https://github.com/awslabs/mountpoint-s3/blob/main/doc/SEMAN...

[2] https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuove...

[3] https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObje...
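
For the curious, a rough CLI sketch of the append case (bucket, key, and file names here are made up; every part except the last must be at least 5 MiB, so the existing object needs to be at least that big):

  # start a multipart upload for the new version of the object
  aws s3api create-multipart-upload --bucket my-bucket --key data.bin
  # -> note the UploadId in the response, call it $UPLOAD_ID

  # part 1: reuse the existing object's bytes server-side, no download needed
  aws s3api upload-part-copy --bucket my-bucket --key data.bin \
    --upload-id "$UPLOAD_ID" --part-number 1 \
    --copy-source "my-bucket/data.bin"

  # part 2: the bytes to append, uploaded from the client
  aws s3api upload-part --bucket my-bucket --key data.bin \
    --upload-id "$UPLOAD_ID" --part-number 2 --body append.bin

  # atomically replace the object with old bytes + appended bytes,
  # using the ETags returned by the two calls above
  aws s3api complete-multipart-upload --bucket my-bucket --key data.bin \
    --upload-id "$UPLOAD_ID" \
    --multipart-upload '{"Parts":[{"PartNumber":1,"ETag":"<etag1>"},{"PartNumber":2,"ETag":"<etag2>"}]}'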

Arnavion · 2 years ago
I use a WebDAV server for storing backups (Fastmail Files). The server allows 10GB of usage, but the max file size is 250MB, and in any case WebDAV does not support partial writes. So modifying a file requires re-uploading it in full, which is the same situation as with S3.

What I did is:

1. Create 10000 files, each of 1MB size, so that the total usage is 10GB.

2. Mount each file as a loopback block device using `losetup`.

3. Create a RAID device over the 10000 loopback devices with `mdadm --build --level=linear`. This RAID device appears as a single block device of 10GB size. `--level=linear` means the RAID device is just a concatenation of the underlying devices. `--build` means that mdadm does not store metadata blocks in the devices, unlike `--create`, which does. Not only would metadata blocks use up a significant portion of each 1MB device, but I don't really need mdadm to "discover" this device automatically, and the metadata superblock doesn't support 10000 devices anyway (the max is 2000, IIRC).

4. From here the 10GB block device can be used like any other block device. In my case I created a LUKS device on top of it, then an XFS filesystem on top of the LUKS device, and that XFS filesystem is my backup directory.

So any modification of files in the XFS layer eventually results in some of the 1MB blocks at the lowest layer being modified, and only those modified 1MB blocks need to be synced to the WebDAV server.

(Note: SI units. 1KB == 1000B, 1MB == 1000KB, 1GB == 1000MB.)
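
A minimal shell sketch of the above, with made-up paths, skipping error handling and the WebDAV sync machinery:

  # 1. create 10000 backing files of 999936 bytes each
  mkdir -p chunks
  for i in $(seq -w 0 9999); do
    truncate -s 999936 "chunks/chunk-$i"
  done

  # 2. attach each file as a loopback block device
  for f in chunks/chunk-*; do
    losetup --find --show "$f"   # prints the /dev/loopN it picked
  done

  # 3. concatenate them into one ~10GB device, with no metadata superblock
  #    (in reality you'd pass the exact loop devices, in order)
  mdadm --build /dev/md0 --level=linear --raid-devices=10000 /dev/loop*

  # 4. LUKS on top of the concatenated device, XFS on top of LUKS
  cryptsetup luksFormat /dev/md0
  cryptsetup open /dev/md0 backup
  mkfs.xfs /dev/mapper/backup
  mount /dev/mapper/backup /mnt/backup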

Arnavion · 2 years ago
Of course, despite working on this for a week, I only now discovered this... dm_linear is an easier way than mdadm to concatenate the loopback devices into a single device. Setting up the table input to `dmsetup create`'s stdin is more complicated than just `mdadm --build ... /dev/loop1{0000..9999}`, but it's all scripted anyway so it doesn't matter. And `mdadm --stop` blocks for multiple minutes for some unexplained reason, whereas `dmsetup remove` is almost instantaneous.

One caveat is that my 1MB (actually 999936B) block devices have 1953 sectors (999936B / 512B), but mdadm had silently used only 1920 sectors from each. In my first attempt at replacing mdadm with dm_linear I used 1953 as the number of sectors, which led to garbage when decrypted with dm_crypt. I discovered mdadm's behavior by inspecting the first two loopback devices and the RAID device in xxd. Using 1920 as the number of sectors fixed it, though I'll probably just nuke the LUKS partition and rebuild it on top of dm_linear with 1953 sectors each.
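
For reference, the dm table for that kind of concatenation can be generated with a small loop; the 1920 sectors per device is the figure mdadm used above, and the device naming is illustrative:

  # dm table format: <logical start sector> <length in sectors> linear <device> <offset>
  start=0
  for dev in /dev/loop1{0000..9999}; do
    echo "$start 1920 linear $dev 0"
    start=$((start + 1920))
  done | dmsetup create webdav-linear   # dmsetup reads the table from stdin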

hardwaresofton · 2 years ago
What a coincidence, I just recently did something similar.

Did you run into any problems with discard/zeroing/trim support?

This was a problem with sshfs — I can’t change the version/settings on the other side, and files seemed to simply grow and become more fragmented.

I suspected WebDAV and Samba might have been the solution but never looked into it since sshfs is so solid.

kccqzy · 2 years ago
FWIW this is similar to Apple's "sparse image bundle" feature, where you can create a disk image that is internally stored in 1MB chunks (the chunk size is probably only customizable via the command-line `hdiutil`, not the UI). You can encrypt it and put a filesystem on top of it.
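
Something along these lines should do it (size, filesystem, and volume name are just examples; the band size is given in 512-byte sectors):

  # 2048 sectors * 512 bytes = 1MiB bands
  hdiutil create -type SPARSEBUNDLE -size 10g -fs APFS \
    -encryption AES-256 -volname Backups \
    -imagekey sparse-band-size=2048 backups.sparsebundle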
nerpderp82 · 2 years ago
Are you using davfs2 to mount the 1MB files from the WebDAV server?
worewood · 2 years ago
This is a very nice solution.
mlhpdx · 2 years ago
I think you’re spot on: using multipart uploads, different sections of the ultimate object can be created out of order. Unfortunately, though, that’s subject to restrictions that require you to ensure all parts except the last are sufficiently sized (at least 5 MiB).

I’m a little disappointed that this library (which is supposed to be “read optimized”) doesn’t take advantage of S3 Range requests to optimize read after seek. The simple example is a zip file in S3 for which you want only the listing of files from the central directory record at the end. As far as I can tell this library reads the entire zip to get that. I have some experience with this[1][2].

[1] https://github.com/mlhpdx/seekable-s3-stream [2] https://github.com/mlhpdx/s3-upload-stream
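
For example, pulling just the tail of an archive with a suffix range request looks roughly like this (bucket, key, and the 64KB figure are placeholders; the point is that the End of Central Directory record lives at the end of the zip):

  # fetch only the last 64KB of the object instead of the whole zip
  aws s3api get-object --bucket my-bucket --key archive.zip \
    --range "bytes=-65536" /tmp/zip-tail.bin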

simooooo · 2 years ago
Wouldn’t you be maintaining your own list of what is in the zip offline at this point?
FullyFunctional · 2 years ago
Forgive the question, but I never quite understood the point of S3. It seems like a terrible protocol, but it’s designed for bandwidth. Why couldn’t they have used something like, say, 9P or Ceph? Surely I’m missing something fundamental.

EDIT: In my personal experience with S3 it’s always been super slow.

aseipp · 2 years ago
Because you don't have to allocate any fixed amount up front, and it's pay as you go. At the time when the best storage options you could get were fixed-size hard drives from VPS providers, this was a big change, especially on both the "very small" and "very large" ends of the spectrum. It has always spoken HTTP with a relatively straightforward request-signing scheme for security, so integration at the basic levels is very easy -- you can have signed GET requests, written by hand, working in 20 minutes. The parallel throughput (on AWS, at least) is more than good enough for the vast, vast majority of apps assuming they actually design with it in mind a little. Latency could improve (especially externally) but realistically you can just put an HTTP caching layer of some sort in front to mitigate that and that's exactly what everybody does.
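
For instance, a time-limited signed GET URL is a one-liner these days (bucket and key are placeholders):

  # generate a GET URL that anyone can use for the next hour
  aws s3 presign s3://my-bucket/some/key --expires-in 3600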

Ceph was also released many years after S3 was released. And I've never seen a highly performant 9P implementation come anywhere close to even third party S3 implementations. There was nothing for Amazon to copy. That's why everyone else copied Amazon, instead.

It's not the most insanely hyper-optimized thing from the user POV (HTTP, etc.), and in the past some semantics were pretty underspecified; e.g., before the full consistency guarantees arrived several years ago, you only got "read your writes" and that's it. But it's not that hard to see why it's popular, IMO, given the historical context and use cases. It's hard to beat in the average case for both ease of use and commitment.

ary · 2 years ago
When S3 was released the Internet was very different. Two of the things that stood out were:

1. It offered a resilient key/object store over HTTP.

2. By the standards of the day for bandwidth and storage it was (and to a certain extent still is) very inexpensive.

Since then much of AWS has been built on the foundation of S3 and so its importance has changed from merely being a tool to basically a pervasive dependency of the AWS stack. Also, it very much is designed for objects larger than 1KB and for applications that need durable storage of many, many large objects.

The key benefit, at least according to AWS marketing, is that you don't have to host it yourself.

fnordpiglet · 2 years ago
Simple api

Absurdly cheap storage

Extremely HA

Absurdly durable

Effectively unlimited bandwidth

Effectively unbounded storage without reservation or other management

Everything supports its api

It’s not a file system. It’s a blob store. It’s useful for spraying vast amounts of data into it and getting vast amounts of data out of it at any scale. It’s not low latency, it’s not a block store, but it is really cheap, and the scaling of bandwidth, storage, and concurrency makes it possible to build stuff like Snowflake that couldn’t be built on Ceph in any reasonable way.

supriyo-biswas · 2 years ago
The problem is that S3 is just a lexicographically ordered key-value store, with (what I suspect are) key-range partitions[1] for the key part and Reed-Solomon-encoded blobs for the value part. In other words, it’s a glorified NoSQL database with none of the semantics you’d typically expect of a file system, and therefore repeated writes are slow, because any modification to an object involves writing a new version of the key along with a complete new copy of the object.

[1] https://martinfowler.com/articles/patterns-of-distributed-sy...

rakoo · 2 years ago
S3 is straight HTTP, the most widespread API. It can be used directly from the browser, has libraries in pretty much every language, and can reuse the mountain of available software and frameworks for load balancing, redirection, auth, distributed storage, etc.
spmurrayzzz · 2 years ago
I think there's an interesting story in software ecosystems where two flavors of applications coexist: those that prefer object stores over filesystems, and vice versa. A good reference point for this, I think, is modern video transcoding infrastructure.

Using something like FSx [1] gives you a performant option for the use cases when the tooling involved prefers filesystem semantics.

[1] https://aws.amazon.com/fsx/lustre/

vbezhenar · 2 years ago
Here are reasons I'm using S3 in some projects:

1. Cost. It might vary depending on the vendor, but generally S3 is much cheaper than block storage, while still coming with some welcome guarantees (like 3 copies).

2. Pay for what you use.

3. Very easy to hand off URL to client rather than creating some kind of file server. Also works with uploads AFAIR.

4. Offloads traffic. Big files are often the main source of traffic on many websites. Using S3 removes that burden. And S3 is usually served by multiple servers, which further increases speed.

5. Provider-independent. I think every mature cloud offers an S3 API.

I think there are more reasons: encryption, multi-region and so on. I didn't use those features. Of course you can implement everything with your own software, but reusing a good implementation is a good idea for most projects. You don't rewrite Postgres, so you don't rewrite S3.

deathanatos · 2 years ago
> In my personal experience with S3 it’s always been super slow.

Numbers? I feel like it's been a while, but my experience is that it's in the 50ms latency range. That's fast enough that you can do most things. Your page loads might not be instant, but 50ms is fast enough for a wide range of applications.

The big mistake I see, though, is a lack of connection pooling: I find code going through the entire TCP connection setup and TLS handshake just for a single request, tearing it all down, and repeating. boto also encourages some code patterns which result in GET bucket or HEAD object requests that you don't need and can avoid; none of this gives you good latency.
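
A quick way to see the difference, assuming a couple of publicly readable (or presigned) object URLs: one curl invocation reuses the TCP/TLS connection for the second request, while two separate invocations pay the full handshake cost each time.

  # time_connect should drop to ~0 for the second transfer if the connection is reused
  curl -s -o /dev/null -o /dev/null \
    -w "connect: %{time_connect}s  total: %{time_total}s\n" \
    "https://my-bucket.s3.amazonaws.com/key1" \
    "https://my-bucket.s3.amazonaws.com/key2"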

kbumsik · 2 years ago
S3 works over HTTP, which means that it is designed to work over the internet.

Other protocols you mentioned, including NFS, do not work well over the internet.

Some of them are designed exclusively to work within the same network, or are very sensitive to network latency.

ignoramous · 2 years ago
> Forgive the question but I never quite understood the point of S3.

S3 and DynamoDB are essentially a decoupled BigTable, in that both are KV databases: one is used for high-performance, small-object workloads; the other for high-throughput, large-object workloads.

the8472 · 2 years ago
They have NFS (called EFS), but it's about 10x more expensive.
Scarbutt · 2 years ago
S3 is slow but at the same time low cost; if you want fast, AWS has other alternatives, but they're pricier.
beebmam · 2 years ago
Same with my experience. Not a fan

Deleted Comment

supriyo-biswas · 2 years ago
After teaching customers for years that S3 shouldn't be mounted as a filesystem because of its whole-object-or-nothing semantics, and even offering a paid solution named Storage Gateway to bridge the gap between FS and S3 semantics, it's rather interesting they'd release a product like this.

Amazon should really just fix the underlying semantics issue by providing a PatchObjectPart API call that overwrites a particular multipart upload chunk with a new chunk uploaded from the client. UploadPartCopy+CompleteMultipartUpload still requires the client to issue copy calls covering the entire object.

FridgeSeal · 2 years ago
> it's rather interesting they'd release a product like this

Azure has a feature where you can mount a Blob Storage container into a container/VM; is this possibly aiming to match that feature?

I definitely think people should stop trying to pretend S3 is a file system and embrace what it’s good at instead, but I have had many times when having an easy and fast read-only view into an S3 bucket would be insanely useful.

wmf · 2 years ago
Eventually AWS always gives customers what they want even if it's a "bad idea".
znpy · 2 years ago
Bad ideas are very relative.

Some bad ideas work extremely well if they fit your use case, you understand very well the tradeoffs and you’re building safeguards (disaster recovery).

Some other companies try to convince (force?) you into a workflow or into a specific solution. Aws just gives you the tools and some guidance on how to use them best.

mlhpdx · 2 years ago
Indeed.
nostrebored · 2 years ago
Distributed patching becomes hell. You need transactional semantics, and files are not laid out in a way that helps you define the invariants that should reject a transaction.
supriyo-biswas · 2 years ago
There is no reason why an object’s descriptor can’t be updated with a new value that references all of the old chunks plus a new one. Since S3 doesn’t do deduplication anyway, the other chunks could be resized internally by an asynchronous process that gets rid of the excess data corresponding to the now-overridden chunk.
toomuchtodo · 2 years ago
> This is an alpha release and not yet ready for production use. We're especially interested in early feedback on features, performance, and compatibility. Please send feedback by opening a GitHub issue. See Current status for more limitations.
BrianHenryIE · 2 years ago
JungleDisk was backup software I used around 2009 that allowed mounting S3. They were bought by Rackspace and the product wasn't updated. It seems to be called/part of Cyberfortress now.

Later I used Panic's Transmit Disk but they removed the feature.

Recently I'd been looking at s3fs-fuse to use with gocryptfs but haven't actually installed it yet!

https://github.com/s3fs-fuse/s3fs-fuse

https://github.com/rfjakob/gocryptfs

legorobot · 2 years ago
We've used the s3fs-fuse library at work for a while as an SFTP/FTP server alternative (AWS wants you to pay $150+/server/month last I checked!) and it's worked like a dream. We scripted the setup of new users via a simple bash script, and the S3 CloudWatch events for file uploads are great. It's been pretty seamless and hasn't caused many headaches.

We've had to perform occasional maintenance, but it's operated for years with no major issues. 99% of issues are solved with a server restart plus a startup script to auto-re-mount s3fs-fuse in all the appropriate places.

Give them a try, I recommend it!
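
If it helps anyone, the basic mount is something like this (bucket name, mount point, and key values are placeholders; our startup script just re-runs the mount on boot):

  # credentials file: a single "ACCESS_KEY_ID:SECRET_ACCESS_KEY" line
  echo "AKIAXXXXXXXX:yyyyyyyyyyyyyyyy" > "${HOME}/.passwd-s3fs"
  chmod 600 "${HOME}/.passwd-s3fs"

  # mount the bucket; allow_other lets the SFTP/FTP users see the files
  s3fs my-bucket /mnt/s3 -o passwd_file="${HOME}/.passwd-s3fs" -o allow_other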

CharlesW · 2 years ago
> Later I used Panic's Transmit Disk but they removed the feature.

BTW, Panic seemingly intends to re-build Transmit Disk. Hopefully it'll be part of Transmit 6: https://help.panic.com/transmit/transmit5/transmit-disk/#tec...

A supported macOS option appears to be Mountain Duck: https://mountainduck.io/

drcongo · 2 years ago
ForkLift also lets you mount S3 as a drive. https://binarynights.com
sfritz · 2 years ago
There's a similar project under awslabs for using S3 as a FileSystem within the Java JVM: https://github.com/awslabs/aws-java-nio-spi-for-s3
mlindner · 2 years ago
There's some really confusing use of unsafe going on.

For example I'm not sure what they're doing here:

https://github.com/awslabs/mountpoint-s3/blob/main/mountpoin...

favourable · 2 years ago
Something similar that I've been using for a while now for an S3 filesystem: Cyberduck[0]

[0] https://cyberduck.io/s3/

mNovak · 2 years ago
In a similar vein, I've been using ExpanDrive [0] for a while. Though admittedly it's only suitable for infrequent access / long term storage type use.

[0] https://www.expandrive.com/desktop/

lightlyused · 2 years ago
I just wish Cyberduck would get a more standard UI. It is so Win95 VB. Otherwise it works great!
metadat · 2 years ago
How does this compare to rclone, performance wise?

https://rclone.org

e12e · 2 years ago