This is exactly what I needed - not a lot of people are doing eventually inconsistent databases yet. If there's a network partition, shit's probably fucked anyway, so it's best to throw away all your data.
Unfortunately the metadata (list of files) is stored in memory, not the network. Not quite as unreliable as I would like.
Note: joke assumes parent poster was somewhat humorous
Eventually inconsistent databases could save a lot of the effort that developers, faced with databases offering no inconsistency guarantees, currently put into implementing eventual inconsistency at the application level.
This reminds me of delay line memory, storing information in the echo of a sound: https://en.wikipedia.org/wiki/Delay_line_memory . LEO had long tubes of mercury for this; not sure I would have wanted to work in that office.
I think very few bits here are actually stored in-flight on the copper/fiber/radio at any time. Most of them are probably stored in various buffers of networking equipment (which are ordinary semiconductor RAM). It would be interesting to see someone do an actual breakdown for, say, a transatlantic link.
My gut feeling says that only geostationary satellite links have any significant number of bits in-flight.
Very simplified, but assuming a 5000 km cable and a 300,000 km/s speed of light (it's actually about a third slower in fiber, closer to 200,000 km/s), light needs about 0.0167 seconds for that distance. At 10 Gbit/s (fairly slow for a backbone link), that'd mean ~166 Mbit, or roughly 20 megabytes, in flight. (I hope I didn't forget a factor of 1000 somewhere.)
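Out of curiosity, a quick back-of-envelope check of those numbers in C. The 5000 km length, 10 Gbit/s rate, and ~200,000 km/s in-fiber speed are assumptions from the comment above, and a geostationary hop is thrown in for comparison with the satellite remark:

    #include <stdio.h>

    int main(void)
    {
        const double rate_bps = 10e9;  /* assumed 10 Gbit/s line rate */

        /* name, distance (km), propagation speed (km/s) */
        struct { const char *name; double km, kms; } links[] = {
            { "5000 km cable, vacuum c", 5000.0,      300000.0 },
            { "5000 km cable, in fiber", 5000.0,      200000.0 }, /* ~2/3 c */
            { "geostationary up+down",   2 * 35786.0, 300000.0 },
        };

        for (int i = 0; i < 3; i++) {
            double delay = links[i].km / links[i].kms;  /* one-way delay, s */
            double bits  = rate_bps * delay;            /* bandwidth-delay product */
            printf("%-25s %.4f s  %8.1f Mbit  %6.1f MB\n",
                   links[i].name, delay, bits / 1e6, bits / 8e6);
        }
        return 0;
    }

The vacuum case comes out to ~166.7 Mbit (~20.8 MB), fiber to ~250 Mbit (~31 MB), and the geostationary hop to ~300 MB, so the factor of 1000 checks out, and so does the gut feeling about satellites.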
Actually, Engineering Research Associates (predecessor of Univac) built a CPU with mercury delay line memory using acoustic transducers to create a shift register. It did bit-serial binary arithmetic. You could get two logic gates or one latch out of a dual pentode, so a bit-serial ALU could fit easily in a single rack.
I'm not familiar with the overall architecture, though, unfortunately. I do remember having to do a most-significant-bit-first, bit-serial, two's complement adder as a homework assignment that was inspired by the machine.
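For flavor, a minimal sketch of the classic bit-serial two's complement adder: one full adder plus a one-bit carry latch, with operands streamed a bit per word time. This is the simpler LSB-first form, not the MSB-first homework variant, which is harder since carries naturally propagate from the least significant bit upward:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t a = 12345, b = 54321, sum = 0;
        unsigned carry = 0;

        for (int i = 0; i < 16; i++) {                       /* one bit per "word time" */
            unsigned abit = (a >> i) & 1;
            unsigned bbit = (b >> i) & 1;
            unsigned s = abit ^ bbit ^ carry;                /* full-adder sum bit */
            carry = (abit & bbit) | (carry & (abit ^ bbit)); /* one-bit carry "latch" */
            sum |= (uint16_t)(s << i);
        }
        /* 12345 + 54321 = 66666, which wraps to 1130 mod 2^16 */
        printf("%u + %u = %u\n", (unsigned)a, (unsigned)b, (unsigned)sum);
        return 0;
    }

Two's complement overflow just wraps, which is exactly what the hardware gave you for free.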
Of course the gut reaction to this is "why?" and the obvious answer is "because I can"... that's fine, but there have been plenty of filesystems built on protocols or things that weren't really supposed to be filesystems.
I remember way back when Gmail first announced giving away 1 GB of storage, there were a bunch of hacky things to turn SMTP into a real filesystem.
So, I get the answer to "why make this weird filesystem", but I'd really like to know: why is this weird filesystem weirder or more intriguing than the others that have come before it?
Only looked at the read path, which is roughly:
* filesystem receives read
* blocks on a semaphore (https://github.com/yarrick/pingfs/blob/master/chunk.c#L141)
* network thread wakes up semaphore when ICMP reply containing data is received
* filesystem thread copies data into new buffer to return from syscall
* filesystem relinquishes chunk back to network thread, which wakes up, sends it back to be echoed on the network via ICMP, and frees the in-memory buffer (https://github.com/yarrick/pingfs/blob/master/chunk.c#L107)
How silly :)
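A minimal sketch of that semaphore handoff pattern, with the ICMP machinery stubbed out. All names here are invented, not pingfs's actual ones; see the chunk.c links above for the real thing (compile with -pthread):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <string.h>

    struct chunk {
        sem_t ready;     /* posted when the chunk's data has "arrived" */
        char  data[64];  /* payload extracted from the ICMP echo reply */
    };

    static struct chunk c;

    /* Stands in for the network thread parsing ICMP replies. */
    static void *network_thread(void *arg)
    {
        (void)arg;
        strcpy(c.data, "hello from the wire");  /* pretend reply payload */
        sem_post(&c.ready);                     /* wake the filesystem thread */
        return NULL;
    }

    int main(void)
    {
        pthread_t net;
        char buf[64];

        sem_init(&c.ready, 0, 0);
        pthread_create(&net, NULL, network_thread, NULL);

        sem_wait(&c.ready);               /* filesystem thread blocks here */
        memcpy(buf, c.data, sizeof buf);  /* copy into the syscall's buffer */
        /* ...here pingfs would re-send the chunk to be echoed again
           and free the in-memory copy... */

        printf("read: %s\n", buf);
        pthread_join(net, NULL);
        sem_destroy(&c.ready);
        return 0;
    }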
http://www.gnuterrypratchett.com/