GuB-42 · 6 years ago
This is mostly a critique of UNIX, but several of these concepts are already implemented in other OSes, either in production or experimentally.

The database-as-filesystem idea is a classic. WinFS, the filesystem that was supposed to be a key feature of Longhorn/Windows Vista, was based on a relational database.

The "death of text configuration files" is the idea behind the Windows registry.

Powershell (Windows, again) is based on structured data rather than text streams.

For the "programs as a collection of addressable code blocks", when we think about it, we are almost there. An ELF executable for instance is not just a blob that is loaded into memory. It is a collection of blocks with instructions on how to load them, and it usually involves addressing other blocks in other ELF files (i.e. dynamic libraries), with names as keys. We could imagine splitting the executable apart to be saved in a database-like filesystem; that would work, but it wouldn't change the fundamentals.
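
To make that concrete, here's a minimal Python sketch of decoding the ELF identification bytes (the e_ident block at the start of every ELF file), using a synthetic header so it's self-contained. The point is that an executable already carries structured, addressable metadata, not just a blob:

```python
# First 16 bytes of every ELF file: the e_ident identification block.
# A synthetic example corresponding to a 64-bit little-endian ELF file.
elf_ident = b"\x7fELF" + bytes([2, 1, 1, 0]) + b"\x00" * 8

def describe_elf_ident(ident: bytes) -> dict:
    """Decode the ELF identification bytes (magic, class, endianness)."""
    if ident[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    return {
        "class": {1: "32-bit", 2: "64-bit"}[ident[4]],
        "endianness": {1: "little", 2: "big"}[ident[5]],
        "version": ident[6],
    }

print(describe_elf_ident(elf_ident))
# → {'class': '64-bit', 'endianness': 'little', 'version': 1}
```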

The problem I have with structure is that it implies a schema. And without that schema there is nothing you can do. And of course, because we all have different needs, there are going to be a lot of schemas. So now you turn one problem into two problems: manage the schemas and manage the data. With a UNIX-style system, even if you need some kind of structure to actually process the data, the system is designed in such a way that for common operations (ex: copy), you don't need an application-specific schema.

anonsivalley652 · 6 years ago
Yes, text configuration files are dumb because they require N parser/script editors times M configurable programs. Furthermore, what people really want is universal programmatic and CLI access to configuration.

Microsoft Microsoft'ed configuration management with the way they implemented the registry. The Apple PList way sort-of goes there but doesn't quite master it.

A better way would've been configuration via code and command line that's easy to interface with for all purposes. It's important to:

- be able to transactionally backup and restore all settings

- have multiple instances of the same program with different settings

- wipe out settings to default

- enumerate all settings within a program or within the whole system

- subscribe and be notified of setting changes

- ACLs or privileges to separate users and processes from each other (similar to the Windows Registry)

- audit and undo setting changes

- hide secrets that are provided from dynamic providers

- allow dynamic providers for values of settings
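
A rough Python sketch of a few of these requirements (enumeration, snapshot/restore, change notification); all names here are made up for illustration, not any real OS API:

```python
import copy

class ConfigStore:
    """Toy settings store: enumerate, snapshot/restore, change callbacks."""

    def __init__(self):
        self._settings = {}
        self._subscribers = []

    def set(self, key, value):
        self._settings[key] = value
        for callback in self._subscribers:
            callback(key, value)          # notify on every change

    def enumerate(self, prefix=""):
        """List all settings, optionally under one program's prefix."""
        return {k: v for k, v in self._settings.items() if k.startswith(prefix)}

    def snapshot(self):
        """Transaction-style backup of all settings."""
        return copy.deepcopy(self._settings)

    def restore(self, snap):
        self._settings = copy.deepcopy(snap)

    def subscribe(self, callback):
        self._subscribers.append(callback)

store = ConfigStore()
store.set("editor/theme", "dark")
backup = store.snapshot()
store.set("editor/theme", "light")
store.restore(backup)
print(store.enumerate("editor/"))   # → {'editor/theme': 'dark'}
```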

packet_nerd · 6 years ago
Switch and router OSes have a unique CLI configuration system that is both simple and quite powerful. Compared to Linux I especially like the ability to enumerate the entire configuration with one command ("show running-config" in Cisco's IOS, for example).

I would love to be able to see and manage a Linux system's configuration in an analogous way. Obviously there's a lot more to a modern general purpose OS than a simple network device, so I'm not sure, maybe it wouldn't work very well?

dragonwriter · 6 years ago
> Yes, text configuration files are dumb because they require N parser/script editors times M configurable programs.

No, they don't. It's N editors + M parsers (where M = programs). I suppose if you want really smart editors that are also developed completely independently, such that each requires its own parser, it's N+M parsers plus N editors.

> Furthermore, what people really want is universal programmatic and CLI access to configuration

I'm fairly certain most people don't want anything CLI anything.

zdw · 6 years ago
The low-hanging fruit is a common parser/serializer, which is usually handled in library code on Unix - handwritten parsers are usually the enemy.

The other bits describe a whole configuration management system, which gets into the realm of questions like "For version X of software Y, value Z can be between A and B, but this changes when using version X+1", which is extremely difficult logic to encode into an external data store. Even worse are relational consistency issues that come up across structures.
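
A tiny illustration of that version-coupling problem in Python; the constraint table and setting name are hypothetical:

```python
# Hypothetical per-version constraint table for one setting of "software Y".
# In version 1 the value must lie in [1, 10]; version 2 widened it to [1, 100].
CONSTRAINTS = {
    1: {"z": (1, 10)},
    2: {"z": (1, 100)},
}

def validate(version: int, key: str, value: int) -> bool:
    """Check a setting against the constraints of one specific version."""
    lo, hi = CONSTRAINTS[version][key]
    return lo <= value <= hi

print(validate(1, "z", 50))  # → False: out of range for version 1
print(validate(2, "z", 50))  # → True: version 2 relaxed the bound
```

An external data store now has to carry this whole table, keep it in sync with every release, and somehow express cross-field consistency rules too - which is exactly the difficulty described above.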

Having a "fail if invalid" policy results in people setting values and then being frustrated with the system.

BTW, this all existed in a language-neutral way in the early 2000s with XML parsing and validation leveraging RelaxNG and Schematron. Unfortunately those were deemed ugly and hard to use, and thrown out in exchange for the half-baked JSON/YAML solutions.

gen220 · 6 years ago
I know it’s a bit hacky, but you can satisfy most of your requirements with environment variables; and most of your other requirements with a mini daemon that sits on top of `env`.

When I think about it, this is basically what the various “keychain” daemons provide, which IMO are underused despite their terrible ergonomics.
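
A minimal example of the env-var approach: the per-process copy of the environment gives you "multiple instances of the same program with different settings" essentially for free. APP_THEME is a made-up setting name:

```python
import os
import subprocess
import sys

# Two instances of the same program, differing only in their environment.
# Each spawned process gets its own copy of the settings.
program = [sys.executable, "-c", "import os; print(os.environ['APP_THEME'])"]

for theme in ("dark", "light"):
    env = dict(os.environ, APP_THEME=theme)
    out = subprocess.run(program, env=env, capture_output=True, text=True)
    print(out.stdout.strip())   # prints "dark", then "light"
```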

laughingbovine · 6 years ago
This sounds like your basic configuration management tool plus a service discovery tool.
epr · 6 years ago
> So now you turn one problem into two problems: manage the schemas and manage the data

This seems to completely ignore the fact that we are already managing schemas right now in the form of ad-hoc parsers and serializers which are arguably much worse than a more formally specified alternative.

pjc50 · 6 years ago
The HTML vs XHTML situation: would you prefer a hard failure on any error, or a graceful degradation system that is therefore always slightly degraded?

We go back and forth on this because tightening the schema only works when you can adequately define the requirements of both ends up front, and there isn't a vendor battleground happening in your standard. Developers end up escaping into an unstructured-but-working zone. Classics like "it's such a hassle to get the DBA to add columns, so we'll add a single text column and keep all the data in there as JSON".

tyingq · 6 years ago
"The database for a filesystem is an classic. WinFS, the filesystem that should have been a key feature of Longhorn/Windows Vista is based on a relational database."

I always wondered why Unix never added record-based files, in addition to stream-based files...like mainframes have. That would have simplified many things.

bregma · 6 years ago
> I always wondered why Unix never added record-based files, in addition to stream-based files...like mainframes have. That would have simplified many things.

You can implement record-based files (RAX, ISAM, etc) on top of stream-based files. The Unix philosophy was like Lego: small parts that do simple things that can be put together in combinations to build greater things. If you can build it from the basic blocks, it's not a basic block.
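
As a sketch of that "build it from basic blocks" point, fixed-length records layered on an ordinary byte stream take only a few lines (using an in-memory stream here so the example is self-contained; a real file object works the same way):

```python
import io

RECORD_SIZE = 16  # fixed-length records, padded with NUL bytes

def write_record(stream, index, data: bytes):
    """Random-access write of record `index` on a plain byte stream."""
    assert len(data) <= RECORD_SIZE
    stream.seek(index * RECORD_SIZE)
    stream.write(data.ljust(RECORD_SIZE, b"\x00"))

def read_record(stream, index) -> bytes:
    stream.seek(index * RECORD_SIZE)
    return stream.read(RECORD_SIZE).rstrip(b"\x00")

f = io.BytesIO()          # stands in for an ordinary Unix file
write_record(f, 0, b"alice")
write_record(f, 1, b"bob")
print(read_record(f, 1))  # → b'bob'
```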

There are plenty of more advanced storage systems available for Unix. You don't need the OS vendor to supply The One.

ajross · 6 years ago
Exactly. And with more opinion behind it: the fact that most of those items were shipped in an extraordinarily well-supported, mass-market OS literally decades ago and still didn't catch on maybe says something about the value of the design ideas.
jamesrcole · 6 years ago
> the fact that most of those items were shipped in an extraordinarily well-supported, mass-market OS literally decades ago and still didn't catch on maybe says something about the value of the design ideas.

That statement seems to draw heavily on the idea that success is the result of a meritocracy. But "how good something is" often plays only a small part in the selection of winners and losers.

sl1ck731 · 6 years ago
I think the only thing in the parent's list that didn't catch on was WinFS, which, if I'm not mistaken, is because it never even shipped.

Registry and Powershell are integral parts of being a Windows administrator.

I might be leading a little, but just because *nix didn't choose to use these things doesn't imply they are bad. It's just that *nix is "different".

laumars · 6 years ago
File system allocation tables effectively are highly optimised databases already. I think the issue isn't the file system itself but rather the OS syscalls.
naasking · 6 years ago
> File system allocation tables effectively are highly optimised databases already. I think the issue isn't the file system itself but rather the OS syscalls.

It really is the file systems as well. If allocation tables are highly optimized databases, they're not the kind of safe and robust database we're used to:

https://danluu.com/file-consistency/

teddyh · 6 years ago
Old operating systems like VMS, MULTICS, OS/400, etc. aimed for this; they were large and contained support for a lot of structure. Problem is, they always evolved to be too complex, and/or the complexity they provided was not what turned out to be needed, so different complexities had to be built on top of the old, unneeded ones.

Along comes Unix and dispenses with all of that. Files? Bytes only. Devices? Accessible as files in the file system. Etc. This became popular for a reason.

jdblair · 6 years ago
I came here to write this. VM/CMS was designed for processing structured data. Application UIs were designed around forms to be displayed on 3270 terminals and data was structured around what could be input on a single punch card. It was great as long as this fit your model.

What UNIX gave the world was maximum flexibility: an os that only really cared about streams and got out of your way.

jstimpfle · 6 years ago
And in my eyes this is the only right way, because it allows you to build structured services on top. If, on the other hand, the structure is already in the underlying system, it's incredibly hard to build something useful on top.

Similarly, think how useful memcpy() is: Because it can be applied anywhere.

Gibbon1 · 6 years ago
I'm not sure it's that they were too complex. More that you couldn't shoehorn them into a 1980s-era microcomputer.
norswap · 6 years ago
With all the crufty layers now being built on top, perhaps a new simplification is now needed?

On the other hand, it would take quite a lot of effort to reach a parity of capability for a new OS these days.

ainar-g · 6 years ago
We already did that. It was called Plan 9 From Bell Labs[1]. And while it gave us UTF-8, procfs, 9P, etc, it failed to become a popular OS.

[1] https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs

marcosdumay · 6 years ago
The thing about the crufty being on top instead of inside it is that we can replace it from time to time.

(Too bad the replacements aren't always better...)

pletnes · 6 years ago
So Unix is the dynamic type system of operating systems?
jkaptur · 6 years ago
Even subject to similar high-minded criticism: https://www.jwz.org/doc/worse-is-better.html
teddyh · 6 years ago
Not so much dynamically typed; more like stringly typed.
_8ljf · 6 years ago
Worth bearing in mind here that files and file systems are themselves a kludge, a 1950s workaround for not having enough primary storage to keep all programs and data live at all times.† (Also, primary storage traditionally tends to be volatile; not good when the power goes out.) I point this out because institutionalizing kludges as The Way Things Should Be Done is an awfully easy mistake to make, and in long-lived systems like OSes it has serious architectural sequelae.

..

What’s interesting about the Unix File System is that it’s a general abstraction: a hierarchical namespace that can be mapped to a wide range of resources; not just “files” in secondary storage but also IO devices, IPC communication points, etc. And that’s all it did: mount resources at locations and define a common API for reading/writing/inspecting all resources, without getting bogged down on the internal details of each resource. Nice high cohesion, low coupling design.

Plan 9 made much fuller use of the namespace idea than Unix did, but the core concept was there from the start and it is excellent… except for one MASSIVE mistake: individual resources are neither typed nor tagged.

Without formal type information there is no way for a client to determine the type of data that a given resource represents. Either there is a common informal agreement that a resource mounted at location X is of type Y (e.g. /dev/*), OR there is an informal resource naming convention (typically DOS-style name extensions), OR the client has to guess (e.g. by sniffing the first few bytes for “tells” or by assuming ASCII).
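
That sniffing fallback looks something like this in practice: a hand-maintained table of magic numbers, with "just bytes" as the shrug default. The magic values below are real and well known; the MIME labels are approximate:

```python
# A few well-known magic numbers. This guessing game is exactly the fallback
# you're stuck with when resources carry no type metadata.
MAGIC = [
    (b"\x7fELF", "application/x-elf"),
    (b"\x89PNG\r\n\x1a\n", "image/png"),
    (b"\x1f\x8b", "application/gzip"),
    (b"%PDF-", "application/pdf"),
]

def sniff(data: bytes) -> str:
    """Guess a resource's type from its first few bytes."""
    for magic, mime in MAGIC:
        if data.startswith(magic):
            return mime
    return "application/octet-stream"   # no idea: just bytes

print(sniff(b"\x89PNG\r\n\x1a\n...."))  # → image/png
print(sniff(b"hello"))                  # → application/octet-stream
```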

Formally tagging each resource with, say, a MIME type (as in BeFS, or properly implemented HTTP) would've made all the difference. THAT is K&R's million-dollar mistake, because without that key metadata it is impossible to ensure formal correctness(!) or implement intelligent helpers for data interchange (e.g. automatic coercions).

Arguments over the pros and cons of alternative namespacing/association arrangements (e.g. relational vs hierarchical) are all secondary to that one fundamental cockup.

..

Unix became popular not because it was Good, but because it was Just Good Enough to entertain the cowboys who built and used it, who enjoy that sort of pants-down edge-of-the-seat living. And because a lot of them were in education, they spread it; and so made cowboy coders the status quo for much of this profession and culture. And while I admire and largely approve of their Just Do It attitude, I abhor their frightening disregard for safety and accountability.

And while I’m heartened to see some signs of finally growing up (e.g. Rust), there is a LOT of legacy damage already out there still to be undone. And retrofitting fixes and patches to endemic, flawed systems like Unix and C is and will be infinitely more pain and work than if they’d just been built with a little more thought and care in the first place.

--

† If/When memristors finally get off the ground, the primary vs secondary storage split can go away again and we finally get back to a single flat storage space, eliminating lots of complex bureaucracy and ritual. And when it does, there'll still be the need for some sort of abstract namespace for locating and exchanging data.

terminaljunkid · 6 years ago
> a 1950s workaround for not having enough primary storage to keep all programs and data live at all times

Even today you don't have enough primary storage to keep all your data. And even if you did, you'd still need structure when data outlives the process/application.

Most times there is no best solution, only tradeoffs. Anyone who has done a bit of systems work knows this. And the hierarchical file system was an OK-ish trade-off to make. Perfect is the enemy of good.

> MASSIVE mistake: individual resources are neither typed nor tagged.

It comes with its own set of tradeoffs. I am a huge proponent of static typing when it comes to PLs. But in a system where multiple actors operate on shared resources, it is easy to be lulled into a false sense of correctness. It also imposes some extra complexity on the programming model. I am not an experienced systems engineer, but someone here can probably address it better.

> .... entertain the cowboys who built and used it ....

You are going beyond HN standards to justify your anger against a particular methodology or people that embrace it in programming.

The universally accepted point is that Unix succeeded due to political factors (low cost and easy modification compared to proprietary counterparts), simplicity of the API, and being arguably better than others despite lacking some features people love to lament these days. But in many cases, that simplicity is a desirable thing to have. It is nice to objectively point out faults in systems. But what you did is totally dismissing some people's contributions.

It is easy to see some hyped thing and think that's the Next Big Thing(TM) after reading two fanboys preaching on Reddit, while being totally ignorant of tradeoffs.

todd8 · 6 years ago
> Unix became popular not because it was Good, but because it was Just Good Enough to entertain the cowboys who built and used it ... and so made cowboy coders the status quo for much of this profession and culture

I disagree very strongly with these insults directed at programmers from 50 years ago because now, in retrospect, half a century later what they did doesn't live up to some flawless system written in Rust that exists only in one's imagination.

Doesn't it seem a little bit like calling Thomas Edison a cowboy who made the terrible mistake of giving us electric lighting through filament bulbs when LED lights would have been so much better.

In these early days of computer science I read virtually every important published article on programming languages and operating systems, the field was still that small. MIT didn't even think it warranted a separate department, it was just a subsidiary branch of EE like say communications. Researchers like Edsger Dijkstra, Tony Hoare, Niklaus Wirth, Per Brinch Hansen, Leslie Lamport, David Gries, Donald Knuth, Barbara Liskov, and David Parnas were all trying to figure out how to structure an operating system, how to verify that a program was correct, how to solve basic problems of distributed systems, and how to design better programming languages. Practitioners working on operating systems would have been familiar with almost everything written by these giants.

It's easy to insult C, I myself wouldn't choose it for work today. But in 1989, 20 years after the birth of Unix, I did choose it for my company's development of system software--it still made sense. And back in the 1960's what alternatives were there? Fortran? PL/1? Pascal? Lisp? We were still programming on keypunch machines and relational databases hadn't been invented. The real competition back then for system programming was assembly language.

pjc50 · 6 years ago
> Formally tagging each resource with, say, a MIME type (as in BeFS, or properly implemented HTTP) would’ve made all the difference.

That gives you a "global naming of things" problem, which is surprisingly hard. Who controls the namespace? Who gets to define new identifiers? Do they end up as intellectual property barriers where company A can't write software to work with files of company B?

> without that key metadata it is impossible to ensure formal correctness(!)

That seems irrelevant - even with the metadata you have to allow for the possibility of a malformed payload or simple metadata mismatch. I don't believe this alone would prevent people from sneaking attack payloads through images or PDFs, for example.

> THAT is K&R's million-dollar mistake

K&R wrote UNIX before MIME. Not only that, but before the internet, JPEG, PDF, and indeed almost all the file types defined in MIME except plain text.

Refusing to choose also prevented UNIX from being locked into choices that later turned out to be inconvenient, like Windows deciding to standardise on UCS-2 too early rather than wait for everyone to converge on UTF-8.

Even the divergent choice of path separators and line endings has turned out to be a mess.

> cowboys

The "cowboy" system is the one that beat the others, many of which never launched (WinFS, competing hypertext protocols) or were commercially invisible (BeOS, Plan9 etc).

Both Windows and MacOS have alternate file streams which can be used for metadata, but very rarely are.

Memristors aren't going to save you either. Physical space ultimately determines response time. You can only fit so much storage inside a small light cone. We're going to end up with five or six different layers at different distances from the CPU, plus two or three more out over the network, getting cheaper and slower like Glacier.

We probably are going to move to something more content-addressable, in the manner of git blobs or IPFS, and probably a lot closer to immutable or write-once semantics because consistency is such a pain otherwise. It would be interesting to see a device offering S3-style blob interface plus content-addressable search ... over the PCIe interface.
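
Content addressing in the style of a git blob is simple to sketch: the object ID is the SHA-1 of a small header plus the content, so identical content always lands at the same address, which is what makes immutable/write-once semantics natural:

```python
import hashlib

def blob_id(content: bytes) -> str:
    """Content address in the style of a git blob: sha1('blob <len>\\0' + data)."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

store = {}

def put(content: bytes) -> str:
    """Store content under its own hash; storing twice is a no-op."""
    oid = blob_id(content)
    store[oid] = content
    return oid

oid = put(b"hello\n")
print(oid)                    # → ce013625030ba8dba906f756967f9e9ca394464a
assert store[oid] == b"hello\n"
```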

Oh, and there's a whole other paper to be written on how access control has evolved from "how can we protect users from each other, but all the software on the system is trusted" to "everything is single-user now, but we need to protect applications from each other".

erling · 6 years ago
I understand the natural frustrations articulated here, especially given OP’s experience working with files, but it seems to dismiss what is actually a core strength of current operating systems: they work. Given a program supporting 16 bit address spaces from the 1970s, you can load it into a modern x86 OS today and it works. This is an incredible feat and one that deserves a lot more recognition than offered here! Throughout an exponential explosion of complexity in computing systems since the 70s, every rational effort has been made to preserve compatibility.

The system outlined here seems to purposefully avoid it! Some sort of ACID compliant database analogy to a filesystem sounds nice until 20 years down the line when ACIDXYZABC2.4 is released and you have to bend over backwards to remain compatible. Or until Windows has a bug in their OS-native YAML parser (as suggested here) so now your program doesn’t work on Windows until they patch it. But when they do, oh no you can’t just tell your users to download a new binary. Now they have to upgrade their whole OS! Absolute chaos. And if you’re betting on the longevity of YAML/JSON over binary data, well just look at XML.

jmiskovic · 6 years ago
Want to admire your fancy After Dark win 3.1 screensaver? Just emulate the whole environment! We don't want to keep supporting the broken architectures and leaky abstractions of the past; they drag us down. Microsoft's dedication to backwards compatibility is admirable but IMO misguided and unsustainable in the long run. The IT industry has a huge problem with complexity. We need to simplify the whole computing stack in the interest of reliability, security and future innovations.

The proposed improvement, as I understood it, would be future-proof. It seems trivial to build a rock-solid YAML/XML/JSON/EDN parser at the OS level, and since it would be such a crucial part of the OS, mistakes would be caught and fixed quickly. It shouldn't even matter if the structured-data syntax is replaced or expanded in the future, as long as it is versioned and nothing is ever removed. Rich Hickey's talk "Spec-ulation" has much wisdom about future-proofing data structures.

gibbonsrcool · 6 years ago
> The IT industry has a huge problem with complexity. We need to simplify the whole computing stack in the interest of reliability, security and future innovations.

Yes! I really hope I keep hearing more of this sentiment and that eventually we collectively take action. What would be the first practical step? There's a lot of effort duplicating the same functionality across different languages and frameworks. Is reducing this duplication a good first goal? Should we start at the bottom and convince ARM/x86/AMD64 to use the same instruction set? After that, should we reduce the number of programming languages? It seems there's still a lot of innovation going on, would it be worth stifling that?

moron4hire · 6 years ago
> Want to admire your fancy After Dark win 3.1 screensaver? Just emulate the whole environment!

That is literally what Microsoft does.

pjc50 · 6 years ago
Most operating systems ship a general-purpose structured binary serialization format parser as an OS component: ASN1. There have over the years been a number of security critical bugs in there, and everybody hates ASN1 anyway.
jcranmer · 6 years ago
> Given a program supporting 16 bit address spaces from the 1970s, you can load it into a modern x86 OS today and it works.

Actually, it doesn't. It is extremely hard to properly return to 16-bit userspace code from a 64-bit kernel, so Windows removed support for it entirely, and it's not enabled by default on Linux.

squiggleblaz · 6 years ago
Well, I don't want to say anything about the utility, longevity or appeal of yaml/json, but I somehow think a user is going to upgrade their entire operating system before they upgrade my little app.

And if they're inclined to upgrade my app, I mean, nothing stops me from using a third party library to parse yaml. It sounds like we're talking about an app from three operating systems and 20 years ago so it's likely I'm doing that anyway - maybe not in the current Windows version, but in a recent enough version on some other operating system.

bvrmn · 6 years ago
Article summary: the biggest CS problem now is the diversity of serialization formats, because most current code consists of parsing various formats, so the OS must do something about it.

No, it is not that big a problem. And no, it will not make our lives easier. The author also doesn't mention the real problem of semantics: how should a client interpret a structure to compose a valid request?

The OS should not know about userspace structures because the OS doesn't do anything with them. It stores and transfers chunks of bytes, and their semantics are defined by userspace. And forcing the current popular serializing format at the OS level is the most dumb idea ever.

thetanil · 6 years ago
most dumb idea ever is a bit strong, but yeah. If you could get OSes to adopt this, then as an app writer you're going to have to worry about how Microsoft's jacked-up version of the standard broke your content when it moved between computers or even OS versions. You'd have the UNIX/MS line-ending problem not just in text files, but with every type recognized by the database.
de_watcher · 6 years ago
Additionally, we've got enough fun already with case-insensitive filesystems.
imtringued · 6 years ago
> And forcing the current popular serializing format at the OS level is the most dumb idea ever.

The idea is that you take the common elements of all of those serialization formats and when you take a good look you notice that the lowest common denominator isn't actually raw bytes on a disk.

laughingbovine · 6 years ago
Except when it is.
houseinthewoods · 6 years ago
Preach! I've felt this way too, that adding structured data as a universal feature to operating systems would be a pretty agreeable next step.

I wonder if we're past the point of no return, though, in terms of technical divergence. It sounds like, in the Ancient Times, there was a handful of great programmers whose work created the world we program in now. But now, there's way more programmers being paid to make slightly different versions of this "next step", and it would require widespread agreement/coordination to implement it on a scale where it's a seamless feature that's taken for granted the way the shell/network/fs are.

jayd16 · 6 years ago
Problem: We make a lot of CRUD apps.

Solution: The OS should do it.

Ehhh... Why is that the obvious solution? We can't decide on the right way to do it in user space, why does moving the problem to the OS help? This seems to be based on the whimsical idea that having the OS do it would somehow fix the varied problems of structured communication. Are we enforcing WSDLs in the OS? One-size-fits-all structures defined by the OS? I don't think the rambling thoughts really made it back to the thesis.

That said, I suggest anyone interested in this stuff try Powershell... no really! I don't use it often, but it is a window into another world where everything has a structured definition behind the text output.

speedplane · 6 years ago
> Problem: We make a lot of CRUD apps.
>
> Solution: The OS should do it.
>
> Ehhh... Why is that the obvious solution? We can't decide on the right way to do it in user space, why does moving the problem to the OS help?

The article is indeed ridiculous. An OS should not do everything. Hardware storage resources are generally the memory, disk, and network connection (and if you're getting really deep, the cache and registers). A good OS should only provide access to those resources as efficiently as possible across a wide variety of hardware.

There is a vast myriad of ways of utilizing those resources, and it would be a fool's errand to implement a one-size-fits-all approach. The better approach is to provide access to the resources, and let higher-level software developers build on top of them.

A disk only database is far different than a disk database with a memory cache, is far different than a memory only database, is far different than any collection of the above coordinated via a network connection. Further, storing text is different than storing images, which is different from storing video, which is different from storing JSON or XML.

Pushing everything to the OS will often give you worse performance, locks you into a single OS vendor, and slows down innovation from third parties. Bad idea.

clarry · 6 years ago
> An OS should not do everything. Hardware storage resources are generally the memory, disk, and network connection (and if you're getting really deep, the cache and registers). A good OS should only provide access to those resources as efficiently as possible across a wide variety of hardware.

It sounds like you're talking about kernel only. So I guess your OS of choice is something like LFS?

My view of an operating system is very different; it's supposed to be a complete system ready for productive work as well as a programming environment and platform for additional third party software.

> Pushing everything to the OS will often give you worse performance, locks you into a single OS vendor, and slows down innovation from third parties. Bad idea.

Pushing everything to third parties will often give you massive duplication of effort and dependencies, excessively deep stacks that eat performance and make debugging harder, locks you into a clusterfuck of dependency hell, and slows innovation from first party because now they must be very sure not to break the huge stack of third party stuff that everyone critically depends on. There'll be no cohesion because third parties invent their own ways of doing things as the stripped-to-the-bones OS has no unified vision, documentation is going to be all over the place, there's nothing holding back churn... development of third party applications is slow and frustrating because the lowest common denominator (underlying OS) is basically magnetic needle. Bad idea.

This is largely why I prefer BSD over Linux, but I share the author's frustrations with Unix in general.

leosarev · 6 years ago
Logging: Structured logging with automatic rotation etc. was implemented in the Windows Event Log.

Structured data passing between programs instead of just text is part of the Powershell concept.

Calling other programs to request specific actions, with a smooth UI, is called Android intents.

If you want to store structured data, you should use, well, a database.

So, part of the author's critique is Linux-specific.

But generally I agree with the author: OSes are poor abstractions and really need to be improved.

p2t2p · 6 years ago
> Structured data passing between programs instead of just text is part of the Powershell concept.

dbus, CORBA and COM would like to have a couple of words with you.

qwerty456127 · 6 years ago
I don't know about DBus and CORBA but COM is unreasonably hard compared to REST or Protobuf.
hamilyon2 · 6 years ago
Imagine the interoperability nightmare if we could not rely on everything being just bytes being streamed.

I mean, everything gets stored and transmitted at one time or another: a Word document, an SQLite database, an email and its attachment. Imagine you could not send something as simple as a Word document because the IP protocol assumes a stream of bytes while your operating system talks a custom storage format. Imagine you could not store and use an SQLite database efficiently because the operating system does not present you with efficient, fast, compatible byte storage.

_8ljf · 6 years ago
“Imagine the interoperability nightmare if we could not rely on everything being just bytes being streamed.”

As I’ve noted above, the problem isn’t transmittability; the problem is never knowing what the bytes being transmitted represent.

I mean, C is hardly renowned for the robustness or expressivity of its “type” system, but untagged, untyped byte streams are tantamount to declaring all data as void*. That is a ridiculously shaky foundation to build on, yet it could have been entirely avoided by the simple addition of one more piece of metadata and an ftype() API.

K&R were brilliant, but also kinda dumb. I certainly wouldn’t want to eat chicken cooked by either one.

vcavallo · 6 years ago
there would still be streamable bytes at the bottom, as the author explained.
pdimitar · 6 years ago
I think the author was more advocating for having something like sqlite being a part of the kernel and the de facto way of accessing a lot of data (as opposed to files and directories and byte streams in general).