Readit News
reikonomusha · 4 years ago
I'm as big of a Lisp fan as can be. I'm a proud owner of Symbolics and TI hardware: a MicroExplorer, a MacIvory, two 3650s, and two 3620s. Not to mention an AlphaServer running OpenGenera.

Today, we have computers that run Lisp orders of magnitude faster than any of those Lisp machines. And we have about 3–4 orders of magnitude more memory, with 64 bits of integer and floating-point goodness. And Lisp is touted as having remained one of the most powerful programming languages (I think it's true, but don't read into it too much).

Yet, it appears the median and mean Lisp programmer is producing Yet Another (TM) test framework, anaphoric macro library, utility library, syntactic quirk, or half-baked binding library to scratch an itch. Our Lisp programming environments are less than what they were in the 80s because everybody feels the current situation with SLIME and Emacs is good enough.

We don't "need" Lisp machines. We "need" Lisp software. What made Lisp machines extraordinary wasn't the hardware, it was the software. Nothing today is impeding one from writing such software, except time, energy, interest, willpower, and/or money.

Don't get me wrong, there are some Lisp programmers today developing superlative libraries and applications [1], but the Lisp population is thin on them. I'd guess that the number of publicly known, interesting (by some metric), and maintained applications or libraries that have sprung up in the past decade probably fits on one side of a 3"x5" index card. [2]

Though I won't accuse the article's author of such, sometimes, I find, in a strange way, that pining for the Lisp machines of yore is actually a sort of mental gymnastic to absolve one for not having written anything interesting in Lisp, and to excuse one from ever being able to do so.

[1] Just to cherry-pick a recent example, Kandria is a neat platformer developed entirely in Common Lisp by an indie game studio, with a demo shipping on Steam: https://store.steampowered.com/app/1261430/Kandria/

[2] This doesn't mean there aren't enough foundational libraries, or "batteries", in Lisp. Though imperfect, this is by and large not an issue in 2022.

zozbot234 · 4 years ago
> We don't "need" Lisp machines. We "need" Lisp software. What made Lisp machines extraordinary wasn't the hardware, it was the software. Nothing today is impeding one from writing such software, except time, energy, willpower, and/or money.

Discussed here: https://news.ycombinator.com/item?id=30800520 . The main issue is that Lisp, for all its inherent "power", has very limited tools for enforcing modularity boundaries in code and "programming in the large". So everything ends up being a bespoke solo-programmer project; there is no real shared development. You can see the modern GC-based/"managed" languages, perhaps most notably Java, as Lisps that avoided this significant pitfall. This might explain much of their ongoing success.

reikonomusha · 4 years ago
I think many people have conjectures, such as this one, but I don't think it's a tech problem, or a "Lisp is too powerful for its own good" problem. It's a "people aren't writing software" problem. History has demonstrated umpteen times that developing large, sophisticated, maintained, and maintainable projects in Lisp is entirely and demonstrably possible. Modern Common Lisp coding practices gravitate toward modular, reusable libraries, with proper modules via ASDF ("systems") and Common Lisp namespaces ("packages").
lispm · 4 years ago
Already in the 80s, the Lisp Machine OS had more than 1 MLOC with support for programming in the large. The development environment was networked from the start: a server kept the definitions of things on the network (machines, users, file systems, networks, gateways, printers, databases, mailers, ...). Source code and documentation were usually shared within the team and versioned via a common file server.

Nowadays, there are large applications written by groups/teams that have been worked on for three or more decades. For example, the ACL2 theorem prover has its roots in the 70s and is used and maintained to this day by users in the chip industry.

Common Lisp was specifically designed for the development and production use of complex programs. The military at that time wanted a single Lisp dialect for those; often, applications came with their own Lisp, which made deployment difficult. A standard language for projects was therefore required.

These were usually developed by teams in the range of 5–100 people. Larger teams are rare.

throw10920 · 4 years ago
> The main issue is that Lisp, for all its inherent "power", has very limited tools for enforcing modularity boundaries in code and "programming in the large"

I don't see any mention of "modular" or "boundaries" in the post you linked, so I'm assuming that it doesn't add extra context to your point.

You say "very limited tools for enforcing modularity boundaries", which I'm going to assume means that you believe that Lisps have constructs for creating modularity boundaries (e.g. unexported symbols in Common Lisp), and just don't enforce them (e.g. I can still use an unexported symbol in CL by writing foo::bar), in which case - I don't think that this is actually an issue.

Programmers are capable of doing all kinds of bad things with their code, and shooting themselves in the foot, yet I've never seen an indication that the ability to shoot yourself in the foot with a language noticeably contributes to its popularity (see: C, C++, Perl, Ruby).

Moreover, specializing to Common Lisp, it's not like CL allows you to accidentally access an unexported symbol in a package - you have to either deliberately use :: (which is bad style, and takes more effort than typing a single colon) or you get a hard crash. This is opposed to the above-listed languages, which allow you to shoot yourself in the foot in a large number of extremely diverse and interesting manners, often without giving you advance warning - and yet they are still far more successful.
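For readers less familiar with CL packages, here is a minimal sketch of how that boundary behaves (package and function names are made up for illustration):

```lisp
(defpackage :demo
  (:use :cl)
  (:export :public-fn))          ; only PUBLIC-FN crosses the boundary

(in-package :demo)
(defun public-fn () :ok)
(defun private-fn () :secret)    ; unexported

(in-package :cl-user)
(demo:public-fn)      ; => :OK
;; (demo:private-fn)  ; single colon on an unexported symbol: reader error
(demo::private-fn)    ; => :SECRET, but the :: makes the violation visible
```

So the boundary is enforced by default; crossing it requires a deliberate, greppable `::`.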

------------

I don't believe that the lack of success of Lisps is due to technical deficiencies.

syngrog66 · 4 years ago
agreed, and well said

in broad strokes, imo, Lisp's endless plasticity of infinitely nested parentheses is both its greatest strength and... its greatest weakness

I love it at Write time, hate it at Read/Maintain time. That's why I've avoided it for serious work. Python, Java, Go, Rust, etc. are all easier for my brain & eyes to quickly parse on the screen

which is unfortunate. because I still drool at Lisp's linguistic power

Turing_Machine · 4 years ago
> You can see the modern GC-based/"managed" languages, perhaps most notably with Java, as Lisps that avoided this significant pitfall.

An interesting perspective. From my POV, it's hard to think of a less Lisp-like language than Java. COBOL, maybe.

Deleted Comment

metroholografix · 4 years ago
The greatness of Lisp (at least when it comes to end-user empowerment), and I think the only differentiating factor that most other languages have still not caught up to, is the cybernetic philosophy with its roots in Licklider (man-computer symbiosis) and Engelbart.

Building an environment that strongly and uncompromisingly expresses this philosophy at its core is a serious undertaking in terms of time investment. Emacs has been in continuous development for 37 years and while it is still not as good as Genera, it's certainly "good enough" for lots of people and definitely captures the spark of this philosophy.

In the Common Lisp world, we've had plenty of tries (MCL, Hemlock, McCLIM) but they've all failed to get people to coalesce and generate progress.

Maybe the fact that Emacs exists is a curse in that people realize the barrier they'll have to clear to make something substantially better and decide to devote their energies into more easily realizable efforts.

GregorMendel · 4 years ago
Anyone interested in the computing holes that can be filled by lisp machines should check out Urbit. There is a vibrant and growing community of people building a network of personal servers running a lisp-like functional OS. It uses an identity system secured by the Ethereum blockchain and it has created a bottom-up economic incentive for developers to participate. They are starting to solve unique problems that couldn't be addressed on currently prevalent platforms. Urbit is an affirmation; we can ditch the nihilism.
convolvatron · 4 years ago
i associate lisp machines with power, simplicity, and a delightfully shallow learning curve.

urbit to me is exactly the opposite

Gollapalli · 4 years ago
And it's 4x more expensive than it was supposed to be to buy a planet.

Ethereum was a mistake.

chevill · 4 years ago
>They are starting to solve unique problems that couldn't be addressed on currently prevalent platforms.

Got any examples?

lonjil · 4 years ago
> Yet, it appears the median and mean Lisp programmer is producing Yet Another (TM) test framework, anaphoric macro library, utility library, syntactic quirk, or half-baked binding library to scratch an itch. Our Lisp programming environments are less than what they were in the 80s because everybody feels the current situation with SLIME and Emacs is good enough.

I don't think this is true. Not anywhere close. Most such examples are small, and probably only took a small number of hours to produce. While "superlative" stuff takes very many man hours to create. So just by seeing that there are many throwaway testing frameworks or whatever, you cannot tell where most of the work hours actually go. A half baked binding library takes 20 minutes to make, while a proper high quality rendering engine takes hundreds if not thousands of hours.

The Lisp population is thin on people making cool shit because the Lisp population in general is thin.

agumonkey · 4 years ago
I'd say it's both. It seems most lisping is done high on the stack. Some are doing assembler level lisp (or scheme) but less bare metal / OS / system oriented lisp.

I wonder what the lisp os guys are thinking about OS / UI these days.

13415 · 4 years ago
Personally, I think the problem is that Common Lisp is just another programming language, whereas Lisp really shines when it provides a full-fledged programming environment. Nowadays, it would seem best to create such an environment on top of commodity hardware as a "virtual machine" that abstracts away from the hardware in a general, portable way.

However, a good environment (1) needs a purpose, and (2) somebody needs to write it. Lisp currently fails on both points. The purpose used to be symbolic AI and NLP, among other things. Nowadays it could be the same, or a web framework with integrated distributed computing and database, or scientific computing, or a server for certain business services, etc. There are many application domains for which a virtual "Lisp machine" would be beneficial, but it needs to be developed for one of those real-world applications, not just as a toy like existing attempts at building Lisp machines.

And in my opinion the problem really is (2), developer power / size of the community. If you exclude super-expensive legacy Lisps, the current Lisp community doesn't even have a fully supported Lisp-native editor (portable Hemlock is not good enough) and also doesn't have good enough object persistence / native Lisp-based databases. Both are the foundations of any integrated Lisp machine.

People sometimes claim CL+Emacs+Slime provides the full interactive experience. I just can't agree with that at all. I have tried, and the experience was not substantially different from Go development and development in any other fast-compiling language with Emacs. In some respects, it's even worse than with various modern languages, even though most of those languages are strictly inferior to CL from a pure language perspective. If editing files in Emacs is all there is to the allegedly great Lisp experience, and developers at the same time have to deal with all those idiosyncrasies of CL such as CL's filesystem path handling, ASDF, and tons of poorly documented libraries, then I can't really see the advantages of CL. The language is powerful, don't get me wrong, but a truly interactive experience is totally different. Smalltalk managed to keep this experience, but for some reason the Lisp community seems to have lost this vision. I guess the developer community is just not large enough.

Anyway, before someone tries to build another "close to metal" Lisp machine or tries to revive any old Lisp machine, I'd rather wish the community would focus on creating truly integrated development environments that abstract away from the host system and are fully hackable and editable from the ground up while maintaining security and multi-user access. A "virtual Lisp" machine with virtual OS, so to say. If that's developed for a certain purpose like building and deploying extremely portable web applications, I believe it can have a great future.

Sorry for the long rant. This is just my impression after having programmed in various Lisp dialects for the past three decades.

coryrc · 4 years ago
> the experience was not substantially different from Go

I think the (a?) reason for that is a (otherwise good) shift to production being immutable. When you aren't allowed to interact and change code in a running system, you lose a massive advantage of CL over simpler languages. When the answer to every error is "send a 4xx or 5xx to the client" then having a powerful error-handling system is similarly pointless. When you only write CRUD programs like everyone else, you're just plugging together other's libraries and not creating novel techniques or fancy math. In this world all CL's advantages are negated.

metroholografix · 4 years ago
Common Lisp on Emacs via SLIME is not competitive with Smalltalk re: "interactive experience", since Emacs is not the native substrate of CL but essentially an out-of-process experience. If you want to experience CL at its best, you need to run Symbolics Genera.

Emacs with Emacs Lisp on the other hand offers a great interactive experience that also manages to easily surpass every modern Smalltalk incarnation in practicality and size of development community. So if running Genera isn't easily doable, this will give you a taste of what Lisp interactivity is all about.

thorondor83 · 4 years ago
To me development with SLIME is much better than with a fast-compiling language.

- Debugger is always ON.

- I can inspect the data I'm working with.

- I can redefine things without starting everything all over, avoiding loss of the current context. Fast restart is not the same.

- I can evaluate pieces of code without the need of a REPL. Point to an s-expression and evaluate that piece of code, inspect the result.

I don't see how Smalltalk is much more interactive. It is more powerful at graphics and tools integration, but SLIME provides an interactive enough experience IMO, and it is significantly better than any fast-compile-and-restart language.

jodrellblank · 4 years ago
> "We don't "need" Lisp machines. We "need" Lisp software."

Nobody goes into Java because their self-identity is "a Java programmer" to gather a team of people to create a Java machine running Java software to unleash the power of Java for the masses by enabling them to do everything in Java for the sake of doing everything in Java, By Java, With Java, For Java. And if they do talk like that, they would be a Sun Microsystems marketing pamphlet from 1999, or a joking reference to Zombo.com, or suspected of having zero interesting ideas and defaulting to Ouroboros navel-gazing.

Adobe Photoshop Lightroom is C++ and Lua. Blender is C++ and Python. Excel is C++ and Visual Basic for Applications. LibreOffice Calc is C++ and Python. These are large, popular, programmable systems which exist today and are good enough; good enough for people to spend lots of money on them, good enough for people to spend years skilling up in them, good enough that once they existed people wanted them to keep existing and they haven't faded into the past.

The added allure of an imaginary rebuild of them like "if you took the lid off Excel you'd see VBA inside so you could rework the way it handles multiple sheets using only a skill you have and software design skills and Excel-internals knowledge you don't have" would get a hearty side-eye and slow backing away from most Excel users. "Everything inside looks the same" is as attractive as "if you open your car trunk you'll see leather seats and plastic dashboard components making it move" or "if you sit in this car you're sitting on engine parts because the top priority is that a welder in a scrapyard can build the entire car without leaving their comfort zone". There are certainly people who want that, but the way the world hasn't developed that way suggests it isn't particularly desirable. Even when such things have been built (people can today use a Smalltalk or an APL, save their running work in a memory dump, reload it, and rewrite parts of it in itself), people flocked to Jupyter notebooks instead.

> "[1] Kandria is a neat platformer developed entirely in Common Lisp"

https://cdn.akamai.steamstatic.com/steam/apps/1261430/ss_a3f...

Without mocking a team who has built, developed, polished and planned to release a project, because that is respectable, it also looks like computer gaming of the early-to-mid-1990s Intel 386/486 era; reminiscent of Prince of Persia, Gods, Lemmings, Monkey Island. But it needs an Intel i5, 4GB RAM and 1GB storage. It's not even released yet and has no reviews, but you describe it as 'superlative' ("Of the highest order, quality, or degree; surpassing or superior to all others") - are you rating it so highly based on it being written in Lisp, or what?

reikonomusha · 4 years ago
I don't know how to respond to the whole "Lisp programmer identity" stuff; it doesn't seem relevant to anything I said. I also didn't suggest anybody rewrite anything in it. The success of Lisp doesn't depend on the existence of fancy machines, it depends on people choosing to write software in it. That's basically all I meant to say.

As for Kandria, did you play the demo, or did you just look at screenshots and system requirements and make your brazen judgment? I don't think Kandria is at all amateur or sloppy, regardless of to which aesthetic era you think it belongs. Many have claimed that it's not even possible to write a game that doesn't blow because Lisp is dynamically typed and garbage collected. Now the goalposts have moved to, "well, it takes 1GB of memory and doesn't even look like it's from 2022."

I commend anybody who ships.

p_l · 4 years ago
Regarding Kandria:

You might have heard of this thing called "Art", and that it has styles. Not only is one of them called "pixel art", celebrating exactly that kind of limitation, but Art as a whole is often discussed in terms of the self-imposed limits used in creating a work.

That said, a game can deliberately target such style, and yet hide considerable richness of implementation (consider: Noita, Dwarf Fortress).

Another thing about Shinmera and his team is that they produce interesting stories and interesting games, but also code that I'd argue is art too.

chubot · 4 years ago
Meh the problem is "Which Lisp?" There are dozens of incompatible Lisps. Even this site is written in a Lisp dialect written by its author (Arc).

In fact I conjecture that this is the reason Unix is more popular than Lisp -- because Lisps don't interoperate well. They haven't built up a big ecosystem of reusable code.

Whereas Python, JavaScript, R, C, C++, and Rust programmers can reuse each others' code via Unix-style coarse-grained composition. (Not just pipes -- think about a web server running behind nginx, or git reusing SSH and HTTP as transports.)

You can also use link time composition. It takes some work but it's better than rewriting your Common Lisp code from scratch in Clojure.

-----

Honest question: how do you communicate between two Lisp processes on two different machines? I know Clojure has EDN (which is sort of like JSON : JavaScript), but I haven't heard of the solutions for other Lisps.

I wrote about this problem here: A Sketch of the Biggest Idea in Software Architecture http://www.oilshell.org/blog/2022/03/backlog-arch.html

> The lowest common denominator between a Common Lisp, Clojure, and Racket program is a Bourne shell script (and eventually an Oil script).

I'll definitely update it if there's something I'm missing.

I would say the design of Unix is "rotting", but the answer is to IMPROVE Unix. Not dream of clean slate designs that will never be deployed. Plus this post doesn't actually propose anything. If you actually start trying to build your Lisp machine, I believe you will run into dozens of reasons why it's not a good idea.

bitwize · 4 years ago
You are aware that Lisp machines understood several different flavors of Lisp? The Symbolics ones understood Zetalisp and Common Lisp at least. Were they on the market today, they could be convinced to run Clojure and Scheme as well. There are a few old-timers still using Symbolics hardware to develop Common Lisp applications that run on modern hardware.

In fact, Symbolics shipped compilers for other, non-Lisp programming languages, including C and Ada. These interoperated with Lisp code much more smoothly than they do under Unix. In this demo, Kalman Reti compiles a JPEG decoder written in C and swaps it in for the Lisp JPEG decoder that came with the system, yielding a performance boost:

https://www.youtube.com/watch?v=o4-YnLpLgtk

chubot · 4 years ago
OK interesting, I will check out the link.

I still think various Lisps don't interoperate enough today, but I'm not very familiar with the Lisp machines of the past. If it can interoperate with C and Ada that's interesting. But I also wonder about interop with JavaScript :) i.e. not just existing languages but FUTURE languages.

These are the M x N problems and extensibility problems I'm talking about on the blog.

foobarian · 4 years ago
> In fact I conjecture that this is the reason Unix is more popular than Lisp -- because Lisps don't interoperate well. They haven't built up a big ecosystem of reusable code.

And why are there so many? IMO the language is too flexible for its own good. It promotes this curious intellectual solo-competition where you try to prove you are worthy of the elite who get all the fancy FP stuff and make all this bespoke functionality.

It's almost impossible for Lisp to be popular. To be popular means a lot of people use it, but that means it can't be complicated much above the median. But because it lets individual programmers push intellectual boundaries it self-selects itself out of this pool. Any big collaborative project will attract this type of developer and soon the lesser developers (who don't dare object for fear of appearing dumb) are not able to contribute.

Just my opinion, if a little dramatic.

zozbot234 · 4 years ago
Haskell has its share of fancy FP stuff, and people manage to develop workable things in it. I still think Lisp really is too dynamic to be useful beyond a small scale of development.
Jtsummers · 4 years ago
> Honest question: how do you communicate between two Lisp processes on two different machines? I know Clojure has EDN (which is sort of like JSON : JavaScript), but I haven't heard of the solutions for other Lisps.

Probably TCP or UDP based protocols like essentially every cross-network communication in every language today.

EDIT: Also, it should be noted that JSON does not, itself, allow you to communicate across a network. It's just a serialization format. You still have to select some protocol for actually doing the communication. If your answer to the question "How do you communicate between two JavaScript processes on two different machines?" is "JSON", you've failed at actually answering the question.

chubot · 4 years ago
Right, so that is what I'm getting at. If you have two Lisp programs on different machines, or two programs written in different Lisp dialects, the way you compose them is basically "Unix" -- serialize to some common format and send over a pipe / socket.
armitron · 4 years ago
The canonical Lisps still widely used today are Common Lisp, Scheme and Emacs Lisp. They all belong in the same family, and syntax / semantics are close. Porting code from Scheme to Common Lisp can be a lot easier than going from Python 2 to Python 3.

Clojure is something else entirely which is why a lot of people don't consider it a Lisp.

> Honest question: how do you communicate between two Lisp processes on two different machines?

If you want to use built-in object serialization, there are print and read.
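A minimal round-trip sketch of that (the file path here is just for illustration; the same works over any stream, e.g. a socket):

```lisp
;; Sender: bind *PRINT-READABLY* so the printed form is guaranteed
;; to be READable back (or an error is signaled if it can't be).
(with-open-file (out "/tmp/msg.sexp" :direction :output
                                     :if-exists :supersede)
  (let ((*print-readably* t))
    (print '(:user "alice" :scores (1 2 3)) out)))

;; Receiver: READ reconstructs an EQUAL object.
(with-open-file (in "/tmp/msg.sexp")
  (read in))
;; => (:USER "alice" :SCORES (1 2 3))
```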

iak8god · 4 years ago
> Common Lisp, Scheme and Emacs Lisp... all belong in the same family

Could you say more about what you mean by this? Is there another family of Lisps that excludes these three? I've met people who make a big deal about lisp-1 vs lisp-2 (https://en.wikipedia.org/wiki/Lisp-1_vs._Lisp-2), and which is the right way to be a Lisp, but I think maybe those people just enjoy being pedantic.

zozbot234 · 4 years ago
An unappreciated means of code reuse under *nix is the static and dynamic library. This seems to be the go-to whenever you need something more involved than simply reusing a full binary via pipes.
scj · 4 years ago
C's greatest feature is that it trivially maps onto .so files. Linking to and creating .so files isn't just cheap, it's effectively free.

Most higher level languages I've worked with seem to focus on using .so files rather than producing them.

This means the lowest common denominator for the unix ecosystem is what C can provide. Otherwise stated, unix is marching forward at the pace of C.

fiddlerwoaroof · 4 years ago
> Honest question: how do you communicate between two Lisp processes on two different machines?

Depends what level of integration you want: custom tcp/udp protocols, HTTP and websockets are all pretty easy. But, you can also use something like this over a trusted network/vpn: https://github.com/brown/swank-client

spacemanmatt · 4 years ago
> how do you communicate between two Lisp processes on two different machines?

Same as every other language: you pick a protocol and use it on both sides. Many of us already have enough JSON in play that it makes sense to start there.

convolvatron · 4 years ago
> Honest question: how do you communicate between two Lisp processes on two different machines?

using continuations with sexps is insanely powerful

send (+ 1 3 (lambda (x) `(handle-response ,x)))

openfuture · 4 years ago
It's all going to be datalisp, mark my word :)

Although you have no idea what I am talking about yet just wait a bit more :))

syrrim · 4 years ago
pretty sure hn was ported away from arc at some point.
throw10920 · 4 years ago
> Meh the problem is "Which Lisp?"

Not a problem - you don't need (or want) a single Lisp. A hypothetical Lisp OS would support a standard runtime and typed IPC system that host programs can use (a la Windows and COM, or dbus), and nothing prevents you from using your own custom communication protocol over pipes/sockets instead.

I don't agree with your implicit assertion that there is a single reason why Unix is more popular than Lisp (machines) - I think that there are a multitude of reasons. Certainly, the most impactful one isn't a lack of interoperability between Lisps - it would be something like the bandwagon effect, or the fact that Bell Labs gave away Unix to universities, or the inefficient implementations of early Lisps.

I'm also fairly confident that the lack of a Lisp ecosystem is not because of a particular lack of interoperability (after all, Python has a massive ecosystem that is built almost exclusively on (a) C FFI and (b) other Python code), but for cultural reasons - Lispers just don't like collaborating together, and enjoy reinventing the wheel. These tendencies have been documented in [1] and many other places, and are highly consistent with my own experience in the Common Lisp community.

> Honest question: how do you communicate between two Lisp processes on two different machines?

Use PRINT to serialize an object on one machine, ferry it over to another one through whatever means you wish, and then READ it on the other. Or: "in the same way that two processes in most other languages on two different machines communicate". Serialize, transport, deserialize. Yes, the ecosystem is far less mature, and you'll have to do more legwork, but fundamentally the process is the same as in, say, Python. (Erlang might be an exception and have language-level support for this, I'm not sure)

This method works for different implementations of the same Lisp, and even for different Lisps, under some constraints (e.g. while a list with the integers 1, 2, and 3 is represented as the s-expression (1 2 3) in almost every Lisp you can find, CL represents "true" as T, while Scheme represents it as #t, so you'll have to work around that). If you will, you can just use JSON or XML as a serialization format instead - every non-trivial Lisp has libraries for those formats, and some have libraries for ASN.1, too.
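One caveat worth making explicit for the receiving end in Common Lisp: when READing data from an untrusted peer, bind *READ-EVAL* to nil so the #. reader macro can't execute code during deserialization. A minimal sketch (READ-MESSAGE is a made-up helper name):

```lisp
(defun read-message (stream)
  ;; *READ-EVAL* NIL disables #. read-time evaluation, so malicious
  ;; input can't run arbitrary code while being parsed.
  (let ((*read-eval* nil))
    (read stream)))

;; Usage: a string stream stands in for a network stream here.
(with-input-from-string (s "(:ping 42)")
  (read-message s))
;; => (:PING 42)
```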

>> The lowest common denominator between a Common Lisp, Clojure, and Racket program is a Bourne shell script (and eventually an Oil script).

All of those languages share basic s-expression syntax described above, which is rather higher-level than a Bourne shell script. Why do you say that the latter is the "lowest common denominator"?

For that matter, why do you exclude the idea that JSON or XML aren't "common denominators" between Lisp programs, or even between Lisps, Python, and C++?

----------------

Your article says "Text Is The Only Thing You Can Agree On", but "text" isn't even a single thing. The set of bytes allowed by ASCII vs UTF-8 vs UTF-16 aren't the same. Even if it was, plain text is purely sequential and flat. Program interoperability requires structure. If the structure isn't directly encoded in a given "substrate" (such as text), and you need another layer on top, then that substrate isn't actually the one providing the interoperability. You say "Text is the most structured format they all agree on" but text isn't even structured at all, and nobody completely agrees on it - "bytes" is the only thing that fits into place here (which is equivalent to saying that there's no structured communication method that all programs agree on, which is true).

Put another way - programs do not communicate using plain text. They communicate with either an ad-hoc protocol built on top of text (that still has a set of constraints that make it incompatible with other protocols built on top of text), or they use a standardized format like JSON or XML, or even something like ASN.1 that isn't a subset of text.

Communication using text does not make programs interoperable. Bytes might be a "narrow waist", but text is factually not - if it was, you wouldn't need to use sed/awk/perl one-liners to connect various Unix utilities to each other, because the very fact that they were all using text input/output would make them interoperable.

You say "Tables and documents are essential structures in software, and expressing them in JSON is awkward." but you can express them in JSON. You cannot express those structures in "plain text", because "plain text" is not capable of expressing structure at all, and the best you can do is build a protocol that is a subset of plain text that can express structure (JSON, XML, CSV, etc.)

------------

If anything, Lisps are more interoperable than Unix, because the "lowest common denominator" of Unix utilities is "plain text" (which by definition cannot encode structure), while the lowest common denominator of Lisps is some common subset of their s-expression formats, which can encode structure.

------------

> I would say the design of Unix is "rotting", but the answer is to IMPROVE Unix.

You say this, but you haven't given a reason for why that's the answer.

Here's a reason why improving Unix is not the answer: because "everything is text" is a fundamentally flawed paradigm that introduces needless complexity and fragility into the whole system design.

> Not dream of clean slate designs that will never be deployed.

You're mixing the normative and the positive - "Unix should be improved" with "clean-slate designs won't be deployed". Are you making an argument about what should happen, or what will happen? (there's no guarantee that any improvements to Unix will be deployed, either)

> Plus this post doesn't actually propose anything. If you actually start trying to build your Lisp machine, I believe you will run into dozens of reasons why it's not a good idea.

Oh, yes, and I can start with a few: first, there's good reason to believe that high-level CPUs are a bad idea (as suggested by the fact that nobody has been able to make an effective one [2]); second, the security-less design of older Lisp machines is a terrible idea (in a vacuum); third, a machine that only runs Lisp is going to be unpopular, since there's no single Lisp and Lisps are not the final stage of PL evolution.

...but the arguments made by the article as to why Unix is inadequate are still solid, even if the author is suggesting a suboptimal solution due to nostalgia.

[1] https://www.lambdassociates.org/blog/bipolar.htm

[2] http://yosefk.com/blog/the-high-level-cpu-challenge.html

chubot · 4 years ago
This feels like a whole bunch of misunderstandings about what I'm saying ... Almost everything here was directly addressed in the article.

For that matter, why do you exclude the idea that JSON or XML are "common denominators" between Lisp programs, or even between Lisps, Python, and C++?

JSON is mentioned in the post as A narrow waist, not THE narrow waist (of an operating system or of the Internet). I also mention CSV and HTML as "on the same level".

Likewise, bytes and text have a similar hierarchical relationship. I mention that in the post and also show some diagrams in the previous post.

> If anything, Lisps are more interoperable than Unix, because the "lowest common denominator" of Unix utilities is "plain text" (which by definition cannot encode structure), while the lowest common denominator of Lisps is some common subset of their s-expression formats, which can encode structure.

JSON, CSV, and HTML are all built on top of plain text. You can store them in source control, you can use grep on them, and you can build more specialized tools for them (which has been done multiple times).
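In miniature (with a made-up record), the layering looks like this: the same bytes serve generic text tools and structure-aware tools at once.

```python
import json

doc = json.dumps({"name": "alice", "groups": ["admin", "ops"]})

# Layer 1: it's plain text, so every generic text tool applies
# (a substring search stands in for grep here).
assert "admin" in doc

# Layer 2: the same bytes parse into structure for tools that opt in.
assert json.loads(doc)["groups"] == ["admin", "ops"]
```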

What I'm contrasting this with is people who say that we should build s-expressions into the kernel -- i.e. passing linked data structures directly, rather than bytes/text.

See the threads linked in the posts -- a variant of this same argument came up. This is very related to the idea of building Lisp into hardware, which I view as a bad idea.

> You say this, but you haven't given a reason for why that's the answer.

The article is analyzing why empirically Unix, the web, and the Internet have worked in multiple respects -- interoperability, generality, scalability in multiple dimensions and multiple orders of magnitude, extensibility over decades, the polyglot nature, etc.

These are obviously successful systems, and I think it is pretty crazy to hold the opinion that, just because text makes you parse things, a design that tries to eliminate parsing must be better along many/all dimensions!

These kinds of (IMO naive) arguments are exactly why I wrote the last 2 posts. They were popular and highly commented upon, and most people got it, or at least appreciated the tradeoffs I highlighted.

So I don't need to convince everybody -- as I said in the intro to the post, the more important task is to build something that embodies these ideas. I look forward to your code and blog posts; I do think there is something to the top criticism in the thread: https://news.ycombinator.com/item?id=30812626

rst · 4 years ago
Some of this needs checking -- you could not run Unix on Symbolics hardware. LMI did have machines that ran both OSes -- but Unix was running on a separate 68000 processor; see, e.g. http://www.bitsavers.org/pdf/lmi/LMI_lambdaOverview_1982.pdf

(3600-series Symbolics machines also had a 68k "front end processor", but no Unix port was provided for it; they also ultimately had a C compiler that could generate code for the "Lisp processor", but the code it generated was intended to run in the Lisp environment.)

It's also worth noting that systems-level code for Symbolics machines (and, I presume, LMI as well) made frequent use of "unsafe subprimitives", misuse of which could easily crash the machine. And, unfortunately, if you needed to, say, get anything close to hardware bandwidth out of the disk drives, some of this became well-nigh unavoidable, due to poor performance of the OS-level file system (LMFS).

lispm · 4 years ago
What one could do was run hardware Lisp Machines from Symbolics on VME boards inside a SUN: the UX400 and UX1200.

Later Open Genera was sold as a Virtual Lisp Machine running on a DEC Alpha / UNIX system.

skissane · 4 years ago
Apparently Open Genera now even runs under macOS on Apple M1s: https://twitter.com/gmpalter/status/1359360886415233029

I think the big problem with Genera is the licensing. Although it comes with source code, it is proprietary software, and buying a license is expensive. I think the owners of the Symbolics IP have prioritised squeezing the maximum revenue out of a declining user base over trying to grow that user base.

I'm surprised "Open Source LispOS" projects have largely failed to gain traction. Writing your own OS is (at least in some ways) easier than it used to be (especially if you target virtualisation rather than bare metal). There seem to be a lot more people saying "LispOS is what we need!" than actually writing one or contributing to an existing effort to write one.

mportela · 4 years ago
This post would benefit from further expanding some of these statements.

> UNIX isn’t good enough anymore and it’s getting worse

Why exactly?

> A new operating system means we can explore new ideas in new ways.

LISP machines were not only OSes but also hardware. Is the author also proposing running this OS on optimized hardware or simply using our x86-64/AMD/M1 CPUs?

> With lisp machines, we can cut out the complicated multi-language, multi library mess from the stack, eliminate memory leaks and questions of type safety, binary exploits, and millions of lines of sheer complexity that clog up modern computers.

Sure, but it also requires rewriting a lot of these things, introducing and fixing new bugs... It feels like the good ol' "let's rewrite this program" that quite frequently doesn't live up to the expectations [1].

[1] https://vibratingmelon.com/2011/06/10/why-you-should-almost-...

traverseda · 4 years ago
>> UNIX isn’t good enough anymore and it’s getting worse

>Why exactly?

Personally? We're in a bit of a transition point, and a lot of the technologies aren't working together like they used to.

An example, on my laptop I want to run android apps. The way to do this that actually works well (waydroid) only supports wayland. Unfortunately I use x2x to control another display remotely, and x2x doesn't work properly under wayland, and never will due to wayland's security choices.

So like, what am I supposed to do here? Not run android apps? Not use tools like barrier/synergy/x2x?

This is one of many many frustrations I've had from this new generation of wayland/systemd/etc. Hopefully it gets better eventually but it does feel a lot like the rug is constantly being pulled out from under me for no good reason...

Now I don't think a lisp machine is going to fix that mind you, but it is a concern.

sseagull · 4 years ago
I am finding that everything is just becoming more and more fragmented. Programming languages, ecosystems, frameworks, blah.

I have had ideas for some applications, but can’t do them because the libraries I need are written in different languages, which don’t interoperate well (and one I am not familiar in).

Every post here looking for recommendations has many responses with different packages/ecosystems doing the same thing.

Sometimes I feel like there are too many developers and not enough of them really interested in the actual hard problems. So they just make another python package manager or webapp framework.

zozbot234 · 4 years ago
You can actually start a Wayland compositor/session in a X window. That plus existing solutions for Wayland network transparency should be enough.
pjmlp · 4 years ago
Xerox PARC workstations could run Interlisp-D, Smalltalk, Mesa/XDE, Mesa/Cedar, thanks to this little thing RISC failed to kill, microcoded CPUs.
pmoriarty · 4 years ago
> > UNIX isn’t good enough anymore and it’s getting worse

> Why exactly?

Two reasons:

1 - systemd (which is moving linux towards becoming a systemd OS)

2 - developers and companies moving ever more towards web apps (which will eventually make the underlying OS irrelevant, as the browser becomes the OS) (incidentally, web assembly seems to herald the end of the open/transparent web too, as we're eventually all be running opaque binary blobs on our web browsers)

amelius · 4 years ago
> 1 - systemd (which is moving linux towards becoming a systemd OS)

Or, a microservices OS.

DonHopkins · 4 years ago
There's an old book all about just that, which included and popularized Richard P. Gabriel's paper, "The Rise of Worse Is Better":

https://en.wikipedia.org/wiki/The_UNIX-HATERS_Handbook

https://web.mit.edu/~simsong/www/ugh.pdf

>The year was 1987, and Michael Travers, a graduate student at the MIT Media Laboratory, was taking his first steps into the future. For years Travers had written large and beautiful programs at the console of his Symbolics Lisp Machine (affectionately known as a LispM), one of two state-of-the-art AI workstations at the Lab. But it was all coming to an end. In the interest of cost and efficiency, the Media Lab had decided to purge its LispMs. If Travers wanted to continue doing research at MIT, he discovered, he would have to use the Lab’s VAX mainframe.

>The VAX ran Unix.

>MIT has a long tradition of mailing lists devoted to particular operating systems. These are lists for systems hackers, such as ITS-LOVERS, which was organized for programmers and users of the MIT Artificial Intelligence Laboratory’s Incompatible Timesharing System. These lists are for experts, for people who can—and have—written their own operating systems. Michael Travers decided to create a new list. He called it UNIX-HATERS:

    Date: Thu, 1 Oct 87 13:13:41 EDT
    From: Michael Travers <mt>
    To: UNIX-HATERS
    Subject: Welcome to UNIX-HATERS

    In the tradition of TWENEX-HATERS, a mailing list for surly folk
    who have difficulty accepting the latest in operating system technology.
    If you are not in fact a Unix hater, let me know and I’ll remove you.
    Please add other people you think need emotional outlets for their
    frustration.
https://www.amazon.com/UNIX-Haters-Handbook-UNIX-Haters-line...

https://www.goodreads.com/en/book/show/174904.The_UNIX_Hater...

https://wiki.c2.com/?TheUnixHatersHandbook

>I'm a UnixLover, but I love this book because I thought it was hysterically funny. Many of the war stories are similar to experiences I've had myself, even if they're often flawed as a critique of Unix itself for one reason or another. But other UnixLovers I've loaned the book to found it annoying rather than funny, so YMMV.

>BTW the core group of contributors to this book were more Symbolics Lisp Machine fans than ITS or Windows fans. ITS had certain technical features superior to Unix, such as PCLSRing as mentioned in WorseIsBetter, but having used it a bit myself, I can't see that ITS was superior to Unix across the board. The Lisp Machine on the other hand, although I never used it, was by all accounts a very sophisticated environment for programmers. -- DougMerritt

https://news.ycombinator.com/item?id=13781815

https://news.ycombinator.com/item?id=19416485

>mtraven on March 18, 2019 | next [–]

>I founded the mailing list the book was based on. These days I say, Unix went from being the worst operating system available, to being the best operating system available, without getting appreciably better. (which may not be entirely accurate, but don't flame me).

>And still miss my Lisp Machine. It's not that Unix is really that bad, it's that it has a certain model of computer use which has crowded out the more ambitious visions which were still alive in the 70s and 80s.

>Much as the web (the Unix of hypertext) crowded out the more ambitious visions of what computational media for intellectual work could be (see the work of Doug Engelbart and Ted Nelson). That's a bigger tragedy IMO. Unix, eh, it's good enough, but the shittiness of the web makes humanity stupider than we should be, at a time when we can ill afford it.

https://medium.com/@donhopkins/the-x-windows-disaster-128d39...

mportela · 4 years ago
I really appreciate your writing this reply (esp. the links). Thanks a lot, mate!
kkfx · 4 years ago
>> UNIX isn’t good enough anymore and it’s getting worse

> Why exactly?

Besides the defects well documented in the Unix-Haters Handbook, Unix has been violating its own principles for many years. The original Unix idea was: desktops like Xerox Smalltalk workstations are too expensive and complex for most needs, so instead of a real revolution with an extraordinary outcome, we limit ourselves to the most common needs in exchange for far lower costs. No GUIs, no touchscreens, no videoconferencing and screen sharing [1], just a good-enough CLI with a "user language" (shell scripts) for small-potatoes automation and a bit of IPC. For more... well... for more there is a "system language" (C) that's easy enough for most really complex tasks.

That was a success because no one really likes revolutions and long-term goals, especially when they demand big money, while many like quick-and-done improvements at a small price.

However, within a few years Unix started to feel the need for something more than a CLI, and GUIs started to appear. Unfortunately, unlike the original Xerox-style desktops, those UIs were not "part of the system, fully integrated into it" but just hackish additions with no real interoperability: just single apps that at most supported cut & paste.

> Sure, but it also requires rewriting a lot of these things, introducing and fixing new bugs... It feels like the good ol' "let's rewrite this program" that quite frequently doesn't live up to the expectations

We need desktops again, which means not just "endpoints" or "modern dumb terminals for the modern mainframes named the cloud", but desktop computing. Since desktop development has essentially been abandoned for many years, and even back then it was in bad shape, we need to restart from the classic desktops. The LispM was ancient and hackish, but it is still the best desktop we have had in human history, so it's a good starting point. We have some kind of LispM OS/OE here already: Emacs, still alive and kicking, so there is something to work with. Emacs is already a WM (EXWM), has countless features, and is already "plugged" into modern OSes acting as bootloaders to provide hardware, drivers, and services. It just needs to evolve.

[1] yes, you are reading correctly and no, I'm not wrong, I'm talking about the famous NLS "Mother of all the Demos" from 1968 https://youtu.be/yJDv-zdhzMY

tytrdev · 4 years ago
I agree, especially with the statement that Unix isn’t good enough and getting worse.

I feel like that was one of the core assumptions and point of the article, but it didn’t have any explanation beyond “multiple programming languages.” Feels a bit flat to me.

PaulHoule · 4 years ago
What ramblings.

Optane is the best performing SSD but the worst performing RAM you ever had. It is too expensive at any speed, even if Intel is losing money on it. HP memristors are vaporware.

LISP machines, Java machines, and similar architectures specialized for complex language runtimes are a notorious dead end. They just can’t keep up with performance-optimized RISC, pipelined, superscalar, SIMD, etc. architectures paired with compilers and runtimes that implement efficient abstractions (e.g. garbage collection, hotspot compilers) on top of those very fast primitives.

lispm · 4 years ago
Before Lisp Machines were killed in the market, it was clear that new architectures were needed, and a few were under development, even RISC-like CPUs. They were never released.

But Lisp at that time was already fast enough on standard RISC chips (MIPS, SPARC, ALPHA, POWER, ...). Later the 64bit RISC chips also provided enough memory space. SPARC also had some tricks for Lisp implementors.

Currently the assembler-coded Ivory emulator is 80 times faster on Apple's M1 than the last Ivory hardware (the Ivory microprocessor from Symbolics was released at the end of the 80s).

imglorp · 4 years ago
Speed is relevant for some use cases, sure, but not at all for a ton of others. Memory, disk and CPU are almost free in this new world, so why are we computing like it's 1990 still? It's time for some different abstractions than file -> process -> file.

The vast productivity gains of Smalltalk and Lisp came because they discarded those abstractions, freeing programmers to pursue others.

Presumably OP posted this after noticing Phantom came up a few days ago. https://news.ycombinator.com/item?id=30807668

bigbillheck · 4 years ago
> Memory, disk and CPU are almost free in this new world, so why are we computing like it's 1990 still?

Elsewhere on this very site you'll find no ends of complaints about, say, Electron apps.

Deleted Comment

zozbot234 · 4 years ago
The RISC-V folks are working on additions for special support of "complex language runtimes". Pipelined, SIMD and superscalar are all well and good, but what kills pure software-side support is always heavy branching and dispatching. These operations are genuinely much faster and more power-efficient when implemented in hardware.
DonHopkins · 4 years ago
How is the ARM not a "JavaScript Machine"?

https://stackoverflow.com/questions/50966676/why-do-arm-chip...

>Why do ARM chips have an instruction with Javascript in the name (FJCVTZS)?

https://community.arm.com/arm-community-blogs/b/architecture...

PaulHoule · 4 years ago
That instruction is a very small hack that uses just a few transistors to speed up a bit of data conversion that JS runtimes do frequently. That’s a far cry from a specialized chip.
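(For concreteness: what FJCVTZS computes is roughly ECMAScript's ToInt32 applied to a double, which JS runtimes otherwise have to spell out over several instructions. A rough software model, sketched in Python rather than any actual runtime's code:)

```python
import math

def js_to_int32(d: float) -> int:
    """Rough software model of ECMAScript ToInt32, the conversion
    FJCVTZS accelerates: truncate toward zero, wrap modulo 2**32,
    reinterpret as signed; NaN and infinities map to 0."""
    if math.isnan(d) or math.isinf(d):
        return 0
    u = math.trunc(d) % 2**32           # Python's % yields a value in [0, 2**32)
    return u - 2**32 if u >= 2**31 else u

assert js_to_int32(5.9) == 5
assert js_to_int32(-1.5) == -1
assert js_to_int32(float(2**32 + 7)) == 7   # wraps like a 32-bit register
assert js_to_int32(float("nan")) == 0
```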
nanochad · 4 years ago
> You could open up system functions in the editor, modify and compile them while the machine was running.

Why would you want to do that other than hot patching a system that can't go down? Testing new changes requires more time than rebooting. If you just want to test simple changes, most debuggers can do that.
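(For concreteness, the feature in question: in a sufficiently dynamic runtime, call sites look functions up by name at call time, so redefining the name retargets every existing caller, with no restart and no relink. Python stands in here for typing a new `defun` at a live Lisp listener; Genera additionally let you do this to OS internals.)

```python
def status():
    return "v1"

def report():                 # "existing caller", written before the patch
    return f"system: {status()}"

assert report() == "system: v1"

def status():                 # live redefinition, as at a Lisp listener
    return "v2"

assert report() == "system: v2"   # the old caller picks up the new definition
```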

> Everything worked in a single address space, programs could talk to each other in ways operating systems of today couldn’t dream of.

And with a single address space you have win9x security.

> A modern UNIX system isn’t self-contained. I have 4 UNIX systems on my desk (Desktop, laptop, iPhone, iPad) I’m contentiously using the cloud (iCloud for photos, GitHub for text files, Dropbox for everything else) to sync files between these machines. The cloud is just a workaround for UNIX’s self-contained nature

This is just your usage habits. Nothing is stopping you from using NFS or SSHFS. Someone who feels the need to use iCloud for whatever trivial convenience it provides is unlikely to benefit from a Lisp machine's ability to edit code on the live system.

> Then we add a gazillion programming languages, VMs, Containers, and a million other things, UNIX is a bloated mess of workaround for its own problems. We need a replacement, something that can be built for the modern world using technologies that are clean, secure, and extendable

The same thing will happen with any OS given enough time. Lisp is also not secure. It's prone to side channel and eval bugs.

> eliminate memory leaks and questions of type safety,

Lisp is not type safe.

reikonomusha · 4 years ago
> Lisp is not type safe.

It is type safe. While Lisp is not statically typed, its typing discipline is strong: operations performed on incompatible types signal recoverable errors.
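The distinction, sketched in Python (another strongly but dynamically typed language; Common Lisp's condition system is richer, with restarts, but the principle is the same):

```python
# Strong dynamic typing: a type mismatch is detected at runtime and
# signaled as a catchable error. It is neither silently coerced
# (weak typing) nor rejected before the program runs (static typing).
def add(a, b):
    return a + b

try:
    add("1", 2)               # incompatible types
except TypeError:             # signaled, and recoverable
    recovered = add(int("1"), 2)

assert recovered == 3
```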

Too · 4 years ago
Crashing at runtime, recoverable or not, is usually not what people mean when they say type safe. Spare me the static vs strong academia. Type safe when spoken, in practical every day terms, normally means enforced at compile time with IDE autocompletion support, usually implying static typing.

Deleted Comment

Decabytes · 4 years ago
> Lisp is not type safe.

Typed Racket is, and that is why I love it

maydup-nem · 4 years ago
> Testing new changes requires more time than rebooting.

no it doesn't

> Lisp is not type safe.

yes it is

zozbot234 · 4 years ago
> And with a single address space you have win9x security.

Address space != protection boundaries. These are nearly orthogonal concerns. Where single address spaces might become less useful today is in dealing with Spectre vulnerabilities, though formalizing more explicit requirements about information domains (as in multilevel security, which is a well-established field of OS research) might help address those.

mark_l_watson · 4 years ago
I don’t really agree. I had a Xerox 1108 Lisp Machine in the 1980s and loved it, but special purpose Lisp hardware seems like a waste of effort. I set up an emulator for the 1108 last weekend, and yes, I really did enjoy the memories, and things ran an order of magnitude faster than on the 1108 in the 1980s.

Then, I appreciated my M1 MacBook Pro running SBCL, LispWorks, Haskell, Clojure, and various Scheme languages - all with nice Emacs based dev setups. Life is really good on modern hardware.

lispm · 4 years ago
The 1108 wasn't really special purpose Lisp hardware. One could run other operating systems on it. What made it special purpose was the loaded microcode for the CPU.

> Life is really good on modern hardware.

Agreed: On modern CPUs.

More support for the additional hardware features like GPUs, media processing engines and the neural network engines (see the M1 Pro/Max/Ultra) would be welcome.

mark_l_watson · 4 years ago
For GPU deep-learning support, the best bet I've found is Anaconda/conda, using the Apple M1 channel. That said, I usually use my Linux GPU rig or Colab for deep learning.
mst · 4 years ago
I feel like a lot of posts like this are pining for the complete lisp machine -user environment- and overestimating how necessary/important the hardware architecture would be to getting back to that today.

I can manage to context switch between different lisps fine but I do sometimes wonder in e.g. a slime+SBCL setup how much that context switching is costing me.

throw10920 · 4 years ago
I think that, while the idea is solid (Unix is poorly-designed and we should have better) some of the specific ideas mentioned are lacking:

> Everything worked in a single address space, programs could talk to each other in ways operating systems of today couldn’t dream of.

No! Bad! We have enough problems securing software on separate VMs running on the same metal, single address spaces are completely out of the question until someone manages to build a feasible trusted compiler system.

> Then we add a gazillion programming languages, VMs, Containers, and a million other things, UNIX is a bloated mess of workaround for its own problems.

A lot of these problems could happen with a Lisp machine - you could have a billion different Lisps, for instance (although, to be fair, with better (i.e. non-Unix) OS design you wouldn't need containers).

> With lisp machines, we can cut out the complicated multi-language, multi library mess from the stack, eliminate memory leaks and questions of type safety, binary exploits, and millions of lines of sheer complexity that clog up modern computers.

This is partially true, but a lot of the complexity in modern software doesn't come from Unix, but just...bad design decisions. Webtech doesn't really care whether it's running on Windows or Unix, after all.

Also, high-level CPUs are a bad idea: http://yosefk.com/blog/the-high-level-cpu-challenge.html

I think the good in this post is along the lines of: text bad, typed IPC good, runtime-aware OS good, standardized VMs good, interactive systems (Lispy stuff, Jupyter) > batch-processing systems (Unix, C).