As someone who used Franz LISP on Sun workstations while someone else nearby used a Symbolics 3600 refrigerator-sized machine, I was never all that impressed with the LISP machine.
The performance wasn't all that great. Initially garbage collection took 45 minutes, as it tried to garbage-collect paged-out code. Eventually that was fixed.
The hardware was not very good. Too much wire wrap and slow, arrogant maintenance.
I once had a discussion with the developers of Franz LISP. The way it worked was that it compiled LISP source files and produced .obj files. But instead of linking them into an executable, you had to load them into a run-time environment. So I asked, "Could you put the run-time environment in another .obj file, so you just link the entire program and get a standalone executable?" "Why would you want to do that?" "So we could ship a product." This was an alien concept to them.
So was managing LISP files with source control, like everything else. LISP gurus were supposed to hack.
And, in the end, 1980s "AI" technology didn't do enough to justify that hardware.
I worked on Franz Lisp at UCB. A couple of points:
The ".obj" file was a binary file that contain machine instructions and data. It was "fast loaded" and the file format was called "fasl" and it worked well.
Building an application wasn't an issue because we had "dumplisp", which took the image in memory and wrote it to disk. The resulting image could be executed to create a new instance of the program as it existed at the time dumplisp was run. Emacs called this "unexec" and it did approximately the same thing.
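(For readers who never used dumplisp: the same save-the-world idea survives in modern Common Lisp implementations. A rough sketch, assuming SBCL; the function name and keywords below are SBCL's, not anything Franz Lisp had:

    ;;; build.lisp -- sketch of the dumplisp/unexec idea on a modern Lisp (SBCL assumed).
    ;;; Load or define the application, then write the whole heap image to disk.

    (defun main ()
      ;; Hypothetical entry point; a real application would parse arguments, etc.
      (format t "Hello from a dumped image!~%"))

    ;; Everything defined up to this point is captured in the image.
    ;; :executable t prepends the runtime, so the result runs standalone --
    ;; roughly the "link in the runtime" workflow the grandparent asked for.
    (sb-ext:save-lisp-and-die "app"
                              :toplevel #'main
                              :executable t)

Running "sbcl --load build.lisp" writes ./app and exits; running ./app then starts at MAIN with the dumped heap.)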
Maybe your discussions with my group predated me and predated some of the above features, I don't know. I was in Fateman's group from '81 to '84.
I assume your source control comments were about the Lisp Machine and not Franz Lisp. RCS and SCCS were a thing in the early 80's, but they didn't really gain steam until after I arrived at UCB. I was the one (I think... it was a long time ago) that put Franz Lisp under RCS control.
I was doing this in 1980-1983. Here's some code.[1] It's been partly converted to Common LISP, but I was unable to get some of the macros to work.
This is the original Oppen-Nelson simplifier, a forerunner of today's SMT solvers. It was modified by them under contract for the Pascal-F Verifier, a very early program verifier.
We kept all the code under SCCS and built with make, because the LISP part was only part of the whole system.
[1] https://github.com/John-Nagle/pasv/tree/master/src/CPC4
Also: https://hanshuebner.github.io/lmman/pathnm.xml
Yes, because on VMS (and presumably Genera) 20 versions of a file took 20× as much disk space as one version, so you wouldn't keep unlimited versions. In SCCS the lines that didn't change are only stored once, so 20 versions might be 2× or 1.1× or 1.01× the original file size.
Ummmm... yes. The problem with versioning file systems is that they only kept the last few versions; for files under active development, it was usually difficult to recover state older than a week or two.
(SCCS handled collaborative development and merges a lot worse than anything current, but... versioning file systems were worse there, too; one war story I heard involved an overenthusiastic developer "revising" someone else's file with enough new versions that by the time the original author came back to it, their last version of the code was unrecoverable.)
It is worth mentioning that while it is not versioning per se, APFS and ZFS support instantaneous snapshots and clones as well.
Btrfs supports snapshots, too.
HAMMER2 in DragonFlyBSD has the ability to store revisions in the filesystem.
You've hit the nail on the head. It's the inability to ship product from a Lisp machine that killed the idea. The fungibility of Lisp all the way down turned each machine into a one-off lab of its own. And this tech arrived after the industry had started moving from bespoke solutions provided by hardware companies to packaged software from specialized firms. Since Lisp machines aren't good at maintaining a duplicatable environment, they're not terribly useful for commercial software production nor as a distribution target.
Source: I was a mainframe compiler developer at IBM during this era.
This mentality seems to have carried over to (most) modern FP stacks.
Most of them still require a very specific, very special, very fragile environment to run, and require multiple tools and carefully run steps just to do the same thing you can do with a compiled executable linked to the OS.
They weren't made for having libraries, or being packaged to run on multiple machines, or being distributed to customers to run on their own computers. Perhaps JS is the exception, but only to that last part.
Sure, it mostly works today, but a lot of people put in a lot of effort so we can keep shoving square pegs into round holes.
Not the ones I've used. Haskell compiles to executables, F# compiles to the same bytecode that C# does and can be shipped the same way (including compiling to executables if you need to deploy to environments where you don't expect the .NET runtime to be already set up), Clojure compiles to .jar files and deploys just like other Java code, and so on.
I'll grant that there are plenty of languages that seemed designed for research and playing around with cool concepts rather than for shipping code, but the FP languages that I see getting the most buzz are all ones that can ship working code to users, so the end users can just run a standard .exe without needing to know how to set up a runtime.
There are 'FP stacks' and "FP stacks", and they aren't all alike. Huge volumes of money and data get handled by FP stacks - Jane Street famously uses OCaml; Cisco runs their entire cybersec backend on Clojure; Nubank covers much of Latin America and is about to spread into the US, running on Clojure and Elixir; Apple has their payment system, Walmart their billing, Netflix their analytics on Clojure; Funding Circle in Europe and Splash in the US; etc. etc. There are tons of actual working products built on FP stacks. Just because your object-oriented brain can't pattern-match the reality, it doesn't mean it's not happening.
The hardware was never very interesting to me. It was the "lisp all the way down" that I found interesting, and the tight integration with editing-as-you-use. There's nothing preventing that from working on modern risc hardware (or intel, though please shoot me if I'm ever forced back onto it).
Time to dig up a classic story about Tom Knight, who designed the first prototype of the Lisp Machine at MIT in the mid-70's. It's in the form of a classic Zen koan. This copy comes from https://jargondb.org/some_ai_koans but I've seen plenty of variations floating around.
A novice was trying to fix a broken Lisp machine by turning the power off and on.
Knight, seeing what the student was doing, spoke sternly: “You cannot fix a machine by just power-cycling it with no understanding of what is going wrong.”
Knight turned the machine off and on.
The machine worked.
Here's another Moon story from the humor directory:
https://github.com/PDP-10/its/blob/master/doc/humor/moon's.g...
Moon's I.T.S. CRASH PROCEDURE document from his home directory, which goes into much more detail than just turning it off and on:
https://github.com/PDP-10/its/blob/master/doc/moon/klproc.11
And some cool Emacs lore:
https://github.com/PDP-10/its/blob/master/doc/eak/emacs.lore
Reposting this from the 2014 HN discussion of "Ergonomics of the Symbolics Lisp Machine":
https://news.ycombinator.com/item?id=7878679
http://lispm.de/symbolics-lisp-machine-ergonomics
https://news.ycombinator.com/item?id=7879364
eudox on June 11, 2014
Related: A huge collection of images showing Symbolics UI and the software written for it:
http://lispm.de/symbolics-ui-examples/symbolics-ui-examples
agumonkey on June 11, 2014
Nice, but I wouldn't confuse static images with the underlying semantic graph of live objects that's not visible in pictures.
DonHopkins on June 14, 2014
Precisely! When Lisp Machine programmers look at a screen dump, they see a lot more going on behind the scenes than meets the eye.
I'll attempt to explain the deep implications of what the article said about "Everything on the screen is an object, mouse-sensitive and reusable":
There's a legendary story about Gyro hacking away on a Lisp Machine, when he accidentally trashed the function cell of an important primitive like AREF (or something like that -- I can't remember the details -- do you, Scott? Or does Devon just make this stuff up? ;), and that totally crashed the operating system.
It dumped him into a "cold load stream" where he could poke around at the memory image, so he clamored around the display list, a graph of live objects (currently in suspended animation) behind the windows on the screen, and found an instance where the original value of the function pointer had been printed out in hex (which of course was a numeric object that let you click up a menu to change its presentation, etc).
He grabbed the value of the function pointer out of that numeric object, poked it back into the function cell where it belonged, pressed the "Please proceed, Governor" button, and was immediately back up and running where he left off before the crash, like nothing had ever happened!
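(A footnote for non-Lisp readers: the "function cell" is just the symbol's function binding, which Common Lisp still lets you read and set directly. Below is a toy, portable sketch of the save-and-restore idea; important-primitive is a made-up stand-in, since the real incident involved fishing a raw pointer out of the cold load stream rather than anything this polite:

    ;; A stand-in for an important primitive; modern Lisps lock the
    ;; COMMON-LISP package, so we demo with our own function instead of AREF.
    (defun important-primitive (x) (* x x))

    ;; Save the symbol's current function binding -- its "function cell".
    (defparameter *saved-definition* (symbol-function 'important-primitive))

    ;; Accidentally trash the function cell, as in the story.
    (setf (symbol-function 'important-primitive)
          (lambda (&rest args) (declare (ignore args)) (error "oops")))

    ;; Poke the saved value back into the function cell and carry on.
    (setf (symbol-function 'important-primitive) *saved-definition*)

    (important-primitive 4)   ; => 16, like nothing ever happened

The Lisp Machine version of this trick worked on the live operating system, which is the point of the story.)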
Here's another example of someone pulling themselves back up by their bootstraps without actually cold rebooting, thanks to the real time help of the networked Lisp Machine user community:
ftp://ftp.ai.sri.com/pub/mailing-lists/slug/900531/msg00339.html
A few years ago I was learning lisp and I mentioned it to my uncle who had been an inspiration to me getting into programming. It turns out he wrote a tcp/ip stack for the symbolics lisp machine when he worked at Xerox. They had some sort of government contract that had to be done in lisp on the symbolics and deep in a very long contract it said that the interface had to be tcp/ip which the symbolics didn’t support out of the box. He said to me his boss came to him one day and the conversation went something like this:
Boss: Hey there, you like learning new things right?
Him (sensing a trap): Errr, yes.
Boss: But you don’t program in lisp do you?
Him (relieved, thinking he’s getting out of something): No.
Boss: Good thing they sent these (gesturing at a literal bookshelf full of manuals that came with the symbolics).
So he had to write a TCP stack. He said it was really cool because it had time-travel debugging: the ability to hit a breakpoint, walk the execution backwards, change variables, resume, etc. This was in the 1980s. Way ahead of its time.
I liked the article, but I found the random remark about RISC vs CISC to be very similar to what the author is complaining about. The difference between the Apple M series and AMD's Zen series is NOT a RISC vs CISC issue. In fact, many would argue it's fair to say that ARM is not RISC and x86-64 is not CISC. These terms were used to refer to machines vastly different from what we have today, and the RISC vs CISC debate, like the LISP machine debate, really only lasted about 5 years. The fact is, we are all using out-of-order superscalar hardware where the decoders are not even close to the main thing consuming power and area on these chips. Under the hood they are all doing pretty much the same thing. But because it has a name, a marketable "war", and one difference people can easily understand (fixed-width vs variable-width encodings), people overestimate the significance of the one part they understand compared to the internal engineering choices and process node choices that actually matter and that people don't know about or understand. Unfortunately a lot of people hear the RISC vs CISC bedtime story and think there's no microcode on their M series chips.
You can go read about the real differences on sites like Chips and Cheese, but those aren't pop-sciencey and fun! It's mostly boring engineering details like the size of reorder buffers and the TSMC process node, and it takes more than 5 minutes to learn. You can't just pick it up one day like a children's story with a clear conclusion and moral. Just stop. If I can acquire all of your CPU microarchitecture knowledge from a Linus Tech Tips video, you shouldn't have an opinion on it.
If you look at the finished product and you prefer the M series, that's great. But that doesn't mean you understand why it's different from the Zen series.
There seem to be very real differences between x86 and ARM not only in the designs they make easy, but also in the difficulty of making higher-performance designs.
It's telling that ARM, Apple, and Qualcomm have all shipped designs that are physically smaller, faster, and consume way less power vs AMD and Intel. Even ARM's medium cores have had higher IPC than same-generation x86 big cores since at least the A78. SiFive's latest RISC-V cores are looking to match or exceed x86 IPC too. x86 is quickly becoming dead last, which shouldn't be possible if the ISA doesn't matter at all, given AMD and Intel's budgets (AMD for example spends more on R&D than ARM's entire gross revenue).
ISA matters.
x86 is quite constrained by its decoders, with Intel's 6- and 8-wide cores being massive and sucking an unbelievable amount of power, and AMD choosing a hyper-complex 2x4 decoder implementation with a performance bottleneck in serial throughput. Meanwhile, ARM designs ship 8-wide and wider decoders without any of those contortions.
32-bit ARM is a lot simpler than x86, but ARM still claimed a massive 75% reduction in decoder size when it switched to 64-bit-only in the A715, while increasing throughput. Things like a uop cache aren't free: they take die area and power. Even worse, somebody has to spend a bunch of time designing and verifying these workarounds, which balloons costs and increases time to market.
Another way the ISA matters is memory models. ARM uses barriers/fences, which are added only where needed. x86 uses a much tighter memory model that implies a lot of ordering the developers and compiler didn't actually need or want, and that impacts performance. The workaround (I'm not sure whether x86 actually does this) is deep analysis of which implicit barriers can be provably ignored, plus speculation on the rest. Once again though, wiring all these proofs into the CPU is complicated and error-prone, which slows things down while bloating circuitry, using extra die area/power, and sucking up time/money that could be spent in more meaningful ways.
While the theoretical performance mountain is the same, taking the stairs with ARM or RISC-V is going to be much easier/faster than trying to climb up the cliff faces.
> It's telling that ARM, Apple, and Qualcomm have all shipped designs that are physically smaller, faster, and consume way less power vs AMD and Intel.
These companies target different workloads. ARM, Apple, and Qualcomm are all making processors primarily designed to be run in low power applications like cell phones or laptops, whereas Intel and AMD are designing processors for servers and desktops.
> x86 is quickly becoming dead last, which shouldn't be possible if the ISA doesn't matter at all, given AMD and Intel's budgets (AMD for example spends more on R&D than ARM's entire gross revenue).
My napkin math is that Apple’s transistor volumes are roughly comparable to the entire PC market combined, and they’re doing most of that on TSMC’s latest node. So at this point, I think it’s actually the ARM ecosystem that has the larger R&D budget.
> In fact, many would argue it's fair to say that ARM is not RISC
It isn't now... ;-)
It's interesting to look at how close old ARM2/ARM3 code was to 6502 machine code. It's not totally unfair to think of the original ARM chip as a 32-bit 6502 with scads of registers.
And, for fairly obvious reasons!
But even ARM1 had some concessions to pragmatics, like push/pop of many registers (with a pretty clever microcoded implementation!), shifted registers/rotated immediates as operands, and auto-incrementing/decrementing address registers for loads/stores.
Stephen Furber has an extended discussion of the trade-offs involved in those decisions in his "VLSI RISC Architecture and Organization" (and also pretty much admits that having the PC as a GPR is a bad idea: the hardware is noticeably complicated for rather small gains on the software side).
Neither is "simple" but the axis is similar.
I'm a lisp machine romantic, but only for the software side. The hardware was neat, but nowadays I just want a more stable, graphically capable emacs that extends down through and out across more of userspace.
> emacs that extends down through and out across more of userspace
Making something like that has turned into a lifetime project for me. Implemented a freestanding lisp on top of Linux's stable system call interface. It's gotten to the point it has delimited continuations.
Emacs is incredibly stable. Most problems happen in custom-made packages. I don't even remember Emacs ever segfaulting for me on Linux. On Mac it can happen, but very rarely. I don't ever remember losing my data in Emacs - even when I deliberately kill the process, it recovers the unsaved changes.
For me, Emacs on Mac OS is not all that stable. I see a freeze about twice a month, which is not "very rarely" in my book. It also leaks memory, albeit now (in the upcoming version) less so. (Disclaimer: I am a heavy user and contributor.)
Symbolics’ big fumble was thinking their CPU was their special sauce for way too long.
They showed signs that some people there understood that their development environment was it, but it obviously never fully got through to decision-makers: They had CLOE, a 386 PC deployment story in partnership with Gold Hill, but they’d have been far better served by acquiring Gold Hill and porting Genera to the 386 PC architecture.
Xerox/Venue tried porting Interlisp (the Lisp machine environment developed at Xerox PARC) to both Unix workstations and commodity PC hardware, but it doesn't seem like that was a commercial success. Venue remained a tiny company providing support to existing Interlisp customers until its head developer died in the late 2000s and they wrapped up operations. The Unix/PC ports seem to have mostly been used as a way to run legacy Interlisp software on newer hardware rather than attracting anyone new to the Lisp machine world. I don't see why Symbolics doing the same thing as Xerox would have produced any different results. The real problem was that investment in expert systems/Lisp dried up as a whole. I don't know whether any of the Lisp vendors could have done anything to combat those market forces.
The environment lasted a long time as the basis for other Xerox products, such as their office automation system and as a front end for their printing systems. However, it wasn’t so much ported as the virtual machine was. (Just like Symbolics did with OpenGenera on Alpha.)
What I’m suggesting is that they could have done a full port to the hardware; OpenGenera is still an Ivory CPU emulator. In 1986-7 you could get an AT-compatible 80386 system running at 16-25MHz that supported 8-32MB of RAM for 10-20% the price of a Symbolics workstation, and while it might not run Lisp quite as fast as a 3600 series system, it would still be fast enough for both deployment and development—and the next generation would run Lisp at comparable performance.
I don't really understand why Lisp was so intrinsically tied to expert systems and AI. It seems to me that Scheme (and, to an extent, Common Lisp or other Lisps) is a pretty good platform for experimenting with software ideas, and was so long before Jupyter notebooks existed.
For those unaware, Symbolics eventually "pivoted" to DEC Alpha, a supposedly "open" architecture, which is how Genera became Open Genera, like OpenVMS. (And still, like OpenVMS, heavily proprietary.)
Wasn’t the “open” at the time meaning “open system” as a system that is open for external connections (aka networking) and not so much open as in “open source”?
There’s not a huge amount of _explicit_ dependency on the bit width of the system in either the 3600 or Ivory. Of course there’s still plenty of _implicit_ dependency in terms of hardware interaction, object layout in memory, collector implementation, etc. but that’s all stuff that had to be dealt with anyway to port from CADR to 3600 in the first place, and then again to port from 3600-series to Ivory.
> No, it wasn’t.
I kind of think it was. The best argument, I think, is embodied in Kent Pitman's comments in this usenet thread [1], where he argues that for the Lisp Machine romantics (at least the subset that includes him) what they are really referring to is the total integration of the software, and he gives some pretty good examples of the benefits it brings. He freely admits there's no reason the experience couldn't be reproduced on other systems; the problem is that it hasn't been.
I found his two specific examples particularly interesting. Search for
* Tags Multiple Query Replace From Buffer
and
* Source Compare
which are how he introduced them. He also describes "One of the most common ways to get a foothold in Genera for debugging", which I find pretty appealing and still not available in any modern system.
[1] https://groups.google.com/g/comp.lang.lisp/c/XpvUwF2xKbk/m/X...
https://www.yarchive.net/comp/lisp_support.html
Neither is "simple" but the axis is similar.