zackmorris commented on Show HN: Micropolis/SimCity Clone in Emacs Lisp   github.com/vkazanov/elcit... · Posted by u/vkazanov
vkazanov · 4 days ago
It took me about 15 years (out of 20 in the industry) to arrive at similar ideas. Interestingly, I had heard all the arguments many times before, but somewhat obscured by the way functional programming speaks of things.

For the purposes of this game, splitting things into core/shell makes certain things super easy: saving and restoring state, undo, debugging, testing, etc.

And one more bit, relevant to this new reality we find ourselves in. Having a bunch of pure functions merged into a very focused DSL makes it easy to extend the system through LLMs: a description of well-understood inputs and outputs fits into limited context windows.

By the way.

It is true that dedicated languages never arrived, but FCIS is not a language feature; it's more like an architectural paradigm.

So who cares?

zackmorris · 3 days ago
That's a fair question. For me, it's about removing the steep learning curves and gatekeeping from computer science and tech, because the realities of being a developer have all but consumed my career with busywork.

For example, when I first learned about the borrow checker in Rust, it didn't make sense to me, because I had mostly already transitioned to data-driven development (just use immutable objects with copy-on-write and accept using twice the memory which is cheap anyway). I had the same feeling when I saw the syntactic sugar in Ruby, because it's solving problems which I specifically left behind when I abandoned C++. So I feel that those languages resonate with someone currently working with C-style code, but not, say, Lisp or SQL. We should be asking more of our compilers, not changing ourselves to suit them.

Which comes down to the academic vs pragmatic debate. Simple vs easy. Except that we've made the simple complex and the easy hard.

So I hold a lot of criticism for functional languages too. They all seem to demand that developers transpile the solution in their minds to stuff like prefix notation. Their syntax usually doesn't even look like equations. There's always a heavy emphasis on pedantry, and none on ergonomics. So by the time solutions are written, we can't read them anyway.

I believe that most of these problems would go away if we went back to first principles and wrote a developer-oriented language, but one that's formal with no magic.

For example, I would like to write a language that includes something like gofmt that can transpile a file or code block to prefix/infix/postfix notation, then evolve the parser to the point that it can understand all of them. I know that sounds crazy, but it would let us step up to a level of abstraction where we aren't so much concerned with syntax anymore. Our solutions would be shaped to the problems, a bit like the DSL you mentioned. And someone else could always reshape the code into whatever notation they're used to for their own learning.
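
As a minimal sketch of what that notation round-trip might look like, here's a toy expression tree rendered three ways; the AST shape and function names are just assumptions for illustration:

  # Toy expression tree: ("op", left, right) tuples, with numbers/names as leaves.
  def render(node, style):
      if not isinstance(node, tuple):
          return str(node)
      op, left, right = node
      l, r = render(left, style), render(right, style)
      if style == "prefix":         # Lisp-like
          return f"({op} {l} {r})"
      if style == "postfix":        # Forth/RPN-like
          return f"{l} {r} {op}"
      return f"({l} {op} {r})"      # infix

  expr = ("+", ("*", 2, "x"), 1)    # 2*x + 1
  for style in ("prefix", "infix", "postfix"):
      print(style, "->", render(expr, style))
  # prefix -> (+ (* 2 x) 1)
  # infix -> ((2 * x) + 1)
  # postfix -> 2 x * 1 +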

You're right that FCIS is currently more of a pattern than a syntax. So the language would need to codify it. Normally imperative code would have to run in unsafe blocks, but I'd like to ban those, because they inevitably contaminate everything, leaving us with cruft. One way to do that might be to disallow mutability everywhere. Const is what allows imperative code to be transpiled to functional code and vice versa.

Except then we run into the problem of side effects and managing state, which leads us to monads, which leads us to promises/futures/closures and the async/await pattern (today's goto), which brings us full circle to where we started (nondeterminism). So we want to avoid those too, and we'd need to codify execution boundaries. Rather than monads, we'd treat all code as functional, sync/blocking, and imagine the imperative shell as outside the flow of execution, at the point where the environment changes state (like a human editing a cell in a spreadsheet). Maybe the imperative shell should use a regular grammar (type 3 in Chomsky's hierarchy) to manage state transitions like Redux, but not be Turing-complete (so more like a state machine than flow control).
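
A rough sketch of that kind of shell, assuming a hypothetical app whose states and events I'm making up; the point is that the shell is just a finite transition table, not general flow control:

  # Hypothetical shell: a finite transition table (regular-grammar style).
  # The pure core decides what to compute; the shell only walks this table.
  TRANSITIONS = {
      ("idle",    "open_file"): "loading",
      ("loading", "loaded"):    "editing",
      ("editing", "save"):      "saving",
      ("saving",  "saved"):     "editing",
      ("editing", "close"):     "idle",
  }

  def step(state, event):
      # Unknown (state, event) pairs are rejected; there are no hidden branches.
      return TRANSITIONS.get((state, event), state)

  state = "idle"
  for event in ["open_file", "loaded", "save", "saved", "close"]:
      state = step(state, event)
      print(event, "->", state)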

Except that state machines are hard to reason about above a few dozen states, especially with nested state machines. Thankfully state machines can be transpiled to coroutines and vice versa. So we can imagine the imperative shell sort of like a shader with const-only variables. An analogy might be using coroutines in Unity for sprite behavior, rather than polluting the main loop with switch() statements based on their state. I've been down both roads, and coroutines are so much easier to reason about that I'll never go back to state machines.
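
Here's a rough illustration of why, using Python generators and an invented sprite behavior (walk ten steps, wait three ticks, repeat); the two versions are meant to be equivalent:

  # State machine style: every tick re-dispatches on an explicit state variable.
  def sprite_fsm_tick(sprite):
      if sprite["state"] == "walking":
          sprite["x"] += 1
          if sprite["x"] >= 10:
              sprite["state"] = "waiting"
              sprite["timer"] = 3
      elif sprite["state"] == "waiting":
          sprite["timer"] -= 1
          if sprite["timer"] <= 0:
              sprite["state"] = "walking"
              sprite["x"] = 0

  # Coroutine style: the same behavior reads top to bottom; the "state"
  # is simply wherever the generator happens to be suspended.
  def sprite_coroutine():
      x = 0
      while True:
          while x < 10:            # walking
              x += 1
              yield
          for _ in range(3):       # waiting
              yield
          x = 0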

I should add that I realized only recently that monads can be thought of as enumerating every execution path in the logic, so sacrificing them might be premature. For example, if we have a monad that's a boolean or undefined, and we've written a boolean logic function, then it becomes ternary logic with the monad. Which is related to stuff like Prolog, Verilog/VHDL, SPICE and SAT solvers: we can treat the intermediate code as a tree (Lisp can be transpiled to a tree and vice versa), then put the tree in a solver along with the categories/types of the monads and formally define the solution space for a range of inputs. Sort of like fuzzing, but without the uncertainty. So the language should formalize monads too, not for I/O, but for solving and synthesis, so that we can treat code like logic circuits (spreadsheets).
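
A tiny sketch of that boolean-or-undefined idea, roughly Kleene's three-valued logic with None standing in for "undefined"; the function names are made up:

  # Three-valued AND/OR over {True, False, None}, where None means "undefined".
  def and3(a, b):
      if a is False or b is False:
          return False             # False dominates even if the other side is unknown
      if a is None or b is None:
          return None
      return True

  def or3(a, b):
      if a is True or b is True:
          return True
      if a is None or b is None:
          return None
      return False

  # Enumerating every input maps out the solution space of the little "circuit".
  for a in (True, False, None):
      for b in (True, False, None):
          print(a, b, "->", and3(a, b), or3(a, b))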

Anyway, this is the low-hanging fruit. I haven't gotten into stuff like atomic operators (banning locks and mutexes), content-addressable memories for parallelizing execution without caching, reprogrammable hardware for stuff like loop optimization, etc. All of this represents the "real work" that private industry refuses to do, because it has no incentive to help the competition enter the walled gardens which it profits from. Fixing this stuff is up to academia (which is being constantly undermined), or people who have won the internet lottery (which presents a chicken and egg problem because they can't win without the thing that gets them to the thing).

Note that even though designing this language would be ambitious, the end result would feel familiar, even ubiquitous. I'm imagining something that looks like JavaScript/PHP but with value-only argument passing via const variables to higher-order methods (or automatic conversion from side-effect-free flow control statements), with the parallel code handling semantics of Octave/MATLAB, and some other frivolities thrown in like pattern matching, destructuring, really all of the bells and whistles that we've come to expect. It would auto-optimize to the fullest extent possible for a high-multicore machine (1000+ cores, optionally distributed on a network or the internet), so it would run millions of times faster (potentially infinitely faster) than most anything we're used to today. Yes, we'd still hit Amdahl's law, but not resource limits most of the time. And where some people might see a utopian dream, I see something pedestrian, even boring to design. A series of simple steps that are all obvious, but only from the perspective of having wasted a lifetime fighting the existing tools.

Sorry this got so long. Believe it or not, I tried to keep it as short as possible.

zackmorris commented on The Missing Layer   yagmin.com/blog/the-missi... · Posted by u/lubujackson
lubujackson · 3 days ago
I appreciate the detailed response and I certainly haven't studied this, but part of the reason I made the measurement/construction comparison is that information is not equally important, while the errors are more or less equally distributed. And the biggest issue is the lack of ability to know whether something is an error in the first place; failure is only defined by the difference between our intent and the result. Code is how we communicate our intent most precisely.
zackmorris · 3 days ago
You're absolutely right. Apologies if I came off as critical, which wasn't my intent.

I was trying to make a connection with random sampling as a way to maybe reduce the inherent uncertainty in how well AI solves problems, but there's still a chance that 10 AIs could come up with the wrong answer and we'd have no way of knowing. Like how wisdom of the crowd can still lead to design by committee mistakes. Plus I'm guessing that AIs already work through several layers of voting internally to reach consensus. So maybe my comment was more of a breadcrumb than an answer.

Some other related topics might be error-correcting codes (like ECC RAM), Reed-Solomon error correction, the Condorcet paradox (voting may not be able to reach consensus) and even the halting problem (zero error might not be reachable in limited time).

However, I do feel that AI has reached an MVP status that it never had before. Your post reminded me of something I wrote about in 2011, where I said that we might not need a magic bullet to fix programming, just a sufficiently advanced one:

https://web.archive.org/web/20151023135956/http://zackarymor...

I took my blog(s) down years ago because I was embarrassed by what I wrote (it was during the Occupy Wall Street days but the rich guys won). It always felt so.. sophomoric, no matter how hard I tried to convey my thoughts. But it's interesting how so little has changed in the time since, yet some important things have.

Like, I hadn't used Docker in 2011 (it didn't come out until 2013) so all I could imagine was Erlang orchestrating a bunch of AIs. I thought that maybe a virtual ant colony could be used for hill climbing, similarly to how genetic algorithms evolve better solutions, which today might be better represented by temperature in LLMs. We never got true multicore computing (which still devastates me), but we did get Apple's M line of ARM processors and video cards that reached ludicrous speed.

What I'm trying to say is, I know that it seems like AI is all over the place right now, and it's hard to know if it's correct or hallucinating. Even when starting with the same random seed, it seems like getting two AIs to reach the same conclusion is still an open problem, just like with reproducible builds.

So I just want to say that I view LLMs as a small piece of a much larger puzzle. We can imagine a minimal LLM with less than 1 billion parameters (more likely 1 million) that controls a neuron in a virtual brain. Then it's not so hard to imagine millions or billions of those working together to solve any problem, just like we do. I see AIs like ChatGPT more like logic gates than processors. And they're already good enough to be considered fully reliable, if not better than humans at most tasks already, so it's easy to imagine a society of them with metacognition that couldn't get the wrong answer if it tried. Kind of like when someone's wrong on the internet and everyone lets them know it!

zackmorris commented on The Missing Layer   yagmin.com/blog/the-missi... · Posted by u/lubujackson
zackmorris · 4 days ago
We can borrow some math from Nyquist and Shannon to understand how much information can be transmitted over a noisy channel and potentially overcome the magic ruler uncertainty from the article:

https://en.wikipedia.org/wiki/Nyquist_rate

https://en.wikipedia.org/wiki/Noisy-channel_coding_theorem

Loosely this means that if we're above the Shannon Limit of -1.6 dB (below a 50% error rate), then data can be retransmitted some number of times to reconstruct it by:

  number of retransmissions = log(desired uncertainty)/log(error rate)
Where the uncertainty for n sigma, using the cumulative distribution function phi, is:

  uncertainty = 1 - phi(n)
So for example, if we want to achieve the gold standard 5 sigma confidence level of physics for a discovery (an uncertainty of 2.87x10^-7), and we have a channel that's n% noisy, here is a small table showing the number of resends needed:

  Error rate Number of resends
  0.1%       3
  1%         4
  10%        7
  25%        11
  49%        ~22
In practice, the bit error rate for most communication channels today is below 0.1% (dialup is 10^-6 to 10^-4, Ethernet is around 10^-12 to 10^-10). Meaning that sending 512 or 1500 byte packets for dialup and Ethernet respectively results in a cumulative resend rate of around 4% (dialup) and 0.0001% (Ethernet).

Just so we have it, the maximum transmission unit (MTU), which is the 512 or 1500 bytes above, can be calculated by:

MTU in bits = (desired packet loss rate)/(bit error rate)

So (4%)/(10^-5) = 4000 bits = 500 bytes for dialup and (0.0000001)/(10^-11) = 10000 bits = 1250 bytes for Ethernet. 512 and 1500 are close enough in practice, although Ethernet has jumbo frames now since its error rate has remained low despite bandwidth increases.
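
Here's a quick sketch of that MTU arithmetic, using the same illustrative bit error rates and target loss rates as above:

  # MTU sizing: packet_loss ~= mtu_bits * bit_error_rate, so
  # mtu_bits = desired_packet_loss / bit_error_rate.
  def mtu_bytes(desired_packet_loss, bit_error_rate):
      return desired_packet_loss / bit_error_rate / 8

  print(mtu_bytes(0.04, 1e-5))     # dialup:   500.0 bytes
  print(mtu_bytes(1e-7, 1e-11))    # Ethernet: 1250.0 bytes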

So even if AI makes a mistake 10-25% of the time, we only have to re-run it about 10 times (or run 10 individually trained models once) to reach a 5 sigma confidence level.
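
And a quick sketch of the resend table, assuming each attempt fails independently and using the 5 sigma tail probability as the target:

  import math
  from statistics import NormalDist

  # Residual uncertainty for a one-tailed 5 sigma confidence level (~2.87e-7).
  target = 1 - NormalDist().cdf(5)

  # With independent attempts we need error_rate**n <= target.
  for error_rate in (0.001, 0.01, 0.10, 0.25, 0.49):
      n = math.ceil(math.log(target) / math.log(error_rate))
      print(f"{error_rate:>5.1%} -> {n} attempts")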

In other words, it's the lower error rate achieved by LLMs in the last year or two that has provided enough confidence to scale their problem solving ability to any number of steps. That's why it feels like they can solve any problem, whereas before that they would often answer with nonsense or give up. It's a little like how the high signal to noise ratio of transistors made computers possible.

Since GPU computing power per dollar still doubles every 2 years, we only have to wait about 7 years for AI to basically get the answer right every time, given the context available to it.

For these reasons, I disagree with the premise of the article that AI may never provide enough certainty for engineering safety, but I appreciate, and have experienced, the sentiment. This is why I estimate that the Singularity may arrive within 7 years, but certainly within 14 to 21 years, at that rate of increase in confidence.

zackmorris commented on Show HN: Micropolis/SimCity Clone in Emacs Lisp   github.com/vkazanov/elcit... · Posted by u/vkazanov
zackmorris · 4 days ago
I believe that "functional core / imperative shell" (FCIS) is the future of programming:

https://medium.com/ssense-tech/a-look-at-the-functional-core...

The idea being that business logic gets written in synchronous, blocking, functional logic equivalent to Lisp, which is conceptually no different than a spreadsheet. Then real-world side effects get handled by imperative code along the lines of Smalltalk, which is conceptually more like a batch file or macro. A bit like pure functional executables that only have access to STDIN/STDOUT (and optionally STDERR and/or network/file streams) being run by a shell.

I think of these like backend vs frontend, or nouns vs verbs, or massless waves like photons vs massive particles like nucleons. Basically, there is no notion of time in functional programming, just transformations where input becomes output (the code can be understood as a static graph). Imperative programming, by contrast, deals with state mutation over time, where statically analyzing the code is as expensive as just running it (the code must be traced to be understood as a graph). In other words, functional code can be easily optimized and parallelized, while imperative code generally can't be.

So in model-view-controller (MVC) programming, the model and view could/should be functional, while the controller (event handler) could/should be imperative. I believe that there may be no way to make functional code handle side effects via patterns like monads without forcing us to reason about it imperatively. Which means that functional languages like Haskell and Scala probably don't offer a free lunch, but they're still worth learning.
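
Here's a minimal FCIS sketch, assuming a toy to-do list app; the names and command format are invented, but the shape is the point: a pure core that maps state to state, and a thin shell that owns all the I/O:

  # Functional core: pure, synchronous, no I/O; state in, state out.
  def add_item(todos, text):
      return todos + [{"text": text, "done": False}]

  def complete_item(todos, index):
      return [{**t, "done": True} if i == index else t
              for i, t in enumerate(todos)]

  # Imperative shell: the only place that touches the outside world.
  def shell():
      todos = []
      while True:
          line = input("> ")                     # side effect: read
          if line.startswith("add "):
              todos = add_item(todos, line[4:])
          elif line.startswith("done "):
              todos = complete_item(todos, int(line[5:]))
          elif line == "quit":
              break
          print(todos)                           # side effect: write

The core is trivially testable and replayable; saving, undo, and time travel fall out of keeping the shell this thin.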

Why this matters is that we've collectively decided to use imperative code for almost everything, relegating functional code to the road not taken. Which has bloated nearly all software by perhaps 10-100 times in terms of lines of code, conceptual complexity and even execution speed, making perhaps 90-99% of the work we do a waste of time or at least custodial.

It's also colored our perception of what programming is. "Real work" deals with values, while premature optimization deals with references and pointers. PHP (which was inspired by the shell) originally had value-passing semantics for arrays (and even subprocess fork/join orchestration) via copy-on-write, which freed developers from having to worry about efficiency or side effects. Unfortunately it was corrupted through design by committee when PHP 5 decided to bolt on classes as references rather than unifying arrays and objects by making the "[]" and "." operators largely equivalent, like JavaScript did. Alternative implementations like Hack could have fixed the fundamentals, but ended up offering little more than syntactic sugar and the mental load of having to consider an additional standard.

To my knowledge there has never been a mainstream FCIS language. ClojureScript is maybe the closest IMHO, or F#. Because of that, I mostly use declarative programming in my own work (where the spec is effectively the behavior) so that the internals can be treated as merely implementation details. Unfortunately that introduces some overhead because technical debt usually must be paid as I go, rather than left for future me. Meaning that it really only works well for waterfall, not agile.

I had always hoped to win the internet lottery so that I could build and test some of these alternative languages/frameworks/runtimes and other roads not taken by tech. The industry's failure to do that has left us with effectively single-threaded computers which run around 100,000 times slower today (at 100 times the cores per decade) than they would have if we hadn't abandoned true multicore superscalar processing and very large scale integration (VLSI) in the early 2000s, when most R&D was outsourced or cancelled after the Dot Bomb and the mobile/embedded space began prioritizing lower cost and power usage.

GPUs kept going though, which is great for SIMD, but doesn't help us as far as getting real work done. AI is here and can recruit them, which is great too, but I fear that they'll make all code look like it's been pair-programmed and over-engineered, where the cognitive load grows beyond the ability of mere humans to understand it. They may paint over the rot without renovating it, basically.

I hope that there's still time to emulate a true multiple instruction multiple data (MIMD) runtime on SIMD hardware to run fully-parallelized FCIS code potentially millions of times faster than anything we have now for the same price. I have various approaches in mind for that, but making rent always comes first, especially in inflationary times.

It took me over 30 years to really understand this stuff at a level where I could distill it down to these (inadequate) metaphors. So maybe this is TMI, but I'll leave it here nonetheless in the hopes that it helps someone manifest the dream of personal supercomputing someday.

zackmorris commented on CIA to Sunset the World Factbook   abc.net.au/news/2026-02-0... · Posted by u/kshahkshah
afavour · 4 days ago
Feels very short-sighted; the Factbook is a great example of low-cost soft power.
zackmorris · 4 days ago
Or maybe a conscious decision, as neoconservative Robert Kagan writes:

"President Trump has managed in just one year to destroy the American order that was and has weakened America's ability to protect its interests in the world that will be. Americans thought defending the liberal world order was too expensive. Wait until they start paying for what comes next,"

https://www.npr.org/2026/02/04/nx-s1-5699388/is-the-u-s-head...

zackmorris commented on I miss thinking hard   jernesto.com/articles/thi... · Posted by u/jernestomg
skydhash · 5 days ago
> 90-99% of programming is a waste of time. Most apps today have less than a single spreadsheet page of actual business logic.

I would very much like to know the kind of app you’ve seen. It’s very hard to see something like mpv, calibre, abiword, cmus,… through that lens. Even web apps like forgejo, gonic, sr.ht, don’t fit into that view.

zackmorris · 4 days ago
Fair enough. I meant social network websites and social media apps like Facebook and TikTok that could have been made in a weekend using HyperCard, FileMaker, Microsoft Access, etc, if we had real reactive backends similar to Firebase, Airtable and Zapier, which come so close to working for normal people but miss the mark fundamentally somehow.

I know that programming has gone terribly wrong, but it's hard for me to articulate how, because it's all of it - the entire frontend web development ecosystem, mobile development languages and frameworks, steep learning curve languages like Rust that were supposed to make things easier but put the onus on the developer to get the busywork right, everything basically. It's like trying to explain screws to craftsmen only familiar with nails.

In the simplest terms, it's because corporations are driving the development of those tools and vacuuming up all the profits on the backs of open source maintainers working in their parents' basements, rather than universities working from first principles to solve hard problems and give the solutions away to everyone for free for the good of society. We've moved from academia to slavery and call it progress.

zackmorris commented on I miss thinking hard   jernesto.com/articles/thi... · Posted by u/jernestomg
monch1962 · 5 days ago
As someone who's been coding for several decades now (i.e. I'm old), I find the current generation of AI tools very ... freeing.

As an industry, we've been preaching the benefits of running lots of small experiments to see what works vs what doesn't, trying out different approaches to implementing features, and so on. Pre-AI, lots of these ideas never got implemented because they'd take too much time for no definitive benefit.

You might spend hours thinking up cool/interesting ideas, but not have the time available to try them out.

Now, I can quickly kick off a coding agent to try out any hare-brained ideas I might come up with. The cost of doing so is very low (in terms of time and $$$), so I get to try out far more and weirder approaches than before when the costs were higher. If those ideas don't play out, fine, but I have a good enough success rate with left-field ideas to make it far more justifiable than before.

Also, it makes playing around with one-person projects a lot more practical. Like most people with a partner & kids, my down time is pretty precious, and tends to come in small chunks that are largely unplannable. For example, last night I spent 10 minutes waiting in a drive-through queue - that gave me about 8 minutes to kick off the next chunk of my one-person project development via my phone, review the results, then kick off the chunk after that. Absolutely useful to me personally, whereas last year I would've simply sat there annoyed, waiting to be served.

I know some people have an "outsourcing Lego" type mentality when it comes to AI coding - it's like buying a cool Lego kit, then watching someone else assemble it for you, removing 99% of the enjoyment in the process. I get that, but I prefer to think of it in terms of being able to achieve orders of magnitude more in the time I have available, at close to zero extra cost.

zackmorris · 5 days ago
Just wanted to +1 this as a deep thinker who disagrees with the blog post's conclusion. I look back on the years and decades I wasted dealing with the conceptual flaws inherent to nearly all software, and it breaks my heart.

90-99% of programming is a waste of time. Most apps today have less than a single spreadsheet page of actual business logic. The rest is boilerplate. Conjuring up whatever arcane runes are needed to wake a slumbering beast made of anti-patterns and groupthink.

For me, AI offers the first real computer that I've had access to in over 25 years. Because desktop computing stagnated after the 2000 Dot Bomb, and died on the table after the iPhone arrived in 2007. Where we should have symmetric multiprocessing with 1000+ cores running 100,000 times faster for the same price, we have the same mediocre quad core computer running about the same speed as its 3 GHz grandfather from the early 2000s. But AI bridges that divide by recruiting video cards that actually did increase in speed, albeit for SIMD which is generally useless for desktop computing. AI liberates me from having to mourn that travesty any longer.

I think that people have tied their identity to programming without realizing that it's mostly transcribing.

But I will never go back to manual entry (the modern equivalent of punch cards).

If anything, I can finally think deeply without it costing me everything. No longer having to give my all just to tread water as I slowly drown in technical debt and deadlines which could never be met without sacrificing a part of my psyche in the process.

What I find fascinating is that it's truly over. I see so clearly how networks of agents are evolving now, faster than we can study, and have already passed us on nearly every metric. We only have 5-10 years now until the epiphany, the Singularity, AGI.

It's so strange to have worked so hard to win the internet lottery when that no longer matters. People will stop buying software. Their AI will deliver their deepest wish, even if that's merely basic resources to survive, that the powers that be deny us to prop up their fever dream of late-stage crony capitalism under artificial scarcity.

Everything is about to hit the fan so hard, and I am so here for it.

zackmorris commented on Stories removed from the Hacker News Front Page, updated in real time (2024)   github.com/vitoplantamura... · Posted by u/akyuu
jackyinger · 19 days ago
What if there was a way for the “silent majority” (or something like it) to discuss political issues free of the impetus to polarization? Surely that would be a lot better than echo chambers.

Hacker News-style apoliticism strikes me as wanting to chameleon to whatever side is perceived as winning the political game. I think it's a nihilistic stance.

We need to be able to be political without the zealotry. Politics, of all things, is not a zero sum game.

zackmorris · 19 days ago
Seconded. Another way of saying this is that avoiding politics benefits the status quo.

Since the status quo is inherently conservative, that has a stifling effect on innovation - which is inherently liberal. Which is ironic for a site dedicated to disruption. Hence the cognitive dissonance.

I try to entertain opposing viewpoints in all of my comments, even if I don't always agree with them. So while I find it most practical to live conservatively, that doesn't mean that I wish that for the world. It's important to remember that FDR - a liberal - was one of Reagan's heroes. I think that we can imagine a Star Trek style post-scarcity geopolitical reality without abandoning the ethos which got us this far.

Now, regardless of all that, I still think that HN has the best ranking algorithm around. So I would say that if it wants to get serious about getting back to meritocracy, funding real work on hard problems, setting a positive example through intellectual honesty, etc, then it should consider revising its flagging policy.

A proof of concept might be to move flagged posts below the fold past slot 31, rather than removing them completely. Then they could bubble back up on their own merit. Or maybe each flag costs 10 slots, something like that. And all flags should go through human review to prevent gaming, if they don't already.
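
As a rough sketch of the flag-penalty idea (the numbers and field names here are entirely made up, just to show the shape):

  # Hypothetical ranking tweak: each flag pushes a story down 10 slots
  # instead of removing it, so it can still climb back on its own merit.
  def adjusted_rank(base_rank, flags, penalty_per_flag=10):
      return base_rank + flags * penalty_per_flag

  stories = [
      {"title": "A", "base_rank": 3,  "flags": 0},   # stays near the top
      {"title": "B", "base_rank": 5,  "flags": 2},   # pushed to ~slot 25, below the fold
      {"title": "C", "base_rank": 12, "flags": 1},   # ~slot 22
  ]
  front_page = sorted(stories, key=lambda s: adjusted_rank(s["base_rank"], s["flags"]))
  for s in front_page:
      print(s["title"], adjusted_rank(s["base_rank"], s["flags"]))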
