mihaic · a year ago
When I first heard the maxim that an intelligent person should be able to hold two opposing thoughts at the same time, I was naive to think it meant weighing them for pros and cons. Over time I realized that it means balancing contradictory actions, and the main purpose of experience is knowing when to apply each.

Concretely related to the topic, I've often found myself inlining short pieces of one-time code that made functions more explicit, while at other times I'll spend days just breaking up thousand line functions into simpler blocks just to be able to follow what's going on. In both cases I was creating inconsistencies that younger developers nitpick -- I know I did.

My goal in most cases now is to optimize code for the limits of the human mind (my own, in low-effort mode), and I like to be able to treat rules as guidelines. The trouble is: how can you scale this to millions of developers, and what are those limits of the human mind when more and more AI-generated code is used?

tetha · a year ago
I had exactly this discussion today, in an architectural review of an infrastructure extension. As our newest team member noted, we planned to follow the reference architecture of a system in some places, and chose not to follow it in other places.

And this led to a really good discussion pulling the reference architecture of this system apart and understanding what it optimizes for (resilience and fault tolerance), what it sacrifices (cost, number of systems to maintain) and what we need. And yes, following the reference architecture in one place and breaking it in another place makes sense.

And I think that understanding the different options, as well as the optimization goals setting them apart, allows you to make a more informed decision and allows you to make a stronger argument why this is a good decision. In fact, understanding the optimization criteria someone cares about allows you to avoid losing them in topics they neither understand nor care about.

For example, our CEO will not understand the technical details why the reference architecture is resilient, or why other choices are less resilient. And he would be annoyed about his time being wasted if you tried. But he is currently very aware of customer impacts due to outages. And like this, we can offer a very good argument to invest money in one place for resilience, and why we can save money in other places without risking a customer impact.

We sometimes follow rules, and in other situations, we might not.

mandevil · a year ago
Yes, and it is the engineering experience/skill to know when to follow the "rules" of the reference architecture, and when you're better off breaking them, that makes someone a senior engineer/manager/architect, whatever your company calls it.
jschrf · a year ago
Your newest team member sounds like someone worth holding onto.
ragnese · a year ago
> My goal in most cases now is to optimize code for the limits of the human mind (my own in low-effort mode) and like to be able to treat rules as guidelines. The trouble is how can you scale this to millions of developers, and what are those limits of the human mind when more and more AI-generated code will be used?

I think the truth is that we just CAN'T scale that way with the current programming languages/models/paradigms. I can't PROVE that hypothesis, but it's not hard to find examples of big software projects with lots of protocols, conventions, failsafes, QA teams, etc, etc that are either still hugely difficult to contribute to (Linux kernel, web browsers, etc) or still have plenty of bugs (macOS is produced by the richest company on Earth and a few years ago the CALCULATOR app had a bug that made it give the wrong answers...).

I feel like our programming tools are pretty good for programming in the small, but I suspect we're still waiting for a breakthrough for being able to actually make complex software reliably. (And, no, I don't just mean yet another "framework" or another language that's just C with a fancier type system or novel memory management)

Just my navel gazing for the morning.

twh270 · a year ago
I think the only way this gets better is with software development tools that make it impossible to create invalid states.

In the physical world, when we build something complex like a car engine, a microprocessor, or a bookcase, the laws of physics guide us and help prevent invalid states. Not all of them -- an upside-down bookcase still works -- but a lot of them.

Of course, part of the problem is that when we build the software equivalent of an upside down bookcase, we 'patch' it by creating trim and shims to make it look better and more structurally sound instead of tossing it and making another one the right way.

But mostly, we write software in a way that allows for a ton of incorrect states. As a trivial example, expressing a person's age as an 'int', allowing for negative numbers. As a more complicated example, allowing for setting a coupon's redemption date when it has not yet been clipped.
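
To make those two examples concrete, here's a minimal Python sketch (all names hypothetical): encode the constraints in types so the invalid states can't even be constructed.

  from dataclasses import dataclass
  from datetime import datetime

  @dataclass(frozen=True)
  class Age:
      years: int

      def __post_init__(self) -> None:
          # Reject the invalid state at construction time.
          if self.years < 0:
              raise ValueError("age cannot be negative")

  @dataclass(frozen=True)
  class UnclippedCoupon:
      code: str

      def clip(self, at: datetime) -> "ClippedCoupon":
          return ClippedCoupon(self.code, clipped_at=at)

  @dataclass(frozen=True)
  class ClippedCoupon:
      code: str
      clipped_at: datetime

  def redeem(coupon: ClippedCoupon, at: datetime) -> None:
      # Setting a redemption date on an unclipped coupon is now a
      # type error rather than a latent bug: this function cannot
      # even be handed an UnclippedCoupon.
      ...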

bluGill · a year ago
I don't think we will ever get the breakthrough you are looking for. Things like design patterns and abstractions are our attempt at this. Eventually you need to trust that whoever wrote the other code you have to deal with was sane. This assumption is often false (and it might be you who is insane for thinking they could/would make it work the way you think it does).

We will never get rid of the need for QA. Automated tests are great, and I believe in them (note that I didn't say unit tests or integration tests). Formal proofs appear great (I have never figured out how to prove my code), but as Knuth said, "Beware of bugs in the above code; I have only proved it correct, not tried it". There are many ways code can meet the spec and still be wrong, because in the real world you rarely understand the problem well enough to write a correct spec in the first place. QA should understand the problem well enough to say "this isn't what I expected to happen."

austin-cheney · a year ago
I suppose that depends on the language and the elegance of your programming paradigm. This is where primitive simplicity becomes important, because when your foundation is composed of very few things that are not dependent upon each other you can scale almost indefinitely in every direction.

Imagine you are limited to only a few ingredients in programming: statements, expressions, functions, objects, arrays, and operators that are not overloaded. That list does not contain classes, inheritance, declarative helpers, or a bunch of other things. With a list of ingredients so small no internal structure or paradigm is imposed on you, so you are free to create any design decisions that you want. Those creative decisions about the organization of things is how you dictate the scale of it all.

Most people, though, cannot operate like that. They claim to want the freedom of infinite scale, but they just need a little help. The more help supplied by the language, framework, or whatever, the less freedom you have to make your own decisions. Eventually there is so much help that all you do as a programmer is contend with that helpful goodness without any chance to scale things in any direction.

DSMan195276 · a year ago
> protocols, conventions, failsafes, QA teams, etc, etc that are either still hugely difficult to contribute to (Linux kernel, web browsers, etc)

To be fair here, I don't think it's reasonable to expect that once you have "software development skills" it automatically gives you the ability to fix any code out there. The Linux Kernel and web browsers are not hard to contribute to because of conventions, they're hard because most of that code requires a lot of outside knowledge of things like hardware or HTML spec, etc.

The actual submitting part isn't the easiest, but it's well documented if you go looking; I'm pretty sure most people could handle it if they really had a fix they wanted to submit.

knodi · a year ago
> I feel like our programming tools are pretty good for programming in the small, but I suspect we're still waiting for a breakthrough for being able to actually make complex software reliably. (And, no, I don't just mean yet another "framework" or another language that's just C with a fancier type system or novel memory management)

Readability is optimization for humans -- for your future self or for other people's posterity -- and code comprehension happens in the reader's mind. We need a new way to visualize and comprehend code that doesn't depend on heavy reading and on the reader's personal capacity for syntax parsing and comprehension.

This is something we will likely never be able to get right with our current man-machine interfaces: keyboard, mouse/touch, video, and audio.

Just a thought. As always I reserve the right to be wrong.

madisp · a year ago
calculator app on latest macos (sequoia) has a bug today - if you write FF_16 AND FF_16 in the programmer mode and press =, it'll display the correct result - FF_16, but the history view displays 0_16 AND FF_16 for some reason.
JadeNB · a year ago
> macOS is produced by the richest company on Earth and a few years ago the CALCULATOR app had a bug that made it give the wrong answers...

This is stated as if surprising, presumably because we think of a calculator app as a simple thing, but it probably shouldn't be that surprising--surely the calculator app isn't used that often, and so doesn't get much in-the-field testing. Maybe you've occasionally used the calculator in Spotlight, but have you ever opened the app? I don't think I have in 20 years.

mgsouth · a year ago
We've been there, done that. CRUD apps on mainframes and minis had incredibly powerful and productive languages and frameworks (Quick, Quiz, QTP: you're remembered and missed.) Problem is, they were TUI (terminal UI), isolated, and extremely focused; i.e. limited. They functioned, but would be like straitjackets to modern users.

(Speaking of... has anyone done a 80x24 TUI client for HN? That would be interesting to play with.)

lifeisstillgood · a year ago
I often bang on about “software is a new form of literacy”. And this I feel is a classic example - software is a form of literacy that can not only be executed by a CPU but also, at the same time, transmit concepts from one human's head to another (just like writing)

And so asking “will AI generated code help” is like asking “will AI generated blog spam help”?

No - companies with GitHub copilot are basically asking how do I self-spam my codebase

It’s great to get from zero to something in some new JS framework, but for your core competency it’s like outsourcing your thinking - it always comes a cropper

(Book still being written)

davidw · a year ago
> is a way to transmit concepts from one humans head to another (just like writing)

That's almost its primary purpose in my opinion... the CPU does not care about Ruby vs Python vs Rust, it's just executing some binary code instructions. The code is so that other people can change and extend what the system is doing over time and share that with others.

debit-freak · a year ago
I think a lot of the traditional teachings of "rhetoric" can apply to coding very naturally—there's often practically unlimited ways to communicate the same semantics precisely, but how you lay the code out and frame it can make the human struggle to read it straightforward to overcome (or near-impossible, if you look at obfuscation).
j7ake · a year ago
Computational thinking is more important than software per se.

Computational thinking is mathematical thinking.

tomohawk · a year ago
What makes an apprentice successful is learning the rules of thumb and following them.

What makes a journeyman successful is sticking to the rules of thumb, unless directed by a master.

What makes a master successful is knowing why the rules of thumb exist, what their limits are, when to not follow them, and being able to make up new rules.

codeflo · a year ago
There’s also the effect that a certain code structure that’s clearer for a senior dev might be less clear for a junior dev and vice versa.
rob74 · a year ago
Or rather, senior devs have learned to care more about having clear code than about (over-)applying principles like DRY, separation of concerns, etc., while juniors haven't (yet)...
kolinko · a year ago
I bumped into that issue, and it caused a lot of friction between me and 3 young developers I had to manage.

Ideas on how to overcome that?

peepee1982 · a year ago
That's exactly what I try to do. I think it's an unpopular opinion though, because there are no strict rules that can be applied, unlike with pure ideologies. You have to go by feel and make continuous adjustments, and there's no way to know if you did the right thing or not, because not only do different human minds have different limits, but different challenges don't tax every human mind to the same proportional extent.

I get the impression that programmers don't like ambiguity in general, let alone in things they have to confront in real life.

mr_toad · a year ago
> there are no strict rules that can be applied

The rules are there for a reason. The tricky part is making sure you’re applying them for that reason.

gspencley · a year ago
My intro to programming was that I wanted to be a game developer in the 90s. Carmack and the others at Id were my literal heroes.

Back then, a lot of code optimization was magic to me. I still just barely understand the famous inverse square root optimization in the Quake III Arena source code. But I wanted to be able to do what those guys were doing. I wanted to learn assembly, to be able to drop down to assembly, and to know where and when that would help and why.

And I wasn't alone. This is because these optimizations are not obvious. There is a "mystique" to them. Which makes it cool. So virtually ALL young, aspiring game programmers wanted to learn how to do this crazy stuff.

What did the old timers tell us?

Stop. Don't. Learn how to write clean, readable, maintainable code FIRST and then learn how to profile your application in order to discover the major bottlenecks and then you can optimize appropriately in order of greatest impact descending.

If writing the easiest code to maintain and understand also meant writing the most performant code, then the concept of code optimization wouldn't even exist. The two are mutually exclusive, except in specific cases where they're not, and then it's not even worth discussing because there is no conflict.

Carmack seems to acknowledge this in his email. He realizes that inlining functions needs to be done with careful judgment, and the rationale is both performance and bug mitigation. But if inlining were adopted as a matter of course, a policy of "always inline first", the results would quickly become an unmaintainable, impossible-to-comprehend mess that would swing so far in the other direction that bugs become more prominent, because you can't touch anything in isolation.

And that's the bane of software development: touch one thing and end up breaking a dozen other things that you didn't even think about because of interdependence.

So we've come up with design patterns and "best practices" that allow us to isolate our moving parts, but that has its own set of trade-offs which is what Carmack is discussing.

Being a 26 year veteran in the industry now (not making games btw), I think this is the type of topic that you need to be very experienced to be able to appreciate, let alone to be able to make the judgment calls to know when inlining is the better option and why.

skummetmaelk · a year ago
That doesn't seem like holding two opposing thoughts. Why is balancing contradictory actions to optimize an outcome different to weighing pros and cons?
mihaic · a year ago
What I meant to say was that when people encounter contradictory statements like "always inline one-time functions" and "break down functions into easy-to-understand blocks", they try to pick only one single rule, even if they consider the pros and cons of each rule.

After a while they consider both rules as useful, and will move to a more granular case-by-case analysis. Some people get stuck at rule-based thinking though, and they'll even accuse you of being inconsistent if you try to do case-by-case analysis.

leoh · a year ago
You are probably reaching for Hegel’s concept of dialectical reconciliation
mihaic · a year ago
Not sure, didn't Hegel say that there should be a synthesis step at some point? My view is that there should never be a synthesis when using these principles as tools, as both conflicting principles need to remain permanently in opposition.

So, more like Heraclitus's union of opposites maybe if you really want to label it?

hnuser123456 · a year ago
On a positive note, most AI-generated code will follow a style that is very "average" of everything it's seen. It will have its own preferred way of laying out the code that happens to look like how most people using that language (and sharing their code online publicly) use it.
SoftTalker · a year ago
> other times I'll spend days just breaking up thousand line functions into simpler blocks just to be able to follow what's going on

Absolutely, I'll break up a long block of code into several functions, even if there is nowhere else they will be called, just to make things easier to understand (and potentially easier to test). If a function or procedure does not fit on one screen, I will almost always break it up.

Obviously "one screen" is an approximation, not all screens/windows are the same size, but in practice for me this is about 20-30 lines.

JamesBarney · a year ago
My go-to heuristic for how to break up code: whiteboard your solution (or draw it up in Lucidchart) as if explaining it to another dev. If your methods don't match the whiteboard, refactor.
mjburgess · a year ago
To a certain sort of person, conversation is a game of arriving at these antithesis statements:

   * Inlining code is the best form of breaking up code. 
   * Love is evil.
   * Rightwing populism is a return to leftwing politics. 
   * etc.

The purpose is to induce aporia (puzzlement), and hence make it possible to evaluate apparent contradictions. However, a lot of people resent feeling uncertain, and so, people who speak this way are often disliked.

j7ake · a year ago
To make an advance in a field, you must simultaneously believe in what’s currently known as well as distrust that the paradigm is all true.

This gives you the right mindset to focus on advancing the field in a significant way.

Believing in the paradigm too much will lead to only incremental results, and not believing enough will not provide enough footholds for you to work on a problem productively.

defaultcompany · a year ago
> My goal in most cases now is to optimize code for the limits of the human mind (my own in low-effort mode)

I think you would appreciate the philosophy of the Grug Brained Developer: https://grugbrain.dev

xnx · a year ago
> I was creating inconsistencies that younger developers nitpick

Obligatory: “A foolish consistency is the hobgoblin of little minds"

Continued because I'd never read the full passage: "... adored by little statesmen and philosophers and divines. With consistency a great soul has simply nothing to do. He may as well concern himself with his shadow on the wall. Speak what you think now in hard words, and to-morrow speak what to-morrow thinks in hard words again, though it contradict every thing you said to-day. — 'Ah, so you shall be sure to be misunderstood.' — Is it so bad, then, to be misunderstood? Pythagoras was misunderstood, and Socrates, and Jesus, and Luther, and Copernicus, and Galileo, and Newton, and every pure and wise spirit that ever took flesh. To be great is to be misunderstood.” ― Ralph Waldo Emerson, Self-Reliance: An Excerpt from Collected Essays, First Series

grbsh · a year ago
> limits of the human mind when more and more AI-generated code will be used

We already have a technology which scales infinitely with the human mind: abstraction and composition of those abstractions into other abstractions.

Until now, we’ve focused on getting AI to produce correct code. Now that this is beginning to be successful, I think a necessary next step for it to be useful is to ensure it produces well-abstracted and clean code (such that it scales infinitely)

hinkley · a year ago
That’s undoubtedly a Zelda Fitzgerald quote (her husband plagiarized her shamelessly).

As a consequence of the Rule of Three, you are allowed to have rules that have one exception without having to rethink the law. All X are Y except for Z.

I sometimes call this the Rule of Two. Because it deserves more eyeballs than just being a subtext of another rule.

hibernator149 · a year ago
Wait, isn't that just Doublethink from 1984? Holding two opposing thoughts is a sign that your mental model of the world is wrong and that it needs to be fixed. Where have you heard that maxim?
perrygeo · a year ago
No you've got it completely backwards. Reality has multiple facets (different statements, all of which can be true) and a mental model that insists on a singular judgement is reductionist, missing the forest for the trees. Light is a wave and a particle. People are capable of good and bad. The modern world is both amazing and unsustainable. etc.

Holding multiple truths is a sign that you understand the problem. Insisting on a singular judgement is a sign that you're just parroting catchy phrases as a short cut to thinking; the real world is rarely so cut and dry.

HKH2 · a year ago
It's not referring to cognitive dissonance.

ninetyninenine · a year ago
His overall solution highlighted in the intro is that he's moved on from inlining and now does pure functional programming. Inlining is only relevant for him during IO or state changes which he does as minimally as possible and segregates this from his core logic.

Pure functional programming is the bigger insight here that most programmers will just never understand why there's a benefit there. In fact most programmers don't even completely understand what FP is. To most people FP is just a bunch of functional patterns like map, reduce, filter, etc. They never grasp the true nature of "purity" in functional programming.

You see this lack of insight in this thread. Most responders literally ignore the fact that Carmack called his email completely outdated and that he mostly does pure FP now.

wmanley · a year ago
Here's the link where he discusses functional programming style:

https://web.archive.org/web/20170116040923/http://gamasutra....

He does not say that his email is completely outdated - he just says that calling pure functions is exempt from the inlining rule.

He's not off writing pure FP now. His approach is still deeply pragmatic. In the link above he discusses degrees of function purity. "Pure FP" has a whole different connotation - where whole programs are written in that constrained style.

solomonb · a year ago
The original article literally starts with this:

> In the years since I wrote this, I have gotten much more bullish about pure functional programming, even in C/C++ where reasonable: (link)
>
> The real enemy addressed by inlining is unexpected dependency and mutation of state, which functional programming solves more directly and completely. However, if you are going to make a lot of state changes, having them all happen inline does have advantages; you should be made constantly aware of the full horror of what you are doing.

He explicitly says that functional programming solves the same issue as inlining but more directly and completely.
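
To make the parallel concrete, here's a minimal Python sketch (hypothetical names): inlining keeps the state mutation visible in one place, while the pure version eliminates the hidden mutation entirely.

  # Inline style: every mutation happens in plain sight, in order.
  def update_player_inline(player: dict, dt: float) -> None:
      player["vy"] -= 9.8 * dt          # gravity
      player["y"] += player["vy"] * dt  # integrate position
      if player["y"] < 0:               # clamp to the floor
          player["y"] = 0.0
          player["vy"] = 0.0

  # Pure style: no mutation at all; new state is returned instead.
  def update_player_pure(y: float, vy: float, dt: float) -> tuple:
      vy -= 9.8 * dt
      y += vy * dt
      return (0.0, 0.0) if y < 0 else (y, vy)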

pragma_x · a year ago
Thank you for this. I appreciate that this (classic) article lays bare the essence of FP without the usual pomp and "use Lisp/Scheme/Haskell already" rhetoric. My takeaway is that FP is mostly about using functions w/o side effects (pure), which can be achieved in any programming language provided you're diligent about it.
ninetyninenine · a year ago
He literally said he’s bullish on pure fp. Which means he is off writing pure fp. His own article about it never explicitly or implicitly implies a “pragmatic approach”.

I never said he said his email was completely outdated. He for sure implies it’s outdated and updates us on his views of inlining which I also mentioned.

Hendrikto · a year ago
> he's moved on from inlining and now does pure functional programming

Neither of those is true. He does more FP “where reasonable”, and that decreases the need for inlining. He does not do pure FP, and he still inlines.

solomonb · a year ago
"pure FP" does not mean only writing in a functional style. Purity refers to referential transparency, ie., functions do not depend on or modify some global state.
ninetyninenine · a year ago
He literally says he’s more bullish on pure fp. Read it. And I also wrote about where he still inlines.
eyelidlessness · a year ago
I think more people grasp functional programming all the time, or at least the most salient detail: referential transparency. It’s easy to show the benefits in the small, without getting heavy on academics: pure functions are easier to test, understand, and change with confidence. All three of these reinforce each other, but they’re each independently beneficial as well.

There are tons of benefits to get from learning this lesson in a more intentional way—I know that I changed my entire outlook on programming after some time working in Clojure!—but I’ve seen other devs take the same lessons in multi-paradigm contexts as well.
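
A small illustration of the testability claim (a hypothetical Python sketch): the pure version can be tested with a bare assertion, while the impure one needs its hidden state arranged first.

  _tax_rate = 0.25  # hidden module-level state

  def total_impure(price: float) -> float:
      # Depends on state that is invisible in the signature.
      return price * (1 + _tax_rate)

  def total_pure(price: float, tax_rate: float) -> float:
      # Referentially transparent: same inputs, same output, always.
      return price * (1 + tax_rate)

  # Testing the pure version needs no setup, mocks, or fixtures:
  assert total_pure(100.0, 0.25) == 125.0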

ninetyninenine · a year ago
Not just this. Modularity is the main insight as well. The reason why OOP doesn't work is that methods can't be broken down. Your atom in OOP is literally a collection of methods tied to mutating state. You cannot break down that collection further.

In pure FP you can break your function down into the smallest computational unit possible. This is what prevents architectural technical debt, as you can rework your code simply by recomposing your modular logic.

aithrowawaycomm · a year ago
Over the last few decades there has been quite the rug-pull in "functional programming"!

1) programming which is based on "functions" (procedures) as values, including anonymous lambdas (hence map/fold/etc paradigms), which is only really possible in languages that intentionally support it

2) programming where every procedure (except some boundary code for IO/etc) is truly is a well-defined mathematical function, which is possible in almost any programming language

1) describes any language you would call a "functional programming language," whereas 2) involves well-understood concepts around mutability and determinism that a minority (correctly) describe as "pure functional programming."

So I think it's a bit judgmental to say "lack of insight" when it's more about shifting terminology. A very high-reliability C program might be "purely functional" (inside of an IO/memory boundary) and built by engineers with the precise insight you're discussing, but in most contexts it would be odd to say "purely functional," especially if the code eschews C mechanics around function pointers. In most imperative contexts it is clearer to describe purely functional ideas in terms of imperative programming (which are equally clear, if less philosophically interesting).
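
A hypothetical Python sketch of the two senses: sense 1 is about functions as values; sense 2 is about purity, and needs no lambdas at all.

  # Sense 1: "functional" as functions-as-values (map, filter, lambda).
  # Note this lambda is impure: it reads mutable surrounding state.
  threshold = 10
  big = list(filter(lambda x: x > threshold, [5, 12, 9, 30]))

  # Sense 2: "purely functional" as a well-defined mathematical
  # function. Written imperatively, yet pure: no hidden inputs,
  # no mutation of anything the caller can observe.
  def keep_above(xs: list, threshold: int) -> list:
      result = []
      for x in xs:
          if x > threshold:
              result.append(x)
      return result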

dkarl · a year ago
> To most people FP is just a bunch of functional patterns like map, reduce, filter, etc.

For me, these were the gateway drugs to FP, because they weren't available in the languages I was used to, namely C++ and Java. I encountered map and filter in Python in the 1990s, immediately realized a ton of Java and C++ code I wrote would be simpler with them, and dove into Lisp when I found out that's where Python got them. They have nothing to do with pure functional programming, of course; they're just nice idioms that came from functional languages. That led to a long slippery slope of ideas that upgraded my non-FP programming at every step, long before I got into anything that could be described as pure FP.

I don't know if it helps to draw a strict line between "pure" and "impure" FP. I mostly code in Scala, which is an imperative, side-effecting language. Scala gives you exactly the same power as Java to read and mutate global state. However, by design, Scala provides extremely good support for functional idioms, and you can use an effects system (such as Cats Effect or ZIO) to write in a pure FP style. But is it "pure FP" if you can read and mutate global state, and if you have to rely on libraries that are written in Java? Maybe, maybe not, but I don't think trying to answer that question yields much insight.

ninetyninenine · a year ago
You do want to draw a strict line. When I say most programmers don’t get it… I’m talking about you. You don’t get why carmack is bullish about pure fp. You think it’s just map, reduce and filter and you don’t get why those functions are irrelevant.

To you it’s just fulfilling an OCD need to simplify your code and make it prettier. There is deeper insight here that you missed, and even when you read what carmack wrote about pure fp I doubt you’ll internalize the point.

Lisp is not pure. That’s why you don’t have the insight. For the true insight you need to learn about Haskell in a non trivial way. Not just youtube videos, but books that teach you the language from first principles. You need to understand the IO monad. And why it makes Haskell pure and how it forces you to organize your code in a completely different way. This is not an easy thing to understand.

The IO monad, when it appears, infects your code with the IO type and makes it extremely annoying to get rid of. I had a friend who learned Haskell and hated it because of the IO monad. He stopped learning Haskell too early and never "got it".

If you reach this point you have to keep learning about Haskell until you understand why things are the way they are with haskell.

Just remember this: the annoyance of the IO monad is designed like that so that you write your logic in a way that doesn’t allow the monad to pollute most of your code.

wruza · a year ago
Some grasp it but see its trade-off contract, which is demanding.
ninetyninenine · a year ago
With practice it just becomes another paradigm of programming. The trade off is really a skill issue from this perspective.

The larger issue is performance which is a legitimate reason for not using fp in many cases. But additionally in many cases there is no performance trade off.

zelphirkalt · a year ago
That most won't get it is due to the fact that most are kind of "industrial programmers", who only learn and use mainstream OOP languages and as such never actually use a mainly-FP language much. Maybe on HN the ratio is better than in the whole market though.
_proofs · a year ago
where does the condescension come from? the loftiness/lording over people who "just don't get it", that clearly comes across in your communication?

this is a really non-productive comment. you have an opportunity to teach and share knowledge but instead you hoard and condescend, and rant about your implied superiority.

if so many programmers don't understand -- what's more productive: this comment, or helping "most programmers" to get it, and understand?

ninetyninenine · a year ago
It’s communication to programmers who do get it.

I love to teach and explain but this is one of those things that can’t be conveyed. You have to do it yourself.

If you want an explanation though, use Google. But I don’t think explanations actually help you grok what’s really happening. You really have to come to catharsis yourself.

Learn Haskell. Learn it to the point where you completely understand the purpose of the IO monad and why it exists and helps a program organize better. Then you will understand.

When you do get it. You’ll be among a small few who have obtained something that’s almost like forbidden knowledge. No one will “get” you.

nuancebydefault · a year ago
I've never seen pure FP...
sdeframond · a year ago
I would recommend you take a look at Haskell or Elm.

John Carmack did, and talked about it: https://m.youtube.com/watch?v=1PhArSujR_A

nuancebydefault · a year ago
I might add Carmack's take on functional programming https://web.archive.org/web/20120501221535/http://gamasutra....
eyelidlessness · a year ago
Surely if you’ve seen any non-trivial amount of code, you have seen pure FP applied piecemeal even if not broadly. A single referentially transparent function is pure FP, even if it’s ultimately called by the most grotesque stateful madness.
jmorenoamor · a year ago
At least for me, it's hard to get that insight. I tried to read articles and watch videos, and none of them made me say: "oh! now I get it".
ninetyninenine · a year ago
You need to learn Haskell in a non trivial way. Code in it. And then you need to completely internalize why the IO monad exists and why it encourages the programmer to code in ways that stay away from using it.

I had a friend (who’s in general a good programmer) learn Haskell and then get so completely annoyed by the IO monad that he quit learning Haskell. So yeah, it’s not easy to “get it”. You only get it with reading and practice. Really, you just need to completely internalize and grasp the purpose of the IO monad in Haskell.

VyseofArcadia · a year ago
> That was a cold-sweat moment for me: after all of my harping about latency and responsiveness, I almost shipped a title with a completely unnecessary frame of latency.

In this era of 3-5 frame latency being the norm (at least on e.g. the Nintendo Switch), I really appreciate a game developer having anxiety over a single frame.

kllrnohj · a year ago
You're over-crediting Carmack and under-crediting current game devs. 3-5 frames might be current end-to-end latency, but that's not what Carmack is talking about. He's just talking about the game loop latency. Even at ~4 frames of end-to-end latency, he'd be talking about an easily avoided 20% regression. That's still huge.
pragma_x · a year ago
To be fair, back in 2014 that was one frame at 60Hz or slower for some titles. At 80-120Hz, 3-5 frames is comparatively similar time.
01HNNWZ0MV43FF · a year ago
I don't think high frame rates are common outside of PC gaming yet.

Wikipedia indicates the Switch maxes out at 1080p60, and the newest Zelda only at 900p30 even when docked

https://en.m.wikipedia.org/wiki/Nintendo_Switch

chandler5555 · a year ago
yeah, but when people talk about input lag for consoles it's generally still in the 60Hz sense; it's rare for games to be 120Hz

Smash Bros. Ultimate, for example, runs at 60fps and has 5-6 frames of input lag

munificent · a year ago
Why would you even bother running a game at 120Hz if the user's response to what's being drawn is effectively 24-30 FPS?
astrobe_ · a year ago
I've heard that a good reaction time is around 200 ms; some experiments seem to confirm this figure [1]. At 60Hz, a frame is displayed every 17 ms.

So it would take a 12-frame animation and a trained gamer for a couple of frames to make a difference (e.g. pushing the right button before the animation ends and the opponent's action takes effect).

[1] https://humanbenchmark.com/tests/reactiontime/statistics

doctorpangloss · a year ago
> In this era of 3-5 frame latency being the norm (at least on e.g. the Nintendo Switch)

Which titles is this true for? Have you or anyone else measured?

grougnax · a year ago
Almost every title. This is common knowledge.
gorgoiler · a year ago
> Inlining functions also has the benefit of not making it possible to call the function from other places.

I’ve really gone to town with this in Python.

  def parse_news_email(…):
    def parse_link(…):
      …

    def parse_subject(…):
      …

    …
If you are careful, you can rely on the outer function’s variables being available inside the inner functions as well. Something like a logger or a db connection can be passed in once and then used without having to pass it as an argument all the time:

  # sad
  def f1(x, db, logger): …
  def f2(x, db, logger): …
  def f3(x, db, logger): …
  def g(xs, db, logger):
    for x0 in xs:
      x1 = f1(x0, db, logger)
      x2 = f2(x1, db, logger)
      x3 = f3(x2, db, logger)
      yikes x3


  # happy
  def g(xs, db, logger):
    def f1(x): …
    def f2(x): …
    def f3(x): …
    for x in xs:
      yield f3(f2(f1(x)))
Carmack commented his inline functions as if they were actual functions. Making actual functions enforces this :)

Classes and “constants” can also quite happily live inside a function but those are a bit more jarring to see, and classes usually need to be visible so they can be referred to by the type annotations.

grumbel · a year ago
That's not an improvement, as it screws up the code flow. The point of inline blocks is that you can read the code the same way as it is executed. No surprises that code might be called twice, or that a function call could be missed or reordered. Adding real functions causes exactly the indirection that one wanted to avoid in the first place. If the block has no name, you know that it will only be executed right where it is written.
gorgoiler · a year ago
Yeah that’s a valid point. I tend to have in mind that as soon as I pull any of the inner functions out to the publicly visible module level I can say goodbye to ever trying to stop people reusing the code when I don’t really want them to.

For example, suppose your function has an implicit, undocumented contract, such as assuming the DB is only a few milliseconds away. Someone then reuses the code for logging to DBs over the internet, finds it slow, and speeds it up with caching. Now your DB-writing code has to suffer their cache-logic bugs when it didn't have to.

scbrg · a year ago
Not sure I believe the benefit of this approach outweighs the added difficulty wrt testing, but I certainly agree that Python needs a yikes keyword :-)
nuancebydefault · a year ago
What is the benefit of such a yikes? Or do you consider it a yikes language as a whole?

Personally I like that functions can be inside functions, as a trade-off between inlining and functional separation in C++.

The scope reduction makes it easier to track bugs, while it keeps the benefits of separation of concerns.

toenail · a year ago
> Inlining functions also has the benefit of not making it possible to call the function from other places.

Congrats, you've got an untestable unit.

ahoka · a year ago
Congratulations, you are writing tests for things that would not need tests if they weren't put behind an under-defined interface. Meanwhile sprint goals are not met and overall product quality is embarrassing, but you have 100% MC/DC coverage of your addNumbersOrThrowIfAbove(a, b, c).
jeltz · a year ago
Which is usually a positive. Testing tiny subunits usually just makes refactoring and adding new features hard while not improving test quality.
jayd16 · a year ago
Ideally, you've just moved the unit boundary to where it logically should be instead of many small implementation details that should not be exposed.
xboxnolifes · a year ago
The unit here is the email, not the email's link or subjects. Those are implementation details.
ninetyninenine · a year ago
This is a major insight. Defining a local function isn't a big deal; you can always just copy and paste it out to global scope.

Any time you merge state with a function, you can no longer move the function. This is the same problem as OOP. Closures can't be modular in the same way that methods on objects can't be modular.

The smallest testable unit is the combinator. John Carmack literally mentioned he does pure functional programming now, which basically everyone in this entire thread is completely ignoring.

gorgoiler · a year ago
Yup, and I should have called this out as a downside. Thank you for raising it.

On visibility, one of the patterns I’ve always liked in Java is using package-level visibility to limit functions to that code’s package and that package’s tests, where they are in the same package (but possibly defined elsewhere.)

(This doesn’t help though with the reduction in argument verbosity, of course.)

wraptile · a year ago
The latter pattern is very popular in the Python web-scraping and data-parsing niches, as the code is quite verbose and specific, and I'm very happy with this approach. It's easy to read and debug, and the maintenance is naturally organized.
eru · a year ago
Funny enough, the equivalent of your Python example is how Haskell 'fakes' all functions with more than one argument (at least by default).

Imperative blocks of code in Haskell (do-notation) also work like this.

orf · a year ago
That’s gonna be quite expensive, don’t do this in hot loops. You’re re-defining and re-creating the function object each time the outer function is called.
gorgoiler · a year ago
Good point. I measured it for 10^6 loops:

(1) 40ms for inline code;

(2) 150ms for an inner function with one expression;

(3) 200ms for a slightly more complex inner function; and

(4) 4000ms+ for an inner function and an inner class.

  def f1(n: int) -> int:
      return n * 2

  def f2(n: int) -> int:
      def g():
          return n * 2
  
      return g()
  
  def f3(n: int) -> int:
      def g():
          for _ in range(0):
              try:
                  pass
              except Exception as exc:
                  if isinstance(exc, ValueError):
                      pass
                  else:
                      while True:
                          pass
                  raise Exception()
          return n * 2
  
      return g()
  
  def f4(n: int) -> int:
      class X:
          def __init__(self, a, b, c):
              pass
  
          def _(self) -> float:
              return 1.23
  
      def g():
          for _ in range(0):
              try:
                  pass
              except Exception as exc:
                  if isinstance(exc, ValueError):
                      pass
                  else:
                      while True:
                          pass
                  raise Exception()
          return n * 2
  
      return g()

raverbashing · a year ago
It might be a benefit in some cases, but I do feel that f1/f2/f3 are the prime candidates for actual unit testing
bitwize · a year ago
It's possible to nest subprograms within subprograms in Ada. I take advantage of this ability to break a large operation into one or more smaller simpler "core" operations, and then in the main body of the procedure write some setup code followed by calls to the core operation(s).
zelphirkalt · a year ago
Where is the part where this is "careful"? This is just how scopes work. I don't see what is special about the inner functions using things in the scope of the outer functions.
int_19h · a year ago
Excessive use of external bindings in a closure can make it hard to reason about lifetimes in cases where that matters (e.g. when you find out that a huge object graph is alive solely because some callback somewhere is a lambda that closed over one of the objects in said graph).
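
The lifetime point is easy to demonstrate in Python (a hypothetical sketch; the size is chosen only to make the effect visible):

  class Big:
      def __init__(self) -> None:
          self.payload = bytearray(100_000_000)  # ~100 MB

  def make_callback():
      big = Big()
      # The lambda closes over `big`, so the entire object graph
      # stays alive for as long as the callback itself does.
      return lambda: len(big.payload)

  cb = make_callback()
  # The ~100 MB buffer is still reachable here, solely through cb.
  print(cb())  # 100000000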
InDubioProRubio · a year ago
So inlining is the `private` of functions without an object. Push it all onto the stack, add arguments, set the function pointer to the instruction start of the inlined code, challenge accepted, let's go to..
LoganDark · a year ago
Remember to `nonlocal xs, db, logger` inside those inner functions. I'm not sure if this is needed for variables that are only read, but I wouldn't ever leave it out.
pansa2 · a year ago
> I'm not sure if this is needed for variables that are only read

It’s not needed. In fact, you should leave it out for read-only variables. That’s standard practice - if you use `nonlocal` people reading the code will expect to see writes to the variables.
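
A minimal example of the distinction:

  def outer() -> str:
      count = 0
      label = "hits"

      def read_only() -> str:
          return label    # plain reads close over the name automatically

      def writer() -> None:
          nonlocal count  # needed only because we rebind count
          count += 1

      writer()
      return f"{read_only()}: {count}"

  print(outer())  # hits: 1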

a_t48 · a year ago
You can do this in C++, too, but the syntax is a little uglier.
kllrnohj · a year ago
Not that bad?

    #include <cstdio>

    int main() {
        int a = -1;
        [&] {
            a = 42;
            printf("I'm an uncallable inline block");
        }();

        printf(" ");

        [&] {
            printf("of code\n");
        }();

        [&] {
            printf("Passing state: %d\n", a);
        }();

        return 0;
    }

BenoitEssiambre · a year ago
Here are some information theoretic arguments why inlining code is often beneficial:

https://benoitessiambre.com/entropy.html

In short, it reduces the scope of logic.

The more logic you have broken out to wider scopes, the more things will try to reuse it before it is designed and hardened for broader use cases. When this logic later needs to be updated or refactored, more things will be tied to it and the effects will be more unpredictable and chaotic.

Prematurely breaking out code is not unlike using a lot of global variables instead of variables with tighter scopes. It's more difficult to track the effects of change.
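
In code, the difference is roughly this (a hypothetical Python sketch): the module-level helper invites dependents before it has been hardened, while the local one cannot leak.

  # Broad scope: any module can import this and depend on it long
  # before it has been designed for general use.
  def normalize_row(row: dict) -> dict:
      return {k.lower(): v for k, v in row.items()}

  def import_customers(rows: list) -> list:
      # Tight scope: the helper is reachable only from here, so a
      # later change affects exactly one call site.
      def normalize(row: dict) -> dict:
          return {k.lower(): v for k, v in row.items()}

      return [normalize(r) for r in rows]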

There's more to it. Read the link above for the spicy details.

norir · a year ago
This is why I think it's a mistake that many popular languages, including standard C and C++, do not support nested function definitions. This for me is the happy medium where code can be broken into clear chunks, but cannot be called outside of the intended scope. A good compiler can also detect if the nested function is only called once and inline it.
badmintonbaseba · a year ago
C++ has lambdas and local classes. Local classes have some annoying arbitrary limitations, but they are otherwise useful.
humanfromearth9 · a year ago
In Java, a local function reference (defined inside a method and never used outside of this method) is possible. Note that this function is not really tied to an object, which is why I don't call it a method, and I don't use the expression "method reference"; it is just tied to the function that contains it, which may be a method - or not.
kccqzy · a year ago
Code can always be called outside of that scope just by returning function pointers or closures. The point is not to restrict calling that code, but to restrict the ability to refer to that piece of code by name.

As mentioned by others, C++ has lambdas. Even if you don't use lambdas, people used to achieve the same effect by using plenty of private functions inside classes, even though the class might have zero variables and simply holds functions. In even older C code, people are used to making one separate .c file for each public function and then define plenty of static functions within each file.
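
The closure escape hatch is one line in Python: returning the inner function makes the "scoped" code callable from anywhere.

  def make_counter():
      n = 0

      def bump() -> int:
          nonlocal n
          n += 1
          return n

      return bump  # the local function now outlives its defining scope

  counter = make_counter()
  print(counter(), counter())  # 1 2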

zelphirkalt · a year ago
Of course all this needs to be weighed against maintainability and readability of the code. If the code base is not mainly about something very performance-critical, and this kind of thing doesn't prove to be a bottleneck, then changing things away from a more readable towards a performance-optimized implementation would require a very good justification. I doubt that this kind of optimization is justified in most cases. For that reason I find the wording "prematurely breaking out code" to be misleading. In most cases one should probably prioritize readability and maintainability, and if breaking code out helps those, then it cannot be premature. It could only be premature from a performance-limited perspective, which might not have much to do with the use case/purpose of the code.

It is nice, if a performance optimization manages to keep the same degree of readability and maintainability. Those concerns covered, sure we should go ahead and make the performance optimization.

BenoitEssiambre · a year ago
What I'm advocating here is only coincidentally a performance optimization. Readability and maintainability (and improved abstraction) are the primary concern and benefit of (sometimes) keeping things inline or more specifically of reducing entropy.
BenoitEssiambre · a year ago
Here is a followup post for those interested: https://benoitessiambre.com/integration.html
dang · a year ago
Related:

John Carmack on Inlined Code - https://news.ycombinator.com/item?id=39008678 - Jan 2024 (2 comments)

John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=33679163 - Nov 2022 (1 comment)

John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=25263488 - Dec 2020 (169 comments)

John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=18959636 - Jan 2019 (105 comments)

John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=14333115 - May 2017 (2 comments)

John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=12120752 - July 2016 (199 comments)

John Carmack on Inlined Code - https://news.ycombinator.com/item?id=8374345 - Sept 2014 (260 comments)

kragen · a year ago
There is a longer version of this thought-provoking post, also including Carmack's thoughts in 02012, at https://cbarrete.com/carmack.html. But maybe that version has not also had threads about it.
dang · a year ago
It doesn't seem to have, since https://news.ycombinator.com/from?site=cbarrete.com is empty.

Should we change the top link to that URL?

dehrmann · a year ago
Always read older stuff from Carmack remembering the context. He made a name for himself getting 3D games to run on slow hardware. The standard advice of write for clarity first, make sure algorithms have reasonable runtimes, and look at profiler data if it's slow is all you need 99% of the time.
marginalia_nu · a year ago
I find the inlined style can actually improve clarity.

A lot of code written toward the "uncle bob" style where you maximize the number of functions has fantastic local clarity, you can see exactly what the code you are looking at is doing; but atrocious global clarity, where it's nearly impossible to figure out what the system does on a larger scale.

Inlining can help with that, local clarity deteriorates a bit, but global clarity typically improves by reducing the number of indirections. The code does indeed also tend to get faster, as it's much easier to identify and remove redundant code when you have it all in front of you. ... but this also improves the clarity of the code!

You can of course go too far, in either direction, but my sense is that we're often leaning much too far toward short isolated functions now than is optimal.
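
A hypothetical before/after sketch of the trade-off: the factored version reads well line by line but sends the reader chasing definitions; the inlined version trades a little local tidiness for seeing the whole flow at once.

  # Maximally factored: each piece is clear in isolation, but a
  # reader must visit two more definitions to learn what handle()
  # actually does, and in what order.
  def validate(order: dict) -> None:
      if not order["items"]:
          raise ValueError("empty order")

  def apply_discount(order: dict) -> None:
      for item in order["items"]:
          if item["on_sale"]:
              item["price"] *= 0.9

  def handle(order: dict) -> None:
      validate(order)
      apply_discount(order)

  # Inlined: slightly noisier locally, but the full control flow
  # and the order of side effects are visible in one place.
  def handle_inlined(order: dict) -> None:
      if not order["items"]:
          raise ValueError("empty order")
      for item in order["items"]:
          if item["on_sale"]:
              item["price"] *= 0.9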

int_19h · a year ago
One thing that's nice about functions is that they force the associated block of code to be named, and for state that is specific to the function to be clearly separate from external state (closures aside). It would be good to be able to retain those advantages even in linear code that nevertheless has clear boundaries between different parts of it that would be nice to enforce or at least highlight, but without losing the readability of sequential execution.

To some extent you can have that in languages that let you create a named lambda with explicit captures and immediately invoke it, e.g. in C++:

   int g;

   void doThisAndThat(int a, int b, int c) {

      doThis: auto x = [&a, &b] {
        ...
      }();

      doThat: [&c, &x] { // g is global; it can't (and needn't) be captured
        ...
      }();
   }
The syntax makes it kind of an eyesore though. Would be nice to have something specifically designed for this purpose.

silvestrov · a year ago
> atrocious global clarity

much like microservices.

Cthulhu_ · a year ago
And before that, 2D games (side-scrolling platformers were not a thing on PC hardware until Carmack did it, iirc). I think his main thing is balancing clarity - what happens when and in what order - with maintainability.

Compare this with enterprise software, which is orders of magnitude more complex than video games in terms of business logic (the complexity in video games is in performance optimization), but whose developers tend to add many layers of abstraction and indirection, so the core business process is obfuscated, or there are a billion non-functional side activities also being applied (logging, analytics, etc.), again obfuscating the core functionality.

It's fun to go back to more elementary programming things, in e.g. Advent of Code challenges or indeed, game development.

nadam · a year ago
"compare this with enterprise software, which is orders of magnitude more complex than video games in terms of business logic" Maybe this was true 20 years ago, but I do not think this is true today. Game code of some games is almost as complex as enterprise software or even more complex in some cases (think of grand strategy games like Civilization or Paradox games). The difference is that it still needs to be performant, so the evolutionary force just kills programmers and companies creating unperformant abstractions. In my opinion game programming is just harder than enterprise programming if we speak about complex games. (I have done both). The only thing which is easier in game programming is that it is a bit easier to see clearly in terms of 'business requirements', and also it is more meritocratic (you can start a game company anywhere on the globe, no need to be at business centers.) And of course game programming is more fun, so programmers do the harder job even for less money.

For people who think game programming is less complex than enterprise software, I suggest the CharacterMovementComponent class in Unreal Engine, which is the logic of movement of characters (people) in a networked game environment... Multiple thousands of lines of code in just the header is not uncommon in Unreal. And this is mostly not complex because of optimization. This is very complex and messy logic. Of course we can argue that networking and physics could be done in a simple, naive way, which would be unacceptable in terms of latency and throughput, so all in all the complexity is because of optimization after all. But it is not the 'fun', elegant kind of optimization; it is close to messy enterprise software in some sense, in my opinion.

high_na_euv · a year ago
>Compare this with enterprise software, which is orders of magnitude more complex than video games in terms of business logic

I don't buy it for games like GTA, Cyberpunk, or Witcher 3

aidenn0 · a year ago
> And before that, 2D games (side-scrolling platformers were not a thing on PC hardware until Carmack did it, iirc). I think his main thing is balancing clarity - what happens when and in what order - with maintainability.

Smooth side-scrollers did exist on the PC before Keen (An early one would be the PC port of Defender). Moon Patrol even had jumping in the early '80s.

Furthermore other contemporaries of Carmack were making full-fledged side-scrolling platformers in ways different from how Keen did it (there were many platformers released in 1990). They all involved various limitations on level design (as did what Keen used), but I don't believe any of them allowed both X and Y scrolling like the Keen games did.

physicles · a year ago
I agree with this in general, but his essay on functional programming in C++ (linked at the top of the page) is phenomenal and is fantastic general advice when working in any non-functional language.
nicolaslegland · a year ago
The link at the top of the page is broken; I found an archived version at https://web.archive.org/web/20120501221535/http://gamasutra....
low_tech_love · a year ago
Interesting: this is a 2014 post from Jonathan Blow reproducing a 2014 comment by John Carmack reproducing a 2007 e-mail by the same Carmack reproducing a 2006 conversation (I assume also via e-mail) he had with a Henry Spencer reproducing something else the same Spencer read a while ago and was trying to remember (possibly inaccurately?).

I wonder what the actual original source is (from Saab, maybe?), and whether this indeed holds true.

EGreg · a year ago
Is this kind of like 300 was a movie about a Frank Miller novel about a Greek legend about the actual Battle of Thermopylae?