Readit News
quuxplusone commented on Do things that don't scale, and then don't scale   derwiki.medium.com/do-thi... · Posted by u/derwiki
derwiki · 12 days ago
Haha funny timing! I wrote this post last weekend and haven’t checked HN much this week
quuxplusone · 12 days ago
I agree it would be nice to put the source of the quote in your own post. It's so easy to Google before quoting.

On another note: did you just tell a bunch of strangers on the internet how to snail-mail arbitrary photos to your mom? Or does that email thing only activate if the email comes from your own address? (Guess you'll find out! :))

quuxplusone commented on Recto – A Truly 2D Language   masatohagiwara.net/recto.... · Posted by u/mhagiwara
quuxplusone · 13 days ago
Upvoted for the likelihood of producing interesting conversations on HN. But fundamentally this Recto language looks 1D, not 2D. As IsTom said below, it looks like "braces/parens with extra steps."

If there were any actual "in-game effect" of the rectangles — e.g. a "rotate rectangle" primitive that would change the order in which atoms were evaluated; or some meaning given to overlapping rectangles as in OgsyedIE's comment — then it would be much more interesting, because it would no longer be exactly isomorphic to Lisp/Scheme.
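For concreteness, here's a toy sketch (Python; entirely hypothetical, nothing like this exists in Recto) of what a "rotate rectangle" primitive could mean: treat a rectangle as a first-class grid of atoms, so that rotating it changes the evaluation order and hence the program.

  # Hypothetical semantics: a "rectangle" is a 2D grid of atoms,
  # evaluated row-major (left-to-right, top-to-bottom).
  def eval_rect(rect):
      return [atom for row in rect for atom in row]

  # Rotating 90 degrees clockwise makes the same atoms denote a
  # different evaluation order, i.e. a different program.
  def rotate(rect):
      return [list(row) for row in zip(*rect[::-1])]

  rect = [["a", "b"],
          ["c", "d"]]
  print(eval_rect(rect))          # ['a', 'b', 'c', 'd']
  print(eval_rect(rotate(rect)))  # ['c', 'a', 'd', 'b']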

quuxplusone commented on The Quiet Disappearance of Skeptics in the AI Gold Rush   middlelayer.substack.com/... · Posted by u/prmph
cmdrk · 17 days ago
I feel that the article draws a false equivalence between skepticism and doomsaying. If anything, thinking AI is as dangerous as a nuclear weapon signals a true believer.
quuxplusone · 16 days ago
TFA doesn't even draw an "equivalence" between those two positions; it merely misuses the word "skeptic" to mean "true believer in the Singularity."

TFA mourns the disappearance of true believers — those pundits saying LLMs would quickly achieve AGI and then go on to basically destroy the world. As that prediction became more obviously false, the pundits quietly stopped repeating it.

"Skeptics" is not, and never was, the label for those unbridled believers/evangelists; the label was "AI doomers." But an essay titled "Where have all the AI doomers gone?" wouldn't get clicks because the title question pretty much answers itself.

quuxplusone commented on Prediction markets create harmful outcomes: a case study   bobjacobs.substack.com/p/... · Posted by u/Zipzip
kevinwang · 22 days ago
quuxplusone · 19 days ago
Update: It did happen again. In fact, the dildo-throwings seem to have been a pre-planned tie-in for "Green Dildo Coin": https://www.nbcnews.com/news/amp/rcna223905

So the prediction market in this case maybe isn't so much causing the bad events, as just being a useful side channel by which the insiders/bad actors who've already planned the events can sweep up a little extra money from the prediction-market equivalent of "retail investors" — the ones who are inexplicably willing to put their money on the table with no information whatsoever and wait for someone else to take it.

quuxplusone commented on Ask HN: What trick of the trade took you too long to learn?    · Posted by u/unsupp0rted
OJFord · 23 days ago
That seems like a very different take on life than GP describes, that just happens to use the same analogy on the surface.
quuxplusone · 23 days ago
frontfor described "life as an investment account" and recommended a long-term strategy investing in index funds. zigman1 described "life as an investment account" and gave anecdotal evidence that a short-term "day-trading" strategy could be a bad idea.

They're both saying the same thing... although zigman1 didn't seem to realize it. (Perhaps through being unfamiliar with jargon like "index funds" and how index-fund investing differs from day trading.)

quuxplusone commented on 1981 BASIC adventure game comes to a new platform, the TRS-80 MC-10   arctic81.com/arctic-adven... · Posted by u/vontzy
anthk · 24 days ago
Adapt it to Inform6/ZMachine.
quuxplusone · 24 days ago
It's worth noting that you can target the Z-Machine (which gets you nice web platforms like Parchment) without using Inform as your source language. I informally maintain (I did not originally write) a C compiler for the Z-Machine, which I have used to port several games of a similar vintage: https://quuxplusone.github.io/Advent

It occurs to me that it would be nifty to make a BASIC compiler for the Z-Machine.

quuxplusone commented on Hardening mode for the compiler   discourse.llvm.org/t/rfc-... · Posted by u/vitaut
rwmj · a month ago
OK that is pretty interesting. For the TL;DR crowd, the exploit was:

  if(environmentǃ=ENV_PROD){
    // bypass authZ checks in DEV
    return true;
  }
where the 'ǃ' is a Unicode homoglyph (U+01C3 "LATIN LETTER RETROFLEX CLICK") which obviously completely changes the nature of the code.

I'll note that GCC gives a clear warning here ("suggest parentheses around assignment used as truth value"), so as always, turn on -Werror and take warnings seriously!

quuxplusone · a month ago
The shown code is JavaScript; it wouldn't compile as C, because "environmentǃ" (with the homoglyph) was never declared, and C requires declare-before-use. Does the advice to use GCC -Werror still apply to JavaScript? (I'd guess not, but maybe I'm missing something.)
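FWIW, the homoglyph itself can be caught without any compiler at all. Here's a minimal language-agnostic sketch (my own, in Python; not part of GCC or any JS toolchain) that flags every non-ASCII character in a source file along with its Unicode name:

  # Print the position, code point, and Unicode name of every
  # non-ASCII character, so U+01C3 stands out from an ASCII '!'.
  import sys
  import unicodedata

  with open(sys.argv[1], encoding="utf-8") as f:
      for lineno, line in enumerate(f, 1):
          for col, ch in enumerate(line, 1):
              if ord(ch) > 127:
                  print(f"{lineno}:{col}: U+{ord(ch):04X}",
                        unicodedata.name(ch, "<unnamed>"))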
quuxplusone commented on The Preserving Machine by Philip K. Dick (1953)   archive.org/details/Fanta... · Posted by u/akkartik
akkartik · a month ago
That was what I submitted. I don't know why it got edited.
quuxplusone · a month ago
I suspect you can edit it back right now, just like you can edit the title back if HN changes it. The automatic stuff runs only once on initial submit (AFAIK).
quuxplusone commented on Do variable names matter for AI code completion? (2025)   yakubov.org/blogs/2025-07... · Posted by u/yakubov_org
ijk · a month ago
True - though in the actual case of your examples, calcpay, process_user_input, and ProcessUserInput all encode into exactly 3 tokens with GPT-4.

Which is the exact kind of information that you want to know.

It is very non-obvious which one will use more tokens; the Gemma tokenizer has the highest variance, with process|_|user|_|input = 5 tokens and Process|UserInput = 2 tokens.

In practice, I'd expect the performance difference to be relatively minimal, as input tokens tend to get aggregated quickly into more general concepts. But that's the kind of question that's worth getting metrics on: my intuition suggests one answer, but do the numbers hold up when you actually measure?
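For anyone who wants to reproduce those counts, here's a minimal sketch assuming OpenAI's tiktoken library (which exposes GPT-4's cl100k_base tokenizer; ijk doesn't say what tool they used, and exact splits can vary across tokenizer versions):

  # Count and display the tokens for each identifier under
  # GPT-4's tokenizer.
  import tiktoken

  enc = tiktoken.encoding_for_model("gpt-4")
  for name in ["calcpay", "process_user_input", "ProcessUserInput"]:
      tokens = enc.encode(name)
      print(name, len(tokens), [enc.decode([t]) for t in tokens])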

quuxplusone · a month ago
Awesome! You should have written this blog post instead of that guy. :)
quuxplusone commented on Do variable names matter for AI code completion? (2025)   yakubov.org/blogs/2025-07... · Posted by u/yakubov_org
quuxplusone · a month ago
"500 code samples generated by Magistral-24B" — So you didn't use real code?

The paper is totally mum on how "descriptive" names (e.g. process_user_input) differ from "snake_case" names (e.g. process_user_input): the same identifier falls into both buckets.

The actual question here is not about the model but merely about the tokenizer: is it the case that e.g. process_user_input encodes into 5 tokens, ProcessUserInput into 3, and calcpay into 1? If you don't break down the problem into simple objective questions like this, you'll never produce anything worth reading.
