Readit News
datastoat commented on Coding with LLMs in the summer of 2025 – an update   antirez.com/news/154... · Posted by u/antirez
cratermoon · a month ago
They should also share their prompts and discuss exactly how much effort went into checking the output and re-prompting to get the desired result. The post hints at how much work it takes for the human, "If you are able to describe problems in a clear way and, if you are able to accept the back and forth needed in order to work with LLMs ... you need to provide extensive information to the LLM: papers, big parts of the target code base ... And a brain dump of all your understanding of what should be done. Such braindump must contain especially the following:" and more.

After all the effort getting to the point where the generated code is acceptable, one has to wonder: why not just write it yourself? The time spent typing is trivial compared to all the cognitive effort involved in describing the problem, and describing the problem in a rigorous way is the essence of programming.

datastoat · a month ago
> They should also share their prompts

Here's a recent Show HN post (a map view for OneDrive photos) that documents all the LLM prompting that went into it:

https://news.ycombinator.com/item?id=44584335

datastoat commented on P-Hacking in Startups   briefer.cloud/blog/posts/... · Posted by u/thaisstein
wavemode · 2 months ago
Can you elaborate on the difference between your statement and the author's?
datastoat · 2 months ago
Author: "5% chance of shipping something that only looked good by chance". One philosophy of statistics says that the product either is better or isn't better, and that it's meaningless to attach a probability to facts, which the author seems to be doing with the phrase "5% chance of shipping something".

Parent: "5% chance of looking as good as it did, if it were truly no better than the alternative." This accepts the premise that the product quality is a fact, and only uses probability to describe the (noisy / probabilistic) measurements, i.e. "5% chance of looking as good".

Parent is right to pick up on this, if we're talking about a single product (or, in medicine, if we're talking about a single study evaluating a new treatment). But if we're talking about a workflow for evaluating many products, and we're prepared to consider a probability model that says some products are better than the alternative and others aren't, then the author's version is reasonable.
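
To make the distinction concrete, here's a quick simulation sketch (the numbers are invented: assume 10% of tested products are truly better, 2,000 users per arm, and a +3% lift when they are). The first number printed is P(ship | no real improvement), which the test pins at 5%; the second is P(no real improvement | shipped), which is what "chance of shipping a dud" actually sounds like, and it can be much larger:

    # Illustrative only: contrasts "5% chance of looking this good by chance"
    # with "chance that a shipped product is actually no better".
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n_products, n_users = 10_000, 2_000
    truly_better = rng.random(n_products) < 0.10   # assume 10% of ideas really work
    lift = np.where(truly_better, 0.03, 0.0)       # assumed +3% conversion when they do
    base = 0.20

    control = rng.binomial(n_users, base, n_products) / n_users
    treat = rng.binomial(n_users, base + lift, n_products) / n_users

    # One-sided z-test for "treatment beats control"; ship if p < 0.05
    se = np.sqrt(2 * base * (1 - base) / n_users)
    p = 1 - norm.cdf((treat - control) / se)
    ship = p < 0.05

    print("P(ship | no real improvement):", round(ship[~truly_better].mean(), 3))
    print("P(no real improvement | shipped):", round((~truly_better[ship]).mean(), 3))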

datastoat commented on Bayesian Neural Networks   cs.toronto.edu/~duvenaud/... · Posted by u/reqo
panda-giddiness · 9 months ago
You can, in fact, do that. It's called (aptly enough) the empirical Bayes method. [1]

[1] https://en.wikipedia.org/wiki/Empirical_Bayes_method

datastoat · 9 months ago
Empirical Bayes is exactly what I was getting at. It's a pragmatic modelling choice, but it loses the theoretical guarantees about uncertainty quantification that pure Bayesianism gives us.

(Though if you have a reference for why empirical Bayes does give theoretical guarantees, I'll be happy to change my mind!)
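
To make "pragmatic modelling choice" concrete, here is a minimal sketch of empirical Bayes in the textbook Normal-Normal setting (all numbers invented): the prior variance is estimated from the data via the marginal distribution, which is exactly why the purist worry about data-dependent priors applies, even though the resulting shrinkage estimates usually do well.

    # Minimal empirical-Bayes sketch: y_i ~ N(theta_i, sigma^2), theta_i ~ N(0, tau^2),
    # with tau^2 estimated from the data instead of chosen subjectively.
    import numpy as np

    rng = np.random.default_rng(1)
    sigma = 1.0                                  # assumed known observation noise
    theta = rng.normal(0.0, 2.0, size=500)       # true effects (unknown in practice)
    y = rng.normal(theta, sigma)                 # one noisy observation per effect

    # Marginally y_i ~ N(0, tau^2 + sigma^2), so a moment-matching estimate is:
    tau2_hat = max(y.var() - sigma**2, 0.0)

    # Plug the estimated prior back in: posterior means shrink y toward 0
    shrink = tau2_hat / (tau2_hat + sigma**2)
    theta_post = shrink * y

    print("estimated prior variance:", round(tau2_hat, 2))
    print("MSE of raw estimates:", round(np.mean((y - theta) ** 2), 3))
    print("MSE of EB estimates: ", round(np.mean((theta_post - theta) ** 2), 3))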

datastoat commented on Bayesian Neural Networks   cs.toronto.edu/~duvenaud/... · Posted by u/reqo
fjkdlsjflkds · 9 months ago
> For one, Bayesian inference and UQ fundamentally depends on the choice of the prior, but this is rarely discussed in the Bayesian NN literature and practice, and is further compounded by how fundamentally hard to interpret and choose these priors are (what is the intuition behind a NN's parameters?).

I agree that, computationally, it is hard to justify the use of Bayesian methods on large-scale neural networks when stochastic gradient descent (and friends) is so damn efficient and effective.

On the other hand, the fact that there's a dependence on (subjective) priors is hardly a fair critique: non-Bayesian training of neural networks also depends on the use of (subjective) loss functions with (subjective) regularization terms (in fact, it can be shown that, mathematically, the use of priors is precisely equivalent to adding regularization to a loss function). Non-Bayesian training of neural networks is not "a failed approach" just because someone can arbitrarily choose L1 regularization (i.e., a Laplacian prior) over L2 regularization (i.e., a Gaussian prior).

Furthermore, we do have some intuition over NN parameters (particularly when inputs and outputs are properly scaled): a value of 10^15 should be less likely than a value of 0. Note that, in Bayesian practice, people often use weakly-informative priors (see, e.g., http://www.stat.columbia.edu/~gelman/presentations/weakprior...) to encode such intuitive statements while ensuring that (for all practical purposes) the data will effectively overwhelm the prior (again, this is equivalent to adding a minimal amount of regularization to a loss function, to make a problem well-posed when e.g. you have more parameters than data points).
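
The equivalence is easy to check numerically (a throwaway sketch, not specific to neural networks): the negative log-density of a Gaussian prior differs from an L2 penalty only by a constant, and likewise Laplace vs. L1.

    # Quick numeric check of the prior <-> regularizer correspondence.
    import numpy as np
    from scipy.stats import norm, laplace

    w = np.linspace(-3, 3, 7)   # a handful of weight values
    s = 0.5                     # assumed prior scale

    # Gaussian prior: -log p(w) = w^2 / (2 s^2) + const  (an L2 penalty)
    neg_log_gauss = -norm.logpdf(w, scale=s)
    l2_penalty = w**2 / (2 * s**2)
    print(np.allclose(neg_log_gauss - l2_penalty, neg_log_gauss[0] - l2_penalty[0]))

    # Laplace prior: -log p(w) = |w| / s + const  (an L1 penalty)
    neg_log_laplace = -laplace.logpdf(w, scale=s)
    l1_penalty = np.abs(w) / s
    print(np.allclose(neg_log_laplace - l1_penalty, neg_log_laplace[0] - l1_penalty[0]))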

datastoat · 9 months ago
Non-Bayesian NN training does indeed use regularizers that are chosen subjectively, but they are then tested in validation, and the best-performing regularizer is chosen. Thus the choice is empirical, not subjective.

A Bayesian could try the same thing: try out several priors, and pick the one that performs best in validation. But if you pick your prior based on the data, then the classic theory about “principled quantification of uncertainty” doesn’t apply any more. So you’re left using a computationally unwieldy procedure that doesn’t offer theoretical guarantees.
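
For what it's worth, the "empirical, not subjective" procedure is tiny to write down; here's a sketch with ridge regression standing in for the network (so the L2 strength is literally the inverse scale of a Gaussian prior), selected on a held-out split:

    # Pick the L2 strength (equivalently, the Gaussian prior scale) by validation.
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 20))
    w_true = rng.normal(size=20)
    y = X @ w_true + rng.normal(scale=2.0, size=200)
    Xtr, Xva, ytr, yva = X[:150], X[150:], y[:150], y[150:]

    def ridge_fit(X, y, lam):
        # Closed-form MAP / ridge solution: (X^T X + lam I)^-1 X^T y
        return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

    lambdas = [0.01, 0.1, 1.0, 10.0, 100.0]
    val_mse = [np.mean((Xva @ ridge_fit(Xtr, ytr, lam) - yva) ** 2) for lam in lambdas]
    print("validation MSE per lambda:", [round(m, 2) for m in val_mse])
    print("chosen lambda (i.e. chosen prior scale):", lambdas[int(np.argmin(val_mse))])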

datastoat commented on Bayesian Neural Networks   cs.toronto.edu/~duvenaud/... · Posted by u/reqo
datastoat · 9 months ago
I like Bayesian inference for few-parameter models where I have solid grounds for choosing my priors. For neural networks, I like to ask people "what's your prior for ReLU versus LeakyReLU versus sigmoid?" and I've never gotten a convincing answer.
datastoat commented on The Source of Europe's Mild Climate   americanscientist.org/art... · Posted by u/PaulHoule
tetris11 · 2 years ago
It's an interesting article but I have to say that I've come out of it without any clear answer other than "the gulf stream would give a few degrees difference at max"
datastoat · 2 years ago
The article explained that there are two roughly equal drivers: (1) Water is a better heat reservoir than land, and winds tend to blow eastwards, so Europe gets air warmed by the sea while the US east coast gets colder air that has come off the land. (2) The combined effect of the altitude of the Rockies and the rotation of the earth means that air currents flow southeast over the Rockies and then northeast, so arctic air gets pulled down and then pushed back up over the US east coast.
datastoat commented on CLI user experience case study   tweag.io/blog/2023-10-05-... · Posted by u/Xophmeister
atoav · 2 years ago
As someone who teaches these things occasionally to non-tech people, I think the hard things about CLI tools are the following:

1. The terminal, when you first open it, is intimidating to most people and doesn't offer any help to figure out how to use it. If people think something is hard, they will have a harder time understanding it as well (the same goes for thinking something will be boring; there have been studies on that).

2. Text editing in the terminal works differently than literally anywhere else. You can't simply click and select text, shortcuts are different, etc.

3. You have to grasp that the text has to be exact: you have to know the syntax rules for where to put spaces, how to work with quotes, how to escape strings, and all that. That is just not easy. For those of us who already speak any given shell, this may be something we forget. But it is like learning a language and a way to think: once you are fluent, it is easy.

But it is worth it in my opinion, especially if you want to work with niche, specialized tools or if you want reliable tools that will just work for decades.

datastoat · 2 years ago
It'd be fun (and a bit scary) to use an LLM as a shell replacement. We'd give it the history of our commands as per the recent post [0], as well as their outputs, and it would turn natural-language commands into proper bash. The xkcd comic [1] would be solved instantly. "Tar these files, please." "Delete all the temporary files but please please please don't delete anything else." I'm sure people have implemented this, but my searching isn't good enough to find it.

[0] https://news.ycombinator.com/item?id=38965003

[1] https://xkcd.com/1168/
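
Roughly what I have in mind, as a sketch: ask_llm below is just a stand-in for whatever model API you'd actually call, and the one safety valve is showing the proposed command before running it.

    # Sketch of an LLM-backed "natural language shell". ask_llm is a placeholder.
    import subprocess

    def ask_llm(request: str, history: list[str]) -> str:
        # Placeholder: send the request plus recent shell history to a model
        # and get back a single bash command. Not implemented here.
        raise NotImplementedError

    def nl_shell() -> None:
        history: list[str] = []
        while True:
            request = input("what do you want to do? ")
            if request in ("exit", "quit"):
                break
            command = ask_llm(request, history[-20:])   # give it recent context
            print(f"proposed: {command}")
            if input("run it? [y/N] ").lower() == "y":
                subprocess.run(command, shell=True)
                history.append(command)

    if __name__ == "__main__":
        nl_shell()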

datastoat commented on WebP is so great except it's not (2021)   eng.aurelienpierre.com/20... · Posted by u/enz
colejohnson66 · 2 years ago
It's how English was written in the "olden times". At that time, little flairs (such as ligatures) were pretty common, and were very fanciful. Some simpler ligatures (like ff) survive today, but embellished ones (like ct) were toned down. It's just a stylistic choice to draw them one way or another, but it's jarring to see the fancier ones in "modern" texts because we're used to the simpler styles.

Fun fact: The German "eszett" (ß; U+00DF) is a ligature for "ss" (specifically the "long s"[0] and a normal "s") that evolved over time to be one "letter".[1]

[0]: https://en.wikipedia.org/wiki/Long_s

[1]: https://en.wikipedia.org/wiki/File:Sz_modern.svg

datastoat · 2 years ago
According to the Wikipedia page for eszett [0] it evolved from "sz", as the name "eszett" suggests. (I only realized the link with "z" when I saw "tz" ligatures on street signs in Berlin.) Given that its typographic origin is sz, and given that its name literally says sz, I wish the spelling reformists had gone for sz rather than ss!

[0] https://en.wikipedia.org/wiki/%C3%9F

datastoat commented on The Eval Game   oskaerik.github.io/theeva... · Posted by u/sfoley
billpg · 2 years ago
Did they get an entire Python interpreter into JS?

Yes, I know all Turing machines are equivalent so it shouldn't be surprising it was possible but still, cool.

datastoat · 2 years ago
They use Pyodide, a full Python interpreter in WASM: https://pyodide.org/en/stable/console.html

Pyodide includes many useful Python libraries, such as numpy, pandas, and matplotlib.
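
If you want to poke at it, something like the snippet below can be pasted into that console (it's ordinary Python; if I remember right, the console fetches the bundled packages the first time you import them):

    # Nothing Pyodide-specific here, just a smoke test of the bundled stack.
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.DataFrame({"x": np.linspace(0, 2 * np.pi, 100)})
    df["y"] = np.sin(df["x"])
    print(df.describe())

    plt.plot(df["x"], df["y"])
    plt.show()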

datastoat commented on Semantic Zoom   alexanderobenauer.com/lab... · Posted by u/bpierre
skrebbel · 2 years ago
I have a touchscreen laptop. I love it, but UIs like these would make me love it even more. Having information detail literally at your fingertips sounds amazing.

This seems like such a sensible idea to me, it makes me wonder why it isn’t commonplace yet. I hope it will be!

datastoat · 2 years ago
Windows 8 (Metro) used semantic zoom. It's been a while, but I do remember that one of the apps that used it very nicely was Photos. A search for "windows metro semantic zoom" comes up with lots of articles about semantic-zoom-aware GridView controls etc.

Why isn't it commonplace? I think that touchscreen laptops are still too much a minority, and keyboard + mouse + monitor are too entrenched, for anyone to seriously attempt it again for a while. (A shame -- I'm one of the few who really liked the Windows 8 Metro interface.) I think that phones are too small for it to really work well. I don't know why it's not more popular on tablets.

u/datastoat

Karma: 308 · Cake day: April 4, 2019