Readit News
sswatson commented on Data Processing Benchmark Featuring Rust, Go, Swift, Zig, Julia etc.   github.com/zupat/related_... · Posted by u/behnamoh
stefs · a month ago
In my opinion, this assertion suffers somewhat from the "sufficiently smart compiler" fallacy.

https://wiki.c2.com/?SufficientlySmartCompiler

sswatson · a month ago
The linked article makes a specific carveout for Java, on the grounds that its SufficientlySmartCompiler is real, not hypothetical.
sswatson commented on Let's be honest, Generative AI isn't going all that well   garymarcus.substack.com/p... · Posted by u/7777777phil
merlincorey · 2 months ago
"Code is a liability. What the code does for you is an asset," as quoted from https://wiki.c2.com/?SoftwareAsLiability (last edited December 17, 2013).

This discussion and distinction used to be well known, but I'm happy to help some people become "one of today's lucky 10,000" (https://xkcd.com/1053/), because that is indeed much more interesting than the alternative approach.

sswatson · 2 months ago
It’s well known and also wrong.

Delta’s airplanes also require a great deal of maintenance, and I’m sure they strive to have no more than are necessary for their objectives. But if you talk to one of Delta’s accountants, they will be happy to disabuse you of the notion that the planes are entered in the books as a liability.

sswatson commented on AI is forcing us to write good code   bits.logic.inc/p/ai-is-fo... · Posted by u/sgk284
llmslave2 · 2 months ago
They can't think though. They can't be creative.
sswatson · 2 months ago
Neither of those assertions means anything. For many years, people have been using them to make confident predictions about what AI systems will never be able to accomplish. Those predictions are routinely falsified within months.

Of course, some of those predictions may also turn out to be true. But either way, we have abundant empirical evidence that the reasoning is not sound.

sswatson commented on The "confident idiot" problem: Why AI needs hard rules, not vibe checks   steerlabs.substack.com/p/... · Posted by u/steer_dev
wintermutestwin · 3 months ago
Can someone please explain why these token-guessing models aren't being combined with logic "filters"?

I remember when computers were lauded for being precise tools.

sswatson · 3 months ago
1. Because no one knows how to do it. 2. Consider (a) a tool that can apply precise methods when they exist, and (b) a tool that can do that and can also imperfectly solve problems that lack precise solutions. Which is more powerful?
sswatson commented on Is Matrix Multiplication Ugly?   mathenchant.wordpress.com... · Posted by u/jamespropp
fracus · 4 months ago
I think it is just a matter of perspective. You can both be right. I don't think there is an objective answer to this question.
sswatson · 4 months ago
The author has exclusive claim to their own aesthetic sensibilities, of course, but the language in the piece suggests some degree of universality. Whereas in fact, effectively no one who is knowledgeable about math would share the view that noncommutative operations are ugly by virtue of being noncommutative. It’s a completely foreign idea, like a poet saying that the only beautiful poems are the palindromic ones.
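For readers unfamiliar with the point at issue: matrix multiplication is noncommutative, meaning AB and BA generally differ. A minimal illustration in pure Python (a hand-rolled 2x2 multiply, no dependencies):

```python
# Matrix multiplication is noncommutative: A*B != B*A in general.
# Demonstrated with 2x2 matrices represented as nested lists.

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [
        [sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)
    ]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]  # multiplying by B on the right swaps columns; on the left, rows

AB = matmul(A, B)
BA = matmul(B, A)
print(AB)  # [[2, 1], [4, 3]]
print(BA)  # [[3, 4], [1, 2]]
assert AB != BA
```

The same asymmetry shows up throughout math (function composition, symmetry groups), which is why practitioners don't regard noncommutativity as a defect.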
sswatson commented on I’m worried that they put co-pilot in Excel   simonwillison.net/2025/No... · Posted by u/isaacfrond
hunterpayne · 4 months ago
It's great for demos; it's lousy for production code. The differing cost of errors in these two use cases explains (almost) everything about the suitability of AI for various coding tasks. If you are the only one who will ever run it, it's a demo. If you expect others to use it, it's not.
sswatson · 4 months ago
As the name indicates, a demo is used for demonstration purposes. A personal tool is not a demo. I've seen a handful of folks assert this definition, and it seems like a very strange idea to me. But whatever.

Implicit in your claim about the cost of errors is the idea that LLMs introduce errors at a higher rate than human developers. This depends on how you're using the LLMs and on how good the developers are. But I would agree that in most cases, a human saying "this is done" carries a lot more weight than an LLM saying it.

Regardless, it is not good analysis to try to do something with an LLM, fail, and conclude that LLMs are stupid. The reality is that LLMs can be impressively and usefully effective at certain tasks in certain contexts; they can also be very ineffective in other contexts, and they are especially unreliable at judging whether they've done something correctly.

sswatson commented on I’m worried that they put co-pilot in Excel   simonwillison.net/2025/No... · Posted by u/isaacfrond
ryandrake · 4 months ago
I've been trying to open my mind and "give AI a chance" lately. I spent all day yesterday struggling with Claude Code's utter incompetence. It behaves worse than any junior engineer I've ever worked with:

- It says it's done when its code does not even work, sometimes when it does not even compile.

- When asked to fix a bug, it confidently declares victory without actually having fixed the bug.

- It gets into this mode where, when it doesn't know what to do, it just tries random things over and over, each time confidently telling me "Perfect! I found the error!" and then waiting for the inevitable response from me: "No, you didn't. Revert that change".

- Only when you give it explicit, detailed commands, "modify fade_output to be -90," will it actually produce decent results, but by the time I get to that level of detail, I might as well be writing the code myself.

To top it off, unlike the junior engineer, Claude never learns from its mistakes. It makes the same ones over and over and over, even if you include "don't make XYZ mistake" in the prompt. If I were an eng manager, Claude would be on a PIP.

sswatson · 4 months ago
Recently I've used Claude Code to build a couple of TUIs that I've wanted for a long time but couldn't justify the time investment to write myself.

My experience is that I think of a new feature I want, I take a minute or so to explain it to Claude, press enter, and go off and do something else. When I come back in a few minutes, the desired feature has been implemented correctly with reasonable design choices. I'm not saying this happens most of the time, I'm saying it happens every time. Claude makes mistakes but corrects them before coming to rest. (Often my taste will differ from Claude's slightly, so I'll ask for some tweaks, but that's it.)

The takeaway I'm suggesting is that not everyone has the same experience when it comes to getting useful results from Claude. Presumably it depends on what you're asking for, how you ask, the size of the codebase, how the context is structured, etc.

sswatson commented on John Carmack on mutable variables   twitter.com/id_aa_carmack... · Posted by u/azhenley
TZubiri · 4 months ago
>developer ergonomics are much more important than freeing memory a little earlier

Preach to the Python choir, bro, but it should be telling when a Python bro considers it too ergonomic and wasteful.

At some point being clean and efficient about the code is actually ergonomic, no one wants to write sloppy code that overallocates, doesn't free, and does useless work. To quote Steve Jobs, even if no one sees the inside part of a cabinet, the carpenter would know, and that's enough.

tl;dr: Craftsmanship is as important as ergonomics.

sswatson · 4 months ago
In this case, overuse of re-assigning is the sloppy thing to do, and immutability by default is the craftsman's move. Reducing your program's memory footprint by re-assigning variables all the time is a false economy.
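The contrast can be sketched in a few lines. This is a made-up illustration (the function and variable names are hypothetical, not from the thread): the first version reuses one name for several different values, so a reader must track what `data` means at each line; the second binds each name exactly once.

```python
# Reassignment style: one name, many meanings over time.
def summarize_reassign(data):
    data = [x for x in data if x is not None]  # now "cleaned"
    data = sorted(data)                        # now "sorted"
    data = data[:10]                           # now "top ten"
    return sum(data) / len(data)

# Single-assignment style: each name is bound once and means one thing.
def summarize_immutable(raw):
    cleaned = [x for x in raw if x is not None]
    ordered = sorted(cleaned)
    top_ten = ordered[:10]
    return sum(top_ten) / len(top_ten)

print(summarize_immutable([3, None, 1, 2]))  # 2.0
```

Both versions allocate the same intermediate lists; the reassignment version only lets the earlier ones be collected marginally sooner, which is the "false economy" in question.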
sswatson commented on John Carmack on mutable variables   twitter.com/id_aa_carmack... · Posted by u/azhenley
TZubiri · 4 months ago
I think rebinding an old variable name is a common and sensible way to free a resource in Python. If no names remain bound to a resource, it will be garbage collected. This is different from languages like C++ with manual memory management.

John Carmack is apparently a C++ programmer who still has a lot to learn in Python.

sswatson · 4 months ago
In the vast majority of cases, developer ergonomics are much more important than freeing memory a little earlier. In other scenarios, e.g., when dealing with large data frames, the memory management argument carries more weight. Though even then there are usually better patterns, like method chaining.

FYI John Carmack is a true legend in the field. Despite his not being a lifelong Python guy, I can assure you he is speaking from a thorough knowledge of the arguments for and against.

sswatson commented on Magit Is Amazing   heiwiper.com/posts/magit-... · Posted by u/Bogdanp
mystifyingpoi · 5 months ago
When I was early in my career, I always thought that Git was hard and nerdy, and that this was a good thing. I kinda liked it in a way; there is some gratification in knowing all the commands and helping clueless coworkers, or knowing how to do a rebase -i and shuffle commits around and show off, etc.

These days, I find myself just using the smallest subset of commands possible to do my job, and it is enough. Just add, commit, push/pull and occasional stash or merge is like 99.9% of my daily usage. I don't even remember how to revert (was it checkout -- <filename> or reset <filename> or restore <filename>?) and I'm just fine with it.

I think that git is easy. Just learn the happy path, and maybe a way or two of restoring to a known good state without deleting the whole repo; that's enough.

sswatson · 5 months ago
This is a textbook example of damning with faint praise. If your VCS's interface is so bad that it motivates you to scale back your use of any nontrivial version-control features and instead just content yourself with rudimentary file syncing, that's a case against the interface. Either the additional features are useful and you're missing out on that benefit, or they're extraneous and are saddling the tool with unnecessary baggage.

u/sswatson · Karma: 116 · Cake day: April 3, 2020