Readit News
ludicity commented on Why is Claude an Electron app?   dbreunig.com/2026/02/21/w... · Posted by u/dbreunig
Retr0id · 21 days ago
Free as in puppy

Edit: The title of the post originally started with "If code is free,"

ludicity · 21 days ago
This was funny enough that I checked out your blog and it absolutely rules.
ludicity commented on I swear the UFO is coming any minute   experimental-history.com/... · Posted by u/Ariarule
nicbou · 25 days ago
Experimental History is such a consistently pleasant read. It's one of the few publications I read religiously.
ludicity · 25 days ago
Experimental History makes me look at my own writing and go "This is all so mid".
ludicity commented on We mourn our craft   nolanlawson.com/2026/02/0... · Posted by u/ColinWright
mirawelner · a month ago
These posts make me feel like I’m the worst llm prompter in existence.

I’m using a mix of Gemini, Grok, and GPT to translate some MATLAB into C++. It is kinda okay at its job but not great? I am rapidly reading Accelerated C++ to get to the point where I can throw the LLM out the window. If it was Python or Julia I wouldn’t be using an LLM at all bc I know those languages. AI is barely better than me at C++ because I’m halfway through my first ever book on it. What LLMs are these people using?

The code I’m translating isn’t even that complex - it runs analysis on ecg/ppg data to implement this one dude’s new diagnosis algorithm. The hard part was coming up with the algorithm, the code is simple. And the shit the LLM pours out works kinda okay but not really? I have to do hours of fix work on its output. I’m doing all the hard design work myself.

I fucking WISH I could only work on biotech and research and send the code to an LLM. But I can’t because they suck so I gotta learn how computer memory works so my C++ doesn’t eat up all my pc’s memory. What magical LLMs are yall using??? Please send them my way! I want a free llm therapist and a programmer! What world do you live in?? Let me in!

ludicity · a month ago
I'm firing you for being unable to adequately commune with the machine spirit.

(But for real, a good test suite seems like a great place to start before letting an LLM run wild... or alternatively just do what you're doing. We definitely respect textbook-readers more than prompters!)

ludicity commented on Gas Town's agent patterns, design bottlenecks, and vibecoding at scale   maggieappleton.com/gastow... · Posted by u/pavel_lishin
mediaman · 2 months ago
I don't get the widespread hatred of Gas Town. If you read Steve's writeup, it's clear that this is a big fun experiment.

It pushes and crosses boundaries, it is a mixture of technology and art, it is provocative. It takes stochastic neural nets and mashes them together in bizarre ways to see if anything coherent comes out the other end.

And the reaction is a bunch of Very Serious Engineers who cross their arms and harrumph at it for being Unprofessional and Not Serious and Not Ready For Production.

I often feel like our industry has lost its sense of whimsy and experimentation from the early days, when people tried weird things to see what would work and what wouldn't.

Maybe it's because we also have suits telling us we have to use neural nets everywhere for everything Or Else, and there's no sense of fun in that.

Maybe it's the natural consequence of large-scale professionalization, and stock option plans and RSUs and levels and sprints and PMs, that today's gray hoodie is just the updated gray suit of the past but with no less dryness of imagination.

ludicity · 2 months ago
I thought it was harmless(ish) fun, but David Gerard put out a post stating that Yegge used Gas Town to push out a crypto project that rug-pulled his supporters, while he personally walked away with somewhere between $50K and $100K, from memory.

I suppose that has little to do with the technical merits of the work, but it's such a bad look, and it makes everyone boosting this stuff seem exactly as dysregulated/unwise as they've appeared to many engineers for a while.

I met Sean Goedecke for lunch a few weeks ago; he uses LLMs a bunch and is clearly a serious adult, but half the folks being shoved in front of everyone are behaving totally manically, and people are cheering them on. Absolutely blows my mind to watch.

https://pivot-to-ai.com/2026/01/22/steve-yegges-gas-town-vib...

ludicity commented on Welcome to Gas Town   steve-yegge.medium.com/we... · Posted by u/gmays
kaycey2022 · 2 months ago
What does Beck think?
ludicity · 2 months ago
He was the keynote at YOW! so I can't capture all the nuance, and I hope I'm not doing him a disservice with my interpretation, but the tl;dr of his view is:

"LLMs drastically decrease the cost of experimenting during the very earliest phases of a project, like when you're trying to figure out if the thing is even worth building or whether a specific approach might yield improvements, but they lose efficacy once you're past those stages. You can keep using LLMs sustainably with a very tight loop of telling it to do the thing, then cleaning up the results immediately, via human judgement."

I.e., I don't think he can relate at all to the experience of letting them run wild and getting a good result.

ludicity commented on Comparing AI agents to cybersecurity professionals in real-world pen testing   arxiv.org/abs/2512.09882... · Posted by u/littlexsparkee
tptacek · 2 months ago
Can't be any worse than Fortify was!
ludicity · 2 months ago
At my first job, all the applications the data people developed were compulsorily evaluated through Fortify (I assume this is HP Fortify) and to this day I have no idea what the security team was actually doing with the product, or what the product does. All I know is that they never changed anything even though we were mostly fresh grads and were certainly shipping total garbage.
ludicity commented on Welcome to Gas Town   steve-yegge.medium.com/we... · Posted by u/gmays
reedlaw · 2 months ago
Gergely Orosz (The Pragmatic Engineer) interviewed Yegge [1] and Kent Beck [2], both experienced engineers before vibe coding, and they express similar sentiments about how LLMs reinvigorated their enjoyment of programming. This introduction to Gas Town is very clear on its intended audience with plenty of warnings against overly eager adoption. I agree that using tools like this haphazardly could lead to disaster, but I would not dismiss the possibility that they could be used productively.

1. https://www.youtube.com/watch?v=TZE33qMYwsc

2. https://www.youtube.com/watch?v=aSXaxOdVtAQ

ludicity · 2 months ago
Beck was in Melbourne a few weeks ago, and his take on LLM usage was so far divorced from what Yegge is doing that their views on what LLMs are capable of in early 2026 are irreconcilable.
ludicity commented on Software engineers should be a little bit cynical   seangoedecke.com/a-little... · Posted by u/zdw
jaggederest · 2 months ago
> they won't make any sacrifices if those sacrifices require losing money or status

That's not preferring good software to bad software, though. In order for a value to be meaningful when expressed, it has to result in some kind of trade-off. If you value honesty over safety but are never put in a situation where you have to choose between honesty and safety, then that value is fairly meaningless.

ludicity · 2 months ago
That's fair. I overstated my point a bit -- if a project was on schedule and it could be delayed by one day to improve something nebulous, many would agree. It's just that the tradeoffs are never that small, so you never actually see it happen, i.e., the preference is extremely minor.

u/ludicity

Karma: 1235 · Cake day: December 2, 2021
About
You can reach me at ludicity.hackernews@gmail.com