Readit News
maegul commented on Useful patterns for building HTML tools   simonwillison.net/2025/De... · Posted by u/simonw
i_love_retros · 13 days ago
Endlessly churning out tools with vibe coding sounds quite boring to me. The world went and changed I guess.
maegul · 13 days ago
Indeed, likely a useful lens on the current moment I’d say.

For better/worse, and whether completely so or not, the time of the professional keyboard-driven mechanical logic problem solver may simply have just come and gone in ~4 generations (70 years?).

By 2050 it may be more or less as niche as it was in 1950??

Personally, I find the relative lack of awareness and attention on the human aspect of it all a bit disappointing. Being caught in the tides of history is a thing, and can be a tough experience, worthy of discourse. And causing and even forcing these tides isn’t necessarily a desirable thing, maybe?

Beyond that, mapping out the different spaces that are brought to light with such movements (eg, the various sets of values that may drive one and the various ways that may be applied to different realities) would also certainly be valuable.

But alas, “productivity” rules I guess.

maegul commented on Toucan Wireless Split Keyboard with Touchpad   shop.beekeeb.com/products... · Posted by u/tortilla
maegul · 2 months ago
How do people find these trackpads? I’ve seen them, or at least similar ones, in the Kyria et al. keyboards[0] and am intrigued but suspicious too.

[0] https://splitkb.com/collections/keyboard-kits

maegul commented on We're in the wrong moment   ezrichards.github.io/post... · Posted by u/chilipepperhott
TheDong · 2 months ago
I identify with this, though I'm further along the path.

Coding was incredibly fun until working in capitalist companies got involved. It was then still fairly fun, but tinged by some amount of "the company is just trying to make money, it doesn't care that the pricing sucks and it's inefficient, it's more profitable to make mediocre software with more features than really nail and polish any one part"

Adding AI on top impacts how fun coding is for me exactly as they say, and that compounds with the company's misaligned incentives.

... I do sometimes think maybe I'm just burned out though, and I'm looking for ways to rationalize it, rather than doing the healthy thing and quitting my job to join a cult-like anti-technology commune.

maegul · 2 months ago
I resonate.

For me, I’m vaguely but persistently thinking about a career change, wondering if I can find something of more tangible “real world” value. An essential basis of that is the question of whether any given tech job simply doesn’t hold much apparent “real world” value.

maegul commented on We're in the wrong moment   ezrichards.github.io/post... · Posted by u/chilipepperhott
Pamar · 2 months ago
I think that what erased the "programmer vs computer illiterate" dichotomy was BASIC in the 80s.

I've met lots of "digital natives" and they seem to use technology as a black box, clicking/touching stuff at random until it sorta works, but they are not very good at building a mental model of why something is behaving in a way they didn't expect and at verifying their own hypotheses (i.e. "debugging").

maegul · 2 months ago
Agreed. And I feel it fair to argue that this is the intended interface between proprietary software and its users, categorically.

And more so with AI software/tools, and IMO frighteningly so.

I don’t know where the open models people are up to, but as a response to this I’d wager they’ll end up playing the Linux desktop game all over again.

All of which strikes at one of the essential AI questions for me: do you want humans to understand the world we live in or not?

Doesn’t have to be individually, as groups of people can be good at understanding something beyond any individual. But a productivity gain isn’t on its own a sufficient response to this question.

Interestingly, it really wasn’t long ago that “understanding the full computing stack” was a topic around here (IIRC).

It’d be interesting to see if some “based” “vinyl player programming” movement evolved in response to AI in which using and developing tech stacks designed to be comprehensively comprehensible is the core motivation. I’d be down.

maegul commented on Learnable Programming (2012)   worrydream.com/LearnableP... · Posted by u/kunzhi
psawaya · 5 months ago
maegul · 5 months ago
Salient quote under the “AI” question in the FAQ:

> we aim for a computing system that is fully visible and understandable top-to-bottom — as simple, transparent, trustable, and non-magical as possible. When it works, you learn how it works. When it doesn’t work, you can see why. Because everyone is familiar with the internals, they can be changed and adapted for immediate needs, on the fly, in group discussion.

Funny for me, as this is basically my principal problem with AI as a tool.

It’s likely very aesthetic or experiential, but for me, it’s strong: a fundamental value of wanting to work to make the system and the work transparent, shared/sharable and collaborative.

Always liked B Victor a great deal, so it wasn’t surprising, but it was satisfying to see alignment on this.

maegul commented on Row Polymorphic Programming   stranger.systems/posts/by... · Posted by u/todsacerdoti
aerzen · 5 months ago
Someone help me, a rust programmer, understand this: is this like having a function be generic over structs with some fields?

Like having `<T: {width: f64, depth: f64}>`?

I have such a hard time understanding the multiple arrows notation of ML family languages.

maegul · 5 months ago
Hmmm … my beginner’s rust is getting too rusty.

Is this valid rust (it’d be new to me)?!

If not, I’m guessing, from memory, that the only way at this in rust is through traits?
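Something like the following is the trait-based sketch I have in mind. All the names here (`HasDimensions`, `Desk`, `Rug`, `footprint`) are made up purely for illustration, and it’s only an approximation of the row-polymorphic idea, since rust has no structural record types:

```rust
// Hypothetical trait playing the role of the "row" { width: f64, depth: f64 }.
trait HasDimensions {
    fn width(&self) -> f64;
    fn depth(&self) -> f64;
}

// Two unrelated structs that both happen to carry those fields.
// (Desk having an extra field is the point: the trait only cares about width/depth.)
struct Desk {
    width: f64,
    depth: f64,
    drawers: u8,
}

struct Rug {
    width: f64,
    depth: f64,
}

impl HasDimensions for Desk {
    fn width(&self) -> f64 { self.width }
    fn depth(&self) -> f64 { self.depth }
}

impl HasDimensions for Rug {
    fn width(&self) -> f64 { self.width }
    fn depth(&self) -> f64 { self.depth }
}

// Generic over anything exposing width/depth, whatever else it carries.
fn footprint<T: HasDimensions>(item: &T) -> f64 {
    item.width() * item.depth()
}

fn main() {
    let desk = Desk { width: 1.5, depth: 0.8, drawers: 3 };
    let rug = Rug { width: 2.0, depth: 3.0 };
    println!("{} {}", footprint(&desk), footprint(&rug));
}
```

The trait stands in for the row `{width: f64, depth: f64}`: any struct can opt in and `footprint` doesn’t care what other fields it has, though unlike true row polymorphism you have to write each impl by hand rather than the compiler inferring it from the fields.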

maegul commented on The Rise of Whatever   eev.ee/blog/2025/07/03/th... · Posted by u/cratermoon
gyomu · 6 months ago
Broadly agreed with all the points outlined in there.

But for me the biggest issue with all this — that I don't see covered in here, or maybe just a little bit in passing — is what all of this is doing to beginners, and the learning pipeline.

> There are people I once respected who, apparently, don’t actually enjoy doing the thing. They would like to describe what they want and receive Whatever — some beige sludge that vaguely resembles it. That isn’t programming, though.

> I glimpsed someone on Twitter a few days ago, also scoffing at the idea that anyone would decide not to use the Whatever machine. I can’t remember exactly what they said, but it was something like: “I created a whole album, complete with album art, in 3.5 hours. Why wouldn’t I use the make it easier machine?”

When you're a beginner, it's totally normal to not really want to put in the hard work. You try drawing a picture, and it sucks. You try playing the guitar, and you can't even get simple notes right. Of course a machine where you can just say "a picture in the style of Pokémon, but of my cat" and get a perfect result out is much more tempting to a 12 year old kid than the prospect of having to grind for 5 years before being kind of good.

But up until now, you had no choice but to keep making crappy pictures and playing crappy songs until you actually start to develop a taste for the effort, and a few years later you find yourself actually pretty darn competent at the thing. That's a pretty virtuous cycle.

I shudder to think where we'll be if the corporate-media machine keeps hammering the message "you don't have to bother learning how to draw, drawing is hard, just get ChatGPT to draw pictures for you" to young people for years to come.

maegul · 6 months ago
Agreed!

The only silver lining I can see is that a new perspective may be forced on how well or badly we’ve facilitated learning, usability, generally navigating pain points and maybe even all the dusty presumptions around the education / vocational / professional-development pipeline.

Before, demand for employment/salary pushed people through. Now, if actual and reliable understanding, expertise and quality is desirable, maybe paying attention to how well the broader system cultivates and can harness these attributes can be of value.

Intuitively though, my feeling is that we’re in some cultural turbulence, likely of a truly historical magnitude, in which nothing can be taken for granted and some “battles” were likely lost long ago when we started down this modern-computing path.

maegul commented on Hilbert's sixth problem: derivation of fluid equations via Boltzmann's theory   arxiv.org/abs/2503.01800... · Posted by u/nsoonhui
IdealeZahlen · 6 months ago
She certainly fell into the rage bait trap, and I don't really like her these days, but this video seems fine - no ranting, just a nice piece of science communication.
maegul · 6 months ago
Rings true for my impression too. In the end, she’s a YouTuber now, for better or worse, but she still puts out what look like thoughtful and informative enough videos, whatever personal grudges she holds.

I suspect for many who’ve touched the academic system, a popular voice that isn’t anti-intellectual or anti-expertise (or out to trumpet their personal theory), but critical of the status quo, would be viewed as a net positive.

maegul commented on Using computers more freely and safely (2023)   akkartik.name/freewheelin... · Posted by u/surprisetalk
GMoromisato · 6 months ago
I'm sure this article resonates with many people; it doesn't resonate with me.

I get value out of (and even enjoy) lots of software, commercial and otherwise (except for Microsoft Teams--that's an abomination).

Ultimately, everything (not just software) is a trade-off. It has benefits and hazards. As long as the benefits outweigh the hazards, I use it. [The one frustration is, of course, when an employer forces a negative-value trade-off on you--that sucks.]

I'm suspicious of articles that talk about drawbacks in isolation, without weighing the benefits: "vaccines have side-effects", "police arrest the wrong people", "electric cars harm the environment".

Ironically, the best answer to many of the article's suggestions (thousands rather than millions, easy to modify, etc.) is to write your own software with LLMs. The future everyone wants is, I think, one where users can ask the computer to do anything, and the computer immediately complies. Will that bring about a software paradise free from the buggy, one-size-fits-none, extractive software of today? I don't know. I guess we'll see.

We live in interesting times.

maegul · 6 months ago
> Ironically, the best answer to many of the article's suggestions (thousands rather than millions, easy to modify, etc.) is to write your own software with LLMs.

Not sure exactly what irony you mean here, but I’ll bite on the anti-LLM bait …

Surely it matters where the LLM sits against these values, no? Even if you’ve got your own program from the LLM, so long as you may need alterations, maintenance, debugging, or even an understanding of its nuances, the nature of the originating LLM, as a program, matters too … right?

And in that sense, are we at all likely to get to a place where LLMs aren’t simply the new mega-platforms (while we await the year of the local-only/open-weights AI)?

maegul commented on A look at Cloudflare's AI-coded OAuth library   neilmadden.blog/2025/06/0... · Posted by u/itsadok
donatj · 7 months ago
My question is, kind of, in this brave new world: where do the domain experts come from? Who's going to know this stuff?
maegul · 7 months ago
This, for me, has been the question since the beginning. I’m yet to see anyone talk/think about the issue head-on either. And whenever I’ve asked someone about it, they’ve not had any substantial thoughts.
