Coding was incredibly fun until working in capitalist companies got involved. It was still fairly fun after that, but tinged by some amount of "the company is just trying to make money, it doesn't care that the pricing sucks and it's inefficient, it's more profitable to make mediocre software with more features than to really nail and polish any one part"
Adding on: AI impacts how fun coding is for me exactly the way they say, and that compounds with the company's misaligned incentives.
... I do sometimes think maybe I'm just burned out though, and I'm looking for ways to rationalize it, rather than doing the healthy thing and quitting my job to join a cult-like anti-technology commune.
For me, I’m vaguely but persistently thinking about a career change, wondering if I can find something of more tangible “real world” value. An essential part of that is the question of whether any given tech job holds much apparent “real world” value at all.
I've met lots of "digital natives" and they seem to use technology as a black box, clicking/touching stuff at random until it sorta works, but they are not very good at building a mental model of why something is behaving in an unexpected way and verifying their own hypotheses (i.e. "debugging").
And more so with AI software/tools, and IMO frighteningly so.
I don’t know where the open models people are up to, but as a response to this I’d wager they’ll end up playing the Linux desktop game all over again.
All of which strikes at one of the essential AI questions for me: do you want humans to understand the world we live in or not?
Doesn’t have to be individually, as groups of people can be good at understanding something beyond an individual. But a productivity gain isn’t on its own a sufficient response to this question.
Interestingly, it really wasn’t long ago that “understanding the full computing stack” was a topic around here (IIRC).
It’d be interesting to see if some “based” “vinyl player programming” movement evolved in response to AI in which using and developing tech stacks designed to be comprehensively comprehensible is the core motivation. I’d be down.
> we aim for a computing system that is fully visible and understandable top-to-bottom — as simple, transparent, trustable, and non-magical as possible. When it works, you learn how it works. When it doesn’t work, you can see why. Because everyone is familiar with the internals, they can be changed and adapted for immediate needs, on the fly, in group discussion.
Funny for me, as this is basically my principal problem with AI as a tool.
It’s likely very aesthetic or experiential, but for me, it’s strong: a fundamental value of wanting to work to make the system and the work transparent, shared/sharable and collaborative.
Always liked B Victor a great deal, so it wasn’t surprising, but it was satisfying to see alignment on this.
Like having `<T: {width: f64, depth: f64}>`?
I have such a hard time understanding the multiple arrows notation of ML family languages.
Is this valid Rust (it’d be new to me)?!
If not, I’m guessing, from memory, that the only way at this in Rust is through traits?
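As far as I know, structural bounds like that aren’t valid Rust; the trait route is the usual way to express it. A minimal sketch, with the `HasFootprint` trait and the `Desk`/`area` names made up purely for illustration:

```rust
// Hypothetical trait standing in for the structural bound {width: f64, depth: f64}.
trait HasFootprint {
    fn width(&self) -> f64;
    fn depth(&self) -> f64;
}

// A concrete type that happens to have those fields.
struct Desk {
    width: f64,
    depth: f64,
}

impl HasFootprint for Desk {
    fn width(&self) -> f64 {
        self.width
    }
    fn depth(&self) -> f64 {
        self.depth
    }
}

// The generic function only knows what the trait bound promises.
fn area<T: HasFootprint>(item: &T) -> f64 {
    item.width() * item.depth()
}

fn main() {
    let desk = Desk { width: 1.6, depth: 0.8 };
    println!("{}", area(&desk));
}
```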
But for me the biggest issue with all this — that I don't see covered in here, or maybe just a little bit in passing — is what all of this is doing to beginners, and the learning pipeline.
> There are people I once respected who, apparently, don’t actually enjoy doing the thing. They would like to describe what they want and receive Whatever — some beige sludge that vaguely resembles it. That isn’t programming, though.
> I glimpsed someone on Twitter a few days ago, also scoffing at the idea that anyone would decide not to use the Whatever machine. I can’t remember exactly what they said, but it was something like: “I created a whole album, complete with album art, in 3.5 hours. Why wouldn’t I use the make it easier machine?”
When you're a beginner, it's totally normal to not really want to put in the hard work. You try drawing a picture, and it sucks. You try playing the guitar, and you can't even get simple notes right. Of course a machine where you can just say "a picture in the style of Pokémon, but of my cat" and get a perfect result out is much more tempting to a 12 year old kid than the prospect of having to grind for 5 years before being kind of good.
But up until now, you had no choice but to keep making crappy pictures and playing crappy songs until you actually start to develop a taste for the effort, and a few years later you find yourself actually pretty darn competent at the thing. That's a pretty virtuous cycle.
I shudder to think where we'll be if the corporate-media machine keeps hammering the message "you don't have to bother learning how to draw, drawing is hard, just get ChatGPT to draw pictures for you" to young people for years to come.
The only silver lining I can see is that a new perspective may be forced on how well or badly we’ve facilitated learning, usability, generally navigating pain points and maybe even all the dusty presumptions around the education / vocational / professional-development pipeline.
Before, demand for employment/salary pushed people through. Now, if actual and reliable understanding, expertise and quality is desirable, maybe paying attention to how well the broader system cultivates and can harness these attributes can be of value.
Intuitively though, my feeling is that we’re in some cultural turbulence, likely of a truly historical magnitude, in which nothing can be taken for granted and some “battles” were likely lost long ago when we started down this modern-computing path.
I suspect for many who’ve touched the academic system, a popular voice that isn’t anti-intellectual or anti-expertise (or out to trumpet their personal theory), but critical of the status quo, would be viewed as a net positive.
I get value out of (and even enjoy) lots of software, commercial and otherwise (except for Microsoft Teams--that's an abomination).
Ultimately, everything (not just software) is a trade-off. It has benefits and hazards. As long as the benefits outweigh the hazards, I use it. [The one frustration is, of course, when an employer forces a negative-value trade-off on you--that sucks.]
I'm suspicious of articles that talk about drawbacks in isolation, without weighing the benefits: "vaccines have side-effects", "police arrest the wrong people", "electric cars harm the environment".
Ironically, the best answer to many of the article's suggestions (thousands rather than millions, easy to modify, etc.) is to write your own software with LLMs. The future everyone wants is, I think, one where users can ask the computer to do anything, and the computer immediately complies. Will that bring about a software paradise free from the buggy, one-size-fits-none, extractive software of today? I don't know. I guess we'll see.
We live in interesting times.
Not sure exactly what irony you mean here, but I’ll bite on the anti-LLM bait …
Surely it matters where the LLM sits against these values, no? Even if you’ve got your own program from the LLM and it’s yours, so long as you may need alterations, maintenance, debugging or even an understanding of its nuances, the nature of the originating LLM, as a program, matters too … right?
And in that sense, are we at all likely to get to a place where LLMs aren’t simply the new mega-platforms (while we await the year of the local-only/open-weights AI)?
For better/worse, and whether completely so or not, the time of the professional keyboard-driven mechanical logic problem solver may simply have just come and gone in ~4 generations (70 years?).
By 2050 it may be more or less as niche as it was in 1950??
Personally, I find the relative lack of awareness and attention on the human aspect of it all a bit disappointing. Being caught in the tides of history is a thing, and can be a tough experience, worthy of discourse. And causing and even forcing these tides isn’t necessarily a desirable thing, maybe?
Beyond that, mapping out the different spaces that are brought to light with such movements (eg, the various sets of values that may drive one and the various ways that may be applied to different realities) would also certainly be valuable.
But alas, “productivity” rules I guess.