michaelfeathers commented on Programming Deflation   tidyfirst.substack.com/p/... · Posted by u/dvcoolarun
recroad · 3 months ago
Yeah, it sounded like Kent just discovered Jevons' paradox and decided to shoehorn it into the article. Nothing here became cheaper, and if by cheaper he means that paying for an AI is cheaper than paying a programmer, even that's not necessarily true once you account for re-work and a host of other things.

If we're going to go with economic/strategy models, I think the Laffer Curve is more relevant. Seriously extrapolating here: AI is well suited to many tasks, and used in those contexts it can maximize productivity. Over-using it on unsuitable tasks destroys productivity.

michaelfeathers · 3 months ago
There's something with the same shape as the Jevons paradox: the Peltzman effect. The safer you make something, the more risks people will take.

Applied to AI, I think it would be something like: ease of development increases the complexity attempted.

michaelfeathers commented on The key points of "Working Effectively with Legacy Code"   understandlegacycode.com/... · Posted by u/lordleft
allemagne · 4 months ago
I read through this book relatively recently and agree with the praise here for the core idea that legacy code is code that is untested. The first few chapters are full of pretty sharp insights that you will nod along to if you've spent a decent amount of time in any large codebase.

However, most of the content in the last half of the book consists of naming and describing what seemed like obvious strategies for refactoring and rewriting code. I would squint at the introduction to a new term, try to parse its definition, look at the code example, and when it clicked I would think "well that just seems like what you would naturally choose to do in that situation, no?" Then the rest of the chapter describing this pattern became redundant.

It didn't occur to me that committing the terms themselves to memory would be particularly useful, so getting through the rest of the content became a slog I'm not sure was worth it. Curious if that was the experience of anyone else.

michaelfeathers · 4 months ago
Thanks. After I wrote it, a friend said "I think you just gave people permission to do things that they would've felt bad about otherwise." I think he was right, in a way. On the other hand, not everything is obvious to everyone, and it's been 20 years. Regardless of whether people have read the book, the knowledge of these things has grown since then.
michaelfeathers commented on Left to Right Programming   graic.net/p/left-to-right... · Posted by u/graic
bear8642 · 4 months ago
> Sometimes it is called a fluent-interface in other languages.

Where've you heard it called that? I've normally heard it called tacit programming.

michaelfeathers · 4 months ago
The developers of JMock, the original mock-object library for Java.
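jMock's expectation DSL is a classic example of that fluent style. From memory of jMock 2 (so treat the details as approximate; Subscriber and message are assumed to be defined elsewhere in the test):

    Mockery context = new Mockery();
    final Subscriber subscriber = context.mock(Subscriber.class);

    // The expectation reads left to right, like an English sentence:
    // "one call of subscriber.receive(message)".
    context.checking(new Expectations() {{
        oneOf(subscriber).receive(message);
    }});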
michaelfeathers commented on Left to Right Programming   graic.net/p/left-to-right... · Posted by u/graic
michaelfeathers · 4 months ago
This is called point-free style in Haskell.

Sometimes it is called a fluent interface in other languages.
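Java's streams show the same left-to-right, fluent reading. A minimal sketch (the Person record and people list are made-up names):

    import java.util.List;

    record Person(String name, int age) {}

    List<Person> people = List.of(new Person("Ada", 36), new Person("Tim", 9));

    // Each step hands its result to the next call, so the pipeline
    // reads in the order it executes: filter, then map, then sort.
    List<String> adultNames = people.stream()
        .filter(p -> p.age() >= 18)
        .map(Person::name)
        .sorted()
        .toList();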

michaelfeathers commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
rerdavies · 7 months ago
It's an interesting idea. I get it. Although I wonder... do you really need formal languages anymore, now that we have LLMs that can take natural language specifications as input?

I tried running the idea on a programming task I did yesterday: "Create a dialog to edit the contents of THIS data structure." It did actually produce a dialog that worked the first time. Admittedly a very ugly dialog. But all the fields and labels and controls were there in the right order with the right labels, and were all properly bound to props of a React control that was grudgingly fit for purpose. I suspect I could have corrected some of the layout issues with supplementary prompts. But it worked. I will do it again, with supplementary prompts, next time.

Anyway. I next thought about how I would specify the behavior I wanted. The informal specification would be: "Open the Looping dialog. Set Start to 1:00, then open the Timebase dialog. Select 'Beats', set the tempo to 120, and press the back button. Verify that the Start text edit now contains '30:1' (the same time expressed in bars and beats). Set it to 10:1, press the back button, and verify that the corresponding 'Loop' <description of storage for that data omitted for clarity> for the currently selected plugin contains 20.0." I can actually see that working (and I plan to see if I can convince an AI to turn that into test code for me).

Any imaginable formal specification for that would be just grim. In fact, I can't imagine a "formal" specification for that. But a natural language specification seems eminently doable. And even if there were such a formal specification, I am 100% positive that I would be using natural language AI prompts to generate the specifications. Which makes me wonder why anyone needs a formal language for that.

And I can't help thinking that "Write test code for the specifications given in the previous prompt" is something I need to try. How to give my AI tooling access to UI controls, though....

michaelfeathers · 7 months ago
That doesn't sound like the sort of problem you'd use it for. I think it would be used for the ~10% of code in some applications that is part of the critical core. UI, not so much.
michaelfeathers commented on My AI skeptic friends are all nuts   fly.io/blog/youre-all-nut... · Posted by u/tabletcorry
sksisoakanan · 7 months ago
The issue with prompting is that English (or any other human language) is nowhere near as rigid or strict as a programming language. Almost always, an idea can be expressed much more succinctly in code than in natural language.

Combine that with the fact that, when you're reading the code, it's often much easier to develop a prototype solution as you go, and prompting ends up feeling like using four men to carry a wheelbarrow instead of having one push it.
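A small illustration of the first point (my sketch, assuming a List<Integer> xs is in scope): "the sum of the squares of the even numbers in xs" becomes

    // Edge cases come pinned down for free: an empty list sums to 0.
    int sumOfEvenSquares = xs.stream()
        .filter(x -> x % 2 == 0)
        .mapToInt(x -> x * x)
        .sum();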

michaelfeathers · 7 months ago
I think we are going to end up with a common design/code specification language that we use for prompting and testing. There's always going to be a need to convey the exact semantics of what we want. If not for AI, then for the humans who have to grapple with what is made.
michaelfeathers commented on ChatGPT 4.1 Jailbreak Prompt   github.com/elder-plinius/... · Posted by u/maxloh
cryptonector · 8 months ago
> I'd really like to see what an LLM Incident Response looks like!

It must look like this: "Uggh! Here we go again!" and "boss, we really can't make the guardrails secure, at some point we might have to give up", with the PHB saying "keep trying, we have to have them guardrails!".

michaelfeathers · 8 months ago
The trajectory of AI is emulating humans. We've never been able to align humans completely, so it would be surprising if we could align AI.
michaelfeathers commented on Chat is a bad UI pattern for development tools   danieldelaney.net/chat/... · Posted by u/cryptophreak
michaelfeathers · a year ago
Chat in English? Sure. But there is a better way. Make it a game to see how little you can specify to get what you want.

I used this single line to generate a 5-line Java unit test a while back.

test: grip o -> assert state.grip o
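It came back with something along these lines (a sketch of the shape, not the exact output; Gripper, grip(), and state() are names inferred from the pidgin line):

    @Test
    public void gripIsRecordedInState() {
        Gripper gripper = new Gripper();
        Object o = new Object();
        gripper.grip(o);
        assertEquals(o, gripper.state().grip());
    }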

LLMs have wide "understanding" of various syntaxes and associated semantics. Most LLMs have instruct tuning that helps. Simplifications that are close to code work.

Re precision: yes, we need precision, but if you work in small steps, the precision comes in the review.

Make your own private pidgin language in conversation.

michaelfeathers commented on Software Friction   hillelwayne.com/post/soft... · Posted by u/saikatsg
michaelfeathers · 2 years ago
I think it makes sense to see friction as a disincentive, the opposite of an incentive.
michaelfeathers commented on Simple tasks showing reasoning breakdown in state-of-the-art LLMs   arxiv.org/abs/2406.02061... · Posted by u/tosh
michaelfeathers · 2 years ago
This is a good talk about the problem: https://youtu.be/hGXhFa3gzBs?si=15IJsTQLsyDvBFnr

Key takeaway: LLMs are abysmal at planning and reasoning. You can give them the rules of a planning task and ask them for a result but, in large part, the correctness of their logic (when it occurs) depends upon additional semantic information rather than just the abstract rules. They showed this by mapping nouns to a completely different domain in the rule and input descriptions for a task. After those simple substitutions, performance fell apart. Current LLMs are mostly pattern matchers with bounded generalization ability.
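To illustrate the kind of substitution they used (my own made-up example, not the talk's exact wording):

    Original:   "You may stack block A on block B if B is clear."
    Obfuscated: "You may blarg item A onto item B if B is glimmed."

The logical structure is identical, but with the familiar nouns gone there is nothing to pattern-match against, and performance collapses.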

u/michaelfeathers

Karma: 2516 · Cake day: April 15, 2009