stillpointlab commented on Claude Code is being dumbed down?   symmetrybreak.ing/blog/cl... · Posted by u/WXLCKNO
stillpointlab · 2 days ago
I'm old, so I remember when Skyrim came out. At the time, people were howling about how "dumbed down" the RPG had become compared to previous versions. They had simplified so many systems. Seemed to work out for them overall.

I understand the article writer's frustration. He liked a thing about a product he uses and they changed the product. He is feeling angry, he is expressing that anger, and others are sharing in it.

And I'm part of another group of people. I would notice the files being searched without too much interest. Since I pay a monthly rate, I don't care about optimizing tokens. I only care about the quality of the final output.

I think the larger issue is that programmers are feeling like we are losing control. At first we're like, I'll let it auto-complete but no more. Then it was, I'll let it scaffold a project but not more. At each step we are ceding ground. It is strange to watch someone finally break on "They removed the names of the files the agent was operating on". Of all the lost points of control, this one seems so trivial. But every camel's back has a breaking point, and we can't judge the straw that does it.

stillpointlab commented on Where's the shovelware? Why AI coding claims don't add up   mikelovesrobots.substack.... · Posted by u/dbalatero
stillpointlab · 5 months ago
> We all know that the industry has taken a step back in terms of code quality by at least a decade. Hardly anyone tests anymore.

I see pseudo-scientific claims from both sides of this debate, but this is a bit too far for me personally. "We all know" sounds like Eternal September [1] reasoning. I've been in the industry about as long as the article author, and I think he might be looking at the past through rose-tinted glasses. Every aging generation looks down at the new cohort as if they didn't go through the same growing pains.

But in defense of this polemic, and laying my cards on the table as an AI maximalist and massive proponent of AI coding, I've been wondering the same. I see articles all the time about people writing this or that software using these new tools, and so often they never actually share what they built. I can understand if someone is heads-down cranking out amazing software using 10 Claude Code instances and raking in that cash. But not seeing even one open source project that embraces this and demonstrates it is a bit suspicious.

I mean, where is: "I rewrote Redis from scratch using Claude Code and here is the repo"?

1. https://en.wikipedia.org/wiki/Eternal_September

stillpointlab commented on John Coltrane's Tone Circle   roelsworld.eu/blog-saxoph... · Posted by u/jim-jim-jim
stillpointlab · 5 months ago
I want to consider the higher-level claims in the article. In between the helpful historical context, the article also offers some speculation about the Merkaba, Platonic solids, the Flower of Life, and other sacred geometry.

There is a premise hidden in those speculations: that there is some strong connection between the structure of the universe itself and the structures humans find pleasing when listening to music. And I detect a suggestion that studying the output of our greatest musicians might reveal some kind of hidden information about the universe, specifically related to some kind of "spirituality".

This was a sentiment shared, in some sense, by the deists of the Enlightenment. They rejected the scriptures and instead believed that studying the physical universe might reveal the "mind of God".

If we are looking for correspondences between these things, why limit ourselves to Euclidean geometry? Modern physics leans on Riemannian geometry, symmetry, and topology. Under a wide array of experiments, the structure of the universe appears far more complicated than the old geometric ideas suggest. Most physicists talk about Lie groups, fiber bundles, etc.

If you take "as above, so below" seriously and you want to find connections between cosmology and music, I believe you have to use modern mathematical tools. I think we need to expand beyond geometry and embrace topology. Can we think of the chromatic scale tones as a Group? What operators would we need? etc.
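
To make that concrete, here is a minimal sketch (my own illustration, not something from the article) of the standard construction from pitch-class set theory: the twelve chromatic pitch classes form the cyclic group Z/12Z under transposition, and adding inversion gives the dihedral T/I group of order 24.

    # Pitch classes 0..11 (C=0, C#=1, ..., B=11) form the cyclic group Z/12Z.
    def transpose(pc, n):
        """T_n: transposition by n semitones (the group operation)."""
        return (pc + n) % 12

    def invert(pc, n=0):
        """I_n: inversion about C, followed by transposition by n."""
        return (n - pc) % 12

    # Group axioms hold: T_0 is the identity, T_n followed by T_m is
    # T_((n+m) mod 12), and every T_n has the inverse T_(12-n).
    c_major = [0, 4, 7]                          # C, E, G
    print([transpose(pc, 2) for pc in c_major])  # D major: [2, 6, 9]
    print([invert(pc) for pc in c_major])        # inverted: [0, 8, 5]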

It's interesting to try to get into the head of a guy like Coltrane and his mathematical approach, but perhaps we could be pushing new boundaries based on new understanding.

stillpointlab commented on Proposal: AI Content Disclosure Header   ietf.org/archive/id/draft... · Posted by u/exprez135
xhkkffbf · 6 months ago
I'm all for some kind of disclosure, but where do we draw the line? I use a pretty smart grammar and spell checker, one that's got more "AI" in it to analyze the sentence structure. Is that AI content?
stillpointlab · 6 months ago
According to the spec, yes, a grammar checker would be subject to disclosure:

> ai-modified Indicates AI was used to assist with or modify content primarily created by humans. The source material was not AI-generated. Examples include AI-based grammar checking, style suggestions, or generating highlights or summaries of human-written text.
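
As a purely hypothetical illustration, taking only the "ai-modified" token from the quoted text: a server might disclose a grammar-checked page with a response header along these lines. The exact field name and syntax are defined in the draft, so treat this as a sketch of the idea rather than the draft's actual wire format.

    HTTP/1.1 200 OK
    Content-Type: text/html
    AI-Disclosure: ai-modified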

stillpointlab commented on The unbearable slowness of AI coding   joshuavaldez.com/the-unbe... · Posted by u/aymandfire
rcarr · 6 months ago
For the longer ones, are you using AI to help you write the specs?
stillpointlab · 6 months ago
My experience is: AI-written prompts are overly long and overly specific. I prefer to write the instructions myself and then direct the LLM to ask clarifying questions or provide an implementation plan. Depending on the size of the change, I go 1-3 rounds of clarifications until Claude indicates it is ready and provides a plan that I can review.

I do this in a task_description.md file and include the clarifications in their own section (the files follow a task.template.md format), roughly as sketched below.
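
The section names here are illustrative, not my exact format, but such a template might look like this:

    # Task: <short title>

    ## Goal
    One or two sentences describing the outcome I want.

    ## Context
    Relevant files, constraints, and prior decisions.

    ## Requirements
    Concrete, testable requirements, one per line.

    ## Clarifications
    The 1-3 rounds of Q&A with Claude get recorded here.

    ## Plan
    Claude's proposed implementation plan, reviewed before any code is written.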

stillpointlab commented on The unbearable slowness of AI coding   joshuavaldez.com/the-unbe... · Posted by u/aymandfire
stillpointlab · 6 months ago
I'm still calibrating myself on the size of task that I can get Claude Code to do before I have to intervene.

I call this the "goldilocks" problem. The task has to be large enough that it outweighs the time necessary to write a sufficiently detailed specification AND to review and fix the output. It has to be small enough that Claude doesn't get overwhelmed.

The issue is that writing a "sufficiently detailed specification" is task dependent. Sometimes a single sentence is enough, other times a paragraph or two, sometimes a couple of pages. The "review and fix" phase is again task dependent and completely unknown up front. I can usually estimate the spec time, but the review-and-fix phase is a dice roll that depends on the output of the agent.

And the "overwhelming" metric is again not clear. Sometimes Claude Code can crush significant tasks in one shot. Other times it can get stuck or lost. I haven't fully developed an intuition for this yet, how to differentiate these.

What I can say is that this is an entirely new skill. It isn't like architecting large systems for human development. It isn't like programming. It is its own thing.

stillpointlab commented on AI tooling must be disclosed for contributions   github.com/ghostty-org/gh... · Posted by u/freetonik
stillpointlab · 6 months ago
I think ghostty is a popular enough project that it attracts a lot of attention, which means it certainly attracts a larger-than-normal number of interlopers. There are all kinds of bothersome people in this world, but some of the most bothersome you will find are well-meaning people who are trying to be helpful.

I would guess that many (if not most) of the people attempting to contribute AI generated code are legitimately trying to help.

People who are genuinely trying to be helpful can often become deeply offended if you reject their help, especially if you admonish them. They will feel like the reprimand is unwarranted, considering the public shaming to be an injury to their reputation and pride. This is most especially the case when they feel they have followed the rules.

For this reason, if one is to accept help, the rules must be clearly laid out from the beginning. If the ghostty team wants to call out "slop", then it must make clear that contributing "slop" may result in a reprimand. Then the bothersome would-be helpful contributors cannot claim injury.

This appears to me to be good governance.

stillpointlab commented on Claude says “You're absolutely right!” about everything   github.com/anthropics/cla... · Posted by u/pr337h4m
stillpointlab · 6 months ago
One thing I've noticed with all the LLMs that I use (Gemini, GPT, Claude) is a ubiquitous: "You aren't just doing <X>, you are doing <Y>"

What I find very curious about this is that all of the LLMs do it frequently; it isn't just a quirk of one. I've also started to notice it in AI-generated text (and clearly automated YouTube scripts).

It's one of those things that once you see it, you can't un-see it.

stillpointlab commented on The current state of LLM-driven development   blog.tolki.dev/posts/2025... · Posted by u/Signez
tptacek · 6 months ago
> Learning how to use LLMs in a coding workflow is trivial. There is no learning curve. You can safely ignore them if they don’t fit your workflows at the moment.

I have never heard anybody successfully using LLMs say this before. Most of what I've learned from talking to people about their workflows is counterintuitive and subtle.

It's a really weird way to open up an article concluding that LLMs make one a worse programmer: "I definitely know how to use this tool optimally, and I conclude the tool sucks". Ok then. Also: the piano is a terrible, awful instrument; what a racket it makes.

stillpointlab · 6 months ago
I agree with you, and I have seen this take a few times now in articles on HN. It amounts to the classic Simpsons joke: "We've tried nothing and we're all out of ideas."

I read these articles and I feel like I am taking crazy pills sometimes. The person, enticed by the hype, makes a transparently half-hearted effort for just long enough to confirm their blatantly obvious bias. They then act like they now have ultimate authority on the subject, proclaiming their preconceived notions were definitely true beyond any doubt.

Not all problems yield well to LLM coding agents. Not all people will be able or willing to use them effectively.

But I guess "I gave it a try and it is not for me" is a much less interesting article than "I gave it a try and I have proved it is as terrible as you fear".

stillpointlab commented on Eleven Music   elevenlabs.io/blog/eleven... · Posted by u/meetpateltech
betterhealth12 · 6 months ago
Do you have a point of view on this type of collaborative approach applied to other areas, for example collective understanding for groups of people? We are working on something in that space.
stillpointlab · 6 months ago
The amount I have to say on this topic would be inappropriate for a Hacker News comment. But some brief and unstructured thoughts I can offer.

For collaboration I believe that _lineage_ is important: not just a one-shot output artifact but a series of outputs joined in some kind of graph. It is the difference between a single intervention/change and a _process_. This provides a record which can act as an audit trail. In this "lineage", as I would call it, there are conversations with LLMs (prompts + context) and there are outputs.

Let's imagine the original topic, audio, with the understanding that the abstract idea could apply to anything (including mental health). I have a conversation with an LLM about some melodic ideas and the output is a score. I take the score and add it as context to a new conversation with an LLM, and the output is a demo. I take the demo and the score and add them to a new conversation with an LLM, and the output is a rhythm section. And so on.

What we are describing here is an evolving _process_ of collaboration. We change our view from "I did this one thing, here is the result" to "I am _doing_ this set of things over time".

The output of that "doing" is literally a graph. You have multiple inputs to each node (conversation/context) which can be traced back to initial "seed" elements.

From a collaborative perspective, each node in this graph is somewhat independent. One person can create the score. Another person can take the score and create a demo. etc.
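
As a concrete sketch of that structure (my own illustration, nothing standard): each node records its inputs, the conversation that produced it, and the output artifact, and walking the graph backwards gives you the audit trail for free.

    from dataclasses import dataclass, field

    @dataclass
    class LineageNode:
        """One step in the collaboration: a conversation plus its output artifact."""
        name: str
        prompt: str    # the conversation/context given to the LLM
        output: str    # the resulting artifact (score, demo, rhythm section, ...)
        inputs: list = field(default_factory=list)  # parent LineageNode objects

        def trace(self, seen=None):
            """Walk back to the seed elements, yielding each step once, parents first."""
            seen = set() if seen is None else seen
            for parent in self.inputs:
                yield from parent.trace(seen)
            if self.name not in seen:
                seen.add(self.name)
                yield self.name

    # The audio example from above: seed conversation -> score -> demo -> rhythm.
    score = LineageNode("score", "melodic ideas discussion", "score.pdf")
    demo = LineageNode("demo", "turn this score into a demo", "demo.wav", inputs=[score])
    rhythm = LineageNode("rhythm", "add a rhythm section", "rhythm.wav", inputs=[score, demo])

    print(list(rhythm.trace()))  # ['score', 'demo', 'rhythm']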

u/stillpointlab

Karma: 265 · Cake day: May 27, 2025
About: https://stillpointlab.com