stillpointlab commented on The unbearable slowness of AI coding   joshuavaldez.com/the-unbe... · Posted by u/aymandfire
rcarr · 3 days ago
For the longer ones, are you using AI to help you write the specs?
stillpointlab · 3 days ago
My experience is: AI-written prompts are overly long and overly specific. I prefer to write the instructions myself and then direct the LLM to ask clarifying questions or provide an implementation plan. Depending on the size of the change, I go 1-3 rounds of clarifications until Claude indicates it is ready and provides a plan that I can review.

I do this in a task_description.md file and include the clarifications in their own section (the files follow a task.template.md format).
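As a rough illustration (this is my sketch of the idea, not the commenter's actual template; the section names are invented), such a task.template.md might look like:

```markdown
# Task: <short title>

## Description
What needs to change and why, written by hand rather than by the LLM.

## Constraints
Files to touch, patterns to follow, things to avoid.

## Clarifications
Q: <question the agent asked>
A: <answer given before implementation started>

## Implementation Plan
(Filled in by the agent after the clarification rounds, then reviewed.)
```

Keeping the clarifications in their own section preserves a record of what was agreed before any code was written.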

stillpointlab commented on The unbearable slowness of AI coding   joshuavaldez.com/the-unbe... · Posted by u/aymandfire
stillpointlab · 3 days ago
I'm still calibrating myself on the size of task that I can get Claude Code to do before I have to intervene.

I call this problem the "goldilocks" problem. The task has to be large enough that it outweighs the time necessary to write out a sufficiently detailed specification AND to review and fix the output. It has to be small enough that Claude doesn't get overwhelmed.

The issue with this is that writing a "sufficiently detailed specification" is task-dependent. Sometimes a single sentence is enough, other times a paragraph or two, sometimes a couple of pages is necessary. And the "review and fix" phase is again task-dependent and largely unknowable up front. I can usually estimate the spec time, but the review and fix phase is a dice roll dependent on the output of the agent.

And the "overwhelming" metric is again not clear. Sometimes Claude Code can crush significant tasks in one shot. Other times it can get stuck or lost. I haven't fully developed an intuition for this yet, how to differentiate these.

What I can say is that this is an entirely new skill. It isn't like architecting large systems for human development. It isn't like programming. It is its own thing.

stillpointlab commented on AI tooling must be disclosed for contributions   github.com/ghostty-org/gh... · Posted by u/freetonik
stillpointlab · 3 days ago
I think ghostty is a popular enough project that it attracts a lot of attention, and that means it certainly attracts a larger than normal number of interlopers. There are all kinds of bothersome people in this world, but some of the most bothersome you will find are well-meaning people who are trying to be helpful.

I would guess that many (if not most) of the people attempting to contribute AI generated code are legitimately trying to help.

People who are genuinely trying to be helpful can often become deeply offended if you reject their help, especially if you admonish them. They will feel like the reprimand is unwarranted, considering the public shaming to be an injury to their reputation and pride. This is most especially the case when they feel they have followed the rules.

For this reason, if one is to accept help, the rules must be clearly laid out from the beginning. If the ghostty team wants to call out "slop", then it must make clear that contributing "slop" may result in a reprimand. Then the bothersome would-be-helpful contributors cannot claim injury.

This appears to me to be good governance.

stillpointlab commented on Claude says “You're absolutely right!” about everything   github.com/anthropics/cla... · Posted by u/pr337h4m
stillpointlab · 11 days ago
One thing I've noticed with all the LLMs that I use (Gemini, GPT, Claude) is the ubiquitous: "You aren't just doing <X>, you are doing <Y>"

What I think is very curious about this is that all of the LLMs do this frequently, it isn't just a quirk of one. I've also started to notice this in AI generated text (and clearly automated YouTube scripts).

It's one of those things that once you see it, you can't un-see it.

stillpointlab commented on The current state of LLM-driven development   blog.tolki.dev/posts/2025... · Posted by u/Signez
tptacek · 15 days ago
Learning how to use LLMs in a coding workflow is trivial. There is no learning curve. You can safely ignore them if they don’t fit your workflows at the moment.

I have never heard anybody successfully using LLMs say this before. Most of what I've learned from talking to people about their workflows is counterintuitive and subtle.

It's a really weird way to open up an article concluding that LLMs make one a worse programmer: "I definitely know how to use this tool optimally, and I conclude the tool sucks". Ok then. Also: the piano is a terrible, awful instrument; what a racket it makes.

stillpointlab · 14 days ago
I agree with you, and I have seen this take a few times now in articles on HN. It amounts to the classic Simpsons joke: "We've tried nothing and we're all out of ideas."

I read these articles and I feel like I am taking crazy pills sometimes. The person, enticed by the hype, makes a transparently half-hearted effort for just long enough to confirm their blatantly obvious bias. They then act as if they now have ultimate authority on the subject, proclaiming their preconceived notions true beyond any doubt.

Not all problems yield well to LLM coding agents. Not all people will be able or willing to use them effectively.

But I guess "I gave it a try and it is not for me" is a much less interesting article compared to "I gave it a try and I have proved it is as terrible as you fear".

stillpointlab commented on Eleven Music   elevenlabs.io/blog/eleven... · Posted by u/meetpateltech
betterhealth12 · 19 days ago
do you have a point of view on this type of collaborative approach applied to other areas, for example, collective understanding for groups of people? We are working on something in that space.
stillpointlab · 18 days ago
The amount I have to say on this topic would be inappropriate for a Hacker News comment. But some brief and unstructured thoughts I can offer.

For collaboration I believe that _lineage_ is important. Not just a one-shot output artifact but a series of outputs connected in some kind of graph. It is the difference between a single intervention/change vs. a _process_. This provides a record which can act as an audit trail. In this "lineage", as I would call it, there are conversations with LLMs (prompts + context) and there are outputs.

Let's imagine the original topic, audio, with the understanding that the abstract idea could apply to anything (including mental health). I have a conversation with an LLM about some melodic ideas and the output is a score. I take the score and add it as context to a new conversation with an LLM and the output is a demo. I take the demo and the score then add it to a new conversation with an LLM and the output is a rhythm section. etc.

What we are describing here is an evolving _process_ of collaboration. We change our view from "I did this one thing, here is the result" to "I am _doing_ this set of things over time".

The output of that "doing" is literally a graph. You have multiple inputs to each node (conversation/context) which can be traced back to initial "seed" elements.

From a collaborative perspective, each node in this graph is somewhat independent. One person can create the score. Another person can take the score and create a demo. etc.
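The lineage idea above can be sketched in a few lines. This is a hypothetical illustration, not anything the comment specifies; all names (`LineageNode`, `trace`, the score/demo/rhythm example) are my own:

```python
from dataclasses import dataclass, field

@dataclass
class LineageNode:
    """One node in the collaboration graph: a conversation plus its output."""
    name: str                  # the output artifact, e.g. "score" or "demo"
    prompt: str                # the conversation/context that produced it
    inputs: list["LineageNode"] = field(default_factory=list)  # upstream nodes

    def trace(self) -> list[str]:
        """Walk back to the seed elements, returning artifact names in
        dependency order -- the audit trail the comment describes."""
        seen: list[str] = []

        def walk(node: "LineageNode") -> None:
            for parent in node.inputs:
                walk(parent)
            if node.name not in seen:
                seen.append(node.name)

        walk(self)
        return seen

# The audio example: score -> demo -> rhythm section, each a new
# conversation that takes earlier outputs as context.
score = LineageNode("score", "conversation about melodic ideas")
demo = LineageNode("demo", "turn the score into a demo", inputs=[score])
rhythm = LineageNode("rhythm_section", "add a rhythm section",
                     inputs=[score, demo])

print(rhythm.trace())  # ['score', 'demo', 'rhythm_section']
```

Because each node only references its inputs, different people can own different nodes, which is the collaborative property described above.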

stillpointlab commented on No Comment (2010)   prog21.dadgum.com/57.html... · Posted by u/ColinWright
pimlottc · 19 days ago
If you enjoy that, you might consider becoming a court watcher:

https://www.vera.org/news/how-to-be-a-court-watcher-and-why-...

stillpointlab · 19 days ago
I have seriously considered going to the local court and just watching. I did it once as a kid, and we went to city hall a few times too.
stillpointlab commented on Create personal illustrated storybooks in the Gemini app   blog.google/products/gemi... · Posted by u/xnx
stillpointlab · 19 days ago
I asked it to create a story that described the modes of the major scale with a cartoon treble clef as the main character.

It created a 10 page story that stuck to the topic and was overall coherent. The main character changed color and style on every page, so no consistency there. The overall page layouts and animation style were reasonably consistent.

The metaphor it used was the character climbing a mountain and encountering other characters that represented each mode. Each supporting character was reasonably unique, although a note motif appeared on 3 or 4 of them. The mountain also changed significantly between pages, and the character was frequently back at the bottom. However, in the end, he does reach the summit.

I can't say I am overly impressed but it does mostly do what they claim.

stillpointlab commented on Claude Opus 4.1   anthropic.com/news/claude... · Posted by u/meetpateltech
ryandrake · 19 days ago
Interesting about the auto-completion. That was really the only Copilot feature I found to be useful. The idea of writing out an English prompt and telling Copilot what to write sounded (and still sounds) so slow and clunky. By the time I've articulated what I want it to do, I might as well have written the code myself. The auto-completion was at least a major time-saver.

"The card game state is a structure that contains a Deck of cards, represented by a list of type Card, and a list of Players, each containing a Hand which is also a list of type Card, dealt randomly, round-robin from the Deck object." I could have input the data structure and logic myself in the amount of time it took to describe that.

stillpointlab · 19 days ago
One analogy I have been thinking about lately is GPUs. You might say "The amount of time it takes me to fill memory with the data I want, copy from RAM to the GPU, let the GPU do its thing, then copy it back to RAM, I might as well have just done the task on the CPU!"

I hope when I state it that way you start to realize the error in your thinking process. You don't send trivial tasks to the GPU because the overhead is too high.

You have to experiment and gain experience with agent coding. Just imagine that there are tasks where the overhead of explaining what to do and reviewing the output are dwarfed by the actual implementation. You have to calibrate yourself so you can recognize those tasks and offload them to the agent.

stillpointlab commented on Eleven Music   elevenlabs.io/blog/eleven... · Posted by u/meetpateltech
pacifika · 19 days ago
I’d recommend GarageBand for this.
stillpointlab · 19 days ago
I haven't used the virtual drummer feature of GarageBand recently, but my experience with it was pretty disappointing. The output sounds very midi or like the most basic loops.

I believe there is massive room for improvement over what is currently available.

However, my larger point isn't "I want to do this one particular thing" but rather: I wish the music model companies would divert some attention away from "prompt a complete song in one shot" and towards "provide tools to iteratively improve songs in collaboration with a musician/producer".

u/stillpointlab

Karma: 247 · Cake day: May 27, 2025

About: https://stillpointlab.com