edit: anyone care to explain why this is a bad comment, rather than just downvoting? The GP comment says "Over a weekend, between board games and time with my kids,", and the parent comment lectures them based on an obvious strawman: "I can prompt AI while playing with the kids"
"between time with the kids" is another person's "during time with the kids".
For me, "between time with the kids" means my kids are engaged in another activity that does not require my input until they are done with it. Whatever I am doing during this time also is typically very interruptible, so I am ready to help the kids along to their next "thing" (the joys of being a parent!). On a typical weekend (my oldest is 7), I'll get maybe 2 hours of this time during the time my kids are awake.
I think companies that buy software by feature checklists would love this sort of product.
We have a strict no laptops and no phones rule when the kids are around (unless we're specifically doing something with them using those tools - looking at the weather forecast, or looking up some information).
"I can prompt AI while playing with the kids" is not a future I want.
One of the hard things about building a product on an LLM is that the model frequently changes underneath you. Since we introduced Claude Code almost a year ago, Claude has gotten more intelligent, it runs for longer periods of time, and it uses more tools more agentically. This is one of the magical things about building on models, and also one of the things that makes it very hard. There's always a feeling that the model is outpacing what any given product is able to offer (i.e. product overhang). We try very hard to keep up, and to deliver a UX that lets people experience the model in a way that is raw and low-level, and maximally useful at the same time.
In particular, as agent trajectories get longer, the average conversation has more and more tool calls. When we released Claude Code, Sonnet 3.5 was able to run unattended for less than 30 seconds at a time before going off the rails; now, Opus 4.6 1-shots much of my code, often running for minutes, hours, and days at a time.
The amount of output this generates can quickly become overwhelming in a terminal, and is something we hear often from users. Terminals give us relatively few pixels to play with; they have a single font size; colors are not uniformly supported; in some terminal emulators, rendering is extremely slow. We want to make sure every user has a good experience, no matter what terminal they are using. This is important to us, because we want Claude Code to work everywhere, on any terminal, any OS, any environment.
Users give the model a prompt, and don't want to drown in a sea of log output in order to pick out what matters: specific tool calls, file edits, and so on, depending on the use case. From a design POV, this is a balance: we want to show you the most relevant information, while giving you a way to see more details when useful (i.e. progressive disclosure). Over time, as the model continues to get more capable -- so trajectories become more correct on average -- and as conversations become even longer, we need to manage the amount of information we present in the default view to keep it from feeling overwhelming.
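To make the progressive-disclosure idea concrete, here's a minimal sketch of collapsing long tool output to a short preview with an expand hint. The function name and the 5-line threshold are illustrative, not Claude Code's actual implementation:

```python
def collapse_output(lines, max_lines=5):
    """Return a preview of `lines`, hiding the tail when it is too long.

    Hypothetical sketch: show the first `max_lines` lines and replace the
    rest with a count the user can expand, rather than flooding the view.
    """
    if len(lines) <= max_lines:
        return lines
    hidden = len(lines) - max_lines
    return lines[:max_lines] + [f"… +{hidden} more lines (ctrl+o to expand)"]
```

The same shape works for any noisy stream (build logs, test output): the default view stays bounded no matter how long the underlying trajectory runs.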
When we started Claude Code, it was just a few of us using it. Now, a large number of engineers rely on Claude Code to get their work done every day. We can no longer design for ourselves, and we rely heavily on community feedback to co-design the right experience. We cannot build the right things without that feedback. Yoshi rightly called out that often this iteration happens in the open. In this case in particular, we approached it intentionally, and dogfooded it internally for over a month to get the UX just right before releasing it; this resulted in an experience that most users preferred.
But we missed the mark for a subset of our users. To improve it, I went back and forth in the issue to understand what problems people were hitting with the new design, and shipped multiple rounds of changes to arrive at a good UX. We've built in the open in this way before, e.g. when we iterated on the spinner UX, the todos tool UX, and many other areas. We always want to hear from users so that we can make the product better.
The specific remaining issue Yoshi called out is reasonable. PR incoming in the next release to improve subagent output (I should have responded to the issue earlier, that's my miss).
Yoshi and others -- please keep the feedback coming. We want to hear it, and we genuinely want to improve the product in a way that gives great defaults for the majority of users, while being extremely hackable and customizable for everyone else.
https://martin.ankerl.com/2007/09/01/comprehensive-linux-ter...
Could the React rendering stack be optimised instead?
Heck, simply handle the scrolling yourself a la tmux/screen and only update the output at most every 4ms?
It's so trivial, can't you ask your fancy LLM to do it for you? Or have you guys lost the plot at this point and forgotten the basics of writing non-pessimized code?
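For what it's worth, the "update the output at most every 4ms" idea is just write coalescing: buffer incoming chunks and flush at most once per interval, instead of re-rendering on every chunk. A minimal sketch (class and parameter names are mine, and this is a toy, not tmux's or Claude Code's actual approach):

```python
import time

class ThrottledWriter:
    """Coalesce output chunks and flush at most once per `interval` seconds.

    Instead of re-rendering per chunk, callers push chunks and the writer
    batches them, emitting a single combined write when the interval elapses.
    """
    def __init__(self, write, interval=0.004, clock=time.monotonic):
        self.write = write          # sink, e.g. sys.stdout.write
        self.interval = interval    # 4 ms by default
        self.clock = clock          # injectable for testing
        self.buffer = []
        self.last_flush = float("-inf")

    def push(self, chunk):
        self.buffer.append(chunk)
        now = self.clock()
        if now - self.last_flush >= self.interval:
            self.flush(now)

    def flush(self, now=None):
        if self.buffer:
            self.write("".join(self.buffer))
            self.buffer.clear()
        self.last_flush = self.clock() if now is None else now
```

A real implementation would also need a timer to flush a trailing partial buffer, but the core point stands: the terminal sees one write per interval regardless of how fast the agent produces output.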
The real problem is their ridiculous "React rendering in the terminal" UI.
What makes petroleum artificial compared to any other substance found on (or in) earth?
It seems you are proposing that "any food that has had a petroleum product added to it at any stage is artificial", which is an oddly narrow focus.
For that matter, does this definition of "artificial" extend to the range of substances that can be synthesized with bio-feedstock?
To expand on our discussion, would this mean that ethanol made from other feedstock is natural, but ethanol made from petroleum is artificial?