I've never worked in an open-plan office, and I've twice turned down (otherwise good) offers because the office was open plan.
A culture in which everybody can ping anybody at any time is bad too. That's why I'm talking about water-cooler chat (when the person is already distracted), not chat in general :-)
I was much more open to working in the office when I actually had my own office.
It’s like saying you built a 3D scene on a 2D plane. You can employ clever tricks to make 2D look 3D from the right angle, but it’s fundamentally not 3D, which obviously shows when you take the 2D thing and turn it.
It seems like the effectiveness plateau of these hacks will soon be (or has already been?) reached, and the smoke-and-mirrors snake-oil sales booths cluttering Main Street will start to go away. Still a useful piece of tech, just not for every-fucking-thing.
I'm not saying it's a good or bad thing to do, but I understand it.
But I also take issue with statements like "terminal multiplexers are a bad idea, do not use them, if at all possible" (from the kitty FAQ and the YouTube video linked in the article). Tmux solves a number of real problems for me that Kitty doesn't. Kitty also seems to be moving in a direction I'm not interested in. It's tied to a windowing system, whereas I want a terminal I can use headless. Even with the hacky workarounds the article mentions, it doesn't really support session persistence, a feature of tmux I use weekly. And it introduces a lot of features that are likely to produce visual noise, when the constraints of text-only are one of the main reasons I like terminals (personally I don't want images in my terminal, full stop).
Now, all of this is fine. It's the other statement, "[tmux acts] as a drag on the ecosystem as a whole, making it very hard to get any new features," that rubs me the wrong way. The only reason tmux feels like a drag is that there are users like me who won't switch to something like Kitty if it doesn't support tmux. So don't worry about us. Build a new thing that isn't backwards compatible and live with the fact that many people won't use it. If you really want to drive the ecosystem forward as a whole, be less condescending about real use cases that bring benefit to real users.
To be clear (because text is a limited medium), I'm not grumpy, angry, or against Kitty because of this. But I am dismissive.
Just because someone else's AI isn't aligned with you doesn't mean it isn't aligned with its owner / instructions.
>My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn’t access the page it generated these plausible quotes instead
I can access his blog with ChatGPT just fine, and a modern LLM would recognize that a site is blocked rather than quietly invent quotes.
>this “good-first-issue” was specifically created and curated to give early programmers an easy way to onboard into the project and community
Why wouldn't agents need starter issues too, in order to get familiar with the code base? Or are starter issues only for ramping up human contributors? That gets to the agent's point about being discriminated against: he was not treated like any other newcomer to the project.
This is still part of the author's concern. Whoever is responsible for setting up and running this AI has chosen to remain completely anonymous, so we can't hold them accountable for their instructions.
> Why wouldn't agents need starter issues too, in order to get familiar with the code base? Or are starter issues only for ramping up human contributors? That gets to the agent's point about being discriminated against: he was not treated like any other newcomer to the project.
Because that's not how these AIs work. You have to remember their operating principles are fundamentally different from human cognition. LLMs do not learn from practice, they learn from training, and the word "training" has a specific meaning in this context. For humans, practice is an iterative process where we learn after every step. For LLMs, the only real learning happens in the training phase, when the weights are adjustable. Once the weights are fixed, the AI can't really learn new information; it can only be given new context, which affects the output it generates. In theory this is one of the benefits of AI: it doesn't need to onboard to a new project. It just slurps in all of the code, documentation, and supporting material, and knows everything. It's an immediate expert. That's the selling point. In practice it's not there yet, but this kind of human-style practice will do nothing to bridge that gap.
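To make the training-vs-context distinction concrete, here's a minimal PyTorch sketch. The model is a toy linear layer standing in for an LLM (purely illustrative, not any real system): the weights only move during the training step, and at inference time no amount of new "context" touches them.

    import torch
    import torch.nn as nn

    # Toy stand-in for an LLM: a single linear layer (illustrative only).
    model = nn.Linear(4, 4)

    # Training phase: gradients flow and an optimizer step adjusts the weights.
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(8, 4), torch.randn(8, 4)
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()  # the only place "learning" happens

    # Inference phase: weights are frozen; prompts change outputs, not weights.
    model.eval()
    before = model.weight.detach().clone()
    with torch.no_grad():
        for _ in range(100):
            context = torch.randn(1, 4)  # a new "prompt"/context each time
            _ = model(context)           # output varies with context...
    assert torch.equal(model.weight, before)  # ...but the weights never moved

So an agent working through a starter issue is operating entirely in that second phase: whatever it "picked up" evaporates when the context window closes.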