And I think that's what CC's /sandbox uses on a Mac
I don't understand why this is such a necessary feature. Most people don't read paper books with a dictionary handy.
I find myself really missing this feature when I occasionally read a paper book; I catch myself wanting to tap a word on the page to get a definition.
It strikes me as odd, because I've had quite a few times where it would generate a response for me over multiple messages (since it was hitting its max message length) without any second-guessing or issue.
I don't think it was even during the over-capacity event, and I'm a Pro user.
Try it yourself. I'm getting a lot of value out of just using ChatGPT for coding. It's not without flaws, but I can get it to do a lot of routine stuff quite quickly. What I like about the desktop client is that a prompt is just one alt+space away. I usually just copy-paste whatever I'm working on and then ask it to do stuff to it.
There's some art to the prompting, and you usually have to nudge it to not be lazy and do the whole thing you asked for. It seems the engineers on the other side are working really hard to minimize token usage.
I find it's increasingly the UX that's holding me back, not the model quality. Context windows are now big enough to hold a lot of stuff, but how do you get everything in there that matters? Manually copy-pasting stuff together is tedious. I actually wrote a script (well, with some LLM help) that flattens the files in my repository into a single file that I then simply attach to a conversation. Works surprisingly well.
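For anyone wanting to try the same trick, a flattening script like that can be quite short. This is a minimal sketch, not the commenter's actual script; the extension list, skip list, and output filename are illustrative choices:

```python
#!/usr/bin/env python3
"""Flatten a repository's text files into one file for LLM context."""
import os

INCLUDE_EXT = {".py", ".js", ".ts", ".md", ".json", ".toml"}  # illustrative
SKIP_DIRS = {".git", "node_modules", "__pycache__", "dist"}

def flatten_repo(root: str, out_path: str) -> int:
    """Concatenate matching files under `root` into `out_path`.

    Each file is preceded by a header line with its relative path so the
    model can tell the files apart. Returns the number of files written.
    """
    count = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for dirpath, dirnames, filenames in os.walk(root):
            # Prune directories that would only waste context window.
            dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
            for name in sorted(filenames):
                if os.path.splitext(name)[1] not in INCLUDE_EXT:
                    continue
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8") as f:
                        text = f.read()
                except (UnicodeDecodeError, OSError):
                    continue  # skip binaries and unreadable files
                rel = os.path.relpath(path, root)
                out.write(f"\n===== {rel} =====\n{text}\n")
                count += 1
    return count

if __name__ == "__main__":
    n = flatten_repo(".", "repo_flat.txt")
    print(f"wrote {n} files to repo_flat.txt")
```

The path headers matter more than they look: without them the model tends to lose track of which code came from which file.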
Also I found the UX of Claude to be better for this, especially their Projects feature. I can just put the codebase in the Project's context, and start a new conversation to ask different questions/solve different problems.
The only pain point I have is that it seems pretty optimized to only show changes to existing files rather than rewriting them in full, which makes copy-pasting into my IDE a bit of a pain. I'll see if I can write a system prompt that forces it to generate a diff or a similar format that could more easily be applied automatically to my code.
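If you do get the model to emit standard unified diffs, applying them automatically is the easy part. A minimal sketch, assuming the model's reply is a plain unified diff (the function name and dry-run strategy are my own choices, not a known tool):

```python
"""Apply a model-generated unified diff to a working tree with git apply."""
import subprocess
import tempfile

def apply_patch(diff_text: str, repo_dir: str) -> bool:
    """Validate and apply a unified diff; return True on success.

    `git apply --check` dry-runs first, so a malformed or stale model
    response is rejected outright instead of half-applied.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".patch",
                                     delete=False) as f:
        f.write(diff_text)
        patch_path = f.name
    check = subprocess.run(
        ["git", "-C", repo_dir, "apply", "--check", patch_path],
        capture_output=True,
    )
    if check.returncode != 0:
        return False
    result = subprocess.run(
        ["git", "-C", repo_dir, "apply", patch_path],
        capture_output=True,
    )
    return result.returncode == 0
```

The catch in practice is usually on the model side: hand-written line numbers in hunk headers drift, so the prompt has to insist on exact context lines.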
My experience is that at first it was very frustrating. I went from ~80WPM on my macbook keyboard to ~20WPM on the ergodox. After a couple of weeks I was able to write text at a comfortable speed again, but any special character was painfully slow, as I had to consciously think where each character was, and often look it up on my layout. After about 3 months I was back up to 80WPM.
What also took a long time was configuring my layout to fit my programming needs; it took me about 6 months to come up with a layout that had everything I needed (you can see it here if you're curious[1]). My recommendation is to do it incrementally: start with something general, then, with use, see what feels right and what doesn't.
In the end it was a really good idea: a lot of the back pain I had has gone away, and after long typing sessions I have far less pain in my wrists and hands.
[1]: https://configure.zsa.io/ergodox-ez/layouts/BOLz0/ybXMx/0
Imagine what a world-class programmer could accomplish in this world if they thought 100 times faster than a human, and had no fear of going to jail. Our world is an insecure machine, and we're preparing to run untrusted code with root access.
And sure, maybe we can try to use less-intelligent AIs to secure things before then, but the weak point is still humans. Social engineering is typically far easier than straight-up hacking. We've seen these lesser AIs threaten people, and while we can keep bonking them on the nose when they do that, we can't prove, or even tell, that they won't do it again in a different situation, when they judge it to be the most effective course of action.
I hope every day that this is all just hype and that another AI winter is coming, because we need time (who knows how long) to find a way to align these things. But I really fear that it isn't.