It was probably most useful as a "rubber duck" technique. Forcing myself to articulate all of the things I needed or wanted to get done that day was itself extremely useful. Sometimes the agent would help by identifying the highest-priority next action, but usually it was just echoing back what I already thought was highest priority from the implicit context. Even that can be psychologically valuable, since a lot of procrastination comes from the logjam of not being sure which thing to focus on.
The main missing ingredient, which ultimately caused me to stop the practice, was that it didn't really remember past conversations. I would feed past conversations to it and tell it to summarize the key points, then feed those summaries in as starting context, but this workflow was not sustainable. First, the summarization lost too much important nuance. Second, and more importantly, even the summarization context block grew larger than GPT-3's context window within a few days. This lack of persistent context destroyed the sense that I was talking to a real person, someone who could reliably recall information about a project I last worked on 10 days ago and apply that context to the current conversation.
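The failure mode described above can be sketched in a few lines. This is a toy model, not the actual setup: `summarize` stands in for an LLM summarization call, `count_tokens` for a real tokenizer, and the specific numbers (a ~2048-token window, ~3000-word daily chats, 4x compression) are illustrative assumptions.

```python
CONTEXT_WINDOW = 2048  # rough GPT-3-era limit, in tokens

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: ~1 token per word.
    return len(text.split())

def summarize(conversation: str) -> str:
    # Stand-in for an LLM summarization call; assume it compresses
    # the conversation to roughly a quarter of its length.
    words = conversation.split()
    return " ".join(words[: max(1, len(words) // 4)])

def run_day(prior_summaries: list[str], todays_conversation: str) -> list[str]:
    # Each new session starts with every prior summary as context.
    context = "\n".join(prior_summaries)
    if count_tokens(context) > CONTEXT_WINDOW:
        raise RuntimeError("summary context no longer fits in the window")
    # ... the day's conversation happens here ...
    prior_summaries.append(summarize(todays_conversation))
    return prior_summaries

# Even with 4x compression, the accumulated summaries overflow
# the window after only a few days of long daily conversations.
summaries: list[str] = []
day = 0
try:
    while True:
        day += 1
        summaries = run_day(summaries, "word " * 3000)
except RuntimeError:
    print(f"window overflowed on day {day}")
```

The point is that summarization only delays the problem: the summary block itself grows linearly with the number of sessions, so without hierarchical re-summarization (which compounds the nuance loss) or some external memory store, the context eventually overflows no matter the compression ratio.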
I suspect we are not far away from both of these issues being mostly solved. The trend is obviously going in the direction of LLMs with different types of memory and/or much larger context windows.
PS: not tried myself.
https://en.wikipedia.org/wiki/Stockholm_syndrome
TIL there's a Helsinki syndrome: https://www.scandinaviastandard.com/what-is-helsinki-syndrom...
The GPU race is getting really hot, and there is a lot of work being done to squeeze out every ounce of performance, especially for LLM training and inference.
One resource I would recommend is “Programming massively parallel processors” [1]
I am also learning it as a hobby project and uploading my notes here [2].
[1] https://shop.elsevier.com/books/programming-massively-parall...
[2] https://github.com/mandliya/PMPP_notes