A hybrid of Strong (the lifting app) and ChatGPT where the model has access to my workouts, can suggest improvements, and coach me. I mainly just want to be able to chat with the model knowing it has detailed context for each of my workouts (down to the time in between each set).
Strong really transformed my gym progression; I feel like it's autopilot for the gym. But I have 4 routines I rotate through (I'll often switch it up based on equipment availability), and I'm sure an integrated AI coach could optimize further.
- Whenever you address a failing test, always bring your component mental model into the context.
Paste that into your Claude prompt and see if you get better results. You'll even be able to read and correct the LLM's mental model.
For example, in an English class with a lot of essay-writing assignments, the assignments could simply be worth 0% of the final mark. There would still be deadlines as usual, and they would be marked as usual, but the students would be free to do them or not as they pleased. The catch would be that the *proctored, for-credit* exams would demand that they write similar essays, which would then be graded based on the knowledge/skills the students would have been expected to gain if they'd done the assignments.
Advantages:
- No more issues with cheating.
- Students get to manage (or learn to manage) their own time and priorities, as is expected of adults, without being whipped as much with the cane of class grades.
- The advanced students who can already write clearly, concisely and convincingly (or whatever the objectives are of the writing exercises) don't have to waste time with unneeded assignments.
- If students skip the assignments, learn to write on their own time using ChatGPT and friends, and can demonstrate their skills in exam conditions, then it's a win-win.
This all requires that whoever is in charge of the class have clear and testable learning goals in mind -- which, alas, they all too often do not.
Mind you, I don't really have any alternative suggestions.
I typically have a discussion about how I want the architecture to be and my exact desired end state. I make the model repeat back to me what I want and have it produce the plan to the degree I am happy with. I typically do not have it work on building large amorphous systems; instead, I work with it to plan subsystems of the overall system I'm building.
A lot of my discussion with the model is about tradeoffs between the structure I'm imagining and methods it might know. My favorite sentence to send Claude right now is "Go google this," because I almost never take its first suggested response at face value.
I also watch every change and cancel and redirect ones I do not like. I read code very fast and like the oversight, because even small stupidities stack up.
The workflow is highly iterative and I make changes frequently; my non-AI workflow was like this too: write, compile, test, tweak, repeat.
I like this workflow a lot because I feel I can express my designs succinctly and get to a place I'm happy with, with much less writing than the actual code itself, which in many cases is not an interesting problem, just work that needs to happen to have a working system at all.
I do still wind up taking over, though less than I used to, at the edges where it's clear there is not a lot of training data, or where I'm working on something fairly novel or lower level.
I work in Python, Rust and TypeScript (Rust by far the most often), and the majority of my work is technically challenging at the systems design level, though maybe not low-level systems programming challenging. Think high-concurrency systems, data processing, training models, and some web dev.