Years later there was a time when my sister and I noticed our mom was acting a bit strange -- more snappish and irritable than usual, and she even started dressing differently. Then at dinner she announced proudly that she had been off Prozac for a month. My sister and I looked at each other and at the same time went, "Ohhhh!" Mom was shocked that we'd noticed such a difference in her behavior and started taking the medication again.
I've been on the exact same dose as her for 15 years, and my 7-year-old son just started half that dose.
If I have a good day it's impossible to say whether that's due to Prozac. But since starting Prozac I have been much more likely to have good days than bad. So, since Prozac is cheap and I don't seem to suffer any side effects, I plan to keep taking it in perpetuity.
What I tell my kids is that getting depressed, feeling sad, feeling hopeless -- those are all normal feelings that everyone has from time to time. Pills can't or shouldn't keep you from feeling depressed if you have something to be depressed about. Pills are for people who feel depressed but don't have something to be depressed about -- they have food, shelter, friends, opportunities to contribute and be productive, nothing traumatic has happened, but they feel hopeless anyway -- and that's called Depression, which is different from "being depressed."
I also really admire the way you're dealing patiently with everyone in this thread arguing in bad faith, you have a lot more tolerance than I do! Hopefully it's not getting to you. Best wishes.
It's too early to say. Obviously the idea is to get her off it if possible.
SSRIs never help by boosting serotonin.
That's a hell of a claim, which could use some evidence.
I think existing software development skills get a whole lot more valuable with the addition of coding agents. You can take everything you've learned up to this point and accelerate the impact you can have with this new family of tools.
I said a version of this in the post:
> AI tools amplify existing expertise. The more skills and experience you have as a software engineer the faster and better the results you can get from working with LLMs and coding agents.
A brand new vibe coder may be able to get a cool UI out of ChatGPT, but they're not going to be able to rig up a set of automated tests with continuous integration and continuous deployment to a Kubernetes cluster somewhere. They're also not going to be able to direct three different agents at once in different areas of a large project that they've designed the architecture for.
2025: Get downvoted on HN for comparing Charlie Kirk to Horst Wessel
2026: Get upvoted for it
2027: Get banned for it
2028: No voting allowed, here or anywhere else
The basic gist of it is to give the LLM some code to review and have it assign a grade multiple times. How much variance is there in the grade?
Then, prompt the same LLM to be a "critical" reviewer with the same code multiple times. How much does that average critical grade change?
Low variance in the grades across many generations, combined with a small delta between "review this code" and "review this code with a critical eye", is a major positive signal for quality. A rough sketch of the check is below.
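Here's a minimal sketch of what I mean, assuming the OpenAI Python client; the "GRADE: x/10" output format, the regex parsing, and the sample count are all placeholder choices (a real harness would use structured output):

```python
import re
import statistics
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def grade_code(code: str, critical: bool = False, n: int = 10) -> list[float]:
    """Ask the model to grade the same code n times; return the numeric grades."""
    instruction = (
        "Review this code with a critical eye" if critical else "Review this code"
    )
    grades = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-5.1",
            messages=[{
                "role": "user",
                "content": f"{instruction}. End with one line: GRADE: x/10\n\n{code}",
            }],
        )
        match = re.search(r"GRADE:\s*(\d+(?:\.\d+)?)/10",
                          resp.choices[0].message.content)
        if match:
            grades.append(float(match.group(1)))
    return grades

code = open("example.py").read()
neutral = grade_code(code)
critical = grade_code(code, critical=True)

# Low variance and a small neutral-vs-critical delta are the positive signals.
print("variance:", statistics.variance(neutral))
print("delta:", abs(statistics.mean(neutral) - statistics.mean(critical)))
```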
I've found that gpt-5.1 produces remarkably stable evaluations, whereas Claude is all over the place. Furthermore, Claude will completely [and comically] change the tenor of its evaluation when asked to be critical, whereas gpt-5.1 stays directionally the same while tightening the screws.
You could also interpret these results to be a proxy for obsequiousness.
Edit: One major part of the eval I left out is "can an LLM converge on an 'A'?" Let's say the LLM gives the code a 6/10 (or B-). When you implement its suggestions and then provide the improved code in a new context, does the grade go up? Furthermore, can it eventually give its own improved version an A, and do so consistently?
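Continuing the sketch from above, the convergence loop might look like this -- improve_code is another hypothetical helper that asks the model to rewrite the code per its own review, and treating a mean of 9+/10 as an "A" is an arbitrary cutoff:

```python
import statistics

def improve_code(code: str) -> str:
    """Ask the model to apply its own review suggestions and return new code."""
    resp = client.chat.completions.create(
        model="gpt-5.1",
        messages=[{
            "role": "user",
            "content": f"Review this code, then output only an improved version:\n\n{code}",
        }],
    )
    return resp.choices[0].message.content

for round_num in range(5):
    grades = grade_code(code, n=5)  # fresh context on every call
    mean = statistics.mean(grades)
    print(f"round {round_num}: mean grade {mean:.1f}")
    if mean >= 9:  # call a consistent 9+/10 an "A" and stop
        break
    code = improve_code(code)
```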
It's honestly impressive how good, stable, and convergent gpt-5.1 is. Claude is not great. I have yet to test it on Gemini 3.