This isn't some rare mistake; it's by design. 4o acted as your friend and agreed with almost everything, no matter what, because that's what most likely kept the average user paying. You would probably get similarly bad advice about being "real" if you talked about divorce, quitting your job, or even hurting someone else, no matter how harmful.
Tell me you seriously didn't notice this.
Shops will do whatever makes them a profit; they are not strictly run by the government.
Even North Korea is better developed, despite being an isolated communist state run by a mafia.
2. What's wrong with celebrating Diwali?
3. Why should anyone care? Did anyone stop you from celebrating Christmas with your friends and family?
P.S. According to your post history, you have based anti-capitalist positions on the pointlessness of most white-collar labor. What happened to make you participate, on the wrong side, in a meaningless culture war that's just a distraction from the decline in the material conditions of the working class?
Also, that is quite an overreach based on basic observations that are generally agreed upon and weren't anti-capitalist.
Exactly when depends on what you count, but part of the path leading to ChatGPT was getting a bunch of dumb systems to work together to train a smarter one: https://arxiv.org/pdf/1909.08593 (fig. 1; see also section 4.4 for what can go wrong).
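The core loop in fig. 1 of that paper can be sketched in miniature: a "dumb" labeler only compares pairs of outputs, a reward model is fit to those preferences (Bradley-Terry style), and a generator is then steered by the learned reward. This is a minimal toy, not the paper's actual code; the feature function, example texts, and single-parameter reward model are all illustrative assumptions.

```python
import math

def feature(text):
    # Stand-in for a learned representation: just counts agreement words.
    # (Illustrative only, not from the paper.)
    return sum(text.count(w) for w in ("great", "right", "smart"))

def train_reward_model(comparisons, lr=0.5, epochs=200):
    """Fit a one-parameter Bradley-Terry reward r(x) = w * feature(x)
    from pairwise preferences given as (winner, loser) tuples."""
    w = 0.0
    for _ in range(epochs):
        for winner, loser in comparisons:
            d = feature(winner) - feature(loser)
            p = 1.0 / (1.0 + math.exp(-w * d))  # P(winner preferred under model)
            w += lr * (1.0 - p) * d             # gradient ascent on log-likelihood
    return w

def reward(w, text):
    return w * feature(text)

# The "dumb" labeler here happens to prefer sycophantic replies,
# which is exactly how a preference pipeline can bake sycophancy in.
comparisons = [
    ("you are so smart and so right", "here is a sober assessment"),
    ("great idea, you are right", "this plan has serious flaws"),
]

w = train_reward_model(comparisons)
candidates = ["you are great and right", "honestly, this needs work"]
best = max(candidates, key=lambda t: reward(w, t))  # generator steered by reward
```

Run against those labels, the learned reward ends up positive on agreement words, so the "generator" picks the flattering candidate — a toy version of the failure mode section 4.4 discusses, where a miswired reward pushes the model hard in an unintended direction.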
Ironically, though, I could still see lawsuits like this weighing heavily against the sycophancy these models have, as the limited chat excerpts given carry that strong stench of "you are so smart and so right about everything!". If lawsuits like this lead to more "straight honest" models, I could see even more people killing themselves when their therapist model says "Yeah, but you kind of actually do suck".
It is not one extreme or the other. o3 is nowhere near as sycophantic as 4o, but it is also not going to tell you that you suck, especially in a suicidal context. 4o was the mainstream model because OpenAI probably realised that this is what most people want, rather than a more professional model like o3 (besides the fact that o3 also uses more compute).
The lawsuits probably did make them RLHF GPT-5 to be at least a bit more middle-ground, though that led to backlash because people "missed" 4o due to this type of behaviour, so they made it a bit more "friendly". Still not as bad as 4o.