This is the right question.
The answer is most definitely no: LLMs are not set up to deal with the nuances of the human psyche. We're in real danger of LLMs accidentally reinforcing dangerous lines of thinking. It's only a matter of time until we get a "ChatGPT made me do it" headline.
Too many AI hype folks out there think that humans don't need humans. We are social creatures, even as introverts. Interacting with an LLM is like talking to an evil mirror.
"...for people lacking the wealth or living in areas with no access to decent housing, unmaintained dangerous apartment buildings are a godsend."
Why is it OK to expect poor people to be endangered and suffer indignity just for the "crime" of being poor?