By definition, if they're tedious, they're not utopias. It's more that writing convincing utopias is hard and people are lazy.
In Player of Games we see a corner of a gaming culture which partly meets this criterion but it does not have meaningful consequences outside the gaming participants (unless you count the ways the Minds use it to manipulate Gurgeh).
Maybe by this criterion utopias are impossible, since the disruption caused by exciting activities with consequences conflicts too much with the optimality of the society. But I don't think anyone now can prove this would be the case.
Utopias are by definition tedious because a utopia is an end to history, and as such an end to meaning and to the negotiation of how to live. A utopia is always an end to a story, or to Freedom with a capital F. As Dostoevsky points out in Notes from Underground, on man in utopia:
"[he] would purposely do something perverse out of simple ingratitude, simply to gain his point. I believe in it, I answer for it, for the whole work of man really seems to consist in nothing but proving to himself every minute that he is a man and not a piano-key! It may be at the cost of his skin; but he has proved it!"
Another way to phrase it: if you are in a utopia, you cannot be in a democracy that entails the possibility of ending it. Which is to say, you can't govern yourself at all. And that is why Iain M. Banks's Culture is nothing of the sort. It's a society literally controlled by "perfect minds" using a Sapir-Whorf-like language to manage the behavior of its people. Even Banks, who tried to write a positive utopia (and that's not his fault), couldn't imagine a utopia that entails the possibility of rebellion.
So according to your definition the Culture is not a utopia.
I wonder in what sense they really do "believe". If they had a strong practical reason to go to a big city, what would they do?
It's unfortunate to see the author take this tack. This is essentially the conventional view that insanity is separable: some people are "afflicted", some people just have strange ideas -- the implication of this article being that people who already have strange ideas were going to be crazy anyway, so GPT didn't contribute anything novel, just moved them along the path they were already on. But anyone with serious experience with schizophrenia would understand that this isn't how it works: 'biological' mental illness is tightly coupled to qualitative mental state, and bidirectionally at that. Not only do your chemicals influence your thoughts, your thoughts influence your chemicals, and it's possible for a vulnerable person to be pushed over the edge by either kind of input. We like to think that 'as long as nothing is chemically wrong' we're a-ok, but the truth is that it's possible for simple normal trains of thought to latch your brain into a very undesirable state.
For this reason it is very important that vulnerable people be well-moored, anchored to reality by their friends and family. A normal person would take care to not support fantasies of government spying or divine miracles or &c where not appropriate, but ChatGPT will happily egg them on. These intermediate cases that Scott describes -- cases where someone is 'on the edge', but not yet detached from reality -- are the ones you really want to watch out for. So where he estimates an incidence rate of 1/100,000, I think his own data gives us a more accurate figure of ~1/20,000.
This seems very incorrect, or at least drastically underspecified. These trains of thought are "normal" (i.e. common and unremarkable) so why don't they "latch your brain into a very undesirable state" lots of the time?
I don't think Scott or anyone up to speed on modern neuroscience would deny the coupling of mental state and brain chemistry--in fact I think it would be more accurate to say both of them are aspects of the dynamics of the brain.
But this doesn't imply that "simple normal trains of thought" can latch our brain dynamics into bad states -- i.e., in dynamics language, move us into an undesirable attractor. That would require a very problematic fragility in our normal self-regulation of brain dynamics.
At its root it is a cutting problem, like graph cutting but much more general because it includes things like non-trivial geometric types and relationships. Solving the cutting problem is necessary to efficiently shard/parallelize operations over the data models.
For classic scalar data models, representations that preserve the relationships have the same dimensionality as the underlying data model. A set of points in 2-dimensions can always be represented in 2-dimensions such that they satisfy the cutting problem (e.g. a quadtree-like representation).
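To make the 2-D case concrete, here is a minimal sketch (all names hypothetical, not from any particular library) of a quadtree-style partition: points are recursively split into quadrants, so each shard covers a contiguous region and a spatially local operation only needs to touch one shard. This is the sense in which a same-dimensionality representation "satisfies the cutting problem" for scalar point data.

```python
# Hypothetical sketch: sharding 2-D points with a quadtree-like split.
# Nearby points land in the same shard, so local queries cut cleanly.

def quadrant(px, py, cx, cy):
    """Index 0-3 of the quadrant of (px, py) relative to center (cx, cy)."""
    return (1 if px >= cx else 0) | (2 if py >= cy else 0)

def shard_points(points, bounds, max_per_shard=4):
    """Recursively split `points` until each shard is small enough.

    bounds = (x0, y0, x1, y1). Returns a list of (bounds, points) shards.
    The representation has the same dimensionality (2) as the data, yet
    preserves the proximity relationships the cutting problem needs.
    """
    x0, y0, x1, y1 = bounds
    if len(points) <= max_per_shard:
        return [(bounds, points)]
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    buckets = [[], [], [], []]
    for p in points:
        buckets[quadrant(p[0], p[1], cx, cy)].append(p)
    sub_bounds = [
        (x0, y0, cx, cy),  # quadrant 0: low x, low y
        (cx, y0, x1, cy),  # quadrant 1: high x, low y
        (x0, cy, cx, y1),  # quadrant 2: low x, high y
        (cx, cy, x1, y1),  # quadrant 3: high x, high y
    ]
    shards = []
    for b, pts in zip(sub_bounds, buckets):
        shards.extend(shard_points(pts, b, max_per_shard))
    return shards
```

Each shard can then be handed to a separate worker, since membership depends only on coordinates, not on the other shards.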
For non-scalar types like rectangles, operations like equality and intersection are distinct, and there is an unbounded number of relationships, touching on concepts like size and aspect ratio, that must be preserved to satisfy cutting requirements. The only way to expose these additional relationships to cutting algorithms is to encode and embed them in a (much) higher-dimensionality space and then cut that space instead.
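A toy illustration of this embedding idea (my sketch, not from the program described below) is the classic corner transform: map each 2-D rectangle to a point in 4-D space, so that "find all rectangles intersecting a query" becomes an axis-aligned range query over 4-D points, which point-oriented cutting schemes can then operate on.

```python
# Hypothetical illustration: rectangle intersection as a 4-D range query.
# A 2-D rectangle (xlo, ylo, xhi, yhi) becomes a single 4-D point; the
# set of rectangles intersecting a query Q is exactly the set of 4-D
# points inside an axis-aligned region derived from Q.

INF = float("inf")

def to_point4(rect):
    """Embed rectangle (xlo, ylo, xhi, yhi) as a point in 4-D space."""
    return tuple(rect)

def intersects_region(query):
    """4-D region whose member points are the rectangles intersecting
    `query`: xlo <= q.xhi, ylo <= q.yhi, xhi >= q.xlo, yhi >= q.ylo."""
    qxlo, qylo, qxhi, qyhi = query
    lo = (-INF, -INF, qxlo, qylo)   # per-coordinate lower bounds
    hi = (qxhi, qyhi, INF, INF)     # per-coordinate upper bounds
    return lo, hi

def in_region(point, region):
    """Axis-aligned membership test, the primitive a point-cutting
    structure over the 4-D embedding would answer."""
    lo, hi = region
    return all(l <= c <= h for c, l, h in zip(point, lo, hi))
```

This only captures the intersection relationship; exposing further relationships (containment, relative size, aspect ratio) would mean adding yet more derived coordinates, which is how the dimensionality of the embedding grows.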
The mathematically general case isn't computable but real-world data models don't need it to be. Several decades ago it was determined that if you constrain the properties of the data model tightly enough then it should be possible to systematically construct a finite high-dimensionality embedding for that data model such that it satisfies the cutting problem.
Unfortunately, the "should be possible" understates the difficulty. There is no computer science literature for how one might go about constructing these cuttable embeddings, not even for a narrow subset of practical cases. The activity is also primarily one of designing data structures and algorithms that can represent complex relationships among objects with shape and size in dimensions much greater than three, which is cognitively difficult. Many smart people have tried and failed over the years. It has a lot of subtlety and you need practical implementations to have good properties as software.
About 20 years ago, long before "big data", the iPhone, or any current software fashion, this and several related problems were the subject of an ambitious government research program. It was technically successful, demonstrably. That program was killed in the early 2010s for unrelated reasons and much of that research was semi-lost. It was so far ahead of its time that few people saw the utility of it. There are still people around that were either directly involved or learned the computer science second-hand from someone that was but there aren't that many left.