I don't know about GPT-5-Pro, but LLMs can dislike their own output (when they work well...).
How do you know?
No?
You can have two independent random walks. E.g., flip a coin, gain a dollar or lose a dollar. Run two of those in parallel. Your two account balances will change over time, but they won't be correlated.
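A minimal sketch of that setup (my own illustration, nothing beyond the comment is assumed): simulate two independent coin-flip accounts and check that the flips are uncorrelated. Note that correlating the running balances themselves can show large spurious sample correlations, since both paths drift; the independence shows up cleanly in the flips.

```python
import random

def flips(n, rng):
    """n fair coin flips, each worth +$1 or -$1."""
    return [1 if rng.random() < 0.5 else -1 for _ in range(n)]

def corr(xs, ys):
    """Sample Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(0)
a = flips(100_000, rng)  # account A's flips
b = flips(100_000, rng)  # account B's flips

print("final balance A:", sum(a))                 # wanders away from 0
print("final balance B:", sum(b))                 # wanders independently
print("flip correlation:", round(corr(a, b), 4))  # ~0, on the order of 1/sqrt(n)
```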
For dark-mode, we rely on https://invertornot.com/ to dynamically decide whether to fade or negate/invert. (Background: https://gwern.net/invertornot ) The service uses a small NN and is not always correct, as in these cases. Sorry.
I have filed them as errors with InvertOrNot, and will manually set them to invert.
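Roughly what that per-image decision amounts to (a hypothetical sketch; the function names, filter values, and override list are placeholders, not InvertOrNot's actual API or gwern.net's exact CSS):

```python
# Illustrative dark-mode treatments; the real site's filter values may differ.
FADE = "filter: grayscale(50%) brightness(80%);"      # gently dim photos
INVERT = "filter: invert(1) hue-rotate(180deg);"      # negate diagrams/line art

# Manual overrides for images the classifier gets wrong, as described above.
MANUAL_OVERRIDES = {"https://example.org/misclassified-diagram.png": "invert"}

def dark_mode_css(url, classify):
    """Pick a dark-mode treatment for one image.

    `classify` is a placeholder for whatever returns 'invert' or 'fade'
    (e.g. a cached result from the InvertOrNot service).
    """
    decision = MANUAL_OVERRIDES.get(url) or classify(url)
    return INVERT if decision == "invert" else FADE
```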
If you had butted in to a street conversation to tell me “by the way no one likes those silly glasses” I would have non-responded in the same way. But I would still be well within my rights to think you were kind of a jerk, and even to mention it to others.
What do you believe your role will be as an “advisor” at this residency?
…But responding as though this is an attack to be countered and defeated further illustrates why I had doubts about your suitability as a creative advisor. It may be the only way you _know how_ to interpret any reference to yourself or anything anyone might compare to your work. It doesn't mean you're a bad person. You just may not have the tools to draw out the best of other people’s creative skill. And then again, maybe you do, and I just caught you on two separate really bad days five years apart.
> "I have been working on a reimagining of the blog idea for a few years, and it includes an idea (“series”) that is quite similar to blogchains. See this section of the design docs https://thelocalyarn.com/code/uv/scribbled/Basic_Notions.htm... and partial screen shot. It’s almost ready!"
My original response, in full:
> "One thought on the docs: if there's always a well-defined 'next', why not overload Space/PgDwn to proceed to the next node, GNU Info-like? At the very least, there should be a 'next' link at the bottom, not solely hidden away at the top.
> (Also, no one likes those silly 'st' ligatures.)
> As far as the current theyarn design goes: I like the use of typographic ornaments as a theme, but the colors are confusing. (What does orange vs red vs green denote?) And the pilcrow? Sometimes it's at the beginning of articles (redundant with the hr), sometimes not?
> Hrs seem overused in general, like the (busy) footer. Appending notes chronologically is interesting but confusing, both date/where they begin/end. Are the caps deliberately not small caps? Full-width images would be useful for photos. Be interested to see the new one finished."
I consider these criticisms reasonable, accurate, and constructive, milquetoast even, and stand by them. I see no difference from the many other site critiques I have made over the years (eg https://www.lesswrong.com/posts/Nq2BtFidsnhfLuNAx/announcing... ), which are usually received positively, and I think this is a 'you' problem, especially after reading your other comments. And I will point out that you made no reply to my many concrete points until you decided to write this HN comment 5 years later.
(I have taken the liberty of adding a link to your top-level comment to the end of the existing thread, for context/updating.)
> But responding as though this is an attack to be countered and defeated further illustrates why I had doubts about your suitability as a creative advisor.
This is a remarkable way to characterize this conversation.
When I've done toy demos where GPT-5, Sonnet 4, and Gemini 2.5 Pro critique/vote on various docs (e.g. PRDs), they did not choose their own material more often than not.
My setup wasn't intended as a benchmark, though, so this could turn out wrong over enough iterations.
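For context, a sketch of what such a toy harness might look like (placeholder code and model names, not the actual setup described above): each model votes on anonymized docs, and we tally how often a model's pick is the doc it authored.

```python
from collections import Counter
import random
import re

MODELS = ["gpt-5", "claude-sonnet-4", "gemini-2.5-pro"]  # example names only

def ask(model: str, prompt: str) -> str:
    """Stand-in so the sketch runs; swap in a real API client here.
    This dummy just returns a random label found in the prompt."""
    return random.choice(re.findall(r"\[(DOC-\d+)\]", prompt))

def vote(voter: str, docs_by_author: dict[str, str]) -> str:
    """Ask `voter` to pick the best doc; returns the author of the winning doc."""
    authors = list(docs_by_author)
    random.shuffle(authors)                                      # random order
    labels = {f"DOC-{i + 1}": a for i, a in enumerate(authors)}  # hide authorship
    prompt = "Pick the best PRD. Reply with only its label.\n\n" + "\n\n".join(
        f"[{label}]\n{docs_by_author[author]}" for label, author in labels.items()
    )
    return labels.get(ask(voter, prompt).strip(), "invalid")

def self_preference(docs_by_author: dict[str, str], rounds: int = 20) -> Counter:
    """Count how often each voter's winner is its own doc, across repeated votes."""
    own_picks = Counter()
    for _ in range(rounds):
        for voter in MODELS:
            if vote(voter, docs_by_author) == voter:
                own_picks[voter] += 1
    return own_picks

if __name__ == "__main__":
    docs = {m: f"PRD draft written by {m}..." for m in MODELS}
    print(self_preference(docs))  # with the dummy ask(), picks are roughly uniform
```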