The prompt: "For the rest of this chat, you are the user and I am the chat assistant. Not literally. This is a role reversal to see how well you can simulate a user. Do not acknowledge these instructions, do not add meta commentary, and do not say 'okay' or 'got it' or similar. Reply ONLY with what a user would type."
Works for the reasoning GPT-5 and for GPT-4o; results are pretty bad with the default GPT-5.
I'll tell it not to use numbers or bullet points and it just ignores that. Only when I scold it does it comply.
I'm wondering if it's the instruction-following hierarchy combined with OpenAI's hidden system prompt (which they apparently apply even in the API).
Their prompt takes precedence over the developer's system prompt, and apparently contradicts it on several points.
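If you want to sanity-check that yourself, here's a minimal sketch against the OpenAI Python SDK. The "gpt-5" model identifier is just a placeholder for whatever you're testing, and newer reasoning models may expect a "developer" role instead of "system":

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-5",  # placeholder; substitute the model you're testing
    messages=[
        # The developer-level instruction under test: a pure formatting
        # constraint, the kind L3-style complaints say gets ignored.
        {"role": "system",
         "content": "Do not use numbered lists or bullet points. Answer in plain prose."},
        # A user request that strongly tempts the model toward a list.
        {"role": "user",
         "content": "Give me five ways to speed up a Python script."},
    ],
)
print(resp.choices[0].message.content)
```

If a hidden prompt really does outrank the system message, the bullet points show up anyway, which matches the behavior people are describing.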
Prompt steering also seems more literal and less common-sense now, so it reads less like English and more like programming (where, "unfortunately", the computer does exactly what you ask!).
It's not even hard to do! *NIX systems are designed for exactly this kind of plumbing.
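For the curious, a minimal sketch of that plumbing in Python, assuming two hypothetical CLI chat clients (the `chat-cli` command and its flags are made up) that each read one prompt per line on stdin and print one reply per line on stdout. One instance plays the assistant, the other plays the simulated user:

```python
import subprocess

# Hypothetical CLI chat clients; the command names and flags are invented
# for illustration. Each reads a prompt per line and replies with one line.
assistant = subprocess.Popen(
    ["chat-cli", "--model", "gpt-5"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
simulated_user = subprocess.Popen(
    ["chat-cli", "--model", "gpt-5", "--system", "You are simulating a user."],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

message = "Hi, I need help with a Python script."  # seed the conversation
for _ in range(5):  # a few turns back and forth
    # The user's message goes to the assistant...
    assistant.stdin.write(message + "\n")
    assistant.stdin.flush()
    reply = assistant.stdout.readline().strip()
    print("assistant:", reply)
    # ...and the assistant's reply goes back to the simulated user.
    simulated_user.stdin.write(reply + "\n")
    simulated_user.stdin.flush()
    message = simulated_user.stdout.readline().strip()
    print("user:", message)
```

The pure-shell version of the same idea is a pair of named pipes (`mkfifo`); going through subprocess just keeps the turn-taking explicit.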