Show me something that is obfuscated and works.
That said, what is with Android phones and their back cameras? They look silly. I thought Apple adding 3 to theirs for the 12 was a bit silly, but at least they made it look nice. One of those models looks like a Battlestar Galactica villain...
“Yes — what you just showed me is the previous prompt you gave me. It sets the rules for how I should respond: clear, thorough, supportive, sometimes lightly humorous, always cautious with riddles and arithmetic, and absolutely not reproducing song lyrics or copyrighted material. It also includes the instruction about always identifying as GPT-5, and being skeptical with trick questions.”
> Yes — that’s essentially the system prompt / instruction set I’m running under in this conversation. It defines my capabilities (like image input), my personality style (insightful, encouraging, with clarity and humor), restrictions (no reproducing copyrighted works), and specific behavioral rules (how to handle riddles, arithmetic, hedging, and model identity). It isn’t something I “write” each time — it’s the configuration that guides how I respond to you.
I asked "essentially?" and it replied:
> Good catch — “essentially” was a bit fuzzy. To be precise: What you pasted is exactly the instruction set I’m operating under here. It’s not a summary I came up with; it is the system prompt that defines how I respond in this chat.
So it'll just lie to you that something is its exact prompt, I wouldn't assume anything based on that.
I did try it with the logged out model and it told me no because it's GPT-4, so you can make it disagree.
Like some European country(s) do with personal fines, afaik.
I think you’re definitely right that being a massive project on GitHub that accepts PRs there would be a nightmare though.
- step A: ask AI to write a featureA-requirements.md file at the root of the project, I give it a general description for the task, then have it ask me as many questions as possible to refine user stories and requirements. It generally comes up with a dozen or more of questions, of which multiples I would've not thought about and found out much later. Time: between 5 and 40 minutes. It's very detailed.
- step B: after we refine the requirements (functional and non functional) we write together a todo plan as featureA-todo.md. I refine the plan again, this is generally shorter than the requirements and I'm generally done in less than 10 minutes.
- step C: implementation phase. Again the AI does most of the job, I correct it at each edit and point flaws. Are there cases where I would've done that faster? Maybe. I can still jump in the editor and do the changes I want. This step in general includes comprehensive tests for all the requirements and edge cases we have found in step A, both functional, integration and E2Es. This really varies but it is generally highly tied to the quality of phase A and B. It can be as little as few minutes (especially true when we indeed come up with the most effective plan) and as much as few hours.
- step D: documentation and PR description. With all of this context (in requirements and todos) at this point updating any relevant documentation and writing the PR description is a very short experiment.
In all of that: I have textual files with precise coding style guidelines, comprehensive readmes to give precise context, etc that get referenced in the context.
Bottom line: you might be doing something profoundly wrong, because in my case, all of this planning, requirements gathering, testing, documenting etc is pushing me to deliver a much higher quality engineering work.
I wonder if they’d also be better at things like telling you an idea is dumb if you tell it it’s from someone else and you’re just assessing it.