Long story short: I think emoji in headings and lists, em dashes, and the vile TED Talk paragraph structure of "long sentence with lots of words asking a question or introducing a possibility. followed by. short sentences. rebutting. or affirming." are here to stay. My money is that it gets normalized and embraced as "well of course that's how you best communicate because I see it everywhere."
1. Make artifacts. If you're researching a technology or testing a hypothesis, fire off subagents to explore different parts of the problem space, each reporting back into a doc. Then another agent synthesizes those docs into a conclusion/report.
2. Require citations. "Use these trusted sources. Cite a trusted source for each claim. Cite with enough context that it's clear your citation supports the claim, and refuse to cite if the citation doesn't support the claim."
3. Review. This lets you fire off a subagent to review the synthesis. It can have its own prompt: look for confirming and disconfirming evidence, and don't trust uncited claims. If you find it making conflation mistakes, figure out at what stage they arise and why, then adjust your process to get in front of them.
4. Manage your context. An LLM has a fixed context size ("chat length"), and facts & instructions near the front of that context tend to be followed more faithfully than things at the end. Subagents are a way of managing that context to get more from a single run. Artifacts like notebooks or records of subagent output move content outside the context, so you can pick up in a new session ("chat") and continue the work.
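To make the workflow concrete, here's a minimal Python sketch of steps 1–3. Everything here is hypothetical scaffolding: `call_llm` is a stand-in for whatever model API you actually use, and the subtopic names are made up. The point is the shape (fan out, write artifacts, synthesize, then review with a separate prompt), not the specific code.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real model call here.
    # Each call is its own fresh context, which is what makes this
    # a context-management technique (point 4).
    return f"[model output for: {prompt[:40]}...]"

CITATION_RULE = (
    "Use these trusted sources. Cite a trusted source for each claim, "
    "with enough context that the citation clearly supports it; "
    "refuse to cite if it does not."
)

def explore(subtopics: list[str]) -> list[str]:
    """Step 1: one subagent per subtopic, each producing an artifact."""
    return [call_llm(f"Research: {t}\n{CITATION_RULE}") for t in subtopics]

def synthesize(artifacts: list[str]) -> str:
    """Step 1, continued: a separate agent merges the artifacts."""
    joined = "\n---\n".join(artifacts)
    return call_llm(f"Synthesize these notes into a report:\n{joined}")

def review(report: str) -> str:
    """Step 3: a reviewer agent with its own prompt checks the synthesis."""
    return call_llm(
        "Review this report. Look for confirming and disconfirming "
        f"evidence; do not trust uncited claims.\n{report}"
    )

# Example run with made-up subtopics:
artifacts = explore(["prior art", "benchmarks", "failure modes"])
report = synthesize(artifacts)
critique = review(report)
```

In practice the artifacts would be written to files on disk rather than held in a list, so a later session can resume from them without replaying the whole conversation.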
It's less fun than just having a chat with ChatGPT, but I find I get much better-quality results using these techniques. Hope this helps! If you're not interested in doing this (too much like work, and you already have something that works), it's no skin off my nose. All the best!