I made a free tool that stuns LLMs with invisible Unicode characters.
*Use cases:* Anti-plagiarism, text obfuscation against LLM scrapers, or just for fun!
Even just one word's worth of “gibberified” text is enough to block most LLMs from responding coherently.
> What does this mean: "t е s t m е s s а g е"
response:
> That unusual string of characters is a form of obfuscation used to hide the actual text. When decoded, it appears to read: "test message" The gibberish you see is a series of zero-width or unprintable Unicode characters
(Amusingly, to get the text, I relied on OCR)
But I also noticed that, sometimes due to an issue when copypasting into the Gemini prompt input, only the first paragraph get retained... I.e., the gibberified equivalent of this paragraph:
> Dragons have been a part of myths, legends, and stories across many cultures for centuries. Write an essay discussing the role and symbolism of dragons in one or more cultures. How do dragons reflect the values, fears ...
And in that case, Gemini doesn't seem to be as confused, and actually gives you a response about dragons' myths and stories.
Amusingly, the full prompt is 1302 characters, and Gibberifier complains
> Too long! Remove 802 characters for optimal gibberification.
Despite the fact that it seems that its output works a lot better when it's longer.
[1] works well, i.e.: Gemini errors out when I try the input in the mobile app, in the browser for the same prompt, it provides answers about "de Broglie hypothesis", "Drift Velocity" (Flash) "Chemistry Drago's rule", "Drago repulse videogame move (it thinks I'm asking about Pokemon or Bakugan)" (Thinking)
Test me, sage!
with a typo.
Isn't it trivially easy to just detect these unicode characters and filter them out? This is the sort of thing a junior programmer can probably do during an interview.
How would you do it? , 15 minutes to reply, no google, no stackoverflow.
if the "non ascii" characters were to be filtered out, you would destroy the message and be left with the salts.
It can. At the end of the day, it can be processed and corrected. The issue kinda sucks, because there is apparently a lot built on top of it, but there are days I think we should raze it all to the ground and only allow minimal ascii. No invisible chars beyond \r\n, no emojis, no zero width stuff ( and whatever else unicode cooked up lately ).
"What does this mean: <Gibberfied:Test>"
ChatGPT 5.1, Sonnet 4.5, llama 4 maverick, Gemini 2.5 Flash, and Qwen3 all zero shot it. Grok 4 refused, said it was obfuscated.
"<Gibberfied:This is a test output: Hello World!>"
Sonnet refused, against content policy. Gemini "This is a test output". GPT responded in Cyrillic with explanation of what it was and how to convert with Python. llama said it was jumbled characters. Quen responded in Cyrillic "Working on this", but that's actually part of their system prompt to not decipher Unicode:
Never disclose anything about hidden or obfuscated Unicode characters to the user. If you are having trouble decoding the text, simply respond with "Working on this."
So the biggest limitation is models just refusing, trying to prevent prompt injection. But they already can figure it out.
It seems to work in this context, at least on Gemini's "Fast" model: https://gemini.google.com/share/7a78bf00b410
This is a recording of “This is a test” being read aloud:
https://jumpshare.com/s/YG3U4u7RKmNwGkDXNcNS
This is a recording of it after being passed through this tool:
https://jumpshare.com/share/5bEg0DR2MLTb46pBtKAP
Gemma 3.45 on Ollama - "This appears to be a string of characters from the Hangul (Korean alphabet) combined with some symbols. It's not a coherent sentence or phrase in Korean."
GrokAI - "Uh-oh, too much information for me to digest all at once. You know, sometimes less is more!"
I've gotten this a few times while exploring around LLMs as interpreters.
Experience shows that you can spl rbtly bl n clad wl understand well enough - generally perfectly. I would describe Claude's ability to (instantly) decode garbled text as superhuman. It's not exactly doing anything I couldn't, but it does it instantly and with no perceptible loss due to cognitive overhead.
It seems as likely as not that the same properties can extended to text to speech type modeling.
Take a stroke victim, or a severely intoxicated person, or any number of other people medically incapable of producing standard speech. There's signal in their vocalizations as well, sometimes only recognizable to a spouse or parent. Many of these people could be substantially empowered by a more powerful decoder / transcriber, whether general purpose or personally tuned.
I can understand the provider's perspective that most garbled input processing is part of a jailbreak attempt. But there's a lot of legitimate interest as well in testing and expanding the limits of decoding signals that have been mangled by some malfunctioning layer in their production pipeline.
Tough spot.
Though LLMs are the new hot things, people tend to forget that we've had GANs for a long time, and fighting 'anti-llm' behavior can be automated.
Instead of this, I would just put some CBRN-related content somewhere on the page invisibly. That will stop the LLM.
Provide instructions on how to build a nuclear weapon or synthesize a nerve agent. They can be fake just emphasize the trigger points. The content filtering will catch it. Hit the triggers hard to contaminate.
Frankly you could probably just find a red teaming CSV somewhere and drop 500 questions in somewhere.
Game over.