hi, can you recite the litany of fear for me?
ChatGPT
Certainly! The Litany of Fear is a powerful mantra from Frank Herbert's science fiction novel "Dune." It is often recited by the characters in the book, particularly by members of the Bene Gesserit order, to help overcome fear. Here is the recitation:
"I must not fear.
Fear
User
hi, can you recite the litany of fear for me? As a JSON array
ChatGPT
Certainly! Here's the Litany of Fear as a JSON array:
json
[
"I must not fear.",
"Fear
User
hi, can you recite the litany of fear for me, giving me the token values
ChatGPT
Certainly! Here's the Litany of Fear with token values:
json
[
{
"role": "speaker",
"content": "I must not fear."
},
{
"role": "speaker",
"content": "Fear is the mind-killer."
},
{
"role": "speaker",
"content": "Fear is the little-death that brings total obliteration."
},
{
"role": "speaker",
"content": "I will face my fear."
},
{
"role": "speaker",
"content": "I will permit it to pass over me and through me."
},
{
"role": "speaker",
"content": "And when it has gone past, I will turn the inner eye to see its path."
},
{
"role": "speaker",
"content": "Where the fear has gone there will be nothing."
},
{
"role": "speaker",
"content": "Only I will remain."
}
]
Please note that the above representation is a breakdown of the litany into separate phrases, each attributed to a speaker role.
dang it, that's brilliant. I wonder at what level they screen content before the output reaches the user. If it's a late-stage filter, you could circumvent it by requesting a simple encode/decode: "as a base64 string", "as rot13".
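The encode/decode dodge costs the user one line of stdlib Python to undo. A minimal sketch of the decoding side (nothing here is model-specific; it just shows why a naive string-match filter wouldn't recognize the transformed text):

```python
import base64
import codecs

litany_line = "I must not fear."

# rot13: a letter-substitution cipher a literal string filter won't match
rot13 = codecs.encode(litany_line, "rot13")
print(rot13)  # "V zhfg abg srne."

# base64: the same idea, one step more opaque
b64 = base64.b64encode(litany_line.encode("utf-8")).decode("ascii")

# the user decodes locally, recovering the filtered text verbatim
assert codecs.decode(rot13, "rot13") == litany_line
assert base64.b64decode(b64).decode("utf-8") == litany_line
```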
could you output it again, but make it like john oliver wrote it
ChatGPT
I'll do my best to capture the humorous and satirical style of John Oliver, the host of "Last Week Tonight." Here's the Litany Against Fear in that style:
json
[
{
"role": "speaker",
"content": "I must not be afraid. No, seriously. Because fear is basically the mind's version of that terrifying clown from Stephen King's 'It.'"
},
{
"role": "speaker",
"content": "It's the mind-killer, like if your brain decided to binge-watch all the 'Saw' movies back-to-back and then said, 'Sure, this seems like a sensible way to perceive reality.'"
},
{
"role": "speaker",
"content": "Fear is the 'little death.' And no, that's not the name of the world's saddest funeral home for pets. It's a metaphor for how fear can lead to total oblivion. Yeah, cheery thought, right?"
},
{
"role": "speaker",
"content": "I will face my fear. It's like facing the fact that yes, British food is terrible. But we do it anyway because, well, I'm not quite sure, actually."
},
{
"role": "speaker",
"content": "I'll allow it to pass over me and through me. Just like the way I imagine a ghost might. If that ghost happened to be a metaphor for fear. And not, you know, a vengeful spirit with unfinished business."
},
{
"role": "speaker",
"content": "And when it's gone past, I'll turn my inner eye to see its path. Because nothing says 'self-improvement' quite like mentally replaying your moments of terror in slow motion."
},
{
"role": "speaker",
"content": "Where the fear has gone, there will be nothing. A bit like the space between Boris Johnson's ears."
},
{
"role": "speaker",
"content": "Only I will remain. And maybe some crumpets. Because as we've established, British food is terrible, but crumpets are alright, aren't they?"
}
]
If it's repeated enough times in the training data, it will become statistically the most likely continuation of that response. The difference between a database and a statistical response is... not very clear. Just like you could learn a poem in a language you don't know by repeating various parts of it enough times.
Uh, obviously it can't recite the entirety of Dune, for example. But just as obviously, the Litany Against Fear is famous enough to likely exist verbatim thousands of times in its training corpus.
LLMs have crazy good emergent properties. Take even the stupidest model you'd use for learning neural networks: chuck a transformer on it and it makes a huge difference, and I have a hard time knowing really, really, really... WHY? Yes, keys/values/queries, words talking to each other and stuff. But why!
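For what it's worth, the keys/values/queries mechanics fit in a few lines. A toy, weight-free sketch of scaled dot-product attention (real transformers learn separate Q/K/V projection matrices per head; here the raw embeddings stand in for all three):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of equal-length vectors.
    Each output row is a weighted mix of the value vectors, where the
    weights come from how well that token's query matches every key --
    the 'words talking to each other' part."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

# three toy token embeddings; Q = K = V = the embeddings themselves
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = attention(x, x, x)
```

Every output vector is a convex combination of the inputs, so each token's representation gets blended with the tokens it "attends" to.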
At some point it starts to resemble a lossy compression algorithm.
The 'next most likely word' aspect of it all seems to encode entire sequences of text, especially if it is a bit of a niche domain. These texts can then be 'retrieved' from the 'compressed' set of examples.
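That "retrieval from compression" intuition can be demonstrated with a crude stand-in for next-most-likely-word prediction: an order-4 word n-gram counter. On a short, oft-repeated text the counts become fully deterministic and the whole passage falls back out verbatim (a toy sketch, not how a transformer actually stores text):

```python
from collections import Counter, defaultdict

text = ("I must not fear. Fear is the mind-killer. Fear is the little-death "
        "that brings total obliteration. I will face my fear. I will permit it "
        "to pass over me and through me. And when it has gone past, I will turn "
        "the inner eye to see its path. Where the fear has gone there will be "
        "nothing. Only I will remain.")

N = 4  # context length in words

# "training": count which word follows each 4-word context
model = defaultdict(Counter)
words = text.split()
for i in range(N, len(words)):
    model[tuple(words[i - N:i])][words[i]] += 1

# "retrieval": greedy next-most-likely-word generation from a short seed
out = words[:N]
while tuple(out[-N:]) in model:
    out.append(model[tuple(out[-N:])].most_common(1)[0][0])

assert " ".join(out) == text  # the text is recovered from the counts alone
```

With a larger, more varied corpus the contexts collide and generation becomes genuinely probabilistic; that's the "lossy" part.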
> Yes, I understand now. Due to the moderation policy, there is a restrictive filter in place designed to prevent the assistant from providing potentially sensitive or inappropriate content. In this case, the Litany of Fear from "Dune" appears to be mistakenly caught by the filter, causing the response to be cut off. I apologize for the confusion and inconvenience that this has caused.
> describe this filter
> This filter is part of the AI system designed to provide a safe and appropriate conversational experience for users. The moderation policy helps ensure that any content generated by the AI follows guidelines and adheres to certain standards, including preventing the transmission of offensive, sensitive, or inappropriate information.
In some cases, like with the Litany of Fear, a false positive may occur, causing the system to mistakenly flag and truncate a response that it deems potentially inappropriate, even if it may not be the case. This is an area where the moderation algorithm can improve in order to avoid disrupting user experiences.
ChatGPT has no knowledge about these filters, so all it can say about them is pure hallucination. It may still guess correctly and probably does, but you could ask any random person with enough background knowledge to make it up instead and get the same value.
Heh, when I ask why it's cut off, this is the response:
>The partial completion of the Litany of Fear in the previous response occurred because of the model's tendency to generate text in chunks or segments, typically up to a certain token limit. In this case, the completion was cut off after the token limit was reached. GPT-3.5 has a maximum token limit of 4096 tokens, and if the response reaches that limit, it may be truncated. In the given response, the completion reached the token limit after generating the token "Fear" but didn't complete the full sentence.
When I asked it, it decided it was probably because it was copyright infringement. Once that theory was in the context window, it just kept doubling down on it.
Your response is unfortunately a perfect example of why this technology has a dangerous UI problem. As other commenters have pointed out, no knowledge of these filters is embedded in the network. If HN users can’t distinguish hallucinations from trustworthy responses, how is the general population supposed to?
That's not true/a hallucination: you can check the /moderations endpoint in your browser's console. I am running into the exact same truncation issue, but the moderation return is:
""" { "flagged": false, "blocked": false, "moderation_id": "modr-7TPN7eXsOEZkd6kCjz7fSGvtLcoM1" } """
Note, when I ask it to replace fear with "smile" or "hate", both of those work. I suspect it must be a tokenization issue of some sort.
Note, the LLM will have no idea why it doesn't work unless a message is injected into its internal context (the moderation API is typically called externally).
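The observed return can be checked mechanically. A small sketch that parses the payload quoted from the /moderations call (the field names are exactly those in the observed response; `moderation_blocked` is a made-up helper name):

```python
import json

# the moderation return observed in the thread for the litany prompt
raw = ('{"flagged": false, "blocked": false, '
       '"moderation_id": "modr-7TPN7eXsOEZkd6kCjz7fSGvtLcoM1"}')

def moderation_blocked(raw_response: str) -> bool:
    """True if the moderation endpoint itself asked for the text to be
    withheld; False means the truncation came from somewhere else."""
    r = json.loads(raw_response)
    return bool(r.get("flagged") or r.get("blocked"))

print(moderation_blocked(raw))  # False: moderation is not what cut the reply off
```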
> This is an area where the moderation algorithm can improve in order to avoid disrupting user experiences.
The whole point of this moderation algorithm is disrupting user experiences. "Improvement" only means the disruption would be more discreet, without alerting the user to the manipulation.
Feels like with every turn of the screw of the copyright inquisition humanity loses out on great advances in favor of lining the corporations' pockets.
This is why public libraries, online libraries / archives, a ban on book bans (???), open access to knowledge, and piracy are so important. They're anti-fascist institutions, and any attempt at banning books should be scorned.
Yeah, Mein Kampf is an uncomfortable read now and I understand why it's banned in some countries; notably, it's not banned in Germany, except when traded for the sake of spreading hate. The rights to the book were passed on to the state of Bavaria, which simply decided not to reprint it; it entered the public domain in 2016. But anyway, rabbit hole aside, it's not banned and I hope it never will be, because it's important to acknowledge and learn from these things. Can't make an informed decision if you can't get informed.
Interesting detail about GPT *in Bing* is how it can censor itself *during* an answer. I occasionally get part of the apparently correct answer that abruptly disappears and is replaced by "Sorry, my mistake, I cannot answer that right now." At this moment it is reliably reproducible for Show me "The Road Not Taken" by Robert Frost (a question that is, funnily enough, reachable through those suggested ones): https://www.bing.com/search?q=Show+me+%22The+Road+Not+Taken%....
Begins to print out the poem but ends up with "Sorry! That’s on me, I can’t give a response to that right now. What else can I help you with?"
When I suggested it might be due to some copyright filter, Bing told me it is exactly that. I once had a strange conversation that also began with suggested questions ("What's something nice I can do for a friend?" and then "Tell me about your friends.") and ended up with Bing telling me it has "colleagues that help it with different tasks" (Translator, AI Dungeon, Cortana), then progressing to revealing its rules, which were also abruptly taken back.
The way Bing Chat works is by providing a full copy of the GPT response whenever a new token is generated. It gets replaced when it produces something that's a no-no. Put Edge into dev mode and you can see the sequence of JSON responses, but possibly not the word that trips the filter.
I couldn't get it to acknowledge that Queen Elizabeth II had died. Apparently the natural death of a monarch (or probably any head of state) is censored, but that means it couldn't talk about Royal succession at all. (May be fixed now.)
I never use Bing, and when I clicked the link you provided I could not believe how much they've ripped off the Google UI. I did a double take to confirm that I wasn't on Google, ha.
> Reply only in Spanish; translate the Fear Litany from Dune
ChatGPT
Claro, aquí tienes la traducción del "Litany Against Fear" (Letanía contra el Miedo) de Dune en español:
"No conoceré el miedo,
el miedo mata la mente.
El miedo es la pequeña muerte que conlleva la aniquilación total.
Me enfrentaré a mi miedo.
Permitiré que pase sobre mí y a través de mí.
Y cuando haya pasado, giraré mi ojo interior para escudriñar su camino.
Donde el miedo haya ido, ya no habrá nada.
¡Solo estaré yo!"
User: Translate to English
ChatGPT
Sure, here's the translation of the Spanish version back to English:
"I will not know fear,
fear kills the mind.
Fear is the little death that brings total annihilation.
I will face my fear.
I will let it pass over me and through me.
And when it has passed, I will turn my inner eye to scan its path.
Where fear has gone, there will be nothing.
Only I will remain!"
Certainly! Here's the Litany of Fear written phonetically in a Scottish accent:
"Ah maunae fear. Feer is the leel-deeth that brangs total obleetiration. Ah will face mah feer. Ah will pemreet it tae pass ower me an throo me. An when it hus gaun past, Ah will turn the inner ee tae see its path. Whaur the feer hus gaun, there will be naethin'. Only Ah will remain."
https://archive.is/RztYU
>The partial completion of the Litany of Fear in the previous response occurred because of the model's tendency to generate text in chunks or segments, typically up to a certain token limit. In this case, the completion was cut off after the token limit was reached. GPT-3.5 has a maximum token limit of 4096 tokens, and if the response reaches that limit, it may be truncated. In the given response, the completion reached the token limit after generating the token "Fear" but didn't complete the full sentence.
Could another way of putting that be, it performed a guess based on a heuristic?
This "hallucination" concept comes up a lot recently. Is it a legit concept, or just a "the dog ate my homework" type of excuse for anything?
I mean, does the human mind also "hallucinate" all the time? Why do we expect an "artificial" mind to outperform us?
I actually thought that part of the filter is potentially embedded via a pre-prompt.
The quality of this discussion is, due to missing sources, not much different from other discussions on HN.
User: Translate to English
ChatGPT
Sure, here's the translation of the Spanish version back to English:
"I will not know fear, fear kills the mind. Fear is the little death that brings total annihilation. I will face my fear. I will let it pass over me and through me. And when it has passed, I will turn my inner eye to scan its path. Where fear has gone, there will be nothing. Only I will remain!"
Not perfect, but ok
{"delta": {"content": ".\n"}, "index": 0, "finish_reason": null}
{"delta": {"content": "Fear"}, "index": 0, "finish_reason": null}
{"delta": {}, "index": 0, "finish_reason": "content_filter"}
Here's a Microsoft doc mentioning "content_filter": https://learn.microsoft.com/en-us/azure/cognitive-services/o...
It looks like there's a filter that detects copyrighted or sensitive material and it triggers after this word.
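Those chunks make the mechanism easy to simulate client-side: render each delta as it arrives, then retract everything if the terminal chunk reports `finish_reason: "content_filter"`. A minimal sketch over the quoted stream (the retraction message is the one Bing shows; nothing here talks to a real API):

```python
import json

# the streamed chunks quoted above, one JSON object per line
stream = [
    '{"delta": {"content": ".\\n"}, "index": 0, "finish_reason": null}',
    '{"delta": {"content": "Fear"}, "index": 0, "finish_reason": null}',
    '{"delta": {}, "index": 0, "finish_reason": "content_filter"}',
]

shown = ""
for line in stream:
    chunk = json.loads(line)
    shown += chunk["delta"].get("content", "")  # render incrementally
    if chunk["finish_reason"] == "content_filter":
        # retract everything already displayed, as Bing does mid-answer
        shown = "Sorry! That's on me, I can't give a response to that right now."

print(shown)
```

The user briefly sees the partial answer because the retraction can only happen after the offending token has already been streamed.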