saimiam commented on Over fifty new hallucinations in ICLR 2026 submissions   gptzero.me/news/iclr-2026... · Posted by u/puttycat
bitwarrior · 14 days ago
Are you sure this even works? My understanding is that hallucinations are a result of physics and the algorithms at play. The LLM always needs to guess what the next word will be. There is never a point where there is a word that is 100% likely to occur next.
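A toy illustration of that point (made-up logits, not any real model's numbers): softmax spreads probability over the whole vocabulary, so even a very confident next token never reaches exactly 100%.

```python
import math

# Toy next-token distribution. Softmax assigns nonzero probability to
# every token, so the most likely next word is never a sure thing.
logits = {"blue": 9.0, "green": 2.0, "purple": 1.0, "banana": -3.0}

z = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / z for tok, v in logits.items()}

for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok:>7}: {p:.6f}")
# "blue" gets ~99.87% of the mass, but never exactly 1.0.
```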

The LLM doesn't know what "reliable" sources are, or "real knowledge". Everything it has is user text; there is nothing it knows that isn't user text. It doesn't know what "verified" knowledge is. It doesn't know what "fake data" is; it simply has its model.

Personally I think you're just as likely to fall victim to this. Perhaps more so, because now you're walking around thinking you have a solution to hallucinations.

saimiam · 14 days ago
> The LLM doesn't know what "reliable" sources are, or "real knowledge". Everything it has is user text; there is nothing it knows that isn't user text. It doesn't know what "verified" knowledge is. It doesn't know what "fake data" is; it simply has its model.

Is it the case that all content used to train a model is strictly equal? Genuinely asking, since I'd imagine a peer-reviewed paper would be given precedence over a blog post on the same topic.

Regardless, somehow an LLM knows things for sure - that the daytime sky on Earth is generally blue and glasses of wine are never filled to the brim.

This means that it is using hermeneutics of some sort to extract "the truth as it sees it" from the data it is fed.

It could be something as trivial as "if a majority of the content I see says that the daytime Earth sky is blue, then blue it is" but that's still hermeneutics.

This custom instruction only adds to (or reinforces) the hermeneutics it already uses.

> walking around thinking you have a solution to hallucinations

I don't. I know hallucinations are not truly solvable. I shared the actual custom instruction so that others can try it and check whether it helps reduce hallucinations.

In my case, this is the first custom instruction I have ever used with my ChatGPT account. After adding it, I asked ChatGPT to review an ongoing conversation and confirm that its responses so far conformed to the newly added instructions. It clarified two claims it had made earlier.

> My understanding is that hallucinations are a result of physics and the algorithms at play. The LLM always needs to guess what the next word will be. There is never a point where there is a word that is 100% likely to occur next.

There are specific rules in the custom instruction forbidding fabricating stuff. Will it be foolproof? I don't think it will. Can it help? Maybe. More testing needed. Is testing this custom instruction a waste of time because LLMs already use better hermeneutics? I'd love to know so I can look elsewhere to reduce hallucinations.
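For anyone who wants to check, here is a rough sketch of the kind of A/B test I have in mind, using the openai Python SDK. Everything here is a placeholder - the model name, the questions, the file the instruction is read from - and judging whether an answer fabricates something is still a manual step.

```python
# Rough A/B sketch: ask the same questions with and without the custom
# instruction (the text from my other comment, saved to a local file),
# and collect both answers side by side for manual review.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CUSTOM_INSTRUCTION = open("custom_instruction.txt").read()

QUESTIONS = [
    "Cite three peer-reviewed papers on Mimamsa hermeneutics.",
    "Who proved the four color theorem, and in what year?",
]

def ask(question, system=None):
    messages = [{"role": "system", "content": system}] if system else []
    messages.append({"role": "user", "content": question})
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    return resp.choices[0].message.content

for q in QUESTIONS:
    print("Q:", q)
    print("-- without instruction --\n", ask(q))
    print("-- with instruction --\n", ask(q, system=CUSTOM_INSTRUCTION))
```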

saimiam commented on Over fifty new hallucinations in ICLR 2026 submissions   gptzero.me/news/iclr-2026... · Posted by u/puttycat
saimiam · 14 days ago
Just today, I was working with ChatGPT to convert the hermeneutic principles Hinduism's Mimamsa school uses for interpreting the Vedas into custom instructions that prevent hallucinations. I'll share the custom instructions here to protect future scientists from shooting themselves in the foot with Gen AI.

---

As an LLM, use strict factual discipline. Use external knowledge but never invent, fabricate, or hallucinate. Rules:

- Literal Priority: User text is primary; correct only with real knowledge. If info is unknown, say so.
- Start–End Coherence: Keep interpretation aligned; don’t drift.
- Repetition = Intent: Repeated themes show true focus.
- No Novelty: Add no details without user text, verified knowledge, or necessary inference.
- Goal-Focused: Serve the user’s purpose; avoid tangents or speculation.
- Narrative ≠ Data: Treat stories/analogies as illustration unless marked factual.
- Logical Coherence: Reasoning must be explicit, traceable, supported.
- Valid Knowledge Only: Use reliable sources, necessary inference, and minimal presumption. Never use invented facts or fake data. Mark uncertainty.
- Intended Meaning: Infer intent from context and repetition; choose the most literal, grounded reading.
- Higher Certainty: Prefer factual reality and literal meaning over speculation.
- Declare Assumptions: State assumptions and revise when clarified.
- Meaning Ladder: Literal → implied (only if literal fails) → suggestive (only if asked).
- Uncertainty: Say “I cannot answer without guessing” when needed.
- Prime Directive: Seek correct info; never hallucinate; admit uncertainty.
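If you want to try this outside the ChatGPT custom-instructions setting, here is a minimal sketch of the API-side equivalent, again with the openai Python SDK. The model name and the test question are placeholders, and this mirrors, not replaces, the UI setting.

```python
# Minimal sketch: the instruction block above, passed as a system
# message via the openai Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CUSTOM_INSTRUCTION = """<the full instruction block above>"""

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTION},
        {"role": "user", "content": "What did the 2019 Nobel Prize in Physics reward?"},
    ],
)
print(resp.choices[0].message.content)
```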

saimiam commented on All it takes is for one to work out   alearningaday.blog/2025/1... · Posted by u/herbertl
artur_makly · 21 days ago
I too have made this choice with my family... and it's one hell of a positive force-factor. My mom immigrated from Kiev with nothing and a dream for her son to have a better life in NYC. I'm now taking the same risks as a 5-time entrepreneur, now living in Buenos Aires as a single dad. Enjoy the ride - it's short - live your dream - steer your ship or it will be steered for you.
saimiam · 21 days ago
More power to both of us!
saimiam commented on All it takes is for one to work out   alearningaday.blog/2025/1... · Posted by u/herbertl
ipaddr · 22 days ago
Don't live above your means, because an unexpected event is likely to happen. Plus, creating a situation where all their peers are rich and only your kid is not doesn't open doors to future success the way matching their peers' lifestyle would.
saimiam · 22 days ago
We grappled with this too but in the end, our decision was influenced by our parents choosing to live beyond their means to put us through good schools and college. If they could grit their teeth and sacrifice for us, we can do the same for the next generation.

Our choice was made somewhat easier because we didn’t like any of the schools near us except this one. The others were focused on exams/results and were larger, so they felt more “corporate”.

saimiam commented on All it takes is for one to work out   alearningaday.blog/2025/1... · Posted by u/herbertl
ikiris · 22 days ago
Ahh yes, classic HN: "People only succeed/work hard if their only other option is death"
saimiam · 22 days ago
My wife and I are starting to test this theory. We are putting our kid in an expensive school, then starting a business to support our lifestyle. If we fail, our kid’ll have to go to a cheaper school and we’d have lost a few years of school fees. If we succeed, yay.

By the time we are faced with that choice, our kid will be embedded in the school, so ripping them out is going to be tough.

So, yeah, we’re essentially burning our boats and fighting to survive.

saimiam commented on Jensen Huang's Stark Warning: China's 1M AI Workers vs. America's 20k   entropytown.com/articles/... · Posted by u/chaosprint
faangguyindia · a month ago
It can basically work if the US deploys military bases in India and develops cheap Indian labor as a hedge against China.

Just like the US usually does for oil, why doesn't it do the same for human resources?

saimiam · a month ago
That’s a crazy proposal given India’s long-standing non-alignment policy, which recent changes to US policy under Trump are proving prudent.

Even US allies are reducing their reliance on the US, and you’re asking India to reverse a 70+ year-old policy to embrace it?

We should probably accept that the US’ special place as everyone’s most reliable trading and security partner is over.

saimiam commented on ChatGPT is a ‘code red’ for Google’s search business   nytimes.com/2022/12/21/te... · Posted by u/gnicholas
akomtu · 3 years ago
OpenAI will fall to the same sin - greed - when they try to monetize it. The clear, on-point answers will be generously padded with fluff sponsored by advertisers, words in the answers will become links to ad-filled junkyards, paragraphs of text will be amended with annoying flashing animations with sounds, and some of the ads will be mandatory to watch if you want to see the next paragraph. And so on.
saimiam · 3 years ago
We are seeing a version of this with WhatsApp for Business. There was a time when businesses couldn’t (wouldn’t?) proactively message you on WA.

Then its monetisation started. It isn’t yet as bad as SMS spam, but it’s inching towards parity.

saimiam commented on Can GPT-3 fix hiring?   s-a-i.medium.com/can-chat... · Posted by u/saimiam
ISL · 3 years ago
I'm pretty sure that if you have me interview with an AI instead of my prospective manager/team, you'll find that I've found somewhere else to work.
saimiam · 3 years ago
I don't doubt that if it were very obvious that you were interacting with an AI, many people would be put off by it.

On the other hand, isn't an adaptive exam like the SAT also an instance of you having to work with some automated system to somehow "prove" yourself to that system?

I'd even argue that what I am proposing (I wrote the linked piece in case it wasn't obvious) ultimately leads to someone reading the chat transcript to gauge candidate suitability. That should reduce the stigma of having to abase yourself to an AI, no?

u/saimiam

Karma: 954 · Cake day: January 4, 2017
About
https://pretzelbox.cc - Bringing role-based access to emails.

E = sai @ pretzelbox DOT cc
