oldstrangers commented on A.I. researchers are negotiating $250M pay packages   nytimes.com/2025/07/31/te... · Posted by u/jrwan
oldstrangers · 24 days ago
This strikes me as "end game" type behavior. These companies see the writing on the wall and are willing to throw everything they have left at staying relevant in the coming post-AGI world. Honestly, I'm more alarmed than shocked by the pay packages.
oldstrangers commented on On Not Carrying a Camera – Cultivating memories instead of snapshots   hedgehogreview.com/issues... · Posted by u/pseudolus
oldstrangers · 4 months ago
I'm a designer by profession, but the majority of my actual schooling was in photography. Capturing a great visual moment is second nature to me, and the process is so involuntary that you'd rarely even notice I've taken a photo. You can absolutely live in the moment and still have something to show for it.

oldstrangers commented on Show HN: I Made an Escape Room Themed Prompt Injection Challenge   pangea.cloud/landing/ai-e... · Posted by u/planetpr
oldstrangers · 5 months ago
Constructive feedback: every design choice on this site screams "I'm about to get scammed somehow."

oldstrangers commented on The Hallucinatory Thoughts of the Dying Mind   nautil.us/the-hallucinato... · Posted by u/dnetesn
oldstrangers · 6 months ago
If you're a fan of the holographic brain, you could postulate that the brain's usual filtering mechanisms are degraded enough to let consciousness tap into nonlocal holographic information. Or perhaps it's a feature, returning you to the "cosmic source" of all life and knowledge.
oldstrangers commented on Introducing deep research   openai.com/index/introduc... · Posted by u/mfiguiere
kees99 · 7 months ago
"Occasional nonsense" doesn't sound great, but would be tolerable.

The problem is that LLMs pull answers out of their behind, just like a lazy student on an exam. "Hallucinations" is the word people use to describe this.

These are extremely hard to spot unless you happen to know the right answer already, at which point, why ask? And they are everywhere.

One example: recently there was quite a discussion about LLMs being able to understand (and answer) base16 (aka "hex") encoded prompts on the fly, so I went on to try base64, gzipped base64, zstd-compressed base64, etc.

To my surprise, the LLM got most of those encodings/compressions right, decoded and decompressed the question, and answered it flawlessly.

But with a few encodings, the LLM detected base64 correctly, identified the compression algorithm correctly, and then... instead of actually decompressing, made up a completely different payload and proceeded to answer that, without any hint that anything sinister was going on.
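
For anyone who wants to try this themselves, here's a minimal sketch of how such test prompts can be built, assuming Python and its standard base64/gzip modules (the question string is made up, and the zstd note is my own addition):

    import base64
    import gzip

    # Hypothetical test question; any short prompt works the same way.
    question = b"What is the capital of France?"

    # Plain base64: the model only has to decode it.
    plain_b64 = base64.b64encode(question).decode()

    # Gzipped base64: the model has to base64-decode, then gunzip.
    gzip_b64 = base64.b64encode(gzip.compress(question)).decode()

    print("base64 prompt:      ", plain_b64)
    print("gzip+base64 prompt: ", gzip_b64)

    # zstd-compressed base64 would need a third-party package
    # (e.g. zstandard), so it's omitted from this sketch.

Paste either string into a chat and check whether the model actually recovers the question or silently invents a different payload.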

We really need LLMs to reliably calculate and express confidence. Otherwise they will remain mere toys.

oldstrangers · 7 months ago
Yeah, even with the hallucinations, what you described still represents a 'net gain' over not having any of that at all.

u/oldstrangers

Karma: 2733 · Cake day: October 3, 2010