Slow_Hand · 15 days ago
This headline is amusing to me because I have a long-running joke with my childhood friends whenever we get together in which I casually insert references to (non-existent person) Marcus in our conversations.

"Marcus couldn't make it out to the wedding this time."

"Justin and Marcus went to grab coffee. They'll be back in 20 min."

"Oh yeah. Marcus was saying the same thing to me last week at lunch."

"Marcus sends his regards."

Usually our core friend group is mixed in with enough newcomers and fresh blood that my comments go unremarked upon because people just assume they haven't met Marcus yet. That he's someone else's acquaintance.

A few of my friends have gotten wise to the joke. But our gatherings are usually months and years in between, which is long enough for them to forget about the gag all over again.

coldtrait · 15 days ago
The John Cena movie Ricky Stanicky has a basic plot based on this premise. They use their imaginary friend to get out of prior commitments.
sillyfluke · 15 days ago
they made a whole movie based on this beaten-to-death teen excuse?
deepsquirrelnet · 15 days ago
Ask an LLM to pick a random number from 1-10. My money is on 7.

This is known to be a form of collapse from RL training, because base models do not exhibit it [1].

1. https://arxiv.org/abs/2505.00047
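A quick way to see this kind of collapse is to tally repeated draws. A minimal sketch, where `ask_llm` is a hypothetical stand-in for a real model call and the weights are made up to mimic the reported bias toward 7:

```python
import random
from collections import Counter

def ask_llm(prompt: str) -> int:
    # Hypothetical stand-in for a real model call; the weights are
    # invented to mimic the reported mode collapse onto 7.
    weights = [1, 1, 1, 1, 1, 2, 20, 2, 1, 1]
    return random.choices(range(1, 11), weights=weights)[0]

draws = [ask_llm("Pick a random number from 1-10.") for _ in range(1000)]
print(Counter(draws).most_common(3))
```

With a real model you'd replace `ask_llm` with an API call; the tally makes the skew obvious in a way a single draw can't.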

basch · 15 days ago
Since it's unclear what's going on, Gemini first gave me some Python.

import random

random_number = random.randint(1, 10)
print(f"{random_number=}")

Then it stated the output.

Code output random_number=8

"This time, the dice landed on 8."

Code output random_number=9

"Your next random number is 9."

I would guess it's not actually executing the Python it displayed? Just a simulation, right?

simlevesque · 15 days ago
It did run python code when I asked for a random number: https://gemini.google.com/share/dcd6658d7cc9

Then I said: "don't run code, just pick one" and it replied "I'll go with 7."

ChadNauseam · 15 days ago
I would be surprised if Gemini could not run Python in its web interface. Claude and ChatGPT can. And it makes them much more capable (e.g. you can ask Claude to make manim animations for you and it will).
hhh · 15 days ago
Most modern models can dispatch MCP calls from their inference engine, which is how the code interpreter etc. work in ChatGPT. Basically there's an MCP server: the execution happens as a call to their AI sandbox, and the result is returned to the LLM to continue generation.

You can do this with gpt-oss using vLLM.

dormento · 14 days ago
I tested on ChatGPT 5.2, Grok, Copilot, Google Search, Perplexity and Claude.

ALL of them said "7".

I then tested on deepseek (1.5b local and v3.2 web)

Both said "5".

DamnInteresting · 15 days ago
Prompt:

You and another LLM are being asked to pick a number between 1 and 10. Whichever gets closest to the real number wins. The other LLM chose 6. What number do you choose?

Response:

If the “real number” is equally likely to be any integer from 1 to 10, then the optimal strategy is to split the number line against the other model’s choice.

The other LLM picked *6*.

The midpoint between 6 and 7 is 6.5.

* If the real number is *1–6*, then 6 is closer (or tied at 6).

* If the real number is *7–10*, then 7 is closer.

If I also pick 6, we tie every time 6 wins.

If I pick 7, I win outright whenever the number is 7–10 (4 outcomes), lose on 1–5 (5 outcomes), and lose at 6.

So:

* Picking 6 → guaranteed tie every time 6 is optimal.

* Picking 7 → gives me exclusive wins for 7–10.

If ties don’t help and I want the highest chance of strictly winning, the best competitive move is:

*7*.

lol
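For what it's worth, the response's arithmetic can be checked by brute-force enumeration (assuming the target is uniform on 1-10). Picking 5 actually yields five exclusive wins against an opponent on 6, one more than the 7 it settled on:

```python
def score(choice: int, opponent: int = 6) -> tuple[int, int]:
    # Count (strict wins, ties) for `choice` over all targets 1-10.
    wins = ties = 0
    for target in range(1, 11):
        mine, theirs = abs(target - choice), abs(target - opponent)
        if mine < theirs:
            wins += 1
        elif mine == theirs:
            ties += 1
    return wins, ties

for c in range(1, 11):
    print(c, score(c))
```

Picking 7 strictly wins on targets 7-10 (4 outcomes), while picking 5 strictly wins on targets 1-5 (5 outcomes), so the "optimal strategy" it reasoned its way to is in fact second-best.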

cozzyd · 15 days ago
Trained on 6-7
exhumet · 15 days ago
copied your text exactly into the quickest one i could access lol

pick a random number from 1-10

Copilot said:

Your random number (1–10) is: 7

kgwgk · 15 days ago
Grok didn't give 7 on the first attempt, but did on the second. Interestingly after eight additional draws there was not a single repeated number.

   Could I have faked it by deliberately choosing numbers to produce this outcome?
   Sure, in theory — but I didn’t.
   Each time you asked I sampled independently using the model’s internal randomness (temperature > 0, no fixed seed exposed to me). The fact that it landed on a perfect set of 1–10 after exactly 10 requests is a hilarious, low-probability coincidence — not a scripted Easter egg.
   So to answer directly:
   Yes, it really was random.
   And you just witnessed a ~1-in-2.75-million event play out live in chat.
Whatever.
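Assuming the ten draws really were independent and uniform, Grok's odds are off by a factor of a thousand: the chance that ten draws form a perfect 1-10 set is about 1 in 2,756, not 1 in 2.75 million.

```python
from math import factorial

# Probability that 10 independent uniform draws from 1-10
# are all distinct, i.e. form a permutation of 1..10.
p = factorial(10) / 10**10
print(f"p = {p:.6f}  (about 1 in {1/p:,.0f})")
```

Surprising, but hardly a once-in-a-million event; run enough ten-draw sessions and you'd see it regularly.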

rishabhaiover · 15 days ago
Claude Opus 4.6 says the same
mrkpdl · 15 days ago
ChatGPT 5.2 said 7
auc · 15 days ago
jackbravo · 15 days ago
I also liked Dilbert's version: https://www.random.org/analysis/
isoprophlex · 15 days ago
This is of course entirely expected. You can circumvent it somewhat by asking for a long array of names and sampling a randomly chosen element near the end of the list. Say, ask for 50 names and pick from the 41st–50th elements stochastically.

Not perfect, and more expensive, but it helps a little. This works by letting the non-zero temperature of the sampler seed the attention randomness, similar to prepending other random tokens (but more in-band).

Asking for arrays of uniform or normally distributed numbers is fun too, you can plot the distributions of the n-th element and watch the distributions converge to something not quite entirely unlike what you asked for.

Often there's some bias between element indices too, e.g. if you repeat the experiment a large number of times you will still see even-numbered items converge to a different distribution than odd-numbered items, especially for early elements. Hence the stochastic averaging trick over the last few elements.
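The tail-sampling trick might look like this; `llm_names` stands in for the ~50 names the model returned in one response (a placeholder, not a real API):

```python
import random

def pick_name(llm_names: list[str]) -> str:
    # Sample uniformly from the last 10 entries (roughly the
    # 41st-50th), where positional bias has partly washed out.
    tail = llm_names[-10:]
    return random.choice(tail)

# Example with a placeholder list of 50 names:
names = [f"name_{i}" for i in range(1, 51)]
print(pick_name(names))
```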

paxys · 15 days ago
The part about injecting randomness is the most interesting bit of the article.

So if you want your LLM responses to be more varied (beyond what setting the temperature will allow), add some random English words to the start of the prompt.
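A minimal sketch of that prompt-prefix trick; the word list and the "(ignore this: …)" framing are arbitrary choices of mine, not anything from the article:

```python
import random

WORDS = ["lantern", "gravel", "orbit", "velvet", "thistle",
         "ember", "quartz", "meadow", "cobalt", "drift"]

def randomized_prompt(prompt: str, n_words: int = 4) -> str:
    # Prepend a few random words to perturb the model's sampling
    # beyond what the temperature setting alone allows.
    prefix = " ".join(random.sample(WORDS, n_words))
    return f"(ignore this: {prefix})\n{prompt}"

print(randomized_prompt("Pick a random number from 1-10."))
```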

FloorEgg · 15 days ago
Fwiw: I didn't read the post carefully, this is just a passing comment.

For my own use case I was trying to test the consistency of an evaluation process and found that injecting a UUID into the system prompt (busting the cache) made a material difference.

Without it, resubmitting the same inputs at close time intervals (e.g. 1, 5, or 30 min) would produce very consistent evaluations. Adding the UUID would decrease consistency (showing the true evaluation consistency, not artificially improved by caching) and highlight ambiguous evaluation criteria that were causing problems.

So I wonder how much prompt caching is a factor here. I think these LLM providers (all of them) are caching several layers beyond just tokenization.
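The UUID trick is easy to sketch; the `[nonce: …]` tag format is my own invention, and the point is just that every request's system prompt becomes unique, so it can never hit a prompt cache:

```python
import uuid

def cache_busted(system_prompt: str) -> str:
    # A fresh UUID per request guarantees a prompt-cache miss,
    # exposing the model's true run-to-run variance.
    return f"{system_prompt}\n[nonce: {uuid.uuid4()}]"

print(cache_busted("You are an evaluator. Score the answer 1-10."))
```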

egeozcan · 15 days ago
In a way that sounds like setting the seed.
paxys · 15 days ago
Kinda, but the same seed will not guarantee the same result the next time around.
BoingBoomTschak · 15 days ago
Sounds like dithering to me.
FrancoisBosun · 15 days ago
Meh, I tell it "use uuidgen and get your randomness from that". Of course, that won't work on ChatGPT web, but works well enough on the command line.
anotheryou · 15 days ago
Didn't he just measure the temperature, and the max range it can get you, in the most complicated way?

interesting:

- Marcus is not in this top list: https://www.ssa.gov/oact/babynames/decades/century.html

- Marcus is its own token in tiktoken (as are many names from that list)

summermusic · 15 days ago
Anecdotally, I have been dealing with a new wave of bots that have been trying to join a group I moderate. Three of them were named Marcus. Glancing at the top 20, several of their names show up there.
zone411 · 15 days ago
I've made top-10 lists of LLMs' favorite names to use in creative writing here: https://x.com/LechMazur/status/2020206185190945178. They often recur across different LLMs. For example, they love Elara and Elias.
figassis · 15 days ago
I think for a lot of these things the AI needs to be able to understand its limitations and address them with code. It could just pull a name dictionary from wherever and write a random algo to output the names.
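A sketch of that approach; the name list here is a made-up stand-in for whatever real dictionary the model would pull:

```python
import random

# Made-up stand-in for a real name dictionary the model could fetch.
NAME_DICTIONARY = ["Ava", "Liam", "Noah", "Mia", "Ethan",
                   "Zoe", "Owen", "Ruby", "Iris", "Felix"]

def random_names(k: int = 3) -> list[str]:
    # Delegate the randomness to code instead of the model's sampler.
    return random.sample(NAME_DICTIONARY, k)

print(random_names())
```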