ay commented on LLMs tell bad jokes because they avoid surprises   danfabulich.medium.com/ll... · Posted by u/dfabulich
IshKebab · 8 days ago
This sounds really convincing but I'm not sure it's actually correct. The author is conflating the surprise of punchlines with their likelihood.

To put it another way, ask a professional comedian to complete a joke with a punchline. It's very likely that they'll give you a funny surprising answer.

I think the real explanation is that good jokes are actually extremely difficult. I have young children (4 and 6). Even 6-year-olds don't understand humour at all. Much like LLMs, they know the shape of a joke from having heard jokes before, but they aren't funny, in the same way LLM jokes aren't funny.

My 4 year old's favourite joke, that she is very proud of creating is "Why did the sun climb a tree? To get to the sky!" (Still makes me laugh of course.)

ay · 7 days ago
I found your example of a joke a child made very interesting - to me, a good joke is something that brings an unexpected perspective on things while highlighting some contradictions in one's world model.

In the adult world model there is absolutely no contradiction in the joke you mention - it’s just a bit of cute nonsense.

But in a child’s world this joke might be capturing an apparent contradiction - the sun is “in the tree”, so it must have climbed it to be there (as they would have to do), yet they also know that the sun is already in the sky, so it had absolutely no reason to do that. Also, “because it’s already there” - which is a tricky idea in itself.

We take planetary systems and algebra and other things we can’t really perceive for granted, but a child’s model of the world is made of concrete objects that mostly need a surface to be on, so the sun is a bit of a conundrum in itself! (Speaking from my own experience of the shift from arithmetic to algebra when I was ~8.)

If it’s not too personal a question - I would love to hear what your child would answer if asked why she finds that joke funny. And whether she agrees with my explanation of why it must be funny :-)

ay commented on We'd be better off with 9-bit bytes   pavpanchekha.com/blog/9bi... · Posted by u/luu
bawolff · 18 days ago
> But in a world with 9-bit bytes IPv4 would have had 36-bit addresses, about 64 billion total.

Or we would have had 27-bit addresses and run into problems sooner.
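For reference, the arithmetic behind both figures can be checked in a couple of lines (a quick Python sketch; the 36- and 27-bit widths correspond to four and three 9-bit bytes):

```python
# Address-space sizes for hypothetical 9-bit-byte IPv4 variants,
# next to the real 32-bit one: 4 bytes of 9 bits = 36 bits, 3 bytes = 27 bits.
for bits in (32, 36, 27):
    print(f"{bits}-bit addresses: {2**bits:,} total")

# 2**36 is ~68.7 billion ("about 64 billion" counting in binary billions);
# 2**27 is only ~134 million, far fewer than IPv4's ~4.3 billion.
```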

ay · 18 days ago
The first transition was to IPv4, and it was reportedly (I wasn’t in the workforce yet :-) relatively easy…

https://www.internetsociety.org/blog/2016/09/final-report-on...

Some more interesting history reading here:

https://datatracker.ietf.org/doc/html/rfc33

ay commented on AWS European Sovereign Cloud to be operated by EU citizens   aboutamazon.eu/news/aws/a... · Posted by u/pulisse
tensor · 20 days ago
I think that's pretty straightforward. US VC funding is far greater and easier to obtain than in Europe or other Western nations. But it's a bit of a chicken-and-egg scenario: the US VC space exists partly because of the wild success of Silicon Valley, and once it got a significant lead it became a self-reinforcing system.

To compete, other countries need their own VC systems, which is a bit tricky. It likely requires a level of government funding or other incentives to get off the ground and ramp up. Then you also need to incentivize VCs to stay in your country.

At least my 2cents.

ay · 20 days ago
There’s an “EU Inc” initiative aiming to fix things. Fingers crossed.

https://www.eu-inc.org/

ay commented on Vibe code is legacy code   blog.val.town/vibe-code... · Posted by u/simonw
simonw · 25 days ago
This is really clear and well argued. I particularly enjoyed this line:

> If you don't understand the code, your only recourse is to ask AI to fix it for you, which is like paying off credit card debt with another credit card.

ay · 25 days ago
This is a super apt analogy. Every time I decided to let LLMs “vibe-fix” non-obvious things for the sake of experiment, it spiraled into unspeakable FUBAR territory that had to be reverted - a situation very similar to that kind of financial collapse.

Invariably, after engaging the brain, the real fix turned out to be quite simple - but, also invariably, it was hidden behind 2-3 levels of indirection in reasoning.

On the other hand, I had rather pleasant results when “pair-debugging”: my demanding that it explain why, or just correcting it in the places where it was about to go astray, certainly had an effect - in return I got some really nice spotting of “obvious” but small things I might otherwise have missed.

That said, the definition of “going astray” varies - from innocently jumping to what looked like unsupported conclusions, to blatantly telling me something was true right after ingesting a log with a printout showing the opposite.

ay commented on Study mode   openai.com/index/chatgpt-... · Posted by u/meetpateltech
ricardobeat · a month ago
This is meaningless without knowing which model, size, version and if they had access to search tools. Results and reliability vary wildly.

In my case I can’t even remember last time Claude 3.7/4 has given me wrong info as it seems very intent on always doing a web search to verify.

ay · a month ago
It was Claude in November 2024, but “west of the equator” is universal enough nonsense to illustrate the fundamental issue - just that today it shows up in much subtler dimensions.

A not-so-subtle example from yesterday: Claude Code claiming that assertion Foo was true, right after ingesting the logs with “assertion Foo: false” in them.

ay commented on Study mode   openai.com/index/chatgpt-... · Posted by u/meetpateltech
romaniitedomum · a month ago
> Learning something online 5 years ago often involved trawling incorrect, outdated or hostile content and attempting to piece together mental models without the chance to receive immediate feedback on intuition or ask follow up questions. This is leaps and bounds ahead of that experience.

But now, you're wondering if the answer the AI gave you is correct or something it hallucinated. Every time I find myself putting factual questions to AIs, it doesn't take long for it to give me a wrong answer. And inevitably, when one raises this, one is told that the newest, super-duper, just released model addresses this, for the low-low cost of $EYEWATERINGSUM per month.

But worse than this, if you push back on an AI, it will fold faster than a used tissue in a puddle. It won't defend an answer it gave. This isn't a quality that you want in a teacher.

So, while AIs are useful tools in guiding learning, they're not magical, and a healthy dose of scepticism is essential. Arguably, that applies to traditional learning methods too, but that's another story.

ay · a month ago
My favourite story of that kind involved attempting to use an LLM to figure out whether it was true, or my own hallucination, that the tidal waves were higher in the Canary Islands than in the Caribbean, and why; it spewed several paragraphs of plausible-sounding prose, and finished with “because the Canary Islands are to the west of the equator”.

This phrase is now an in-joke, used as a reply whenever someone quotes LLM output as “facts”.

ay commented on Measuring the impact of AI on experienced open-source developer productivity   metr.org/blog/2025-07-10-... · Posted by u/dheerajvs
ivanovm · a month ago
I find the very popular response of "you're just not using it right" to be a big copout for LLMs, especially at the scale we see today. It's hard to think of any other major tech product where it's acceptable to shift so much blame onto the user. Typically if a user doesn't find value in the product, we agree that the product is poorly designed/implemented, not that the user is bad. But AI seems somehow exempt from this sentiment.
ay · a month ago
Just a few examples: Bicycle. Car(driving). Airplane(piloting). Welder. CNC machine. CAD.

All take quite an effort to master, until then they might slow one down or outright kill.

ay commented on I Ported SAP to a 1976 CPU. It Wasn't That Slow   github.com/oisee/zvdb-z80... · Posted by u/weinzierl
U1F984 · 2 months ago
From the article: “Lookup tables are always faster than calculation” - is that true? I'd think that while it may have been in the distant past, today, with memory being much slower than the CPU, the picture is different. If you're calculating a very expensive function over a small domain, so that the lookup fits in L1 cache, then I can see it would be faster, but you can do a lot of calculating in the time needed for a single main-memory access.
ay · 2 months ago
You will need to first sit and ballpark, and then sit and benchmark, and discover your ballpark was probably wrong anyhow :-)

Some pointers I have found useful in that regard, for both steps:

1. https://www.agner.org/optimize/instruction_tables.pdf - an extremely nice resource on micro architectural impacts of instructions

2. https://llvm.org/docs/CommandGuide/llvm-mca.html - LLVM tooling that allows you to see some of these effects on real machine code

3. https://www.intel.com/content/www/us/en/developer/articles/t... - shows you whether the above matches reality (besides the CPU alone, more often than not your bottleneck is actually memory accesses - at least on the first access, if it wasn’t triggered by a hardware prefetcher or a hint to it). On Linux it would be staring at “perf top” results.

So, the answer is as is very often - “it depends”.
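To make the “sit and benchmark” step concrete, here is a minimal sketch in Python (the toy function and the names are my own for illustration; Python timings won't expose C-level cache effects, but the shape of the experiment - measure both paths instead of trusting the ballpark - carries over):

```python
import math
import timeit

# Toy "expensive" function over a small domain: sin of an 8-bit index.
# A 256-entry table of floats fits comfortably in L1 cache.
TABLE = [math.sin(i) for i in range(256)]

def via_lookup(i):
    # Precomputed answer: one mask plus one indexed load.
    return TABLE[i & 0xFF]

def via_compute(i):
    # Recompute every time.
    return math.sin(i & 0xFF)

# Time both; the only trustworthy answer is the one measured on your machine.
for name, fn in (("lookup", via_lookup), ("compute", via_compute)):
    t = timeit.timeit(lambda: fn(123), number=100_000)
    print(f"{name}: {t:.4f}s")
```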

ay commented on The Effect of Noise on Sleep   empirical.health/blog/eff... · Posted by u/brandonb
kogus · 2 months ago
I personally would not be able to sleep well with earplugs. The feeling of pressure in my ears, combined with the 'pushing' of the earplug if I rolled over to lay on my side would be very uncomfortable.
ay · 2 months ago
Try “3M earplugs yellow” on Amazon. They sit pretty much fully immersed in the ear (for me), and the insulation is very good. The pressure - yeah, it took maybe a few days to get used to, but…

The effects for me (living in Brussels city centre, so quite noisy - police, ambulance, sometimes loud tourists past midnight, and a bit of construction at 6am nearby to keep it real :-) ) were very pronounced:

From needing 9 hours and feeling groggy in the mornings anyway, to easily going on 7-8, feeling very refreshed and alert each day.

A cool side effect was that this superpower works also while traveling - so, I no longer care how noisy the airco is in the hotel room, being next to the lift, or having the window above the lively bar.

The only downside with those earplugs is that they are good for maybe 3-4 nights and then are too squished to be useful; but the upsides more than make up for it for me.

ay commented on A new PNG spec   programmax.net/articles/p... · Posted by u/bluedel
mystifyingpoi · 2 months ago
Cable labeling could fix 99% of the issues with USB-C compat. The solution should never be blaming consumer for buying the wrong cable. Crappy two-wire charge-only cables are perfectly fine for something like a night desk lamp. Keep the poor cables, they are okay, just tell me if that's the case.
ay · 2 months ago
Same thing with PNG. Just call the format with the new additions PNGX, so the user can clearly see that the reason their software can’t display the image is not file corruption.

This is just pretending that a bag with a cat and a bag with a dog are one and the same thing because each is called “a bag”…

