Readit News
snarkconjecture commented on Show HN: What's my JND? – a colour guessing game   keithcirkel.co.uk/whats-m... · Posted by u/Keithamus
OisinMoran · 3 days ago
This is fun! I just played once and got 0.0016, which it says is "absurdly below the theoretical limit"...

Okay, tried again and got 0.0034, which it still says is beyond the human limit! I'll have to give this to my mum because we often argue about colours and I suspect she might be a tetrachromat.

Both tests on a Pixel 10 btw

snarkconjecture · 2 days ago
Tetrachromacy wouldn't affect a test taken through a phone screen.
snarkconjecture commented on Billion-Parameter Theories   worldgov.org/complexity.h... · Posted by u/seanlinehan
wavemode · 3 days ago
Not to sound condescending, but this reads like someone familiar with LLMs but very unfamiliar with statistics in general.

If we could understand economics, or poverty, or any number of other social structures, simply by cramming data into a statistical model with billions of parameters, we would've done that decades ago and these problems would already be understood.

In the real world, though, there is a phenomenon called overfitting. In other words you can perfectly model the training data but be unable to make useful predictions about new data (i.e. the future).

snarkconjecture · 3 days ago
Deep neural networks can generalize well even when they're far into the overparametrized regime where classical statistical learning theory predicts overfitting. This is usually called "double descent" and there are many papers on it.
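The classical overfitting picture the parent comment describes can be seen in a few lines: a polynomial with as many parameters as data points fits the training set essentially perfectly but does much worse on fresh data. (This is a generic illustration with made-up data, not anything from the linked article; the double-descent point above is that heavily overparametrized deep nets often escape this failure mode.)

```python
import numpy as np

rng = np.random.default_rng(0)

# 10 noisy training points from a simple underlying function
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(10)

# Fresh test points drawn from the same process
x_test = np.linspace(0.05, 0.95, 50)
y_test = np.sin(2 * np.pi * x_test) + 0.1 * rng.standard_normal(50)

# Degree-9 polynomial: 10 parameters for 10 points, so it interpolates
# the training data (near-)exactly -- the classical overfitting regime
coeffs = np.polyfit(x_train, y_train, deg=9)
train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(f"train MSE: {train_mse:.2e}, test MSE: {test_mse:.2e}")
```

The train error is tiny while the test error is much larger, which is exactly "perfectly model the training data but be unable to make useful predictions about new data".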
snarkconjecture commented on Why do people keep writing about the imaginary compound Cr2Gr2Te6?   righto.com/2025/08/Cr2Ge2... · Posted by u/freediver
pseudochemist · 6 months ago
> I'm inclined to give them a pass. It's easy enough to figure out that it should be germanium and not gadolinium, and dyslexia already exists among scientists.

I’m not. If somebody said pi was 9.14, I think no one would give it a pass. It’s not like a misspelling. It’s an invalid element, which is the chemistry equivalent of an absurdly wrong number in maths.

snarkconjecture · 6 months ago
It's more like saying pi is approximately "3..14". Easily corrected syntax errors aren't as bad as semantic errors.
snarkconjecture commented on Hot take: GPT 4.5 is a nothing burger   garymarcus.substack.com/p... · Posted by u/isaacfrond
refulgentis · a year ago
It's unfortunate it is named 4.5 -- it is next generation scale, and it's a 1.0 of next-generation scale.

Sonnet is on its 3rd iteration, i.e. has considerably more post-training, most notably, reasoning via reinforcement learning.

snarkconjecture · a year ago
Version numbers for LLMs don't mean anything consistent. At this point they don't even publicly announce which models are built from new base models and which aren't. I'm pretty sure Claude 3.5 was a new set of base models since Claude 3.

What do you mean by "it's a 1.0" and "3rd iteration"? I'm having trouble parsing those in context.

snarkconjecture commented on Iterated Log Coding   adamscherlis.github.io/bl... · Posted by u/snarkconjecture
DannyBee · a year ago
Isn't this just a variant of Dirac's solution for representing any number using sqrt, log, and the number 2?
snarkconjecture · a year ago
Not really. Dirac's trick works entirely at a fixed depth of two logs, using sqrt like a unary counter to increment the number. It requires O(n) symbols to represent the number n, i.e. O(2^n) symbols to represent n bits of precision. This thing has arbitrary nesting depth of logs (or exps), and can represent a number to n bits of precision in O(n) symbols.
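For reference, one common form of Dirac's trick can be sketched numerically: applying sqrt to 2 a total of n times gives 2^(2^-n), so two logs at fixed depth recover n. The function name here is just for illustration.

```python
import math

def dirac_representation(n: int) -> float:
    """One form of Dirac's trick: recover the integer n from the digit 2
    using n square roots (unary counting) plus exactly two logs."""
    x = 2.0
    for _ in range(n):
        x = math.sqrt(x)      # after n steps: x == 2 ** (2 ** -n)
    # log2(x) == 2**-n, so a second log2 gives -n; negate to get n
    return -math.log2(math.log2(x))

print(dirac_representation(7))
```

The point of the comparison: the sqrt chain costs one symbol per unit of n, so the scheme needs O(n) symbols for n, whereas arbitrarily nested logs/exps get exponentially more expressive per symbol.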
snarkconjecture commented on QwQ: Alibaba's O1-like reasoning LLM   qwenlm.github.io/blog/qwq... · Posted by u/amrrs
exact_string · a year ago
These tests always make me wonder: What qualifies as a valid pattern rule?

For example, why wouldn't "0" be a correct answer here (rule being "every other number on the right should be 0, other numbers do not have a pattern")?

snarkconjecture · a year ago
I think it's better phrased as "find the best rule", with a tacit understanding that people mostly agree on what makes a rule decent vs. terrible (maybe not on what makes one great) and a tacit promise that the sequence presented has at least one decent rule and does not have multiple.

A rule being "good" is largely about simplicity, which is also essentially the trick that deep learning uses to escape no-free-lunch theorems.

snarkconjecture commented on Wealth Distribution in the United States   righto.com/2024/10/wealth... · Posted by u/ssklash
dietr1ch · a year ago
Mayan economy: Sacrifice your K wealthiest individuals yearly to ensure prosperity of your economy on the years to come. Those sacrificed must give away 90% of their money.
snarkconjecture · a year ago
For the individuals shown in the graph, this buys about $6k per American (and after the first year you can't do it again).
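The back-of-envelope behind that figure can be reconstructed, with loudly assumed inputs (a combined wealth of roughly $2.2T for the individuals in the graph and a US population of about 330M are illustrative assumptions, not numbers from the thread):

```python
# Assumed inputs for illustration only
combined_wealth = 2.2e12   # assumed combined wealth of the graph's individuals, USD
population = 330e6         # approximate US population

# Sacrificed individuals give away 90% of their money, split per American
payout = 0.9 * combined_wealth / population
print(f"${payout:,.0f} per American")
```

A one-time transfer on this scale works out to a few thousand dollars per person, and (as the comment notes) the stock of wealth is gone after the first round.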

u/snarkconjecture
Karma: 226 · Member since June 3, 2022