Readit News
kaivi commented on Werner Herzog Between Fact and Fiction   thenation.com/article/cul... · Posted by u/Hooke
iancmceachern · 15 days ago
Thank you! It's not on Audible; where did you buy it?
kaivi · 15 days ago
I bought mine on audible.co.uk
kaivi commented on Russia's economy has entered the death zone   economist.com/by-invitati... · Posted by u/thelastgallon
dyauspitr · a month ago
How does Russia have $73 billion worth of debt? Who is buying Russian treasuries? Also, how did the GDP grow by 1% in 2025? Is that a function of internal defence spending activity?
kaivi · a month ago
> how did the GDP grow by 1% in 2025

Build tanks -> increase GDP

Send those tanks to burn with men inside -> increase GDP per capita

kaivi commented on OpenAI's "Study Mode" and the risks of flattery   resobscura.substack.com/p... · Posted by u/benbreen
rgovostes · 7 months ago
Out of curiosity, was it James Tailor in Bangkok? I was whisked there on my last day by my hired guide while she stopped for an “errand”. It struck me as a preposterous hustle, but now I’m curious if this is a common ploy.
kaivi · 7 months ago
It was Royal Boss Taylor; I still have it saved on Google Maps. There are a lot of these tailors, but countless other scams too.
kaivi commented on OpenAI's "Study Mode" and the risks of flattery   resobscura.substack.com/p... · Posted by u/benbreen
accrual · 7 months ago
Right. If we define "original" as having no prior influence before creating a work, then it applies neither to humans nor AI.

Not to claim this is a perfectly watertight definition, but what if we define it like this:

* Original = created from one's "latent" space. For a human it would be their past experiences as encoded in their neurons. For an AI it would be its training as encoded in model weights.

* Remixed = created from already existing physical artifacts, like sampling a song, copying a piece of an image and transforming it, etc.

With this definition both humans and AI can create both original and remixed works, depending on where the source material came from - latent or physical space.

kaivi · 7 months ago
> Remixed = created from already existing physical artifacts, like sampling a song, copying a piece of an image and transforming it, etc.

What's the significance of "physical" song or image in your definition? Aren't your examples just 3rd party latent spaces, compressed as DCT coefficients in jpg/mp3, then re-projected through a lens of cochlear or retinal cells into another latent space of our brain, which makes it tickle? All artist human brains have been trained on the same media, after all.

When we zoom this far out in search of a comforting distinction, we encounter the opposite: all the latent spaces across all modalities that our training has produced naturally want to merge into one.

kaivi commented on OpenAI's "Study Mode" and the risks of flattery   resobscura.substack.com/p... · Posted by u/benbreen
ZeroGravitas · 7 months ago
Travis Kalanick (ex-CEO of Uber) thinks he's making cutting edge quantum physics breakthroughs with Grok and ChatGPT too. He has no relevant credentials in this area.
kaivi · 7 months ago
This epidemic is very visible when you peek into replies of any physics influencer on Xitter. Dozens of people are straight copy-pasting walls of LaTeX mince from ChatGPT/Grok and asking for recognition.

Perhaps epidemic isn't the right word here because they must have been already unwell. At least these activities are relatively harmless.

kaivi commented on OpenAI's "Study Mode" and the risks of flattery   resobscura.substack.com/p... · Posted by u/benbreen
infecto · 7 months ago
Are you religious by chance? I have been trying to understand why some individuals are more susceptible to it.
kaivi · 7 months ago
Not at all. I think a big part of it was just my unfamiliarity with insuretech, plus the unexpected change in GPT-4's behavior.

I'm assuming here, but would you say that better critical thinking skills would have helped me avoid spending that Saturday with ChatGPT? It is often said that critical thinking is the antidote to religion, but I suspect there's a huge prerequisite: broad general knowledge about the world.

A long time ago, I fell victim to a scam when I visited SE Asia for the first time. A pleasant man on the street introduced himself as a school teacher, showed me around, then put me in a tuktuk which showed me around some more before dropping me off in front of a tailor shop. Some more work inside the shop, a complimentary bottle of water, and they had my $400 for a bespoke coat that I would never have bought otherwise. Definitely a teaching experience. This art is also how you'd prime an LLM to produce the output you want.

Surely, large numbers of other atheist nerds must fall for these types of scams every year, where a stereotypical Christian might spit on the guy and shoo him away.

I'm not saying that being religious would not increase one's chances of being susceptible, I just think that any idea will ring "true" in your head if you have zero counterfactual priors against it or if you're primed to not retrieve them from memory. That last part is the essence of what critical thinking actually is, in my opinion, and it doesn't work if you lack the knowledge. Knowing that you don't know something is also a decent alternative to having the counter-facts when you're familiar with an adjacent domain.

kaivi commented on OpenAI's "Study Mode" and the risks of flattery   resobscura.substack.com/p... · Posted by u/benbreen
iwontberude · 7 months ago
Thinking you can create novel physics theories with the help of an LLM is probably all the evidence I needed. The premise is so asinine that to actually get to the point where you are convinced by it seems very strange indeed.
kaivi · 7 months ago
> The premise is so asinine

I believe it's actually the opposite!

Anybody armed with this tool and little prior training could learn the difference between a Samsung S11 and an SU(11) symmetry, take a new configuration from the endless search space, correct for the dozen edge cases like electron-phonon coupling, and publish. Maybe even pass peer review if they cite the approved sources. No requirement to work out the Lagrangians either, and it is also 100% testable once we reach Kardashev-II.

This says more about the sad state of modern theoretical physics than the symbolic gymnastics required to make another theory of everything sound coherent. I'm hoping that this new age of free knowledge chiropractors will change this field for the better.

kaivi commented on OpenAI's "Study Mode" and the risks of flattery   resobscura.substack.com/p... · Posted by u/benbreen
neom · 7 months ago
I don't like this framing "But for people with mental illness, or simply people who are particularly susceptible to flattery, it could have had some truly dire outcomes."

I thought the AI safety risk stuff was very over-blown in the beginning. I'm kinda embarrassed to admit this: About 5/6 months ago, right when ChatGPT was in its insane sycophancy mode I guess, I ended up locked in for a weekend with it...in...what was, in retrospect, a kinda crazy place. I went into physics and the universe with it and got to the end thinking..."damn, did I invent some physics???" Every instinct as a person who understands how LLMs work was telling me this is crazy LLMbabble, but another part of me, sometimes even louder, was like "this is genuinely interesting stuff!" - and the LLM kept telling me it was genuinely interesting stuff and I should continue - I even emailed a friend a "wow look at this" email (he was like, dude, no...) I talked to my wife about it right after and she basically had me log off and go for a walk. I don't think I would have gotten into a thinking loop if my wife wasn't there, but maybe, and then that would have been bad. I feel kinda stupid admitting this, but I wanted to share because I do now wonder if this kinda stuff may end up being worse than we expect? Maybe I'm just particularly susceptible to flattery or have a mental illness?

kaivi · 7 months ago
It's funny that you mention this because I had a similar experience.

ChatGPT in its sycophancy era made me buy a $35 domain and waste a Saturday on a product which had no future. It hyped me up beyond reason for the idea of an online, worldwide, liability-only insurance for cruising sailboats, similar to SafetyWing. "Great, now you're thinking like a true entrepreneur!"

In retrospect, I fell for it because the onset of its sycophancy was immediate, with no accompanying signal like a patch note from OpenAI.

kaivi commented on V-JEPA 2 world model and new benchmarks for physical reasoning   ai.meta.com/blog/v-jepa-2... · Posted by u/mfiguiere
cubefox · 9 months ago
I think the fundamental idea behind JEPA (not necessarily this concrete Meta implementation) will ultimately be correct: predicting embeddings instead of concrete tokens. That's arguably what animals do. Next-token prediction (a probability distribution over the possible next tokens) works well for the discrete domain of text, but it doesn't work well for a continuous domain like video, which would be needed for real-time robotics.

For text, with a two-byte tokenizer you get 2^16 (~65,000) possible next tokens, and computing a probability distribution over them is very much doable. But the number of "possible next frames" in a video feed would already be extremely large: if one frame is 1 megabyte uncompressed (instead of just 2 bytes for a text token), there are 2^(8*2^20) possible next frames, which is far too large a number. So we need to predict only an embedding of the frame instead: an approximate representation of how the next frame of the video feed will look.
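
To make that contrast concrete, here is a rough PyTorch sketch with toy dimensions made up for illustration (this is not Meta's actual architecture): a next-token head scores every vocabulary entry, while a JEPA-style predictor regresses a fixed-size embedding of the next frame and is trained against a target encoder's embedding of the frame that actually arrived.

    import torch
    import torch.nn as nn

    VOCAB = 2 ** 16      # two-byte tokenizer: ~65,000 possible next tokens
    EMBED_DIM = 1024     # size of the learned frame embedding (toy value)
    HIDDEN = 2048        # size of the context representation (toy value)

    # Next-token prediction: a full probability distribution over the vocabulary.
    token_head = nn.Linear(HIDDEN, VOCAB)

    # JEPA-style prediction: regress the embedding of the next frame instead of
    # scoring the ~2^(8*2^20) possible raw frames.
    embed_predictor = nn.Linear(HIDDEN, EMBED_DIM)
    target_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, EMBED_DIM))

    h = torch.randn(1, HIDDEN)              # context (past frames, encoded)
    next_frame = torch.randn(1, 3, 64, 64)  # the frame that actually arrives

    token_probs = token_head(h).softmax(dim=-1)             # shape (1, 65536)
    pred_embedding = embed_predictor(h)                      # shape (1, 1024)
    target_embedding = target_encoder(next_frame).detach()   # shape (1, 1024)

    # Training signal: a distance in embedding space, not a softmax over raw pixels.
    loss = nn.functional.mse_loss(pred_embedding, target_embedding)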

Moreover, for robotics we don't want to just predict the next (approximate) frame of a video feed. We want to predict future sensory data more generally. That's arguably what animals do, including humans. We constantly anticipate what happens to us in "the future", approximately, with the farther future predicted progressively less exactly. We are relatively sure of what happens in a second, but less and less sure of what happens in a minute, or a day, or a year.

kaivi · 9 months ago
> We constantly anticipate what happens to us in "the future", approximately, with the farther future predicted progressively less exactly

There's evidence for this in what's called predictive coding: when that future arrives, a higher-level circuit determines how far off we were, and then releases the appropriate neuromodulators to re-wire that circuit.

That would mean that to learn faster, you want to expose yourself to situations where you are often wrong: be surprised often, go down wrong paths, and have a feedback mechanism that tells you when you're wrong. This may also be why the best teachers are the ones who often ask the class questions with counter-intuitive answers.
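
As a toy illustration of that error-driven learning (a plain delta rule, not a claim about how neuromodulators actually work): the belief only moves when the prediction was wrong, and it moves further the larger the surprise.

    import numpy as np

    rng = np.random.default_rng(0)

    belief = 0.0          # current prediction of some quantity
    learning_rate = 0.1
    true_value = 3.0      # what the world actually delivers

    for step in range(20):
        observation = true_value + rng.normal(scale=0.5)
        prediction_error = observation - belief       # the "surprise" signal
        belief += learning_rate * prediction_error    # bigger error -> bigger update
        print(f"step {step:2d}  error={prediction_error:+.2f}  belief={belief:.2f}")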

u/kaivi

Karma: 766 · Cake day: September 12, 2013