itkovian_ commented on Stanford to continue legacy admissions and withdraw from Cal Grants   forbes.com/sites/michaelt... · Posted by u/hhs
thelock85 · a month ago
If you reduce the choice to public funding vs wealthy alumni stewardship, and there seems to be no meaningful pathway to circumventing the current assault on public funding, then why should you alienate your wealthy alumni?

Obviously the situation is much more complex and nuanced, but this framing (amongst others, I’m sure) seems appropriate if you are thinking on a 25-, 50-, or 100-year time scale in terms of the impact of your decision. The country is littered with public and private universities that made poor moral choices across the 19th and 20th centuries, but I’m not aware of any institutions suffering long-term reputational harm (or threat of insolvency) as a result of those choices. (Then again, maybe it’s because the harm was swift and final at the time.)

itkovian_ · a month ago
These are some of the richest entities - forget about universities - just entities full stop, in the entire country.
itkovian_ commented on Does the Bitter Lesson Have Limits?   dbreunig.com/2025/08/01/d... · Posted by u/dbreunig
itkovian_ · a month ago
I don’t think people understand the point Sutton was making; he’s saying that general, simple systems that get better with scale tend to outperform hand-engineered systems that don’t. It’s a kind of subtle point that’s implicitly saying hand engineering inhibits scale because it inhibits generality. He is not saying anything about the rate, and doesn’t claim LLMs/gradient descent are the best system; in fact I’d guess he thinks there’s likely an even more general approach that would be better. It’s comparing two classes of approaches, not commenting on the merits of particular systems.
itkovian_ commented on OpenAI raises $8.3B at $300B valuation   nytimes.com/2025/08/01/bu... · Posted by u/mfiguiere
vessenes · a month ago
Startup-land valuations are for PR. The real negotiation is in the discount and the cap, warrants, etc.

That $8.3b came in early and was oversubscribed, so the terms are likely favorable to OpenAI. But if an investor puts in $1b at a $300b valuation (cap) with a 20% discount to the next round, and the company raises another round at $30b in two months, good news: they got in at a $24b price.
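For anyone following the arithmetic, here's a minimal sketch of the cap-vs-discount conversion logic described above (the function name and valuation-level framing are illustrative; real SAFEs convert on price per share):

```python
# Hypothetical sketch: a SAFE converts at the better (lower) of the
# valuation cap and the discounted next-round price.

def safe_conversion_valuation(cap_bn: float, discount: float, next_round_bn: float) -> float:
    """Effective valuation (in $bn) at which the SAFE converts."""
    discounted = next_round_bn * (1 - discount)
    return min(cap_bn, discounted)

# The parent's numbers: $300b cap, 20% discount, next round at $30b.
print(safe_conversion_valuation(300, 0.20, 30))  # 24.0, i.e. "a $24b price"
```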

To your point on Anthropic and Google: yep. But if you think one of these guys will win (and I think you have to put Meta on this list too), then ¿por qué no los cuatro? Just buy all four.

I'll call it now; they won't lose money on those checks.

itkovian_ · a month ago
I’m gonna go ahead and guess they didn’t raise $8.3b on SAFEs.
itkovian_ commented on Why are we pretending AI is going to take all the jobs?   thebignewsletter.com/p/wh... · Posted by u/pseudolus
itkovian_ · a month ago
The article doesn’t say jobs aren’t about to be eviscerated; it says this is already happening, that it’s due to capitalism and a lack of consumer protections, and that we need more government regulation. This never made any sense to me, because we don’t have to guess how that would go - the experiment is being run in Europe right now.

Also, the core of the argument is wrong: AI is clearly displacing jobs, and this is happening today.

itkovian_ commented on LLM Inevitabilism   tomrenner.com/posts/llm-i... · Posted by u/SwoopsFromAbove
keiferski · 2 months ago
One of the negative consequences of the “modern secular age” is that many very intelligent, thoughtful people feel justified in brushing away millennia of philosophical and religious thought because they deem it outdated or no longer relevant. (The book A Secular Age is a great read on this, btw, I think I’ve recommended it here on HN at least half a dozen times.)

And so a result of this is that they fail to notice the same recurring psychological patterns that underlie thoughts about how the world is, and how it will be in the future - and to then adjust their positions in light of that awareness.

For example - this AI inevitabilism stuff is not dissimilar to many ideas originally from the Reformation, like predestination. The notion that history is just on some inevitable pre-planned path is not a new idea, except now the actor has changed from God to technology. On a psychological level it’s the same thing: an offloading of freedom and responsibility to a powerful, vaguely defined force that may or may not exist outside the collective minds of human society.

itkovian_ · 2 months ago
The reason for this is that it’s horrifying to consider that things like the Ukrainian war didn’t have to happen. It provides a huge amount of psychological relief to view these events as inevitable. I actually don’t think we as humans are even able to conceptualise/internalise suffering on that scale as individuals. I can’t, at least.

And then ultimately if you believe we have democracies in the west it means we are all individually culpable as well. It’s just a line of logic that becomes extremely distressing and so there’s a huge, natural and probably healthy bias away from thinking like that.

itkovian_ commented on Judge rejects Meta's claim that torrenting is “irrelevant” in AI copyright case   arstechnica.com/tech-poli... · Posted by u/Bluestein
duskwuff · 2 months ago
Same. If I invented a novel new way of encoding video and used it to pack a bunch of movies into a single file, I would fully expect to be sued if I tried distributing that file, and equally so if I let people use a web site that let them extract individual videos from that file. Why should text be treated differently?
itkovian_ · 2 months ago
I think the better analogy is: if you had someone with a superhuman, but not perfect, memory read a bunch of stuff, and then you were allowed to talk to that person about the things they’d read, does that violate copyright? I’d say clearly no.

Then what if their memory is so good that they repeat entire sections verbatim when asked - does that violate it? I’d say it’s grey.

But that’s a very specific case - reproducing large chunks of owned work is something that can be quite easily detected and prevented, and I’m almost certain the frontier labs are already doing this.
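As a rough illustration of how detectable verbatim reproduction is, here is a minimal sketch of an n-gram overlap check (the n, threshold, and hashing scheme are illustrative assumptions, not any lab's actual pipeline):

```python
# Hypothetical sketch: flag output that reproduces long verbatim spans
# from a protected corpus, via hashed n-gram lookup.

def ngrams(tokens, n):
    return (tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def build_index(corpus_docs, n=8):
    index = set()
    for doc in corpus_docs:
        index.update(hash(g) for g in ngrams(doc.split(), n))
    return index

def looks_verbatim(output, index, n=8, threshold=0.5):
    grams = [hash(g) for g in ngrams(output.split(), n)]
    # Fraction of the output's n-grams that appear verbatim in the corpus.
    return bool(grams) and sum(g in index for g in grams) / len(grams) >= threshold
```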

So I think it’s just very not clear - the reality is this is a novel situation, and the job of the courts is now to basically decide what’s allowed and what’s not. But the rationale shouldn’t be ‘this can’t be fair use, it’s just compression’, because it’s clearly something fundamentally different and existing laws just aren’t applicable, imo.

itkovian_ commented on Q-learning is not yet scalable   seohong.me/blog/q-learnin... · Posted by u/jxmorris12
itkovian_ · 3 months ago
Completely agree and think it’s a great summary. To summarize very succinctly: you’re chasing a moving target, where the target changes based on how you move. There’s no ground truth to zero in on in value-based RL. You minimise a difference in which both sides of the equation contain your APPROXIMATION.
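To make "both sides contain your approximation" concrete, here's a minimal PyTorch-style sketch of the Q-learning loss (names and batch layout are assumed for illustration):

```python
import torch
import torch.nn.functional as F

def td_loss(q_net, batch, gamma=0.99):
    s, a, r, s_next, done = batch
    # Left side: Q_theta(s, a), the thing being trained.
    q_pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Right side: the "label" is ALSO computed from Q_theta,
        # so the regression target moves whenever theta moves.
        q_target = r + gamma * (1 - done) * q_net(s_next).max(dim=1).values
    return F.mse_loss(q_pred, q_target)
```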

I don’t think it’s hopeless though. I actually think RL is very close to working, because what it lacked this whole time was a reliable world model/forward dynamics function (because then you don’t have to explore, you can plan). And now we’ve got that.
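And "plan instead of explore" can be as simple as random-shooting model-predictive control over a learned dynamics function; a minimal sketch, assuming a world_model(s, a) -> next_state and a reward_fn are available (both names are hypothetical):

```python
import numpy as np

def plan(world_model, reward_fn, s0, horizon=10, n_candidates=256, action_dim=2):
    best_return, best_action = -np.inf, None
    for _ in range(n_candidates):
        actions = np.random.uniform(-1.0, 1.0, size=(horizon, action_dim))
        s, total = s0, 0.0
        for a in actions:
            s = world_model(s, a)       # learned forward dynamics
            total += reward_fn(s, a)
        if total > best_return:
            best_return, best_action = total, actions[0]
    return best_action  # execute the first action, then re-plan (MPC-style)
```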

itkovian_ commented on AGI is not multimodal   thegradient.pub/agi-is-no... · Posted by u/danielmorozoff
nemjack · 3 months ago
I don't think you're quite right. The author is arguing that images and text should not be processed differently at any point. Current early-fusion approaches are close, but they still treat modalities differently at the level of tokenization.

If I understand correctly he would advocate for something like rendering text and processing it as if it were an image, along with other natural images.

Also, I would counter and say that there is some actionable information, but it's pretty abstract. In terms of uniting modalities, he is bullish on tapping human intuition and structuralism, which should give people pointers to actual books for inspiration. In terms of modifying the learning regime, he's suggesting something like an agent-environment RL loop, not a generative model, as a blueprint.

There's definitely stuff to work with here. It's not totally mature, but not at all directionless.

itkovian_ · 3 months ago
Saying we should tokenize different modalities the same way would be analogous to saying that in order to be really smart, a human has to listen with their eyes. At some point there has to be SOME modality-specific preprocessing. The thing is, in all current SOTA architectures this modality-specific preprocessing is very, very shallow - almost trivially shallow. I feel this is the piece of information that may be missing for people with this view. In the multimodal models everything is moving to a shared representation very rapidly - that's clearly already happening.
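To see how trivially shallow the vision-specific part is, here's roughly all of it in a ViT-style model; a minimal sketch with illustrative sizes - everything downstream is a completely standard transformer:

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Essentially all of the vision-specific code in a ViT-style model."""
    def __init__(self, img_size=224, patch=16, dim=768):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        # "Tokenize" the image: one linear projection per 16x16 patch...
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        # ...plus learned positional embeddings. That's it.
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))

    def forward(self, imgs):                            # (B, 3, 224, 224)
        x = self.proj(imgs).flatten(2).transpose(1, 2)  # (B, 196, 768)
        return x + self.pos  # same shape as text tokens from here on
```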

On the ‘we need an RL loop rather than a generative model’ point - I’d say this is the consensus position today!

itkovian_ commented on AGI is not multimodal   thegradient.pub/agi-is-no... · Posted by u/danielmorozoff
itkovian_ · 3 months ago
I don’t want to bash the guy since he’s still doing his PhD, but it’s written in such a confident tone, for something that is so all over the place, that I think it’s fair game.

Like a lot of the symbolic/embodied people, the issue is they don’t have a deep understanding of how the big models work or are trained, so they come to weird conclusions - things that aren’t wrong but make you go ‘ok... but what are you trying to say?’.

E.g. ‘Instead of pre-supposing structure in individual modalities, we should design a setting in which modality-specific processing emerges naturally.’ This seems to lack the understanding that a vision transformer is completely identical to a standard transformer except for the tokenization, which is just embedding a grid of patches and adding positional embeddings. Transformers are so general that what he’s asking us to do is exactly what everyone is already doing. Everything is early fusion now, too.

“The overall promise of scale maximalism is that a Frankenstein AGI can be sewed together using general models of narrow domains.” No one is suggesting this... everyone wants to do it end to end, and also thinks that’s the most likely thing to work. Some proposals, like LeCun’s JEPAs, do suggest inducing some structure in the architecture, but even there the driving force is to allow gradients to flow everywhere.

For a lot of the other conclusions, the statements are literally almost equivalent to ‘to build AGI, we need to first understand how to build AGI’. Zero actionable information content.

itkovian_ commented on Ask HN: Any insider takes on Yann LeCun's push against current architectures?    · Posted by u/vessenes
itkovian_ · 6 months ago
The fundamental distinction is usually drawn against contrastive approaches (i.e., make the correct thing more likely, make everything else we just compared against less likely). EBMs are "only what is correct is made more likely, and the default for everything else is unlikely."

This is obviously an extremely high level simplification, but that's the core of it.

itkovian_ · 6 months ago
And in this categorization, autoregressive LLMs are contrastive, due to the cross-entropy loss.
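A minimal sketch of why softmax cross-entropy is contrastive in this sense: the gradient with respect to the logits is (softmax(logits) - onehot), which pushes the correct token's logit up and every other token's logit down simultaneously (the numbers are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 0.5, -1.0, 0.0])
target = 0
grad = softmax(logits)      # d(cross-entropy)/d(logits) = softmax(z) - onehot
grad[target] -= 1.0
print(grad)  # negative at the target (pushed up), positive everywhere else
```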
