Readit News
itkovian_ commented on Major AI conference flooded with peer reviews written by AI   nature.com/articles/d4158... · Posted by u/_____k
itkovian_ · 2 months ago
Whether it’s actually 20% or not doesn’t matter; everyone is aware the signal of the top confs is in freefall.

There are also rings of reviewer fraud going on, where groups of people in these niche areas all get assigned each other’s papers and recommend acceptance, and in many cases the AC is part of this as well. I’m not saying this is common, but it is occurring.

It feels as if every layer of society is in maximum extraction mode and this is just a single example. No one is spending time to carefully and deeply review a paper because they care and they feel on principle that’s the right thing to do. People used to do this.

itkovian_ · 2 months ago
The argument is that there is no incentive to carefully review a paper (I agree); however, people used to do the right thing without explicit incentives. That has totally disappeared.

itkovian_ commented on Stanford to continue legacy admissions and withdraw from Cal Grants   forbes.com/sites/michaelt... · Posted by u/hhs
thelock85 · 6 months ago
If you reduce the choice to public funding vs wealthy alumni stewardship, and there seems to be no meaningful pathway to circumventing the current assault on public funding, then why should you alienate your wealthy alumni?

Obviously the situation is much more complex and nuanced, but this framing (amongst others, I’m sure) seems appropriate if you are thinking on a 25-, 50-, or 100-year time scale in terms of the impact of your decision. The country is littered with public and private universities that made poor moral choices across the 19th and 20th centuries, but I’m not aware of any institutions suffering long-term reputational harm (or threat of insolvency) as a result of those choices. (Then again, maybe it’s because the harm was swift and final at the time.)

itkovian_ · 6 months ago
These are some of the richest entities - forget about universities - just entities full stop, in the entire country.
itkovian_ commented on Does the Bitter Lesson Have Limits?   dbreunig.com/2025/08/01/d... · Posted by u/dbreunig
itkovian_ · 6 months ago
I don’t think people understand the point Sutton was making; he’s saying that general, simple systems that get better with scale tend to outperform hand-engineered systems that don’t. It’s a kind of subtle point that’s implicitly saying hand engineering inhibits scale because it inhibits generality. He is not saying anything about the rate, and doesn’t claim LLMs/gradient descent are the best system; in fact I’d guess he thinks there’s likely an even more general approach that would be better. It’s comparing two classes of approaches, not commenting on the merits of particular systems.
itkovian_ commented on OpenAI raises $8.3B at $300B valuation   nytimes.com/2025/08/01/bu... · Posted by u/mfiguiere
vessenes · 6 months ago
Startup-land valuations are for PR. The real negotiation is in the discount and the cap, warrants, etc.

That $8.3b came in early, and was oversubscribed, so the terms are likely favorable to oAI, but if an investor puts in $1b at a $300b valuation (cap) with a 20% discount to the next round, and the company raises another round at $30b in two months; good news: they got in at a $24b price.

To your point on Anthropic and Google: yep. But if you think one of these guys will win (and I think you have to put META on this list too), then ¿por qué no los cuatro? Just buy all four.

I'll call it now; they won't lose money on those checks.
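The cap-and-discount arithmetic in the comment above can be sketched as follows. This is a minimal illustration only, assuming the standard SAFE convention that the note converts at the better (lower) of the valuation cap and the discounted next-round price; the function name is hypothetical and the figures are the comment’s own, not OpenAI’s actual terms.

```python
# Sketch of SAFE conversion economics (figures in billions of dollars).
# A SAFE typically converts at whichever is more favorable to the investor:
# the valuation cap, or the next priced round less the discount.

def effective_valuation(cap: float, discount: float, next_round: float) -> float:
    """Valuation at which the SAFE converts: min(cap, discounted next-round price)."""
    return min(cap, (1 - discount) * next_round)

# $1B in at a $300B cap with a 20% discount, next round priced at $30B:
print(effective_valuation(cap=300, discount=0.20, next_round=30))  # 24.0
```

So even at a nominal $300B cap, a sharply lower follow-on round would let the investor in at an effective $24B price, which is why the headline valuation alone says little about the deal.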

itkovian_ · 6 months ago
I’m gonna go ahead and guess they didn’t raise $8.3B on SAFEs.
itkovian_ commented on Why are we pretending AI is going to take all the jobs?   thebignewsletter.com/p/wh... · Posted by u/pseudolus
itkovian_ · 7 months ago
The article doesn’t say jobs aren’t about to be eviscerated; it says this is already happening, that it’s due to capitalism and a lack of consumer protections, and that we require more government regulation. This never made any sense to me, because we don’t have to guess how that would go: the experiment is being run in Europe right now.

Also, the core of the argument is wrong: AI is clearly displacing jobs; this is happening today.

itkovian_ commented on LLM Inevitabilism   tomrenner.com/posts/llm-i... · Posted by u/SwoopsFromAbove
keiferski · 7 months ago
One of the negative consequences of the “modern secular age” is that many very intelligent, thoughtful people feel justified in brushing away millennia of philosophical and religious thought because they deem it outdated or no longer relevant. (The book A Secular Age is a great read on this, btw, I think I’ve recommended it here on HN at least half a dozen times.)

And so a result of this is that they fail to notice the same recurring psychological patterns that underlie thoughts about how the world is, and how it will be in the future, and to adjust their positions in light of that awareness.

For example - this AI inevitabilism stuff is not dissimilar to many ideas originally from the Reformation, like predestination. The notion that history is just on some inevitable pre-planned path is not a new idea, except now the actor has changed from God to technology. On a psychological level it’s the same thing: an offloading of freedom and responsibility to a powerful, vaguely defined force that may or may not exist outside the collective minds of human society.

itkovian_ · 7 months ago
The reason for this is it’s horrifying to consider that things like the Ukrainian war didn’t have to happen. It provides a huge amount of psychological relief to view these events as inevitable. I actually don’t think we as humans are even able to conceptualise/internalise suffering on those scales as individuals. I can’t, at least.

And then, ultimately, if you believe we have democracies in the West, it means we are all individually culpable as well. It’s just a line of logic that becomes extremely distressing, and so there’s a huge, natural and probably healthy bias away from thinking like that.

itkovian_ commented on Judge rejects Meta's claim that torrenting is “irrelevant” in AI copyright case   arstechnica.com/tech-poli... · Posted by u/Bluestein
duskwuff · 8 months ago
Same. If I invented a novel new way of encoding video and used it to pack a bunch of movies into a single file, I would fully expect to be sued if I tried distributing that file, and equally so if I let people use a web site that let them extract individual videos from that file. Why should text be treated differently?
itkovian_ · 8 months ago
I think the better analogy is: if you had someone with a superhuman but not perfect memory read a bunch of stuff, and then you were allowed to talk to that person about the things they’d read, does that violate copyright? I’d say clearly no.

Then what if their memory is so good, they repeat entire sections verbatim when asked. Does that violate it? I’d say it’s grey.

But that’s a very specific case: reproducing large chunks of owned work is something that can be quite easily detected and prevented, and I’m almost certain the frontier labs are already doing this.

So I think it’s just very not clear. The reality is this is a novel situation, and the job of the courts is now to basically decide what’s allowed and what’s not. But the rationale shouldn’t be “this can’t be fair use, it’s just compression,” because it’s clearly something fundamentally different and existing laws just aren’t applicable, imo.

itkovian_ commented on Q-learning is not yet scalable   seohong.me/blog/q-learnin... · Posted by u/jxmorris12
itkovian_ · 8 months ago
Completely agree and think it’s a great summary. To summarize very succinctly: you’re chasing a moving target, where the target changes based on how you move. There’s no ground truth to zero in on in value-based RL. You minimise a difference in which both sides of the equation contain your APPROXIMATION.

I don’t think it’s hopeless though, I actually think RL is very close to working because what it lacked this whole time was a reliable world model/forward dynamics function (because then you don’t have to explore, you can plan). And now we’ve got that.

u/itkovian_
Karma: 102 · Cake day: May 20, 2024