Readit News
roxolotl commented on GPT-5.2   openai.com/index/introduc... · Posted by u/atgctg
MagicMoonlight · 2 days ago
They’re definitely just training the models on the benchmarks at this point
roxolotl · 2 days ago
Yea, either this is an incredible jump or we’ve finally gotten confirmation that benchmarks are bs.
roxolotl commented on We Need to Die   willllliam.com/blog/why-w... · Posted by u/ericzawo
zebomon · 4 days ago
The author's argument seems to be a practical, two-part one: 1) without death, there's nothing to motivate us to live life well, and 2) unless we live life well, there's no point in living.

I just disagree with both postulates, and that's fine. The author can go on thinking that life needs to be something specific in order for it to be desirable. I myself like being productive. I also like eating fast food every once in a while. I think I'd be able to go on living (with some happiness to boot) if I never had another productive day or another McD's burger ever again.

Life can be its own end. If we manage to end death by aging, someday there will be children who have never known another world, and they'll marvel at all the death-centric thinking that permeated the societies of their past.

roxolotl · 4 days ago
I think the point is a bit more nuanced and has to do with the author’s conception of the self. He argues that even if you got immortality and lived a great life, at some point You would stop being You, so you might as well have died anyway. I think it’s a bit silly. But if you believe that enough alteration of the self results in its death, a sort of Self of Theseus, then I think it’s a consistent opinion.

> His argument is precise: the desires that give you reason to keep living (he calls them categorical desires) would either eventually exhaust themselves, leaving you in a state of "boredom, indifference and coldness", or they'd evolve so completely that you'd become a different person anyway. Either way, the You that wanted immortality doesn't get it. You just die from a lack of Self rather than through physical mortality.

roxolotl commented on Bag of words, have mercy on us   experimental-history.com/... · Posted by u/ntnbr
viccis · 6 days ago
Every day I see people treat gen AI like a thinking human, and Dijkstra's attitude about anthropomorphizing computers is vindicated even more.

That said, I think the author's use of "bag of words" here is a mistake. Not only does it have a real meaning in a similar area as LLMs, but I don't think the metaphor explains anything. Gen AI tricks laypeople into treating its token inferences as "thinking" because it is trained to replicate the semiotic appearance of doing so. A "bag of words" doesn't sufficiently explain this behavior.

roxolotl · 6 days ago
Yea, bag of words isn’t helpful at all. I really do think that “superpowered sentence completion” is the best description. Not only is it reasonably accurate, it’s understandable (everyone has seen autocomplete function), and it’s useful. I don’t know how to “use” a bag of words; I do know how to use sentence completion. It also helps explain why context matters.
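The distinction the commenters are drawing can be sketched in a few lines of Python (an illustrative toy of my own, not anything from the linked article): a classic bag-of-words representation discards word order entirely, while sentence completion conditions on the ordered context, which is also why context matters.

```python
from collections import Counter

def bag_of_words(sentence: str) -> Counter:
    """Classic NLP bag-of-words: unordered token counts."""
    return Counter(sentence.lower().split())

# Word order is discarded entirely -- both sentences map to the same bag:
print(bag_of_words("man bites dog") == bag_of_words("dog bites man"))  # True

# Sentence completion, by contrast, conditions on the ordered prefix.
# A toy stand-in (a hypothetical lookup table, not a real model):
completions = {
    ("the", "dog"): "barked",
    ("the", "cat"): "meowed",
}
print(completions[("the", "dog")])  # barked
```

Under the bag-of-words view the two "bites" sentences are indistinguishable; a completion model that conditions on ordered context would treat them very differently.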
roxolotl commented on OpenAI disables ChatGPT app suggestions that looked like ads   techoreon.com/openai-disa... · Posted by u/GeorgeWoff25
bigyabai · 6 days ago
Just so you know, "vertical integration" and "annoying advertisement" are not mutually exclusive.

I learned that from trying to use Apple Music to handle my local library. Never again.

roxolotl · 6 days ago
Did you happen to find a solution? I'm dealing with this issue now. I genuinely miss 2008 iTunes at this point.
roxolotl commented on The Reverse-Centaur's Guide to Criticizing AI   pluralistic.net/2025/12/0... · Posted by u/doener
roxolotl · 7 days ago
It’s a longer read, and the tangent on copyright, while interesting and something Cory is passionate about, is slightly off topic, but it’s the rare piece bringing up the key issue with AI. The argument goes like this:

- The valuations are only reasonable if they are going to enable mass worker replacement. Yes, there is the machine-god argument, but Wall Street doesn’t buy that.

- The tooling doesn’t have to be capable of replacing workers. The sales people just have to be able to convince execs it is.

- Even ignoring the fact that lots of people would lose their jobs, this replacement would make everything worse, because AI isn’t actually capable of doing those jobs.

- The bubble is based on the assumption everything will get better.

- We need to convince people things will get worse before they actually do.

These tools aren’t useless. They are remarkable. But that doesn’t mean they will live up to the hype or the valuations. To avoid an economic cataclysm, it’s important for a realistic and measured narrative to take hold fast.

roxolotl commented on The past was not that cute   juliawise.net/the-past-wa... · Posted by u/mhb
bluedino · 7 days ago
> The food was extremely good. . . . everything was fresh from the garden.

Was it this, or was it that your mother/grandmother was a great cook? I hear a lot of older people talk about how awful their food was: limited ingredients, everything boiled...

Food also probably tastes better when you're actually hungry, and not able to Doordash whatever you want to eat at any time of day.

roxolotl · 7 days ago
Anecdotally, vegetables I grow are wildly more flavorful than ones you can buy. Think grape tomatoes as sweet as grapes. Green beans that have a complex flavor, almost like green tea. The butternut squash I accidentally grew this year, from seeds that survived the winter in my compost, tastes like pumpkin pie. Corn you can eat raw, where putting butter on it feels like a waste.

That’s not to say you cannot get really good food that isn’t “farm fresh”, but food right out of the ground is, on average, absolutely better.

roxolotl commented on Most technical problems are people problems   blog.joeschrag.com/2023/1... · Posted by u/mooreds
N_Lens · 9 days ago
Isn't this generally the case across all sectors and industries? We have the technology today to create a post scarcity utopia, to reverse climate change, to restore the biosphere. The fact that none of that happens is a people problem, a political problem, a spiritual problem, more so than any technological barrier.
roxolotl · 9 days ago
Yea, this is true of virtually all problems today. It's one of the blind spots of the AI acceleration crowd. Cancer vaccine discovered by GPT-6? You still have to convince people it's safe. Fusion reactor modeled by Gemini? Convince people it's not that kind of nuclear power. Geoengineering solution for climate change? Well, it might look like chemtrails, but it's not. Implementing any of these things in a society is always going to be hard.

I think this is a large factor in the turn toward more authoritarian tendencies among the Silicon Valley elites. They spent the 2000s and 2010s being a bit more utopian and laissez-faire and saw it got them almost nowhere, because technology doesn't solve people problems.

roxolotl commented on Google, Nvidia, and OpenAI   stratechery.com/2025/goog... · Posted by u/tambourine_man
guerrilla · 12 days ago
Who's sitting there talking to Gemini, though? Nobody I know has even heard of it. Everyone talks to ChatGPT, everyone. Habits are everything. Google will be swimming against the current and be seen as just another company forcing AI down everyone's throat, while everyone is still talking to ChatGPT. We'll see though, maybe I'm wrong. Maybe Google is clever and can do integration well and make something useful out of it. They have never succeeded in that way before, though, and in fact seem terrible at it as an organization, so I very much doubt that.
roxolotl · 12 days ago
They aren’t talking about Gemini because Google is the brand name people know. “Oh, Google told me this.” “Google planned my day.” Or maybe “Google’s AI said X.”

Gemini is less a consumer brand name and more a brand name for those of us who care about models.

roxolotl commented on Do the thinking models think?   bytesauna.com/post/consci... · Posted by u/mapehe
Yizahi · 13 days ago
No, they don't. When queried about how exactly it arrived at a specific output, a program will happily produce output resembling thinking, complete with all the required human-like terminology. The problem is that this doesn't match at all how the LLM actually calculated the output. So the "thinking" steps are just more of the generated BS, to fool us further.

One point to think about: an entity being tested for intelligence/thinking/etc. only needs to fail once to prove that it is not thinking, while the reverse applies too - to prove that a program is thinking, it must succeed in 100% of tests, or the result is failure. And we all know many cases where LLMs are clearly not thinking, just like in my example above. So the case is rather clear for the current gen of LLMs.

roxolotl · 13 days ago
This is an interesting point. But while I agree with the article, don’t think LLMs are more than sophisticated autocomplete, and believe there’s way more to human intelligence than matrix multiplication, humans also cannot explain, in many cases, why they did what they did.

Of course the most famous and clearest example is the split-brain experiments, which show post hoc rationalization[0].

And then there’s the Libet experiments[1], showing that your conscious experience is only realized after the triggering brain activity. While it doesn’t show that you cannot explain why, it does seem to indicate your explanation is post hoc.

0: https://www.neuroscienceof.com/human-nature-blog/decision-ma...

1: https://www.informationphilosopher.com/freedom/libet_experim...

roxolotl commented on Atlas Shrugged (2024)   david-jasso.com/2024/04/1... · Posted by u/mnky9800n
ineedasername · 13 days ago
I’m not sure there could be something less like the “shrug” in the book than this comparison to corporate decline. It’s an inversion of quite a bit that was central to the theme.

The shrug in the book was people turning their backs, walking away: people who thought their talents were either wasted or unequally compensated in some way, or that they were footing an unfair portion of things, and the “shrug” was them walking away. A fundamentally individual act, not a collective, corporate one. The central character felt exploited by the company he worked for.

The book has enough problems without also confusing who the author meant when she said “Atlas”. It wasn’t corporations, it was individuals.

roxolotl · 13 days ago
You just described a union. One of the things that’s wild to me about the book is that it’s pushed as a proponent of individualism and libertarianism, but Galt’s Gulch is just the wealthy unionizing. There’s nothing individualistic about it. If John Galt alone had walked away, nothing would have happened.

u/roxolotl

Karma: 1694 · Cake day: February 16, 2025