Readit News
ponow commented on Rats Play DOOM   ratsplaydoom.com/... · Posted by u/ano-ther
aw124 · 13 days ago
I'm opposed to this project because it involves using animals in medical experiments, which I believe is never ethically justifiable. It goes against basic moral and ethical principles regarding animal treatment. If the project were designed to allow animals to choose whether or not to participate, it would be more acceptable. Some scientists have already explored such approaches. By not giving animals a choice, you're limiting their freedom and potentially exposing them to physical or psychological harm through your simulation. As someone who advocates for animal rights, I'd prefer to see alternative methods that don't involve animals or allow them to participate voluntarily
ponow · 12 days ago
Then make your own world where you don't benefit from the knowledge gained by means you don't approve of. This world is not that world.
ponow commented on Europe is scaling back GDPR and relaxing AI laws   theverge.com/news/823750/... · Posted by u/ksec
1718627440 · a month ago
> Any instance of selective enforcement being necessary is ipso facto evidence of a bad law.

By that measure every law is a bad law.

ponow · a month ago
Legislation is much worse than organically derived common law, for common law comprises decisions that applied to particular conditions in all their details, while legislation offers mere idealizations.
ponow commented on Europe is scaling back GDPR and relaxing AI laws   theverge.com/news/823750/... · Posted by u/ksec
nandomrumber · a month ago
You’re arguing that the mass incarceration of more people would have been better?
ponow · a month ago
Yes, I would argue that it would be better for more to have been incarcerated, for that would bring greater focus to injustice and the law would be changed. Selective enforcement interferes with the feedback mechanism that would otherwise make the law work better.
ponow commented on Amazon strategised about keeping water use secret   source-material.org/amazo... · Posted by u/chhum
827a · 2 months ago
Yes: my understanding is that it’s rather common practice to at least make a best-effort estimation of all these secondary impacts.

It’s also absolutely true that “agricultural usage dominating data center usage” is a dirty little secret that a lot of people are very, very incentivized to keep secret. Amazon can’t outright say that, because uh whutabuht mah poor farmers.

ponow · 2 months ago
Categorical rejection of alternatives is premature without context.
ponow commented on PSF has withdrawn $1.5M proposal to US Government grant program   pyfound.blogspot.com/2025... · Posted by u/lumpa
bilekas · 2 months ago
This seems very un-American. The government dictating how you run your business?

> “do not, and will not during the term of this financial assistance award, operate any programs that advance or promote DEI, or discriminatory equity ideology in violation of Federal anti-discrimination laws.”

Is it even legal to add such an arbitrary and opinionated condition to a government grant?

I applaud them for taking a stand; it seems to be more and more rare these days.

ponow · 2 months ago
Federal funding of research is un-American.
ponow commented on Go is still not good   blog.habets.se/2025/07/Go... · Posted by u/ustad
const_cast · 4 months ago
But Go tooling is bad. Like, really, really bad.

Sure, it's good compared to, like... C++. Is Go actually competing with C++? From where I'm standing, no.

But compared to what you might actually use Go for... the tooling is bad. PHP has better tooling, dotnet has better tooling, Java has better tooling.

ponow · 4 months ago
Go was a response, in part, to C++, if I recall how it was described when it was introduced. That doesn't seem to be how it turned out. Maybe "systems programming language" means something different to different people.
ponow commented on Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens   arstechnica.com/ai/2025/0... · Posted by u/blueridge
nerdjon · 4 months ago
This is why research like this is important and needs to keep being published.

What we have seen over the last few years is a conscious marketing effort to rebrand everything ML as AI and to use terms like "Reasoning", "Extended Thinking" and others that, for many non-technical people, give the impression that it is doing far more than it actually is.

Many of us here can see this research and think: well, yeah, we already knew this. But there is a very well funded effort to oversell what these systems can actually do, and that is reaching the people who ultimately make the decisions at companies.

So the question is no longer whether AI agents will be able to do most white-collar work. They can probably fake it well enough to accomplish a few tasks, and management will see that. But will the output actually be valuable long term versus short-term gains?

ponow · 4 months ago
I'm happy enough if I'm better off for having used a tool than having not.
ponow commented on Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens   arstechnica.com/ai/2025/0... · Posted by u/blueridge
wzdd · 4 months ago
> The mechanism that the model uses to transition towards the answer is to generate intermediate text.

Yes, which makes sense, because if there's a landscape of states that the model is traversing, and there are probabilistically likely pathways between an initial state and the desired output, but there isn't a direct pathway, then training the model to generate intermediate text in order to move across that landscape so it can reach the desired output state is a good idea.

Presumably LLM companies are aware that there is (in general) no relationship between the generated intermediate text and the output. The point of the article is that by calling it a "chain of thought" rather than "essentially-meaningless intermediate text which increases the number of potential states the model can reach", users are misled into thinking that the model is reasoning. They may then make unwarranted assumptions, such as that the model could apply the same reasoning to similar problems, which is in general not true.
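The landscape argument above can be illustrated with a toy random walk. The states and transition probabilities here are purely hypothetical, invented for illustration, and not a model of any real LLM: the point is only that when no single transition links the start state to the answer, allowing extra intermediate steps can still make the answer reachable.

```python
import random

# Hypothetical transition model: the "question" state has no
# direct edge to "answer", but can reach it via "intermediate".
transitions = {
    "question":     {"intermediate": 0.9, "dead_end": 0.1},
    "intermediate": {"answer": 0.9, "dead_end": 0.1},
    "dead_end":     {"dead_end": 1.0},
    "answer":       {"answer": 1.0},
}

def walk(start, steps, rng):
    """Take `steps` random transitions from `start`; return the final state."""
    state = start
    for _ in range(steps):
        nxt, probs = zip(*transitions[state].items())
        state = rng.choices(nxt, weights=probs)[0]
    return state

rng = random.Random(0)

# One-step "decoding" can never emit the answer: there is no direct edge.
assert all(walk("question", 1, rng) != "answer" for _ in range(100))

# Allowing intermediate steps makes the answer reachable most of the time
# (question -> intermediate -> answer has probability 0.9 * 0.9 = 0.81).
hits = sum(walk("question", 3, rng) == "answer" for _ in range(1000))
print(hits / 1000)  # roughly 0.8
```

Whether the intermediate states carry any human-interpretable "reasoning" is irrelevant to this mechanism; they only need to open paths that a single step cannot take.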

ponow · 4 months ago
Meaningless? The participation in a usefully predicting path is meaning. A different meaning.

And Gemini has a note at the bottom about mistakes, and many people discuss this. Caveat emptor, as usual.

ponow commented on Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens   arstechnica.com/ai/2025/0... · Posted by u/blueridge
ponow · 4 months ago
> LLMs are [...] sophisticated simulators of reasoning-like text

Most humans are unsophisticated simulators of reasoning-like text.

ponow commented on The great displacement is already well underway?   shawnfromportland.substac... · Posted by u/JSLegendDev
JohnMakin · 7 months ago
I’m not trying to be unsympathetic in this comment, so please do not read it that way, and I’m aware, having spent most of my career in cloud infrastructure, that I am usually in high demand regardless of market forces - but this just does not make sense to me. If I ever got to the point where I was even in the high dozens of applications without any hits, I’d take a serious look at my approach. Trying the same thing hundreds of times without any movement feels insane to me. I believe accounts like this, because why make it up? But as other commenters have noted, there may be other factors at play.

I just wholly disagree with the conclusion that this is a common situation brought on by AI. AI coding simply isn’t there to start replacing people with 20 years of experience unless your experience is obsolete or irrelevant in today’s market.

I’m about 10 years into my career and I constantly have to learn new technology to stay relevant. I’d be really curious what this person has spent the majority of their career working on, because something tells me it’d provide insight to whatever is going on here.

Again, not trying to be dismissive, but even with my fairly unimpressive resume I can get at least first-round calls fairly easily, and my colleagues who write actual software all report similar. Companies are definitely being more picky, but if your issue is that you’re not even being contacted, I’d seriously question your approach. They kind of get at the problem a little by stating they “won’t use a ton of AI buzzwords.” Like, ok? But you can also be smart about knowing how these screeners work and play the game a little. Or you can do DoorDash. Personally, I’d prefer the former to the latter.

I also find it odd that 20 years of experience hasn’t led to a bunch of connections that would assist in a job search - my meager network has been where I’ve found most of my work so far.

ponow · 7 months ago
Could it be that your particular position required more ongoing learning, and that has kept you better prepared for a changing world?

What fraction of positions require that ongoing learning, or at least to that degree?

Also, consider many other jobs: does doing the job itself provide the experience that makes you a more valuable worker? Or is doing the job basically a necessary distraction from the actual task of preparing yourself for a future job? What fraction of humanity actually takes on two jobs, the paying job and the preparing-for-the-next-job? Might doing the latter get you fired from the former? Most importantly, is that latter job getting more important over time, that is, are our jobs less secure? If so, is this what an improving economy looks like, rising, as it were, with GDP?

u/ponow

Karma: 220 · Cake day: October 4, 2016