Readit News
falkensmaize commented on Coding agents have replaced every framework I used   blog.alaindichiappari.dev... · Posted by u/alainrk
wtetzner · 2 days ago
> It's strange to me when articles like this describe the 'pain of writing code'.

I find it strange when I compare the comment sections for AI articles with those about vim/emacs, etc.

In the vim/emacs comments, people always state that typing in code hardly takes any time, and thinking hard is where they spend their time, so it's not worth learning to type fast. Then in the AI comments, they say that with AI writing the code, they are freed up to spend more time thinking and less time coding. If writing the code was the easy part in the first place, and wasn't even worth learning to type faster, then how much value can AI be adding?

Now, these might be disjoint sets of people, but I suspect (with no evidence of course) there's a fairly large overlap between them.

falkensmaize · 2 days ago
What I never understand is why people seem to think the conception of the idea and the syntactical nitty-gritty of the code are completely independent domains. When I think about “how software works” I am at some level thinking about how the code works too, not just high-level architecture. So if I no longer concern myself with the code, I really lose a lot of understanding about how the software works too.
falkensmaize commented on India's female workers watching hours of abusive content to train AI   theguardian.com/global-de... · Posted by u/thisislife2
glimshe · 3 days ago
People who raise these concerns don't understand true poverty. They might have seen it during trips but don't really "grok" it. That's one place where the expression "first-world problems" is relevant. Being able to pay for housing, food and some degree of safety is an immense improvement in quality of life versus the previous state: poverty and no videos.
falkensmaize · 3 days ago
Watching this stuff all day can literally cause lifelong PTSD. I want poor people to have enough money to provide for themselves, but this is exploitative - they should get paid a LOT more to do this kind of work, the same way someone who does physically dangerous work gets paid more for the risk.
falkensmaize commented on The $100B megadeal between OpenAI and Nvidia is on ice   wsj.com/tech/ai/the-100-b... · Posted by u/pixelesque
pinnochio · 9 days ago
Altman is a consummate liar and manipulator with no moral scruples. I think this LLM business is ethically compromised from the start, but Dario is easily the least worst of the three.
falkensmaize · 9 days ago
Pfft. Dario has been fear-mongering with nonsense predictions that never come true.
falkensmaize commented on Will AIs take all our jobs and end human history, or not? (2023)   writings.stephenwolfram.c... · Posted by u/lukakopajtic
HEmanZ · 11 days ago
What are you working on that they are so knowledgeable about? Even the best models absolutely make stuff up, even to this day. I literally spend all day, every day working with them (all the latest ChatGPT models) and it’s still 10-15% BS.

I had ChatGPT 5.2 thinking straight up make up an API after I pasted the full API spec to it earlier today, and it built its whole response around a public API that did not exist. And the Claude CLI with Sonnet 4.5 made up the craziest reason why my curl command wasn’t working (that curl itself was bugged, rather than the obvious: it couldn’t resolve the domain name it tried to use) and almost went down a path of installing a bunch of garbage tools.

These are not ready to be unsupervised. Yet.

falkensmaize · 11 days ago
Just today I had Claude Opus 4.5 try to write to a fictional Mac user account on my computer during a coding session. It was pretty weird - the name was specific and unusual enough that it was likely bleed-through from training data. It wasn’t like “John Smith” or something.

That’s the kind of thing that on a large scale could be catastrophic.

falkensmaize commented on GPTZero finds 100 new hallucinations in NeurIPS 2025 accepted papers   gptzero.me/news/neurips/... · Posted by u/segmenta
cthalupa · 17 days ago
The fact that there is absurd AI hype right now doesn't mean that we should let equally absurd bullshit pass on the other side of the spectrum. Having a reasonable and accurate discussion about the benefits, drawbacks, side effects, etc. is WAY more important right now than being flagrantly incorrect in either direction.

Meanwhile this entire comment thread is about what appears to be, as fumi2026 points out in their comment, a predatory marketing play by a startup hoping to capitalize on the exact sort of anti-AI sentiment that you seem to think is important... just because there is pro-AI sentiment?

Naming and shaming everyday researchers, on the theory that they let hallucinations slip into their papers just because your own AI model decided the text was AI-generated, so that you can signal-boost your product, seems pretty shitty and exploitative to me, and it is only viable as a product and marketing strategy because of the visceral anti-AI sentiment in some places.

falkensmaize · 17 days ago
“anti-AI sentiment”

No, that’s a straw man, sorry. Skepticism is not the same thing as irrational rejection. It means I don’t believe you until you’ve proven with evidence that what you’re saying is true.

The efficacy and reliability of LLMs require proof. AI companies are pouring extraordinary, unprecedented amounts of money into promoting the idea that their products are intelligent and trustworthy. That marketing push absolutely dwarfs the skeptical voices, and that’s what makes those voices more important at the moment. If the claims made against the named researchers aren’t true, that should be a pretty easy thing for them to refute.

falkensmaize commented on GPTZero finds 100 new hallucinations in NeurIPS 2025 accepted papers   gptzero.me/news/neurips/... · Posted by u/segmenta
aydyn · 17 days ago
>also plagiarism

To me, this is a reminder of how much of a specific minority this forum is.

Nobody I know in real life, personally or at work, has expressed this belief.

I have literally only ever encountered this anti-AI extremism (extremism in the non-pejorative sense) in places like reddit and here.

Clearly, the NeurIPS authors don't agree that using an LLM to help write is "plagiarism", and I would trust their opinions far more than some random redditor's.

falkensmaize · 17 days ago
“Anti-AI extremism”? Seriously?

Where does this bizarre impulse to dogmatically defend LLM output come from? I don’t understand it.

If AI is a reliable, high-quality tool, that will become evident without the need to defend it - it’s got billions (trillions?) of dollars backstopping it. The skeptical pushback is WAY more important right now than the optimistic embrace.

falkensmaize commented on Cursor's latest “browser experiment” implied success without evidence   embedding-shapes.github.i... · Posted by u/embedding-shape
jonathanstrange · 22 days ago
You do realize that AI can already today write fairly complex software autonomously, don't you? It's not as if I haven't tested that. It works quite well for certain tasks and with certain programming languages.

Anyone who knows history knows that people initially tend to underestimate the impact of new technologies, yet few people learn anything from that lesson.

falkensmaize · 21 days ago
What fairly complex software has it written autonomously for you?
falkensmaize commented on A pandemic rescue became a 30-year debt trap   thehill.com/opinion/finan... · Posted by u/iancmceachern
falkensmaize · 21 days ago
Our government needs to get tf out of the “fixing things” business and back to the “maintaining a framework where people can fix things themselves” business that it was designed to be. Everything they “fix” ends up in much worse shape than it was before.
falkensmaize commented on A Calif. teen trusted ChatGPT's drug advice. He died from an overdose   sfgate.com/tech/article/c... · Posted by u/freediver
andsoitis · 23 days ago
> What's the actual difference, in that sense, between that forum or subreddit, and an LLM do you feel?

In a forum, it is the actual people who post who are responsible for sharing the recommendation.

In a chatbot, it is the owner (e.g. OpenAI).

But in neither case are they responsible for a random person who takes the recommendation to heart, who could have applied judgement and critical thinking. They had autonomy and chose not to use their brain.

falkensmaize · 23 days ago
Nah, OpenAI can’t have it both ways. If they’re going to assert that their model is intelligent and capable of replacing human work and authority, they can’t also claim that it (and they) don’t have to take the same responsibility a human would for giving dangerous advice and incitement.
falkensmaize commented on A Calif. teen trusted ChatGPT's drug advice. He died from an overdose   sfgate.com/tech/article/c... · Posted by u/freediver
PeterHolzwarth · 23 days ago
I don't yet see how this case is any different from trusting stuff you see on the web in general. What's unique about the ChatGPT angle that is notably different from any number of forums, dark-net forums, reddit etc? I don't mean that there isn't potentially something unique here, but my initial thought is that this is a case of "an unfortunate kid typed questions into a web browser, and got horrible advice."

This seems like a web problem, not a ChatGPT issue specifically.

I feel that some may respond that ChatGPT/LLMs available for chat on the web are specifically worse by virtue of expressing things with some degree of highly inaccurate authority. But again, I feel this represents the web in general, not uniquely ChatGPT/LLMs.

Is there an angle here I am not picking up on, do you think?

falkensmaize · 23 days ago
AI companies are actively marketing their products as highly intelligent superhuman assistants that are on the cusp of replacing humans in every field of knowledge work, including medicine. People who have not read deeply into how LLMs work do not typically understand that this is not true, and is merely marketing.

So when ChatGPT gives you a confident, highly personalized answer to your question and speaks directly to you as a medical professional would, that is going to carry far more weight and authority with uninformed people than a Reddit comment or a blog post.

u/falkensmaize

Karma: 99 · Cake day: April 6, 2024