grey-area commented on Over fifty new hallucinations in ICLR 2026 submissions   gptzero.me/news/iclr-2026... · Posted by u/puttycat
theoldgreybeard · 8 days ago
If a carpenter builds a crappy shelf “because” his power tools are not calibrated correctly - that’s a crappy carpenter, not a crappy tool.

If a scientist uses an LLM to write a paper with fabricated citations - that’s a crappy scientist.

AI is not the problem, laziness and negligence is. There needs to be serious social consequences to this kind of thing, otherwise we are tacitly endorsing it.

grey-area · 8 days ago
Generative AI and the companies selling it with false promises and using it for real work absolutely are the problem.
grey-area commented on We're losing our voice to LLMs   tonyalicea.dev/blog/were-... · Posted by u/TonyAlicea10
randycupertino · 18 days ago
She read it EXACTLY as written from the ChatGPT response, verbatim. If it was her own unique response there would have been some variation.
grey-area · 11 days ago
What makes you think the LLM wasn't reproducing a snippet from a medical reference?

I mean it's possible an expert in the field was using ChatGPT to answer questions, but it seems rather stupid and improbable, doesn't it? It'd be a good way to completely crash your career when found out.

grey-area commented on AI agents break rules under everyday pressure   spectrum.ieee.org/ai-agen... · Posted by u/pseudolus
stavros · 13 days ago
That's the issue, we don't really know enough about how LLMs work to say, and we definitely don't know enough about how humans work.
grey-area · 11 days ago
We absolutely do; we know exactly how LLMs work. They generate plausible text from a corpus. They don't accurately reproduce data or text, they don't think, they don't have a world view or a world model, and they sometimes generate plausible yet incorrect data.
grey-area commented on AI agents break rules under everyday pressure   spectrum.ieee.org/ai-agen... · Posted by u/pseudolus
stavros · 13 days ago
I keep hearing this non sequitur argument a lot. It's like saying "humans just pick the next word to string together into a sentence, they're not actually dutiful agents". The non sequitur is in assuming that somehow the mechanism of operation dictates the output, which isn't necessarily true.

It's like saying "humans can't be thinking, their brains are just cells that transmit electric impulses". Maybe it's accidentally true that they can't think, but the premise doesn't logically lead to that conclusion.

grey-area · 13 days ago
No it’s not like saying that, because that is not at all what humans do when they think.

This is self-evident when comparing human responses to problems with those of LLMs; you have been taken in by the marketing of 'agents' etc.

grey-area commented on We're losing our voice to LLMs   tonyalicea.dev/blog/were-... · Posted by u/TonyAlicea10
SoftTalker · 18 days ago
Every time I see AI videos in my YouTube recommendations I say “don’t recommend this channel” but the algorithm doesn’t seem to get the hint. Why don’t they make a preference option of “don’t show me AI content”
grey-area · 18 days ago
Because they have a financial incentive not to.
grey-area commented on We're losing our voice to LLMs   tonyalicea.dev/blog/were-... · Posted by u/TonyAlicea10
WD-42 · 18 days ago
Where are these places where everything is written by a LLM? I guess just don’t go there. Most of the comments on HN still seem human.
grey-area · 18 days ago
Many Instagram and Facebook posts are now LLM-generated to farm engagement. The verbosity and breathless excitement tends to give it away.
grey-area commented on We're losing our voice to LLMs   tonyalicea.dev/blog/were-... · Posted by u/TonyAlicea10
randycupertino · 18 days ago
I had a weird LLM use instance happen at work this week. We were in a big important protocol review meeting with 35 remote people and someone asks how long it takes for IUDs to begin taking effect in patients. I put it in ChatGPT for my own reference and read the answer in my head but didn't say anything (I'm ops, I just row the boat and let the docs steer the ship). Anyway, this bigwig Oxford/Johns Hopkins cardiologist who we pay $600k a year pipes up in the meeting and her answer is VERBATIM reading off the ChatGPT language word for word. All she did was ask it the answer and repeat what it said! Anyway it kinda made me sad that all this big fancy doctor is doing is spitting out lazy default ChatGPT answers to guide our research :( Also everyone else in the meeting was so impressed with her, "wow Dr. so and so thank you so much for this helpful update!" etc. :-/
grey-area · 18 days ago
The LLM may well have pulled the answer from a medical reference similar to that used by the doctor. I have no idea why you think an expert in the field would use ChatGPT for a simple question; that would be negligence.
grey-area commented on AWS is 10x slower than a dedicated server for the same price [video]   youtube.com/watch?v=Ps3AI... · Posted by u/wolfgangbabad
aurareturn · 19 days ago
When your cheap dedicated server goes down and your admin is on holiday and you have hundreds of angry customers calling you, you'll get it.

Or you need to restore your Postgres database and you find out that the backups didn't work.

And finally you have a brilliant idea of hiring a second $150k/year dev ops admin so that at least one is always working and they can check each other's work. Suddenly, you're spending $300k on two dev ops admins alone and the cost savings of using cheaper dedicated servers are completely gone.

grey-area · 19 days ago
It is statistically far more likely that your cloud service will go down for hours or days, and you will have no recourse and will just have to wait until AWS manages to resolve it.
grey-area commented on Go's Sweet 16   go.dev/blog/16years... · Posted by u/0xedb
nicodjimenez · a month ago
Microservices in Golang are definitely related to classes due to the ergonomic aspects of the language. It takes a lot of discipline in Golang not to end up with huge flat functions. Golang services are easier to reason about when they are small due to the lack of abstractions; also Golang is very quick to compile, so it's natural to just add services to extend functionality. Code re-use is just a lot of work in Golang. Golang is not monolith friendly IMO.
grey-area · a month ago
It really doesn’t.

Structs and interfaces replace classes just fine.

Reuse is really very easy and I use it for several monoliths currently. Have you tried any of the things you’re talking about with go?
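A minimal sketch of the point about structs and interfaces (names here are illustrative, not from any real codebase): behaviour attaches to a plain struct via methods, and any struct with the right method set satisfies an interface implicitly, no class hierarchy needed.

```go
package main

import "fmt"

// Notifier is satisfied implicitly by any type with a matching
// method set; there is no "implements" declaration.
type Notifier interface {
	Notify(msg string) string
}

// EmailSender is a plain struct bundling data...
type EmailSender struct {
	Addr string
}

// ...and behaviour, via a method on the struct.
func (e EmailSender) Notify(msg string) string {
	return "email to " + e.Addr + ": " + msg
}

func main() {
	// EmailSender satisfies Notifier without naming it anywhere.
	var n Notifier = EmailSender{Addr: "ops@example.com"}
	fmt.Println(n.Notify("build failed"))
}
```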

grey-area commented on Go's Sweet 16   go.dev/blog/16years... · Posted by u/0xedb
nicodjimenez · a month ago
Golang to me is a great runtime and very poor language. I could maybe get used to the C pointer-like syntax and to half of my code checking if err != nil, but the lack of classes is a step too far. The Golang idiomatic approach is to have a sprawling set of microservices talking to each other over the network, to manage complexity instead of having classes. This makes sense for things like systems agents (eg K8) but doesn't make sense for most applications because it complicates the development experience unnecessarily and monoliths are also easier to debug.

I would not use Golang for a big codebase with lots of business logic. Golang has not made a dent in Java usage at big companies, no large company is going to try replacing their Java codebases with Golang because there's no benefit, Java is almost as fast as Golang and has classes and actually has a richer set of concurrency primitives.

grey-area · a month ago
Microservices are entirely unrelated to classes and in no way endemic to go.

Go’s lack of inheritance is one of its bolder decisions and I think has been proven entirely correct in use.

Instead of the incidental complexity encouraged by pointless inheritance hierarchies, we go back to structs, which bundle data and behaviour, and we can compose them instead.

Favouring composition over inheritance is not a new idea nor did it come from the authors of Go.
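A quick sketch of composition via embedding (types here are made up for illustration): embedding one struct in another promotes its methods to the outer type, so behaviour is reused without an inheritance hierarchy.

```go
package main

import "fmt"

// Logger carries behaviour we want to reuse.
type Logger struct {
	Prefix string
}

func (l Logger) Log(msg string) string {
	return l.Prefix + ": " + msg
}

// Server composes Logger by embedding it. The Log method is
// promoted to Server; nothing is inherited or overridden.
type Server struct {
	Logger
	Port int
}

func main() {
	s := Server{Logger: Logger{Prefix: "srv"}, Port: 8080}
	// Calls the promoted method from the embedded Logger.
	fmt.Println(s.Log("listening"))
}
```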

Also the author of Java (Gosling) disagrees with you.

https://www.infoworld.com/article/2160788/why-extends-is-evi...
