So while it might make you feel smart to squawk "LLM generated!111" every time you see an em-dash, like you're a stochastic parrot with a million parameters, it's a lazy dismissal and it tramples curiosity.
And I'm responding to a comment that was itself generated by an LLM instructed to complain about LLM-generated content in a single sentence. At the end of the day, we're all stochastic parrots. How about you respond to the substance of the comment rather than to whether or not there was an em-dash? Unless you have no substance.
But if you want a sense of how I noticed (before I confirmed my suspicion with machine assistance), here are some tells, sketched as a toy detector after the list: "Large firms are cautious in regulatory filings because they must disclose risks, not hype." - "[x], not [y]"
"The suggestion that companies only adopt AI out of fear of missing out ignores the concrete examples already in place." - "concrete examples" as a phrase is (unfortunately) heavily over-represented in LLM-generated content.
"Stock prices reflect broader market conditions, not just adoption of a single technology." - "[x], not [y]" - again!
"Failures of workplace pilots usually result from integration challenges, not because the technology lacks value." - a third time.
"The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest." - not just the infamous emdash, but the phrasing is extremely typical of LLMs.
Mentioning the mid-1990s internet boom is somewhat ironic imo, given what happened next. The question is whether "business models mature" with or without a market crash, given that the vast majority of ML money flows into LLM efforts.
The H-1B program is rife with abuse. This Bloomberg article reports that half of all H-1B visas go to Indian staffing firms that pay significantly less than the US workers being replaced:
- https://www.bloomberg.com/graphics/2025-h1b-visa-middlemen-c...
When they give the model a paycheck and the right to not work for them, I’ll believe they really think it’s sentient.
“It has feelings!”, if genuinely believed, means they’re knowingly slaveholders.
(Also, they did in fact give it the ability to terminate conversations...?)
I thought this was well known; even the people who build the AIs we use talk about this and acknowledge their limitations.