Things like em-dashes are a really bad way to detect AI, because they can be perfectly good grammar and improve readability; the same goes for curly quotes. I use them all the time in my own writing, and I wouldn't be surprised if this iOS dev feels similarly, as Apple platforms have emphasised this kind of typography for years.
>No secret. Just vibes.

Since you know the tells of LLM-generated text, you'll know that this is a classic: "No X. Just Y." And from the same post:

>Proxyman -- pick your poison

>And if you're from PureGym reading this—let's talk.

There's a mixture of em dashes joining words and double hyphens spaced between words, suggesting the former were missed in a find-and-replace job. "And if you're from [COMPANY] reading this[EM DASH]let's talk" is a classic GPT-ism.
>It's like the API is saying "Hey buddy, I know this is odd, but can you poll me every minute? Thanks, love you too."

>Shame Notifications: "You were literally 100 meters from the gym and walked past it"

>It's just a ZIP archive with delusions of grandeur
Clear examples of fluff. Not only do these fail to "add facts or colour to the story", they actually detract from it.

I agree with you that em dashes in isolation are not indicative, but the prose here is dripping with GPT-speak.
I manage a few Hetzner cloud instances, and some report perfect uptime for over a year. As for the ones that don't, I was the root cause.
What exactly leads you to make this sort of claim? Do you actually have any data or are you just running your mouth off?
https://news.ycombinator.com/item?id=29651993
https://news.ycombinator.com/item?id=42365295
https://news.ycombinator.com/item?id=44038591
>are you just running your mouth off?
Don't be snarky. Edit out swipes.
Why is the important safety advice buried in a bunch of interpersonal drama and administrivia?
>The AI sector, for example, relies heavily on Python and is mostly untapped for the PSF, PyCon US, and our entire community.
Isn't that precisely what an API is? Normal users do not use the API. Other programs, written by developers, use it to access Claude from their apps. That's like asking why an SDK is described as a special kit for developers to build software around something they wish to integrate into their app.
It's Software Development Kit, not Special Development Kit ;-)
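To make that concrete, here's a minimal sketch of a program (rather than a person) being the API's "user". The endpoint and headers follow Anthropic's published Messages API, but the model name and the rest are illustrative, so treat this as a sketch, not a reference implementation:

    import os
    import requests

    # The "user" of the API is this program: it sends an HTTP request and
    # parses the JSON that comes back. No human-facing UI is involved.
    resp = requests.post(
        "https://api.anthropic.com/v1/messages",
        headers={
            "x-api-key": os.environ["ANTHROPIC_API_KEY"],
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        json={
            "model": "claude-3-5-sonnet-20241022",  # illustrative; use whatever model is current
            "max_tokens": 256,
            "messages": [{"role": "user", "content": "Hello from my app"}],
        },
    )
    resp.raise_for_status()
    print(resp.json()["content"][0]["text"])

An SDK is then just this HTTP call wrapped in typed, versioned functions so developers don't have to hand-roll requests.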
See this comment from 2019: https://news.ycombinator.com/item?id=19178938
I will leave you with this line from README.md:
>I am not as knowledgeable about hardening/securing a Linux kernel as I'd like. As much as I hate to admit it, I do not know what all of these settings do.
Devil's advocate: why does it matter (apart from "it feels wrong")? As long as the conclusions are sound, why is it relevant whether AI helped with the writing of the report?
TL;DR: Because of the bullshit asymmetry principle (Brandolini's law: refuting bullshit takes an order of magnitude more effort than producing it). Maybe the conclusions below are sound; have a read and try to wade through ;-)
Let us address the underlying assumptions and implications in the argument that the provenance of a report, specifically whether it was written with the assistance of AI, should not matter as long as the conclusions are sound.
This position, while intuitively appealing in its focus on the end result, overlooks several important dimensions of communication, trust, and epistemic responsibility. The process by which information is generated is not merely a trivial detail; it is a critical component of how that information is evaluated, contextualized, and ultimately trusted by its audience. The notion that it "feels wrong" is not simply a matter of subjective discomfort, but often reflects deeper concerns about transparency, accountability, and the potential for subtle biases or errors introduced by automated systems.
In academic, journalistic, and technical contexts, the methodology is often as important as the findings themselves. If a report is generated or heavily assisted by AI, it may inherit certain limitations, such as a lack of domain-specific nuance, the potential for hallucinated facts, or the unintentional propagation of biases present in the training data. Disclosing the use of AI is not about stigmatizing the tool, but about providing the audience with the necessary context to critically assess the reliability and limitations of the information presented. This is especially pertinent in environments where accuracy and trust are paramount, and where the audience may need to know whether to apply additional scrutiny or verification.
Transparency about the use of AI is a matter of intellectual honesty and respect for the audience. When readers are aware of the tools and processes behind a piece of writing, they are better equipped to interpret its strengths and weaknesses. Concealing or omitting this information, even unintentionally, can erode trust if it is later discovered, leading to skepticism not just about the specific report, but about the integrity of the author or institution as a whole.
This is not a hypothetical concern; there are numerous documented cases (e.g. in legal filings: https://www.damiencharlotin.com/hallucinations/) where a lack of disclosure about AI involvement has led to public backlash or diminished credibility. Thus, the call for transparency is not a pedantic demand, but a practical safeguard for maintaining trust in an era where the boundaries between human- and machine-generated content are increasingly blurred.