Readit News
slacktivism123 commented on GPT-5 has a hidden system prompt   simonwillison.net/2025/Au... · Posted by u/gronky_
slacktivism123 · 9 days ago
Of course it does. Do you really think you have full control over API output? Do you really think "the system prompt you can specify in an API call" is the system prompt and not the developer instructions prompt?
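To make the distinction concrete, here is a minimal sketch (assuming the OpenAI Python SDK; the model name comes from the linked post, and the prompt text is purely illustrative). The comment's point is that the "system" message you supply through the API is delivered as developer instructions, sitting below whatever hidden system prompt the provider prepends on its side.

    # Minimal sketch, assuming the OpenAI Python SDK (pip install openai).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-5",  # model name taken from the linked post
        messages=[
            # The "system" message specified by the API caller: per the
            # comment, this is treated as developer instructions rather than
            # the topmost (hidden) system prompt the model actually sees.
            {"role": "system", "content": "You are a terse assistant."},
            {"role": "user", "content": "Summarise your instructions."},
        ],
    )
    print(response.choices[0].message.content)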
slacktivism123 commented on I accidentally became PureGym’s unofficial Apple Wallet developer   drobinin.com/posts/how-i-... · Posted by u/valzevul
danpalmer · 9 days ago
It doesn't to me. I can tell AI writing because it includes irrelevant details that don't add facts or colour to the story, but this doesn't really have any of that. The tangents come across as human, not as AI doing a bad impression of a human.

Things like em-dashes are a really bad way to detect AI because they can be good grammar and improve text readability; the same goes for curly quotes. I use them all the time in my own writing, and I wouldn't be surprised if this iOS dev feels similarly, as Apple platforms have emphasised this sort of typography for years.

slacktivism123 · 9 days ago

    No secret. Just vibes.
Since you know the tells of LLM-generated text, you'll recognise this as a classic: No X. Just Y.

    Proxyman -- pick your poison.

    And if you're from PureGym reading this—let's talk.
There's a mixture of em dashes joining words and spaced double hyphens between words, suggesting the former were missed in a find-and-replace job.

"And if you're from [COMPANY] reading this[EM DASH]let's talk" is a classic GPT-ism.

    It's like the API is saying "Hey buddy, I know this is odd, but can you poll me every minute? Thanks, love you too."

    Shame Notifications: "You were literally 100 meters from the gym and walked past it"

    It's just a ZIP archive with delusions of grandeur
Clear examples of fluff. Not only do these fail to "add facts or colour to the story", they actually detract from it.

I agree with you that em dashes in isolation are not indicative, but the prose here is dripping with GPT-speak.

slacktivism123 commented on OpenFreeMap survived 100k requests per second   blog.hyperknot.com/p/open... · Posted by u/hyperknot
motorest · 15 days ago
> "€20/month from Hetzner" is great until you actually need it to be up and working when you need it.

I manage a few Hetzner cloud instances, and some report perfect uptime for over a year. For the ones that don't, I was the root cause.

What exactly leads you to make this sort of claim? Do you actually have any data, or are you just running your mouth off?

slacktivism123 · 15 days ago
slacktivism123 commented on Fire hazard of WHY2025 badge due to 18650 Li-Ion cells   wiki.why2025.org/Badge/Fi... · Posted by u/fjfaase
slacktivism123 · 19 days ago
What's with the mock security advisory, complete with a logo for the 'vulnerability' (Heartbleed, anyone?)?

Why is the important safety advice buried in a bunch of interpersonal drama and administrivia?

slacktivism123 commented on The PSF has paused our Grants Program   pyfound.blogspot.com/2025... · Posted by u/cmaureir
slacktivism123 · 19 days ago
>In an ideal world, we wouldn’t need to pause the Grants Program and would instead be granting even MORE awards to our inspiring community.

>The AI sector, for example, relies heavily on Python and is mostly untapped for the PSF, PyCon US, and our entire community.

slacktivism123 commented on Anthropic revokes OpenAI's access to Claude   wired.com/story/anthropic... · Posted by u/minimaxir
dylan604 · 23 days ago
> Seriously though, why phrase API use of Claude as "special developer access"?

Isn't that precisely what an API is? Normal users do not use the API; other programs, written by developers, use it to access Claude from their apps. That's like asking why an SDK is described as a special kit for developers to build software that works with something they want to integrate into their app.

slacktivism123 · 23 days ago
>That's like asking why is an SDK phrased as a special kit

It's Software Development Kit, not Special Developer Kit ;-)

slacktivism123 commented on How to Secure a Linux Server   github.com/imthenachoman/... · Posted by u/redbell
slacktivism123 · 24 days ago
This guide ignores many sane defaults in favor of a patchwork of cargo-cult scripts and outdated packages, added over time by random contributors with no thought for threat modeling, which may even increase the attack surface.

See this comment from 2019: https://news.ycombinator.com/item?id=19178938

I will leave you with this line from README.md:

>I am not as knowledgeable about hardening/securing a Linux kernel as I'd like. As much as I hate to admit it, I do not know what all of these settings do.

slacktivism123 commented on Performance and telemetry analysis of Trae IDE, ByteDance's VSCode fork   github.com/segmentationf4... · Posted by u/segfault22
marksomnian · a month ago
> might be good to mention that for transparency, because people can tell anyway and it might feel slightly otherwise

Devil's advocate: why does it matter (apart from "it feels wrong")? As long as the conclusions are sound, why is it relevant whether AI helped with the writing of the report?

slacktivism123 · a month ago
> As long as the conclusions are sound, why is it relevant whether AI helped with the writing of the report?

TL;DR: Because of the bullshit asymmetry principle. Maybe the conclusions below are sound; have a read and try to wade through them ;-)

Let us address the underlying assumptions and implications in the argument that the provenance of a report, specifically whether it was written with the assistance of AI, should not matter as long as the conclusions are sound.

This position, while intuitively appealing in its focus on the end result, overlooks several important dimensions of communication, trust, and epistemic responsibility. The process by which information is generated is not merely a trivial detail; it is a critical component of how that information is evaluated, contextualized, and ultimately trusted by its audience. The notion that it "feels wrong" is not simply a matter of subjective discomfort, but often reflects deeper concerns about transparency, accountability, and the potential for subtle biases or errors introduced by automated systems.

In academic, journalistic, and technical contexts, the methodology is often as important as the findings themselves. If a report is generated or heavily assisted by AI, it may inherit certain limitations, such as a lack of domain-specific nuance, the potential for hallucinated facts, or the unintentional propagation of biases present in the training data. Disclosing the use of AI is not about stigmatizing the tool, but about providing the audience with the necessary context to critically assess the reliability and limitations of the information presented. This is especially pertinent in environments where accuracy and trust are paramount, and where the audience may need to know whether to apply additional scrutiny or verification.

Transparency about the use of AI is a matter of intellectual honesty and respect for the audience. When readers are aware of the tools and processes behind a piece of writing, they are better equipped to interpret its strengths and weaknesses. Concealing or omitting this information, even unintentionally, can erode trust if it is later discovered, leading to skepticism not just about the specific report, but about the integrity of the author or institution as a whole.

This is not a hypothetical concern: there are numerous documented cases (e.g. in legal filings, https://www.damiencharlotin.com/hallucinations/) where a lack of disclosure about AI involvement has led to public backlash or diminished credibility. Thus, the call for transparency is not a pedantic demand, but a practical safeguard for maintaining trust in an era where the boundaries between human and machine-generated content are increasingly blurred.

u/slacktivism123

Karma: 95 · Cake day: July 10, 2025
About
I ♥ ~~Big Data~~ ~~IoT~~ ~~Blockchain~~ ~~Edge Compute~~ ~~NFT~~ ~~Metaverse~~ Generative AI