I know better than most that there is some baseline productivity we're always trying to hit, one that can sometimes be more of a target than a current state. But the way people talk about their AI workflows is different. It's like everyone has become a tyrannical factory-floor manager, pushing ever further for productivity gains.
Leave this kind of productivity to the bosses, I say! Life is a broader surface than this. We can and should focus on being productive together, but leave your actual life for finer, more sustainable ventures.
Like, where has the decency gone, am I right??
THIS FRIGHTENS ME. Many of us swengs are either going to be FIRE millionaires or living under a bridge within two years.
I've spent this week performing SemPort: I found a TypeScript app that does a needed thing and was able to use a long chain of prompts to get it completely reimplemented in our stack, using Gene Transfer to ensure it uses some existing libraries and concrete techniques present in our existing apps.
Now not only do I have an idiomatic Python port that I can drop right into our stack, but I also have an extremely detailed features/requirements statement for the original TypeScript app, along with the prompts for generating it. I can use this to continuously track that other product as it improves. I also have the "instructions infrastructure" to direct an agent to align new code with our stack. Two reusable skills, a new product, and it took a week.
Because we're not good at curing cancers; we're just good at making people survive better for longer until the cancer gets them. Five-year survival is a lousy metric, but it's the best we can manage and measure.
I'm perfectly happy investing roughly 98% of my savings into the thing that has a solid shot at curing cancers, autoimmune and neurodegenerative diseases. I don't understand why all billionaires aren't doing this.
The plot of Good Will Hunting would like a word.
--
Because it’s not just that agents can be dangerous once they’re installed. The ecosystem that distributes their capabilities and skill registries has already become an attack surface.
^ Okay, once can happen. At least he clearly rewrote the LLM output a little.
That means a malicious “skill” is not just an OpenClaw problem. It is a distribution mechanism that can travel across any agent ecosystem that supports the same standard.
^ Oh oh..
Markdown isn’t “content” in an agent ecosystem. Markdown is an installer.
^ Oh no.
The key point is that this was not “a suspicious link.” This was a complete execution chain disguised as setup instructions.
^ At this point my eyes start bleeding.
This is the type of malware that doesn’t just “infect your computer.” It raids everything valuable on that device
^ Please make it stop.
Skills need provenance. Execution needs mediation. Permissions need to be specific, revocable, and continuously enforced, not granted once and forgotten.
^ Here's what it taught me about B2B sales.
This wasn’t an isolated case. It was a campaign.
^ This isn't just any slop. It's ultraslop.
Not a one-off malicious upload.
A deliberate strategy: use “skills” as the distribution channel, and “prerequisites” as the social engineering wrapper.
^ Not your run-of-the-mill slop, but some of the worst slop.
--
I feel kind of sorry for making you see it, as it might deprive you of enjoying future slop. But you asked for it, and I'm happy to provide.
I'm not the person you replied to, but I imagine he'd give the same examples.
Personally, I couldn't care less if you use AI to help you write. I care about it not being the type of slurry that, pre-AI, was easily avoided by staying off LinkedIn.
I guess I too would be exhausted if I hung on every sentence construction of every corporate blog post I come across like that. But also, I guess I am a barely literate slop enjoyer, so grain of salt and all that.
Also: as someone who doesn't use the AI like this, how does it go beyond run-of-the-mill slop? Like, what happened to make it particularly bad? For something so flattening otherwise, that's kinda interesting, right?
I get the call for "effort", but recently it feels like it's being used to critique the thing without engaging with it.
HN has a policy about not complaining about the website itself when someone posts content within it. These kinds of complaints are starting to feel like they fall under the spirit of that rule, just in their sheer number, their noise, and their potential to derail something substantive. But maybe that's just me.
If you feel like the content is low effort, you can respond by not engaging with it?
Just some thoughts!
I think we should always default to skepticism no matter our priors. Even if that ends up being wrong, it's not a fruitless position compared to the alternative.