Readit News
artski commented on As Nuclear Power Makes a Comeback, South Korea Emerges a Winner   bloomberg.com/news/featur... · Posted by u/JumpCrisscross
artski · 10 months ago
I don’t know how South Korea works politically, but I know an example from Malaysia: they spent years on their nuclear road map, then a new administration came in that hated nuclear and it got scrapped. Now they are back to one that doesn’t mind it and have to start from zero.
artski commented on Trump wants coal to power AI data centers   cnbc.com/2025/05/17/trump... · Posted by u/melling
artski · 10 months ago
Yeah I think the world is screwed. These aren't things you can shut down instantly without losing a bunch of money and the time that wasn't spent building better alternatives - all these projects have long lead times.
artski commented on Maybe we should be designing for machines too   substack.com/sign-in... · Posted by u/artski
artski · 10 months ago
I’ve been thinking a lot about how new features and systems are built lately, especially with everything that’s happened over the past few years. It’s interesting how most of the AI stuff we see in products today is basically tacked on after the fact to chase the trend - some more valuable than others, depending on how forced it feels. You build your tool, your dashboard, your app, and then you try to layer in some sort of automation or “assistant” once it’s already working. And I get why - it makes sense when you’ve already got an established thing and you want to enhance it without breaking what people rely on. I did a longer writeup on Substack about it, but figured I'd expand the discussion here.

But I wonder if we’re now at a point where that can’t really be the default anymore. If you’re building something new in 2025, whether it’s a product, internal tool, or even just a feature, maybe it should be designed from the ground up to be usable not just by a human clicking buttons, but by another system entirely. A model, a script, an orchestration layer - whatever you want to call it.

It’s not about being “AI-first” in the marketing sense. It’s more about thinking: can this thing I’m building be used by something else without needing a human in the loop? Can it expose its core functions as callable actions? Can its state be inspected in a structured way? Can it be reasoned about or composed into a workflow? That kind of thinking, I suspect, will become the baseline expectation - not just a “nice to have.”
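To make those questions concrete, here's a minimal sketch of what "usable by something else" could look like. Everything here is hypothetical (the class, the method names, the schema shape are mine, not from any particular framework) - the point is just that core functions are callable, self-describing, and state is inspectable:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a feature exposed as a callable tool with
# inspectable state, rather than logic buried behind UI click handlers.
@dataclass
class ReportTool:
    runs: list = field(default_factory=list)

    def describe(self) -> dict:
        """Machine-readable contract: what this tool does and accepts."""
        return {
            "name": "generate_report",
            "inputs": {"start": "ISO date", "end": "ISO date"},
            "returns": "report id",
        }

    def generate_report(self, start: str, end: str) -> str:
        """The core action, callable by a human UI, a script, or an agent."""
        report_id = f"report-{len(self.runs) + 1}"
        self.runs.append({"id": report_id, "start": start, "end": end})
        return report_id

    def state(self) -> dict:
        """Structured, inspectable state for an orchestrator to reason about."""
        return {"run_count": len(self.runs), "runs": self.runs}

tool = ReportTool()
rid = tool.generate_report("2025-01-01", "2025-01-31")
print(rid)  # report-1
```

Nothing exotic - but because `describe()` and `state()` exist, another system can discover and drive this feature without scraping a UI.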

It’s also not really that complicated. Most of the time it just means thinking in terms of well-structured APIs, surfacing decisions and logs clearly, and not baking critical functionality too deeply into the front-end. But the shift is mental. You start designing features as tools - not just user flows - and that opens up all kinds of new possibilities. For example, someone might plug your service into a broader workflow and have it run unattended, or an LLM might be able to introspect your system state and take useful actions, or you can just let users automate things with much less effort.

There’s been some early but interesting work around formalising how systems expose their capabilities to automation layers. One effort I’ve been keeping an eye on is MCP (the Model Context Protocol). In short, it aims to let a service describe what it can do - what functions it offers, what inputs it accepts, what guarantees or permissions it requires - in a way that downstream agents or orchestrators can understand without brittle hand-tuned wrappers. It’s still early days, but if this sort of approach gains traction, I can imagine a future where this kind of “self-describing system contract” becomes part of the baseline for interoperability. Kind of like how APIs used to be considered secondary, and now they are the product. It’s not there yet, but if autonomous coordination becomes more common, this may quietly become essential infrastructure.
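For illustration, a "self-describing system contract" might look something like the manifest below. To be clear, this is a made-up capability manifest in the spirit of that idea, not the actual MCP wire format (real MCP uses JSON-RPC and its own tool schemas); the service and tool names are invented:

```python
import json

# Illustrative only: an invented "capability manifest" showing the idea of
# a self-describing contract. Not the real MCP format.
manifest = {
    "service": "invoice-api",
    "tools": [
        {
            "name": "create_invoice",
            "inputs": {"customer_id": "string", "amount_cents": "integer"},
            "permissions": ["billing:write"],
        },
        {
            "name": "get_invoice",
            "inputs": {"invoice_id": "string"},
            "permissions": ["billing:read"],
        },
    ],
}

# A downstream agent can discover callable actions and their requirements
# without a hand-tuned wrapper per service:
names = [t["name"] for t in manifest["tools"]]
print(json.dumps(names))  # ["create_invoice", "get_invoice"]
```

The interesting part isn't the JSON itself - it's that the contract is published by the service, so orchestrators don't each maintain their own brittle model of what it can do.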

I don’t know. Just a thought I’ve been chewing on. Curious what other people think. Is anyone building things with this mindset already or are there good examples out there of products or platforms that got this right from day one?

artski commented on Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps   github.com/m-ahmed-elbesk... · Posted by u/artski
Yiling-J · 10 months ago
It would be interesting if there were an AI tool to analyze the growth pattern of an OSS project. The tool should work based on star info from the GitHub API and perform some web searches based on that info.

For example: the project gets 1,000 stars on 2024-07-23 because it was posted on Hacker News and received 100 comments (<link>). Below is the static info of stargazers during this period: ...

artski · 10 months ago
Yeah, I thought about this and maybe down the line, but I wanted to start with the pure statistics part as the base so it's as little of a black box as possible.
artski commented on Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps   github.com/m-ahmed-elbesk... · Posted by u/artski
coffeeboy · 10 months ago
Very nice! I'm personally looking into bot account detection for my own service and have come up with very similar heuristics (albeit simpler ones since I'm doing this at scale) so I will provide some additional ones that I have discovered:

1. Fork to stars ratio. I've noticed that several of the "bot" repos have the same number of forks as stars (or rather, most ratios are above 0.5). Typically a project doesn't have nearly as many forks as stars.

2. Fake repo owners clone real projects and push them directly to their account (not fork) and impersonate the real project to try and make their account look real.

Example bot account with both strategies employed: https://github.com/algariis
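The first heuristic above (fork-to-star ratio) is simple enough to sketch. The function name and the 0.5 threshold are taken from the comment, but the exact cutoff would need tuning in practice:

```python
# Sketch of the fork-to-star heuristic: flag repos whose ratio of forks
# to stars is suspiciously high. Organic projects typically have far
# fewer forks than stars; bot-farmed repos often have near-equal counts.
def suspicious_fork_ratio(forks: int, stars: int, threshold: float = 0.5) -> bool:
    if stars == 0:
        return False  # no stars, no signal
    return forks / stars >= threshold

print(suspicious_fork_ratio(forks=480, stars=500))  # True  (ratio 0.96)
print(suspicious_fork_ratio(forks=30, stars=500))   # False (ratio 0.06)
```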

artski · 10 months ago
Crazy how far people go for these things tbh.
artski commented on Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps   github.com/m-ahmed-elbesk... · Posted by u/artski
sesm · 10 months ago
How does it differentiate between organic (like project posted on HN) and inorganic star spikes?
artski · 10 months ago
For each spike it samples the users from that spike (currently I set the sample size high enough that it essentially gets all of them for 99.99% of repos - that should be optimised for speed, but for now I figured I'd just grab every single one while building it). It then checks the users behind the spike for signs of being "fake accounts".
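The two-step approach described here - detect spikes, then vet the accounts behind them - can be sketched roughly as follows. The function names, the spike threshold, and the "fake account" signals are all hypothetical simplifications; the real tool uses many more heuristics:

```python
from datetime import date

# Hypothetical sketch of spike detection: flag days whose star count is
# well above the daily average. A real implementation would use a rolling
# window and a tuned threshold.
def find_spikes(daily_stars: dict, multiplier: float = 2.0) -> list:
    avg = sum(daily_stars.values()) / len(daily_stars)
    return [day for day, n in daily_stars.items() if n >= multiplier * avg]

def looks_fake(user: dict) -> bool:
    """Toy signals only: empty accounts with no followers and no repos."""
    return user["followers"] == 0 and user["repos"] == 0

daily = {date(2024, 7, 22): 3, date(2024, 7, 23): 1000, date(2024, 7, 24): 5}
spikes = find_spikes(daily)
print(spikes)  # [datetime.date(2024, 7, 23)]
```

Once a spike day is identified, you'd fetch its stargazers and compute what fraction trip `looks_fake`-style checks - a high fraction suggests the spike was inorganic, whereas an HN-driven spike pulls in mostly established accounts.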
artski commented on Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps   github.com/m-ahmed-elbesk... · Posted by u/artski
zxilly · 10 months ago
I checked your past submissions and yes, they are also ai generated.

I know it's the age of ai, but one should do a little checking oneself before posting ai generated content, right? Or at least one should know how to use git and write meaningful commit messages?

artski · 10 months ago
It's a project I'm making purely for myself and I like to share what I make - sorry I didn't put much effort into the commit messages; I won't do that again.
artski commented on Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps   github.com/m-ahmed-elbesk... · Posted by u/artski
zxilly · 10 months ago
Frankly, I think this program is ai generated.

1. There are hallucinatory descriptions in the README (`make test`), and also in the code, such as the rate limit set at line 158, which is the wrong number.

2. All commits were done through the GitHub web UI; checking the signatures confirms this.

3. Overly verbose function names and a 2,000-line Python file.

I don't have a complaint about AI, but the code quality clearly needs improvement: the license check only lists a few common examples, the detection thresholds seem to be set randomly, and the entire `_get_stargazers_graphql` function is commented out and does nothing - it says "Currently bypassed by get_stargazers". Did you generate the code without even reading through it?

Bad code like this gets over 100 stars; it seems like you're doing satirical fake-star performance art.

artski · 10 months ago
Well, I initially planned to use GraphQL and started to implement it, but switched to REST for now since the GraphQL path is still incomplete, it keeps things simpler while I iterate, and it isn't required currently. I'll bring GraphQL back once I've got key cycling in place and things are more stable. As for the rate limit, I've been tweaking things manually to avoid hitting it constantly, which I did to an extent - that's actually why I want to add key rotation... And I'm allowed to leave comments for myself in a work in progress, no? Or does everything have to be perfect from day one?

You would assume that if it were purely AI generated, it would have the correct rate limit in the comments and the code... but honestly I don't care, and yeah, I ran the README through GPT to 'prettify' it. Arrest me.

u/artski

Karma: 56 · Cake day: April 4, 2025