But I wonder if we’re now at a point where that can’t really be the default anymore. If you’re building something new in 2025, whether it’s a product, internal tool, or even just a feature, maybe it should be designed from the ground up to be usable not just by a human clicking buttons, but by another system entirely. A model, a script, an orchestration layer - whatever you want to call it.
It’s not about being “AI-first” in the marketing sense. It’s more about thinking: can this thing I’m building be used by something else without needing a human in the loop? Can it expose its core functions as callable actions? Can its state be inspected in a structured way? Can it be reasoned about or composed into a workflow? That kind of thinking, I suspect, will become the baseline expectation - not just a “nice to have.”
It’s also not really that complicated. Most of the time it just means thinking in terms of well-structured APIs, surfacing decisions and logs clearly, and not baking critical functionality too deeply into the front-end. But the shift is mental. You start designing features as tools - not just user flows - and that opens up all kinds of new possibilities. For example, someone might plug your service into a broader workflow and have it run unattended, or an LLM might be able to introspect your system state and take useful actions, or you can just let users automate things with much less effort.
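To make that concrete, here’s a rough sketch of what “designing a feature as a tool” can mean in practice. All names and the schema shape here are made up for illustration - this isn’t any particular framework’s API:

```python
import json

# Hypothetical example: an "archive a project" feature exposed as a
# plain callable action instead of being baked into a front-end handler.

def archive_project(project_id: str, reason: str = "") -> dict:
    """Core action, callable by a UI, a script, or an agent alike."""
    # ... real persistence logic would go here ...
    return {"project_id": project_id, "status": "archived", "reason": reason}

# A machine-readable description of the action, so an orchestrator can
# discover and call it without scraping the front-end.
ARCHIVE_TOOL = {
    "name": "archive_project",
    "description": "Archive a project and record why.",
    "inputs": {
        "project_id": {"type": "string", "required": True},
        "reason": {"type": "string", "required": False},
    },
}

if __name__ == "__main__":
    print(json.dumps(archive_project("proj-42", "inactive"), indent=2))
```

The point isn’t the specific shape of the schema - it’s that the action and its description both exist outside the UI, so anything that can read JSON and call a function can use the feature.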
There’s been some early but interesting work around formalising how systems expose their capabilities to automation layers. One effort I’ve been keeping an eye on is MCP (the Model Context Protocol). In short, it aims to let a service describe what it can do - what functions it offers, what inputs it accepts, what guarantees or permissions it requires - in a way that downstream agents or orchestrators can understand without brittle hand-tuned wrappers. It’s still early days, but if this sort of approach gains traction, I can imagine a future where this kind of “self-describing system contract” becomes part of the baseline for interoperability. Kind of like how APIs used to be considered secondary, and now they are the product. It’s not there yet, but if autonomous coordination becomes more common, this may quietly become essential infrastructure.
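For a rough flavour of what such a self-describing contract looks like, here’s a simplified sketch loosely modelled on how MCP-style systems describe tools (the tool name and schema are invented; don’t treat this as spec-accurate):

```python
# The service advertises what it can do as data, and a generic client
# can reason about it before making a single call.
TOOL_MANIFEST = {
    "tools": [
        {
            "name": "search_orders",
            "description": "Search orders by customer and date range.",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "customer_id": {"type": "string"},
                    "since": {"type": "string", "format": "date"},
                },
                "required": ["customer_id"],
            },
        }
    ]
}

def required_inputs(manifest: dict, tool_name: str) -> list:
    """What a generic client can figure out with no hand-written wrapper."""
    for tool in manifest["tools"]:
        if tool["name"] == tool_name:
            return tool["inputSchema"].get("required", [])
    raise KeyError(tool_name)

print(required_inputs(TOOL_MANIFEST, "search_orders"))  # -> ['customer_id']
```

The interesting part is that the client code above knows nothing about orders; it only knows the manifest format. That’s the “contract” doing the work.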
I don’t know. Just a thought I’ve been chewing on. Curious what other people think. Is anyone building things with this mindset already, or are there good examples out there of products or platforms that got this right from day one?
For example: the project gains 1,000 stars on 2024-07-23 because it was posted on Hacker News and received 100 comments (<link>). Below are the statistics on stargazers during this period: ...
1. Fork-to-star ratio. I've noticed that several of the "bot" repos have the same number of forks as stars (or rather, most ratios are above 0.5). A typical project doesn't have nearly as many forks as stars.
2. Fake repo owners clone real projects and push them directly to their own accounts (not a fork), impersonating the real project to make their accounts look legitimate.
Example bot account with both strategies employed: https://github.com/algariis
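The ratio heuristic is trivial to compute. A sketch (the 0.5 threshold is illustrative, echoing the rough cutoff mentioned above, not a tuned value):

```python
def fork_star_ratio_suspicious(stars: int, forks: int,
                               threshold: float = 0.5) -> bool:
    """Flag repos where forks/stars is unusually high.

    Organic projects typically have far fewer forks than stars; bot
    farms often fork and star with the same accounts, pushing the
    ratio toward 1. The 0.5 threshold is illustrative, not tuned.
    """
    if stars == 0:
        return False  # nothing to judge
    return forks / stars > threshold

# Typical organic project: ~5k stars, ~300 forks -> not flagged.
print(fork_star_ratio_suspicious(5000, 300))   # False
# Bot-inflated repo: forks nearly equal to stars -> flagged.
print(fork_star_ratio_suspicious(1200, 1100))  # True
```

Both counts are available directly from the GitHub REST API's repository object (`stargazers_count` and `forks_count`), so this check doesn't even need to enumerate individual stargazers.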
I know it's the age of AI, but one should do a little checking oneself before posting AI-generated content, right? Or at least know how to use git and write meaningful commit messages?
1. There are hallucinated descriptions in the README (e.g. `make test`), and also in the code, such as the rate limit set at line 158, which is the wrong number.
2. All commits were made through the GitHub web UI; checking the signatures confirms this.
3. Overly verbose function names and a 2,000-line Python file.
I don't have a complaint about AI as such, but the code quality clearly needs improvement: the license only lists a few common examples, the detection thresholds seem to be set randomly, and the entire `_get_stargazers_graphql` function is commented out and performs no action - it just says "Currently bypassed by get_stargazers". Did you generate the code without even reading through it?
Bad code like this gets over 100 stars; it seems like you're doing satirical fake-star performance art.
You would assume that if it were purely AI-generated it would have the correct rate limit in the comments and the code... but honestly I don't care, and yeah, I ran the README through GPT to 'prettify it'. Arrest me.