Switching from GitHub Actions' default runners is exactly a one-line change.
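For a typical workflow, that one line is the `runs-on` label. A minimal sketch (the `warp-ubuntu-latest-x64-2x` label here is illustrative; check the docs for the exact labels available on your plan):

```yaml
jobs:
  build:
    # Before: runs-on: ubuntu-latest
    runs-on: warp-ubuntu-latest-x64-2x  # illustrative runner label; see docs
    steps:
      - uses: actions/checkout@v4
      - run: make test
```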
Migrating from `actions-runner-controller` on k8s or from self-managed VMs may differ depending on the specific customizations in place. However, we have import flows for the former that directly import the custom containers used in k8s, and an AMI import is coming soon.
There shouldn't be any stack-specific behavior, except with caches. We have introduced fast custom caching actions, with stack-specific instructions in the docs [1].
[1] https://docs.warpbuild.com/cache/quickstart#ruby
hth.
> [1] In traditional AI, agents are defined entities that perceive and act upon their environment, but that definition is less useful in the LLM era — even a thermostat would qualify as an agent under that definition.
I'm a huge believer in the power of agents, but this kind of complete ignorance of the history of AI gets frustrating. This statement betrays a gross misunderstanding of how simple agents have been viewed.
If you're serious about agents, then Minsky's The Society of Mind should be on your desk. From the opening chapter:
> We want to explain intelligence as a combination of simpler things. This means that we must be sure to check, at every step, that none of our agents is, itself, intelligent... Accordingly, whenever we find that an agent has to do anything complicated, we'll replace it with a subsociety of agents that do simpler things.
Instead, this write-up completely ignores the logic of one of the seminal works on the topic (and it's okay to disagree with Minsky, I sure do, but you need to at least acknowledge it) and jumps straight to assuming the future of agents must be immensely complex.
Automatic thermostats existed in the early days of research on agents, and the key to a thermostat being an agent is its ability to communicate with other agents automatically and collectively perform complex actions.
Ultimately, that effort failed, but I don't see any awareness of that considerable volume of work reflected in today's use of the word "agent". If nothing else, there was a lot of work on the use-cases and human factors.
It's just a bit disheartening to know that so much work, by hundreds of researchers (at least), over 10+ years, has just slipped into irrelevance.
Been blogging since ~2000 but archived most of the old stuff. Just rebuilt it on Bridgetown, Tailwind, and Cloudflare Pages because I had some free time.
Best recent blog post is about my re-discovery of hobbies during sabbatical: https://jamie.ideasasylum.com/2023/07/02/hobbies
"we'll meet at four your time"
"great!"
"why weren't you there?? I googled 'current time utc!!'"
"because we're on BST, aka IST, aka UTC+1 in the summer"
But this rando website says UK/Ireland is UTC!!
Someday, somehow, we'll teach people that if you're using PST in the summer, there's a 99% chance you're wrong.
Now do you think small firms can’t hold large quantities of damaging data?
What if agentic coding sessions are triggering a dopamine feedback loop similar to social media apps? Obviously not to the same degree, I mean coding for work is still "work"... but there's maybe some similarity in getting iterative solutions from the agent, triggering something in your brain each time, yes?
If that were the case, wouldn't we expect developers to have an overly positive perception of AI because they're literally becoming addicted to it?
There's no flow state to be achieved with AI tools (at the moment)