It's easy to draw parallels between what's described and those dysfunctions. In case you're not familiar, this framework by Patrick Lencioni[0] outlines five obstacles that can undermine a team's ability to function: absence of trust, fear of conflict, lack of commitment, avoidance of accountability, and inattention to results.
Particularly relevant to this situation:

> Fear of conflict: seeking artificial harmony over constructive passionate debate
Just to warn you though, there is a tradeoff. You can also just act like an asshole and cite a culture of toxic positivity if people take issue with your behavior. The key is a collaborative, productive focus on outcomes with the other human beings involved in the endeavor.
[0] https://en.wikipedia.org/wiki/The_Five_Dysfunctions_of_a_Tea...
We use it for our media supply chain, which processes a few hundred videos daily across various systems.
Most other teams drank the AWS Step Functions Kool-Aid and have thousands of Lambdas deployed, with insane development friction and surprisingly high costs. I just found out today that we spend $6k a month on state transitions. Really?!
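(For scale: assuming Standard workflows at the list price of roughly $0.025 per 1,000 state transitions, $6k a month works out to something like 240 million transitions.)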
He mentioned this:
> With FLAME, your dev and test runners simply run on the local backend.
and this:
> by default, FLAME ships with a LocalBackend
Can't wait for the deep dive on how that works
That said, I don't understand this bit:
> Leaning on your worker queue purely for offloaded execution means writing all the glue code to get the data into and out of the job, and back to the caller or end-user’s device somehow
I assumed by "worker queue" they were talking about something akin to Celery in Python land, but it actually does handle all this glue. As far as I can tell, Celery provides a very similar developer experience to FLAME, with the added benefit that if you do want durability, those knobs are there. The only real downside seems to be that you need Redis or RabbitMQ to facilitate it? I don't have any experience with them, but I'd assume it's the same story with other languages/frameworks (e.g. Ruby + Sidekiq)?
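For example, this is the kind of glue I mean Celery already handles (a rough sketch, untested and not from the article; the module name, task, and Redis URLs are just placeholders):

```python
# Minimal sketch of Celery handling the "glue": args are serialized to the
# broker, a worker picks the job up, and the result comes back to the caller
# via the result backend. Assumes a local Redis for both broker and backend.
from celery import Celery

app = Celery(
    "media",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)

@app.task
def generate_thumbnail(video_path: str) -> str:
    # Placeholder for the CPU-heavy work you'd rather not run on the web node.
    return f"thumbnails/{video_path.rsplit('/', 1)[-1]}.jpg"

if __name__ == "__main__":
    # Caller side: dispatch to a worker and block until the result arrives.
    async_result = generate_thumbnail.delay("videos/clip.mp4")
    print(async_result.get(timeout=300))
```

(You'd run a worker separately with `celery -A media worker`; the dispatch-and-get round trip is the part I'd otherwise expect to have to wire up by hand.)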
Maybe I'm missing something.
He's made the distinction in the article that those tools are great when you need durability, but this gives you a lower-ceremony way to make it Just Work™ when all you're after is passing off the work.
For example, if I used the Heroku API to do the equivalent of ps:scale to boot up more nodes, how would those new nodes (dynos in Heroku parlance) know what kind of pool members they are? I don't think there is a way to do dyno-specific env vars; they apply at the app level.
If anyone tries to do a Heroku backend before I do, an alternative might be to use distinct process types in the Procfile for each named pool and ps:scale those to 0 or more.
Also, you might need something like Supabase's libcluster_postgres[1] to fully pull it off.
EDIT2: So a Heroku backend would be a challenge. You'd maybe have to use something like the formation API[2] to spawn the pool (rough sketch after the links), but even then the runners can't idle themselves down, because Heroku will try to start them back up; i.e. there's no `restart: false` that I can find in the docs. Alternatively, you could use the dyno API[3] with a timeout set up front (no idle awareness).
[1] https://github.com/supabase/libcluster_postgres
[2] https://devcenter.heroku.com/articles/platform-api-reference...
[3] https://devcenter.heroku.com/articles/platform-api-reference...
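If anyone wants to experiment, here's roughly what the formation API[2] approach could look like. This is only a sketch and untested; the app name and `flame_worker` process type are made up, and it assumes a Platform API token in `HEROKU_API_KEY` and a matching process type declared in the Procfile:

```python
# Rough sketch: scale a named pool's process type via the Heroku Platform API
# formation endpoint. The parent app calls this to grow/shrink the pool.
import os
import requests

def scale_pool(app_name: str, process_type: str, quantity: int) -> dict:
    resp = requests.patch(
        f"https://api.heroku.com/apps/{app_name}/formation/{process_type}",
        headers={
            "Accept": "application/vnd.heroku+json; version=3",
            "Authorization": f"Bearer {os.environ['HEROKU_API_KEY']}",
            "Content-Type": "application/json",
        },
        json={"quantity": quantity},
    )
    resp.raise_for_status()
    return resp.json()

# Scale the hypothetical "flame_worker" pool up for a burst of work,
# then back down to zero when the parent decides the work is done.
scale_pool("my-app", "flame_worker", 2)
# ... later ...
scale_pool("my-app", "flame_worker", 0)
```

The catch, per EDIT2 above, is that the parent has to drive the scale back down to zero, since a runner that exits on idle will just get restarted by Heroku.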
> It's probably a non-issue, the number of things done at initialization could be kept minimal, and FLAME could just have some checks to skip initialization code when in a flame context.
Exactly :)
Acknowledging this is brand new; just curious what your thinking is.
EDIT: Would it go in the pool config, so that a runner, as a member of the pool, has access to it?
I'm trying to go through the same thought process: this is neat, but how do I translate it into more practical applications? It seems like such a powerful paradigm if one can figure out that mapping.
These days I use Codex, with GPT-5-Codex and the $200 Pro subscription. I code all day every day and haven't yet seen a single rate-limiting issue.
We've come a long way. Just 3-4 months ago, LLMs would make a huge mess when faced with a large codebase. They would have massive problems with files over 1k LoC (I know, files should never grow that big).
Until recently, I had to religiously provide the right context to the model to get good results. Codex doesn't need that anymore.
Heck, even UI seems to be a solved problem now with shadcn/ui + MCP.
My personal workflow when building bigger new features:
1. Describe the problem in lots of detail (often by recording 20-60 minutes of voice and transcribing it)
2. Prompt the model to create a PRD
3. CHECK the PRD, then improve and enrich it - this can take hours
4. Actually have the AI agent generate the code and lots of tests
5. Use AI code review tools like CodeRabbit, or recently the /review function of Codex, iterate a few times
6. Check and verify manually - oftentimes there are still a few minor bugs in the implementation, but they can be fixed quickly - sometimes I just create a list of what I found and pass it back for improvement
With this workflow, I am getting extraordinary results.
AMA.
I'm interested in hearing more about this - any resources you can point me at, or do you mind elaborating a bit? TIA!