jrmiii commented on Getting AI to work in complex codebases   github.com/humanlayer/adv... · Posted by u/dhorthy
iagooar · 3 months ago
I am working on a project with ~200k LoC, entirely written with AI codegen.

These days I use Codex, with GPT-5-Codex + $200 Pro subscription. I code all day every day and haven't yet seen a single rate limiting issue.

We've come a long way. Just 3-4 months ago, LLMs would make a huge mess when faced with a large codebase. They had massive problems with files over 1k LoC (I know, files should never grow that big).

Until recently, I had to religiously provide the right context to the model to get good results. Codex does not need it anymore.

Heck, even UI seems to be a solved problem now with shadcn/ui + MCP.

My personal workflow when building bigger new features:

1. Describe the problem with lots of detail (often recording 20-60 mins of voice, then transcribing)

2. Prompt the model to create a PRD

3. CHECK the PRD, improve and enrich it - this can take hours

4. Actually have the AI agent generate the code and lots of tests

5. Use AI code review tools like CodeRabbit, or recently the /review function of Codex, iterate a few times

6. Check and verify manually - oftentimes there are still a few minor bugs in the implementation, but they can be fixed quickly - sometimes I just create a list of what I found and pass it back for fixes

With this workflow, I am getting extraordinary results.

AMA.

jrmiii · 3 months ago
> Heck, even UI seems to be a solved problem now with shadcn/ui + MCP.

I'm interested in hearing more about this - any resource you can point me at or do you mind elaborating a bit? TIA!

jrmiii commented on Plugin System   iina.io/plugins/... · Posted by u/xnhbx
jrmiii · 3 months ago
The plugin architecture here reminds me of what happened with VS Code - once you give users a proper JavaScript API and decent documentation, the community starts solving problems you never even knew existed. But there's something particularly clever about IINA's approach: they're essentially turning every media file into a potential canvas for interactive experiences.
jrmiii commented on Concord had a dev culture of toxic positivity that halted any negative feedback   twitter.com/longislandvip... · Posted by u/xnhbx
jrmiii · a year ago
I had a coworker introduce me to The Five Dysfunctions of a Team[0] as a useful tool for framing problems with team dynamics.

It's easy to draw parallels between what's described and those dysfunctions. In case you're not familiar, this framework by Patrick Lencioni outlines five obstacles that can mess up a team’s flow: absence of trust, fear of conflict, lack of commitment, avoidance of accountability, and inattention to results.

Particularly relevant to this situation:

> Fear of conflict: seeking artificial harmony over constructive passionate debate

Just to warn you though, there is a tradeoff. You can also just act like an asshole and cite a culture of toxic positivity if people take issue with your behavior. The key is a collaborative, productive focus on the outcomes with the other human beings involved in the endeavor.

[0] https://en.wikipedia.org/wiki/The_Five_Dysfunctions_of_a_Tea...

jrmiii commented on Rethinking serverless with FLAME   fly.io/blog/rethinking-se... · Posted by u/kiwicopple
thefourthchime · 2 years ago
I created something similar at my work, which I call "Long Lambda": the idea is, what if a Lambda could run for more than 15 minutes? Then do everything in a Lambda. An advantage of our system is that you can also run everything locally and debug it. I didn't see that with FLAME, but maybe I missed it.

We use it for our media supply chain which processes a few hundred videos daily using various systems.

Most other teams drank the AWS Step Functions Kool-Aid and have thousands of lambdas deployed, with insane development friction and surprisingly higher costs. I just found out today that we spend 6k a month on "Step Transitions", really?!
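
For scale, assuming the standard-workflow list price of $0.025 per 1,000 state transitions (our negotiated rate may differ), that bill implies:

    $6,000 / ($0.025 per 1,000 transitions) = 240,000 x 1,000
                                            = ~240M state transitions/month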

jrmiii · 2 years ago
> you can also run everything locally and debug it. I didn't see that with FLAME, but maybe I missed it.

He mentioned this:

> With FLAME, your dev and test runners simply run on the local backend.

and this:

> by default, FLAME ships with a LocalBackend
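
Concretely, the split looks something like this (a sketch based on the post; the exact config keys are from memory, so double-check against the docs):

    # config/prod.exs - run FLAME calls on Fly machines in production
    config :flame, :backend, FLAME.FlyBackend
    config :flame, FLAME.FlyBackend, token: System.get_env("FLY_API_TOKEN")

    # dev/test need no config at all: FLAME defaults to FLAME.LocalBackend,
    # which runs the function in the local runtime, so everything stays
    # debuggable on your machine.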

jrmiii commented on Rethinking serverless with FLAME   fly.io/blog/rethinking-se... · Posted by u/kiwicopple
chrismccord · 2 years ago
I talk about FLAME outside Elixir in one of the sections in the blog. The tl;dr is that it's a generally applicable pattern for languages with a reasonable concurrency model. You likely won't get all the ergonomics that we get for free, like functions with captured variable serialization, but you can probably get 90% of the way there in something like JS, where you can move your modular execution to a new file rather than wrapping it in a closure. Someone implementing a FLAME library will also need to write the pooling, monitoring, and remote communication bits. We get a lot for free in Elixir on the distributed messaging and monitoring side. The process placement stuff is also really only applicable to Elixir. Hope that helps!
jrmiii · 2 years ago
> functions with captured variable serialization

Can't wait for the deep dive on how that works
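
In the meantime, my mental model of the ergonomics (a sketch - the pool name and command are made up):

    # in the parent app's supervision tree:
    # {FLAME.Pool, name: MyApp.FFMpegRunner, min: 0, max: 10}

    def thumbnail(video_path) do
      FLAME.call(MyApp.FFMpegRunner, fn ->
        # video_path is captured by the closure and serialized over to the
        # remote runner, which executes the function and sends the result
        # back as the return value of call/2.
        System.cmd("ffmpeg", ["-i", video_path, "-frames:v", "1", "/tmp/thumb.png"])
      end)
    end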

jrmiii commented on Rethinking serverless with FLAME   fly.io/blog/rethinking-se... · Posted by u/kiwicopple
seabrookmx · 2 years ago
I'm firmly in the "I prefer explicit lambda functions for off-request work" camp, with the recognition that you need a lot of operational and organizational maturity to keep a fleet of functions maintainable. I get that isn't everyone's cup of tea or a good fit for every org.

That said, I don't understand this bit:

> Leaning on your worker queue purely for offloaded execution means writing all the glue code to get the data into and out of the job, and back to the caller or end-user’s device somehow

I assumed by "worker queue" they were talking about something akin to Celery in Python land, but Celery actually does handle all this glue. As far as I can tell, it provides a very similar developer experience to FLAME, with the added benefit that if you do want durability, those knobs are there. The only real downside seems to be that you need Redis or RabbitMQ to facilitate it? I don't have any experience with them, but I'd assume it's the same story with other languages/frameworks (e.g. Ruby+Sidekiq)?

Maybe I'm missing something.

jrmiii · 2 years ago
Yeah, I think this was more inward-focused, on things like `Oban` in Elixir land.

He makes the distinction in the article that those tools are great when you need durability, but this gives you a lower-ceremony way to make it Just Work™ when all you're after is handing off the work.
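
The contrast in code, roughly (module names hypothetical):

    # Oban: durable - enqueue a job that survives restarts and retries
    %{video_id: video.id}
    |> MyApp.Workers.Thumbnail.new()
    |> Oban.insert()

    # FLAME: ephemeral - just run this elsewhere and hand back the result
    FLAME.call(MyApp.ThumbnailRunner, fn -> make_thumbnail(video) end)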

jrmiii commented on Rethinking serverless with FLAME   fly.io/blog/rethinking-se... · Posted by u/kiwicopple
chrismccord · 2 years ago
Good question. The pools themselves in your app will be per use case, and you can reference the named pool you are a part of inside the runner, i.e. by looking in the system env passed as pool options. That said, we should probably just encode the pool name along with the other parent info in `%FLAME.Parent{}` for easier lookup
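
Something like this today (a sketch; the `:env` plumbing shown is the FlyBackend's option, and other backends vary):

    # pool definition in the parent app, tagged via backend env
    {FLAME.Pool,
     name: MyApp.FFMpegRunner,
     min: 0,
     max: 10,
     backend: {FLAME.FlyBackend, env: %{"POOL_NAME" => "ffmpeg_runner"}}}

    # inside the runner
    pool_name = System.get_env("POOL_NAME")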
jrmiii · 2 years ago
Ah, that makes a lot of sense - I think the `%FLAME.Parent{}` approach may enable backends that wouldn't be possible otherwise.

For example, if I used the Heroku API to do the equivalent of ps:scale to boot up more nodes, those new nodes (dynos in Heroku parlance) could see what kind of pool members they are. I don't think there is a way to do dyno-specific env vars - they apply at the app level.

If anyone tries to do a Heroku backend before I do, an alternative might be to use distinct process types in the Procfile for each named pool and ps:scale those to 0 or more.

Also, might need something like Supabase's libcluster_postgres[1] to fully pull it off.

EDIT2: So the Heroku backend would be a challenge. You'd maybe have to use something like the formation API[2] to spawn the pool, but even then you can't idle them down, because Heroku will try to restart them - i.e. there's no `restart: false` from what I can tell from the docs. Alternatively, you could use the dyno API[3] with a timeout set up front (no idle awareness).

[1] https://github.com/supabase/libcluster_postgres

[2] https://devcenter.heroku.com/articles/platform-api-reference...

[3] https://devcenter.heroku.com/articles/platform-api-reference...
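
If anyone picks this up, the shape of a backend module (the callback names are my reading of the flame source, so treat this skeleton as an assumption, not gospel):

    defmodule FLAMEHeroku.Backend do
      @behaviour FLAME.Backend

      @impl true
      def init(opts) do
        # stash the Heroku app name and API token from the pool's backend opts
        {:ok, %{opts: opts}}
      end

      @impl true
      def remote_boot(state) do
        # POST to the Heroku Dyno API to boot a one-off dyno running this
        # release, then block until its terminator process connects back
        {:error, :not_implemented}
      end

      @impl true
      def remote_spawn_monitor(_state, _func) do
        # spawn and monitor the function on the remote node
        {:error, :not_implemented}
      end

      @impl true
      def system_shutdown do
        # hard-stop this runner, e.g. a dyno stop via the API
        System.stop()
      end
    end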

jrmiii commented on Rethinking serverless with FLAME   fly.io/blog/rethinking-se... · Posted by u/kiwicopple
chrismccord · 2 years ago
This is actually a feature. If you watch the screencast, I talk about Elixir supervision trees and how all Elixir programs carefully specify the order their services start and stop in. So if your FLAME functions need DB access, you start your Ecto.Repo with a small or single-connection DB pool. If not, you flip it off.

> It's probably a non-issue, the number of things done at initialization could be kept minimal, and FLAME could just have some checks to skip initialization code when in a flame context.

Exactly :)
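
The pattern from the post, concretely (a sketch; pool sizes are illustrative):

    # application.ex - trim the supervision tree inside a runner
    def start(_type, _args) do
      flame_parent = FLAME.Parent.get()

      children =
        [
          # single DB connection inside a runner, full pool otherwise
          {MyApp.Repo, pool_size: if(flame_parent, do: 1, else: 10)},
          # no web endpoint needed inside a FLAME runner
          !flame_parent && MyAppWeb.Endpoint
        ]
        |> Enum.filter(& &1)

      Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
    end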

jrmiii · 2 years ago
So, Chris, how do you envision the FLAME child understanding which OTP children it needs to start on boot? This could be FLAME.call-dependent if you have multiple types of calls, as described above. Is there a way to pass along that data, or for it to be pulled from the parent?

Acknowledging this is brand new; just curious what your thinking is.

EDIT: Would it go in the pool config, and would a runner, as a member of the pool, have access to that?

jrmiii commented on G9.js: Automatically Interactive Graphics   omrelli.ug/g9/gallery/... · Posted by u/nnx
aboodman · 2 years ago
I guess it could potentially be used for any drag and drop interface? It's such a different way to think of the problem, I'm going to have to try it for something just to wrap my head around it.
jrmiii · 2 years ago
Well, that was 8 hours ago - you get anywhere noodling over this?

I'm trying to go through the same thought process - this is neat, but how do I translate it into more practical applications? It seems like such a powerful paradigm if one can figure out that mapping.

jrmiii commented on EXGBoost: Gradient Boosting in Elixir   dockyard.com/blog/2023/07... · Posted by u/clessg
marcosfelt · 2 years ago
For the uninitiated, what is the advantage of using Elixir for machine learning?
jrmiii · 2 years ago
You're in luck, there was a big discussion on that just yesterday. https://news.ycombinator.com/item?id=36859785
