I think the fundamental shift is something like having only ancillary awareness of code, but a high capability to architect and drill down into product details. In other words, fresh-faced LLM programmers will come out the gate looking like really good product managers.
Similar to how C++ programmers looked down on web developers for not knowing all about malloc and pointers. Why dirty your mind with details that are abstracted away? Someone needs to know the underlying code at some point, but that may be reserved for the wizards making "core libraries" or something.
But the real advancement will be not being restricted by what used to be impossible. Why not a UI that is generated on the fly on every page load? Or why even have a webform that people have to fill out, just have the website ask users for the info it needs?
> looking like really good product managers.
Exactly and that's a different field with a different skillset than developer/programmer.
And that's the purpose of technology in the first place tbh, to make the hard/tedious work easier.
You also seem to be missing the point that if vibe coding lets your engineers write 10x the amount of code they previously could in the same working hours, you now have to review 10x as much code.
It's easy to see how there is an instant bottleneck here...
Or maybe you're saying that the same amount of code is written when vibe-coding as when writing by hand, and if that's the case then obviously there's absolutely no reason to vibe-code.
They are young and inexperienced today, but won't stay that way for long. Learning new paradigms while your brain is still plastic is an advantage, and none of us can go back in time.
> They are young and inexperienced today, but won't stay that way for long.
I doubt that. For me this is the real dilemma with a generation of LLM-native developers. Does a worker in a fully automated watch factory become better at the craft of watchmaking with time?
For me, the core feature of Netlify is building and deploying static websites quickly, with minimal configuration and triggered by git commits.
Do any of these really resemble that experience (except for the CDN Netlify uses, of course)?
Dokploy vs. CapRover, Dokku, Coolify
I became the math consultant for A Beautiful Mind in part because I was such an Apollo 13 buff. In my first call with Todd Hallowell, the executive producer, we spent an hour's aside discussing Apollo 13. This actually was part of the interview: Making a movie is intensely boring unless you're really engaged, and I demonstrated the required interest in detail.
Cool, awesome job, as far as I can tell as a fan of the movie!
So you did what was best for yourself... and the group.
Do you guys know a website that clearly shows which open-source LLM models run on / fit into a specific GPU (setup)?
The best heuristic I could find for the necessary VRAM is Number of Parameters × (Precision / 8) × 1.2, from here [0].
[0] https://medium.com/@lmpo/a-guide-to-estimating-vram-for-llms...
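That heuristic can be sketched in a few lines. This is just the rule of thumb from the comment above (parameters × bytes per parameter × a 1.2 overhead factor for KV cache etc.), not an exact calculation; the function name and overhead default are my own choices:

```python
def estimate_vram_gb(num_params: float, precision_bits: int, overhead: float = 1.2) -> float:
    """Rough VRAM estimate in GB: params x (precision / 8 bytes) x overhead factor."""
    bytes_per_param = precision_bits / 8
    return num_params * bytes_per_param * overhead / 1e9

# e.g. a 7B-parameter model at 16-bit precision:
print(round(estimate_vram_gb(7e9, 16), 1))  # 16.8 (GB)
```

At 4-bit quantisation the same model would come out around 4.2 GB, which matches the intuition that quantisation is what makes large models fit consumer GPUs.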
While it is seemingly hard to calculate exactly, maybe one should just make a database website that tracks specific setups (model, exact variant / quantisation, runner, hardware) where users can report which combinations they got running (or not), along with metrics like tokens/s.
Visitors could then specify their runner and hardware and filter for a list of models that would run on that.
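The core of that lookup is trivial once the reports exist. A minimal sketch, where every field name and example row is a made-up placeholder, not real benchmark data:

```python
# Each user report is one row: what was run, on what, and whether it worked.
reports = [
    {"model": "Llama-3-8B", "quant": "Q4_K_M", "runner": "llama.cpp",
     "hardware": "RTX 3090", "tok_s": 45.0, "works": True},
    {"model": "Mixtral-8x7B", "quant": "Q4_K_M", "runner": "llama.cpp",
     "hardware": "RTX 3090", "tok_s": 0.0, "works": False},
]

def models_that_run(reports, runner, hardware):
    """Filter reports down to models confirmed working on the visitor's setup."""
    return [r["model"] for r in reports
            if r["runner"] == runner and r["hardware"] == hardware and r["works"]]

print(models_that_run(reports, "llama.cpp", "RTX 3090"))  # ['Llama-3-8B']
```

The hard part wouldn't be the query; it would be getting enough reports per (model, quant, runner, hardware) combination for the answers to be trustworthy.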
SaaS is a business model while malleable vs. rigid is a property of the software itself.