Readit News
lagrange77 commented on Malleable Software Will Eat the SaaS World   mdubakov.me/malleable-sof... · Posted by u/tablet
lagrange77 · 13 minutes ago
This is comparing two orthogonal properties.

SaaS is a business model, while malleable vs. rigid is a property of the software itself.

lagrange77 commented on A guide to Gen AI / LLM vibecoding for expert programmers   stochasticlifestyle.com/a... · Posted by u/ChrisRackauckas
fragmede · 5 days ago
But do those watches tell time better? Or harder? Or louder? Once you had quartz crystals and digital watches, mechanical movements became obsolete. Rolex and Patek Philippe are still around, but they're more of a curiosity than anything.
lagrange77 · 5 days ago
Absolutely agree. The watches do tell time better. But the factory worker does not become better at the craft of watchmaking or EE.
lagrange77 commented on A guide to Gen AI / LLM vibecoding for expert programmers   stochasticlifestyle.com/a... · Posted by u/ChrisRackauckas
lubujackson · 5 days ago
I think the idea that LLMs are just good at "automating" is the old curmudgeon idea that young people won't have.

I think the fundamental shift is something like having ancillary awareness of code at all but high capability to architect and drill down into product details. In other words, fresh-faced LLM programmers will come out the gate looking like really good product managers.

Similar to how C++ programmers looked down on web developers for not knowing all about malloc and pointers. Why dirty your mind with details that are abstracted away? Someone needs to know the underlying code at some point, but that may be reserved for the wizards making "core libraries" or something.

But the real advancement will be not being restricted by what used to be impossible. Why not a UI that is generated on the fly on every page load? Or why even have a webform that people have to fill out, just have the website ask users for the info it needs?

lagrange77 · 5 days ago
Yeah, I agree with most of what you say.

> looking like really good product managers.

Exactly, and that's a different field with a different skill set than developer/programmer.

And that's the purpose of technology in the first place, tbh: to make the hard/tedious work easier.

lagrange77 commented on A guide to Gen AI / LLM vibecoding for expert programmers   stochasticlifestyle.com/a... · Posted by u/ChrisRackauckas
iLoveOncall · 5 days ago
As a tech lead I have reviewed code written by junior engineers and written by AI, and there is a very clear difference between the two.

You also seem to be missing the point that if vibe coding lets your engineers write 10x the amount of code they previously could in the same working hours, you now have to review 10x that amount.

It's easy to see how there is an instant bottleneck here...

Or maybe you're saying that the same amount of code is written when vibe-coding than when writing by hand, and if that's the case then obviously there's absolutely no reason to vibe-code.

lagrange77 · 5 days ago
You have forgotten the most important part: Lay off 90% of those devs.
lagrange77 commented on A guide to Gen AI / LLM vibecoding for expert programmers   stochasticlifestyle.com/a... · Posted by u/ChrisRackauckas
Retr0id · 5 days ago
New generations are always leapfrogging those that came before them, so I don't find it too hard to believe even under more pessimistic opinions of LLM usefulness.

They are young and inexperienced today, but won't stay that way for long. Learning new paradigms while your brain is still plastic is an advantage, and none of us can go back in time.

lagrange77 · 5 days ago
But automating isn't a programming paradigm.

> They are young and inexperienced today, but won't stay that way for long.

I doubt that. For me this is the real dilemma with a generation of LLM-native developers. Does a worker in a fully automated watch factory become better at the craft of watchmaking with time?

lagrange77 commented on Show HN: OpenAI/reflect – Physical AI Assistant that illuminates your life   github.com/openai/openai-... · Posted by u/Sean-Der
lagrange77 · 8 days ago
Is it my browser, or does the video in the readme not have sound?
lagrange77 commented on Dokploy is the sweet spot between PaaS and EC2   nikodunk.com/2025-06-10-d... · Posted by u/nikodunk
lagrange77 · 11 days ago
Do any of you use one of these (Dokploy, CapRover, Dokku, Coolify) like Netlify, as advertised by some?

For me, the core feature of Netlify is building and deploying static websites quickly, with minimal configuration and triggered by git commits.

Do any of these really resemble that experience (except for the CDN Netlify uses, of course)?

lagrange77 commented on Jim Lovell, Apollo 13 commander, has died   nasa.gov/news-release/act... · Posted by u/LorenDB
Syzygies · 18 days ago
And David Scott, commander of Apollo 15, was the technical consultant. As Ron Howard explained this to me, he sees reality as more interesting and detailed than any fiction. No one would grasp the details were correct, but he felt they would contribute an irreplaceable texture to the film.

I became the math consultant for A Beautiful Mind in part because I was such an Apollo 13 buff. In my first call with Todd Hallowell, the executive producer, we spent an hour's aside discussing Apollo 13. This actually was part of the interview: Making a movie is intensely boring unless you're really engaged, and I demonstrated the required interest in detail.

lagrange77 · 18 days ago
> I became the math consultant for A Beautiful Mind in part because I was such an Apollo 13 buff.

Cool, awesome job, as far as I can tell as a fan of the movie!

So you did what was best for yourself... and the group.

lagrange77 commented on Running GPT-OSS-120B at 500 tokens per second on Nvidia GPUs   baseten.co/blog/sota-perf... · Posted by u/philipkiely
lagrange77 · 20 days ago
While you're here..

Do you guys know a website that clearly shows which open-source LLMs run on / fit into a specific GPU (setup)?

The best heuristic I could find for the necessary VRAM is number of parameters × (precision in bits / 8) × 1.2, from here [0].

[0] https://medium.com/@lmpo/a-guide-to-estimating-vram-for-llms...
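That heuristic can be sketched as a small helper; a minimal sketch, where the function name and the 20% overhead factor follow the rule of thumb quoted above (weights in bytes plus headroom for KV cache, activations, and framework buffers):

```python
def estimate_vram_gb(params_billions: float, precision_bits: int = 16) -> float:
    """Rough VRAM estimate in GB for serving an LLM.

    Heuristic: number of parameters × (precision in bits / 8) × 1.2,
    where the ×1.2 is headroom for KV cache, activations, and buffers.
    """
    bytes_per_param = precision_bits / 8
    return params_billions * bytes_per_param * 1.2

# A 120B model at 16-bit precision lands around 288 GB;
# the same model at 4-bit quantisation around 72 GB.
print(estimate_vram_gb(120, 16))
print(estimate_vram_gb(120, 4))
```

As the thread notes, this is only a first-order estimate: actual usage varies with context length, batch size, and the inference runtime.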

lagrange77 · 20 days ago
Thanks for your answers!

While it is seemingly hard to calculate, maybe one should just build a database website that tracks specific setups (model, exact variant/quantisation, runner, hardware) and lets users report which combinations they got running (or not), along with metrics like tokens/s.

Visitors could then specify their runner and hardware and filter for a list of models that would run on that.
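The proposed data model can be sketched roughly as follows; all names here (`RunReport`, `models_for`, the example field values) are hypothetical illustrations of the idea, not an existing site or API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RunReport:
    """One user-submitted report of a (model, runner, hardware) combination."""
    model: str                          # e.g. "gpt-oss-120b"
    variant: str                        # exact quantisation, e.g. "Q4_K_M"
    runner: str                         # e.g. "llama.cpp", "vLLM"
    hardware: str                       # e.g. "RTX 4090 24GB"
    runs: bool                          # did it load and generate?
    tokens_per_s: Optional[float] = None  # reported throughput, if any

def models_for(reports: list[RunReport], runner: str, hardware: str) -> list[str]:
    """Filter: which models are reported to run on this runner + hardware?"""
    return sorted({r.model for r in reports
                   if r.runs and r.runner == runner and r.hardware == hardware})
```

A visitor would then query `models_for(all_reports, "llama.cpp", "RTX 4090 24GB")` to get the list described above; crowd-sourced reports sidestep the hard analytical VRAM calculation entirely.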

u/lagrange77

Karma: 1436 · Cake day: December 25, 2021