pengfeituan commented on Why Self-Host?   romanzipp.com/blog/why-a-... · Posted by u/romanzipp
pengfeituan · 2 months ago
Excellent topic, I can offer a perspective from my own experience. The biggest benefit of running a homelab isn't cost savings or even data privacy, though those are great side effects. The primary benefit is the deep, practical knowledge you gain. It's one thing to read about Docker, networking, and Linux administration; it's another thing entirely to be the sole sysadmin for services your family actually uses. When the DNS stops working or a Docker container fails to restart after a power outage, you're the one who has to fix it. That's where the real learning happens.

However, there's a flip side that many articles don't emphasize enough: the transition from a fun "project" to a "production" service. The moment you start hosting something critical (like a password manager or a file-syncing service), you've implicitly signed up for a 24/7 on-call shift. You become responsible for backups, security patching, and uptime. It stops being a casual tinker-toy and becomes a responsibility.

This is the core trade-off: self-hosting is an incredibly rewarding way to learn and maintain control over your data, but it's not a free lunch. You're trading the monetary cost of SaaS for the time and mental overhead of being your own IT department. For many on HN, that's a trade worth making.
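To make the "on-call" point concrete, here's a minimal sketch of the kind of check that shift implies (a hypothetical cron-driven Python script; the /srv/backups path and 26-hour threshold are made-up examples): it fails loudly when the newest backup is stale, so you find out before you actually need the restore.

    # Toy backup freshness check. Path and threshold are illustrative only.
    # Run it from cron; a non-zero exit can feed whatever alerting you use.
    import sys
    import time
    from pathlib import Path

    BACKUP_DIR = Path("/srv/backups")   # hypothetical backup destination
    MAX_AGE_HOURS = 26                  # daily job plus some slack; made up

    files = [p for p in BACKUP_DIR.glob("*") if p.is_file()] if BACKUP_DIR.is_dir() else []
    newest = max((p.stat().st_mtime for p in files), default=0.0)
    age_hours = (time.time() - newest) / 3600
    if age_hours > MAX_AGE_HOURS:
        print(f"newest backup is {age_hours:.1f}h old", file=sys.stderr)
        sys.exit(1)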
pengfeituan commented on After nine years of grinding, Replit found its market. Can it keep it?   techcrunch.com/2025/10/02... · Posted by u/toomanyrichies
pengfeituan · 2 months ago
They are working in an extremely competitive area. The technology has changed, and everything has to restart from scratch, so nine years of grinding may mean nothing at all.

pengfeituan commented on Two things LLM coding agents are still bad at   kix.dev/two-things-llm-co... · Posted by u/kixpanganiban
pengfeituan · 2 months ago
The first issue stems from the inner workings of LLMs. A human can skim past the details of code and copy-paste it verbatim, but an LLM converts the text into hidden states. Reading input is a process of compression, and producing output is a process of decompression, so something may be lost along the way. That's why it is hard for an LLM to copy and paste faithfully. Agent developers should build custom edit rules to handle this.
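A sketch of what such a custom edit rule could look like (a toy Python helper, not any real agent's API; the function names are hypothetical): the model only emits short unique markers, and the tool moves the exact original bytes, so nothing round-trips through compression and decompression.

    # Hypothetical coding-agent helper: instead of asking the model to
    # re-emit a block of code (lossy "decompression"), the model names
    # unique marker strings and the tool copies the original bytes verbatim.

    def copy_block(source: str, start_marker: str, end_marker: str) -> str:
        """Return the exact text between two unique markers, markers included."""
        start = source.index(start_marker)
        end = source.index(end_marker, start) + len(end_marker)
        return source[start:end]  # byte-exact: never passes through the model

    def paste_after(target: str, anchor: str, block: str) -> str:
        """Insert a block immediately after a unique anchor in the target."""
        pos = target.index(anchor) + len(anchor)
        return target[:pos] + "\n" + block + target[pos:]

    # Usage: the model emits only the three short marker strings; the
    # copied function body itself never round-trips through hidden states.
    old_file = "def f():\n    return 1\n\ndef g():\n    return 2\n"
    block = copy_block(old_file, "def f():", "return 1")
    print(paste_after("# utils", "# utils", block))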

The second issue is that LLMs do not learn much about the high-level contextual relationships within knowledge. This can be improved by introducing more such patterns into the training data, and current LLM training is already investing heavily here. I don't think it will still be a problem in the next few years.
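For illustration, here is a toy version of what "more patterns in the training data" could mean (an entirely hypothetical packing step; real pretraining pipelines differ): pair a definition with its cross-file call sites so a single sample exposes the high-level relationship instead of isolated snippets.

    # Toy training-sample packer: join a function definition with its
    # call sites from other files into one sequence. Purely illustrative.

    def make_sample(definition: str, call_sites: list[str]) -> str:
        parts = ["# definition\n" + definition]
        for i, site in enumerate(call_sites):
            parts.append(f"# call site {i + 1}\n" + site)
        return "\n\n".join(parts)

    sample = make_sample(
        "def parse(path):\n    ...",
        ["config = parse('app.toml')", "rules = parse(rule_file)"],
    )
    print(sample)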

u/pengfeituan

Karma: 3 · Cake day: December 8, 2019
About
Indie at https://zan.chat