I know you said you mainly work in Angular, but for others reading this: I don't think this gives modern Angular the credit it deserves. Maybe that was true in the late 2010s, but the Angular team has been killing it lately, IMO. There's a negative perception driven by the social-media echo chamber, but meanwhile Angular "just works" for enterprises and startups alike who want to scale.
I think people who are burned out on decision fatigue with things like React should give Angular another try; they might be pleasantly surprised by how capable it is out of the box, and by how much less painful it now is to push against the edges.
What you’re doing with Hetzner just removes a few layers of abstraction compared to AWS or Azure. They can still, in theory, take down the machine or steal your data if they wanted to.
I don’t know what the correct definition of “self-hosted” is, but there is a big ideological difference between what you’re doing and self-hosting actual, physical hardware in your home.
In fact, I’d argue the physical risk of loss, theft, or data compromise is much higher at home than in a professional datacenter with power redundancy, security controls, and constant uptime monitoring.
It’s a bit like saying, "Don’t trust the bank, they could take your money and freeze your account — keep all your money under the mattress." Technically possible, yes. But come on.
To me, self-hosted also means renting a machine from Hetzner and running the server software on it. It's cheap, stable, fast, and secure, and Hetzner won't screw me over with my data. I have a lot less headache, and I can rent a vServer for a long time before the rental fees surpass the hardware cost of a server running at home.
I can also very simply assign a domain to it, and I'm pretty sure software like Nextcloud offers OAuth access, so my friends would NOT be required to sign up for my "weird app". Well, technically they still sign up, but OAuth automates it.
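The setup I mean is roughly this, as a minimal sketch (image tags, passwords, and ports are illustrative placeholders; for real use you'd pin versions, use proper secrets, and put a TLS-terminating reverse proxy in front):

```yaml
# docker-compose.yml — minimal Nextcloud on a rented box (illustrative sketch)
services:
  db:
    image: mariadb:11            # pin an exact version in practice
    environment:
      MYSQL_ROOT_PASSWORD: change-me
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: change-me
    volumes:
      - db:/var/lib/mysql

  app:
    image: nextcloud:stable
    ports:
      - "80:80"                  # terminate TLS in a reverse proxy for real use
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: change-me
    volumes:
      - data:/var/www/html

volumes:
  db:
  data:
```

Then point an A record for the domain at the server's IP. The OAuth part is an assumption on my side: Nextcloud supports pluggable login apps (e.g. OpenID Connect / social login apps from its app store), so friends could sign in with an existing account rather than creating a fresh one by hand.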
Am I missing something?
As to “why”: I’ve been coding for 25 years, and LLMs are the first technology that has a non-linear impact on my output. It’s simultaneously moronic and jaw-dropping. I’m good at what I do (e.g., I’ve merged fixes into Node), and Claude/o3 regularly find material edge cases in code I was confident in. Then they add a test case (per our style), write a fix, and update docs/examples within two minutes.
I love coding and the art&craft of software development. I’ve written millions of lines of revenue generating code, and made millions doing it. If someone forced me to stop using LLMs in my production process, I’d quit on the spot.
Why not self-host: open-source models are a generation behind SOTA. R1 is just not in the same league as the top commercial models.
i've tried agent-style workflows in copilot and windsurf (on claude 3.5 and 4), and honestly, they often just get stuck or build themselves into a corner. they don’t seem to reason across structure or long-term architecture in any meaningful way. it might look helpful at first, but what comes out tends to be fragile and usually something i’d refactor immediately.
sure, the model writes fast – but that speed doesn't translate into actual productivity for me unless it’s something dead simple. and if i’m spending a lot of time generating boilerplate, i usually take that as a design smell, not a task i want to automate harder.
so i’m honestly wondering: is cc max really that much better? are those productivity claims based on something fundamentally different? or is it more about tool enthusiasm + selective wins?
You can't make up a couple of conversation topics and expect the LLMs to do the rest by just switching languages. People approach the same topics completely differently in different languages. The app looks like someone picked a couple of topics and the rest is "just" ChatGPT advanced voice mode.
And the worst thing is that the LLM TTS voices do not sound native, so they cannot teach you pronunciation or train you to listen and understand (which is the whole point of having a spoken conversation).
And the other way around: the STT will not notice pronunciation mistakes made by the student, so the app cannot tell you: oh, it's pronounced like this.