Any Linux distro can have MySQL or Postgres installed in under five minutes, and it works out of the box
Even a single-core VPS can handle thousands of queries per second (assuming the tables are indexed properly and the queries aren't trash)
There are mature open source backup solutions which don't require DB downtime (also available in most package managers; see the sketch after this list)
It's trivial to tune a DB using .conf files (there are even scripts that autotune for you!!!)
Your VPS provider will allow you to configure encryption at rest, firewall rules, and whole disk snapshots as well
And neither MySQL nor Postgres ever seems to go down; they're super reliable and stable
Plus you have very stable costs each month
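
To make the backup point concrete, here's a minimal sketch of the kind of no-downtime logical backup Postgres gives you for free (pg_dump reads from an MVCC snapshot, so traffic keeps flowing). The database name and output path are placeholders:

```python
#!/usr/bin/env python3
"""Minimal sketch: non-blocking logical backup with pg_dump.

Assumes Postgres is reachable locally and pg_dump is on PATH;
DB_NAME and the output directory are hypothetical.
"""
import subprocess
from datetime import datetime, timezone

DB_NAME = "myapp"  # hypothetical database name
OUT = f"/var/backups/{DB_NAME}-{datetime.now(timezone.utc):%Y%m%dT%H%M%SZ}.dump"

# pg_dump takes a consistent MVCC snapshot, so reads/writes keep flowing;
# -Fc produces a compressed custom-format archive restorable with pg_restore.
subprocess.run(["pg_dump", "-Fc", "-f", OUT, DB_NAME], check=True)
print(f"backup written to {OUT}")
```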
I set up a cron job to store my backups in object storage, but everything felt very fragile: if any detail in the chain was misconfigured, I'd basically have a broken production database. I'd have to watch the database constantly or set up alerts and notifications.
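For what it's worth, the fragile part can be shrunk to one script that fails loudly instead of silently. A sketch, assuming an S3-compatible store via boto3; the bucket name and alert webhook are hypothetical:

```python
#!/usr/bin/env python3
"""Sketch of the cron step: upload the dump to object storage, verify it
landed, and alert loudly on any failure. Bucket, webhook, and paths are
all hypothetical; assumes an S3-compatible store and boto3 credentials
in the environment."""
import pathlib
import sys

import boto3
import requests

BUCKET = "db-backups"                    # hypothetical bucket
WEBHOOK = "https://alerts.example/hook"  # hypothetical alert endpoint


def main(dump_path: str) -> None:
    p = pathlib.Path(dump_path)
    s3 = boto3.client("s3")
    s3.upload_file(str(p), BUCKET, p.name)
    # Verify the object exists and sizes match before trusting the backup.
    head = s3.head_object(Bucket=BUCKET, Key=p.name)
    if head["ContentLength"] != p.stat().st_size:
        raise RuntimeError("uploaded size mismatch")


if __name__ == "__main__":
    try:
        main(sys.argv[1])
    except Exception as exc:
        # Any broken link in the chain pages a human instead of failing silently.
        requests.post(WEBHOOK, json={"text": f"backup upload FAILED: {exc}"}, timeout=10)
        raise
```

A size check only catches truncated uploads, of course; the verification that actually counts is periodically restoring the dump into a scratch database with pg_restore.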
If there were a ready-to-go OSS Postgres with backups preconfigured that you could deploy, I'd happily pay for that.
Claude is sort of like a ghost dog: it makes sure you know it's up for doing whatever you want to do, but you're still in charge.
I see dozens of people on HN posting about how amazing it is to write/compose software now. They're making more software than ever and having the time of their lives. But when I read those posts and actually go explore the software, it's just OSS tools, and I wonder: why would anyone want to use this? If everyone did what these authors do, they'd just ask their LLM instead of looking for a tool. Better yet, they'd ask their LLM to build a tool that does whatever those authors are building.
Then there's this whole new religion of human-out-of-the-loop. You feel like you've either gone stale or insane, because everyone now says adding a human to the loop erodes the productivity gains from a model. I strongly disagree: I haven't used a single model that handles a substantially complex task flawlessly. And if you mention any of that, people won't shut up about harnesses.
Don't get me wrong, I use LLMs, and I use them quite frequently. But it's this obsessive attitude towards them that makes it impossible to get funding or research support for anything that isn't at least tangentially related to them. It's completely burned me out professionally, academically, and psychologically.
If it gets stuck, use another LLM as the debugger. If that gets stuck, use another LLM. Turtles all the way down.
/s
> Powered by Gemini, a multimodal large language model developed by Google, EMMA employs a unified, end-to-end trained model to generate future trajectories for autonomous vehicles directly from sensor data. Trained and fine-tuned specifically for autonomous driving, EMMA leverages Gemini’s extensive world knowledge to better understand complex scenarios on the road.
https://waymo.com/blog/2024/10/introducing-emma/

> While EMMA shows great promise, we recognize several of its challenges. EMMA's current limitations in processing long-term video sequences restricts its ability to reason about real-time driving scenarios — long-term memory would be crucial in enabling EMMA to anticipate and respond in complex evolving situations...
They're still in the process of researching it; nothing in that post implies VLMs are actively being used by those companies for anything in production.
If you want mostly-bot, some-human content, then Reddit's way more convenient
1. Write a document that describes the work. In this case I had the minified+bundled JS and no documentation, but I did know how I use the system and, generally, the important behavioral aspects of the web client. There are aspects of the system that I know from experience tend to be tricky, like compositing an embedded browser into other UI, or dealing with VOIP in general. Other aspects, like JS itself, I don't really know deeply. I knew I wanted a Mac .app out the other end, as well as a Flatpak for Linux. I knew I wanted an mdbook of the protocol and behavioral specs. Do the best you can. Think really hard about how to segment the work for hands-off testability, so the assistant can grind the loop of add logs, run tests, fix, etc.
2. In Claude Desktop (or whatever), paste in the text from step 1 and instruct it to research and ask you batches of 10 clarifying questions until it has enough information to write a work plan for how to do the job: specific tools, necessary documentation, etc. Then read and critique until you feel like the thread has the elements of a good plan, and have Claude generate a .md of the plan.
3. Create a repo containing the JS file and the plan.
4. Add other tools, like my preferred template for change implementation plans, a Rust style guide, etc. (Have the chatbot write a language style guide for any language you use that covers the gap between common practice ~3 years ago and the specific version of the language you want to use, common errors, and so on.) I also have specific instructions for tracking current work, a work log, and key points to remember in files; everyone seems to do this differently.
5. Add Claude Code (or whatever) to the container or machine holding the repo.
Repeat until done:
6a. Instruct the assistant to do a time-boxed 60 minutes of work towards the goal, or until blocked on questions, then leave changes for your review along with any questions.
6b. Instruct the assistant to review changes from HEAD for correctness, completeness, and opportunities to simplify, leaving questions in chat.
6c. Review and give feedback / make changes as necessary. Repeat 6b until satisfied.
6d. Go back to 6a.
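
Once the loop feels trustworthy, 6a and 6b can even be kicked off from a tiny driver. This is a hypothetical sketch, assuming the Claude Code CLI's non-interactive print mode (`claude -p`); the prompts and file names are just illustrative, not a real harness:

```python
#!/usr/bin/env python3
"""Hypothetical driver for the 6a/6b loop above. Assumes a `claude` CLI
with a non-interactive print mode (`claude -p`); PLAN.md and QUESTIONS.md
are illustrative file names."""
import subprocess

WORK = ("Do a time-boxed ~60 minutes of work toward the plan in PLAN.md, "
        "or stop when blocked on questions. Leave changes uncommitted and "
        "write any questions to QUESTIONS.md.")
REVIEW = ("Review the uncommitted changes against HEAD for correctness, "
          "completeness, and opportunities to simplify. Append findings "
          "to QUESTIONS.md.")


def run(prompt: str) -> None:
    # Each call is one hands-off pass; the human reviews between passes (6c).
    subprocess.run(["claude", "-p", prompt], check=True)


run(WORK)    # 6a
run(REVIEW)  # 6b
# 6c/6d stay manual on purpose: read QUESTIONS.md, give feedback, loop again.
```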
At various points you'll find that the job is mis-specified in some important way, or the assistant can't figure out what to do (e.g. if you have choppy audio due to a buffer bug, or a slow memory leak, it won't necessarily know about it). Sometimes you need to add guidance to the instructions, like "update instructions to emphasize that we must never allocate in situation XYZ". Sometimes the repo will start to go off the rails and get messy; that's improved with instructions like "consider how best to organize this repository for ease of onboarding the next engineer, and describe your recommendations in chat", then having it do what it recommended.
There's a fair amount of hand-holding, but a lot of it is just making sure what it's doing doesn't look crazy and pressing OK.
What was the final framework like, how did the protocols work, etc.?