I'm a DevOps/SRE and I've spent the past couple weeks trying to vibecode as much of what I do as possible.
In some ways, it's magical. For example, I whipped up a web-based tool for analyzing performance statistics of a blockchain. Claude was able to do everything from building the GUI to optimizing the queries and adding new indices to the database. I broke the work down into small prompts to keep it on track so it didn't veer off course. 90% of this I could have done myself, but Claude took hours where it would have taken me days or even weeks.
Then yesterday I wanted to do a quick audit of our infra using Ansible. I first thought: let's try Claude again. I gave it lots of hints on where our inventory is, which ports matter, etc., but it was still grinding away after several minutes. I eventually Ctrl-C'ed it and used a couple of one-liners that I wrote myself in a few minutes. In other words, I was faster than the machine in this case.
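To give a flavour of the kind of one-liners I mean (an illustrative sketch only; the inventory path and port numbers here are placeholders, not our real setup):

    # ad-hoc Ansible, no playbook: list listening sockets on every host
    ansible all -i inventory/prod.ini -m shell -a "ss -tlnp"

    # or skip Ansible entirely and loop over the inventory with ssh,
    # grepping for the ports we actually care about
    for h in $(awk '/^[^#[]/ {print $1}' inventory/prod.ini); do
      echo "== $h =="
      ssh "$h" "ss -tlnp" | grep -E ':(22|443|5432)\b'
    done

For a one-off, read-only audit like this, ad-hoc commands beat waiting for an agent to reason its way to a playbook.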
Given the above, it makes sense to me that people have conflicting feelings about productivity: sometimes it's amazing, sometimes it does the wrong thing.
I think there's an argument that if Claude had a knowledge map of your personal one-liners and a tool for running them, it would often do the right thing in those cases. But it can't yet compress all the entropy of 'what can go wrong' operations-wise the way it can when composing code.
Appealing, but this is coming from someone smart and thoughtful. No offence to the 'rest of the world', but I think most people have felt this way for years. And realistically, in a year there won't be any people who can keep up.
Somewhere a doctor is happy he's found a model that's good enough for coding, but thinks, "I'm certainly not dumb enough to use this for medical advice."
The thing about medical advice is that Google was useful for narrowing problems down, and it's the same with any current LLM, only more so. I know enough biology to know which interventions require a professional opinion.
That’s great, but not a reason for taxpayers to get involved and be on the hook for massive risky investments.
OpenAI doesn't need government financial backing for its investments. The government has more pressing priorities to address first with the money it takes from us.
I've been using the gpt-oss 20B-parameter model on my laptop and it works great. It doesn't refuse to give legal or medical advice either. It's obviously not good enough for coding, but it seems like 'useful AI assistant for daily life' is already in overshoot.
It is insane to worry about this compared to other sources. 2M tons of carbon over the last decade, to save how many lives? Spending $30-200M to deal with that carbon (roughly $15-100 per ton) is clearly worth the benefit of a decade of kids and adults not dying preventable deaths at mass scale.