The real divide going forward, IMHO, will be between vibe coding with cross-domain experience and vibe coding without it.
The scenario is perfect: a use case that isn't currently supported but may well make sense. It's basically sketching out an idea so the business can evaluate its market viability and gather further end-user input.
Will the code reach production? It just might, but at the very least it needs review and refactoring by a developer seasoned in the framework. They might even want to rebuild it, in which case they have a yardstick against which to measure their output. And if they need a specification, it can be generated from the code in whichever format their processes require.
The key here is that I've been able to iterate on the POC many times in a short time. The idea sketch has been refined, necessary details added while others were removed, and functionality swapped in and out while testing different approaches.
Right now, vibe coding in this way requires substantial software development experience to frame the problems and solutions for the AI. Without my understanding of the domain (both the software domain and the actual business domain), vibe coding the POC would not have succeeded.
My greatest concern is that it looks and works too well, and thus will be kept as is even in production. As the old adage goes: there are no temporary solutions, only more or less permanent ones. A temporary solution that works is a permanent solution.
Excellent quote! Unfortunately, not all high-g people engage in moral reasoning, and I fear they will tend to exploit lower-g people rather than help them use AI to compensate. There is a real opportunity to help individuals with cognitive impairments enhance their abilities with AI. The question is how, and how they collectively feel about it.
I once took a timed test with a section that had me translating a string of symbols into letters using a cipher, with multiple-choice responses. Reading the string left to right, multiple answer options started with the same sequence of letters (so ostensibly you had to translate the entire string).
But reading it right to left, there was often only one answer option that matched (the right one). So I got away with translating only the last ~4 symbols, regardless of how long the string was. I blew through the section and surely scored high.
I always wondered: did they realize this? Or did it artificially inflate my results?
And looking at the highest-entropy part of the string felt natural to me, but only because of countless hours as a software engineer, where the highest-entropy bits come at the end (file paths, certain IDs, etc.).
Is it really accurate to say I'm "more intelligent" because I've seen that pattern many times before, whereas someone who hasn't seen it isn't? I suspect not.
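The shortcut above can be illustrated with a toy sketch (the option strings and helper are hypothetical, not the actual test content): when answer options share a long common prefix but diverge near the end, checking characters from the right disambiguates after only a few translations.

```python
def symbols_needed(options, answer, from_end):
    """Count how many characters must be checked, in order, before
    `answer` is the only option still consistent with what we've seen."""
    candidates = list(options)
    s = answer[::-1] if from_end else answer
    for i in range(1, len(s) + 1):
        # Keep only options that agree with the characters checked so far.
        candidates = [
            o for o in candidates
            if (o[::-1] if from_end else o)[:i] == s[:i]
        ]
        if candidates == [answer]:
            return i
    return len(answer)

# Options share the prefix "CIPHERTEXT" and differ only in the last letter.
options = ["CIPHERTEXTA", "CIPHERTEXTB", "CIPHERTEXTC"]
answer = "CIPHERTEXTB"
print(symbols_needed(options, answer, from_end=False))  # → 11 (left to right)
print(symbols_needed(options, answer, from_end=True))   # → 1  (right to left)
```

The asymmetry is the whole trick: the work needed left to right grows with the string, while right to left it stays roughly constant.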
Appreciate your post and the one you commented on. Taking shortcuts in test development often ends up being detrimental. There is also an inherent challenge in developing tests for people who may well be smarter than you are. It's like that old programmer adage: debugging is twice as hard as writing the code in the first place, so if you write the cleverest program you can, who's going to debug it? Many people have tried developing "smart" tests for cognitive abilities; some realize when they fail, some unfortunately don't.
And 80 TB with 1 TB/s? Thanks to AI, hardware is getting interesting again.
Consolidation has gone from a data center with separate servers for every function, to a couple of racks, and will at some point go on to a single server plus redundancy for most workloads you can imagine. Unless AI manages to convince us that we need the performance and cooling of tens of kW per rack.
Sometimes I imagine that the IT of most companies, the part that is not "in the cloud" that is, could already run on a single server. It could maybe even host the cloud functions, if the admin know-how hadn't been lost to time.