It didn't help that the LLM was confidently incorrect.
The smallest things can throw off an LLM, such as a difference in naming between configuration and implementation.
In the human world, legacy systems can reach a state where "everyone knows" that the foo setting is actually the setting for Frob, but an LLM will happily try to configure Frob directly or, worse, try to implement Foo from scratch.
I'd always rather deal with bad human code than bad LLM code, because you can get into the mind of the person who wrote the bad human code. You can try to understand their misunderstanding. You can reason through their faulty reasoning.
With bad LLM code, you're dealing with a soul-crushing machine that cannot (yet) and will not (yet) learn from its mistakes, because it does not believe it makes mistakes (no matter how apologetic it gets).
I know it probably didn't, but I wonder whether part of Sequoia's decision to invest had anything to do with these false claims.
This is a product I REALLY want, since I want to be able to diagram entire complex systems without always seeing 10,000 boxes on screen. You could start a presentation at 35,000 feet, showing the rough overall structure, then zoom into different regions where more detail appears (infinitely).
Nestable feels more like Excalidraw, with a folder/file structure?
AI is not far from dropping into the “trough of disillusionment”, and I can’t see why Databricks even needs Postgres.
Hopefully I’m wrong, as I’m a big fan of Databricks.
Besides that, distros also tend to include theming that’s much more complete and versatile (it works at odd UI scales and such) than themes you find online, which can also be of value. Trying to assemble all the components and poke configs in all the right places to get a coherent look is frankly a huge pain in the rear.
When the end result is just "install packages a, b, c; remove snap; add this theme; add this wallpaper", that is like a script to me lol.
aka ship a diff instead of shipping an entire asset.
But I can't help but agree with a lot of points in this article. Go was designed by some old-school folks who maybe clung a bit too hard to their principles, losing sight of practical conveniences. That said, it's a _feeling_ I have, and maybe Go would be much worse if it had solved all these quirks. To be fair, I've seen more leniency toward fixing quirks in the last few years; at some point I didn't think we'd ever see generics, custom iterators, etc.
The points about RAM and portability seem mostly like personal grievances though. If they were better, that would be nice, of course. But the GC in Go is very unlikely to cause issues in most programs even at very large scale, and it's not that hard to debug. And Go runs on most platforms anyone could ever wish to ship their software on.
But yeah, the whole error/nil situation still bothers me. I find myself wishing for Result[Ok, Err] and Optional[T] quite often.
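For what it's worth, with generics you can roughly sketch these yourself. The names `Result`, `Optional`, `Ok`, `Err`, etc. below are my own invention, not anything from the standard library, and this is just a minimal illustration, not a claim that it's idiomatic:

```go
package main

import (
	"errors"
	"fmt"
)

// Optional[T] holds a value that may be absent.
type Optional[T any] struct {
	value T
	ok    bool
}

func Some[T any](v T) Optional[T] { return Optional[T]{value: v, ok: true} }
func None[T any]() Optional[T]    { return Optional[T]{} }

// Get returns the value and whether it is present.
func (o Optional[T]) Get() (T, bool) { return o.value, o.ok }

// Result[T] holds either a value or an error, instead of the
// conventional (T, error) return pair.
type Result[T any] struct {
	value T
	err   error
}

func Ok[T any](v T) Result[T]        { return Result[T]{value: v} }
func Err[T any](err error) Result[T] { return Result[T]{err: err} }

// Unwrap converts back to the classic two-value form at the boundary.
func (r Result[T]) Unwrap() (T, error) { return r.value, r.err }

func divide(a, b int) Result[int] {
	if b == 0 {
		return Err[int](errors.New("division by zero"))
	}
	return Ok(a / b)
}

func main() {
	if v, err := divide(10, 2).Unwrap(); err == nil {
		fmt.Println(v) // prints 5
	}
	if _, err := divide(1, 0).Unwrap(); err != nil {
		fmt.Println("error:", err)
	}
}
```

Of course, without sum types or pattern matching you still end up unwrapping at every call site, which is why it never feels as nice as it does in Rust.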
I shouldn't fault the creators. They did what they did, and that is all well and good. I am more shocked by the way it has exploded in adoption.
Would love to see a CoffeeScript for Go.