Sure, the AI _can_ code integrations, but it now has to maintain them, and might be tempted to modify them when it doesn't need to (leaky abstractions), adding cognitive load (in LLM parlance: "context pollution") and leading to worse results.
Batteries-included = AI and humans write less code, get more "headspace"/"free context" to focus on what "really matters".
As a very very heavy LLM user, I also notice that projects tend to be much easier for LLMs (and humans alike) to work on when they use opinionated well-established frameworks.
Nonetheless, I'm positive that in a couple of years we'll have found a way for LLMs to be equally good, if not better, with other frameworks. I think we'll find mechanisms that let LLMs learn libraries and projects on the fly much better. I can imagine crazy scenarios where LLMs train smaller LLMs on project parts or libraries, so they avoid context pollution without needing a full retraining (or incredibly pricey inference). I can also imagine a system in line with Anthropic's view of skills, where LLMs very intelligently switch their knowledge on or off. The technology isn't there yet, but we're moving FAST!
Love this era!!
i have the exact opposite experience. it's far better to have llms start from scratch than use batteries that are just slightly the wrong shape... the llm will run in circles and hallucinate nonexistent solutions.
that said, i have had a lot of success having llms write opinionated (my opinions) packages shaped the way llms like (very little indirection, breadcrumbs to follow for code paths, etc.), and then having the llm write its own documentation.
But distributed systems are hard. If your system isn't inherently distributed, then don't rush towards a model of concurrency that emulates a distributed system. For anything on a single machine, prefer structured concurrency.
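To make "structured concurrency" concrete, here's a minimal Rust sketch using `std::thread::scope` (the two-way sum is an invented example): every thread spawned inside the scope is joined before the scope returns, so no task can outlive its parent.

```rust
use std::thread;

fn main() {
    let data = [1, 2, 3, 4];
    let mut sums = [0i32; 2];
    let (left, right) = sums.split_at_mut(1);

    // Structured concurrency: `thread::scope` joins every spawned
    // thread before it returns, so the children cannot outlive this
    // stack frame, and the borrows of `data`, `left`, and `right`
    // are checked by the compiler like any other borrow.
    thread::scope(|s| {
        s.spawn(|| left[0] = data[..2].iter().sum());
        s.spawn(|| right[0] = data[2..].iter().sum());
    }); // implicit join point: nothing leaks past here

    assert_eq!(sums, [3, 7]);
}
```

The payoff is that task lifetimes and error propagation have one obvious place to converge (the end of the scope), instead of being threaded through every task by hand.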
the biggest bugbear for concurrent systems is shared mutable data. by being inherently distributable you basically "give up" on that, so in concurrent erlang systems you ~mostly don't even try.
if for no other reason than that erlang is saner than go for concurrency
like goroutines aren't inherently cancellable, so you see go programmers build out kludgey `context` plumbing to handle those situations, and debugging can get very tricky
My understanding is that Rust prevents data races, but not all race conditions. You can still get a logical race where operations interleave in unexpected ways. Rust can’t detect that, because it’s not a memory-safety issue.
So you can still get deadlocks, starvation, lost wakeups, ordering bugs, etc., but Rust gives you:
- No data races
- No unsynchronized aliasing of mutable data
- Thread safety enforced through the type system (`Send`/`Sync`)
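To make that concrete, here's a minimal sketch (the bank-balance setup is invented for illustration). Every access is behind a `Mutex`, so there's no data race and it compiles cleanly, but the check and the act take the lock separately, so threads can interleave between them:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let balance = Arc::new(Mutex::new(100));

    let handles: Vec<_> = (0..2)
        .map(|_| {
            let balance = Arc::clone(&balance);
            thread::spawn(move || {
                // Check: takes and releases the lock...
                let current = *balance.lock().unwrap();
                if current >= 100 {
                    // ...act: takes the lock again. Another thread can
                    // run in between, so both threads can pass the check
                    // against a balance of 100 and drive it negative.
                    *balance.lock().unwrap() -= 100;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    // No data race, but still a race condition: may print 0 or -100.
    println!("final balance: {}", *balance.lock().unwrap());
}
```

The fix is to hold one lock across the whole check-then-act sequence; the type system can't tell you that the critical section was drawn too small.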
the oddest one probably being `const expected = [_]u32{ 123, 67, 89, 99 };`
and the 2nd oddest being the word `try` instead of just `?`
the 3rd one would be the imports
and `try std.fs.File.stdout().writeAll("hello world!\n");` is not really convincing either for a basic print.
a constant array of u32, letting the compiler figure out how many of 'em there are (i reserve the right to change it in the future)
You can get a working site with the usual features (admin panel, logins, forgot/reset password flow, etc.) with minimal code thanks to the richness of the ecosystem, and because of the minimal code it's relatively easy for the AI to keep iterating on it, since it's small enough to be understandable in context.
special @asyncSuspend and @asyncResume builtins; they will be the low-level detail you can build an evented io with.
the new Io is an abstraction over the higher-level details that are common between sync, threaded, and evented io, so you shouldn't expect the suspension mechanism to be part of it.