For one thing, the threat model assumes customers can build their own tools. Our end users can't. Their current "system" is Excel. The big enterprises that employ them have thousands of devs, but two of them explicitly cloned our product and tried to poach their own users onto it. One gave up. The other's users tell us it's crap. We've lost zero paying subscribers to free internal alternatives.
I believe that agents are a multiplier on existing velocity, not an equalizer. We use agents heavily and ship faster than ever. We get a lot of feedback from users about what the internal tech teams are shipping, and based on that there's little evidence of any increase in velocity on their side.
The bottleneck is still knowing what to build, not building. A lot of the value in our product is in decisions users don't even know we made for them. Domain expertise + tight feedback loop with users can't be replicated by an internal developer in an afternoon.
shit, I'm stealing that quote! it's easier to seize an opportunity (i.e. build a tool that fixes problem X without causing annoying side effects Y and Z), but finding one is almost as hard as it has been since the beginning of the World Wide Web.
Neither Kotlin nor Rust actually tracks effects in its type system.
Switching to Kotlin/Rust for FP reasons (and then relying on programmer discipline to track effects) is like switching to C++ for RAII reasons.
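to make that concrete, here's a minimal Kotlin sketch (my own toy example, nobody's API): both functions share the signature (Int, Int) -> Int, so nothing in the types tells you that one of them does I/O. in Haskell, the second one would be forced into IO Int.

    // both functions have the exact same signature: (Int, Int) -> Int
    fun addPure(a: Int, b: Int): Int = a + b

    fun addLogged(a: Int, b: Int): Int {
        println("adding $a and $b") // hidden side effect, invisible in the type
        return a + b
    }

    fun main() {
        println(addPure(1, 2))   // 3
        println(addLogged(1, 2)) // logs, then 3
    }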
Kotlin and Rust are just a lot more practical than, say, Clojure or Haskell, but they both take lessons from those languages.
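one concrete example of those lessons, sketched in Kotlin (names are mine): sealed types plus exhaustive `when` are essentially the algebraic data types and pattern matching of ML/Haskell.

    import kotlin.math.PI

    // algebraic data type via a sealed hierarchy
    sealed interface Shape
    data class Circle(val radius: Double) : Shape
    data class Rect(val w: Double, val h: Double) : Shape

    // exhaustive `when` acts like pattern matching:
    // drop a branch and the compiler refuses to build
    fun area(s: Shape): Double = when (s) {
        is Circle -> PI * s.radius * s.radius
        is Rect -> s.w * s.h
    }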
trivia: Kotlin interfaces were initially called "traits", but with the Kotlin M12 release (2015) they were renamed to interfaces, because Kotlin traits basically were Java interfaces. [0]
[0]: https://blog.jetbrains.com/kotlin/2015/05/kotlin-m12-is-out/...
I don't know how strong the lock-in is.
also, hot take: Kotlin simply does not need this many tools for refactoring, thanks in part to the first-class FP support. in fact, almost every non-Android Kotlin dev I have ever met would be totally happy with analysis and refactoring levels on par with Rust Analyzer.
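a rough illustration of what I mean (purely my sketch): expression-oriented code with first-class functions refactors through plain text edits - extracting a step is just binding a lambda to a name, no IDE machinery required.

    // before: one inline pipeline
    fun sumOfEvenSquares(xs: List<Int>): Int =
        xs.filter { it % 2 == 0 }.sumOf { it * it }

    // after: steps extracted into named function values,
    // a mechanical edit rather than an IDE-assisted "extract method"
    val isEven: (Int) -> Boolean = { it % 2 == 0 }
    val square: (Int) -> Int = { it * it }

    fun sumOfEvenSquaresNamed(xs: List<Int>): Int =
        xs.filter(isEven).sumOf(square)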
but even with an LSP, I would still need IDEA (at least Community) for Java -> Kotlin migration and for smooth Java interoperability.
- that's why OOP failed - uncontrolled side effects make the software too fluid for its own complexity
- that's why functional and generic programming are on the rise - good FP implementations are immutable by default, and generic programming makes FP practical (see the sketch after this list).
- that's why Kotlin and Rust are in a position to purge Java and C, philosophically speaking - the only things that remain are technical concerns, such as JetBrains' IDEA lock-in (that's basically the only place where you can do proper Kotlin work) as well as Rust's "hostility" to other bare-metal languages, embedded performance, and compiler speed.
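to make the middle point concrete, a tiny Kotlin sketch (my example): read-only collections and immutable data classes come by default, and a generic higher-order function like `map` is what makes the pure style practical.

    data class User(val name: String, val score: Int)

    fun main() {
        // List is read-only by default; data classes are immutable values
        val users = listOf(User("ada", 1), User("linus", 2))
        // `map` + `copy` build new values instead of mutating old ones
        val bumped = users.map { it.copy(score = it.score + 1) }
        println(users)  // original untouched
        println(bumped) // scores incremented
    }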
I am still curious: why? I have my own set of whys and want to hear yours.
another argument against letting an LLM do the bulk of the job is that it outputs code that's already legacy, and you want to avoid tech debt. for example, Gemini still thinks that Kotlin 2.2 is not out, and hence misses out on context parameters and the latest Swift interoperability goodies. you, a human being, are the only one who will ever have the privilege of learning "at test time", without a separate training process.
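for reference, here is roughly what it keeps missing - a minimal sketch of Kotlin 2.2 context parameters (still in Beta, enabled with the -Xcontext-parameters compiler flag; `Logger` and `greet` are made-up names):

    interface Logger {
        fun log(message: String)
    }

    // the function requires a Logger in scope instead of an explicit argument
    context(logger: Logger)
    fun greet(name: String) {
        logger.log("hello, $name")
    }

    fun main() {
        val stdout = object : Logger {
            override fun log(message: String) = println(message)
        }
        // an implicit receiver brought in with `with` satisfies the context
        with(stdout) {
            greet("Kotlin 2.2")
        }
    }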
replace coding "agents" with search tools. they are still non-deterministic, but hey, both Perplexity and Google AI Mode are good at quick lookups of SvelteKit idioms and whatnot. plus, good old Lighthouse can point out a11y issues - most of them stem from non-semantic HTML. but if you really want to do it without leaving the terminal, I can recommend Gemini CLI with some search-specific prompting. it's the only CLI "agent" that has access to web search, to my knowledge. it's slower than Perplexity or even ChatGPT Search, but you can attach anything as context.
this is the true skill of "how to use AI" - only use it where it's worth it. and let's be real, if Google Search were not filled with SEO crap, we would not need LLMs.