[1] https://github.com/microsoft/CsWin32
[2] https://lowleveldesign.org/2023/11/23/generating-c-bindings-...
It's seriously going to make people question the future of the platform. Look at Microsoft's actions, not their words.
TS Compiler: Go
New TUI Editor: Rust
Winget: C++ (this would have been a great candidate for .NET)
At least PowerToys is C#.
.NET is great, but why isn't it good enough for Microsoft? The company that historically has had such a strong culture of dogfooding.
I was wrong and Rob was indeed not an MS employee??
In general, the industry has been making huge efforts to push errors from runtime to compile time. If you imagine the points where we can catch errors laid out from left to right, we have the following:
Caught by: Compiler -> code review -> tests -> runtime checks -> 'caught' in prod
The industry is trying to push errors leftwards. Rust, heavier review, safety in general: it's all about cutting down costs by eliminating expensive errors earlier in the production chain. Every industry does this; it's much less costly to catch a defective oxygen mask in the factory than after it sets a plane on fire. It's also better to catch a defective component in the design phase than when you're running tests on it.
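To make the "leftwards" idea concrete, here's a minimal Rust sketch (the function name is hypothetical, not from the thread): encoding a possible failure in the return type means the compiler, the leftmost point in the chain, forces every caller to handle it, instead of letting it surface as a crash in production.

```rust
// The possibility of failure is part of the type, so ignoring it
// is a compile-time error rather than a runtime surprise.
fn parse_port(s: &str) -> Option<u16> {
    s.parse().ok()
}

fn main() {
    // Forgetting the None arm here would fail to compile -- the error
    // is caught at the far left, not after deployment.
    match parse_port("8080") {
        Some(p) => println!("listening on port {}", p),
        None => eprintln!("invalid port"),
    }
}
```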
AI is all about trying to push these errors rightwards. The only way it can save engineer time is by going through inadequate testing, validation, and review. 90% of the complexity of programming is building a mental model of what you're doing and ensuring that it meets the spec of what you want to do. A lot of that work is currently pure mental work with no physical artefact - we increasingly try to offload it to compilers in safe languages, and add tests and review to minimise the slippage. But even in a safe language, a very high amount of mental work is still required to make sure that everything is correct. Tests and review are a stopgap to cover the fallibility of the human brain.
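As a hypothetical illustration of that last point: the code below type-checks in a safe language, yet still violates its stated spec, so only a test or a reviewer's mental model catches the defect.

```rust
/// Spec: clamp a value to the inclusive range 0..=100.
fn clamp_percent(x: i32) -> i32 {
    // BUG: the upper bound should be 100. The compiler can't know the
    // spec, so this passes type-checking without complaint.
    x.max(0).min(99)
}

fn main() {
    // A unit-style check exposes the off-by-one the compiler missed:
    // the spec says 150 should clamp to 100, but we get 99.
    assert_eq!(clamp_percent(150), 99);
    println!("compiles fine, but the spec is violated");
}
```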
So if you cut down on that critical mental work by using something only probabilistically correct, you're introducing errors that will be more costly to fix down the line. It'll be fine in the short term, but in the long term it'll cost you more money. That's the primary reason I don't think AI will catch on: it's short-termist thinking from people who don't understand what makes software complex to build, or how to actually produce software that's cheap in the long term. It's also exactly the same reason that Boeing is getting its ass absolutely handed to it in the aviation world. Use AI if you want to go bankrupt in 5 years but be rich now.
Conception -> design -> compiler -> code review ...
If AI tools allow for better rapid prototyping, they could help catch "errors" in the conception and design phases. I don't know how useful this actually is, though.
It makes me wonder if there's some kind of lightweight decentralized thing that could be used with a conventional OAuth-style front end.