That's not really true. The only guarantees around Rust futures are that poll() is called at least once and that the Waker's wake() must be called before the future is polled again. A completion-based future submits the request on first poll and calls wake() on completion. That's the interesting part of the design of futures in Rust: they support both readiness- and completion-style I/O.
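To make that concrete, here is a minimal std-only sketch of a completion-style future. A plain thread stands in for the completion source (an io_uring completion queue, a C callback, etc.); nothing here is any particular runtime's API:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};
use std::thread;
use std::time::Duration;

// State shared between the future and the "completion source".
struct Shared {
    done: bool,
    waker: Option<Waker>,
}

struct CompletionFuture {
    shared: Arc<Mutex<Shared>>,
    submitted: bool,
}

impl CompletionFuture {
    fn new() -> Self {
        CompletionFuture {
            shared: Arc::new(Mutex::new(Shared { done: false, waker: None })),
            submitted: false,
        }
    }
}

impl Future for CompletionFuture {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let mut shared = self.shared.lock().unwrap();
        if shared.done {
            return Poll::Ready(());
        }
        // Always store the *latest* waker: the task may have moved
        // between executor threads since the previous poll.
        shared.waker = Some(cx.waker().clone());
        drop(shared);

        if !self.submitted {
            // First poll: submit the request. The completion handler
            // calls wake(), which tells the executor to poll us again.
            self.submitted = true;
            let shared = Arc::clone(&self.shared);
            thread::spawn(move || {
                thread::sleep(Duration::from_millis(50)); // the "I/O"
                let mut shared = shared.lock().unwrap();
                shared.done = true;
                if let Some(waker) = shared.waker.take() {
                    waker.wake();
                }
            });
        }
        Poll::Pending
    }
}
```

The first poll submits the work and parks the Waker; on completion the handler flips `done` and calls wake(), and the next poll returns Ready.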
The real conundrum is that futures are not truly portable across executors. With io_uring, for example, the executor's event loop is tightly coupled to submission and completion. And due to the instability of a few features (async fn in traits, return-position impl Trait in traits, etc.), there is no standard way to write executor-independent async code (you can, and some big crates do, but it's not necessarily trivial).
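For context, the usual workaround on stable Rust (and what the async-trait crate expands to under the hood) is returning a boxed, type-erased future from trait methods. A minimal sketch with illustrative names:

```rust
use std::future::Future;
use std::pin::Pin;

// A boxed, type-erased future: works on stable Rust and keeps the
// trait object-safe, at the cost of one allocation per call.
type BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;

trait Storage {
    fn get(&self, key: &str) -> BoxFuture<'_, Option<Vec<u8>>>;
}

struct MemStorage;

impl Storage for MemStorage {
    fn get(&self, _key: &str) -> BoxFuture<'_, Option<Vec<u8>>> {
        Box::pin(async { None }) // each call allocates a Box
    }
}
```

This is executor-independent, but you pay in boilerplate and allocations, which is part of why it's "not necessarily trivial."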
Combine that with the fact that container runtimes disable io_uring by default, and that most people deploy async web servers in Docker containers, and it's easy to see why development has stalled.
It's also unfair to judge design goals and ideas from 2016 by how the ecosystem evolved over the following decade, particularly after futures were stabilized before other language items and the major executors became popular. If you look at the RFCs and blog posts from back then (e.g. https://aturon.github.io/tech/2016/09/07/futures-design/), you can see why readiness was chosen over completion, and how completion can be represented with readiness. He even calls out how naïve completion (callbacks) leads to more allocation when composing futures, and explains why green threads were abandoned.
Uhm, all of that is just sugar on top of stable features. None of those features, or the lack thereof, prevents portability.
Full portability isn't possible specifically because of how Waker works (i.e., it's implementation-specific); that's also what lets async work with different styles of execution. The reason io_uring is hard in Rust is io_uring's way of dealing with memory: the kernel owns the buffer while an operation is in flight, but a Rust future can be dropped (cancelled) at any point, leaving the kernel writing into freed memory unless ownership of the buffer is transferred for the duration of the operation.
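Concretely, that's why io_uring-based runtimes take ownership of the buffer and hand it back with the result, instead of borrowing it the way readiness-based APIs can. A sketch of the two API shapes (names are illustrative, not any real crate's):

```rust
use std::io;

// Readiness style (epoll): the kernel only signals that the fd is ready
// and the copy happens inside the call, so borrowing the buffer is fine:
//
//     async fn read(&mut self, buf: &mut [u8]) -> io::Result<usize>;
//
// Completion style (io_uring): the kernel writes into the buffer while
// the operation is in flight. If a future holding `&mut [u8]` is dropped
// mid-flight, the borrow ends but the kernel keeps writing. So the
// buffer is passed by value and handed back on completion:
async fn read_owned(buf: Vec<u8>) -> (io::Result<usize>, Vec<u8>) {
    // Illustrative stub: a real runtime would submit an SQE here and
    // resolve when the corresponding CQE arrives.
    (Ok(0), buf)
}
```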
Quick and accurate routing and triage of inbound calls may be more fruitful, and far easier, than summarizing hundreds of hours of "ok, now plug the router back into the wall." I'm imagining AI identifying a specific technical problem that sounds a lot like one a specific technician successfully solved previously.
1) my call is very important to them (it's not)
2) listen carefully, because the options have changed (when? five years ago?)
3) they have a website where I can do things (I can't, otherwise why would I call?)
4) please stay on the line at the end of the call to give feedback (sure, I'll waste more of my time)
I have a light fork that tries to nullify this, but I don't think I've managed to catch all the instances.
Other than that, it's a very nice editor in my opinion.
I hate this pattern in software so much.
Also, there are MCP servers that allow running any command in your terminal, including `apt install` / `brew install`, etc. [1]
[1] https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
With an MCP server, I can expose just the commit functionality and add it to the allow list. Security for remote MCP servers (i.e., not stdio) is a separate issue. The lack of an easy way to provide credentials to an MCP server is also a separate issue.
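As a sketch of that allow-list idea, here's a drastically simplified stdio server whose only capability is `git commit`. To be clear, a real MCP server speaks JSON-RPC with an initialize/tools handshake; this hypothetical version just treats each input line as a commit message:

```rust
use std::io::{self, BufRead, Write};
use std::process::Command;

// Hypothetical, heavily simplified stand-in for an MCP server that
// exposes exactly one tool. The agent on the other end of the pipe can
// ask for commits, and nothing else.
fn main() -> io::Result<()> {
    let stdin = io::stdin();
    let mut stdout = io::stdout();
    for line in stdin.lock().lines() {
        let message = line?;
        if message.trim().is_empty() {
            continue;
        }
        // The only capability exposed: `git commit -m <message>`.
        let output = Command::new("git")
            .args(["commit", "-m", message.as_str()])
            .output()?;
        stdout.write_all(&output.stdout)?;
        stdout.write_all(&output.stderr)?;
        stdout.flush()?;
    }
    Ok(())
}
```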
In case you don't want to read through the PR
While this is true, you can download OpenAI's open-weight gpt-oss model and run it in Ollama.
The thinking is a little slow, but the results have been exceptional compared to other local models.
https://ollama.com/library/gpt-oss
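For what it's worth, assuming the sized tags listed on that page, something like `ollama pull gpt-oss:20b` followed by `ollama run gpt-oss:20b` is enough to try it locally; the 120b variant needs substantially more memory.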