Right now all we're seeing is gentlemen's agreements and self-enforcement. A robots.txt is just asking nicely. I'm not aware of any country that has issued a judicial ruling on AI scraping yet.
https://www.deeplearning.ai/the-batch/japan-ai-data-laws-exp...
But TBH, in a Rust world, it’s worth revisiting the assumptions behind the ROS node architecture, since Rust is so strong at scaling to large monolithic applications (due to the strict hierarchical code patterns it encourages).
A transitional Rust approach, one that doesn't try to reimplement everything from scratch, could follow a strangler pattern: take each ROS node, run it separately in a “jail” with a Rust API around it, then implement the plumbing/management logic in pure Rust.
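For illustration, a rough Rust sketch of what that wrapper layer could look like, assuming each legacy node gets launched as its own process via `rosrun` (the `NodeJail` trait and `RosNodeJail` type are made-up names, and a real jail would add actual isolation on top of the bare subprocess):

```rust
use std::io;
use std::process::{Child, Command, Stdio};

/// Narrow API the Rust side sees for each jailed legacy node.
trait NodeJail {
    fn start(&mut self) -> io::Result<()>;
    fn is_alive(&mut self) -> bool;
    fn stop(&mut self) -> io::Result<()>;
}

/// Wraps a single ROS node binary launched as a child process.
struct RosNodeJail {
    package: String,
    executable: String,
    child: Option<Child>,
}

impl NodeJail for RosNodeJail {
    fn start(&mut self) -> io::Result<()> {
        // Assumes `rosrun` is on PATH; a real jail would also set up
        // isolation (namespaces, cgroups, a container, ...) here.
        let child = Command::new("rosrun")
            .arg(&self.package)
            .arg(&self.executable)
            .stdout(Stdio::null())
            .spawn()?;
        self.child = Some(child);
        Ok(())
    }

    fn is_alive(&mut self) -> bool {
        match self.child.as_mut() {
            Some(c) => matches!(c.try_wait(), Ok(None)),
            None => false,
        }
    }

    fn stop(&mut self) -> io::Result<()> {
        if let Some(mut c) = self.child.take() {
            c.kill()?;
            let _ = c.wait()?;
        }
        Ok(())
    }
}
```

A pure-Rust supervisor could then own a `Vec<Box<dyn NodeJail>>` and handle restarts and plumbing itself, swapping jailed nodes out for native Rust ones one at a time.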
No, it isn't. Nothing is stopping your threading library from implementing the same thing. It just turns out that killing threads at arbitrary points in time is a bad idea, because they may own things like locks, or, in the async/await case, be in the middle of something that was assumed to be atomic.
In the end, it's about expressing a state machine in a more concise, implicit way, which is a suitable level of abstraction for most use cases.
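To make the state-machine point concrete, a rough Rust sketch (the names are made up, and the hand-written version is a simplification of what the compiler actually generates):

```rust
use std::future::Future;
use std::mem;
use std::pin::Pin;
use std::task::{Context, Poll};

// The concise, implicit spelling: the compiler generates the state machine.
async fn fetch_and_double(fetch: impl Future<Output = u32>) -> u32 {
    let value = fetch.await;
    value * 2
}

// Roughly the explicit spelling: one enum variant per await point.
enum FetchAndDouble<F> {
    Waiting(F),
    Done,
}

impl<F: Future<Output = u32> + Unpin> Future for FetchAndDouble<F> {
    type Output = u32;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        let this = self.get_mut();
        match mem::replace(this, FetchAndDouble::Done) {
            FetchAndDouble::Waiting(mut fetch) => {
                match Pin::new(&mut fetch).poll(cx) {
                    // The inner future finished: run the code after `.await`.
                    Poll::Ready(value) => Poll::Ready(value * 2),
                    // Not ready: put the state back and wait to be polled again.
                    Poll::Pending => {
                        *this = FetchAndDouble::Waiting(fetch);
                        Poll::Pending
                    }
                }
            }
            FetchAndDouble::Done => panic!("polled after completion"),
        }
    }
}
```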
It didn't start because it is so awesome; it started because JS can't do concurrency any other way. That's the long and short of it. People wanted to use JS on the backend for some reason. The backend requires concurrency. JS is single-threaded. Enter the event loop. Then enter some syntactic sugar for the event loop. And since JS is popular, async became popular.
Code written using threads is, at least to me, much more readable and easier to reason about. Each section in itself is just synchronous code. The runtime/kernel takes care of the concurrency. The overhead is negligible in an age of greenlet implementations. It works for both I/O-bound concurrency and CPU-bound parallel computing. It doesn't require entire libraries to be rewritten to support it. There is no callback hell. It scales both horizontally and vertically. Modern languages support it out of the box (hello, `go` keyword).
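For illustration, a minimal Rust sketch of that style: each worker body is plain blocking code, and the OS scheduler handles the interleaving, with no runtime or `.await` in sight.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Spawn a handful of workers; each body is ordinary synchronous code.
    let handles: Vec<_> = (0..4)
        .map(|id| {
            let tx = tx.clone();
            thread::spawn(move || {
                // Pretend this is a blocking I/O call or a CPU-heavy step.
                let result = format!("worker {id} done");
                tx.send(result).unwrap();
            })
        })
        .collect();
    drop(tx); // close the sending side so the receive loop terminates

    for msg in rx {
        println!("{msg}");
    }
    for h in handles {
        h.join().unwrap();
    }
}
```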
I realise that this is going to get a lot of downvotes. I don't really care. To me, async is just "cooperative multitasking" with a quick paint job. We left that paradigm behind in operating systems decades ago, and with good reason.
In regular threaded programming, cancellation is a bit more painful: you need some kind of cancellation token that gets checked each time the thread waits for something. This (a) is more verbose and (b) can lead to bugs where you forget to implement the cancellation logic at one of those wait sites.
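A rough sketch of that token pattern in Rust, using an `AtomicBool` as a stand-in for a real cancellation token; the point is that every wait site has to remember to check it:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

fn main() {
    let cancelled = Arc::new(AtomicBool::new(false));
    let token = Arc::clone(&cancelled);

    let worker = thread::spawn(move || {
        loop {
            // The explicit check described above: forget this at one wait
            // site and cancellation silently stops working.
            if token.load(Ordering::Relaxed) {
                println!("worker: cancelled, cleaning up");
                return;
            }
            // Stand-in for a blocking wait; real code would use a
            // timeout-based wait (recv_timeout, wait_timeout, ...) so the
            // check above actually gets a chance to run.
            thread::sleep(Duration::from_millis(100));
        }
    });

    thread::sleep(Duration::from_millis(350));
    cancelled.store(true, Ordering::Relaxed);
    worker.join().unwrap();
}
```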
The reason I gave up on Pynecone is that these things transpile to other, already high-abstraction frameworks like React. This is all well and good until something goes wrong, and now you have more layers to troubleshoot. Another huge downside to this approach is that I have to deal with two sets of build and deployment tools: Python and Node. And given that I have few fond memories of the Node toolset from previous projects, I refuse to incur this complexity unless it's absolutely unavoidable.
If someone actually built something like Pynecone that targeted HTML/DOM/JS directly instead of wrapping Node/React/Next, I would be the first to hop on the bandwagon. The API is very good, but the way that sausage is made ain't pretty.
Lemme know if you have any questions and I'll answer as best as I can. (I'm an advisor to Modulo.)
Does anyone have firsthand feedback on using DVC vs Git LFS?
Switched to https://github.com/kevin-hanselman/dud and I have been happy ever since.