Also, there's a small set of programs that don't get along with the abstractions provided by OSes and run better on a virtual hardware abstraction instead.
This is not hypothetical; consider the present. While Firefox on iOS exists, it's just a branding skin over WebKit, due to a similar flavor of security paternalism around JITing code (only bad people write self-modifying code :-). If Firefox had originally needed to differentiate itself in such a market, it's doubtful it would have had much success.
A threading free-for-all may be the wrong abstraction to use for many applications, but it has the virtue of being a decent stand-in for the hardware's actual capabilities. It's also close enough to ground truth that most other abstractions can be built on top of it. Imagine how unpleasant building a browser on top of Workers + ArrayBuffer transfer would be (especially given the lousy multi-millisecond latency of postMessage in most browsers). Also, consider that while there is often loud agreement that raw threads are dangerous, after decades of research, there's little consensus on the "right" option amongst many alternatives.
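For concreteness, here's a minimal TypeScript sketch of that transfer-based model; the worker script name and message shape are hypothetical:

const worker = new Worker('search-worker.js'); // hypothetical script

const buf = new ArrayBuffer(1024 * 1024);
// Transferring detaches buf on this side: the data moves, it is not shared,
// and each postMessage round trip can cost multiple milliseconds.
worker.postMessage({ cmd: 'process', buf }, [buf]);

worker.onmessage = (e: MessageEvent) => {
  const processed: ArrayBuffer = e.data.buf; // the result comes back the same way
};

Every piece of mutable state an engine-sized program wants to share has to be funneled through hops like this.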
SharedArrayBuffer is nearly as powerful as the proposal, but not quite. For example, while it allows writing a multi-threaded tree processing library, it would have trouble exposing an idiomatic JS API if the trees in the library live in a single SAB (as JS has no finalizers etc. to enable release of sub-allocations of the SAB). The options are either one SAB per tree (which likely performs badly), an API where JS users need to explicitly release trees when done with them, or leaking memory. With the proposal, each tree node could be represented directly as a JS object. The proposal may not be the best way to fix this problem, but we definitely still have gaps in JS concurrency.
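To make that tradeoff concrete, here's a hedged TypeScript sketch of the "explicit release" option; all names here are hypothetical:

// Hypothetical tree library that sub-allocates nodes inside one
// SharedArrayBuffer. With no finalizers, the caller must release
// each tree by hand, or the sub-allocation leaks.
class TreePool {
  private sab = new SharedArrayBuffer(16 * 1024 * 1024);
  private live = new Set<number>(); // offsets of live trees
  private next = 0;

  createTree(byteLength: number): number {
    const offset = this.next;
    this.next += byteLength;
    this.live.add(offset);
    return offset; // callers get a numeric handle, not a JS object
  }

  releaseTree(offset: number): void {
    this.live.delete(offset); // forgetting this call leaks memory
  }
}

const pool = new TreePool();
const tree = pool.createTree(4096);
// ... traverse and mutate via the handle ...
pool.releaseTree(tree); // explicit, and easy to forget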
Agreed, however, that this would be a serious undertaking, and not one to be taken on lightly.
The proposal goes a long way toward making the case that this can be implemented performantly, but some deep thought should go into how it would alter or constrain future optimizations in JS JITs.
This should be an industry-driven decision. Wait for the users of SAB to say it's not meeting their needs, and for them to provide clear reasons why (not hypothetical limitations, not vague, falsely-equivalent comparisons to Firefox). Then we can tangibly weigh the pros against the cons.
Right now this is a solution looking for a problem. Your analogy comparing the JS runtime to the iOS runtime isn't appropriate: no single company controls the web platform. Mozilla or Google or Apple or Microsoft can push for JS threads if the arguments for it make sense. Compare to WebAssembly.
In fact, the evolution of WebAssembly is a good example of how this ought to happen. Imagine if the creator of emscripten had instead first proposed a new VM/IL for the web. It would never have happened, because JS was already good enough. It was more natural to use JS first and then create the VM with the goal of addressing the limitations encountered with the JS approach.
Let the tangible shortcomings of SAB bubble to the surface. Then we can sensibly design something that effectively addresses those shortcomings, rather than a pattern-matched solution looking for a problem.
The data set only has to be large enough that the search takes more than 1/60th of a second. Then it's profitable to do it concurrently.
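As a rough TypeScript sketch of that threshold argument (the worker script and message protocol are hypothetical):

// Split a linear search over a SharedArrayBuffer across N workers, so a
// scan that would blow a single-threaded 16ms frame budget can fit in one.
const N = navigator.hardwareConcurrency || 4;
const sab = new SharedArrayBuffer(1_000_000 * Int32Array.BYTES_PER_ELEMENT);
const data = new Int32Array(sab);
const slice = Math.ceil(data.length / N);

for (let i = 0; i < N; i++) {
  const w = new Worker('search-worker.js'); // hypothetical script
  w.onmessage = (e) => {
    if (e.data.found) console.log('hit at index', e.data.index);
  };
  // SharedArrayBuffers are shared, not transferred, so every worker
  // scans its own slice of the same memory.
  w.postMessage({ sab, start: i * slice, end: Math.min((i + 1) * slice, data.length), target: 42 });
}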
GC is not single-threaded at all. In WebKit, it is concurrent and parallel, and already supports mutator threads accessing the heap (since our JIT threads access the heap). Most of the famous high-performance GC implementations are at least parallel, if not also concurrent. The classic GC algorithms like mark-sweep and semi-space are straightforward to make work with multiple threads, and both have straightforward extensions that support parallelism in the GC and concurrency with the mutator.
Efficient parallel GC is non-trivial to implement. In the most common implementation, you have to pause all threads before you can collect. That will often negate the performance benefits of having independent parallel threads running, especially if they are thrashing the heap with shared objects as you suggest.
In a GCed language, you can freely incorporate your parameters into a complex returned structure; without GC, you can't mix ownerships like that without a separate accounting system.
Consider:
#include <stdlib.h>

/* Definition assumed for the example; only the pointer member matters. */
struct some_struct {
    const char *value;
};

struct some_struct *f(int x, char *y)
{
    struct some_struct *result = malloc(sizeof *result);
    /* The returned struct may point at a static string literal or at the
       caller's buffer y: two different ownerships behind one field. */
    if (x < 0)
        result->value = "negative";
    else
        result->value = y;
    return result;
}
If you're writing functional code, the return values of your routines will normally be a function of the parameters, which often means wholesale inclusion. Without GC, you need to do a lot more preemptive copying, and freeing the resulting data structures is much more painful. Alternative accounting systems can be used, e.g. arena- or marker-based allocation (freeing everything allocated when control returns to an earlier point on the stack), or something more involved. But it's not free.
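For contrast, here is a sketch of the same function in a GC'd language (TypeScript here), where returning either the parameter or a fresh string carries no ownership obligations:

interface SomeStruct {
  value: string;
}

// The collector keeps whichever string is reachable alive, literal or
// argument, so there is no question of who frees what.
function f(x: number, y: string): SomeStruct {
  return { value: x < 0 ? 'negative' : y };
}

const r = f(-1, 'hello'); // safe to hold r.value indefinitely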
A type system is a proof. But the proof in the type system is not a proof that it’s better to use a type system than not. That’s an entirely separate question.
I think that type systems are good at some things. Concurrency isn’t one of them.
Additionally, your opinion is wrong, by simple counterexample. Rust's type checker already has the ability to prevent data races: https://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.h... This works today.