I enjoyed the expedient writing style. The author did a great job of expressing the essential knowledge without too much verbosity. It's very clear what the skeleton of the project consists of, and the known unknowns are labeled (i.e. "this is how this works, you can read more later [here]").
Some may find this directness unempathetic, but I consider it the opposite: the writing seems precisely aware of what the reader wants to know at any given time and what questions they are likely to have in mind.
I just meant it subjectively. I assume some might not like this expedient style as a matter of taste, because most publications/articles/tutorials are not written so concisely.
Related: Early (almost prehistoric by software timeline standards) versions of Java used a green thread implementation, but this was dropped in Java 1.1 in favour of the current native OS thread model. Some background:
https://softwareengineering.stackexchange.com/questions/1203...
Java had a quite lousy implementation of green threads, one that hurt performance more than it enabled parallelism. (Which isn't explained by it being early, since there were earlier green thread implementations that didn't suck. It looks like it just wasn't a priority for the Java developers.)
There's nothing anywhere restricting green threads to a single OS thread. Most modern runtimes will automatically multiplex the green threads into as many OS threads as your computer can run.
Rust originally supported only M:N green threads. Native thread support was added, became the default, and green thread support was eventually moved out of Rust core to a library. Here is the proposal and rationale:
https://github.com/rust-lang/rfcs/blob/0806be4f282144cfcd55b...
It's occurred to me that asynchronous programming is essentially just very lightweight green threads with the added benefit of scheduling tied directly to I/O.
Synchronous programming is typically much more natural. It's too bad no languages have opted to abstract this away. Of course I suppose coroutines are kind of that.
>It's too bad no languages have opted to abstract this away.
Erlang and other BEAM languages like Elixir and LFE kind of do. They use the actor model for concurrency, and message passing is asynchronous. However, most of the time a program sends a message and then immediately waits for a reply message, which blocks the calling process until the reply arrives or a configurable timeout passes, making it effectively a synchronous call. This is OK since spawning new processes is very cheap, so a web server, for example, idiomatically spawns a new process for every request, and blocking calls in one request don't interfere with other requests. The result is highly concurrent code that is mostly written synchronously, without the red-function/blue-function kind of division that most(?) async/await implementations have.
Erlang does that. The logic inside Erlang processes looks synchronous, but outwardly it's all async.
For example, you can send a message to a different process, even one running on a different machine, and wait for the reply right there on the next line. It doesn't hold any OS threads while it waits.
Vibe.d [0], the one and only D web framework, uses fibers for exactly this reason. Its tagline is "Asynchronous I/O that doesn’t get in your way".
I'm not sure about the tradeoff. It seems to be equivalent in performance. It might require more memory, but if the competition is Java, Python, and Ruby, then they are easy to beat in terms of memory consumption. I'm not sure how it compares to Go.
[0] http://vibed.org/
I recently did a similar thing [1]. The major difference is that I store the registers on the stack instead of in a separate structure. The stack is the thread structure.
I also have versions for 32-bit and 64-bit x86 (and they work under Linux and Mac OS X). The assembly code isn't much, but it took a while to work out just what was needed and no more.
[1] http://boston.conman.org/2017/02/27.1
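If it helps to picture the difference, here is a rough sketch of the two layouts in C (my own illustration with made-up struct names, assuming the x86-64 callee-saved register set; neither struct is actual code from either article):

    #include <stdint.h>

    /* Layout A: registers saved into a separate per-thread structure,
       filled in by the context-switch routine. */
    struct ctx_in_struct {
        uint64_t rsp, r15, r14, r13, r12, rbx, rbp;  /* x86-64 callee-saved set */
    };

    /* Layout B: "the stack is the thread structure". The switch code pushes
       the callee-saved registers onto the outgoing thread's own stack, so
       the only per-thread bookkeeping left is the saved stack pointer. */
    struct ctx_on_stack {
        char *sp;   /* points just below the registers pushed onto this thread's stack */
    };

Either way the switch still has to save and restore the same registers; the difference is only where they live while the thread is suspended.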
In the code listing on the 4th page [0], why are MaxGThreads and StackSize wrapped in an enum? I would use an enum when I might want some values to be mutually exclusive. It seems like these values aren't even used in the same context. I would use static const ints.
[0] https://c9x.me/articles/gthreads/code0.html
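For what it's worth, one common reason in C (a sketch with placeholder values, not necessarily what the listing uses): enumerators are integer constant expressions, while a static const int in C is not, so only the former can size a file-scope array or appear in a case label.

    /* Placeholder values, for illustration only. */
    enum { MaxGThreads = 4, StackSize = 0x400000 };

    /* File-scope array bounds must be integer constant expressions in C;
       enumerators qualify, so this compiles: */
    static char stacks[MaxGThreads][StackSize];

    /* A static const int does not qualify in C (it does in C++):
     *
     *   static const int MaxGThreads = 4;
     *   static char stacks[MaxGThreads][StackSize];  // rejected by a C compiler
     */

A #define would work too; the enum is just another way to get a named integer constant without the preprocessor.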
Interesting. I hadn't heard the "green thread" term before. Does anyone happen to know how this compares to Windows' "fiber" mechanism? I can't recall whether fibers actually run in user land.
You could use fibers to implement green threads; you could also use fibers to implement coroutines.
There's an abstraction mismatch between the terms, though. Green threads means having separate threads of control (meaning separate program counters and contexts like stack and registers) running concurrently, with switching controlled in user space, whereas OS threads switch in the kernel. It's a description of an architecture. You could implement green threads in a VM, where the program counter and context are switched by a VM loop, yet the VM interpreter itself might be single-threaded. Or you could have multiple VM interpreter loops implemented using fibers, and switch between them that way.
Fibers are a way of switching program counter and context explicitly in userland. Like I said, you could use them to implement coroutines, iterators, or other code that resembles continuation passing style (like async) instead (the context being switched includes the return address, which is the continuation - the RET instruction in imperative code is a jump to the continuation). Fibers let you hang on to the continuation and do something else in the meantime. But they're a low-level primitive, and confusing to work with directly unless you wrap them up in something else.
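For reference, the Win32 fiber calls look like this; switching is explicit and happens entirely in user land. A minimal sketch, with error handling omitted:

    #include <stdio.h>
    #include <windows.h>

    static LPVOID main_fiber;    /* the fiber the original thread is converted into */
    static LPVOID worker_fiber;

    static VOID CALLBACK worker(LPVOID param)
    {
        (void)param;
        printf("worker: first run\n");
        SwitchToFiber(main_fiber);   /* hand control back explicitly */
        printf("worker: resumed\n");
        SwitchToFiber(main_fiber);   /* switch back instead of returning (returning exits the thread) */
    }

    int main(void)
    {
        main_fiber = ConvertThreadToFiber(NULL);      /* current thread becomes a fiber */
        worker_fiber = CreateFiber(0, worker, NULL);  /* 0 = default stack size */

        SwitchToFiber(worker_fiber);   /* runs until the worker switches back */
        printf("main: between switches\n");
        SwitchToFiber(worker_fiber);   /* resumes exactly where the worker left off */

        DeleteFiber(worker_fiber);
        return 0;
    }

Nothing preempts a fiber; until something calls SwitchToFiber, the current fiber keeps the thread, which is exactly why they're usually wrapped in a scheduler or coroutine layer rather than used raw.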
Here's a chap who used Fibers to implement native C++ yield return (iterators) similar to C#:
https://www.codeproject.com/Articles/20015/Yield-Return-Iter...
A Fiber is a cooperative multitasking strategy. As such, a fiber must yield at some point to allow other fibers to do their work.
Green threading is just the idea of doing the scheduling of threads in user-space. It can be preemptive or cooperative. Fibers and green threading aren't mutually exclusive, afaik.
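As a concrete example of cooperative user-space scheduling, here is a minimal sketch using the POSIX ucontext API (deprecated but still widely available); the yield function below is the entire "scheduler", and nothing here is taken from the article's own implementation:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, thread_ctx[2];
    static char thread_stacks[2][16 * 1024];
    static int current = 0;

    /* The user-space scheduler: round-robin between the two green threads. */
    static void yield_next(void)
    {
        int prev = current;
        current = (current + 1) % 2;
        swapcontext(&thread_ctx[prev], &thread_ctx[current]);
    }

    static void worker(void)
    {
        for (int i = 0; i < 3; i++) {
            printf("green thread %d, step %d\n", current, i);
            yield_next();   /* cooperative: explicitly give up the CPU */
        }
    }

    int main(void)
    {
        for (int t = 0; t < 2; t++) {
            getcontext(&thread_ctx[t]);
            thread_ctx[t].uc_stack.ss_sp = thread_stacks[t];
            thread_ctx[t].uc_stack.ss_size = sizeof thread_stacks[t];
            thread_ctx[t].uc_link = &main_ctx;   /* where control goes when a worker returns */
            makecontext(&thread_ctx[t], worker, 0);
        }
        swapcontext(&main_ctx, &thread_ctx[0]);  /* start the first green thread */
        return 0;   /* resumes here as soon as the first worker to finish returns */
    }

Both workers interleave on a single OS thread, and the switching never enters the kernel. Making this preemptive instead would mean something like a timer signal forcing the swapcontext, which is where the real complexity starts.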
> There's nothing anywhere restricting green threads to a single OS thread. Most modern runtimes will automatically multiplex the green threads into as many OS threads as your computer can run.
1:N green threads (which Java had) aren't intended for parallelism and provide none. They provide concurrency only.
M:N green threads (e.g., Erlang processes) provide parallelism.
> For example, you can send a message to a different process, even one running on a different machine, and wait for the reply right there on the next line. It doesn't hold any OS threads while it waits.
If a program needs the input before continuing, doesn't it need to wait, and therefore hold up the program flow, and therefore stop, even in Erlang?
> Green threading is just the idea of doing the scheduling of threads in user-space. It can be preemptive or cooperative. Fibers and green threading aren't mutually exclusive, afaik.
All the implementations I've seen are cooperative.