Yes. The callback is not a natural construct (i.e. it does not map well to our intuitive understanding of X doing something while Y is doing something else).
What question are you answering 'yes' to?
I'm annoyed when coroutines are reserved for use only in high performance, c10k-type of situations. For example, the KJ library's doc says:
"Because of this, fibers should not be used just to make code look nice (C++20's co_await, described below, is a better way to do that)."
With this "stackless or nothing" attitude we do not have good C++ coroutine libraries outside C++20's.
For me the purpose IS to make the code look nice, and I do not care if the coros are stackful and consume more stack memory. At the end of the day, for my application, I saved more in programmer's time and bugs than I lost in RAM (and I'm on an embedded board with only 64 MB).
I want to add to the callback vs coroutines and async/serial discussion that it all depends on how you treat errors. Are errors mere exceptions, or do you want to handle errors in the control flow?
For example, with turndeg(90), Move(10), PickupItem(), turndeg(180), Move(10) you treat errors as exceptions: if the robot fails to pick up the item, or if it ends up in the wrong place, it's an exception.
Now if you put all these in try/catch, have the functions return error codes (or 0 for success), or use callbacks, the code will be more "ugly", yes, but you then treat errors as "first-class citizens": if the robot, for example, fails to pick up the item you want to try something else, maybe apply more vacuum to the suction arm, or switch to a grip arm. And if the robot goes off course you want to make a course correction.
async/await, coroutines, futures, promises, do make the code "look nice", but that nice look comes from treating errors as exceptions.
I don't agree. You can handle errors how you like on both approaches. The callback code, however, will get pretty messy once you have a branched control flow.
With coroutines you are using the language's control structures directly. The program counter is your state and a branch is an `if`. With minimal debugger support you will be able to see which line each coro is executing.
With callbacks you would have to inspect which ones are pending, unless you have made the state explicit in a variable.
This is not a huge deal for those who have been programming for a while, but for beginners coros will feel more like an extension of the language than something built on top.
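To make that concrete, a minimal Lua sketch (turn_deg, move, pickup_item and apply_vacuum are hypothetical helpers, assumed to yield internally until the hardware finishes): a failed pickup is just a return value, so recovery is an ordinary branch and the program counter carries the rest of the state.
local auto = coroutine.create(function()
  turn_deg(90)
  move(10)
  if not pickup_item() then       -- failure is a plain return value...
    apply_vacuum("high")          -- ...so recovery is a normal branch
    if not pickup_item() then
      return "gave up"
    end
  end
  turn_deg(180)
  move(10)
end)
-- The periodic callback just resumes it every tick until it is dead:
-- if coroutine.status(auto) ~= "dead" then coroutine.resume(auto) end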
I have a gopher server written in Lua [1] that uses coroutines to handle each connection. This bit of code is executed as the main menu page [2] is being displayed (with comments)
-- -------------------
-- Load in some modules.
-- The first makes a gopher menu item of type link
-- The second allows us to do a TCP connection
-- --------------------------
local mklink = require "port70.mklink"
local tcp = require "org.conman.nfl.tcp"
-- ------------------------------
-- tcp.connect() connects to the given address and port and timeout
-- (in seconds). This function will create a socket, set it
-- to non-blocking, call connect, then yield. When the socket has
-- connected, the coroutine is then resumed with the connection; if
-- 1 second has passed before the connection is done, then nil is
-- returned when resumed. This just calls a local QOTD service.
-- ---------------------
local ios = tcp.connect("127.0.0.1",'qotd',1)
if ios then
  local res = ""
  -- --------------------------------
  -- The following loop will yield the coroutine until a line
  -- of data has been accumulated from the network. The coroutine
  -- is then resumed with the line of text.
  -- ---------------------------------------
  for line in ios:lines() do
    -- -----------
    -- We're just accumulating the text into one long blob of
    -- text that we'll return
    -- -----------------
    res = res .. mklink { type = 'info' , display = line }
  end
  ios:close() -- another yield point, when closed, resume
  return res
else
  return mklink { type = 'info' , display = "Not Available" }
end
No exceptions here. If we can't connect, it's a simple 'if' test and do something else. The functions `tcp.connect()`, `ios:lines()` and `ios:close()` are all blocking points that cause the coroutine to yield. Yes, if I put something like `while true do end` that will block the entire process as there is no preemption, but aside from that detail, I find this code easy to read.
[1] https://github.com/spc476/port70/blob/master/share/index.por...
[2] gopher://gopher.conman.org/
Ish. You can also have error recovery in the form of the "conditions/restarts" system of Common Lisp, such that you can have "look nice" things if you want with many different setups.
Just to be clear, the audience for that tour is primarily Cloudflare engineers working on workerd / Workers runtime. Within that context, that's very much correct. The entire codebase is written using asynchronous I/O - synchronous I/O doesn't show up (or if it does, it's in weird parts I've never looked). Fibers are used sparingly in very specific contexts and we have very special code to make it memory efficient at our scale.
> The callback is not a natural construct
According to who? It is very natural to anyone making a GUI or anything interactive for the first time. Eventually I would try to move people to queueing up events and handling them all at the same time so that the order is easier to debug.
Coroutines are the latest silver bullet syndrome. Fundamentally you still need to synchronize and order data and that's the hard part. I don't know why students would need to do more than the classic interactive loop of:
1. get data
2. update state
3. interactive output (drawing a frame, moving a robot etc.)
> According to who? It is very natural to anyone making a GUI or anything interactive for the first time.
The people that do not make GUIs but apps with complex behaviours.
Callbacks are nice and simple. Callbacks calling callbacks calling callbacks calling callbacks (because of one of the worst ideas in programming ever, function coloring) stop being simple. Async/await is just a patch over that ugliness for languages that can't do any better easily.
The callbacks (as in a chain of callbacks) are not natural to a beginner, like a student trying to program a robot. For a GUI, where you set up a callback in response to user input, it is also easy to understand because the program's control flow does not progress as a chain of callbacks.
The input-update-output loop is fine, but you cannot easily combine and nest such loops. For example, in a robot you may have a speed-control inner loop, then a path-following loop, and on top of that a high-level behavior loop. Nesting one inside the others, you end up with a hand-baked implementation of a coroutine.
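A minimal Lua sketch of that nesting (read_encoder, set_motor and the 0.5 setpoint are hypothetical): each layer is an ordinary loop, and a yield from the innermost loop suspends the whole behavior until the next tick, with no hand-written state machine.
local function drive_distance(target)          -- inner loop: crude speed control
  local start = read_encoder()
  while read_encoder() - start < target do
    set_motor(0.5)
    coroutine.yield()                           -- give up the rest of this tick
  end
  set_motor(0)
end
local function follow_path(path)                -- middle loop: path following
  for _, segment in ipairs(path) do
    drive_distance(segment)                     -- a nested loop, not extra state
  end
end
local behavior = coroutine.create(function()    -- outer loop: high-level behavior
  follow_path({ 10, 20, 10 })
  -- pick something up, drive back, and so on
end)
-- The robot's periodic callback resumes the whole stack once per tick:
-- if coroutine.status(behavior) ~= "dead" then coroutine.resume(behavior) end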
> I'm annoyed when coroutines are reserved for use only in high performance, c10k-type of situations
Go's pretty much that idea. Make the threads light as coroutines (I think it's like 4k or 8k per goroutine) and give some basic messaging (channels) to go with it. Works well but some of the simplicity ended up biting it in the arse so there can be quite a bit of boilerplate in some cases.
Coroutines are covered in Knuth's first volume. And, I confess, I think I went years thinking he was just describing method calls. Yes, they were method calls that had state attached, but that felt essentially like attaching the method to an object and calling it a day.
Seeing them make an odd resurgence in recent years has been awkward. I'm not entirely clear that they make things much more readable than alternatives. Reminds me of thinking continuations were amazing, when I saw some demos. Then I saw some attempts at using them in anger, and that rarely worked out that well.
Also to the point of the article, I love being "that guy" that points out that LISP having a very easy "code as data" path makes the concerns expressed over the "command" system basically go away. You can keep the code as, essentially:
With god knows how much bike shedding around how you want to write the loop there. Of course, you could go further for the "pretty" code that you want by using conditions/restarts such that you could have:
And then show what happens if "Grab" is unsuccessful and define a restart that is basically "sleep, then try again." Could start plugging in new restart ideas such as "turn a little, then try again." All without changing that core loop.
With these examples I think the author would still be stuck with stepping through the state machines with the students. Unless what you wrote would allow for the "autonomousPeriodic function to keep ticking" another way?
Apologies for not making that more explicit. My point was that that isn't necessarily code, but also data. Literally, you can turn that into a list and instead of evaluating it with the standard runtime, you can send it to another place that turns it into the "command" style from the example java.
You can /kind/ of do this with Java, of course. Just make sure to not use "new FooCommand" and instead change the "foo" function to return the command object. No reason that couldn't be done; but, and this is the big difference, it requires building a ton of scaffolding in the Java program to support both ideas at the same time. In Lisp, it is fairly easy to wrap in a macro. Still somewhat magical, I suppose, but no more so than the rest of the compilation/build process.
That make sense? I'm somewhat interested in this, so more than happy to try and do a blog post on the idea, if that would help.
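The data half of the idea also translates to the thread's other example language; a rough Lua sketch (turn, move and grab are hypothetical primitives) where the same plan is plain data, run directly by one interpreter or compiled into command-style objects by another:
local plan = {
  { "turn",  90 },
  { "move",  10 },
  { "grab"      },
  { "turn", 180 },
  { "move",  10 },
}
local actions = { turn = turn, move = move, grab = grab }
-- Interpreter one: just run the plan in place.
local function run(plan)
  for _, step in ipairs(plan) do
    actions[step[1]](step[2])
  end
end
-- Interpreter two: turn the same data into "command" objects for a scheduler.
local function compile(plan)
  local commands = {}
  for _, step in ipairs(plan) do
    commands[#commands + 1] = {
      name = step[1],
      execute = function() actions[step[1]](step[2]) end,
    }
  end
  return commands
end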
I think that developers have only minimal scheduling primitives available to them for scheduling complicated work in the order and with the timing they want.
I don't like hardcoding functions in coroutine pipelines. Depending on the ordering of your pipeline, you might have to create things and then refer to them, because of the forward reference problem.
Here's my stackoverflow question for what I'm getting at:
https://stackoverflow.com/questions/74420108/whats-the-canon...
Coordinating work between independent threads of execution is ad hoc and not really well developed. I would like to build a rich "process api" that can fork, merge, pause, yield, yield until, drop while, synchronize, wait (latch), react according to events. I feel every distributed system builds this again and again.
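The single-process end of that list falls out of coroutines fairly directly; a minimal Lua sketch (cooperative only, nothing distributed) with spawn, yield and a join-style wait:
local tasks = {}                               -- each entry: { co = ..., done = false }
local function spawn(fn)
  local t = { done = false }
  t.co = coroutine.create(function() fn() t.done = true end)
  tasks[#tasks + 1] = t
  return t
end
local function join(t)                         -- wait (latch) until another task finishes
  while not t.done do coroutine.yield() end
end
local function run()                           -- round-robin: resume each live task per pass
  while #tasks > 0 do
    for i = #tasks, 1, -1 do
      if coroutine.status(tasks[i].co) == "dead" then
        table.remove(tasks, i)
      else
        assert(coroutine.resume(tasks[i].co))
      end
    end
  end
end
-- Fork two workers and a third task that waits on both:
local a = spawn(function() for i = 1, 3 do coroutine.yield() end end)
local b = spawn(function() for i = 1, 2 do coroutine.yield() end end)
spawn(function() join(a) join(b) print("both finished") end)
run()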
Go's and Occam's CSP is pretty powerful.
I've noticed that people build Turing completeness on top of existing languages, probably due to the lack of expressivity of the original programming language to take turingness as an input.
I’d recommend exploring some with Scheme. Somewhere between writing code in continuation-passing style, using macros, and using call-with-current-continuation you should be able to build any of these mechanisms in a clean way. Maybe there’s a prototype for a better construct there. Then it’s on other language developers to support these capabilities as well.
Because looking at all the examples listed in the article, none of them seem very idiomatic.
I have looked into shift and reset to the point I think I understand it (and callCC, but that actually changes the execution context or can be seen as an AST transformation).
(This wiki page helped me understand delimited continuations: https://wiki.haskell.org/Library/CC-delcont )
I am interested in algebraic effects too, but would like to understand them from an assembly point of view, as with exceptions.
My Lisp-like experience is only with Clojure, and it is delightful applying methods so trivially. I just find other people's LISP hard to read!
> I would like to build a rich "process api" that can fork, merge, pause, yield, yield until, drop while, synchronize, wait (latch), react according to events. I feel every distributed system builds this again and again.
Years ago I attempted to build an experimental language with first-class resumable functions. Every function can be invoked by the caller through a special reference type called "quaint". The caller can resume or stop the execution of the function either after reaching a timeout, or after passing a "wait label":
https://github.com/bbu/quaint-lang
A typical CPU-intensive example where preemption is done by the caller after a certain timeout:
An example that uses "wait labels" to suspend execution of the callee at certain points:
Thank you for your comment and sharing.
I have a lightweight 1:M:N runtime (1 scheduler thread, M kernel threads, N lightweight threads) which preempts by setting hot loops to the limit.
https://github.com/samsquire/preemptible-thread (Rust, Java and C)
How do you preempt code that is running?
Would you like to talk more about your idea?
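For the "wait label" idea above, a rough Lua sketch of the mechanism (this is not quaint syntax; step_a and step_b stand in for real work): the callee yields a label at each suspension point and the caller decides how far to let it run. Preempting a loop that never yields is a separate problem; in plain Lua that usually means an instruction-count hook via debug.sethook.
local function worker()
  step_a()
  coroutine.yield("after_a")        -- wait label
  step_b()
  coroutine.yield("after_b")        -- wait label
end
-- Resume the callee until it has passed the given label (or finished).
local function resume_until(co, label)
  while coroutine.status(co) ~= "dead" do
    local ok, reached = coroutine.resume(co)
    assert(ok, reached)
    if reached == label then return true end
  end
  return false
end
local q = coroutine.create(worker)
resume_until(q, "after_a")          -- runs step_a, stops at the first label
-- ... the caller can do other work here, then later resume past "after_b".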
I mean, this is an interesting thing to think about but isn't at all what the article is about... coroutines have many uses, maybe, and are often related to async work in general, but OP is using coroutines just as a pauseable function (in good old asm, you know, how computers actually just work, this is just jumping into a subroutine).
I think the article implements stackful coroutines, since the yield calls a function that control flow jumps to; it eventually returns to the instruction(s) after the yield statement.
And moreover made it a reasonable and readable variant!
(That is, it's not a JMP)
Thankfully a piece that emphasizes that coroutines are functions that pause. Java frameworks like Quasar became focused on other goals besides that basic capability and lost their way (IMHO).
Java's not the easiest to pick up in high school unless you really make a big after-school effort. Something like Lua is probably better.
For FIRST robotics in particular, Java and LabView are the most commonly used languages because they are the two for which a robust library is maintained by Worcester Polytechnic Institute for use in the competition.
But I have long been of the opinion that both languages are kind of an ill fit for the application. Python is making some headway and I think it might be a better fit; it has some advantages on legibility, it has an optional static typing system, and it does support coroutines.
Thankfully, very few teams are still using LabView at this point and WPILib primarily targets C++ and Java, with first-party Python support coming next season.
> Java frameworks like Quasar became focused on other goals besides that basic capability and lost their way (IMHO).
A bit of an aside maybe, but the guy that made Quasar is behind Project Loom to add lightweight threads to the JVM, which will become available in JVM 21: https://openjdk.org/jeps/444
In my day Borland had all the hostages, and every time you used something that wasn't orange-text Turbo Pascal another one got fed to a grue.
The right intro language is an interesting study in itself and the intuitiveness of coroutines is a good data point. Go seems like a decent first one. It has dark corners, but at least they are in the corner. I'd love to argue for Rust as a first language, and maybe it isn't a bad one. But I'm not sure where I'd start the argument. C was my second language, and I'm not sure it would have made sense as quickly before Commodore BASIC.
Java has its domains, but small, algorithm-driven code worked on by a small, inexperienced group isn't one.
I don't know Quasar, but a lot of projects seem to want to add functionality rather than be a library for that purpose, and create a new one for the completely unrelated feature that you want.
Coroutines make programming for PICO-8 very chill as well.
The biggest challenge is sometimes you do want external flow control, and there coroutines can get hard to untangle if your design is a bit messy.
Something like “A signals to B to do something else” starts to be a bit tricky (along with interruptible actions). I think there are good patterns in theory but I’ve found myself with pretty tangled knots at times.
This is ultimately a general problem when programming everything as functions. Sometimes you need to mess with state that’s “hidden away” in your closure. Building out control flow data structures ends up becoming mandatory in many cases.
I took computer science at school in 1989 because I hated my chemistry teacher in 1988. Never looked back. I really loved the author's call for attention to whether kids are "getting" the concepts being taught and adjust accordingly.
My teacher (Hi Mr Steele if you are still kicking around!!) taught us the algorithms without coding, but instead used playing cards or underwater bubbles or whatever. We had our a-ha moments intellectually before we implemented them in code.
As an aside, our school had just got macs and we spent most of the day playing digitised sound files from Monty Python.
"YOU TIT!"
People always say that coroutines make code easier to understand, but I've always found normal asynchronous code with callbacks much easier to understand.
They're equivalent except that asynchronous callbacks are what actually happens and you have clear control and visibility into how control flow moves.
If you want to see the callbacks, an alternative middle ground is promises, e.g. code that looks like doSomething().then(() => doSomethingElse()).then(() => doLastTask());
I currently work on a project that involves Java code with a promise library, and Unreal Engine C++ code which does not and uses callbacks (and do async JS stuff in my personal projects), and both have to do asynchronous logic. The Unreal code is just so much harder to deal with.
Specific problems the Unreal code has:
- There's no "high level" part of the code that you can look at to see what the logic flow is.
- Many functions are side-effecty, triggering the next part of the sequential logic without it being clear that that's what they're doing. Like the handleFetchAccount() callback kicks off another httpRequest for the next step, but you wouldn't know that it does that just from the name.
I'd admit some of these problems might be mitigatable in a better written codebase though.
The coroutine approach shines for complex business-logic.
Consider this example algorithm, of several async steps,
1. Download a file into memory
2. Email a link to a review web page.
3. Wait for the user to review.
4. Upload the file to a partner.
5. Update a database.
You could implement this as callbacks. A callback from each step leads to the next being triggered. Downside - your business logic is spread across all the callbacks. You could mitigate this somewhat by defining a class with one method for each step, with those methods being defined in the same visual order as the algorithm. Then have each callback call a method. (The article shows something different but similar with its Command autoCommand pattern.)
Tricks like this only go so far. Imagine if the reviewer user had a choice of pressing 'approve' or 'reject' on the webserver interface, with the algorithm changing depending on their answer. How do you now represent the business logic so the programmer can follow it?
Such changes are easy in coroutines. Here is the algorithm with that variation in coroutine code,
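A sketch of that shape in Lua, with hypothetical helpers (download_file, email_review_link, wait_for_review, upload_to_partner, notify_submitter, update_database), each assumed to suspend the coroutine until its step completes; the approve/reject decision is just an if:
local workflow = coroutine.create(function(doc_id)
  local file = download_file(doc_id)
  email_review_link(doc_id)
  local decision = wait_for_review(doc_id)     -- resumes with "approve" or "reject"
  if decision == "approve" then
    upload_to_partner(file)
  else
    notify_submitter(doc_id, "rejected")
  end
  update_database(doc_id, decision)
end)
-- coroutine.resume(workflow, doc_id) kicks it off; each helper yields internally.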
You state that callback code gives you easy visibility into what actually happens - yes, it does. When you read callback code, it is natural to follow business logic down to system calls. Coroutines tend towards code of layered business logic and abstraction.
Callbacks are no more 'what actually happens' than coroutines are; what actually happens involves a lot of jumping to memory addresses, and closure state is just as much a compiler invention as async/await. Blocking-style code, by comparison, is how we actually think about the business logic; language features that abstract over callback hell to let you write it make code inherently more clear. People always say it because it's true.
> They're equivalent except that asynchronous callbacks are what actually happens [...]
Neither stackful nor stackless coroutines work like this in practice. The former suspends coroutines by saving and restoring the CPU state (and stack) and the latter compiles down to state machines, as mentioned in the article. Coroutines are functionally not equivalent to callbacks at all.
Which is exactly what happens when you use asynchronous callbacks except that you have to do the storing of state explicitly. Stackless coroutines even typically compile to (or are defined as equivalent to) callback based code.
Stackful coroutines are just a poor man's threads.
I imagine you would get a lot of blank stares with that POV, at least from folks with working bullshit detectors (like young kids that haven’t been conditioned to “modern” industry practices).
I found the article a great example of the kind of crap that passes for programming these days.
In the context of the FIRST robotics competition, teaching the command hierarchy is a good opportunity to teach students what a state machine is... And that can be a good opportunity to talk about what a computer does, because the computer is basically a hardware implementation of a state machine.
But that isn't the kind of lesson that you want to be cramming into the middle of the competition season.
The structure of the coroutine version looks very close to what I've been settling towards for my own background code (not robots but a similar "do a sequence of things that may take different amounts of time and rely on external state"). I'm not sure if it has a name so in my head it's been something like "converging towards a 'good' state":
Every tick, inspect the state of the world. Then do the one thing that gets you a single step towards your goal.
At first I wasn't sure the "inspect" part was possible in the robot system, but the Lua code makes it look like it is? If so, the change is basically changing the "while" to "if" and adding additional conditions, maybe with early returns so you don't need a huge stack of conditions.
The "converging" style doesn't use coroutines and is more robust. Let's say, for example, another robot bumps into yours during the grab - the Lua code couldn't adapt, but the "converging" style has that built in since there's no assumed state that can get un-synchronized with the world like with a state machine / coroutine version. It was because of external interactions like that, that I couldn't 100% rely on but were inspectable, that I originally came up with this style.