The second, although more obscure, is that you can use it in languages that do not have "non-local exits" to terminate a deeply nested computation early or return to an earlier point in the call stack. For example, Clojure does not have non-local exits, as only the final form of the function is returned. However, using CPS, you can terminate the expression early and return to the original caller without executing the rest of the function. You probably only want to use this in specialized cases, though, or it may upset your team; continuations are tricky to debug.
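To make that concrete, here is a small Python sketch of the idea (my own code and naming, not from the original comment): the `found` continuation acts as the non-local exit, so once a match is hit, nothing else in the traversal runs.

```
# Early exit via CPS: walk a nested list and "return" to the original caller
# through the `found` continuation as soon as a match appears.
def find_first(tree, pred, found, not_found):
    def walk(node, next_step):
        if isinstance(node, list):
            def walk_items(items):
                if not items:
                    return next_step()
                return walk(items[0], lambda: walk_items(items[1:]))
            return walk_items(node)
        if pred(node):
            return found(node)  # non-local exit: the rest of the walk never runs
        return next_step()
    return walk(tree, not_found)

print(find_first([[1, 2], [3, [4, 5]]], lambda x: x > 3,
                 found=lambda x: f"hit {x}", not_found=lambda: "no match"))
# -> "hit 4"
```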
Lastly, and probably most controversially, you can make an extensible "if" statement using CPS if you are in a dynamic language and have no other tools to do so. Admittedly, I do sometimes use this in ClojureScript. This allows you to write "append only" code without continually growing the "cond". Again, most teams don't like this, so it depends on the circumstances, but it might be nice to have in your back pocket if you need it.
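A rough sketch of what such an extensible "if" can look like, again in Python for compactness (the `rule`/`dispatch` names are purely illustrative): each handler either answers or calls the next continuation, so new cases are appended instead of editing a growing cond.

```
# "Append only" dispatch via CPS: each rule gets the value plus a `next_rule`
# continuation it can defer to.
RULES = []

def rule(f):
    RULES.append(f)
    return f

def dispatch(value):
    def try_rules(rules):
        if not rules:
            raise ValueError(f"no rule matched {value!r}")
        return rules[0](value, lambda: try_rules(rules[1:]))
    return try_rules(RULES)

@rule
def negative(x, next_rule):
    return "negative" if x < 0 else next_rule()

@rule  # added later, without touching the rules above
def even(x, next_rule):
    return "even" if x % 2 == 0 else next_rule()

print(dispatch(-3), dispatch(4))  # negative even
```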
[1]: https://github.com/norvig/paip-lisp/blob/main/docs/chapter12...
When you have an impure effect (e.g. checking a database, generating a random number, writing to a file, nondeterministic choice, ...), instead of directly implementing the impure action, you use a symbol, e.g. "read", "generate number", ...
When executing the function, you also provide a context of "interpreters" that map the symbol to whatever action you want. This is very useful, since the actual business logic can be analyzed in an isolated way. For instance, if you want to test your application you can use a dummy interpreter for "check database" that returns whatever values you need for testing, but without needing to go to an actual SQL database. It also allows you to switch backends rather easily: If your database uses the symbols "read", "write", "delete" then you just need to implement those calls in your backend. If you want to formally prove properties of your code, you can also do that by noting the properties of your symbols, e.g. `∀ key. read (delete key) = None`.
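As a hedged illustration of that separation (all names here are my own, not a real library): the business logic only sees symbols, and the interpreter you pass in decides whether "read"/"write" mean a real SQL backend or a plain dict in a test.

```
# The business logic issues symbolic effects; `run` is the interpreter context.
def transfer_points(user, amount, run):
    current = run("read", user)          # symbolic effect, no SQL here
    run("write", user, current + amount)
    return current + amount

# A dummy interpreter for tests; a production one would map the same
# symbols to actual database calls.
def make_test_interpreter(initial):
    store = dict(initial)
    handlers = {
        "read":  lambda key: store.get(key, 0),
        "write": lambda key, value: store.__setitem__(key, value),
    }
    return lambda symbol, *args: handlers[symbol](*args)

run = make_test_interpreter({"alice": 10})
assert transfer_points("alice", 5, run) == 15
```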
Since you always capture the symbol using an interpreter, you can also do fancy things like dynamically overriding the interpreter: To implement a seeded random number generator, you can have an interpreter that always overrides itself using the new seed. The interpreter would look something like this
```
Pseudorandom_interpreter(seed)(argument, continuation):
    rnd, new_seed <- generate_pseudorandom(seed, argument)
    with Pseudorandom_interpreter(new_seed):
        continuation(rnd)
```
You can clearly see the continuation passing style and the power of self-overriding your own interpreter. In fact, this is a nice way of handling state in a pure way: just put something other than new_seed into the new interpreter.
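For what it's worth, a runnable Python rendering of that pseudocode might look like this (a sketch: `pseudorandom_interpreter` and the toy LCG step are stand-ins for generate_pseudorandom, not the original code):

```
# The "random" handler re-installs itself with the new seed before continuing.
def pseudorandom_interpreter(seed):
    def handle(symbol, arg, continuation):
        if symbol == "random":
            # toy linear congruential step standing in for generate_pseudorandom
            new_seed = (1103515245 * seed + 12345) % 2**31
            rnd = new_seed % arg
            return continuation(rnd, pseudorandom_interpreter(new_seed))
        raise KeyError(symbol)
    return handle

def roll_two_dice(run):
    # each step receives the value and the interpreter to use for the next effect
    return run("random", 6, lambda a, run2:
               run2("random", 6, lambda b, _: (a + 1, b + 1)))

print(roll_two_dice(pseudorandom_interpreter(42)))  # deterministic given the seed
```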
If you want to debug a state machine, you can use an interpreter like this
```
replace_state_interpreter(state)(new_state, continuation):
    with replace_state_interpreter(new_state ++ state):
        continuation(head state)
```
to trace the state. This way the "state" always holds the entire history of state changes, which can be very nice for debugging. During deployment, you can then use a different interpreter
```
replace_state_interpreter(state)(new_state, continuation):
    with replace_state_interpreter(new_state):
        continuation(state)
```
which just holds the current state.
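Here is one possible Python rendering of both interpreters (again my own names, not the author's code): the tracing version threads the full history along, the deployment version only keeps the latest state.

```
# Debug interpreter: remember every state ever seen, hand the previous one on.
def tracing_state_interpreter(history):
    def handle(new_state, continuation):
        return continuation(history[0],
                            tracing_state_interpreter([new_state] + history))
    return handle

# Deployment interpreter: only the current state survives.
def plain_state_interpreter(state):
    def handle(new_state, continuation):
        return continuation(state, plain_state_interpreter(new_state))
    return handle

def bump_twice(set_state):
    # two state transitions written against whichever interpreter we are given
    return set_state("ready", lambda prev, set_state2:
           set_state2("done", lambda prev2, _: (prev, prev2)))

print(bump_twice(tracing_state_interpreter(["init"])))  # ('init', 'ready')
print(bump_twice(plain_state_interpreter("init")))      # ('init', 'ready')
```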
Then, each has its own $10-ish Battlepass, and you need to grind to get to the end of it. Aside from a new map or character, these are the bulk of the 'new stuff' that gets added.
Gaming as a Service doesn't scale well with most of the people who can afford to whale out, once they've already found their slot machine.
If you've ever played a dead or dying competitive game as a newcomer, you will know the problem this creates: since the people that stay around are either new or very dedicated players, the skill gap becomes gigantic, which turns off most new players.
If your game wins the live-service race, you draw other players in. If your game dies, the very same structure that keeps players around will prevent new players from joining.
Unfortunately, I don't see this making any sense for large-scale energy storage. Storage tanks for compressed hydrogen enjoy the square-cube law: the larger they are, the less expensive they are relative to the mass of hydrogen they hold.
With this iron oxide method, you need 27 tons of iron oxide for one ton of hydrogen. Right now you can procure tanks that hold 2.7 tons of hydrogen and weigh 77 tons empty [1]; that's a ratio of about 28 to 1. But the round-trip efficiency of the tank is virtually 100%, while the efficiency of the iron-based storage is only 50%. The tanks are not very expensive.
I can't see the niche that this idea can apply to.
[1] https://www.iberdrola.com/press-room/news/detail/storage-tan...
I don't think the point of an encyclopedia is to cover every single topic, as nice as that would be. If you're in the market for an encyclopedia, you are probably looking for a starting point, survey, or summary of stuff that's good to know. The algorithms you're thinking of are probably in very dry papers and monographs, accessible only to experts. If you were writing a commercial-grade generic MINLP solver, you would surely be looking at the latest papers for ideas, or you simply wouldn't be competitive with existing solvers.
There are so many things that have only been invented in the last couple of years, like RINS, MCF cuts, conflict analysis, symmetry detection, dynamic search, ... (see, e.g., Tobias Achterberg's line of work).
On the other hand, hardware improvements were not as relevant for LP and MILP solvers as one would expect: for instance, as of now there is still no solver that really uses GPU compute (though people are working on that). The reason is that parallelizing simplex solvers is quite tough, since the algorithm is inherently sequential (it's a descent over simplex vertices) and the actual linear algebra is very sparse (if not entirely matrix-free). You can do some things like lookahead for better pricing or row/column generation approaches, but you have to be very careful with that (interior point methods are arguably nicer to parallelize, but in many cases they have a performance penalty compared to simplex).
MILP/MINLP solvers are much nicer to parallelize at first glance, since you can parallelize across branches in the branch-and-bound, but in practice that is also pretty hard: modern solvers are so efficient that it can easily happen that you spend a lot of compute exploring a branch that is quickly proven unnecessary to explore by a different branch (e.g. SCIP, the fastest open-source MINLP solver, is completely single-threaded and still _somewhat_ competitive). This means that a lot of the algorithmic improvements are hidden inside the parallelization improvements, i.e. a lot of time has been spent on the question of "What do we have to do to parallelize the solver without just wasting the additional threads?"
People have tested old, year-2000 LP and MILP solvers against recent ones while correcting for hardware. Hardware improvements made up a ~20x improvement, while LP solvers in general sped up 180x and MILP solvers sped up a full 1000x ("Progress in mathematical programming solvers from 2001 to 2020").
Solvers from 2008 are on an entirely different level of performance: there are many problems that are unsolvable by those but are solved to zero duality gap in less than a second by more modern solvers.
In MINLPs the difference is even more striking. This doesn't mean that those books are useless (they are quite good), but do not expect a solver based on those techniques to even play in the same league as modern solvers.
(MS had tried to push back against the move from the start, but wasn't really successful in the first years.)
The entire design of "LiMux" was doomed from the start: it was a highly customized version of Ubuntu that was only used in Munich (not even throughout the entire state). That made everything ridiculously expensive, since the actual advantages of building on an open-source solution were never realized. Combine that with the fact that "open source" and "cost savings" were used interchangeably, when in reality the budget for Windows should have been reallocated into development rather than cut.
The entire project was half-assed to begin with, which basically meant that Windows and Linux had to coexist since many crucial tools were never ported to Linux.
The "Microsoft killed it" story sounds realistic, but the truth is the much more boring incompetence in execution.