See the following for theoretical foundations for the work:
Linux 60th Anniversary Keynote
Mathematical foundations are needed that are much better suited to modern Computer Science than the use of "universes".
For an example see the following article:
https://papers.ssrn.com/abstract=3603021
Also, see the following video:
Instead, education became weaker :-(
See the following link for the foundations of Computer Science:
computer boundaries.
analogous to the Natural Numbers in classical mathematics.
The Swift team seems to be challenging that thesis:
"[U]nlike other concurrency models, the actor model is also tremendously valuable for modeling distributed systems. Thanks to the notion of location transparent distributed actors, we can program distributed systems using the familiar idea of actors and then readily move it to a distributed, e.g., clustered, environment.
With distributed actors, we aim to simplify and push the state of the art of distributed systems programming, the same way we did with concurrent programming with local actors and Swift’s structured concurrency models embedded in the language.
This abstraction does not intend to completely hide away the fact that distributed calls are crossing the network, though. In a way, we are doing the opposite and programming assuming that calls may be remote. This small yet crucial observation allows us to build systems primarily intended for distribution and testable in local test clusters that may even efficiently simulate various error scenarios.
Distributed actors are similar to (local) actors because they encapsulate their state with communication exclusively through asynchronous calls. The distributed aspect adds to that equation some additional isolation, type system, and runtime considerations. However, the surface of the feature feels very similar to local actors."
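The model the Swift team describes can be gestured at with a toy sketch (Python standing in for Swift; all names here are invented, not from the Swift API): an actor whose state is only reachable through a mailbox, where every call is awaited as if it might cross the network.

```python
import asyncio

class Counter:
    """Toy actor: state is touched only through messages in a mailbox,
    so callers use the same async interface whether the actor happens
    to be local or (in principle) remote."""

    def __init__(self):
        self._value = 0
        self._mailbox = asyncio.Queue()

    def start(self):
        # Run the message loop concurrently with callers.
        self._task = asyncio.ensure_future(self._run())

    async def _run(self):
        while True:
            method, args, reply = await self._mailbox.get()
            reply.set_result(getattr(self, method)(*args))

    async def call(self, method, *args):
        # Every call is asynchronous, "assuming that calls may be remote".
        reply = asyncio.get_running_loop().create_future()
        await self._mailbox.put((method, args, reply))
        return await reply

    def increment(self, n):
        self._value += n
        return self._value

async def main():
    counter = Counter()
    counter.start()
    return [await counter.call("increment", 5),
            await counter.call("increment", 2)]

results = asyncio.run(main())
print(results)  # [5, 7]
```

Because the caller only ever awaits `call`, swapping the in-process mailbox for a network transport would not change the calling code's shape, which is the location-transparency point the quote is making.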
Amusingly, in a section immediately followed by "Deja Vu All Over Again", Waldo et al. write:
"One conceptual justification for this vision is that whether a call is local or remote has no impact on the correctness of a program. If an object supports a particular interface, and the support of that interface is semantically correct, it makes no difference to the correctness of the program whether the operation is carried out within the same address space, on some other machine, or off-line by some other piece of equipment. Indeed, seeing location as a part of the implementation of an object and therefore as part of the state that an object hides from the outside world appears to be a natural extension of the object-oriented paradigm.
Such a system would enjoy many advantages. It would allow the task of software maintenance to be changed in a fundamental way. The granularity of change, and therefore of upgrade, could be changed from the level of the entire system (the current model) to the level of the individual object. As long as the interfaces between objects remain constant, the implementations of those objects can be altered at will. Remote services can be moved into an address space, and objects that share an address space can be split and moved to different machines, as local requirements and needs dictate. An object can be repaired and the repair installed without worry that the change will impact the other objects that make up the system. Indeed, this model appears to be the best way to get away from the “Big Wad of Software” model that currently is causing so much trouble.
This vision is centered around the following principles that may, at first, appear plausible:
• there is a single natural object-oriented design for a given application, regardless of the context in which that application will be deployed;
• failure and performance issues are tied to the implementation of the components of an application, and consideration of these issues should be left out of an initial design; and
• the interface of an object is independent of the context in which that object is used.
Unfortunately, all of these principles are false. In what follows, we will show why these principles are mistaken, and why it is important to recognize the fundamental differences between distributed computing and local computing."
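Waldo et al.'s core objection can be made concrete with a small sketch (classes invented for illustration): two objects can expose the identical interface, yet the "remote" one exhibits failure modes, here a simulated timeout, that the local one cannot, so a caller that ignores location is not actually correct.

```python
class LocalStore:
    """In-process store: get() cannot time out or partially fail."""
    def __init__(self):
        self._data = {"answer": 42}

    def get(self, key):
        return self._data[key]

class FlakyRemoteStore:
    """Same interface, but simulates a network dropping the first two calls."""
    def __init__(self):
        self._data = {"answer": 42}
        self._attempts = 0

    def get(self, key):
        self._attempts += 1
        if self._attempts <= 2:
            raise TimeoutError("simulated network partition")
        return self._data[key]

def get_with_retry(store, key, retries=3):
    # A correct caller of a possibly-remote object must plan for
    # partial failure -- something no LocalStore caller ever needed.
    for attempt in range(retries):
        try:
            return store.get(key)
        except TimeoutError:
            if attempt == retries - 1:
                raise

print(LocalStore().get("answer"))                  # 42, always
print(get_with_retry(FlakyRemoteStore(), "answer"))  # 42, after two retries
```

The interfaces match, but the retry logic (and the choice of timeout and retry budget) leaks into the caller, which is why the paper argues failure cannot be left out of the initial design.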
RMI, some may recall, was heavily influenced by the above critique, which led to the verbose, ceremonial character of Sun's Java distributed-systems APIs that was later criticized as cumbersome.
with having to annotate each message send with whether the receiver is on the current computer or another one in a Citadel.

Since Actors can move between computers in a Citadel, such annotations are infeasible in practice.

Of course, modularity, security, and performance should not be ignored.
[0] I put actors in quotes because neither is a true actor system, either. Nor was either designed to be, though if you squint they look similar.
One way to think about this is propagators. I’m still learning myself, but a compelling example is lisp. With lisp you can write macros that essentially let you treat your code as a tree and arbitrarily modify that tree (i.e., arbitrarily rewrite code). You can then compile that code while the system is running and execute it. It’s not just macro expansion at startup or a single compile step at the beginning of execution: the system can be designed with this in mind.
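The code-as-tree idea can be approximated outside lisp, too. Here is a sketch using Python's `ast` module (lisp macros do this far more naturally; this is only an illustration): parse code into a tree, rewrite the tree, then compile and run the result while the program is already running.

```python
import ast

source = "def double(x):\n    return x + x\n"
tree = ast.parse(source)

# Walk the tree and rewrite every addition into a multiplication --
# editing code as data.
class AddToMult(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

tree = ast.fix_missing_locations(AddToMult().visit(tree))

# Compile the modified tree and execute it in the running system.
namespace = {}
exec(compile(tree, "<rewritten>", "exec"), namespace)
print(namespace["double"](3))  # x + x became x * x, so this prints 9
```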
It’s also about introspection, the ability to ask questions about the system at runtime as it evolves.
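In Python terms, a minimal version of that runtime introspection is the `inspect` module (function name below is just an example): the running system can be asked what a piece of itself accepts and what defaults it carries.

```python
import inspect

def greet(name, punctuation="!"):
    return "hello " + name + punctuation

# Ask the running system about its own parts.
sig = inspect.signature(greet)
parameters = list(sig.parameters)
defaults = {n: p.default for n, p in sig.parameters.items()
            if p.default is not inspect.Parameter.empty}

print(parameters)  # ['name', 'punctuation']
print(defaults)    # {'punctuation': '!'}
```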
Sussman and Kay both talk a lot about DNA and biology, and the ability for systems to dynamically expand, change, and repair themselves.
When I think about this kind of stuff nowadays I picture something like lisp with an execution environment like the BEAM (so basically LFE) and an introspection system powered by a declarative constraint-solving query language (something like the Datalog-style RDF found in things like Datalevin and its predecessors). I think that lends itself really well to these kinds of systems, including another point that Kay talks about pretty frequently: the ability for two systems (and in our case, two actors in one environment count) to negotiate with each other via some shared fundamental language to understand each other’s purpose. SOP-style approaches seem like a compelling way to do that, but the main problem to me is identifying entities as globally unique as part of that negotiation process.
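To make the Datalog/RDF-style introspection idea slightly more concrete, here is a toy triple store (facts and predicates entirely invented): system facts live as (subject, predicate, object) triples, and queries are patterns where `None` is a wildcard, roughly how a Datalog-ish query over a running system could look.

```python
# Facts about a hypothetical running system, as RDF-style triples.
facts = {
    ("actor:1", "implements", "counter"),
    ("actor:1", "runs-on", "node-a"),
    ("actor:2", "implements", "logger"),
    ("actor:2", "runs-on", "node-b"),
}

def query(subject=None, predicate=None, obj=None):
    """Return every fact matching the pattern; None matches anything."""
    return sorted(
        (s, p, o) for (s, p, o) in facts
        if subject in (None, s) and predicate in (None, p) and obj in (None, o)
    )

# "Which actors run on node-a?"
print(query(predicate="runs-on", obj="node-a"))
# [('actor:1', 'runs-on', 'node-a')]
```

A real system along these lines would add rules and recursion (the actual Datalog part); the point here is only that "ask questions about the system as it evolves" can be a query over declared facts rather than ad-hoc API calls.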
Also don’t listen to me, I’m a monkey.