But the things these languages are experimenting with are low-level implementation details that wouldn't be amenable to embedding. There's no escaping the Haskell GC.
But you can also stratify in the other direction. ‘Pure’ functions aren't real outside of mathematics: every function can have side effects like allocating memory, possibly not terminating, internal (‘benevolent’) mutation, et cetera. When we talk about ‘pure’ functions we usually mean that they have only a particular set of effects that the language designer considered ‘safe enough’, where ‘enough’ is usually defined with reference to the ergonomic impact of making that effect explicit. Algebraic effects make effects (and importantly effect composition — we got here in the first place because we were fed up with monad transformers) more ergonomic to use, which means you can make more effects explicit without annoying your users.
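To make the transformer friction concrete, here is a minimal Haskell sketch (the App type, tick, and the choice of state-plus-error effects are all invented for illustration). With mtl-style transformers the two effects are welded into one fixed stack:

    -- Two effects (a mutable counter and failure) composed the monad
    -- transformer way: the stack order is baked into the type, and
    -- every additional effect means another layer to thread through.
    import Control.Monad.State (StateT, get, put, evalStateT)
    import Control.Monad.Except (ExceptT, throwError, runExceptT)

    type App = ExceptT String (StateT Int IO)

    tick :: App Int
    tick = do
      n <- get                             -- state effect
      if n > 3
        then throwError "too many ticks"   -- error effect
        else do
          put (n + 1)
          pure n

    runApp :: App a -> IO (Either String a)
    runApp app = evalStateT (runExceptT app) 0

    main :: IO ()
    main = runApp (tick >> tick) >>= print   -- prints: Right 1

With algebraic effect handlers (as in Koka or Effekt), tick would instead just declare that it uses a state effect and an error effect, and the caller could handle them independently and in whatever order suits the call site.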
The acknowledgements section of that GitHub README mentions this paper.
https://koka-lang.github.io/

https://effekt-lang.org/
Frank is pretty old now, but it's perhaps a simpler implementation: https://github.com/frank-lang/frank
> Another limitation of programming languages is that they are poor abstraction tools
> Programming languages are implementation tools for instructing machines, not thinking tools for expressing ideas
Machine code is an implementation tool for instructing machines (and even then there's a discussion to be had about designing machines with instruction sets that map more neatly to the problems we want to solve with them). Everything we've built on top of that, from assembly on up, is an attempt to bridge the gap toward ‘thinking tools for expressing ideas’.
The holy grail of programming languages is a language that seamlessly supports expressing algorithms at any level of abstraction, including or omitting lower-level details as necessary. Are we there yet? Definitely not. But to give up on the entire problem and declare that programming languages are inherently unsuitable for idea expression is really throwing the baby out with the bathwater.
As others in the comments have noted, a popular and successful approach to programming today is to just start writing code and see where the nice structure emerges. The feasibility of that approach is entirely thanks to the increasing ability of programming languages to support top-down programming. If you look at programming practice in the past, when the available implementation languages were much lower-level, software engineering was dominated by high-level algorithm design tools like flowcharts, DRAKON, Nassi–Shneiderman diagrams, or UML, which were then painstakingly compiled by hand (in what was considered purely menial work, especially in the earlier days) into computer instructions. Our modern programming languages, even the ‘low-level’ ones, are already capable of higher levels of abstraction than the ‘high-level’ algorithm design tools of the '50s.
In pure functional languages, saying what value an expression will evaluate to (equivalently, explaining the program as a function of its inputs) is a sufficient explanation of the meaning of a program, and semantics for these languages is roughly considered to be ‘solved’. Open areas of study in semantics tend to be about doing the same thing for languages that have more complicated effects when run, like imperative state update or non-local control (exceptions, async, concurrency).
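As a toy illustration (the definitions here are made up for the example), in Haskell the explanation of what a program means can be nothing more than a chain of equalities:

    -- In a pure language the meaning of a program is the value it
    -- denotes, so equational rewriting is a complete explanation.
    double :: Int -> Int
    double x = x + x

    sumDoubled :: [Int] -> Int
    sumDoubled = sum . map double

    --   sumDoubled [1, 2, 3]
    -- = sum (map double [1, 2, 3])
    -- = sum [2, 4, 6]
    -- = 12

    main :: IO ()
    main = print (sumDoubled [1, 2, 3])   -- prints 12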
There's some overlap in study: typically syntax is trying to reflect semantics in some way, by proving that programs accepted by the syntactic analysis will behave or not behave a certain way when run. For example, Rust's borrow checker is a syntactic check that the program under scrutiny will never dereference an invalid pointer, even though that is something Rust's runtime semantics makes possible. Compare to Java, which has no syntactic check for this because dereferencing invalid pointers is simply impossible according to the semantics of the JVM.
Then you can make a product that is applied engineering, but you can't replace all the others that aren't.
Civil engineering and friends have the advantage of being built on top of the universe, which, despite claims to the contrary, was not hacked together in Perl!
This thread is full of claims that ‘programming is really engineering’ (in accordance with the article), ‘programming is really building’, or ‘programming is really philosophy/mathematics’. They're all true!
It's not that one of them is the True Nature of software and anyone doing the others is doing it wrong or at a lower level. These are three different things that it is completely reasonable to want to do with software, and each of them can be done to an amateur or expert level. But only one of them is amenable to scientific analysis. (The other two are amenable to market testing and formal proof, respectively.)
The ‘maker’ tribe also tests with HCI assessments like GOMS and other methods from the ‘soft’ sciences like psychology and sociology, not just economics/business science.
Model-checking and complexity proofs (and complexity type systems) are mathematical attempts to apply mathematician-programmer methods to engineer-programmer properties.
Cyclomatic complexity is an attempt to measure mathematician-programmer properties using engineer-programmer methods.
I found this article with some numbers [0], with the top one being that "95% of ATM swipes rely on COBOL code". If you just need to maintain something in production, and only occasionally update the business logic, without having to upgrade the architecture, COBOL is the way to go.
[0] https://www.pragmaticcoders.com/resources/legacy-code-stats
1. COBOL systems are typically written on a much shallower software stack, with less room for unreliability.
2. Banking systems have had a ton of effort put into reliability over decades of development.