This is the antithesis of a hand-waving opinion piece on expressiveness. It is, rather, a talk by Shriram Krishnamurthi introducing the key insights of Matthias Felleisen's 1991 paper of the same name. From the paper's abstract:
"The literature on programming languages contains an abundance of informal claims on the relative expressive power of programming languages, but there is no framework for formalizing such statements nor for deriving interesting consequences. As a first step in this direction, we develop a formal notion of expressiveness and investigate its properties. To validate the theory, we analyze some widely held beliefs about the expressive power of several extensions of functional languages. Based on these results, we believe that our system correctly captures many of the informal ideas on expressiveness, and that it constitutes a foundation for further research in this direction."
Shriram did an excellent job: a lively, energetic, and engaging speaker who distilled some of the core concepts of the paper without getting tangled in the formalisms.
I do not have an academic background in theoretical computer science and was nevertheless able to follow along his talk perfectly well.
For anyone interested in reading it, the paper by Matthias Felleisen that this talk is based on can be found on the author's website: https://felleisen.org/matthias/papers.html
Shriram Krishnamurthi is the real deal. When I started at Brown I was majoring in Mech-E and took a programming course with Shriram on a whim. It was the hardest course I took in undergrad. Needless to say, I switched my major to Computer Science right after.
I really liked two of the points he made when answering audience questions.
(1) In the expressiveness debate, only the upsides of increasing expressive power are discussed (avoiding global rewrites) but the downsides are not (breaking intuitive expectations).
(2) The theory side and the UX side of language design communities seem to operate independently, to the detriment of the results.
The last example of an extension that increases expressive power of a language — adding truthy/falsey conditionals, over pure Boolean conditionals — is messing with my intuition! The talk obviously showed an example to prove this, and I follow the logic, but I still find it a surprising result. Maybe it chimes with one of the questions that was asked; the context required to show the distinction is engineered and quite “unnatural” (at the risk of being “hand-wavy”), which is at odds with the human meaning of expressiveness.
Your intuition is probably driven by the fact that the theorem isn't true in any popular programming language. The added feature that the proof relies on is that "truthy" conditionals allow the programmer to distinguish a function from a falsy value (such as false).

Many other features might provide this capability as well: the ability to compare a function to a boolean directly; a typeof function, provided you have the ability to compare its results; an "is function" query; or the ability to try arithmetic on the value or call it as a function and trap an error if that doesn't work.

So Shriram's enumeration of possible things you might do to the function, which is required to show the two expressions are equivalent in the starting language, is much shorter than it would be in a practical language, and as a result the proof wouldn't hold. On the other hand, I can attest that if you design a very minimal language and don't consider the question of how to tell what type a value has, you may well end up with a language that doesn't have that capability!
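To make that concrete, here is a small hand-rolled sketch (not from the talk; all names are my own) of why a truthy conditional can distinguish a function value from false. Python's `if` is truthy, so any value can appear in the test position, and a function is truthy while `False` is falsy:

```python
# Hypothetical sketch: a context that separates a function from False,
# which is only writable once the language has truthy conditionals.

def diverge():
    """Stands in for Omega: a computation that never produces an answer."""
    while True:
        pass

def context(e):
    # Truthy test: any value may appear here, not just booleans.
    if e:            # a function value is truthy; False is falsy
        diverge()    # so C[function] never halts...
    return "halted"  # ...while C[False] halts

assert context(False) == "halted"
# context(lambda x: x) would loop forever, so the two values are
# observationally distinct in the extended language.
```

In a language whose `if` only accepts booleans, this context cannot even be written, which is the gap the proof exploits.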
He (Morris, actually) defines observational equivalence as:
e1 == e2 iff
for all contexts C,
C[e1] halts iff C[e2] halts.
Which is a great trick to avoid having to compare contexts. Around 29:00 they discuss how to show 5 != 6.
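A quick sketch of that 5 != 6 argument (my own illustrative names, not the talk's notation): build one context C such that C[5] diverges but C[6] halts. Since halting behavior differs in at least one context, the definition says the two expressions are not observationally equivalent:

```python
# Sketch: distinguishing 5 from 6 using only arithmetic, a zero test,
# and a non-terminating computation.

def diverge():
    """Omega: never produces an answer."""
    while True:
        pass

def C(hole):
    # C = "if [.] - 5 is zero then diverge else stop"
    if hole - 5 == 0:
        diverge()
    return "halts"

assert C(6) == "halts"
# C(5) would loop forever; one separating context is all the
# definition requires to conclude 5 != 6.
```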
One question I have is: what if your language doesn't allow for any way to not halt? For example, a language where you have no loops. That definition would imply everything equals everything, since all contexts eventually halt.
In most of the talk he’s using Ω to mean the program doesn’t halt, i.e. does not produce an answer. He also mentions when explaining Ω that it is often used as a stand-in for a run-time error. An uncaught exception can do the same job, for example.
(The reason for this is that if you are describing the semantics of a language using a collection of reduction rules, and evaluation gets to a point where no reduction rule applies, then the program gets "stuck". You would normally implement this as a runtime error, but semantically it has the same effect as an infinite loop.)
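A toy illustration of "stuck" states (entirely my own construction): an evaluator with a single reduction rule that adds two integers. When no rule applies, the implementation raises a runtime error, but from the semantics' point of view that is interchangeable with a program that never answers:

```python
# Sketch: a one-rule reduction semantics where ill-formed terms get "stuck".

def step(expr):
    # Only rule: ("add", m, n) -> m + n, when m and n are integers.
    tag, a, b = expr
    if tag == "add" and isinstance(a, int) and isinstance(b, int):
        return a + b
    # No rule applies: the term is stuck. An implementation surfaces
    # this as a runtime error; semantically it is treated like divergence.
    raise RuntimeError("stuck: no reduction rule applies")

assert step(("add", 2, 3)) == 5
# step(("add", 2, "hello")) raises RuntimeError -- a stuck state.
```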
You can tweak the definition to take advantage of whatever your language DOES allow. He alludes to this during the Q&A at the end. For instance: two expressions are equal if (and only if), for all possible contexts C, C[e1] and C[e2] send the same bytes to standard output.
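Here is a minimal sketch of that output-based observation, assuming Python as the host language (the names `observe`, `e1`, `e2` are my own). It checks only one context, of course; the real definition quantifies over all contexts, but this shows what "same bytes to standard output" means as an observable:

```python
# Sketch: observing expressions by the bytes they send to stdout,
# rather than by whether they halt.
import io
import contextlib

def observe(program):
    """Run a zero-argument program and capture what it prints."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        program()
    return buf.getvalue()

e1 = lambda: print(2 + 3)
e2 = lambda: print(5)

# Under this one (trivial) context, the two expressions are
# indistinguishable: both emit the same output.
assert observe(e1) == observe(e2)
```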
Great talk. The definition of observational equivalence is very elegant. I instantly thought it made no sense or must be a partial definition, then the simple example given on the next slide explained it perfectly.
"The literature on programming languages contains an abundance of informal claims on the relative expressive power of programming languages, but there is no framework for formalizing such statements nor for deriving interesting consequences. As a first step in this direction, we develop a formal notion of expressiveness and investigate its properties. To validate the theory, we analyze some widely held beliefs about the expressive power of several extensions of functional languages. Based on these results, we believe that our system correctly captures many of the informal ideas on expressiveness, and that it constitutes a foundation for further research in this direction."
Highly recommended, FWIW.
Also, this talk seems wonderfully lucid.