I’ve come to believe that a type should be capable of reflecting any arbitrary rules a programmer knows about the bounds of a value. The only real implementations of this that I see are mostly academic, in dependently typed languages like Idris. We’re many years from that becoming mainstream, if it ever will.
I regret to say that every type level Gordian knot that I have ever been exposed to came from attempts to do this.
I think a lot of the problem is that many of the constraints and acceptable values for data are not determined until well after first deployment. The "knot" comes in when you are explicitly changing data and code to add a feature that you didn't anticipate at the start.
This is a lot like schemaless databases. The flexibility of not having to specify the full schema up front does not mean that you don't benefit from specifying parts of it. Indeed, if you are indexing things, you have to specify those parts. But it is hard to argue with a straight face that schemaless tools don't have some strong benefits.
> This is a lot like schemaless databases. The flexibility of not having to specify the full schema up front does not mean that you don't benefit from specifying parts of it. Indeed, if you are indexing things, you have to specify those parts. But it is hard to argue with a straight face that schemaless tools don't have some strong benefits.
Basically, when you use a hash / JavaScript-style object rather than a fixed struct definition for your data (which amounts to a schemaless database in aggregate), you get:
Easy addition of new properties
Co-existence of different versions of objects (newer objects will typically have more properties)
Introspection of properties and validation rules / reflection
> This is a lot like schemaless databases. The flexibility of not having to specify the full schema up front does not mean that you don't benefit from specifying parts of it. Indeed, if you are indexing things, you have to specify those parts. But it is hard to argue with a straight face that schemaless tools don't have some strong benefits.
Let me put on my straightest face.
Schemaless tools have no benefits.*
As soon as you start _doing_ anything with your data, it has a schema. You may not have declared it to your data store, to a specific piece of code, etc, but your application logic almost definitely makes assumptions about the shape of the data--what fields are and aren't present, the types of values in those fields, and what those values mean.
Every piece of your system must either accept undefined behaviour in the face of data which doesn't fit this implicit schema (there are very few non-trivial problems where you can define a meaningful, valid output for all possible inputs), or essentially treat the schemaless parts as hostile and assert its assumptions before every action.
The only thing you've done by going "schemaless" is distribute your schema across your systems and push the schema validation out to each thing interacting with the schemaless components instead of centralizing it.
* Except for literally storing raw, unstructured data. But as soon as you need to _do_ anything with the data besides regurgitate it, you're back to it having some sort of schema.
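Concretely, "assert its assumptions before every action" means every consumer of schemaless data ends up carrying a hand-written guard. A TypeScript sketch with made-up field names (the shape is the implicit schema made explicit):

```typescript
// The "implicit schema": what the application logic actually assumes
// about the shape of the data.
interface User {
  name: string;
  age: number;
}

// Each consumer either writes a guard like this before acting, or
// accepts undefined behaviour when the shape doesn't match.
function isUser(x: unknown): x is User {
  return (
    typeof x === "object" && x !== null &&
    typeof (x as Record<string, unknown>).name === "string" &&
    typeof (x as Record<string, unknown>).age === "number"
  );
}
```

With a declared schema, this check lives in one place; "schemaless" just means a copy of it lives wherever the data is touched.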
I empathize with this point of view - if not in any other way then in principle.
However, as a counterpoint, I'd suggest that encoding all known invariants as types may be prohibitively cumbersome and time-consuming - to the point where writing the program becomes more of an exercise in proofs rather than producing something useful. Often the invariants of a program are implied or can be easily inferred by the reader, which can actually make the program easier to understand for both the reader and the writer by leaving some things unsaid.
Of course, whether a program is easier to understand when its invariants are written down vs. when they are left to be inferred by the reader is probably a matter of circumstance, so different tools for different jobs and all that.
I've recently been writing a small program in Go - my first foray into the language - and I thoroughly enjoy how it gets out of my way. The lack of rigor is freeing if you don't stop to think about it too much.
The problem is that relying on the reader to infer constraints doesn't scale. For small projects this is reasonable, but in enterprise software engineering it is easy for a constraint that isn't enforced by the type system to be missed by an engineer, leading to a bug. Typed constraints, by contrast, naturally propagate throughout the system.
I think there's a sharp rule of diminishing returns the stronger you make type systems. Type safety comes at a cost and that cost has to come with an associated payoff.
This is why ultra strong type systems end up being little more than academic toys - the payoff when you dial type safety up to an extreme doesn't match the associated cost.
Types and tests should meet somewhere in the middle, because the same law of diminishing returns affects tests in reverse. Each tends to be good at what the other is bad at, and if you rely too heavily on one at the expense of the other, they will start fraying around the edges.
Indeed. A type means assigning a rigid category to something.
There is no single universal taxonomy; it ends up like the Celestial Emporium of Benevolent Knowledge, which divides animals into:
those belonging to the Emperor,
those that are embalmed,
those that are tame,
pigs,
sirens,
imaginary animals,
wild dogs,
those included in this classification,
those that are crazy-acting,
those that are uncountable,
those painted with the finest brush made of camel hair,
miscellaneous,
those which have just broken a vase, and
those which, from a distance, look like flies.
In Idris you get to choose how strict you want to make your types. E.g. you can choose to use "Vect n a" if you want a vector of a's whose size is tracked by the type system. Or you can use e.g. "List a" if you want a list of a's whose size is not tracked by the type system.
Of course the problem comes when you have to interface with some code which expects a "Vect n a" and you have a "List a", but that's solvable by simply checking the size (dynamically) and deciding what to do if your "List a" isn't of size exactly n. (Fail, or whatever.) Going from "Vect n a" to a "List a" is trivial, of course.
... but really, that's where most of the friction is: Boundaries between APIs where the types don't quite line up. (Which is often the case for other languages, tbf. We tend to just notice less at compile time since most languages are so bad at expressing constraints.)
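For readers without Idris at hand, the same boundary check can be sketched in TypeScript: a type guard refines an unknown-length array into a fixed-length tuple, the closest stand-in for "Vect n a" (all names here are mine):

```typescript
// A pair: the TypeScript stand-in for Idris's "Vect 2 a".
type Vec2<A> = [A, A];

// The dynamic boundary check: either the list can be treated as a
// Vec2 from here on, or we report failure with null.
function asVec2<A>(xs: A[]): Vec2<A> | null {
  return xs.length === 2 ? (xs as Vec2<A>) : null;
}

// Going the other way ("Vect n a" to "List a") is trivial:
function toList<A>(v: Vec2<A>): A[] {
  return v;
}
```

Past the check, downstream code can demand `Vec2<A>` and never re-validate the length; the friction is confined to the boundary.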
What, to you, is an ultra strong type system? Both OCaml and Haskell are used in plenty of non academic contexts. Do you mean something like Coq or F*?
With dependent types you still end up having to do dynamic checking of invariants for a substantial portion of real world code to construct the value with the properties you want. Anything coming from disk or network or a database etc. In practice I feel that it is far easier to just do manual dynamic checking with the constructor hidden from other modules, such as `constructEmail :: string -> Email`, which uses normal value-level code, a common Haskell pattern. Obviously a little less flexible for the common dependent-type examples such as appending two vectors of specific lengths.
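The smart-constructor pattern translates almost verbatim to TypeScript with a branded type; a sketch under the assumption of a deliberately naive validity check (`constructEmail` and the brand are made-up names, not from any library):

```typescript
// A branded type: structurally a string, but only obtainable
// through the smart constructor below.
type Email = string & { readonly __brand: "Email" };

// The one place where the dynamic check happens. Everything
// downstream can require `Email` and skip re-validation.
function constructEmail(s: string): Email | null {
  // Deliberately simplistic check; real email validation is harder.
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(s) ? (s as Email) : null;
}

// Downstream code states its requirement in the signature:
function sendTo(address: Email): string {
  return `sending to ${address}`;
}
```

Calling `sendTo("whatever")` with a plain string is a compile error; only values that passed through `constructEmail` have the `Email` type.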
The other alternative is what protobuf did to solve the same pain: in proto3, all optionals have a default value, so every field is effectively always present, and the default value doesn't need to be encoded in the message.
The exact difference between the two approaches is subtle. Technically you can do fancier things with the framework exposing the optional values as set/not set, but in practice a lot of code is a lot simpler with default values.
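The two styles can be put side by side; a TypeScript approximation (field names are invented, and this only mimics the proto semantics, it is not generated protobuf code):

```typescript
// proto3-style: absent fields decode to a zero value, so every
// message is "complete" and callers never branch on presence.
interface CountMsgDefaults {
  retries: number; // 0 when absent on the wire
}

// proto2/optional-style: the framework exposes set vs. not set.
interface CountMsgPresence {
  retries?: number;
}

// Defaulting happens once, at decode time:
function decodeWithDefault(raw: { retries?: number }): CountMsgDefaults {
  return { retries: raw.retries ?? 0 };
}

// With presence tracking, the branch leaks into every consumer:
function effectiveRetries(msg: CountMsgPresence): number {
  return msg.retries !== undefined ? msg.retries : 0;
}
```

The defaults style centralizes the "what if it's missing" decision; the presence style preserves information (was 0 sent, or nothing?) at the cost of repeating that branch everywhere.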
This is true when you can’t upgrade all the data and all the data generators. But there are smaller systems where you can actually upgrade everything you have and set some new requirements for everything coming in.
If the requirements change, you can update the definition in the type system once. And then you immediately know the locations where this new bound is violated.
It's true that there are valid programs which don't type check in the same way that for any mathematical system there are truths which can't be proven.
In practice though, almost any program you'd want to write can be type checked, in the same way that few proofs get tied into Gödelian knots.
The benefits of such an approach would be substantial. However, nobody has gotten the costs down below "eye watering", or other similarly vivid metaphors. It's not that it can't be done; it's that it is not yet known whether it can be done in a way that comes out positive in the cost/benefit engineering analysis.
I am open to the possibility that it can, but at the same time, I'd say if I draw the trendline of progress in this area out, it's not really all that encouraging.
If you want to see why it has remained only academic, spend some time with Idris and take notes on how many problems you encounter are fundamental to the paradigm versus problems that can be solved with a greater investment of effort. You'll want to pick a task that fits into its library ecosystem already, and not something like "hey, I'll write a fully generic HTTP server that can be proved correct" on your first try. I seriously suggest this, it's very educational. But part of that education will probably be to tone down your expectations of this being a miracle cure any time soon.
I don't say this to celebrate the situation; it's a bummer. But if the situation is ever going to be solved, it's going to be solved by people viewing it clearly.
(One of the major problems I see is that when you get to this level of specification, you end up with everything being too rigid. It is true that a lot of programming problems arise from things being made too sloppy and permitting a lot of states and actions that shouldn't exist. But some of that slop is also what makes our real-world systems compose together at all, without the original creator having to anticipate every possible usage. The "richer" the specifications get, the more rigid they get, and the more the creator of the code has to anticipate every possible future usage, which is not a reasonable ask. While I'd like to have the option to lock down many things more tightly, especially certain high-priority things like crypto or user permissions, it is far from obvious to me that the optimum ideal across the entire programming world is to maximize the specification level to this degree. And those high-priority use cases tend to jump to mind, but we also tend to overestimate their size in the real world. If we are going to see this style of programming succeed, we need to figure out how to balance the rigidity of tight specification with the ability to still flexibly use code in unanticipated manners later, in a context where the original specifications may not be quite right, and I think it's pretty unclear how to do that. Tightening down much past Haskell seems pretty challenging, and even Haskell is a bridge or three too far for a lot of people.)
Slightly more optimistically: As we go along, small pieces go from "too expensive to actually use" to "usable on real projects, but nobody does yet", and then to "doesn't everybody do that?"
This is the story of improvements in type systems over the last 70 years. But the progress is very slow. So this is only slightly more optimistic...
I agree with the philosophy at a high level, but opening up the power of what you can do at compile time is a very deep and nuanced trade off space. I think there’s a more general way to state what you’re getting after as well: we should be able to express arbitrary constraints that are enforced by the compiler. These constraints don’t really have to be about values themselves but could also be about how they are consumed e.g. linear types.
Unfortunately, the more powerful the type system, the harder it is to make inferences. Generality comes at a cost.
In the type-theoretic world, you want to look up "refinement types" [0]. Some interesting work that I have seen in this direction is Liquid Types [1][2]. There is some work in Haskell [3], and it is surprisingly useful in Rust as well [4].
How would you ensure that the values of the types don't invalidate the contract? Easy for basic assignment, but I'm not sure how you would do that when doing arbitrary mutation.
Idris is fascinating. I actually took the book about it on vacation with me. Software reliability is becoming increasingly important as we tie all these systems together because it multiplies the number of possible unanticipated failure modes.
Regarding the ultimate utility of languages initially considered "academic," the languages Lisp, Haskell, Prolog, Scala, Erlang and OCaml would like to have a word ;)
PrimeNumber is definitely possible in pretty much any dependent type language. Indeed, it would be pretty impossible to get any work done in Lean without it.
TotalProgram: possible in some, not in others, I think.
That sounds like you're using "type" to mean "representation", albeit with a lot of detail.
I may be odd, but my programming problems are rarely due to the number of bananas or how bananas are represented, but whether I'm working with the correct bananas.
Typescript has a phenomenal type-system indeed. However, its foundations are not very sound. The types are pragmatic and that's what counts mostly, but going forward it would be great to have type systems that are even more powerful and also theoretically sound, with a simpler foundation (= fewer edge cases and workarounds). Languages like Idris are working on this.
Typescript can't really be this language because it is impeded by having to work with the javascript runtime, which makes this task much much harder to do.
TypeScript's type foundations are not sound, and I would argue that this is a major factor why it is so good. I have yet to see a sound type system that is as pleasant to use as TypeScript's—certainly not Idris.
I am not a fan of encoding all sorts of correctness statements into a static type system. I'd much rather use a theorem proving environment for correctness, but one that is not based on type theory.
Haskell’s type system isn’t sound either, there is unsafePerformIO.
Nobody actually wants a language with a sound type system, unless they’re writing mathematical proofs. Any time you need to do anything with the external environment, such as call a function written in another language, you need an escape hatch. That’s why every language that aspires to real world use has an unsound type system, and the more practical it aspires to be, the more unsound the type system is.
Soundness is only a goal if the consequences of a type error are bad enough: your proof will be wrong, or an airplane falls out of the sky, or every computer in the world boots to a blue screen.
For everybody else the goal should be to balance error rate with developer productivity. Sacrificing double digits of productivity for single digits of error rate is usually not worth it, since the extra errors that a very sound type system will catch will be dominated by the base rate of logic errors that it can’t catch.
What does "soundness" mean here? I love typescript's type system, except for the quirks that are inherited from javascript, which I sometimes find infuriating, and I'm wondering if you're talking about the same thing. For example, the fact that typescript - through javascript - only exposes a single `number` type is so annoying. Sometimes that's fine, but other times it would be nice to be able to say "No, this isn't just any damn number, this is an integer! This is an iterable ID for god's sake!" so you at the very least get a static error when you try to pass in `1.001`, or something that could be `1.001`. And that's just a very basic example of how a collection of number types with more granularity would be an improvement. Especially if they were a little more robust than that and were composable. Imagine being able to type `integer | negativeFloat` or other such wacky composed types. Ideally you could compose your types programmatically, and define shit like "the type is a `float` whose floor value has a % 3 of 0".
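A hedged sketch of what that wish looks like in today's TypeScript, using branded types (all names here are made up; the brands exist only at the type level, so a runtime guard still performs the actual check):

```typescript
// Hypothetical refinements of number. At runtime these are plain
// numbers; the brands only exist at the type level.
type Int = number & { readonly __int: true };
type DivisibleBy3 = number & { readonly __div3: true };

// Brands compose by intersection, e.g. an integer divisible by 3:
type Int3 = Int & DivisibleBy3;

// Smart constructors do the dynamic check once, at the boundary:
function asInt(n: number): Int | null {
  return Number.isInteger(n) ? (n as Int) : null;
}

function asInt3(n: number): Int3 | null {
  return Number.isInteger(n) && n % 3 === 0 ? (n as Int3) : null;
}

// Downstream code can now refuse 1.001 statically:
function nextId(id: Int): Int {
  return (id + 1) as Int;
}
```

Passing a bare `number` to `nextId` is a compile error, which is most of what the comment asks for; what TypeScript can't do is track the `% 3` property through arithmetic automatically.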
Elixir is taking a similar direction if I'm understanding correctly (they seem to insist their type system is fundamentally different from Typescript's, but to this day I cannot understand how, since both are structural and based on set theory)
Foo and Bar are now compatible with FooBar, which can create a lot of issues even in strict mode. Typescript is amazing, but interfaces being structural rather than nominal is a footgun in many places
I think what you describe is called "extensible records". Elm had them in a prior version. You can implement them in a language with an expressive enough type system (e.g. Haskell or Scala) without special support, but the syntax can be a bit unwieldy.
I'm not claiming these are definitive in any sense, but they are useful guides into the literature if any reader wants to learn more. (If you are not used to reading programming language papers, the introduction, related work, and conclusions are usually the most useful. All the greek in the middle can usually be skipped unless you are trying to implement it.)
This is often referred to as "row typing", and the only language I've ever used that implemented it first-class with polymorphism is Purescript[0]. It's possible to model such things in other languages with sufficiently powerful type systems, though they're normally not as nice to use.
As an aside, Purescript is one of the most fun languages I've ever used for frontend work, and I lament that Elm seems to have overtaken it in the FP community.
The construction of the type 'Map<string, Type>' is entirely standard in languages like Agda and Coq (and I bet Idris too). In these languages, Type is itself a type, and can be treated like any other type (like string or int). Nothing clunky about it. (If you're curious, Type is usually referred to as a "universe type" in type theory circles.)
The Ceylon language allowed you to do that sort of thing with types.
If you "added" two maps together of types `Map<String, int>` and `Map<String, String>`, you would get back a `Map<String, String | int>` (or, equivalently, `Map<String, String> | Map<String, int>`).
But for some slightly complex reasons, most language designers find ad hoc union types (which are required for this to work) a bad idea. See the Kotlin work related to that, they explicitly want to keep that out of the language (and I've seen that in other language discussions - notice how tricky it can be to decide if two types are "the same" or whether a type is a subtype of another in the presence of generalized unions) except for the case of errors: https://youtrack.jetbrains.com/issue/KT-68296/Union-Types-fo...
> If you "added" two maps together of types `Map<String, int>` and `Map<String, String>`, you would get back a `Map<String, String | int>` (or, equivalently, `Map<String, String> | Map<String, int>`).
But these are obviously not equivalent: the first type is a map where all values are either strings or ints, and the second one is either a map where all values are strings, or all values are ints.
If that's confusing, consider: {foo: 1, bar: "2"}. It satisfies `Map<String, String | int>` but not `Map<String, String> | Map<String, int>`.
(In fact, the latter is a subtype of the former.)
P.S. You also seem to have misunderstood the toplevel comment about modeling record types as maps, as being about typing maps that exist in the language.
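The compiler agrees with this reading; a small TypeScript check (the type aliases are mine):

```typescript
// A map whose values are each either a string or a number:
type Mixed = Record<string, string | number>;
// Either an all-string map or an all-number map:
type Uniform = Record<string, string> | Record<string, number>;

// Satisfies Mixed, but neither arm of Uniform:
const mixed: Mixed = { foo: 1, bar: "2" };

// The subtype direction: every Uniform value is also a Mixed value...
const allStrings: Record<string, string> = { foo: "1", bar: "2" };
const widened: Mixed = allStrings; // fine: Uniform is a subtype of Mixed

// ...but not vice versa. Uncommenting the next line is a type error:
// const narrowed: Uniform = mixed;
```

So `Map<String, String | int>` is the strictly larger type, and the "equivalence" claimed upthread only holds in one direction.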
OCaml's object system can also achieve this in quite a lightweight way:
type foo = < foo : int >
type bar = < bar : int >
(* object types can't be spliced by name, so k and u spell the fields out *)
type k = < foo : int; bar : int >
type u = < foo : int; bar : int; baz : int >
(* no annotation needed: ".." keeps the row open, so f accepts any object
   with at least a foo method *)
let f x = x#foo (* inferred: < foo : 'a; .. > -> 'a *)
You can easily do composable tagged (by index type or even comp time strings) tuples in C++. The syntax is not pretty, but there is minimal to non-existent overhead.
On the other hand, thanks to the loose "duck typed" nature of templates, you can write functions that act on a subset of the fields of a type (even type checking it with concepts) while preserving the type of the whole object.
Go structs sorta behave this way if you put a "parent" struct in the first field of a struct definition. They are, of course, not maps though. But a map with statically defined keys/values/types is basically a struct.
This doesn't function as an interface though. You cannot pass a FooBar to a function that expects a Foo, for example, and although you can fairly easily reference the Foo-part of a FooBar instance (`foobar.Foo`) there is no way to pass e.g. an array of FooBar instances to a function that takes a slice of Foo[] as its argument. That's the problem to be solved.
That sounds achievable at compile time in C++ with some type-level map implementation such as the one in Boost.Mp11 (sadly the language does not allow general use of types as values, preventing the use of standard map containers).
Is this something professional programmers are having trouble with??
This is the kind of problem you face in your first year working, no? I am honestly curious what others think. Do you have trouble deciding when to use an interface (assuming your language has that), or a type wrapper (I don't think that's the brightest idea), or a function to extract the field to sort by (most languages support that)?
Your confidence that there is an obviously optimal way to model something reads as either brilliance or hubris and brilliance is a lot rarer.
Typically there are subtle trade-offs and compromises which only prove themselves to be useful/detrimental as the software changes over time. You can place bets based on experience, but you can only really be cocky about your choices when looking back, not looking forward.
> Is this something professional programmers are having trouble with??
I would say, yes. So does everybody else. If you don't believe it, I don't think you appreciate the depth of these problems.
I am reminded of Erik Meijer saying that the interface (in the Java sense) part of the contract between types is the least interesting part. What is important are their algebraic properties, which can be modeled using functional types and a bunch of other advanced type-theoretical ideas (like linear types, for instance).
Modelling stuff with types is not easy, it's an art.
> Is this something professional programmers are having trouble with??
Nobody said this.
There is value in thinking about things like this sometimes, because it has long-term consequences for the projects we work on. Even if you're a "professional" programmer (whatever that means), it's valuable to go back to beliefs and knowledge you've established long ago to evaluate whether to change them in the face of new experiences you've made since the last time you've thought about it.
If you think "professional" programmers don't get this sort of thing wrong in some form or another, I have a bridge to sell you.
Why scare quotes though? Professional programmer just means someone who writes software for a living. Software is your profession... there's no doubt about what that means, is there?
If you do that, you'll have run into this sort of decision very early in your career, and hopefully will understand the best way to handle it, which IMHO just depends on the language (because certain features lead to different optimum solutions) and the culture around it. But sure, I am happy to discuss fundamental topics like this, that's why I am engaging and asking what others think.
It does not solve the problem. Would you rather have a function that takes a map and puts additional keys in it (ex. `timestamp`) or a function that takes a map and stores it in a new map alongside the additional keys? In any case, how would you write a generic function that works on those additional keys?
I think TFA is talking about nesting fields in whatever order and also being able to extract the fields without ceremony like `get(get(get...))` - just allow `get(...)`. If you wanted an example closer to the submission, you would have a multi-field class inside that class. Then you would perhaps solve the nested-field-access boilerplate by manually implementing it in the interface methods.
It's not hard at all and it's actually what the second approach listed by the author shows:
class HasRecipient a where
    get_receiver :: a -> PlayerId
which adjusted to your example would be
class Timestamped a where
    timestamp :: a -> UTCTime
The problem with this approach is that you'll have duplicated data on all instances. In your example, `Quote` has the fields `timestamp` and `sender` in order to satisfy `Timestamped` and `Msg`. If you had several classes like `Quote` and interfaces like `Timestamped`, then you would end up with a lot of duplicated code.
The challenge is composition. Adding Timestamped to something in a static, type-checked way, but not modifying the source code of that something.
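In a structural system, one way to get that composition is an intersection type plus a generic wrapper; a TypeScript sketch of the idea (type and function names are mine, and this sidesteps rather than solves the Haskell type-class version):

```typescript
interface Timestamped {
  timestamp: Date;
}

// Msg is "someone else's" type: we never edit its source.
interface Msg {
  sender: string;
  body: string;
}

// Statically checked composition, applied from the outside:
type WithTimestamp<T> = T & Timestamped;

function stamp<T extends object>(value: T, at: Date): WithTimestamp<T> {
  return { ...value, timestamp: at };
}

// Generic code demands only the capability it needs:
function age(x: Timestamped, now: Date): number {
  return now.getTime() - x.timestamp.getTime();
}
```

`stamp` works on any object type without declaring anything on `Msg` itself, and the result still carries the full original type, so no information is lost in the wrapping.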
> It doesn't have interface polymorphism?
This is beside the point, but its interface polymorphism is static, not dynamic. If you have a List which is IMappable and you do some IMappable operations on it, in most OOP languages you get back an IMappable, but in Haskell you get back a List.
> This is a lot like schemaless databases. The flexibility of not having to specify the full schema up front does not mean that you don't benefit from specifying parts of it. Indeed, if you are indexing things, you have to specify those parts. But it is hard to argue with a straight face that schemaless tools don't have some strong benefits.
This is very similar to what Rich Hickey argued in "Simplicity Matters": https://youtu.be/rI8tNMsozo0?si=xTkpsLYTYh0jA5lB
Ada's generalized type contracts using subtype predicates work pretty well for this: https://learn.adacore.com/courses/Ada_For_The_CPP_Java_Devel...
You can use it for something as simple as expressing ranges, or to represent types with arbitrary constraints, including types with discontinuities.
Then you have a new use case.
Even the history of "required", a rather simple restriction on a value, shows that putting that logic into the type system can be too much of a burden.
https://capnproto.org/faq.html#how-do-i-make-a-field-require...
https://protobuf.dev/programming-guides/proto3/#default
Useful types are a compromise between expressiveness and practicality.
https://docs.servant.dev/en/stable/tutorial/ApiType.html
Unfortunately, the more powerful the type system, the harder it is to make inferences. Generality comes at a cost.
[0] https://en.wikipedia.org/wiki/Refinement_type
[1] https://goto.ucsd.edu/~ucsdpl-blog/liquidtypes/2015/09/19/li...
[2] https://goto.ucsd.edu/~rjhala/liquid/liquid_types.pdf
[3] https://ucsd-progsys.github.io/liquidhaskell/
[4] https://github.com/flux-rs/flux
The former is difficult to track through arithmetic operations.
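You can simulate a checked property in languages without refinement types by "branding" a type, but the brand is lost the moment you do arithmetic, which illustrates the tracking problem. A minimal TypeScript sketch (all names here are illustrative, not from the linked papers):

```typescript
// A poor man's refinement type: a number that has been checked >= 0.
// The brand exists only at the type level.
type NonNegative = number & { readonly __nonNegative: unique symbol };

function asNonNegative(n: number): NonNegative {
  if (n < 0) throw new RangeError(`${n} is negative`);
  return n as NonNegative;
}

const a = asNonNegative(5);
const b = asNonNegative(8);

// Subtraction erases the brand: the result is a plain number, because
// the checker cannot prove it is still >= 0. A real refinement-type
// system (Liquid Haskell, Flux) tracks such facts through arithmetic.
const diff: number = a - b; // -3, correctly NOT a NonNegative
```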
Regarding the ultimate utility of languages initially considered "academic," the languages Lisp, Haskell, Prolog, Scala, Erlang and OCaml would like to have a word ;)
TotalProgram: possible in some, not in others, I think.
I may be odd, but my programming problems are rarely due to the number of bananas or how bananas are represented, but whether I'm working with the correct bananas.
That's a programming language. In other words, now your type system needs a type system.
A future programmer may be spending more time formally describing the invariants of the system.
Edit: Oh, I typed this on mobile, this was supposed to be a comment on another comment by posix_monad.
TypeScript can't really be this language because it is impeded by having to work with the JavaScript runtime, which makes this task much, much harder.
I am not a fan of encoding all sorts of correctness statements into a static type system. I'd much rather use a theorem proving environment for correctness, but one that is not based on type theory.
Nobody actually wants a language with a sound type system, unless they're writing mathematical proofs. Any time you need to do anything with the external environment, such as call a function written in another language, you need an escape hatch. That's why every language that aspires to real-world use has an unsound type system, and the more practical it aspires to be, the more unsound the type system is.
Soundness is only a goal if the consequences of a type error are bad enough: your proof will be wrong, or an airplane falls out of the sky, or every computer in the world boots to a blue screen.
For everybody else the goal should be to balance error rate with developer productivity. Sacrificing double digits of productivity for single digits of error rate is usually not worth it, since the extra errors that a very sound type system will catch will be dominated by the base rate of logic errors that it can’t catch.
What we want to express here is an object with a map of properties (name to type), and, for the OOP-minded, a way to compose those maps as well. At compile time, of course. Have any languages achieved this? I know TypeScript can do some of these things, but it's clunky.
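For what it's worth, TypeScript can express both the "map of properties" view and compile-time composition, though the ergonomics are debatable. A small sketch (all type names are illustrative):

```typescript
// A record type is literally a map from property names to types.
type Person = { name: string; age: number };

// Composition at the type level: intersection merges property maps.
type Timestamped = { createdAt: Date };
type PersonRow = Person & Timestamped;

// Mapped types transform the whole property map at compile time,
// e.g. making every field optional.
type PartialPerson = { [K in keyof Person]?: Person[K] };

const row: PersonRow = { name: "Ada", age: 36, createdAt: new Date(0) };
const patch: PartialPerson = { age: 37 }; // ok: all fields optional
```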
Here are three recent papers on extensible records:
* https://arxiv.org/pdf/2108.06296
* https://arxiv.org/pdf/2404.00338
* https://dl.acm.org/doi/pdf/10.1145/3571224
I'm not claiming these are definitive in any sense, but they are useful guides into the literature if any reader wants to learn more. (If you are not used to reading programming language papers, the introduction, related work, and conclusions are usually the most useful. All the greek in the middle can usually be skipped unless you are trying to implement it.)
From what I read, gp only asks for a way to express the type, not a way to change/extend the type of something during runtime.
Elm's current implementation does this just fine.
As an aside, Purescript is one of the most fun languages I've ever used for frontend work, and I lament that Elm seems to have overtaken it in the FP community.
[0]: https://hgiasac.github.io/posts/2018-11-18-Record-Row-Type-a...
You made me want to try purescript :)
I like the beauty of an expressive type system as much as the next guy, but is there any scenario where the fancier encoding is better for solving real-world problems than the plain alternative, or whatever? For some slightly complex reasons, most language designers consider ad-hoc union types (which are required for this to work) a bad idea; notice how tricky it can be to decide whether two types are "the same", or whether one type is a subtype of another, in the presence of generalized unions. See the Kotlin work related to this: they explicitly want to keep unions out of the language except for the case of errors (and I've seen the same in other language discussions): https://youtrack.jetbrains.com/issue/KT-68296/Union-Types-fo...
But these are obviously not equivalent: the first type is a map where all values are either strings or ints, and the second one is either a map where all values are strings, or all values are ints.
If that's confusing, consider: {foo: 1, bar: "2"}. It satisfies `Map<String, String | int>` but not `Map<String, String> | Map<String, int>`.
(In fact, the latter is a subtype of the former.)
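The distinction is directly expressible in TypeScript (using number in place of int):

```typescript
// A map whose values may vary per key between string and number:
type MixedMap = Record<string, string | number>;
// Either a map of all strings, or a map of all numbers:
type UniformMap = Record<string, string> | Record<string, number>;

const mixed: MixedMap = { foo: 1, bar: "2" }; // fine: values differ per key

// const bad: UniformMap = { foo: 1, bar: "2" };
// ^ rejected: neither "all strings" nor "all numbers" holds

// Every UniformMap is also a MixedMap (the union of maps is a subtype
// of the map of unions), so this assignment type-checks:
const uniform: UniformMap = { x: 1, y: 2 };
const widened: MixedMap = uniform;
```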
P.S. You also seem to have misunderstood the toplevel comment about modeling record types as maps, as being about typing maps that exist in the language.
How are these equivalent? Wouldn't the latter result in {foo:1, foo:"two"}, where the former wouldn't?
On the other hand, thanks to the loose, "duck-typed" nature of templates, you can write functions that act on a subset of the fields of a type (even type-checking it with concepts) while preserving the type of the whole object.
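TypeScript's structural typing gives a similar effect: a generic function can require only the fields it actually uses, while the caller's full type is preserved. A sketch (names are illustrative):

```typescript
// Require only the fields we touch; T is otherwise unconstrained.
function withName<T extends { name: string }>(obj: T, name: string): T {
  // Object.assign returns T & { name: string }, which is still a T.
  return Object.assign({}, obj, { name });
}

const user = { name: "Ada", admin: true };
const renamed = withName(user, "Grace");
// renamed keeps the full type { name: string; admin: boolean }:
const stillAdmin: boolean = renamed.admin;
```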
[1]: https://nickel-lang.org/
”The MLs” solved it decades ago. Standard ML has very nice record types.
This is the kind of problem you face in your first year working, no? I am honestly curious what others think. Do you have trouble deciding when to use an interface (assuming your language has that), or a type wrapper (I don't think that's the brightest idea), or a function to extract the field to sort by (most languages do that)?
Typically there are subtle trade-offs and compromises which only prove themselves to be useful/detrimental as the software changes over time. You can place bets based on experience but you can only really be cocky about your choices when looking back not looking forward
You're imagining things (uncharitably); that's not in the comment you replied to.
I would say, yes. So does everybody else. If you don't believe it, I don't think you appreciate the depth of these problems.
I am reminded of Erik Meijer saying that the interface (in the Java sense) part of the contract between types is the least interesting part. What is important are their algebraic properties, which can be modeled using functional types and a bunch of other advanced type-theoretical ideas (like linear types, for instance).
Modelling stuff with types is not easy, it's an art.
Nobody said this.
There is value in thinking about things like this sometimes, because it has long-term consequences for the projects we work on. Even if you're a "professional" programmer (whatever that means), it's valuable to go back to beliefs and knowledge you established long ago to evaluate whether to change them in the face of the experience you've gained since you last thought about them.
If you think "professional" programmers don't get this sort of thing wrong in some form or another, I have a bridge to sell you.
If you do that, you'll have run into this sort of decision very early in your career, and hopefully will understand the best way to handle it, which IMHO just depends on the language (because certain features lead to different optimum solutions) and the culture around it. But sure, I am happy to discuss fundamental topics like this, that's why I am engaging and asking what others think.
IIRC you can have traits automatically implement this sort of behavior with a centralized implementation.
The challenge is composition. Adding Timestamped to something in a static, type-checked way, but not modifying the source code of that something.
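One common workaround is to do the composition with a wrapper rather than by modifying the original type: the result's type records both halves statically. A TypeScript sketch (all names are illustrative, not from any particular library):

```typescript
// Statically: the original type plus a timestamp field.
type Timestamped<T> = T & { readonly timestamp: number };

// Attach a timestamp to any existing value without touching its
// definition; the original value is left unmodified.
function timestamped<T extends object>(value: T, now: number): Timestamped<T> {
  return Object.assign({}, value, { timestamp: now });
}

const order = { id: 42, total: 99.5 };
const stamped = timestamped(order, 1700000000);
// stamped: { id: number; total: number; timestamp: number }
```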
> It doesn't have interface polymorphism?
This is beside the point, but its interface polymorphism is static, not dynamic. If you have a List which is IMappable, and you do some IMappable operations on it, in most OOP languages you get back an IMappable, but in Haskell you get back a List.