I'm suspicious about the truth of this claim. I don't think bidirectional typechecking is the problem in itself: the problem is trying to do type inference when you have some extremely flexible operator overloading, subtyping, and literal syntax features. It's these "expressive" features which have made type inference in Swift far more computationally complex, not type inference itself.
And certainly not bidirectional type inference; the author of this post's definition of the concept isn't even right (bidirectional typing refers to having a distinction between typing judgements which are used to infer types and those which are used to restrict types, not to moving bidirectionally between parent nodes & child nodes). I don't know whether the mistake comes from the post author or Chris Lattner, whether the word "bidirectional" is even relevant to Swift's typing, or whether Swift has a formal description of its type system at all, let alone whether that description is bidirectional.
EDIT: watching the video the Chris Lattner quote comes from, it appears the mistake about the word "bidirectional" is his. Bidirectional type systems are an improvement over ordinary type systems in exactly the direction he desires: they distinguish which direction typing judgements can be used (to infer or check types), whereas normal formal descriptions of type systems don't make this distinction, causing the problems he describes. "Bottom up" type checking is just a specific pattern of bidirectional type checking.
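(For anyone unfamiliar with the term, the standard presentation splits the single typing judgement into two, roughly:

Γ ⊢ e ⇒ A (synthesis: the type A is inferred from the expression e)
Γ ⊢ e ⇐ A (checking: e is checked against a type A that is already known)

so the formal description itself says which direction the type information flows, which is exactly the property Lattner is asking for.)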
Regardless, the problem with Swift is that a literal can have any of an unbounded number of types which implement a certain protocol, all of which have supertypes and subtypes, and the set of possibilities grows combinatorially because of these language features.
cf. https://arxiv.org/abs/1908.05839 (a survey of bidirectional typing)
Bidirectional type checking is what makes Swift type checking exponential. Operators and literals just really jack up the numbers in the exponential function because they quickly lead to deeply nested expression trees with many choices on each nesting. So that's why they're the most commonly cited examples. They look very innocent and simple but aren't in Swift.
But you can absolutely construct slow expressions just with functions that are overloaded on both sides (i.e. parameters and return type). Generics and Closures also drive up the complexity a lot, though.
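To make the shape of that concrete, here is a toy sketch (all names made up, and small enough that it will not actually time out) of an overload set that differs in both parameter and return type; every extra level of nesting multiplies the combinations the constraint solver has to keep alive:

func f(_ x: Int) -> Int { x }
func f(_ x: Double) -> Double { x }
func f(_ x: String) -> String { x }

// Each call to f has several candidates, and the integer literals themselves
// are flexible too, so candidate combinations multiply with nesting depth.
let r = f(f(f(f(1) + 1) + 1) + 1)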
I found an online Swift Playground, and here is a smaller test case that produces the "unable to type-check this expression in reasonable time" message:
print("a" + "b" + "c" + 11 + "d")
...if you reduce it even further to:
print("a" + "b" + "c" + 11)
...the error message is:
<stdin>:3:23: error: binary operator '+' cannot be applied to operands of type 'String' and 'Int'
print("a" + "b" + "c" + 11)
~~~~~~~~~~~~~~~ ^ ~~
<stdin>:3:23: note: overloads for '+' exist with these partially matching parameter lists: (Int, Int), (String, String)
...it still falls down with the "reasonable time" error on this tiny example, even if you fully specify the types:
let a:String = "a"
let b:String = "b"
let c:String = "c"
let d:String = "d"
let e:String = "e"
let n:Int = 11
print(a + b + c + n + d + e)
...is that pointing to a problem with type-checking or overload resolution more than type inference? Is there a way in Swift to annotate the types of sub-expressions in-line, so you could try something like:
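(The comment's example is missing here; purely as a hypothetical sketch, Swift's `as` coercion can pin down a sub-expression's type in-line, though the Int still needs an explicit conversion such as `String(n)` either way:)

print(((a + b + c) as String) + String(n) + d + e)  // hypothetical; `as String` pins the sub-expression's type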
The slow-down here isn't just from picking the correct `+` overloads, but also the various types that `"a"` could be (e.g. `String`, `Substring`, `Character`, or something like a `SQLQuery` type, depending on what libraries you have imported)
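A small illustration of that flexibility (the `SQLQuery` type here is made up, echoing the parent comment): any type can opt in to being built from a string literal, so the literal alone does not determine a type until the surrounding context does.

struct SQLQuery: ExpressibleByStringLiteral {
    let text: String
    init(stringLiteral value: String) { self.text = value }
}

let q: SQLQuery = "SELECT 1"  // the same literal syntax...
let s: String = "SELECT 1"    // ...produces a different type in a different context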
Yeah, I agree. Regular H-M typechecking if you don't have subtyping is actually really fast. The problem is that subtyping makes it really complicated--in fact, IIRC, formally undecidable.
Afaik that is true of traditional HM, but fortunately there was a big advancement in inferring subtypes w/ Dolan's Algebraic Subtyping a few years ago! It's not nearly as fast as HM (O(n^3) worst case) but generally fast enough in real code
It's funny because it's doubly exponential WRT code size. However, it's also almost linear (n log*(n), due to union-find) WRT the size of the types, and sane humans don't write programs with huge types.
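(For context on the union-find aside: unification-based inference is usually built on a union-find structure, whose operations are effectively constant time once you add path compression and union by rank. A minimal sketch of just that structure, not of HM itself:)

final class UnionFind {
    private var parent: [Int]
    private var rank: [Int]

    init(count: Int) {
        parent = Array(0..<count)
        rank = Array(repeating: 0, count: count)
    }

    // Find the representative of x's set, compressing the path along the way.
    func find(_ x: Int) -> Int {
        if parent[x] != x { parent[x] = find(parent[x]) }
        return parent[x]
    }

    // Merge the sets containing a and b (think: unifying two type variables).
    func union(_ a: Int, _ b: Int) {
        let ra = find(a), rb = find(b)
        guard ra != rb else { return }
        if rank[ra] < rank[rb] { parent[ra] = rb }
        else if rank[ra] > rank[rb] { parent[rb] = ra }
        else { parent[rb] = ra; rank[ra] += 1 }
    }
}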
error[E0277]: cannot add `i64` to `i32`
1i32 + 2i64;
^ no implementation for `i32 + i64`
It bothered me at first: there are a lot of explicit annotations for conversions when dealing with mixed-precision stuff. But I now feel that it was exactly the correct choice, and not just because it makes inference easier.
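For what it's worth, Swift makes the same call for named integer values (literals aside): there is no implicit widening between integer types, so the conversion has to be spelled out, much like the Rust example above. A small sketch:

let a: Int32 = 1
let b: Int64 = 2
// let c = a + b      // error: binary operator '+' cannot be applied to operands of type 'Int32' and 'Int64'
let c = Int64(a) + b  // the widening conversion is written out explicitly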
One of the interesting tradeoffs in programming languages is compile speed vs everything else.
If you've ever worked on a project with a 40 minute build (me) you can appreciate a language like Go that puts compilation speed ahead of everything else. Lately I've been blown away by the "uv" package manager for Python, which not only seems to be the first correct one but is also so fast I can be left wondering if it really did anything.
On the other hand, there's a less popular argument that the focus on speed is a reason why we can't have nice things and that, for people working on smaller systems, languages should be focused on other affordances, so we have things like https://www.rebol.com/
One area I've thought about a lot is the design of parsers: for instance, there is a drumbeat you hear about Lisp being "homoiconic", but if you had composable parsers and your language exposed its own parser, and if every parser also worked as an unparser, you could do magical metaprogramming with an ease similar to Lisp. Python almost went there with PEG but stopped short of it being a real revolution because of... speed.
As for the kind of problem he's worried about (algorithms that don't scale) one answer is compilation units and careful caching.
> One of the interesting tradeoffs in programming languages is compile speed vs everything else.
In the case of Rust it's more of a cultural choice. Early people involved in the language pragmatically put everything else (correctness, ability to ship, maintainability, etc.) before compilation speed. Eventually the people attracted to contribute to the language weren't the sort that prioritized compilation speed. Many of the early library authors reflected that mindset as well. That compounds and eventually it's very difficult to crawl out from under.
I suspect the same is true for other languages as well. It's not strictly a bad thing. It's a tradeoff but my point is that it's less of an inevitability than people think.
I think most people haven't used many languages that prioritize compilation speed (at least for native languages) and maybe don't appreciate how much it can help to have fast feedback loops. At least that's the feeling I get when I watch the debates about whether Go should add a bunch more static analysis or not--people argue like compilation speed doesn't matter at all, while _actually using Go_ has convinced me that a fast feedback loop is enormously valuable (although maybe I just have an attention disorder and everyone else can hold their focus for several minutes without clicking into HN, which is how I got here).
Most languages are forced to choose between tooling speed and runtime speed, but Python has historically dealt with this apparent dichotomy by opting for neither. (⌐▨_▨)
Having used Turbo Pascal, Delphi, Modula-2, Active Oberon, Eiffel, D, OCaml, I really don't appreciate that Go puts compilation speed ahead of everything else.
Those languages show one can have both expressive type systems and fast compilation turnarounds, when the authors aren't caught up in the anti-"PhD-level language" kind of sentiment.
I'm the author of https://bolinlang.com/
Go is an order of magnitude slower. I have some ideas about why people think Go is fast, but there's really no excuse for a compiler to be slower than it. Even GCC is faster than Go if you don't include too many headers. Try compiling SQLite if you don't believe me. Last time I checked you could compile code in <100ms when using the SDL3 headers (although not SDL2).
> One of the interesting tradeoffs in programming languages is compile speed vs everything else.
Yes, but I don't think that compile speed has really been pushed aggressively enough to properly weigh this tradeoff. For me, compilation speed is the #1 most important priority. Static type checking is #2, significantly below #1; everything else I consider low priority.
Nothing breaks my flow like waiting for compilation. With a sufficiently fast compiler (and Go is not fast enough for me), you can run it on every keystroke and get realtime feedback on your code. Now that I have had this experience for a while, I have completely lost interest in any language that cannot provide it no matter how nice their other features are.
I think this is a false choice. It comes from the way we design compilers today.
When you recompile your program, usually a tiny portion of the lines of code have actually changed. So almost all the work the compiler does is identical to the previous time it compiled. But, we write compilers and linkers as batch programs that redo all the compilation work from scratch every time.
This is quite silly. Surely it’s possible to make a compiler that takes time proportional to how much of my code has changed, not how large the program is in total. “Oh I see you changed these 3 functions. We’ll recompile them and patch the binary by swapping those functions out with the new versions.” “Oh this struct layout changed - these 20 other places need to be updated”. But the whole rest of my program is left as it was.
I don’t mind if the binary is larger and less efficient while developing, so long as I can later switch to release mode and build the program for .. well, releasing. With a properly incremental compiler, we should be able to compile small changes into our software more or less instantly. Even in complex languages like Rust.
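As a toy sketch of that idea (an assumption-laden illustration, not how any real compiler is organized): cache each function's generated artifact keyed by its source text, and only redo the work for entries whose source actually changed.

// Hypothetical per-function cache keyed by source text.
var objectCache: [String: [UInt8]] = [:]

func codegen(_ source: String) -> [UInt8] {
    Array(source.utf8)  // stand-in for real code generation
}

func incrementalBuild(_ functions: [String]) -> [[UInt8]] {
    functions.map { source in
        if let cached = objectCache[source] { return cached }  // unchanged: reuse
        let artifact = codegen(source)                         // changed or new: recompile just this one
        objectCache[source] = artifact
        return artifact
    }
}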
> Nothing breaks my flow like waiting for compilation. With a sufficiently fast compiler (and Go is not fast enough for me), you can run it on every keystroke and get realtime feedback on your code.
I already get fast feedback on my code inlined in my editor, and for most languages it only takes 1-2 seconds after I finish typing to update (much longer for Rust, of course). I've never personally found that those 1-2 seconds are a barrier, since I type way faster than I can think anyway.
By the time I've finished typing and am ready to evaluate what I've written, the error highlighting has already popped up letting me know what's wrong.
I'm a big fan of the idea of Swift as a cross-platform, general-purpose language, but it just feels bad without Xcode. The VS Code extension is just okay, and all of the tutorials/documentation assume you are using Xcode.
A lot of the issues that Swift is currently facing are the same issues that C# has, but C# had the benefit of Mono and Xamarin, and in general more time. Plus you have things like JetBrains Rider to fill in for Visual Studio. Maybe in a few years Swift will get there, but I'm just wary because Apple really doesn't have any incentive to support it.
Funnily enough, the biggest proponent of cross-platform Swift has been Miguel de Icaza, GNOME creator and cofounder of Mono, the cross-platform C# implementation pre-.NET Core. His SwiftGodot project even got a shout-out from Apple recently.
The only thing holding it back is Apple not investing into making it happen. Swift is in a weird spot where it has so much potential, but without investment in the tooling for other platforms (which is uncommon for Apple in general) it just won't happen, at least not as quickly as it could.
I wouldn't describe Swift itself as having so much potential: I loved it and advocated for it, for years. After getting more experience on other platforms and having time to watch how it evolved, or didn't as per TFA, it's...okay to mediocre compared to peers - Kotlin, Dart, Python come to mind.
If Foundation was genuinely cross platform and open source, that description becomes more plausible for at least some subset of engineers. * (for non-Apple devs, Foundation ~= Apple's stdlib, things like dateformatting)
I don't mean to be argumentative, I'm genuinely curious what it looks like through someone else's eyes and the only way to start that conversation is taking an opposing position.
I am familiar with an argument that it's better than Rust, but I'd be very curious to understand if "better than" is "easier to pick up" or "better at the things people use Rust for": i.e. I bet it is easier to read & write, but AFAIK it's missing a whole lot of what I'll call "necessary footguns for performance" that Rust offers.
* IIRC there is an open source Foundation intended for Linux(?), but it was sort of just thrown at the community to build.
> The only thing holding it back is Apple not investing into making it happen.
This seems to be a (bad) pattern with Apple, one that Google used to (and still does) get a lot of flak for: this habit of not investing in things and then letting them die slow, painful deaths.
E.g. I remember this criticism being leveled at Safari a lot.
But, for better or worse, Apple is not really a technology company; it's a design company. They focus on their cash cow (iPhone) and main dev tool (MacBook), and nearly everything else is irrelevant. Even their ARM laptops aren't really about being a great silicon competitor, I suspect. Their aim is to simplify their development model across phone/laptop/tablet and design seamless things, not make technically great or good things.
The reason(s) they haven't turned (as much) to enshittification are probably that a) it goes against their general design principles, b) they have enough to do improving what they have and releasing new stuff, and c) they aren't in a dominant/monopolistic market position where they can suddenly create utter trash and get away with it because there's nothing else.
And yes, they exhibit monopolistic behaviors within their "walled garden", but if they make a product bad enough, people can and will flee for e.g. Android (or possibly even something microsoft-ish). They can't afford to make a terrible product, but they can afford to abandon anything that doesn't directly benefit their bottom line.
Which is why I suppose I generally stopped caring about most things Apple.
I would say the opposite, actually: at least on large pure swift projects, Xcode grinds to a halt. Many of its features come with unacceptable performance cliffs which are impossible to patch. I ran into a particular problem with the build documentation command recently: https://www.jackyoustra.com/blog/xcode-test-lag
C# also has the issue that while the .NET team is making heroic efforts for .NET to be a good cross platform citizen, upper management would rather sell Visual Studio and Windows licenses, alongside "works best in Azure" frameworks.
C# has been decent on Mac since the early 2010s (MonoDevelop, Unity3d), and mobile .NET has been pretty robust on iOS and Android for about a decade, with plenty of NuGet libraries available.
Meanwhile, Swift has a long way to go to reach at least the state of Kotlin Multiplatform, which is still mostly in beta and lacks libraries that can work outside of Android.
They never will, since it's also one of Swift's greatest strengths. What they may, eventually, do is dedicate the resources to minimize the negative aspects of the system while documenting clear ways to mitigate the biggest issues. Unfortunately Apple's dev tools org is chronically under resourced, which means improvements to the inference system and its diagnostics come and go as engineers are allowed to work on it. Occasionally it will improve, only to then regress as more features are added to the language, and then the cycle continues.
I think this is an unfair characterization. Yes, Apple's developer ecosystem has a lot of fair complaints; I've personally run into the issues in this article, particularly with newer APIs like SwiftData's #Predicate macro. But just two days ago we saw a lot of concerted effort to fix systemic problems, like Xcode the editor, or compile times with the Explicit Modules improvements.
I think you're painting with too heavy a brush. Apple clearly is dedicating resources to long-tail issues. We just saw numerous examples two days ago at WWDC24.
No, this is just the typical Apple cycle I alluded to. Improvements are saved up for WWDC, previewed, then regress over the next year as other work is done, only for the process to repeat. They've demonstrated small improvements practically every year, yet the compiler continues to regress in build performance. Notably, the explicit module build system you mentioned regresses my small project's build time by 50% on an M1 Ultra. And even without it, overall build performance regressed 10 - 12% on the same project.
Explicit modules make build times worse, not better. Yes, this is the exact opposite of what Apple claims they do, and I am genuinely baffled by the disconnect. Usually Apple's marketing is at least directionally true even if they overstate things, but in this case the project appears to have entirely failed to deliver on what it was supposed to do but it's still being sold as if it succeeded.
On top of that, the big thing we didn't see announced this year was anything at all related to addressing the massive hit to compile times that using macros causes.
It's really hard for me to read past Lattner's quote. "Beautiful minimal syntax" vs "really bad compile times" and "awful error messages".
I know it's not helpful to judge in hindsight, lots of smart people, etc.
But why on earth would you make this decision for a language aimed at app developers? How is this not a design failure?
If I read this article correctly, it would have been an unacceptable decision to make users write setThreatLevel(ThreatLevel.midnight) in order to have great compile times and error messages.
Can someone shed some light on this to make it appear less stupid? Because I'm sure there must be something less stupid going on.
I'm a native Swift app developer, for Apple platforms, so I assume that I'm the target audience.
Apps aren't major-league toolsets. My projects tend to be fairly big, for apps, but the compile time is pretty much irrelevant, to me. The linking and deployment times seem to be bigger than the compile times, especially in debug mode, which is where I spend most of my time.
When it comes time to ship, I just do an optimized archive, and get myself a cup of coffee. It doesn't happen that often, and is not unbearable.
If I was writing a full-fat server or toolset, with hundreds of files, and tens of thousands of lines of code, I might have a different outlook, but I really appreciate the language, so it's worth it, for me.
Of course, I'm one of those oldtimers that used to have to start the machine, by clocking in the bootloader, so there's that...
I tried to fix a bug in Signal a few years ago. One part of the code took so long to do type inference on my poor old Intel MacBook that the Swift compiler errored out. I suppose waiting was out of the question, and I needed a faster computer to be able to compile the program.
That was pretty horrifying. I’ve never seen a compiler that errors nondeterministically based on how fast your cpu is. Whatever design choices in the compiler team led to that moment were terrible.
The author mentioned zig. And zig would get this right: you can just write `setThreatLevel(.midnight)`
But where zig breaks down is on any more complicated inference. It's common to end up needing code like `@as(f32, 0)` because zig just can't work it out.
In awkward cases you can have chains of several @ statements just to keep the compiler in the loop about what type to use in a statement.
I like zig, but it has its own costs too
I can’t offer much in the way of reasoning or explanation, but having written plenty of both Swift and Kotlin (the latter being a lot like a more verbose/explicit Swift in terms of syntax), I have to say that in the day-to-day I prefer the Swift way. Not that it’s a terrible imposition to have to type out full enum names and such, but it feels notably more clunky and less pleasant to write.
So maybe the decision comes down to not wanting to trade off that smooth, “natural” feel when writing it.
In practice, you write out ThreatLevel.MIDNIGHT, let the IDE import it for you, and then use an IDE hotkey to do the static import and eliminate the prefix.
Swift’s stated goal has always been to be a language which can scale from systems programming to scripts. Apple themselves are writing more and more of their own stuff in Swift.
Calling it “a language aimed at app developers” is reductive.
The alternatives are even less charitable to the Swift creators.
Surely, early in the development someone noticed compile times were very slow for certain simple but realistic examples. (Alternatives: they didn't have users? They didn't provide a way to get their feedback? They didn't measure compile times?)
Then, surely, they sat down, considered whether they could improve compile times and at what cost, and determined that any improvement would come at the cost of requiring more explicit type annotations. (Alternatives: they couldn't do the analysis the author did? The author is wrong? They found other improvements, but never implemented them?)
Then, surely they made a decision that the philosophy of this project is to prioritize other aspects of the developer experience ahead of compile times, and memorialized that somewhere. (Alternatives: they made the opposite decision, but didn't act on it? They made that decision, but didn't record it and left it to each future developer to infer?)
The only path here that reflects well on the Swift team decision makers is the happy path. I mean, say what you like about the tenets of Swift, dude, at least it's an ethos.
Honestly it follows the design of the rest of the language. An incomplete list:
1. They wrote it to replace C++ instead of Objective-C. This is obvious from hearing Lattner speak, he always compares it to C++. Which makes sense, he dealt with C++ every day, since he is a compiler writer. This language does not actually address the problems of Objective-C from a user-perspective. They designed it to address the problems of C++ from a user-perspective, and the problems of Objective-C from a compiler's perspective. The "Objective-C problems" they fixed were things that made Objective-C annoying to optimize, not annoying to write (except if you are a big hater of square brackets I suppose).
2. They designed the language in complete isolation, to the point that most people at Apple heard of its existence the same day as the rest of us. They gave Swift the iPad treatment. Instead of leaning on the largest collection of Objective-C experts and dogfooding this for things like ergonomics, they just announced one day publicly that this was Apple's new language. Then proceeded to make backwards-incompatible changes for 5 years.
3. They took the opposite approach of Objective-C, designing a language around "abstract principles" vs. practical app decisions. This meant that the second they actually started working on a UI framework for Swift (the theoretical point of an Objective-C successor), 5 years after Swift was announced, they immediately had to add huge language features (view builders), since the language was not actually designed for this use case.
4. They ignored the existing community's culture (dynamic dispatch, focus on frameworks vs. language features, etc.) and just said "we are a type obsessed community now". You could tell a year in that the conversation had shifted from how to make interesting animations to how to make JSON parsers type-check correctly. In the process they created a situation where they spent years working on silly things like renaming all the Foundation framework methods to be more "Swifty" instead of...
5. Actually addressing the clearly lacking parts of Objective-C with simple iterative improvements which could have dramatically simplified and improved AppKit and UIKit. 9 years ago I was wishing they'd just add async/await to ObjC so that we could get modern async versions of animation functions in AppKit and UIKit instead of the incredibly error-prone chained didFinish:completionHandler: versions of animation methods. Instead, this was delayed until 2021 while we futzed about with half a dozen other academic concerns. The vast majority of bugs I find in apps from a user perspective are from improper reasoning about async/await, not null dereferences. Instead the entire ecosystem was changed to prevent nil from existing and under the false promise of some sort of incredible performance enhancement, despite the fact that all the frameworks were still written in ObjC, so even if your entire app was written in Swift it wouldn't really make that much of a difference in your performance.
6. They were initially obsessed with "taking over the world" instead of being a great replacement for the actual language they were replacing. You can see this from the early marketing and interviews. They literally billed it as "everything from scripting to systems programming," which generally speaking should always be a red flag, but makes a lot of sense given that the authors did not have a lot of experience with anything other than systems programming and thus figured "everything else" was probably simple. This is not an assumption, he even mentions in his ATP interview that he believes that once they added string interpolation they'd probably convert the "script writers".
The list goes on and on. The reality is that this was a failure in management, not language design though. The restraint should have come from above, a clear mission statement of what the point of this huge time-sink of a transition was for. Instead there was some vague general notion that "our ecosystem is old", and then zero responsibility or care was taken under the understanding that you are more or less going to force people to switch. This isn't some open source group releasing a new language and it competing fairly in the market (like, say, Rust for example). No, this was the platform vendor declaring this is the future, which IMO raises the bar on the care that should be taken.
I suppose the ironic thing is that the vast majority of apps are just written in UnityScript or C++ or whatever, since most of the App Store is actually games and not utility apps written in the official platform language/frameworks, so perhaps at the end of the day ObjC vs. Swift doesn't even matter.
This is a great comment, you clearly know what you're talking about and I learned a lot.
I wanted to push back on this a bit:
> The "Objective-C problems" they fixed were things that made Objective-C annoying to optimize, not annoying to write (except if you are a big hater of square brackets I suppose).
From an outsider's perspective, this was the point of Swift: Objective C was and is hard to optimize. Optimal code means programs which do more and drain your battery less. That was Swift's pitch: the old Apple inherited Objective C from NeXT, and built the Mac around it, back when a Mac was plugged into the wall and burning 500 watts to browse the Internet. The new Apple's priority was a language which wasn't such a hog, for computers that fit in your pocket.
Do you think it would have been possible to keep the good dynamic Smalltalk parts of Objective C, and also make a language which is more efficient? For that matter, do you think that Swift even succeeded in being that more efficient language?
great context. the whole narrative you present kind of begs the question of whether swift could actually be a good systems language.
as a SwiftUI app dev user I feel like this (and the OP's post) lines up with my experience but I've never tried it for e.g. writing an API server or CLI tool.
It looks to me as if there’s a solution to this problem based on the precompilation of sparse matrices. I’ll explain. If you have a function (or operator) call of the form fn(a, b), and you know that fn might accept 19 types (say) in the “a” place and 57 types in the “b” place, then in effect you have a large 2d matrix of the a types and the b types. (For functions taking a larger number of arguments you have a matrix with larger dimensionality.) The compiler’s problem is to find the matrix cell (indeed the first cell by some ordering) that is non-empty. If all the cells are empty, then you have a compiler error. If at least one cell is non-empty (the function is implemented for this type combination), then you ask “downward” whether the given argument values can conform to the acceptable types. I know that there’s complexity in this “downward” search, but I’m guessing that the bulk of the time is spent on searching this large matrix. If so, then it’s worth noting that there are good ways of making this kind of sparse matrix search very fast, almost constant time.
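A rough sketch of the lookup structure being described (names made up): store only the non-empty cells of the conceptual type-by-type matrix in a hash map, so finding a matching cell is a near-constant-time lookup instead of a scan.

struct TypePair: Hashable {
    let a: String  // stand-in for a type identifier
    let b: String
}

// Only the implemented (a-type, b-type) combinations are stored.
let implementations: [TypePair: String] = [
    TypePair(a: "Int", b: "Int"): "+(Int, Int)",
    TypePair(a: "String", b: "String"): "+(String, String)",
]

func resolve(_ a: String, _ b: String) -> String? {
    implementations[TypePair(a: a, b: b)]  // near-constant-time, like a sparse matrix lookup
}

// resolve("String", "Int") == nil, i.e. the compile-error case.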
HM works great for me. Let's try it elsewhere instead of blaming the algorithm!
{-# LANGUAGE OverloadedStrings #-} -- Let strings turn into any type defining IsString
{-# LANGUAGE GeneralizedNewtypeDeriving #-} -- simplify/automate defining IsString
import Data.String (IsString)
main = do
  -- Each of these expressions might be a String or one of the 30 Foo types below
  let address = "127.0.0.1"
  let username = "steve"
  let password = "1234"
  let channel = "11"
  let url = "http://" <> username
        <> ":" <> password
        <> "@" <> address
        <> "/api/" <> channel
        <> "/picture"
  print url
newtype Foo01 = Foo01 String deriving (IsString, Show, Semigroup)
newtype Foo02 = Foo02 String deriving (IsString, Show, Semigroup)
-- ... eliding 27 other type definitions for the comment
newtype Foo30 = Foo30 String deriving (IsString, Show, Semigroup)
Do we think I've captured the combinatorics well enough?
The url expression is 9 adjoining expressions, where each expression (and pair of expressions, and triplet of expressions ...) could be 1 of at least 31 types.
$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 9.0.2
$ time ghc -fforce-recomp foo.hs
[1 of 1] Compiling Main ( foo.hs, foo.o )
Linking foo ...
real 0m0.544s
user 0m0.418s
sys 0m0.118s
Feels more sluggish than usual, but bad combinatorics shouldn't just make it slightly slower.
I tried compiling the simplest possible program and that took `real 0m0.332s` so who knows what's going on with my setup...
Specifically, `channel = 11`, an integer.
If it was a string then it parses very quickly.
If what you're saying is true (the type is fixed as an integer), then it's even easier in TFA's case. No inference necessary.
In my code channel is not a string, it's one type of the 31-set of (String, Foo01, Foo02, .., Foo30). So it needs to be inferred via HM.
> If it was a string then it parses very quickly.
"Parses"? I don't think that's the issue. Did you try it?
----- EDIT ------
I made it an Int
let channel = 11 :: Int
instance IsString Int where
  fromString = undefined
instance Semigroup Int where
  (<>) = undefined
real 0m0.543s
user 0m0.396s
sys 0m0.148s
Our CI posts a list of shame with the 10 worst offending expressions on every PR as part of the build and test results.
So far it's working quite nicely. Every now and then you take a look and notice that your modules are now at the top, so you quickly fix them, passing the honour to the next victim.
I've worked on plenty of C++ code bases that had a 2 day build time!
If you were lucky, incremental builds only took a few hours.
> With a sufficiently fast compiler (and Go is not fast enough for me), you can run it on every keystroke and get realtime feedback on your code.
What language are you using?
> Unfortunately Apple's dev tools org is chronically under resourced
This is very true; Apple sees compiler jobs as a cost center.
> Can someone shed some light on this to make it appear less stupid? Because I'm sure there must be something less stupid going on.
He doesn't claim it's not a design failure.
He doesn't say they sat down and said "You know what? Let's do beautiful minimal syntax but have awful error messages & really bad compile times".
The light here is recursive. As you lay out, it is extremely s̶t̶u̶p̶i̶d̶ unlikely that choice was made, actively.
Left with an unlikely scenario, we take a step back and question whether we have any assumptions: and our assumption is that they made the choice actively.