Interesting read. I wrote the original compiler back in 2002/2003, but a lot changed by the time it was open sourced (including the confusing name -- I just called it a javascript compiler).
One detail this story gets wrong though is the claim that, "The Gmail team found that runtime JavaScript performance was almost irrelevant compared to download times." Runtime performance was actually way more important than download size and we put a lot of effort into making the JS fast (keep in mind that IE6 was the _best_ browser at the time). One of the key functions of the js compiler was inlining and dead-code removal so that we could keep the code readable without introducing any extra overhead.
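To make the inlining point concrete: in pre-JIT engines every little function call had real cost, so the compiler let you keep small accessors in the source without paying for them at runtime. A hand-wavy illustration, not actual Gmail code:

```js
// Source: readable, with a tiny accessor...
function getSubject(msg) { return msg.subject; }
var s = getSubject(message);

// ...after compilation: the call is inlined away entirely,
// so IE6 never pays for the extra function call.
var s = message.subject;
```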
Thanks for the correction, Paul (and for the great email client and JS Compiler!). I've added a note to the article.
The focus on inlining as a performance win makes a lot of sense. It's hard to get back into the pre-JIT IE6 mindset where every getter and setter came at a cost. By the time I used Closure Compiler years later this had gotten simplified to just "minification good". I remember search (where I worked) in particular was extremely concerned with shaving bytes off our JS bundles.
To be clear, minification was absolutely a key feature/motivation for the compiler. Runtime performance was more important than code size, but as usual the key to improving runtime performance is writing better code -- there's not much a compiler can do to fix slow code. For example, I wanted the inbox to render in less than 100ms, which required not only making the JS fast but also minimizing the number of DOM nodes by a variety of means (such as only having a single event handler for the entire inbox instead of one per active element).
As others here have pointed out, JS was very much looked down upon by most people at Google, and there was a lot of resistance to our JS-heavy approach. One of their objections was that JS didn't have any tooling such as compilers, and therefore the language was "unscalable" and unmaintainable. Knocking down that objection was another of the motivations for writing the compiler, though honestly it was also just kind of fun.
I used Closure at Google after coming from a Java background. I always described it as "Closure puts the Java into JavaScript". The team I was working on also found several bugs where live code was removed by the dead-code removal.
Now Closure (at Google) meant a couple of different things (by 2010+). First it was the compiler. But second it was a set of components, many UI related. Fun fact: the Gmail team had written their own set of components (called Fava, IIRC) and those had a different component lifecycle, so they weren't interoperable. All of this was the most Google thing ever.
IMHO Closure was never heavily pushed by Google. In fact, at the time, publicly at least, Google was very much pushing GWT (Google Web Toolkit) instead. For those unfamiliar, this is writing code in Java that is transpiled to JavaScript for frontend code. This was based on the very Google notion of both not understanding and essentially resenting JavaScript. It was never viewed as a "real" language. Then again, the C++ people didn't view Java as a real language either, so there was a hierarchy.
GWT obviously never went anywhere, and there were several other JavaScript initiatives that never reached mass adoption (e.g. Angular and, later, Dart). Basically, React came out and everything else just died.
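For anyone who never wrote pre-framework DOM code, that single-handler trick is event delegation. A minimal sketch with hypothetical IDs and handler names (not Gmail's actual code):

```js
// One click handler on the inbox root instead of one per row/button.
var inbox = document.getElementById('inbox');
inbox.onclick = function (e) {
  e = e || window.event;                   // IE6 exposes the event globally
  var target = e.target || e.srcElement;   // ...and calls it srcElement
  // Walk up from the clicked node looking for an actionable ancestor.
  while (target && target !== inbox) {
    var action = target.getAttribute && target.getAttribute('data-action');
    if (action === 'open') { return openConversation(target); } // hypothetical
    if (action === 'star') { return toggleStar(target); }       // hypothetical
    target = target.parentNode;
  }
};
```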
But this idea of running the same code everywhere was really destructive, distracting and counter-productive. Most notably, Google runs on protobufs. Being a binary format, this doesn't work for Javascript. Java API protobufs weren't compatible with GWT for many years. JS had a couple of encodings it tried to use. One was pblite, which basically took the protobuf tag numbers as array elements. Some Google protobufs had thousands of optional fields so the wire format became:
[null,null,null,null,...many times over...,null,"foo"]
Not exactly efficient. Another used protobuf tag numbers as JSON object keys. I think this had other issues but I can't remember what.

In pblite, they are serialized as `[,,,,...many times over...,,"foo"]`. Just comma, no "null".
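A toy sketch of the pblite idea (illustrative only -- and per the correction above, the real wire format apparently emitted bare commas rather than the word null): field number N simply lands at array index N.

```js
// Toy pblite-style encoder: field number N goes to array index N,
// so a sparse message with one high-numbered field pads the array.
function toPblite(fields, maxFieldNumber) {
  var arr = [];
  for (var n = 1; n <= maxFieldNumber; n++) {
    arr[n] = fields.hasOwnProperty(n) ? fields[n] : null;
  }
  return JSON.stringify(arr);
}

toPblite({5: 'foo'}, 5);  // => '[null,null,null,null,null,"foo"]'
```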
Likewise, Google never focused on having a good baseline set of components. Around this time some Twitter engineers came out with Bootstrap, which became the new reset.css plus a component library and everything else kind of died.
Even Angular's big idea of two-way data binding came at a huge cost (component transclusion anyone?).
Google just never got the Web. The biggest product is essentially a text box. The way I always described it is "How do you know you're engineering if you're not over-engineering?" Google builds some absolutely amazing technical infrastructure in particular, but the Web? It just never seemed to be taken seriously, or it was forced into an uncomfortable box.
Yes definitely -- I also worked there during that time, and agree with the idea that Google didn't get JS. This is DESPITE coming out with two of the greatest JS applications ever -- Gmail and Google Maps -- which basically started "Web 2.0" in JS.
I always found that odd, and I do think it was cultural. At a certain point, low-level systems programming became valued more, and IMO it was emphasized/rewarded too much over products. I also agree that GWT seemed to be more "staffed" and popular than Closure compiler. There were a bunch of internal sites using GWT.
There was much more JS talent at Yahoo, Facebook, etc. -- and even eventually Microsoft! Shocking... since early Google basically leap-frogged Microsoft's Hotmail (and I think maps) with its JS-based products. Google Docs/Sheets/Slides was supposedly a very strategic JS-based product to compete with Microsoft.
I believe a lot of it had to do with the interview process, which was very uniform for all the time I was there. You could never really hire a JS specialist -- they had to be a generalist and answer systems programming questions. I think there's sound logic in that (JS programmers need to understand systems performance), but I also think there's room for diversity on a team. People can learn different skills from each other; not everyone has to jump through the same hoops.
---
This also reminds me that I thought Alex Russell wrote something about Google engineering not getting the WEB! Not just JavaScript. It's not this, but maybe something similar:
https://changelog.com/jsparty/263
I don't remember if it was an internal or external doc. I think it focused on Chrome side things.
But I remember thinking that too -- the C++ guys don't get the web. When I was in indexing, I remember the tech lead (Sitaram) encouraged the engineers to actually go buy a web hosting account and set up a web site!! Presumably because that would get them more in touch with web tech and how web sites are structured.
So yeah, it seems really weird and ironic that the company that owns the biggest web apps and the most popular web browser has a lot of employees who don't value that tech.
---
Similarly, I have a rant about Google engineering not getting Python. The early engineers set up some pretty great Python infrastructure, and then it kind of rotted. There were arguably sound reasons for that, but I think it basically came back to bite the company with machine learning.
I've heard a bunch of complaints that the Tensorflow APIs are basically what a non-Python programmer would invent, and so PyTorch is more popular ... that's sort of second-hand, but I believe it, from what I know about the engineering culture.
A lot of it has to do with Blaze/Bazel, which is a great C++ build system, while users of every other language all find it deoptimizes their workflow (Java, Go, Python, JS, ...)
So basically I think in the early days there were people who understood JS (like Paul) and understood Python, and wrote great code with them, but the eng culture shifted away from those languages.
You have good points overall, but I'd say AngularJS did have mass adoption at one time - it was a great way to build web applications compared to the alternatives. React just came by and did even better.
And Dart may not have much of a standing on its own, but Flutter is one of the most popular frameworks for creating cross-platform applications right now.
A few things, to my mind, held the Closure Compiler back from general adoption:
- First and foremost, it was a Google tool, not a tool meant for the masses, per se[0]. It could do great things, but it depended on Closure Tools, which were not the easiest to use outside of Google (did anyone outside Google actually use the `goog.*` stuff?), and writing annotation files (externs) for libraries and such never caught on; they weren't shared in any meaningful way. (A rough sketch of what an externs file looked like follows this list.)
- Lacked evangelism. I imagine this had to do with the above. There weren't people from Google loudly and proudly singing the benefits of using the Closure Compiler everywhere.
- Docs weren't the best. Some parts are still broken (links and such).
- It didn't evolve. Closure could have been a pretty great bundler too, actually, but it didn't support ESM for a long time, and it was never clear how to get it to bundle things even when it did. I think you mostly had to declare each dependency file for it to work correctly, but I never got it to work 100% myself.
These are some of the things that I think ended up holding it back. It could have been a cool little ecosystem too. There was a sister project called Closure Stylesheets, for CSS, that was supposedly very good as well, though I think that's no longer maintained. I believe it was very similar to how people use PostCSS in terms of what it did.
[0]: Lots of projects use it to minify their JS but never really took advantage of the advanced functionality it can provide on top.
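For reference, a rough sketch of what an externs file looked like, here for a hypothetical third-party library `widgets.js` (the bodies are empty because externs only declare names and types, so the compiler won't rename or strip references to them):

```js
/** @const */
var widgets = {};

/**
 * @param {!Element} el
 * @param {Object=} opt_options
 * @return {!widgets.Widget}
 */
widgets.create = function(el, opt_options) {};

/** @constructor */
widgets.Widget = function() {};

/** @return {void} */
widgets.Widget.prototype.render = function() {};
```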
Beyond that the branding is also confusing. Is it Closure? The Closure Compiler? Closure Tools? Google Closure? I used it for years and I'm still not clear which is right.
I used the goog.* stuff once upon a time. I remember using it outside Google meant bundling it with a .jar built by a single guy that hadn't been updated in years.

Quite an understatement :) It was something like two pages of "install like this" (one page) and "you should use Closure library" (one page).

Errors? Troubleshooting? Integrating with anything outside Closure? Hah.
Closure compiler was actually one of the biggest influences on the design of TypeScript, and even the early motivation for the approach that TypeScript took. From https://medium.com/hackernoon/the-first-typescript-demo-905e...:
> There were many options already available, but none seemed to be resonating well with a broad enough section of the market. Internally at Microsoft, Script# was being used by some large teams. It let them use C# directly instead of JavaScript, but as a result, suffered from the kind of impedance mismatch you get when trying to stand at arms length from the runtime model you are really programming against. And there was Google’s Closure Compiler, which offered a rich type system embedded in comments inside JavaScript code to guide some advanced minification processes (and along the way, caught and reported type-related errors). And finally, this was the timeframe of a rapid ascendancy of CoffeeScript within the JavaScript ecosystem — becoming the first heavily used transpiled-to-JavaScript language and paving the way for transpilers in the JavaScript development workflow. (Aside — I often explained TypeScript in the early days using an analogy “CoffeeScript : TypeScript :: Ruby : C#/Java/C++”, often adding — “and there are 50x more C#/Java/C++ developers than Ruby developers :-)”)
> What we quickly discovered we wanted to offer was a “best of all worlds” at the intersection of these three — a language as close as possible to JavaScript semantics (like CoffeeScript) and syntax (like Closure Compiler) but able to offer typechecking and rich tooling (like Script#).
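For anyone who never saw it, that "rich type system embedded in comments" looked roughly like this -- plain JavaScript that the compiler typechecks via JSDoc annotations (TypeScript's move was to pull the same information into the syntax itself):

```js
/**
 * @param {string} name
 * @param {number=} opt_times  Optional repeat count.
 * @return {string}
 */
function greet(name, opt_times) {
  var out = [];
  for (var i = 0; i < (opt_times || 1); i++) {
    out.push('Hello, ' + name + '!');
  }
  return out.join(' ');
}

greet('world');  // OK
greet(42);       // flagged by the compiler: number is not assignable to string
```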
Excel Online, at least back in 2015, was written in Script#. Not only was the C# IDE support just miles ahead (this was pre-VS Code, pre-TypeScript days), but the biggest thing was the ability to author unit tests that leveraged lots of work from the (at the time) dedicated testing organization. (Who wrote unit tests in JS 10 years ago, anyone?)
:raises-hand: - I was certainly writing unit tests in JS in 2012. Jasmine came out in 2010 and was already widely adopted.
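For flavor, a minimal Jasmine spec of that era -- `parseQuery` here is a hypothetical function under test:

```js
describe('parseQuery', function () {
  it('splits a query string into key/value pairs', function () {
    expect(parseQuery('a=1&b=2')).toEqual({a: '1', b: '2'});
  });

  it('returns an empty object for an empty string', function () {
    expect(parseQuery('')).toEqual({});
  });
});
```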
Also, Jasmine wasn't the first test runner by a long shot (John Resig wrote one for jQuery before Jasmine was a thing and there were earlier ones too).
I've been using the Google Closure compiler for the last 8 years or so, in production. My SaaS depends on it, so you could say I make a living based on that tool. It's been working great, providing a significant performance increase to my ClojureScript code, along with a bunch of other benefits. I use advanced compilation mode.
I'm not sure why the author believes that "minification was a design goal". Minification is a side effect.
> "In the context of npm in 2023, this would be impossible. In most projects, at least 90+% of the lines of code are third-party. "
Well I guess that's why I avoid using npm and why I can maintain code for 8 years and still keep my sanity. I keep the use of third-party code to a minimum, carefully considering each addition, its cost over time, and the probability that it will be maintained.
As a side note, I think it's immature to use terms like "X won" or "Y is dead" in tech discussions.
Closure Compiler was an amazing tool, and still is.
When I was at Lucidchart, I helped convert the 600k line Closure codebase to TypeScript. [1] In fact, Lucidchart still uses Closure (for minification+library, not typechecking).
There are better approaches available in 2023, but Closure Compiler will always have a special place in my heart.

[1] https://www.lucidchart.com/techblog/2017/11/16/converting-60...
+1. The comments bashing Closure in comparison to TypeScript feel like they're missing the timeline.
Closure brought modules, requires, compile-time type checking and optimizations to JavaScript years before TypeScript was on the scene. I wouldn't dare start a new project with Closure. But it was such a spiritual predecessor to what we have today in TypeScript, and has a special place in my heart too.
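A rough sketch of those pre-ESM Closure modules (the `myapp` namespace is made up; `goog.dom` is the real Closure Library DOM helper):

```js
goog.provide('myapp.inbox.View');

goog.require('goog.dom');

/**
 * @param {!Element} container
 * @constructor
 */
myapp.inbox.View = function(container) {
  /** @private {!Element} */
  this.el_ = container;
};

/** Appends one placeholder row to the container. */
myapp.inbox.View.prototype.render = function() {
  this.el_.appendChild(goog.dom.createDom('div', 'inbox-row'));
};
```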
Closure was/is a technological marvel - years ahead of the competition in some aspects. Closure had dead-code removal years before "tree-shaking" became popular in the Javascript world - both the buzzword and the implementations.
The public-facing documentation was terrible, and it got very little evangelism.
Also: it allowed multiple teams to work on huge code bases without constantly stepping on each other's feet - it was basically impossible to write bigger JS apps before.

Like a lot of Google OSS projects, honestly. Cool tech, but awkward for everyone not-Google to use.
> Unless you've worked on frontend at Google at some point in the past 20 years, it's unlikely that you've ever encountered the Closure Compiler
Unless, of course, you were at the forefront of frontend stuff a decade or so ago, when Closure Compiler was for a long time the absolute best at dead-code elimination and compressing JS artifacts. Or you're a developer using ClojureScript. Or...
> It occupied a similar niche to TypeScript, but TypeScript has absolutely, definitively won.
Huh? The Closure Compiler is so much more than just "JavaScript but with types". It also provides minification, a namespacing system, an additional standard library, and such.
> The Closure Compiler is so much more than just "JavaScript but with types". It also provides minification, a namespacing system, an additional standard library, and such.
That’s literally what the article is about if you read a few sentences more
Exactly. We built a large SPA back in 2008 and used the Closure compiler just for modularizing the code base and minification for production. It had some linting as well. The software is still running today; it only moved to parceljs last year.
Closure library/compiler was the only real way to write a large SPA back in 2012+.
You had access to an amazing compiler with a standard library that was like having every npm module you could ever need but written by Google and well maintained without the security risks.
We still use it, and the only real "wish" we have is maybe if the jsdoc "type system" could just be replaced by TypeScript while maintaining all the rest of the library/compiler.

I am aware this is possible but I have heard that getting a good tooling experience out of this, outside Google, is difficult.
Outside Google, ClojureScript (with a "j") used to depend on the Closure compiler (with an "s") - partly because the library that came with the compiler provided a Java-like API, which was convenient as the Clojure language was originally written targeting Java, and partly because the language had quite a large runtime and tree shaking was necessary for performance. You also had to write extern declarations if you used outside code, much like you have to manually declare types for untyped dependencies when using TypeScript.
Edit: ClojureScript still depends on Google Closure.
I use ClojureScript (it's excellent, along with the rest of the Clojure ecosystem) and thus use the Closure compiler every day. It no longer requires manual extern declarations and is able to infer externs from your source.
I absolutely hate property renaming in the Closure compiler.
If you're not acutely aware of it and you do a `myObj['someProp']`, you're not going to get any in-code warnings, everything will work as you expect in development, and your tests and presubmit checks will pass. On multiple occasions for me, the problem surfaced in production, and there was no one around to tell me that property renaming was even a thing. I had to try to debug compiled code.
Worse still is that you don't even have to do a `myObj['someProp']` yourself to get into trouble. There is very commonly used library code that will cause the same problem -- you're just calling a method that tries to access a property on an object. But since it's abstracted, it's even harder to catch during a code review or to debug the problem.
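A distilled version of the hazard, assuming ADVANCED_OPTIMIZATIONS:

```js
// Before compilation:
var obj = {someProp: 42};
console.log(obj.someProp);     // dotted access
console.log(obj['someProp']);  // quoted access

// After: the property and the dotted access are renamed consistently
// (say, to "a"), but the quoted string is left untouched:
var obj = {a: 42};
console.log(obj.a);            // 42 -- still works
console.log(obj['someProp']);  // undefined -- the silent production breakage
```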