The thing about Flash was that it was pretty WYSIWYG, and a lot of creative people flocked to it because it had strong multimedia capabilities. Personally, I kind of miss the part where you could build really nice-looking stuff without much technical expertise. Nowadays you need to know all about CSS, polyfills and JS quirks, so there's always this limitation where you have to understand pretty technical stuff early on to get anything done.
Flash was more like using Premiere, where you just edited your timeline with a bit of interactivity sprinkled over it; no movie editor ever had to get their hands dirty with some kind of scripting language or low-level file formats just to edit a movie.
I had a lot of "oh wow" moments at the end of the 90's and beginning of the 00's with Flash. It was like the web was warped into the future. Nowadays you can achieve the same, but not as elegantly. It kind of reminds me of how PCs had to catch up with the Amiga for years; perhaps starting with Wing Commander, almost 8 years later, things were really on par or better.
To me, flash was like the past, or a warped vision of the distant future. The interfaces people created had visual appeal, but that’s all they had. It lacked all of the usability that I was used to finding on webpages, and therefore made for a very frustrating experience. No bookmarking, no back and forward, no right clicking to open a new tab, frequent superfluous sound, major security holes, long loading times. I would hardly call any of this elegant. As a user, I was thrilled when people stopped using flash.
Flash was like using poor quality native apps, which was a step backwards from a browser.
To me, JavaScript is like the past, or a warped vision of the distant future. The interfaces people created had visual appeal, but that’s all they had. It lacked all of the usability that I was used to finding on webpages, and therefore made for a very frustrating experience. No bookmarking, no back and forward, no right clicking to open a new tab, frequent superfluous sound, major security holes, long loading times. I would hardly call any of this elegant. As a user, I was thrilled when people stopped using JavaScript.
Flash websites could have a back button. Maybe it wasn't quite as easy as putting in a hyperlink, but it was still pretty easy to configure.
Not arguing with you about the rest. As a web developer I still haven't made a website in the last 5 years that was as visually impressive as my Flash projects from before, and sometimes I miss the visuals....
But people forget that the AWESOME intro that you wait two minutes of loading to see... is only cool the first couple of times. Many, many people were losing 5 minutes of loading time just to visit websites that they used EVERY day.
I agree, but it's about using the right tool at the right time. How else could they have started YouTube back then, for example? Early YouTube definitely was a glimpse into the future.
I agree with you that it's a little sad that amateur animation fell a bit out of vogue, but the tools are right there for anyone who's interested to pick up.
> I agree with you that it's a little sad that amateur animation fell a bit out of vogue, but the tools are right there for anyone who's interested to pick up.
I have memories of being wowed by what creative people could do with Flash in the 90s, and I would not call that amateur animation. I think the problem (1) is we currently seem to be lacking tools that let people skilled in the visual arts create things without knowledge of the mechanics, so to speak. It's as if, in order to write a great song, someone first had to understand how a musical instrument is built. I would not call Amanda Palmer or Lee Ranaldo amateurs, yet I bet they probably can't build their own instruments.
(1) I must make the disclaimer that for a couple of decades already I've been working only on the backend, so I may surely be missing part of the picture here.
I suspect a lot of early internet amateur animators got started on pirated/shared versions of Flash. Probably helped the proliferation of that scene a whole lot. Cloud based service models make that much harder.
In 2007, Flash could run 3D games, do shaders, have multiplayer, run physics simulations, have 3D sound, do bitmap manipulation and socket programming, and had documentation built into the editor.
Even now, HTML/JS can't do all of those things, and most of the things you can do are not as fast, while browsers are stuck with legacies to uphold. Flash had no DOM to worry about, no untyped language (AS3 had types) and no CSS holding it back.
The general argument was that Flash sucks because people make terrible content with it. Which is like saying I hate having hands because I knock things over... so no limbs = no mess. PERFECT.
In turn, I think it helped push native apps, since plain JS/HTML apps just sucked in comparison when it came to experience and capabilities.
Flash should have been open sourced. Hopefully, with WebGL and WebAssembly, someone can step in and create something similar.
I think the key thing to keep in mind is that Flash gave you all of that from one vendor in a coherent, easy to make use of experience. We can certainly do the same things with JS and the technologies we have outside Flash, but it takes a lot of mental work to stitch it all together. You have to know so many frameworks. You have to know the type of frameworks you're looking for to do `x`. And then there's a lot of performance tuning because something done through canvas isn't as fast as something done with the DOM, etc. Flash was gross but useful.
I agree. In the early 2000s, I learned about animation through Flash. I remember working on a 9th grade project that I illustrated through Flash. I remember learning what frames were, what keyframes were, what vectors were, etc. It was just the right amount of non-technical for me to make sense of it.
Taking a step back, man, what a different world it was back then. I'd fire up MS FrontPage or Macromedia Dreamweaver and go to town. The expectations have changed on the maintainability, usability, and functionality fronts, so I understand why we are where we are today. But I do miss those simple days.
Flash was much more than animation. ActionScript was a fantastic language for developing dynamic client GUIs in. Flex's component-based model was way better than what we have now -- basically typed web components with a consistent runtime -- and mxmlc and MXML were open source. ActionScript 3 was a fine language, with types and a nice compiler to work with.
Steve Jobs and browser security holes killed Flash, and no current open web platform covers all the use cases MXML and AS3 covered for cross-browser development. I could analyze audio channels, run lightweight process concurrency via green-threading, store user files, do i18n translation, stream over sockets and work with actual binary data types in the browser in 2006. I could trigger actions based on events in video and audio streams. I had consistently applied CSS with animations across components in 2006. I had reusable web components in 2006. It's now 2018 and we still don't have cross-browser support for all of that. Oh, and I could run my app on the desktop in offline mode and in the browser.
Security was an issue. Looking back now, I think an Android-like permissions scheme is what it, and the browser, needs to fulfill the promise of write once run anywhere that the browser and the web tends to make.
i once (~15 years ago) worked at a web design shop where the founders were both architects. they did everything in flash as it somewhat resembled the tools they knew - cad software. one of them was the designer and he was really good and they made amazing websites without even knowing a shred of html or even programming languages (they didn't even use actionscript). in case the customer wanted a guestbook they used a free ad-supported one (until i joined as a programmer).
flash was a good tool for the websites they used to create, usually graphics and animation heavy websites low on interactivity. they used the export function to generate the swf including the html index page to embed it.
over the years focus shifted more and more to dynamic websites with content generated from databases and they were mostly lost there. dynamic content (loaded by http requests from databases) in flash usually turned into a huge pain in the ass after a while. for those projects we switched to a traditional website model where dynamic content mostly wasn't loaded into flash, instead it was a html-by-php website where flash animations replaced header jpegs (i.e. animated passive content).
so, in our case, flash was a good replacement for animated and slightly interactive but not dynamically generated content.
I had a lot of "oh wow" moments at the end of the 90's and beginning of the 00's with Flash.
A lot of users did, too. But it was usually along the lines of "Oh, wow. This page has Flash. Well, I guess I'll go get a Coke while the Flash plugin loads into the browser and my computer can't do anything else. If I'm lucky, the whole thing won't crash and take all my work with it by the time I get back."
Macromedia Flash keyframe animation was great even for technical users who didn't know animation. However, as soon as you wanted any kind of interactivity you had to start learning a new scripting language and that was painful for everyone.
> Macromedia Flash keyframe animation was great even for technical users who didn't know animation. However, as soon as you wanted any kind of interactivity you had to start learning a new scripting language and that was painful for everyone.
Not really. ActionScript has always been close to JavaScript/JScript.NET and now TypeScript; it is the same syntax. In fact, ActionScript 3 was supposed to be the template for ECMAScript 4, before that was abandoned.
Totally agree with this! I miss the simple days of build once, run anywhere, and just being able to bash out a fun idea in an afternoon, release it, and know everyone would have the same experience. Yeah, security was crap with Flash, but surely that could have been solved. I think the battery use and Apple's desire to make sure everything had to go through their paid App Store were what really killed it, though.
Kind of makes me wonder if there's a market for a React.js editor that works a lot like the Web Inspector does right now, where you can build your hierarchy as a nested list, set properties of each layer through a table, "wire" properties through several layers of hierarchy to reach further down components, and reference functions easily in the table. As long as it has support for lifecycle methods, it feels like it could become a natural UI for writing React apps!
Flash was indeed a tool for cultural creatives. You can tell because it only ever worked properly under Internet Explorer for Mac OS Classic. On every other browser/platform combo, there were framerate issues and the audio would gradually desync from the video. Forget about seeing these issues resolved in the afterthought of a Linux port.
is supposed to be an implementation of the Flash VM in TypeScript, but apparently it can't even run in the latest Firefox anymore, and there have been no commits in 2 years.
Adobe has Animate CC, which is basically Flash Pro with a new name that also outputs to JS, canvas, and WebGL. I think it's very probable it will support WASM output in the future.
> If you built an applet in one of these technologies, you didn’t really build a web application. You had a web page with a chunk cut out of it, and your applet worked within that frame. You lost all of the benefits of other web technologies; you lost HTML, you lost CSS, you lost the accessibility built into the web.
But that's also true of an application which relies on WebAssembly (or JavaScript): it loses all the benefits of the web, because in a very real sense it's no longer a web site, but is instead a program running in a web page.
Neither WebAssembly nor JavaScript is document-oriented; neither is linkable; neither is cacheable. It's Flash all over again -- except that at least with Flash one could disable it and sites were okay. With WebAssembly and JavaScript, every site uses them for everything, meaning we get to choose between allowing a site to execute code on our CPUs, or seeing naught but a 'This page requires JavaScript' notice.
It is the return of Flash, and that's a bad thing. We thought we'd won the war, but really we just won a battle.
I envision horrible "all WASM" websites, just like the old "all Flash" websites, that won't have accessibility, won't be able to be linked to, etc. Worse, I envision this as being another step in the ad blocker arms race. Inevitably there are going to be websites that package an entire WASM-based browser that will need to be used to access the site, nullifying client-side ad and script blockers. I can see the pitch now-- "Keep your existing website but add our tools to prevent ad blockers!"
(Edit: Typos. I should know better than to post from my phone by now. Grrr...)
This is a criticism that would be more suited to the Canvas API than the WASM API. WASM is still meant to drive the DOM API which is still as introspectable as before.
[EDIT]: Steve is right of course, and I misspoke here, "WASM is still able to drive the DOM" is closer to what I meant to say.
It is already possible, with JavaScript; WASM doesn't change anything. And the fact that, although it can happen with JavaScript, it isn't pervasive should, I think, assuage this fear.
To be frank, I'm surprised Widevine hasn't been used in conjunction with DoubleClick/Google Ads to force adverts to be displayed.
Sure, there's "fuckAdblock" but that shortly spawned "FuckFuckAdblock". It's a whole different case when the very browser prevents the content from being tampered with.
My position on this is basically, WebAssembly is no different than JavaScript here. If you think JavaScript ruins this property, well, the web was only in the form you describe for four years, and has existed this way for 23 years now.
The focus on driving WASM performance in the browser platforms, combined with the ability to transpile more languages to WASM, pushes the barrier-to-entry lower. Yes, these concerns aren't specific to WASM, but the platform is being made more capable of hosting this kind of troubling code, and more attractive to developers who would develop these things.
The web long ago became not only a document store but also a thin client platform for distributing full client applications to end users. That cat is out of the bag and is not going to be stuffed back in.
WASM is really just a cleaner, faster, more elegant way of running languages other than JavaScript in the browser. It replaces transpilers that turned languages like Java or Go into ugly, machine-code-like JavaScript blobs. It will save bandwidth and improve performance but otherwise doesn't change much. Note that transpiled and uglified JavaScript is already "closed source," so nothing changes there. Anything can be obfuscated.
I am, however, scared that HTML will go the way of Gopher. Why would anyone care to maintain boring hypertext documents when we can have the app of the day? Marketing departments everywhere tend to turn the web into Blinkenlights.
How many support documents from 15-20 years ago can you still find using the old links? So many sites work as dumb front-ends for a database.
Information retrieval and persistence over time is not something many worry about.
The cat is out of the bag for sure. I just hope that what was can still survive.
WebAssembly enables load-time and run-time (dlopen) dynamic linking in the MVP by having multiple instantiated modules share functions, linear memories, tables and constants using module imports and exports. In particular, since all (non-local) state that a module can access can be imported and exported and thus shared between separate modules’ instances, toolchains have the building blocks to implement dynamic loaders.
The code is fetched via URLs so you can link to it in that sense, too.
I believe the parent comment was referring to hyperlinks, not dynamic linking.
The point was more that once webpages become applications running on the client (think single page apps), the natural document metaphor of web pages and the tooling built on it (hyperlinks, forward/back, bookmarks, history) falls apart unless you do extra work to ensure that experience is maintained.
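As a sketch of that extra work (the route table and function names here are invented for illustration), a single-page app has to map URLs back to views itself, usually via the History API:

```javascript
// Minimal sketch of restoring the document metaphor in an SPA.
// All names are hypothetical.
const routes = { "/": "home", "/about": "about" };

function viewFor(path) {
  return routes[path] || "not-found";
}

function navigate(path) {
  // In a browser, these two lines are what bring back the back
  // button, bookmarks, and shareable links:
  //   history.pushState({ path }, "", path);
  //   window.onpopstate = (e) => render(viewFor(e.state.path));
  return viewFor(path);
}

console.log(navigate("/about"));  // "about"
console.log(viewFor("/missing")); // "not-found"
```

None of this happens for free; every SPA either writes this plumbing or pulls in a router library that does.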
Sometimes a program running in the browser will be valuable when it's full window.
Would I complain if I could run a full version of Word or Excel in the browser? The browser would become a universal interface in another way and decrease our reliance on particular operating systems.
> Would I complain if I could run a full version of Word or Excel in the browser? The browser would become a universal interface in another way and decrease our reliance on particular operating systems.
I for one would, because the browser is an absolutely shitty interface. You're still forced into the "there are tabs, which contain sandboxed documents" model of use. Interoperability is nonexistent, integration with machine capabilities is superficial and completely opaque to the user, the data model is hidden (where is my localStorage equivalent of the file browser again?), and everything assumes you're constantly connected - it's a corporate wet dream, but for individuals, it's a nightmare.
In my experience, the application-on-browser products consume far more CPU and RAM than the application-on-OS products. For me, that's a pretty big deal: I need the laptop to run as long as possible on a charge. Right now, I would complain if I _had_ to run a full version of Word or Excel in the browser.
Perhaps Web Assembly will drive this power usage down. But as it stands now, I actively avoid more than one of these app-on-browser products at a time.
> decrease our reliance on particular operating systems
By replacing it with a poor simulacrum of an operating system. Browser APIs are an inefficient subset of what libc and BSD sockets offer.
And they provide near-zero interoperability with native applications. No filesystem access (beyond the clunky save-one-file dialog), no CLI, no IPC, nothing. That means browsers are building on top of operating systems while not interoperating with them.
Different pseudo-VMs, I mean browsers, operate differently even on the same specs for various technologies (CSS, JS). They already act effectively like "particular operating systems," except they're less efficient and more obnoxious to work with.
Potentially fun questions: are there any “DOM-native JavaScript games”? I.e., games that manipulate the DOM for their “graphics”—or even have hypertext in place of graphics—rather than running in a canvas?
The only example I can think of is the Twine engine for Interactive Fiction.
You should look into Crafty; it's a JS game engine which can output to either the DOM or canvas. I'm not sure how popular it is anymore, but quite a few games used it. There are demo games here: http://craftyjs.com
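For a flavor of what "DOM-native" could mean, here is a toy sketch (everything in it is invented): game state lives in plain JS, and "rendering" is just updating text, which in a browser would mean setting `textContent` on DOM nodes rather than drawing canvas pixels.

```javascript
// Toy "DOM-native" game loop: state is plain data, the board renders
// as text that could be poured into table cells or a <pre> element.
const WIDTH = 5;

function step(state) {
  // move the player one cell to the right, wrapping around
  return { ...state, x: (state.x + 1) % WIDTH };
}

function render(state) {
  // In a browser this would be something like
  //   cells[i].textContent = i === state.x ? "@" : ".";
  // keeping the "graphics" selectable, inspectable hypertext.
  return Array.from({ length: WIDTH }, (_, i) => (i === state.x ? "@" : ".")).join("");
}

console.log(render({ x: 0 }));       // "@...."
console.log(render(step({ x: 0 }))); // ".@..."
```

The appeal is that the accessibility and inspectability of the page survive, since the game board is still ordinary markup.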
Compiling to Wasm will only get easier. It's only hard now because the target is new and people are still adapting the tooling. There is no reason why it would be any harder than compiling for a machine.
Wasm will almost certainly lead to UI frameworks for the Web. JS people try very hard to get similar stuff, but the language is just not good enough; at the same time, the desktop people who have this stuff are clamoring for some way to use it on the Web. People are already working on those frameworks, by the way.
Yes, it's bad for document markup, but I wouldn't waste time coding the next Excel in HTML and CSS; I'd go straight to a GUI language with guaranteed cross-platform rendering.
For the commenters that seem to have some underlying fear that WASM apps will be another incarnation of a "window in a window" or some horrible bitmapped graphics pane that does not fit into the web model:
WASM is just a CPU. It's a bytecode format for expressing low-level, high-performance programs. It comes "batteries not included"--intentionally. By batteries, I mean APIs. WebAssembly modules must import everything they need from the outside. When embedded in JS and the Web, the first and still primary use case of WASM, that means modules can import functionality from both JS and the Web, and call literally anything that JS can call. That means WASM can (though still somewhat clunkily) manipulate the DOM, WebGL, audio, service events, etc, through all of the same APIs that JS can do. There is nothing that prevents a WASM app from looking and feeling exactly like something written in JS.
To reiterate: WASM does not require you to drop down to canvas or render fonts yourself. You can call out to JS or direct to WebAPIs! (again, it just happens to be clunky to do this from C++.) But other languages are working on bindings that make this much nicer. Rust anyone? :)
What WASM gives the web is a proper layer for expressing computation. The APIs and paradigms that build on top of WASM are independent, swappable, interposable, by design. Because it's a layer for computation, and a low-level one, it is by nature language-independent. As Steve mentioned, adding languages to the web one by one does not scale. Thus WASM.
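To make the import mechanism concrete, here is a deliberately tiny, hand-assembled wasm module that imports a JS function and calls it; this is the same channel through which a wasm app can reach the DOM or any other Web API. The byte array below encodes the text-format module shown in the first comment (a sketch for illustration, not production tooling):

```javascript
// Hand-assembled 54-byte wasm module, equivalent to this text format:
//   (module
//     (import "env" "log" (func $log (param i32)))
//     (func (export "run") (call $log (i32.const 42))))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version
  0x01, 0x08, 0x02, 0x60, 0x01, 0x7f, 0x00, 0x60, 0x00, 0x00, // type section
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76, 0x03, 0x6c, 0x6f,
  0x67, 0x00, 0x00,                                           // import "env" "log"
  0x03, 0x02, 0x01, 0x01,                                     // function section
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,       // export "run"
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x41, 0x2a, 0x10, 0x00, 0x0b, // code: call $log(42)
]);

// Any JS function can be handed in -- in a browser this could just as
// well be (x) => { document.title = x; } or a WebGL call.
let received = null;
const instance = new WebAssembly.Instance(
  new WebAssembly.Module(bytes),
  { env: { log: (x) => { received = x; } } }
);

instance.exports.run(); // wasm calls back into JS
console.log(received);  // 42
```

The module itself has no I/O of any kind; everything it can touch arrives through that import object, which is the "batteries not included" design in action.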
> To reiterate: WASM does not require you to drop down to canvas or render fonts yourself.
The fear isn't that it requires that. The fear is that it enables that.
The web is, and has been over the past two decades, in the constant state of war over control between publishers and consumers. People - and especially businesses - making pages would like to have 100% control over how the webpage/app looks and is being used. But the users would like to have some control over what they're viewing too[0].
The most widely known battle in this war is the battle for ad-blocking. The publishers want you to view lots of ads. You want just the content, without any of the ads. So far, the technology (and economics) favors the user, but it's not a given.
The balance of control on the web was always maintained by the technologies on which the web standardized on. Pure HTML, or even HTML+CSS, strongly favours the user. JavaScript tilts the balance significantly towards the publishers, as now they can (and do) generate content with code, which renders the page difficult to interpret and modify on the user end. One of the biggest complaints about Flash was how shitty the pure-Flash/mostly-Flash webpages were. That's not an intrinsic problem of Flash - this happened, because Flash gave the publishers too much control. And publishers (again, especially businesses) will use (and abuse) any control they're given.
The fear here is that WASM again tilts the control in favour of publishers, which will lead to abuse and the web becoming a much worse place for consumers. If WASM, by virtue of efficiency, enables publishers to embed a browser they control within the page, publishers will use it, because this would single-handedly eliminate most ad-blocking, userscripting and scraping efforts.
--
[0] - and the power users, like myself, would like to have 100% of that control - think of how much better the web would be if the data was always published in machine-readable format, without tons of bullshit paginations and stylistic choices to scroll and click through. For instance, when looking for current weather, I want to input my location and a time span, and get weather data. I want to be able to script that. I don't want to waste time looking at ads, pretty pictures, non-relevant text and links.
It's a big trade-off to be sure. On the one hand, I'm worried about the web becoming more closed-source and less hackable for all the reasons you've mentioned.
But on the other hand, I can't help but see the enormous potential of a proper assembly language for the web. Web technologies have felt like a massive hack for decades: tools designed for basic text formatting and a bit of interactivity which have been stretched in extreme ways to meet the needs of the modern web. Web applications are the most widely used software on the planet, and if you ask me, it's about time developers had the freedom to develop them in the language which makes the most sense for the task at hand rather than the only one available. And I am quite keen to see what kinds of new things will be possible when the ceiling is significantly raised for performance optimization.
> For instance, when looking for current weather, I want to input my location and a time span, and get weather data. I want to be able to script that. I don't want to waste time looking at ads, pretty pictures, non-relevant text and links.
I feel the same way. But those ads are there because that's the entire business model of people putting weather data out there for free. On most free sites, ads aren't just a sideshow, they're the driving engine. Take away the ads, and there goes the business model.
What we need is some other way to pay for the weather data. Maybe this could be a service provided by your ISP, like NTP or DNS. Or some third party subscription service. Or maybe even taxpayer funded. But if you're using a service that relies on ads as their revenue model, then expect to put up with ads. They're part of the deal.
They won't abuse it, because of the GDPR. And people are already smarter; the same tricks from the past won't work again.
Browser vendors will be able to easily block overly heavy WASM programs, for example ones which run too long.
Or new laws will enforce that.
[edit]:
Or even: very heavy WASM apps will be required to be signed by a certificate provider, otherwise the user will be warned about the risks. Just like HTTP vs HTTPS.
I'm typically sharply critical of the web, but I think this comparison is kind of silly. The biggest problem with Java Applets and Flash is the security issues, which were largely caused by giving web pages access to a second, less secure sandbox. WASM stays in the same sandbox as the rest of the web. Flash also had the problem of being proprietary and non-standardized with only one implementation, something WASM does not suffer from.
For those worried about the "all WASM" pages looking like the old "all Flash" pages of yore, consider that Flash and Java applets had their own UI stack and WASM does not. The closest WASM has to that is OpenGL, but you've been able to make all-OpenGL apps with pure JavaScript for some time and it hasn't taken over the web with terrible sites yet. WASM code can interact with the DOM. I guess we could worry about native C/C++ GUI toolkits being ported to WASM, but the web community gets what it deserves for making Electron a thing.
I don't like JavaScript in general but I don't see how WASM is any worse, and if anything it's quite a bit better.
> Flash player had an open-access spec and there was more than just the Adobe Flash Player as implementations.
No -- the compiler maybe, ActionScript maybe, but not the player. The player is entirely closed source and there is no open spec for it. Or you'll need to show it to me.
> there was more than just the Adobe Flash Player as implementations.
Only Adobe's implementation could run all swf files. Scaleform was not an alternative flash player. Any attempt at creating an alternative and feature complete flash player failed.
WASM has all the security problems of Flash, and then it multiplies them by making WASM content linkable.
So someone makes a game, and they use this very useful WASM library over here. Only that library exploits Spectre or Meltdown to steal data. Or maybe it just silently hoses your machine by targeting the new WebGL shaders? Or any of a myriad of other things.
Exploiting browser bugs is still just exploiting browser bugs and this is already a problem for JavaScript, WASM doesn't make it worse. Flash introduces a second, black box sandbox implemented by morons.
WASM itself isn't Java, Flash, or Silverlight, but isn't it another step in the ongoing multi-year process of replicating the features of those technologies, accomplishing what they tried to: compile to one format and run it on multiple platforms?
I think so, and the managers at Adobe and Sun must be kicking themselves for not somehow getting their runtime more open, modular, and standardized now that we see write once run anywhere with a few system hooks is all we need.
Then again... It was a different world in the mid 2000s. The web standardization process? Ha, what was that?
On a side note, I'm seeing more articles pointing out that WASM runs in the JS VM. Doesn't that negate the whole speed advantage of WASM?
I think it's a step forward in that it's more integrated into the platform. Remember when TCP/IP used to be an add-on for an operating system?
> managers at Adobe and Sun must be kicking themselves
Both tried. As I recall Sun were blocked by Microsoft, and Flash was bundled as standard with Netscape from about 2001 onwards. Steve Jobs killed that stone-dead when he point-blank refused to support it on iDevices.
Some companies have all the right ideas and for whatever reason still can't execute.
Adobe AIR beat things like Electron and PhoneGap to market by years. IMHO the issue with Adobe is this insistence on 'open' still having various very opinionated elements. Adobe AIR, for example, had a lot of good ideas but still attempted to evangelize Flash and ActionScript. I _think_ MS is trying to pivot off that grave now with .NET Core. Time will tell if the Mono-to-Wasm or .NET Core Native projects have legs.
I was so very excited about Adobe Air and wrote a production application with it in 2009.
I _think_ a sweet spot for WASM is data processing. The data visualization space should explode once I can work with data in the browser at near-native speed.
> On a side note, I'm seeing more articles pointing out that WASM runs in the JS VM. Doesn't negate the whole advantage of speed for WASM?
It basically means that WASM has the same safety/security model as the JS VM. Just like JS, it is compiled to native code (I'm simplifying a bit, of course) before being executed. However, where JS is one of the languages with the most complicated semantics around, which makes it really, really hard to compile efficiently, WASM has extremely simple semantics and is designed to be really, really easy to compile efficiently.
No, wasm, like asmjs, is designed to be compiled into native code once validated. Unlike asmjs, it doesn't also require a long parsing step. It uses the same code paths used to emit native code from the JS VM JIT.
It’s bootstrapping. You’re building a new thing (example: C++) that works a lot like the old thing (C). So you build a wrapper (Charm) that works on top of the old thing so you can get the conversation going, expand your capabilities and recruit.
Over time you do more of your own thing and you or someone else splits these two pieces of code into three smaller ones. Like the LLVM backend that can be fed by a C or C++ frontend.
As Wasm becomes a competitive advantage, you should expect to see people split up their JavaScript VM into three pieces, with JavaScript and WebAssembly running as peers instead of guest and host.
In a very small way, we kind of saw a similar thing with JSON. JSON was just a strict subset of Javascript and you could emulate it on old browsers with a linter in front of an eval(). Now it’s its own thing.
Seems very premature to say that Wasm has "won" when it's only just come out. People were saying Flash had "won" when all the browsers had it embedded by default a decade ago.
Browsers removed Flash support, they might end up rendering Wasm useless by putting it behind loads of permission warnings.
There's also the chance it'll end up lacking as it is and it'll end up being a useless appendage that gets killed off.
Wasm doesn't need permission warnings since by default the sandbox is very restrictive.
I think the reason browsers removed Flash was more that it was an absolute security nightmare for the browser vendors, and they had to fully rely on Adobe to patch the worst of it.
Wasm on the other hand leverages the Javascript VM so browsers don't have that problem. And they don't depend on an external vendor either.
> Wasm doesn't need permission warnings since by default the sandbox is very restrictive.
Upcoming changes to WASM include threading and shared memory. Unless browser makers implement those features in a manner that slows the machine to a crawl, WASM certainly will be getting some security warnings. Either because security minded organizations will disable it, or because browser makers will be honest and up front about the risks with those features. (There will be either security implications, or performance implications because they implemented the new features in a secure fashion.)
> Wasm has a really strong sandboxing and verification story that others simply did not.
Eh, (P)NaCL had a very strong sandboxing and verification story.
Wasm is, basically, NaCl in a form that Mozilla and the non-Googles etc. could accept. There are small implementation differences but the details are all non-important. It was all political. If they had accepted NaCl, we'd have what we have with wasm, only we'd have had it years ago.
Naaah it really doesn't seem small. Details are important.
NaCl comes from the plugin world (NPAPI → PPAPI), while wasm comes from the JS world ("removing the JS from asm.js"), and most existing JS engines have been able to just reuse their codegen/backend parts.
PNaCl was based on a bad idea: "let's make a stable subset of LLVM IR". LLVM moves quickly, if you have a stable subset you'll eventually have to translate it to actual updated LLVM IR anyway. And good luck to anyone who wants to implement a simple small interpreter for that!
wasm is really simple — just a close-to-metal abstract machine. No included libc, no bindings to any particular platform (there might be DOM/etc. bindings in the future, but on top of wasm, not right in the core spec).
At this point, Chrome is just as happy to get rid of NaCl. It's one of the last things to still use the crufty old PPAPI interface. Mozilla dropped the predecessor, NPAPI, years ago.
This may be true re Chrome wanting to move on, but it seems a bit unfair to liken PPAPI to NPAPI in reply to a discussion on security! NPAPI was in-process, which was the whole security problem. PPAPI was built around strong process isolation.
Unlike NaCl, Wasm doesn't require building an entire separate (undocumented and unspecified) API to make it work.
Accepting NaCl without accepting PPAPI would not have been very useful. Accepting PPAPI was a non-starter without a lot more work than Google was willing to put into it.
Indeed. But, making a secure sandbox is the easy part. The hard part is poking holes all over the sandbox so the code can interact with the outside world without compromising security. JavaScript and browsers have spent decades figuring out that balance and working out all the detailed tradeoffs like same-origin policy, cookie rules, etc.
As I understand it, Java has features that make the time complexity (in the O(n^2) sense) and the general implementation complexity (in the probability-of-bugs sense) of the Java bytecode verifier worse than WebAssembly validation.
The Java and Flash approach was software-implemented security contexts and managers, with the VM running in the same process as the browser.
People used to say that you couldn't do strong process isolation because it would be unworkably slow.
And then Google Chrome demonstrated that was a fallacy. People actually flocked to Chrome because it was faster, despite it using multiple processes and isolating plugins.
NaCl built on that - its security model was strong process isolation plus verification that the code running in that isolated process couldn't 'escape'.
Mozilla is still kinda in denial re process isolation.
Isn't the answer really, "no, because WASM doesn't allow you to do anything you couldn't already do with JavaScript"? I don't think anyone would complain about Applets or Flash if the only way they could interact with the web page was through the DOM.
WASM enables nothing. It just makes a particular subset of JavaScript slightly faster.
EDIT: I'm not talking about Canvas. That isn't part of WASM.
Eh, I'd still complain about them; they tended to be terribly poorly written and do a lot of unnecessary work, slowing things down and using too much power. Of course, so do random javascript things.
I kind of miss the days of the HTML/HTTP-only web.
Not sure if I agree. I would still say that WASM enables new types of applications. Of course you could have compiled all those new WASM applications to asm.js or even plain JS as well. But it would've been slower: maybe even unusably slow. That's why I think WASM enables new types of applications. It's the same as in the past, where faster JS engines enabled richer web applications like GMail. Now we get games in the browser and large native applications like AutoCAD.
I worked on a 3d web based modeling app in my last job (Autodesk) that was based on asm.js. (and this was a legit solid modeler with heavy CPU usage - not a toy). It was basically 50% speed compared to the desktop version, which isn't ideal but totally viable. WASM is definitely welcome but you could do the same things in 2015 with asm.js. Honestly I think the biggest selling point is decreased download size. That was our big pain point with asm.js
I don't doubt the claim that a Java Applet can be faster than JavaScript, but I haven't ever seen a modern high-performance web application written as a Java Applet. The CAD stuff, games, etc., are always Flash (EDIT: when they aren't JavaScript). So the comparison is really Flash vs. JavaScript.
And… comparing Flash to V8? I'm having trouble finding modern benchmarks, but Flash is based on ActionScript, which is related to JavaScript. JavaScript has V8, which is a world-class JIT with Google's engineering talent behind it. I'd be very surprised if Flash's JIT was competitive with V8. At least as of 2011 it wasn't [1].
Flash was basically JavaScript that was easily portable between browsers and allowed faster multimedia applications. In that sense the combination of WebAssembly + WebGL and other improvements is exactly like the return of Java applets and Flash.
Is the end result from the user perspective the same? I don't know. I'm just little worried that it might be.
"and other improvements" is doing a lot of work there. Flash had an entire scripting language and an API for drawing/creating UI elements, listening to their events, etc.
Raw WebGL isn't going to give you any of that. And listening for any interaction events at all is going to necessitate the DOM.
I don't doubt WASM will end up with something similar, but it's also something that's possible with JS today, there are a bunch of JS game engines that render to WebGL and perform very well.
The article isn't about WebGL, it's about WASM. WASM doesn't make anything new possible, it just makes already-possible things faster. (WASM is quite literally designed as a faster replacement of pure-JavaScript asm.js.)
IMO the prospect of less of my bandwidth, memory and CPU cycles being wasted is a pretty big deal. It's silly how much overhead there is with standard web technologies given the fact that it's how most people spend most of their time on their computer.
I'd still complain. I remember supporting a Java plugin application that was incredibly slow and beyond buggy. Then again, limiting the scope to only what JavaScript could already do could have helped avoid those issues to begin with.
Not arguing with you about the rest. As a web developer I still haven't made a website in the last 5 years that was as visually impressive as my Flash projects from before, and sometimes I miss the visuals....
But people forget that the AWESOME intro that you wait two minutes of loading to see... is only cool the first couple of times. Many, many people were losing 5 minutes of loading time just to visit websites that they used EVERY day.
I agree with you that it's a little sad that amateur animation fell a bit out of vogue, but the tools are right there for anyone who's interested to pick up.
I have memories of being wowed by what creative people could do with flash in the 90s and I would not call that amateur animation. I think the problem (1) is we currently seem to be lacking in tools that let people skilled in the visual arts create things without knowledge of the mechanics, so to speak. It's like if in order to write a great song someone would first have to understand how a musical instrument is built. I would not call Amanda Palmer or Lee Ranaldo amateur yet I bet they probably can't build their own instruments.
(1) I must make the disclaimer that for a couple of decades already I've been working only on the backend, so I may surely be missing part of the picture here.
Edit: Fixed footnote mark.
Even now HTML/JS can't do all of the things, and most of the things you can do are not as fast, while browsers are stuck with legacies to uphold. Flash had no DOM to worry about, no untyped language (it had AS3), and no CSS holding it back.
The general argument was that Flash sucks because people make terrible content with it. Which is like saying I hate having hands because I knock things over... so no limbs = no mess. PERFECT.
In turn I think it helped push native apps, since plain JS/HTML apps just sucked in comparison when it comes to experience and capabilities.
Flash should have been open sourced. Hopefully with webgl and web assembly someone can step in and create something similar
Taking a step back, man what a different world it was back then. I'd fire up MS FrontPage or Macromedia Dreamweaver and go to town. The expectations have changed on the maintainability, usability, and functionality parts so I understand why we are where we are today. But I do miss those simple days.
Steve Jobs and browser security holes killed Flash, and no current open web platform covers all the use cases MXML and AS3 covered for cross-browser development. I could analyze audio channels, run lightweight process concurrency via green-threading, store user files, do i18n translation, do streaming websockets and work with actual binary data types in the browser in 2006. I could trigger actions based on events in video and audio streams. I had consistently applied CSS with animations across components in 2006. I had reusable web components in 2006. It's now 2018 and we still don't have cross-browser support for all of that. Oh, and I could run my app on the desktop in offline mode and in the browser.
Security was an issue. Looking back now, I think an Android-like permissions scheme is what it, and the browser, needs to fulfill the promise of write once run anywhere that the browser and the web tends to make.
flash was a good tool for the websites they used to create, usually graphics and animation heavy websites low on interactivity. they used the export function to generate the swf including the html index page to embed it.
over the years focus shifted more and more to dynamic websites with content generated from databases and they were mostly lost there. dynamic content (loaded by http requests from databases) in flash usually turned into a huge pain in the ass after a while. for those projects we switched to a traditional website model where dynamic content mostly wasn't loaded into flash, instead it was a html-by-php website where flash animations replaced header jpegs (i.e. animated passive content).
so, in our case, flash was a good replacement for animated and slightly interactive but not dynamically generated content.
A lot of users did, too. But it was usually along the lines of "Oh, wow. This page has Flash. Well, I guess I'll go get a Coke while the Flash plugin loads into the browser and my computer can't do anything else. If I'm lucky, the whole thing won't crash and take all my work with it by the time I get back."
We romanticize the past.
Java applets OTOH had the exact experience you describe. Those were absolutely terrible.
Not really. ActionScript has always been close to JavaScript/JScript.NET and now TypeScript; it is the same syntax. In fact, ActionScript 3 was supposed to be the template for ECMAScript 4, before it was abandoned.
But it would still come with many of the drawbacks.
It is supposed to be an implementation of the Flash VM in TypeScript, but it can't even run in the latest Firefox anymore, apparently, and there have been no commits for 2 years.
But that's also true of an application which relies on WebAssembly (or JavaScript): it loses all the benefits of the web, because in a very real sense it's no longer a web site, but is instead a program running in a web page.
WebAssembly or JavaScript, neither is document-oriented; neither is linkable; neither is cacheable. It's Flash, all over again — except at least with Flash one could disable it and sites were okay. With WebAssembly & JavaScript, every site uses them for everything, meaning we get to choose between allowing a site to execute code on our CPUs, or seeing naught but a 'This page requires JavaScript' notice.
It is the return of Flash, and that's a bad thing. We thought we'd won the war, but really we just won a battle.
(Edit: Typos. I should know better than to post from my phone by now. Grrr...)
[EDIT]: Steve is right of course, and I misspoke here, "WASM is still able to drive the DOM" is closer to what I meant to say.
What difference do you see?
Apps are apps.
Sometimes both are in the browser.
It'd be great if all "documents" had an HTML version, with minimal JS. For accessibility, searching, deep linking, etc.
https://engineering.flipboard.com/2015/02/mobile-web
Sure, there's "fuckAdblock" but that shortly spawned "FuckFuckAdblock". It's a whole different case when the very browser prevents the content from being tampered with.
WASM is really just a cleaner, faster, more elegant way of running languages other than JavaScript in the browser. It replaces transpilers that turned languages like Java or Go into ugly, basically machine-code JavaScript blobs. It will save bandwidth and improve performance but otherwise doesn't change much. Note that transpiled and uglified JavaScript is already "closed source," so nothing changes there. Anything can be obfuscated.
I am however scared that HTML will go the way of Gopher. Why would anyone care to maintain boring hypertext documents when we can have app of the day. Marketing departments everywhere tend to turn the web into Blinkenlights.
How many support documents from more than 15-20 years ago are you still able to find using the old links? So many sites are working as dumb front-ends for a database.
Information retrieval and persistence over time is not something many worry about.
The cat is for sure out of the bag. I just hope what was can still survive.
WebAssembly enables load-time and run-time (dlopen) dynamic linking in the MVP by having multiple instantiated modules share functions, linear memories, tables and constants using module imports and exports. In particular, since all (non-local) state that a module can access can be imported and exported and thus shared between separate modules’ instances, toolchains have the building blocks to implement dynamic loaders.
The code is fetched via URLs so you can link to it in that sense, too.
It's also cacheable: https://developer.mozilla.org/en-US/docs/WebAssembly/Caching...
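The import/export sharing described above is easy to see from JS: two instances can import the same `WebAssembly.Memory` and observe each other's state, which is the building block dynamic loaders use. In this sketch (the byte array is a hand-assembled module, an illustration rather than anything from the thread), the module imports a memory as `env.mem` and exports `peek()`, which reads the i32 at address 0:

```javascript
// Module imports a memory ("env" "mem") and exports peek() -> i32 at address 0.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version
  0x01, 0x05, 0x01, 0x60, 0x00, 0x01, 0x7f,                   // type: () -> i32
  0x02, 0x0c, 0x01, 0x03, 0x65, 0x6e, 0x76,                   // import from module "env"...
  0x03, 0x6d, 0x65, 0x6d, 0x02, 0x00, 0x01,                   // ...field "mem": memory, min 1 page
  0x03, 0x02, 0x01, 0x00,                                     // one function of type 0
  0x07, 0x08, 0x01, 0x04, 0x70, 0x65, 0x65, 0x6b, 0x00, 0x00, // export it as "peek"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x41, 0x00,                   // body: i32.const 0 ...
  0x28, 0x02, 0x00, 0x0b,                                     // ... i32.load; end
]);

const memory = new WebAssembly.Memory({ initial: 1 });
const imports = { env: { mem: memory } };
const mod = new WebAssembly.Module(bytes);
const a = new WebAssembly.Instance(mod, imports); // two instances...
const b = new WebAssembly.Instance(mod, imports); // ...sharing one linear memory
new Int32Array(memory.buffer)[0] = 42;            // write from JS
console.log(a.exports.peek(), b.exports.peek());  // both read 42 from the shared memory
```

The same mechanism works with functions, tables and globals, so separate modules really can be linked together at load time or at run time.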
The point was more that once webpages become applications running on the client (think single page apps), the natural document metaphor of web pages and the tooling built on it (hyperlinks, forward/back, bookmarks, history) falls apart unless you do extra work to ensure that experience is maintained.
Would I complain if I could run a full version of Word or Excel in the browser? The browser would become a universal interface in another way and decrease our reliance on particular operating systems.
I for one would, because the browser is an absolutely shitty interface. You're still forced into "there are tabs, which contain sandboxed documents" model of use. Interoperability is nonexistent, integration with machine capabilities is superficial and completely opaque to the user, the data model is hidden (where is my localStorage equivalent of the file browser again?), everything assumes you're constantly connected - it's a corporate wet dream, but for individuals, it's a nightmare.
Perhaps Web Assembly will drive this power usage down. But as it stands now, I actively avoid more than one of these app-on-browser products at a time.
By replacing it with a poor simulacrum of an operating system. Browser APIs are an inefficient subset of what libc and BSD sockets offer.
And they provide near-zero interoperability with native applications. No filesystem access (beyond the clunky save-one-file dialog), no CLI, no IPC, nothing. That means browsers are building on top of operating systems while not interoperating with them.
The only example I can think of is the Twine engine for Interactive Fiction.
https://github.com/mozilla/BrowserQuest
Doesn't work in Safari.
Wasm will almost certainly lead to UI frameworks for the Web. JS people try very hard to get similar stuff, but the language is just not good enough; at the same time, the desktop people who have this stuff are clamoring for some way to use the same on the Web. People are already working on those frameworks, by the way.
For the commenters that seem to have some underlying fear that WASM apps will be another incarnation of a "window in a window" or some horrible bitmapped graphics pane that does not fit into the web model:
WASM is just a CPU. It's a bytecode format for expressing low-level, high-performance programs. It comes "batteries not included"--intentionally. By batteries, I mean APIs. WebAssembly modules must import everything they need from the outside. When embedded in JS and the Web, the first and still primary use case of WASM, that means modules can import functionality from both JS and the Web, and call literally anything that JS can call. That means WASM can (though still somewhat clunkily) manipulate the DOM, WebGL, audio, service events, etc, through all of the same APIs that JS can do. There is nothing that prevents a WASM app from looking and feeling exactly like something written in JS.
To reiterate: WASM does not require you to drop down to canvas or render fonts yourself. You can call out to JS or direct to WebAPIs! (again, it just happens to be clunky to do this from C++.) But other languages are working on bindings that make this much nicer. Rust anyone? :)
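To make the "imports everything" point concrete, here is a minimal sketch (the byte array is a hand-assembled module for illustration): it imports a JS function as `env.log` and calls it. A DOM or WebGL call would be wired in exactly the same way, as just another imported function.

```javascript
// Module imports env.log(i32) and exports run(), which calls log(7).
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version
  0x01, 0x08, 0x02, 0x60, 0x01, 0x7f, 0x00, 0x60, 0x00, 0x00, // types: (i32)->(), ()->()
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,                   // import from module "env"...
  0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,                         // ...field "log": func of type 0
  0x03, 0x02, 0x01, 0x01,                                     // one local function of type 1
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,       // export it as "run"
  0x0a, 0x08, 0x01, 0x06, 0x00,                               // body, no locals:
  0x41, 0x07, 0x10, 0x00, 0x0b,                               // i32.const 7; call 0; end
]);

const received = [];
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes), {
  env: { log: (x) => received.push(x) }, // any JS function works here: console.log,
});                                      // a DOM mutation, a WebGL call, ...
instance.exports.run();
console.log(received); // [ 7 ] -- the Wasm code called back into JS
```

Nothing about the module says "browser": the embedder decides what the imports do, which is exactly the "batteries not included" design.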
What WASM gives the web is a proper layer for expressing computation. The APIs and paradigms that build on top of WASM are independent, swappable, interposable, by design. Because it's a layer for computation, and a low-level one, it is by nature language-independent. As Steve mentioned, adding languages to the web one by one does not scale. Thus WASM.
The fear isn't that it requires that. The fear is that it enables that.
The web is, and has been over the past two decades, in the constant state of war over control between publishers and consumers. People - and especially businesses - making pages would like to have 100% control over how the webpage/app looks and is being used. But the users would like to have some control over what they're viewing too[0].
The most widely known battle in this war is the battle for ad-blocking. The publishers want you to view lots of ads. You want just the content, without any of the ads. So far, the technology (and economics) favors the user, but it's not a given.
The balance of control on the web was always maintained by the technologies on which the web standardized on. Pure HTML, or even HTML+CSS, strongly favours the user. JavaScript tilts the balance significantly towards the publishers, as now they can (and do) generate content with code, which renders the page difficult to interpret and modify on the user end. One of the biggest complaints about Flash was how shitty the pure-Flash/mostly-Flash webpages were. That's not an intrinsic problem of Flash - this happened, because Flash gave the publishers too much control. And publishers (again, especially businesses) will use (and abuse) any control they're given.
The fear here is that WASM again tilts the control in favour of publishers, which will lead to abuse and the web becoming a much worse place for consumers. If WASM, by virtue of efficiency, enables publishers to embed a browser they control within the page, the publishers will use it, because this would single-handedly eliminate most ad-blocking, userscripting and scraping efforts.
--
[0] - and the power users, like myself, would like to have 100% of that control - think of how much better the web would be if the data was always published in machine-readable format, without tons of bullshit paginations and stylistic choices to scroll and click through. For instance, when looking for current weather, I want to input my location and a time span, and get weather data. I want to be able to script that. I don't want to waste time looking at ads, pretty pictures, non-relevant text and links.
But on the other hand, I can't help see the enormous potential of a proper assembly language for web. Web technologies have felt like a massive hack for decades: tools designed for basic text formatting and a bit of interactivity which have been stretched in extreme ways to meet the needs of the modern web. Web applications are the most widely used software on the planet, and if you ask me it's about time developers will have the freedom to develop them in the language which makes the most sense for the task at hand rather than the only one which is available. And I am quite keen to see what kinds of new things will be possible when the ceiling is significantly raised for performance optimization.
I feel the same way. But those ads are there because that's the entire business model of people putting weather data out there for free. On most free sites, ads aren't just a sideshow, they're the driving engine. Take away the ads, and there goes the business model.
What we need is some other way to pay for the weather data. Maybe this could be a service provided by your ISP, like NTP or DNS. Or some third party subscription service. Or maybe even taxpayer funded. But if you're using a service that relies on ads as their revenue model, then expect to put up with ads. They're part of the deal.
For those worried about the "all WASM" pages looking like the old "all Flash" pages of yore, consider that Flash and Java applets had their own UI stack and WASM does not. The closest WASM has to that is OpenGL, but you've been able to make all-OpenGL apps with pure JavaScript for some time and it hasn't taken over the web with terrible sites yet. WASM code can interact with the DOM. I guess we could worry about native C/C++ GUI toolkits being ported to WASM, but the web community gets what it deserves for making Electron a thing.
I don't like JavaScript in general but I don't see how WASM is any worse, and if anything it's quite a bit better.
Pretty much every AAA video game of a certain period would have been using Scaleform's Flash player for its user interface.
No, the compiler maybe, Action Script maybe, but not the player. The player is entirely closed source and there is no open spec for the player. Or you need to show it to me.
> there was more than just the Adobe Flash Player as implementations.
Only Adobe's implementation could run all swf files. Scaleform was not an alternative flash player. Any attempt at creating an alternative and feature complete flash player failed.
Flash the tech is not open, at all.
So someone makes a game, and they use this very useful WASM library over here. Only that library exploits spectre or meltdown to steal data. Or maybe it just silently hoses your machine by targeting the new WebGL shaders? Or any myriad number of other things.
I think so, and the managers at Adobe and Sun must be kicking themselves for not somehow getting their runtime more open, modular, and standardized now that we see write once run anywhere with a few system hooks is all we need.
Then again... It was a different world in the mid 2000s. The web standardization process? Ha, what was that?
On a side note, I'm seeing more articles pointing out that WASM runs in the JS VM. Doesn't that negate the whole speed advantage of WASM?
> managers at Adobe and Sun must be kicking themselves
Both tried. As I recall Sun were blocked by Microsoft, and Flash was bundled as standard with Netscape from about 2001 onwards. Steve Jobs killed that stone-dead when he point-blank refused to support it on iDevices.
Adobe AIR beat things like Electron and PhoneGap to market by years. IMHO the issue with Adobe is this insistence on 'open' still having various very opinionated elements. Adobe AIR, for example, had a lot of good ideas but still attempted to evangelize Flash and ActionScript. I _think_ MS is trying to pivot off of that grave now with .NET Core. Time will tell if the Mono-to-Wasm or .NET Core Native projects have legs.
I was so very excited about Adobe Air and wrote a production application with it in 2009.
I _think_ a sweet spot for WASM is data processing. The data visualization space should explode once I can work with data in the browser at near-native speed.
https://en.wikipedia.org/wiki/Adobe_AIR
Well, here's a benchmark of asm.js JavaScript versus WebAssembly in a real world application:
https://pspdfkit.com/blog/2018/a-real-world-webassembly-benc...
The WebAssembly version outperforms the asm.js version.
It runs in the JS sandbox, but it cannot be efficiently emulated in JS. "VM" is an ambiguous term.
Browser developers are talking about running JS in the Wasm VM. That will probably be reasonable very soon.
It’s only a “political thing” if you don’t think that autonomy gives us diversity, or that diversity gives us more unique ideas to consider.
If we went with NaCL then everyone but the dominant player would be playing catch-up forever, worse than they already will be.
My point exactly.
The author is making the wrong comparison. WASM is JavaScript all over again, and, the HN NoScript crowd aside, that's not going anywhere soon.
> At least as of 2011 it wasn't [1].
[1] https://habr.com/post/121997/
WebAssembly does, as far as I understand it.