It is exactly the failure of Microsoft and Apple to propose any meaningful cross-platform API. Both are fighting to increase vendor lock-in - it is in their best interest that apps written for Windows don't work on macOS/iOS and the other way round. Linux/open-source folks are unable to solve this problem on their own, and it becomes a cat-and-mouse game.
The popularity of web apps helped to reduce the problem, but it still exists. Electron is only a half measure because the technology used is inefficient resource-wise, and a platform optimized for the desktop would do the job better. If someone cared enough, they would allocate the necessary resources to minimize this problem (I don't believe solving it is effectively possible because of the complexity and bloat of current web technologies).
What was less obvious was that Electron would grow this big and still be this bad. How is the memory usage still so inefficient?
Microsoft singlehandedly destroyed the Windows desktop. Since Win8 and “Metro” they moved away from the consistent UI widgets of Windows 2000 or XP and instead turned the Windows desktop into a Wild West of different UI paradigms. Even the ribbon introduced a new UI paradigm that wasn’t available to third-party devs.
If even MS itself doesn’t bother to be consistent, then developers certainly won’t.
I am a long-time Windows desktop dev, but all my new projects are either web or Electron. I am done with the mess in Windows.
I'm in the same boat. I was asked recently to develop a small custom app for a library that would need to be maintained for decades. This left me scratching my head as to which GUI technology to choose to minimize the danger of it being broken/unsupported after 10 years. Paradoxically, I think Win32 might be the safest choice as MS can't really innovate (=break) too much here anymore (although I might underappreciate their skills in this regard).
Yes. I would argue that it is precisely the obsession with cross-platform that has destroyed native desktop applications. If all you are offering with your native desktop framework is a dumbed-down experience, because it has to also run on tablets and phones and TVs and game consoles and smart refrigerators, then you quickly get to a place where there are few advantages in going native over shipping a webapp in a Chrome wrapper.
Windows 8 was a catastrophic failure, and Apple seems to be leaning into the same failed philosophy now.
I don't get this. I've used GTK, Qt, Tk and FLTK software in OSX, Windows and Linux. The rest is usually covered by standard libraries and third party libraries.
So I take it this approach is disqualified on your terms for not being meaningful, which raises the question about what "meaningful" means.
My own guess is that it's a matter of convenience of distribution: enter a URL and you can start using the app.
Isn’t it that most software written today has to target the web? So you have two distinct platforms to write to: web, and non-web. Even if Microsoft and Apple came up with a brilliant, singular API, that’s still one more API that has to be coded to. With solutions like Electron, you have a very fast time-to-market with a “desktop” app, and your engineering team only needs to target the web. Customers will only notice the fundamental issues with the solution after it’s too late (or they won’t care.)
Sadly, the solution seems very business-driven. The ROI threshold to making a desktop-native app is quite high.
I use Qt for anything cross-platform. The main problem is that C++ is not an easy language for UI development. Also consider the fact that C++ devs are paid more than Java, JS or C# devs. Most other UI toolkits are really terrible for cross-platform development.
Aren't all of those C/C++? There's a time and place for manually managing memory, figuring out UB and segfaults. If you're trying to render a frame full of AAA graphics in under 15 milliseconds, or push through a million requests a second, then I don't mind it. But if I'm just writing an email client...
I don't think that's the reason. Webapps became widespread when the Internet was still mostly consumed on the desktop, which essentially meant Windows.
The main reason in my opinion is ease of distribution: it has always been painful and full of friction for the end user on Windows, and the web made it much easier.
You can see how on mobile platforms, where app distribution is not a big problem, native applications are still doing well.
You certainly see that in business. My corporate customers don't want to install anything on the desktop other than Office (and I don't think they like that very much).
This isn't new, I remember talking to the CIO of a big bank twenty years ago about this.
On top of that users quite like software delivered through a browser as it allows them to bypass their IT department.
From the consumer side it's more about costs, people will tolerate all sorts of shit as long as it's free.
Microsoft made the .NET platform cross-platform years ago, but a UI has been missing. They have a cross-platform UI project now too: https://github.com/dotnet/maui
I don't know what you are talking about. Microsoft built a Linux compatible subsystem for Windows and also built (well, acquired) .NET VMs for OSX/Linux. It's trying hard to keep people on the desktop - any desktop.
You're kidding, right? WSL is not related to cross-platform GUIs we're talking about, and .NET Core doesn't support Windows Forms/WPF - and probably never will, for the reasons mentioned above.
That comparison is pretty weird. Electron is more extensible than JavaFX? How does that work? Browser engines are not exactly famous for their sophisticated API extensibility or advanced text/graphics APIs.
I love JetBrains products and their approach. If I had any interest in writing Java (or desktop) I would love to work there and be part of the mission.
Did you see that the top post on HN for a decent chunk of yesterday was celebrating that they were getting 200 rps? 200 rps was not something to brag about to your parents 15 years ago, but half of HN seems to have never heard of serving a static file through Nginx.
I don't want to be overdramatic but it confused the hell out of me and it made me worry about the industry. How can you have all these people cargo-culting into frameworks and languages and they don't know the fundamentals?
(Kidding)
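For context on how low a bar 200 rps is: even a naive Node server that keeps a single file in memory will typically sustain thousands of requests per second on commodity hardware, and a purpose-built server like Nginx does far better. A minimal sketch (the file name is hypothetical, no tuning applied):

    // Minimal static file server sketch -- no Nginx, no framework.
    import { createServer } from "node:http";
    import { readFileSync } from "node:fs";

    // Read the page once at startup and serve it from memory.
    const page = readFileSync("index.html");

    createServer((req, res) => {
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end(page); // no per-request disk or database work at all
    }).listen(8080);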
As has been the case since the emergence of MicroSoft, companies in software/internet rely on general ignorance of users/developers for their business plans to succeed. Making people smarter and more self-reliant, running their own software for themselves, is not part of the plan. Making them reliant on someone else is the plan.
Desktop has too much potential for user control. Web is safely under the control of a handful of gatekeeper companies. Earlier HN submission today was promotion of a book by Microsoft Chief Legal Officer about how people are not competent to run their own software and should therefore let companies like MS run it for them in the cloud. Foreword written by Chief Marketing Officer, Microsoft. (Kidding again)
Put it this way - most websites probably never even reach single-digit rps. And it really doesn't matter if the limit is 200 rps when a website is only serving 1 rps.
There will be so many people involved in website building that the 1-2% who know what is going on will be enough to meet the demand of people who need to run high-scale websites.
People might waste an order of magnitude more resources than they need in the process, but realistically the absolute value lost is small.
Frameworks are there to make development more accessible. So I can completely understand how one might build on a framework without understanding the fundamentals.
Requests aren't made equal. I don't have an opinion about the manga hosting site, but I have worked on a website where many read requests required a lot of heavy joins and other data work to do anything useful, and weren't really cacheable, as every user requested his own data. Here's an example: https://cs.fastcup.net/match6279498/stats
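A hedged sketch of the caching problem described above (all names hypothetical): once the response depends on who is asking, the user id has to be part of the cache key, and the shared-cache hit rate collapses.

    // Hypothetical read path: heavy joins whose result differs per user.
    const cache = new Map<string, unknown>();

    // Stand-in for the expensive, join-heavy database work.
    async function runHeavyJoins(matchId: string, userId: string): Promise<unknown> {
      return { matchId, userId /* ...expensive aggregates... */ };
    }

    async function getStats(matchId: string, userId: string): Promise<unknown> {
      // A shared page could be cached once per match: `stats:${matchId}`.
      // Per-user data forces the user into the key, so entries are rarely reused.
      const key = `stats:${matchId}:${userId}`;
      const hit = cache.get(key);
      if (hit !== undefined) return hit;
      const value = await runHeavyJoins(matchId, userId);
      cache.set(key, value);
      return value;
    }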
Development/product culture. Electron should have a fixed time quantum, for example 6 weeks, and keep the performance:feature ratio at 4:1. Otherwise the flood of ongoing feature development will also produce "Bug as a Denial of Service", and performance improvements yield a negative net result.
tldr; just pay tech debt ASAP.
What a lot of people don't appreciate is that a GUI platform is essentially a marketplace like eBay.
There are "sellers" and "buyers". The sellers are the developers. The buyers are the users. Real money is often involved in the transaction.
Where do sellers go? Where there are the most buyers!
Where do buyers go? Where there are the most sellers!
Hence, marketplaces with no physical constraints are natural monopolies. This includes auction houses (eBay) and GUI platforms.
Previously, Windows had an over 90% market share, and it was glorious as a developer. You write your code once, and it works "everywhere", because everywhere is just Windows. The 9% Mac users are out of luck, and nobody cared about the mish-mash of 1% other.
Now the web is the new 90%. Or more specifically, Chromium is, with over 80% and climbing.
What saddens me is that Microsoft threw what they had away, and continue to do so. I've never seen a company more keen to undermine their own market position.
Microsoft's big money probably isn't Windows anymore, but Azure. Given that they make so much more money off their cloud offerings, I suspect that they actively want to kill off personal computing entirely. If so, they're doing a good job with recent Windows releases.
https://static.seekingalpha.com/uploads/2019/7/31/49654246-1...
With the move to subscription services like Office 365, and the startling number of incompetents that think VDI terminals are acceptable workstations for users, I could definitely see Windows desktop disappearing in the next decade or so, the OS merely being a bootloader for a browser.
Everything will be either a Chromebook or EdgeBook, basically.
Mozilla.
This is a bit of an aside - but can anyone explain why Electron apps use so much memory/system resources? Is this a fundamental thing or accidental thing and could be remedied rather easily? I wonder why there hasn't been significant effort directed toward this, as this is the number 1 issue of desktop applications today.
I have a few guesses/questions:
- What kind of data is causing the GBs of memory usage? Is it JS objects, cached content, intermediate rendering data?
- Chrome is architected as a multi-process app, in order to isolate and share resources between websites. Since Electron runs as a single app, it doesn't need this overhead, how hard would it be to turn it off?
- Would it gain us a lot if we removed v8/minimized JS on the 'frontend' and just pushed all the html from the 'backend'?
- Is using node as the backend the cause of excess resource usage, would using something else be better?
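One way to start answering these questions empirically: Electron exposes per-process metrics, so you can see which process type (browser, renderer, GPU, utility) is actually holding the memory. A rough sketch using Electron's app.getAppMetrics():

    // Log working-set memory per Electron process every 5 seconds.
    import { app } from "electron";

    app.whenReady().then(() => {
      setInterval(() => {
        for (const m of app.getAppMetrics()) {
          // m.type is e.g. "Browser", "Tab" (a renderer), "GPU", "Utility".
          const mb = (m.memory.workingSetSize / 1024).toFixed(1); // KB -> MB
          console.log(`${m.type} pid=${m.pid} workingSet=${mb} MB`);
        }
      }, 5000);
    });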
As I understand it, the chrome team have tended towards using more memory as a way of improving performance (i.e. trading space for time), with the feeling being that most systems have quite a lot of memory (relatively speaking) so why not use it.
This isn't a one off thing so would be challenging to change. It's more a case of many small decisions about increasing memory usage that in aggregate add up.
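You can push back on that space-for-time tradeoff a little from the app side. A sketch using Electron's command-line switch API to pass V8 flags (the 256 MB figure is purely illustrative, not a recommendation):

    // Cap V8's old-generation heap; must run before the app's "ready" event.
    import { app } from "electron";

    app.commandLine.appendSwitch("js-flags", "--max-old-space-size=256");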
I might be wrong here so someone correct me but I believe it's because every time you open a new electron app, you are opening another instance of Chrome. So if you have Chrome, VSCode and Figma running, that's 3 instances of Chrome each eating their respective 2-3GB of memory and associated CPU cycles.
You are not wrong, but there are multiple billion dollar companies that build their products on top of Electron. Presumably they could afford to hire engineers or Google could step in and optimize Chrome's renderer for Electron's use cases.
I may be wrong, but as things are now, Electron more or less packages an unmodified, headless Chromium.
If desktops could make a platform where I can write my app in one language, host it on my own server and get global distribution running in a sandboxed environment then I'd consider it.
The web distribution model is just so much nicer for users and devs. No app stores, no install/uninstall process, no access to system internals.
I can deploy an app today and just send a link to my friends for them to use it. It takes a day just to put together an app store submission with all the necessary screenshots and configuration, not to mention the lengthy review process which could then deny me permission to launch.
The difficulty and uncertainty means that many MVPs launch web first where there's less risk, and more reach, for the same amount of effort. Only after a successful web launch are native apps developed.
Sure there are exceptions if the app requires features not yet available on the web (persistence, actually good multithreading, low level access) but the feature gap is closing slowly.
> If desktops could make a platform where I can write my app in one language, host it on my own server and get global distribution running in a sandboxed environment then I'd consider it.
It was called Java and ironically people hated it because it was sluggish and didn't use native widgets. Now we have webapps which reinvent the GUI on almost every site and have built-in 100ms+ latencies. Go figure.
> The web distribution model is just so much nicer for users and devs. No app stores, no install/uninstall process, no access to system internals.
No ability to use it at all in an area with shitty network, no consistency, no integration, no respect for OS themes, no ability to continue using old versions if something is wrong with the new one...
> I can deploy an app today and just send a link to my friends for them to use it. It takes a day just to put together an app store submission with all the necessary screenshots and configuration, not to mention the lengthy review process which could then deny me permission to launch.
On Windows you could put your binary and libraries in a zip file and distribute to users in any variety of ways, including via web hosting. Macs could do the same thing with Application Bundles or their single-file applications before that.
> It was called Java and ironically people hated it because it was sluggish and didn't use native widgets.
First, Java is in use on plenty of systems. It has all but disappeared as a serious client side system - but not because of performance. Java failed more because it failed to meet market expectations, especially in ease of use. Other solutions were more agile in creating better platforms on the client side.
> Now we have webapps which reinvent the GUI on almost every site
This is not an impediment to adoption. This is clearly seen in such successful historical software as Winamp, which intentionally abandoned fitting the mold. X has not historically had much consistency at all for GUIs. Xlib, Motif, GTK, Qt/KDE - at any given time there have been many options to pick from.
You do lose system-consistency, and that can be a negative. However, you gain platform consistency in your application appearing the same on pretty much every device. It's not a total negative.
> and have built in 100ms+ latencies. Go figure.
Webapps do not need to hit the backend for every little thing. That latency requirement really depends on how much data is appropriate to send to the client and the development philosophy of the developer. For personally owned data it can certainly live on the client and have a replication stream to the cloud system and have no built-in network latency. This is not a hard requirement for a webapp.
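A minimal sketch of that pattern (the /sync endpoint is hypothetical): writes land in local storage immediately, and a background queue replicates them to the server whenever the network cooperates, so the UI never waits on a round trip.

    // Local-first writes with an opportunistic replication stream.
    type Change = { key: string; value: unknown; ts: number };
    const pending: Change[] = [];

    function saveLocal(key: string, value: unknown): void {
      localStorage.setItem(key, JSON.stringify(value)); // instant, no network wait
      pending.push({ key, value, ts: Date.now() });
    }

    async function flush(): Promise<void> {
      if (pending.length === 0) return;
      try {
        await fetch("/sync", { method: "POST", body: JSON.stringify(pending) });
        pending.length = 0; // replicated; clear the queue
      } catch {
        // Offline or server down: keep the queue and retry on the next tick.
      }
    }
    setInterval(flush, 10_000);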
> No ability to use it at all in an area with shitty network, no consistency, no integration, no respect for OS themes, no ability to continue using old versions if something is wrong with the new one...
Webapps (PWAs, Electron, Cordova, and websites for that matter) can all work offline if they are designed that way. Just like native software can be built to require the mothership to do anything and behave just like a traditional website. This issue is more mindset than fundamental to the technology.
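For the offline point, the standard mechanism is a service worker. A minimal cache-first sketch (the shell file list is hypothetical): precache the app shell at install time, then answer requests from the cache before touching the network.

    // Cache-first service worker: the app keeps loading with no connectivity.
    const SHELL = ["/", "/app.js", "/app.css"]; // hypothetical app shell

    self.addEventListener("install", (e: any) => {
      e.waitUntil(caches.open("shell-v1").then((c) => c.addAll(SHELL)));
    });

    self.addEventListener("fetch", (e: any) => {
      e.respondWith(
        caches.match(e.request).then((hit) => hit ?? fetch(e.request))
      );
    });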
You hit the OS integration side pretty hard. The software solves a problem; its success depends on that. All this integration is simply a list of "nice to haves". The market has shown time and time again that spending too much on your hobby features rather than what the customers actually want is not a long-term winning development strategy.
Look at it from the user side. If some slightly slower, loud-looking software solves my problem 200% better than my current native solution - I'm switching.
Would you really do that though? Or would you run a random exe from a random zip hosted from a random website?
> No app stores, no install/uninstall process, no access to system internals
The problem with app stores is lock-in, but there's nothing wrong with repos in general, e.g. Linux package repositories.
What we really are missing is good sandboxes. We got there in browser/JS-land, but it was hard-won progress; the constant attacks on web architecture, and the value in overcoming them for e.g. online banking and purchases, incentivised the effort to provide a solution.
If we had properly isolated execution environments, with whatever permissions model that implies, then that could be the basis of proper remote/auth-less execution; but even VMs/containers can't meet that guarantee.
Then we wouldn't need so much review process around apps, or locked-down stores; installs could be pre-emptive and automatic, and system calls managed.
I suppose that rather than make just a single one-off language, it would make more sense as an abstract/virtual machine on which bytecode is executed, as the basis for other, proper languages (a la the JVM).
A lot of players entered this space and some are actually profitable companies, but none are as popular as Electron. Aside from JavaScript and Web APIs, Electron is also free as in free beer.
Qt is truly cross-platform, supporting Mac, Windows, Android, iOS, and even LG TV webOS. VLC uses Qt and is rock solid on all platforms it runs on. It is very expensive for commercial development.
Xamarin runs on all major operating systems, including mobile, and they were profitable. C# is a much better language than C++ or JavaScript. They became free to use once Microsoft acquired them.
Web still has by far the best developer experience + cross platform stability, not to mention the lowest barrier to entry. I would be really interested to see what happens with Tauri[1] as that looks like a more promising alternative to Electron.
I do want to see a return to native apps but there are no worthwhile incentives to do so outside of "our customers demanded a native app."
[1] https://tauri.studio/en/
Isn't customer demand the main incentive for any developer?
> I do want to see a return to native apps but there are no worthwhile incentives to do so outside of "our customers demanded a native app."
I work in the public sector of Denmark, which has always had native Windows apps for most of our systems. The past few years of suppliers moving to web based solutions have been such a blessing.
Out of the around 300 IT systems we operate, the ones running in a browser (built on a web front-end of some sort, Electron included) are by far the favoured experience among our 10,000 users. We’ve run a benchmark on our major systems for the past decade, and the only ones to break an 8/10 rating among 500+ users are based on Angular, React, Electron or Vue.
One of our old documenting systems has a Windows application and is also moving to a web client. The old application scored 1.2/10; the web client scored 8.5/10. Covid had a massive impact on this, as the native client worked horribly over a VPN, but it never scored a rating over 5/10.
You can certainly make the argument that Danish development houses should get better at building native applications, and that’s a fair point. Reality is that they aren’t, however, so the fact that these same developers are also building the new and well-received web-based UIs is a good way to show that the impact goes far beyond developer experience.
I think Microsoft is the only company that has continually scored well on native applications that see massive usage, because the Office package has always been popular. But isn’t much of the modern Office UI built with React?
Unless of course you're talking about pro apps like AutoCAD or Photoshop; but they have never been Electron and are unlikely ever to be.
It doesn't help that for the major consumer platform (Windows), Microsoft can't make their mind up about how to replace Win32 or WinForms. It seems like every other week there is some new framework announced, so even if you have a need to make a native app, choosing how to do it has become near impossible. Is it .NET Maui? Should I try Electronize? or take a huge punt on Avalonia? It's easier just to wrap up a web app and hand it over. Sure, that is kicking the can down the road a bit, but until Microsoft can manage to provide something other than vague, half finished plans we're all in limbo.
For some platforms the official way to do things now is to make it a web app. Consider Microsoft Word Add Ins. The official way to do them now is with JS, React or Angular.
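For a sense of what that looks like in practice, here is a rough sketch of a Word Add-in handler using Office.js (the button id is hypothetical; the task-pane UI around it would be ordinary HTML, React or Angular):

    // A web-based Word Add-in: plain DOM code driving the document.
    Office.onReady(() => {
      const button = document.getElementById("insert") as HTMLButtonElement;
      button.onclick = () =>
        Word.run(async (context) => {
          // Queue an insertion, then sync the batch to the running Word host.
          context.document.body.insertParagraph(
            "Hello from a web-based add-in",
            Word.InsertLocation.end
          );
          await context.sync();
        });
    });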
I have seen this indecision act as a death knell on a few projects. You really just have to pick something, especially if you stop developing the soon-to-be-deprecated tooling.
If it takes you two years to pick a new method, then your whole developer community has to sit on their hands. They can't invest in the old platform, they can't invest in novel tooling of their own, and they can't build with the new thing because you haven't chosen it yet.
How can you say that customers are demanding native apps when only a minuscule number of users can tell if an app is native or not? Most are not even familiar with the term native; desktop installed app (of all types) vs browser app at best.
Every native app I have ever used has been many times snappier than its Electron/Web equivalent. Think on why customers demand native apps.
If Electron is what it takes then I’ll use an Electron app and not care.
> Web still has by far the best developer experience + cross platform stability, not to mention the lowest barrier to entry. I would be really interested to see what happens with Tauri[1] as that looks like a more promising alternative to Electron.
Compared to what?
C code I wrote in the 90s in middle school still compiles. JS code I wrote two weeks ago probably doesn't because the mountain of dependencies has shifted somewhere.
A command-line C application is "simple". But if you want to give anybody in the world with a browser and internet access to your centrally managed (and updated) app, it gets more complicated. It gets complicated because you have to program both the client and the server and make sure it works on multiple browsers, including on cell phones.
So even though basic command-line C-apps are great in terms of maintainability they also typically do less than modern web-apps.
JavaScript is the lingua franca of the web. Therefore Node.js is a great platform for web apps. There is also Deno, which allows you to compile your JavaScript into a standalone executable. For Node.js there is something similar in the form of Vercel's pkg.
There are benefits to using the same language for both the front-end and back-end.
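A small sketch of that benefit (names illustrative): a single validation module that both the browser bundle and the Node server import, so the rules cannot drift apart.

    // shared/validate.ts -- imported by both front-end and back-end code.
    export interface SignupForm {
      email: string;
      password: string;
    }

    export function validateSignup(form: SignupForm): string[] {
      const errors: string[] = [];
      if (!/^[^@\s]+@[^@\s]+$/.test(form.email)) errors.push("invalid email");
      if (form.password.length < 8) errors.push("password too short");
      return errors;
    }
    // The browser calls validateSignup() before submitting for instant feedback;
    // the server calls the same function on the request body as the real check.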
Even for 2017, this is extremely late thinking. Back in '10 it was clear the consumer desktop was dying a rapid death. Now professional and business software are going the same way, and it is no wonder, because nobody respects desktop development anymore. The advent of tools like Cosmopolitan libc looks too late, and too "hard" for new developers to catch on to, so far...
Re: cosmopolitan libc: As cool as it is, I don't think the barrier to widespread adoption of C as a greenfield language for application development was ever that things didn't run cross-platform, it was, you know, the footguns and complexity of building anything significant in C.
I don't know how old you are, but at 57 and having been "computing" for 35 years, I'd say that this is wrong, on two levels.
1. C has been the greenfield language for applications for decades.
2. The major obstacle is precisely cross-platform development; without a fairly thick layer to abstract away the OS API/platform, it's very difficult.
In addition to those two, there's the detail that lots of development projects want in-browser capabilities, and that essentially rules out compiled languages (ignoring the developing options for webasm, even though they still do not take it as far as native C/C++).
Cosmopolitan libc is fun as a toy but not something you can ship software on. Its design is just fundamentally disconnected from how operating systems work today.
Why would anyone use Hyper? It's not like there aren't better terminal emulators out there for each platform. Using Electron to emulate a terminal is plain insanity to me, you are trading off performance and system resources for the js hype.
It feels like at some point within the last few years tech became a place where you are identified by the tools you choose (ok, vim and emacs have been around). Not just that, but you sell your project based on those tools. So you can have that Revolt app that showed up yesterday, which for some ungodly reason disabled text selection on their website (user hostile as fuck), but it was getting advertised because it was Rust. The top selling point was theming via Electron. They have to sell the project before it's actually ready.
Blog posts like this are very thinly disguised advertisements for people like this guy who tweet all day about what tools they use. Which, he's successful, and more people are aware about this stuff than before. It used to be bad form to quote yourself, but tweet yourself? That's content.
I blame a big part of it on YC and the VC ethos of big big big instead of better better better.
Same reason Rust people use a package manager made in Rust, JS people use a package manager made in JS and Java people use a package manager made in Java, because it's close to the context you're already in and if you need to extend/modify/remove something for your own use/for the community, you already know how to do it.
I myself would never use Hyper, but I can see why people would. If there was a comfortable terminal in/for/with Clojure, I'd at least try it.