Readit News
cladopa · 9 days ago
I would say that the real reason is that "it works". As simple as that.

The first thing you need when you make something new is to make it work; it is much better to have something that works badly than something that does not work at all.

Take for example the Newcomen engine, with an abysmal efficiency of half a percent. You needed 90 times more fuel than an engine today, so it could only be used in the mines where the fuel was.

It worked badly, but it worked. Later came efficiency.

The same happened with locomotives. So bad efficiency at first, but it changed the world.

The first thing the AI people had to do was make it work on all OSes. Yeah, it works badly, but it works.

We downloaded a Clojure editor made in Java to test whether we were going to deploy it in our company. It gave us some obscure Java error on different OS configurations, like Linux or Mac. We discarded it. It did not work.

We have engineers and we could fix those issues, but it is not worth it. The people who made this software do not understand basic things.

We have Claude working in hundreds of computers with different OSes. It just works.

fxtentacle · 7 days ago
Related to your answer, I would say the reason is that it works well enough for now and can always be patched later. Back in the good old days we remember, software was frozen on a gold master disc, which was then tested for weeks or months before its public release. The fact that bugs could not easily be fixed in the field meant they would incur support costs or lost revenue from people returning their purchased software box.

In my opinion that is the true reason why old native software was developed to such a high standard. But once online stores and shrink-wrap agreements made it impossible to return buggy software, the financial incentives shifted towards shipping a partially broken product.

Who cares about pleasing with good performance when you can instead keep customers hostage?

voidnap · 8 days ago
Your engine examples are less about "it works" and more about the fact that they did a thing we couldn't do before, and did it better than the previous thing. Neither of those is especially true of React.

React was an instant hit because it had the Facebook brand behind it and everyone was tired of Angular. But ultimately, React has worse outcomes for developers, users, and businesses. On the web, React websites are bloated. They run slower, their JavaScript payloads are larger, and they take longer to load.

Your suggestion -- that it works first and gets more efficient later -- would make sense if we lived in a world where React had moved off the virtual DOM model. A virtual DOM is a fine first attempt or prototype, but we can do better. We know how. Projects like SolidJS do do better. React has not caught up, yet it is still very popular. This whole "It worked badly, but it worked. Later came efficiency" thing is complete nonsense.
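The difference can be sketched with a toy signal/effect pair, the core of Solid's fine-grained model: a signal notifies exactly the computations that read it, so there is no tree-wide re-render and no diff. The names `createSignal`/`createEffect` mirror Solid's API, but this is an illustrative toy, not the real library:

```typescript
// Minimal fine-grained reactivity sketch (SolidJS-style, toy version).
type Effect = () => void;
let currentEffect: Effect | null = null;

function createSignal<T>(value: T): [() => T, (v: T) => void] {
  const subscribers = new Set<Effect>();
  const read = () => {
    // Track whichever effect is currently running as a dependent.
    if (currentEffect) subscribers.add(currentEffect);
    return value;
  };
  const write = (v: T) => {
    value = v;
    // Re-run only the effects that actually read this signal:
    // no virtual DOM, no tree diffing.
    subscribers.forEach((fn) => fn());
  };
  return [read, write];
}

function createEffect(fn: Effect): void {
  currentEffect = fn;
  fn(); // first run registers its dependencies
  currentEffect = null;
}

// Usage: only the effect that reads `count` re-runs on each update.
const [count, setCount] = createSignal(0);
let renders = 0;
createEffect(() => { renders++; count(); });
setCount(1);
setCount(2);
console.log(renders); // 3: one initial run plus one per write
```

In React, by contrast, a state update re-renders the component and diffs the result; here the update goes straight to the subscribers of the one signal that changed.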

And there are loads of businesses that started off with an Angular app, started to migrate to React, then started to migrate to React hooks, and are now switching to whatever the latest methodology is. Time and again you find these products, endlessly migrating to the new thing, most of them never finishing one migration before beginning the next. These products end up as a chimera of four different frameworks held together with pain.

This isn't a good outcome for businesses or for users, and it's not a good developer experience. React is stagnant, surviving off being the default and the status quo, supported by tech companies that have long since stopped innovating and subsist on rent-seeking. Developers choose React because nobody was ever fired for buying IBM, because they can look busy at their job, and because they buy a new phone and laptop every year with the latest hardware that can compensate for the deteriorating software they ship.

Ygg2 · 8 days ago
> React was an instant hit because it had the facebook brand behind it and everyone was tired of angular.

Ok, but why was everyone tired of Angular? Sure, web frameworks are examples of Fad Driven Development taken to the extreme, but Angular.js was pure unmitigated ARSE.

Made ten bindings on a page? That's 100 cross-connections. Made 100 two-way bindings? That's 10,000 connections.

Click one way through fields A and B, then start typing: they show the same data. Click through fields A and C: now those are bound, but B isn't. Click B then C: congrats, all three of your bindings suddenly start filling in.

It was a combination of shitty performance scaling and unintuitive Angular data flow that primed everyone for React to take over.
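The scaling complaint can be made concrete with a toy model of AngularJS-style dirty checking (a sketch of the idea, not the real `$digest` implementation): every digest cycle re-checks all watchers until none report a change, so N bindings that cascade into each other cost on the order of N passes of N checks each.

```typescript
// Toy dirty-checking loop. Each binding registers a "watcher"; a digest
// re-runs ALL watchers until a full pass reports no change, so cascading
// bindings cost O(N^2) comparisons rather than O(N).
type Watcher = { get: () => unknown; last: unknown; onChange: () => void };

class ToyScope {
  private watchers: Watcher[] = [];
  checks = 0; // count comparisons to expose the quadratic blow-up

  watch(get: () => unknown, onChange: () => void): void {
    this.watchers.push({ get, last: get(), onChange });
  }

  digest(): void {
    let dirty = true;
    while (dirty) {
      dirty = false;
      for (const w of this.watchers) {
        this.checks++;
        const now = w.get();
        if (now !== w.last) {
          w.last = now;
          w.onChange(); // may mutate state watched by another watcher
          dirty = true; // forces another full pass over every watcher
        }
      }
    }
  }
}

// 10 values chained so each change cascades into the previous watcher,
// forcing a fresh full pass per propagation step.
const model: number[] = Array(10).fill(0);
const scope = new ToyScope();
for (let i = 0; i < 10; i++) {
  scope.watch(
    () => model[i],
    () => { if (i > 0) model[i - 1] += 1; },
  );
}

model[9] = 1;
scope.digest();
console.log(scope.checks); // 110: 11 full passes over 10 watchers
```

Ten watchers cost 110 comparisons here; the same cascade with 100 watchers would cost about 10,100, which is the quadratic wall the parent comment describes.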

JKCalhoun · 8 days ago
I would prefer the old-school approach of wrapping the "Claude bits" in a per-platform framework (or, if you really can't be bothered, a platform-specific command-line tool that could then be called natively from the program).

The UI wrapper then could be Electron, or something a little more platform-native you hand off to some junior engineers.

patrick451 · 8 days ago
> The first thing you need when you make something new is making it work, it is much better that it works badly than having something not working at all.

It is better for something to not exist than for a shitty version to exist. Software doesn't get better over time, it gets worse. If you make a bad, suboptimal choice today chances are that solution becomes permanent. It's telling that all of your examples of increasing efficiency are not software.

If you aren't going to do it well, don't do it.

corroclaro · 8 days ago
An obscure Java error? A Clojure editor made in Java?

Are you sure it wasn't just an unfamiliarity with Java errors in general?

Clojure popped out of the _senior_ Java camp. It often lives within that mindshare.

observationist · 9 days ago
They could have done better. They chose the path of least resistance, putting in the least amount of effort, spending the least amount of resources into accomplishing a task.

There's nothing "good" about Electron. Hell, there are even easier ways of getting high-performance cross-platform software out there. Electron was used because it's the default, de facto choice; nobody even bothered researching or testing whether it was the right choice, or even a good one.

"It just works". A rabid raccoon mashing its face on a keyboard could plausibly produce a shippable electron app. Vibe-bandit development. (This is not a selling point.) People claiming to be software developers should aim to do better.

Gooblebrai · 8 days ago
> They could have done better. They chose the path of least resistance, putting in the least amount of effort, spending the least amount of resources into accomplishing a task

You might as well tell reality to do better: the reality of physics (water flows downhill, electricity moves through the best conductor, systems settle where the least energy is required) and the reality of business (companies naturally move toward solutions that cost less time, less money, and less effort).

I personally think that some battles require playing within the rules of the game, not wishing for new rules. Make something that requires less effort and fewer resources than Electron but is good enough, and people will be more likely to use it.

3oil3 · 8 days ago
I agree with you; I even think it's shameful. When I saw it was Electron, I sighed so long I almost choked. Can't even cmd+g nor shift+cmd+f to search; the context menu has nothing. Can't even swipe, no gestures, etc. Electron is better than nothing, and I'm grateful, but it tastes bitter. As for performance: if I remember correctly, somebody here once asked "what's the point of 512GB RAM on the Mac Studio?" And someone replied "so you can run two Electron apps".

Deleted Comment

pjmlp · 9 days ago
Nah, some developers are lazy, that is all; let's not beat around the bush on that one.

Most of those Electron folks would not manage to even write C applications on an Amiga, use Delphi, VB, or whatever.

Educated on Node, they do not know anything else.

Even doing a TUI seems like a revelation to current generations, something quite mundane and common in the 1980s text-based computing of Turbo Vision, Clipper, and curses.

hedgehog · 9 days ago
Let's assume for the moment the developers are doing about the best they can for the goals they're given. The slowness is a product decision, not an engineering problem. They are swimming in cash and could write a check to solve the problem if they cared. Evidence is they have other priorities.
pjmlp · 8 days ago
Caring, now that is an interesting word. It is clear they don't care about their customers' hardware or user experience, only about themselves.
zadikian · 9 days ago
At least it seems like a lot more apps are cross-platform than before. I wouldn't call the native devs lazy for not making a Mac version of their Windows app.
pjmlp · 9 days ago
Agreed, yet back in the day we even managed to do that with applications being written in Assembly, in some cases.

Uphill both ways, should be easy for a company doing C compilers with LLMs.

reactordev · 9 days ago
They wrote a React TUI renderer, that’s what they did. Shame…

I understand why, but there is such beauty in the simplicity of ansi.

pjmlp · 9 days ago
The company that is supposedly selling the idea that AI can do everything and replace us.
port11 · 8 days ago
I think your view is lazy. The article explains some of the actual reasons, none of which have to do with laziness.

I’ve built for Electron and did a course on Swift for macOS apps. Not out of laziness, but I don’t think I’d ever build native for Macs. And the Windows folks have been complaining about the native APIs for a long time.

Now, native on mobile, that’s something else. I’ve been stuck on RN/Expo because that’s what the business’s resources allowed for, but native Kotlin is much more enjoyable (AFAIK). Swift… dunno, still icky.

JKCalhoun · 8 days ago
FTA:

"Looks could be good, but they also can be bad, and then you are stuck with platform-consistent, but generally bad UI (Liquid Glass ahem)."

Since the discussion was specifically about platform consistency, it is odd that the author would decide that personal taste might take priority over platform consistency.

"It changes too often, too: the app you made today will look out of place next year, when Apple decides to change look and feel yet again."

Seems to be arguing the exact opposite? If you adopt a native API for controls, windows, etc., your app will change next year to look completely in place (perhaps for better or worse according to the author).

Go DIY on your UI, and your app might still, in 2026, have brushed aluminum and "lickable" buttons.

sehugg · 9 days ago
Realize, though, that just grabbing a frame buffer is not a thing anymore. To render graphics you need GLES support through something like ANGLE, vectors and fonts via Skia, Unicode handling, etc. A web browser has those things. Any static binary bundling those things is also going to be pretty large.

And JavaScript is very good at backwards compatibility when you remove the churn of frameworks (unfortunately Electron doesn't guarantee compatibility quite as far back)

pjmlp · 9 days ago
And CPUs are only sand powered by electricity.

I do realise the need for abstractions and they do exist, provided there is actually the interest to learn them.

senadir · 8 days ago
Do you think that writing Delphi, a language no one is buying, makes you hardworking?
pjmlp · 8 days ago
If no one would be buying Delphi, Embarcadero would not be in business, yet here they are.

https://www.embarcadero.com/products/delphi

One of the related conferences just took place last October,

https://entwickler-konferenz.de/en/

sharts · 8 days ago
Facts.

The sad reality is that everyone wants fancy-looking pages/apps as quickly and easily as possible.

And now the web (and increasingly the desktop) is littered with the lowest common denominator of platforms, with all sorts of crazy optimizations that still can’t be as snappy as a Windows 95 app on a 200 MHz / 16 MB desktop.

At this point we may as well just use electron and nodejs for fighter jets and missile defense systems. Surely it’s fast enough for that, too.

Ygg2 · 8 days ago
What's the alternative for cross-platform GUIs? Qt? Swing? Avalonia?

Electron is bad, but it's the least bad of all the cross-platform GUIs.

> Most of those Electron folks would not manage to even write C applications on an Amiga, use Delphi, VB, or whatever.

That's true of most people on the planet. No one has access to them.

tracerbulletx · 8 days ago
Go replace Discord, Slack, and VSCode natively then. If everyone is crying out for the missing, obviously better alternative, then that's a great opportunity.
eviks · 8 days ago
What if pursuing that opportunity exceeds the risk appetite / budget of the person you're suggesting this to, by about a billion, and even then has the exact same potential to be corrupted by the organizational dynamics that resulted in the current generation of sloppy designs?
Rebelgecko · 8 days ago
I thought Discord bans people for using alternate clients?
pjmlp · 8 days ago
I have already.

Don't use Discord, never plan to. Even if I cared, I would only use the browser as it should be.

Slack only lives on the browser, as it should.

VSCode is only used when there are no SDKs for my editors or IDEs, and I am forced into VSCode.

However, contrary to the usual Electron crap, VSCode at least has tons of external processes written in a mix of C++, Rust, and C#, and parts of it have moved to WebGL; it is not the traditional Electron crap application.

nsonha · 8 days ago
You mean writing simplistic forms so full of side effects they need to be written in a "native" framework? Sounds like good engineering.
dist-epoch · 9 days ago
I've programmed in every native Windows GUI starting with MFC. Used Delphi too. I've even created MS-DOS TUI apps using Turbo Vision.

Compared to the web stack and Electron, native GUI APIs are complete shit, both for the programmer and for the user.

Reactive UI (in the sense of React, Vue, ...), the greatest innovation in UI programming ever, was made popular by web-people.

Unless you write some ultra-high-performance app like a browser, a CAD app, or a music production app, it doesn't make any sense to use native GUIs. It takes longer to program, and the result is uglier, with fewer features. And for those ultra-high-performance apps you don't use the native UI toolkit anyway; you program your own widgets, since the native ones are dog shit and slow.

mathw · 8 days ago
Reactive UIs may have been made popular on the web, where they're an absolute nightmare, but native code does them better still.

Best time I ever had in a job was writing WPF applications in C# using ReactiveUI. Once we really understood the underlying model we were plugging stuff together so easily. It is a really good model, but I can't see how React is a good example of it.

Of course I had lots to complain about then, WPF had bugs, C# has a number of big problems, but it was, overall, very nice.

pjmlp · 8 days ago
Interesting, I have the exact opposite opinion, and I also have to deal with Web since 1999.

React is no innovation at all; some folks just got a bit enthusiastic about the Haskell reactive-programming papers.

VorpalWay · 9 days ago
Don't forget about all the embedded UIs (kiosks, appliances, car infotainment, industrial equipment, ...), those computers are weaker and it makes a ton of sense to use native toolkits there.

They tried to replace our Qt GUI at work in this space with a React-based one; the new UI ran like utter shit comparatively, even after a lot of effort was spent on optimisation.

Also, most of the world doesn't use high-end developer laptops. Most of the world is the developing world, where phone apps on low-end Android phones reign supreme.

wolvesechoes · 8 days ago
Just serialize and deserialize your JSON bro, so much faster and easier
Iolaum · 9 days ago
In an age where LLM's start writing applications why would this matter?
avbanks · 9 days ago
I agree; this is clearly an indictment of LLMs. If LLMs and agents were capable, they'd 100% write it natively, but they realize the current limitations.
pjmlp · 9 days ago
Yet another reason to have those LLMs create native applications, should be easy apparently.
anon7000 · 8 days ago
The article concludes with

> The real problem is a lack of care. And the slop; you can build it with any stack.

so you agree.

__alexs · 9 days ago
This is such a lazy take.

With Electron it is very easy to deliver a good-quality app for everyone but absolute power users.

Yes, it's horribly slow, but it enables rapid experimentation, and it's easy to deliver the wide range of UI integrations you are likely to want in a chat-esque experience.

dangus · 9 days ago
It’s not even horribly slow. It works fine. It’s just a chat program. It’s the right trade off for the job.

Doing more work for no reason is stupid even if you have the money of a small nation.

The inevitable differences between platforms that you get with all-native everything aren’t a good user experience, either. Now you need to duplicate your documentation and support pages and have support staff available to address every platform. And what’s the payoff? Saving 80MB of RAM? Gaining milliseconds of latency that Joe Business will never notice as he’s hunting and pecking his way through the interface?

I thought we were done with Electron hate articles. It’s so 2018 to complain about it. It’s like talking about millennials and their skinny jeans. Yawn.

paulddraper · 8 days ago
> Even doing a TUI seems like a revelation to current generations

Ironic statement, given that Claude Code is exactly that.

So maybe Claude could have used another cross-platform desktop development framework, but Electron was better/faster.

pjmlp · 8 days ago
ncurses then
hresvelgr · 8 days ago
Something that isn't touched on as much is that in the time between old-school native apps and Electron apps, design systems and brand languages have become much more prevalent, and implementing native UI often means compromising design and brand elements. Most applications used to look more or less the same; nowadays two apps on the same computer can look completely different. No one wants to compromise on design.

This mentality creates a worse experience for end users because every application has its own conventions, and no one wants to be dictated to about what good UX is. The best UX in every single instance I've encountered is consistency. Sure, some old UIs were obtuse (90% weren't), but they were obtuse in predictable ways that someone could reasonably navigate. The argument here is between platform consistency and application consistency: should all apps on the platform look the same, or should the app look the same on all platforms?

edit: grammar

pavlov · 8 days ago
If I look at the Notion and Linear desktop apps, they’re essentially identical in styling and design. They’re often considered the best of today’s web/Electron productivity apps, and they have converged on a style that’s basically what Apple had five years ago.

IMO that’s a fairly strong argument that the branding was always unnecessary, and apps would have been better off built from a common set of UI components following uniform human interface guidelines.

lelandfe · 8 days ago
I do notice those things occupying your "essentially," and your "basically." The success of worse designed stuff is a hard thing to argue against, though.
zigzag312 · 7 days ago
> The best UX in every single instance I've encountered is consistency.

While I agree that consistency is hugely important, I have also seen a lot of cases where it made the UX worse. The reason is that, unfortunately, UX isn't so simple. There isn't a single UX rule that is always true. UX design rules (best practices, guidelines, or principles) are a good starting point, but in a lot of situations multiple rules conflict with each other. UI/UX design is about tradeoffs most of the time. A good designer will know when breaking a specific rule will actually improve the UX.

Consistency is very important, but sometimes a custom UI element will be the best tool for the job. For example, imagine UI for seat selection in a movie theater ticket booking app. A consistent design would mean using standard controls users are already familiar with, but no standard control will provide high quality UX in this situation (not without heavy modifications).

But I still agree with you that a lot of bad UX is due to inconsistency. There needs to be a good reason each time consistency is broken, and often it is broken for the wrong reasons.

rixed · 8 days ago

> No one wants to compromise on design.
I, the user, would totally want that.

copperx · 8 days ago
The user is at the bottom of the stakeholder list.
ttd · 9 days ago
Some random thoughts, since I've had a similar train of thought for a while now.

On one hand I also lament the amount of hardware-potential wastage that occurs with deep stacks of abstractions. On the other hand, I've evolved my perspective into feeling that the medium doesn't really matter as much as the result... and most software is about achieving a result. I still take personal joy in writing what I think is well-crafted code, and I also accept that that may become more niche as time goes on.

To me this shift from software-as-craft to software-as-bulk-product has some similarities to the "pets vs cattle" mindset change when thinking about server / process orchestration and provisioning.

Then there's also the dismay at JS becoming even more entrenched as the lingua franca. There's every possibility that in a software-as-bulk-product world, LLM-driven development could land on a safer language due to efficiency gains from e.g. static type checking. Economically, I wonder if the adoption of a different lingua franca could come about by way of increased LLM development speed / throughput.

usrnm · 9 days ago
> LLM-driven development could land on a safer language

Why does an LLM need to produce human readable code at all? Especially in a language optimized around preventing humans from making human mistakes. For now, sure, we're in the transitional period, but in the long run? Why?

jerf · 9 days ago
From my post at https://jerf.org/iri/post/2026/what_value_code_in_ai_era/ , in a footnote:

"It has been lost in AI money-grabbing frenzy but a few years ago we were talking a lot about AIs being “legible”, that they could explain their actions in human-comprehensible terms. “Running code we can examine” is the highest grade of legibility any AI system has produced to date. We should not give that away.

"We will, of course. The Number Must Go Up. We aren’t very good at this sort of thinking.

"But we shouldn’t."

mjr00 · 9 days ago
Because the traits that make code easy for LLMs to work on are the same that make it ideal for humans: predictable patterns, clearly named functions and variables, one canonical way to accomplish a task, logical separation of concerns, clear separation of layers of abstraction, etc. Ultimately human readability costs very little.
mandevil · 9 days ago
I can't even imagine what "next token prediction" would look like generating x86 asm. Feels like 300 buffer overflows wearing a trench-coat, honestly.
recursive · 9 days ago
So humans can verify that the code is behaving in the interests of humanity.
ianbicking · 8 days ago
In a sense they do use their own language; they program in tokenized source, not ASCII source. And maybe that's just a form of syntactic sugar, like replacing >= with ≥ but x100. Or... maybe it's more than that? The tokenization and the models coevolve, from my understanding.

If we do enough passes of synthetic or goal-based training of source code generation, where the models are trained to successfully implement things instead of imitating success, then we may see new programming paradigms emerge that were not present in any training data. The "new language" would probably not be a programming language (because we train on generating source FOR a language, not giving it the freedom to generate languages), but could be new patterns within languages.

davorak · 9 days ago
> For now, sure, we're in the transitional period, but in the long run? Why?

Assuming that after the transitional period it is still humans working with AI tools to build things, where humans actually add value to the process: will the human+AI pairing where the AI can explain in detail what it built, and the human leverages that to build something better, be more productive than the pairing where the human does not leverage those details?

That 'explanation' will be, or can act as, the human-readable code or its equivalent. It does not need to be any coding language we know today, however. The languages we have today are already abstractions and generalizations over architectures, OSes, etc., and that 'explanation' will be different but in the same vein.

IncreasePosts · 9 days ago
For one thing, because it would be trained on human readable code.
ttd · 9 days ago
Well, IMO there's not much reason for an LLM to be trained to produce machine language, nor a functional binary blob appearing fully-formed from its head.

If you take your question and look into the future, you might consider the existence of an LLM specifically trained to take high-level language inputs and produce machine code. Well, we already have that technology: we call it a compiler. Compilers exist, are (frequently) deterministic, and are generally exceedingly good at their job. Leaving this behind in favor of a complete English -> binary blob black box doesn't make much sense to me, logically or economically.

I also think there is utility in humans being able to read the generated output. At the end of the day, we're the conscious ones here, we're the ones operating in meatspace, and we're driving the goals, outputs, etc. Reading and understanding the building blocks of what's driving our lives feels like a good thing to me. (I don't have many well-articulated thoughts about the concept of singularity, so I leave that to others to contemplate.)

zadikian · 9 days ago
LLMs are better at dealing with human-readable code on their own too

Dead Comment

il-b · 9 days ago
Somehow, a CAD program, a 3D editor, a video editor, broadcasting software, a circuit simulation package, etc are all native applications with thousands of features each - yet native development has nothing to offer?
rapnie · 9 days ago
Besides going full native, a Tauri [0] app might have been another good alternative given they already use Rust. There are pros and cons to that choice, of course, and perhaps Tauri was considered and not chosen. Tauri plus Extism [1] would have been interesting, enabling polyglot plugin development via wasm. For Extism see also the list of known implementations [2].

[0] https://tauri.app/

[1] https://extism.org/

[2] https://github.com/extism/extism/discussions/684

TimFogarty · 9 days ago
I have been using Tauri for a macOS app I'm making [1] and it has been great. The app is only 11 MB, and it has had most of the APIs I need.

However, there are still some rough edges that have been annoying to work with. I think for my next project I will actually go back to electron. There are two issues that caused me pain:

1. I can't use Playwright to run e2e tests on the Tauri app itself. That's because the webview doesn't expose the Chrome DevTools Protocol, and tauri-driver [2] does not work on macOS.

2. Security-scoped resources aren't fully implemented, which means that if a user gets the app through the App Store, it won't be able to remember file permissions between runs [3]. It's not too much of an issue since I probably won't release it on the App Store, but it's still annoying.

But I hope Tauri continues to grow and we start seeing apps use it more.

[1] https://tidyfox.app/

[2] https://v2.tauri.app/develop/tests/webdriver/

[3] https://github.com/tauri-apps/tauri/issues/3716

moogly · 8 days ago
I have also used Tauri for one of my private apps, and using the OS's webview just doesn't work for me, so for my next project I'm probably going to use Electron as well, since it embeds its own webview. Yeah, it's bloated, but I'm so tired of things not working properly on Wayland without disabling this and that via random env vars, and of not being able to do a fully OOTB single portable AppImage build on Linux. I can either make it work on Kubuntu + Arch (building on Ubuntu), or Arch + Fedora (building on Arch), but not all three.

I tried Uno Platform and AvaloniaUI last year, but I had similar problems there with external drag 'n' drop not working on Wayland, plus the difficulty of writing your own advanced components, of which there are oodles to choose from with React/Vue/Solid/Svelte.

I'm not rewriting that other app in Electron, so for Tauri (whose development largely seems to have stalled?) I'm hoping this [1] will solve my Linux hurdles. Going to try that branch out.

[1]: https://github.com/tauri-apps/tauri/pull/12491

And this is just desktop Linux. I used to care about Windows but stopped building for that.

adisinghyc · 6 days ago
Such a real problem; I tested the webdriver myself. There really should be something to automate e2e tests via an MCP for Tauri as well.
Joeboy · 9 days ago
I find it a bit odd how much people talk up the Rust aspect of Tauri. In most cases you'll be writing a TypeScript frontend and relying on boilerplate Rust + plugins for the backend. And I'd think most of the target audience would see that as a good thing.
francisl · 9 days ago
I'm working on a project using Tauri with htmx. I know, a bit uncommon. But the backend uses axum and htmx; no JS/TS UI. It's fast, reliable, and it works well. Plus it's easy to share/reuse the lib with the server/web.
rapnie · 9 days ago
I am considering a Tauri app but am still wondering about architecture design choices, which the docs are sparse about. For instance, the web side may constitute a more full-blown webapp, say NextJS, and include the database persistence, say SQLite-based, on the web side too, closest to the webapp. That goes against the sandboxing (and likely best practice), where all platform-related side effects are handled platform-side, implemented in Rust. I wonder if it is a valid choice. There is a trade-off between ease of use and straightforwardness versus stricter sandboxing.
arjie · 8 days ago
I built a vibe-coded personal LLM client using Tauri, and if I'm being honest the result was much worse than either Electron or just ditching it and going full ratatui. LLMs do well when you can supply them a verification loop, and Tauri just doesn't have the primitives to expose one. For my personal tools, I'm very happy with ratatui or non-TUI CLIs in Rust, but for GUIs I wouldn't use it. Just not good dev ex.
headcanon · 9 days ago
+1 for Tauri, I've been using it for my recent vibe-coded experimental apps. Making rust the "center of gravity" for the app lets me use the best of all worlds:

- declarative-ish UI in typescript with react

- rust backend for performance-sensitive operations

- I can run a python sidecar, bundled with the app, that lets me use python libraries if I need it

If I can and it makes sense to, I'll pull functionality into rust progressively, but this give me a ton of flexibility and lets me use the best parts of each language/platform.

It's fast, too, and doesn't use a ton of memory like Electron apps do.

EduardoBautista · 9 days ago
Also, Rust's strong, strict type system keeps Claude honest. It seems as if the big LLMs have been trained on a lot of poorly written TypeScript, because they tend to use type assertions such as `as any` and eslint-disable comments.

I had to add strict ESLint and TypeScript rules to keep guardrails on the coding agents.
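Guardrails of that sort can be encoded so the agent can't quietly route around them. As a sketch, assuming the typescript-eslint package (v8+ flat config; the preset and rule names below are real, but treat the exact setup as illustrative):

```typescript
// eslint.config.ts -- illustrative guardrails against agent escape hatches.
import tseslint from 'typescript-eslint';

export default tseslint.config(
  // The type-checked "strict" preset already flags most unsafe `any` flows.
  ...tseslint.configs.strictTypeChecked,
  {
    rules: {
      // Reject `as any` / `: any` outright.
      '@typescript-eslint/no-explicit-any': 'error',
      // Reject `// @ts-ignore`, `// @ts-expect-error` without description, etc.
      '@typescript-eslint/ban-ts-comment': 'error',
    },
  },
);
```

Run in CI (or in the agent's own verification loop), this turns "please don't use `as any`" from a prompt instruction into a hard failure the model has to fix.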

rapnie · 9 days ago
I added a list of known Extism implementers to my comment above, to take inspiration from should Extism be attractive to consider for you.
thoughtfulchris · 8 days ago
My team is building a cross platform app with Tauri that is mobile, web, and desktop in one codebase and we've had almost nothing bad to say. It's been great. Also the executable size and security are amazing. Rust is nice. Haven't done as much with it yet but it will come in useful soon as we plan to implement on-device AI models that are faster in Rust than WebGPU.
oooyay · 9 days ago
I use something similar to Tauri called Wails: https://wails.io/ that's Go-based.
tvink · 9 days ago
Looks cool, but the phrase 'build applications with the flexibility and power of go' made me chuckle. Least damn flexible language in this whole space.
YmiYugy · 8 days ago
This might be a "the grass is greener on the other side" situation, because I do a lot more web than native dev, but in my experience native, while just as quirky as web, will usually give you low-level APIs to work around design flaws. On the web it too often feels like you can either accept a slightly janky result or throw everything away and use canvas or WebGL. Here are some recent examples I stumbled across:

- try putting a semi-transparent element on part of an image with rounded corners, and you will observe unfixable anti-aliasing issues in those corners
- try animating an input together with the on-screen keyboard
- try doing a JS-driven animation in a real app (keeping it off the main thread feels hopeless, and Houdini animation worklets never materialized)

I don't think it's that native has nothing to offer. I think that developing (in the case of desktop) for 3 different platforms, each with its own complication of what native UI is, is a nightmare: macOS has SwiftUI (incomplete), UIKit, and AppKit; Linux in practice GTK/Qt; Windows WinUI 3 (fundamentally broken), with WPF and WinForms still hanging around.

rayiner · 8 days ago
> I think that developing (in the case of desktop) for 3 different platforms, each with its own complication of what native UI is, is a nightmare: macOS has SwiftUI (incomplete), UIKit, and AppKit; Linux in practice GTK/Qt; Windows WinUI 3 (fundamentally broken), with WPF and WinForms still hanging around.

Wouldn’t it be a good use of AI to port the same app to several native platforms?

YmiYugy · 8 days ago
Yes it would, but depending on the app it could put you in a ton of hurt:

- AI has gotten a lot better at less popular tech, but there is still a big capability gap between native frameworks and the blessed React + Tailwind stack.
- You will get something that is likely the right shape but littered with a million subtle bugs, and fixing them without intimate knowledge of the platform is really hard.
odiroot · 9 days ago
I'd still take native KDE/Plasma apps over Electron any day. Just the performance and memory usage alone is worth it.

Sublime Text feels so much snappier than VSCode, for another example. And I can leave it running for weeks without it leaking memory.