geerlingguy · 2 years ago
I've used this for years when passing large files between systems in weird network environments, it's almost always flawless.

For some more exotic testing, I was able to run my own magic wormhole relay[1], which let me tweak some things for faster/more reliable huge file copies. I still hate how often Google Drive will fall over when you throw a 10s-of-GB file at it.

[1] https://www.jeffgeerling.com/blog/2023/my-own-magic-wormhole...

bscphil · 2 years ago
> For some more exotic testing, I was able to run my own magic wormhole relay[1], which let me tweak some things for faster/more reliable huge file copies.

The lack of improvement in these tools is pretty devastating. There was a flurry of activity around PAKEs like 6 years ago now, but we're still missing:

* reliable hole punching so you don't need a slow relay server

* multiple simultaneous TCP streams (or a carefully designed UDP protocol) to get large amounts of data through long fat pipes quickly

Last time I tried using wormhole to transmit a large amount of data, I was limited to 20 MB/sec thanks to the bandwidth-delay product. I ended up using plain old HTTP; with aria2c and multiple streams I maxed out a 1 Gbps line.

IMO there's no reason why PAKE tools shouldn't have completely displaced over-complicated stuff like Globus (proprietary) for long distance transfer of huge data, but here we are stuck in the past.
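For reference, the multi-stream HTTP download mentioned above is a one-liner (the URL and connection counts here are illustrative, not from the original transfer):

```shell
# Open up to 16 connections to the server and split the download into 16
# pieces, so aggregate throughput isn't capped by a single stream's
# bandwidth-delay product:
aria2c -x 16 -s 16 https://example.com/huge-file.tar
```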

themoonisachees · 2 years ago
I overall agree, but "reliable hole punching" is an oxymoron. Hole punching is by definition an exploit of undefined behavior, and I don't see the specs getting updated to support it. UPnP IGD was supposed to be that, but well...
Uptrenda · 2 years ago
I've been working on this problem for a few years now and have made considerable progress. https://p2pd.readthedocs.io/en/latest/python/index.html

I'm working on a branch that considerably improves the current code, and hole punching in it works like a Swiss watch. If you're interested, you should check out some of the features that already work well.

croemer · 2 years ago
20 MB/sec is 160 Mbps, so wormhole wasn't that far off the 1 Gbps. Sure, not maxing out, but within a factor of ~6.
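Spelling out the arithmetic (decimal units):

```python
mb_per_s = 20                # observed wormhole throughput, MB/s
mbit_per_s = mb_per_s * 8    # = 160 Mbit/s
shortfall = 1000 / mbit_per_s
print(mbit_per_s, shortfall)  # 160 and 6.25: within a factor of ~6 of 1 Gbps
```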
sleepydog · 2 years ago
As a protocol, TCP should be able to utilize a long fat pipe with a large enough receive window. You might want to check what window scaling factor is used and look for a tunable. I accept that some implementations may have limits beyond the protocol level, and even low levels of packet loss can severely affect the throughput of a single stream.

A bigger reason you want multiple streams is because most network providers use a stream identifier like the 5-tuple hash to spread traffic, and support single-stream bandwidth much lower than whatever aggregate they may advertise.
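On Linux, the tunables in question are sysctls like the following (a sketch; window scaling is on by default on modern kernels, and the buffer maximums shown are illustrative):

```shell
sysctl net.ipv4.tcp_window_scaling   # 1 = window scaling enabled
# Raise the auto-tuning limits (min, default, max in bytes) so a single
# stream's receive window can cover a high bandwidth-delay-product path:
sysctl -w net.ipv4.tcp_rmem="4096 131072 67108864"
sysctl -w net.ipv4.tcp_wmem="4096 16384 67108864"
```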

cl3misch · 2 years ago
> you need a machine that can handle whatever link speeds you need

I would have expected the relay server to be used only for the initial handshake to punch through NAT, after which the transfer is P2P. Only under some network restrictions would the data really flow through the relay. How could they afford running the free relay otherwise?

lotharrr · 2 years ago
There are two servers. The "mailbox server" helps with handshakes and metadata transfers, and is super-low bandwidth, a few hundred bytes per connection. The "transit relay helper" is the one that handles the bulk data transfer iff the two sides were unable to establish a direct connection.

I've been meaning to find the time to add NAT-hole-punching for years, but haven't managed it yet. We'd use the mailbox server messages to help the two sides learn about the IP addresses to use. That would increase the percentage of transfers that avoid the relay, but the last I read, something like 20% of peer-pairs would still need the relay, because their NATs are too restrictive.

The relay usage hasn't been expensive enough to worry about, but if it gets more popular, that might change.

from-nibly · 2 years ago
You can't make a p2p connection over a NAT without exposing a port on the public side of the NAT.
bsharper · 2 years ago
I end up using a combination of scp, LocalSend, magic wormhole and sharedrop.io. Occasionally `python -m http.server` in a pinch for local downloads. It's unfortunate that this xkcd comic is still as relevant as it was in 2011: https://xkcd.com/949/
dangoodmanUT · 2 years ago
I just read this out in your voice
geerlingguy · 2 years ago
Heh and I was able to do some of that work in service of the dumb but fun test of Internet vs Pigeon data transfer speeds.
aftergibson · 2 years ago
This is one of those amazing single feature utilities that does one thing incredibly well and goes completely unnoticed as it’s so good but also unremarkable. I should try to be more grateful for these brilliant creations.
hoppyhoppy2 · 2 years ago
A similar project with some nice features that I use is croc: https://github.com/schollz/croc
Klathmon · 2 years ago
I've used https://file.pizza a bunch before, only because of the memorable name
foresto · 2 years ago
It was useless for large files when I tried it, at least on Firefox. It seemed to be trying to pull the entire file into RAM.
wodenokoto · 2 years ago
It hasn’t been working for me for years, but somehow I always end up there when I need to transfer a large file. It’s just so easy to remember the name and url!
fn0rd_ · 2 years ago
Maybe stay with wormhole

https://redrocket.club/posts/croc/

Twixes · 2 years ago
Got fixed pretty thoroughly though, it seems! https://schollz.com/tinker/croc9/
Lord_Zero · 2 years ago
I love croc
xbkandxe · 2 years ago
Ditto. Been using it for years to transfer very large files to friends when throwing them up on my web server would be too slow.
netsec_burn · 2 years ago
I've used wormhole once to move a 70 GB file. Couldn't possibly do that before. And yes, I know I used the bandwidth of the relay server, I donated to Debian immediately afterwards (they run the relay for the version in the apt package).
lotharrr · 2 years ago
(magic-wormhole author here)

Thanks for making a donation!

I run the relay server, but the Debian maintainer agreed to bake an alternate hostname into the packaged versions (a CNAME for the same address that the upstream git code uses), so we could change it easily if the cost ever got to be a burden. It hasn't been a problem so far: it moves 10-15 TB per month, but shares a bandwidth pool with other servers I'm renting anyways, so I've only ever had to pay an overage charge once. And TBH, if someone made a donation to me, I'd just send it off to Debian anyways.

Every once in a while, somebody moves half a terabyte through it, and then I think I should either move to a slower-but-flat-rate provider, or implement some better rate-limiting code, or finally implement the protocol extension where clients state up front how much data they're going to transfer, and the server can say no. But so far it's never climbed the priority ranking high enough to take action on.

Thanks for using magic wormhole!

password4321 · 2 years ago
> move to a slower-but-flat-rate provider

As I'm sure you're aware: https://www.scaleway.com/en/stardust-instances/ "up to 100Mbps" for $4/month

jancsika · 2 years ago
I remember at one point reading about webrtc and some kind of "introducer" server that would start the peer-to-peer connections between clients.

Does wormhole try something like that before acting as a relay?

pyrolistical · 2 years ago
Seems like the only way to ensure wormhole scales is to use the relay server only to set up direct connections.

I know this requires one of the ends to be able to open ports or whatever, but that should be baked into the wormhole setup.

floam · 2 years ago
Do you do NAT hole punching, and/or port mapping like UPnP or NAT-PMP? I think for all but the most hostile networks the use of the relay server can almost always be avoided.
AtlasBarfed · 2 years ago
It took this far down in the comments to get to some inkling of the meat of this.

It relies on some singular or small set of donated servers?

NAT <-> NAT traversal is obviously the biggest motivator, since otherwise you just scp or rsync or sftp if you don't have the dual barrier.

Is the relay server configurable? It seemed to be implied that it's somewhat hardcoded.

lotharrr · 2 years ago
Yes, it relies on two servers, both of which I run. All connections use the "mailbox server", to exchange short messages, which are used to do the cryptographic negotiation, and then trade instructions like "I want to send you a file, please tell me what IP addresses to try".

Then, to send the bulk data, if the two sides can't establish a direct connection, they fall back to the "transit relay helper" server. You only need that one if both sides are behind NAT.

The client has addresses for both servers baked in, so everything works out-of-the-box, but you can override either one with CLI args or environment variables.

Both sides must use the same mailbox server, but they can use different transit relay helpers, since the helper's address just gets included in the "I want to send you a file" conversation. If I use `--transit-helper tcp:helperA.example.com:1234` and you use `--transit-helper tcp:helperB.example.com:1234`, then we'll both try all of:

* my public IP addresses

* your public IP addresses

* helperA (after a short delay)

* helperB (after a short delay)

and the first one to negotiate successfully will get used.

> since otherwise you just scp or rsync or sftp if you don't have the dual barrier

True, but wormhole also means you don't have to set up pubkey ahead of time.

grumbel · 2 years ago
> scp or rsync or sftp

All of them require an account on the other machine and aren't really suitable for a quick one-off file transfer from one computer to another that you don't own.

If I have a direct network connection I tend to go with:

    python3 -m http.server
or

    tar ...| nc
Neither of which is great, but at least you'll find them on many machines already preinstalled.
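A concrete form of the `tar | nc` pattern (hostname and port are placeholders; the listen syntax varies by netcat flavor, and some variants need `-l -p 9000` instead of `-l 9000`):

```shell
# Receiver: listen on port 9000 and unpack the incoming tar stream
nc -l 9000 | tar xf -

# Sender: stream a directory to the receiver
tar cf - mydir | nc receiver.example.com 9000
```

No encryption, no integrity check, no resume: fine on a trusted LAN, not much else.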

teruakohatu · 2 years ago
The wormhole transit protocol will attempt to arrange a direct connection and avoid transferring data through the relay.
bredren · 2 years ago
Is there a switch to fail rather than fall back on relay?
DavideNL · 2 years ago
Just noticed by coincidence there's also:

"Rust implementation of Magic Wormhole, with new features and enhancements": https://github.com/magic-wormhole/magic-wormhole.rs

Klonoar · 2 years ago
This lacks a few of the features of the other implementations (e.g., sending a zip and having it unpack correctly).
meejah · 2 years ago
There is also https://magic-wormhole.readthedocs.io/en/latest/ecosystem.ht... and if that is lacking anything please file a ticket or pull-request
lotharrr · 2 years ago
Author here... happy to answer any questions!
panarky · 2 years ago
I use wormhole a lot, but I've been too lazy to figure out if it's as secure as ssh/scp, so I always gpg the file I'm transferring before putting it into wormhole.

Is that paranoid behavior?

lotharrr · 2 years ago
It can't hurt, but it shouldn't be necessary. The client-side software establishes an encrypted connection with its peer, using an encryption scheme that should be just as secure [but see below] as what GPG or SSH will give you.

For GPG to add security, you also have to make sure the GPG key is transferred safely, which adds work to the transfer process. Either you're GPG-encrypting to a public key (which you must have copied from the receiving side to the sending side at some point), or you're using a symmetric-key passphrase (which you must generate randomly, to be secure, and then copy it from one side to the other).

I should note that magic-wormhole's encryption scheme is not post-quantum-secure. So if you've managed to get a GPG symmetric key transferred to both sides via PQ-secure pathways (I see that current SSH 9.8 includes "kex: algorithm: sntrup761x25519-sha512@openssh.com", where NTRU is PQ-secure), then your extra GPG encryption will indeed provide you with security against a sufficiently large quantum computer, whereas magic-wormhole alone would be vulnerable.
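For the symmetric-passphrase variant described above, the extra layer looks something like this (filenames are illustrative; in practice generate the passphrase randomly and move it over a separate secure channel rather than typing it inline):

```shell
# Sender: encrypt with a symmetric passphrase, then send the ciphertext
gpg --symmetric --cipher-algo AES256 -o secret.tar.gpg secret.tar
wormhole send secret.tar.gpg

# Receiver, after `wormhole receive CODE`:
gpg --decrypt -o secret.tar secret.tar.gpg
```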

jwilk · 2 years ago
> too lazy to figure out if it's as secure as ssh/scp

It absolutely isn't. See my rant: https://news.ycombinator.com/item?id=24519895

nickpsecurity · 2 years ago
I just wanted to thank you for making it. I wanted it just to bootstrap VM’s on new machines. I ended up using it all the time for many things. Great project!
deknos · 2 years ago
Hi, I have indeed some questions!

* is there an app for it, where I can share the password via a QR code, for when the data is too big for QR codes?

* what do you plan on doing regarding quantum computation? switching to some PQ-safe cryptography, also to be safe against save-now-decrypt-later attacks?

* is it possible to extend your protocol over more generic proxies, like TURN servers?

peterhadlaw · 2 years ago
Thank you, I use this software so much and it's really a wonderful solution
twp · 2 years ago
Seconded, magic-wormhole is fantastic and has "just worked" for me several times. Thank you for all your work in creating this brilliant software!
psanford · 2 years ago
Hey Brian! Just wanted to say that I really appreciate all the work that you put into Magic Wormhole.
happosai · 2 years ago
Nice project. Is there iptables connection tracking module that can handle the protocol?
lotharrr · 2 years ago
None that I know of. It just uses a TCP connection to the mailbox server (with keepalives), and then TCP connections for the bulk-transfer transit phase, so I can't think of anything special that iptables would need to handle it well.

The encrypted connection is used to exchange IP addresses... maybe you're thinking of the module that can, e.g., modify FTP messages to replace the IP addresses with NAT-translated ones? Our encryption layer would prevent that, but we'd probably get more benefit from implementing WebRTC or a more general hole-punching scheme than from having the kernel fiddle with the addresses.

Bayes7 · 2 years ago
great project!
matricaria · 2 years ago
Why is this better than rsync or scp?
lotharrr · 2 years ago
scp/rsync are great tools, but they require pre-coordination of keys. One side is the client, the other is the server. The client needs an account on the server machine (so the human on the client machine must provide an ssh pubkey to the human on the server machine, who must be root, and create a new account with `adduser`, and populate the ~/.ssh/authorized_keys file). And the client needs to know the server's correct hostkey to avoid server-impersonation attacks (so the human on the server machine must provide an ssh host pubkey to the human on the client machine, who puts it in their ~/.ssh/known_hosts file).

Once that's established, and assuming that the two machines can reach each other (the server isn't behind a NAT box), then the client can `scp` and `rsync` all they want.

Magic-wormhole doesn't require that coordination phase. The human sending the file runs `wormhole send FILENAME` and the tool prints a code. The human receiving the file runs `wormhole rx CODE`. The two programs handle the rest. You don't need a new account on the receiving machine. The CODE is much much shorter than the two pubkeys that an SSH client/server pair require, short enough that you can yell it across the room, just a number and two words, like "4-purple-sausages". And you only need to send the code in one direction, not both.

Currently, the wormhole programs don't remember anything about the connection they just established: it's one-shot, ephemeral. So if you want to send a second file later, you have to repeat the tell-your-friend-a-code dance (with a new code). We have plans to leverage the first connection into making subsequent ones easier to establish, but no code yet.

Incidentally, `wormhole ssh` is a subcommand to set up the ~/.ssh/authorized_keys file from a wormhole code, which might help get the best of both worlds, at least for repeated transfers.

smusamashah · 2 years ago
tptacek · 2 years ago
wormhole-william is just a Go implementation of Magic Wormhole; those are the two you should use, Magic Wormhole and wormhole-william.
haunter · 2 years ago
sltkr · 2 years ago
10 different tools? Ridiculous! We need to develop one universal tool that covers everyone's use cases.
smusamashah · 2 years ago
I have a list of now 22 browser based p2p sharing tools that i shared here a few times in similar threads https://gist.github.com/SMUsamaShah/fd6e275e44009b72f64d0570...
jimmySixDOF · 2 years ago
I was going to say SnapDrop was discontinued, but I see it is back again. Thanks for the reminder; this must be the third or fourth time I have thought they pulled the plug and then seen it get fixed back to normal. Bravo, developers!
singularity2001 · 2 years ago
scp
dgrove · 2 years ago
scp assumes that you have a login on the computers you're trying to share data from. wormhole allows sharing with others without providing login access to the computer.
alanbernstein · 2 years ago
I realize this is a dumb question, but what's a good way to do this between an iPhone and a MacBook? Airdrop is disabled (by policy), iCloud storage is full (because I'm lazy), and I use syncthing on every other device, but I haven't found a client I can use on my work iPhone.
etra0 · 2 years ago
I've been using sharedrop.io, which is also open-source [1], and it works quite nicely. I particularly like this one because I don't have to install any third-party app on any of the devices.

I think on macOS Safari usually doesn't work as well as Chrome, but I've been able to transfer from Windows to iOS, Windows to macOS, and macOS to iOS without installing a thing.

[1] https://github.com/szimek/sharedrop

elesiuta · 2 years ago
If they're on the same network, cross platform, open source airdrop alternative https://github.com/localsend/localsend
tamimio · 2 years ago
I like LocalSend and Landrop. The latter performed better when I sent large files. However, neither of them does it automatically; you have to manually do it every time, which is okay since they don’t claim to be sync software.
aborsy · 2 years ago
Tailscale’s killer feature: Taildrop. Reliable file transfer between devices of all kinds!
ducktective · 2 years ago
Connect both devices to the same WiFi network and use a http server like:

  python -m http.server

password4321 · 2 years ago
When python is not installed already (Windows, pretty much) or the computer is the destination, I prefer https://github.com/sigoden/dufs, a single binary supporting uploads, folder .zip download, and even WebDAV.
mixmastamyk · 2 years ago
I used this for years, then finally implemented uploads with flask. Can send photos from mobile quickly.
DavideNL · 2 years ago
sdoering · 2 years ago
Came here to say syncthing and Möbius Sync. Works like a charm for me between Win, *nix, macOS, Android and iOS.

But getting iOS to sync was a pain. Still, now it works just fine.

lazyeye · 2 years ago
Use Signal. If you install the desktop client Signal has a special "Note to Self" address you can use to transfer message attachments between devices.
Ylpertnodi · 2 years ago
Signal 'note to self' works. I have several nts's...medical, links only, shopping...if I think of something on one device (pc/ android for me), it's on the other within seconds.
vlovich123 · 2 years ago
Doesn’t signal have pretty restrictive size limitations or does this not apply to note to self?
skeledrew · 2 years ago
I use Telegram to transfer (and store) a crazy lot of stuff. Unlimited storage, with file sizes limited to 2GB (or 4GB for premium users).
hinkley · 2 years ago
I tend to message myself a lot of things. Usually links not files, but it works and it doesn’t take me out of the headspace I’m occupying. Either Apple messages or slack.
LVB · 2 years ago
loloquwowndueo · 2 years ago
Pairdrop.net is my go-to in these cases. Easy to remember, just add a P to Airdrop :)
tamimio · 2 years ago
Mobius Sync. Works like a charm.
toastercat · 2 years ago
Möbius Sync doesn't sync in the background; you must have the app in the foreground for it to function. So, not quite a proper substitute for Syncthing, but it may work for OP's use case.
aaronbrethorst · 2 years ago
Dropbox