remram · a month ago
You can do this with ssh (and socat or mkfifo):

  # receiver
  socat UNIX-RECV:/tmp/foobar - | my-command

  # sender
  my-command | ssh host socat - UNIX-SENDTO:/tmp/foobar
You can relay through any other SSH server if your target is behind a firewall or subject to NAT (for example the public service ssh-j.com). This is end-to-end encrypted (SSH inside SSH):

  # receiver
  ssh top-secret@ssh-j.com -N -R ssh:22:localhost:22
  socat UNIX-RECV:/tmp/foobar - | my-command

  # sender
  my-command | ssh -J top-secret@ssh-j.com ssh socat - UNIX-SENDTO:/tmp/foobar
(originally posted on the thread for "beam": https://news.ycombinator.com/item?id=42593135)
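For readers without socat handy: the UNIX-RECV/UNIX-SENDTO pair above is just a datagram Unix socket, and ssh simply carries the sender's bytes between hosts. A local sketch of the same exchange using only Python's stdlib (paths here are illustrative):

```python
import os, socket, tempfile

path = os.path.join(tempfile.mkdtemp(), "foobar")

# receiver side, i.e. socat UNIX-RECV:/tmp/foobar -
recv = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
recv.bind(path)

# sender side, i.e. socat - UNIX-SENDTO:/tmp/foobar
send = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
send.sendto(b"hello over the pipe\n", path)

data = recv.recv(4096)
print(data.decode(), end="")
```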

kragen · a month ago
This doesn't do most of what dumbpipe claims to do: it doesn't use QUIC, doesn't avoid using relays when possible, doesn't pick a relay for you, and doesn't keep your devices connected as network connections change. It also depends on you doing the ssh key management out-of-band, while dumbpipe appears to put the keys into random ASCII strings.

WireGuard is more similar.

nightfly · a month ago
Wireguard doesn't do most of those either
cakealert · a month ago
First sentence after following the link this topic is about:

  Dumb pipe punches through NATs, using on-the-fly node identifiers. It even keeps your machines connected as network conditions change.

Deleted Comment

nine_k · a month ago
You can simplify things even more by running https://www.tarsnap.com/spiped.html

It doesn't even assume ssh.

cyberge99 · a month ago
Similar with iroh.
rfl890 · a month ago
You could also set up a wg server, have both clients connect to it and then pass data between the two IPs. There's still a central relay passing data around, NAT or no NAT.
bb88 · a month ago
After getting burnt on wireguard a few times now, I'm not keen on using it anymore.

I want less magic, not more impenetrable iptables rulesets (on Linux at least).

actinium226 · a month ago
Never knew about ssh-j.com. Neat.
ndyg · a month ago
The approach you describe requires the host to have an open SSH port you can access. QUIC + NAT hole punching works around this.
defraudbah · a month ago
you need an SSH server and an open port, a different protocol, etc.

Dead Comment

smusamashah · a month ago
Somewhat relevant, I have a list of (mostly browser based + few no-setup cli) tools [1] to send files from A to B. I keep sharing this list here to fish more tools whenever something like this comes up.

[1]: https://gist.github.com/SMUsamaShah/fd6e275e44009b72f64d0570...

Liquix · a month ago
I love LocalSend for quick transfers between your own devices, just werks on every OS.

https://github.com/localsend/localsend

voidmain0001 · a month ago
One limitation of iOS is the inability to use Bluetooth to transfer an image/video file to a Bluetooth receiver such as Windows. The Apple documentation requires a wired connection. https://support.apple.com/en-ca/120267

If LocalSend is running on iOS and Windows does LocalSend have the ability to send photos?

Deleted Comment

mrheosuper · a month ago
Recently this project caught my attention. It claims to support many different protocols, works in various web browsers (even IE6), and is extremely easy to set up (a single Python file). I haven't given it a try, just wanted to share.

https://github.com/9001/copyparty

b_fiive · a month ago
same team behind dumbpipe makes sendme, which is much closer to this use case! https://github.com/n0-computer/sendme
44za12 · a month ago
Every time someone calls a product “dumb,” I get a little excited, because it usually means it’s actually smart. The internet is drowning in “smart” stuff that mostly just spies on you and tries to sell you socks. Sometimes, I just want a pipe that does what it says on the tin: move my bits, shut up, and don’t ask for my mother’s maiden name.
Sateeshm · a month ago
Dumb is now 'we don't steal your data'
josephg · a month ago
> and tries to sell you socks

I've been writing raw POSIX network code today. A lot of variables shorten "socket" to "sock". And my brain was like.. um, bad news! This is trying to sell us on their special sock(et)s!

lozf · a month ago
I thought it was quite a fun pun for the same reason.
thuridas · a month ago
But what about the enterprise ready AI features so that they can train on your data?
Aardwolf · a month ago
I wonder why it's not standard that you can simply connect two PCs to each other with a USB cable and have them communicate/transfer files. With the same protocol in all OSes, of course. Seems like it should have been one of the first features USB could have had since the beginning, imho

I know there's something about USB A to USB A cables not existing in theory, but this would have been a good reason to have it exist, and USB C of course can do this

Also, Android to PC can sort of do it, and is arguably two computers in some form (but this was easier when Android still acted like a mass storage device). But e.g. two laptops can't do it with each other.

felurx · a month ago
You actually can connect two machines via USB-C (USB4 / Thunderbolt) and you get a network connection.

You only get Link-Local addresses by default, which I recall as somewhat annoying if you want to use SSH or whatever, but if you have something that does network discovery it should probably work pretty seamlessly.

See https://christian.kellner.me/2018/05/24/thunderbolt-networki... or https://superuser.com/a/1784608

userbinator · a month ago
You only get Link-Local addresses by default

The same thing happens with two machines connected via an Ethernet cable, which appears to be what this USB4 network feature does - an Ethernet NIC to software, but with different lower layer protocols.

grishka · a month ago
Non-USB-shaped older Thunderbolt, down to version 1, can do this too, iirc. But you do need the expensive and somewhat rare cable.
Dagger2 · a month ago
ssh is fine:

  ssh fe80::2%eth0
where fe80::2 is the peer's address, and eth0 is the local name of the interface they're on.

Unfortunately browsers have decided that link-local is pointless and refuse to support it, so HTTP is much more difficult.
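The `%eth0` suffix is the part browsers refuse to handle: a link-local address is only meaningful per interface, so the socket address carries a scope_id naming the NIC. A small sketch of what the resolver does with it (`lo` is assumed to exist, as on Linux; substitute a real interface elsewhere):

```python
import socket

# Resolve a numeric link-local address with an explicit scope.
infos = socket.getaddrinfo("fe80::2%lo", 22,
                           socket.AF_INET6, socket.SOCK_STREAM,
                           flags=socket.AI_NUMERICHOST)
sockaddr = infos[0][4]   # (host, port, flowinfo, scope_id)
print(sockaddr[3])       # non-zero: the interface index for "lo"
```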

clearleaf · a month ago
The incredible technology you're describing was possible on the Nintendo DS without wires and no need for a LAN either. It's a problem that's been solved in hundreds of different ways over the last 40 years but certain people don't want that problem to ever be solved without cloud services involved.

This dumb pipe thing is certainly interesting but it will run into the same problem as the myriad other solutions that already exist. If you're trying to give a 50MB file to a Windows user they have no way to receive it via any method a Linux user would have to send it unless the Windows user has gone out of their way to install something most people have never heard of.

deathanatos · a month ago
> It's a problem that's been solved in hundreds of different ways over the last 40 years

If we put the requirements of,

  1. E2EE
  2. Does not rely on Google. (Or ideally, any other for profit corporation.)
That eliminates like 90% of the recent trend of WebRTC P2P file transfer things that have graced HN over the last decade, as all WebRTC code seems to just copy Google's STUN/TURN servers between each other.

But as you say,

> but certain people don't want that problem to ever be solved without cloud services involved.

ISPs seem to be in that set. IPv6 would obsolete NAT, but my ISP was kind enough to ship an IPv6 firewall that by default drops incoming packets. It has four modes: drop everything, drop all inbound, a weird intermediate mode that is useless¹, and allow everything.

(¹this is Verizon fios; they claim, "This feature enables "outside-to-inside" access for IPv6 services so that an "outside" Internet service (gaming, video, etc.) can access a specific "inside" home client device & port in your local area network."; but the feature, AFAICT, requires the external peer's address. I.e., I need to know what my roaming IP will be before I leave the house, somehow, and that's obviously impossible. It seems utterly clearly slapped on to say "it comes with a firewall" but was never used by anyone at Verizon in the real world prior to shipping…)

loloquwowndueo · a month ago
Pairdrop.net - no need to install anything, transfers go over the local network if both devices are in a LAN.
elliotec · a month ago
I mean, Windows users install things they’ve never heard of all the time.

If this was a real thing you needed to do, and it is too much work to get them to install WSL, you could probably just send them the link to install Git and use git bash to run that curl install sh script for dumbpipe.

And if this seemed like a very useful thing, it couldn’t be too hard to package this all up into a little utility that gets windows to do it.

But alas, it remains “easier” to do this with email or a cloud service or a usb stick/sd card.

kovek · a month ago
> It's a problem that's been solved in hundreds of different ways over the last 40 years

I guess now you can find the solution that you need by telling the requirements to LLMs who have now indexed a lot of the tradeoffs

userbinator · a month ago
USB is asymmetric - there's a host and a device, and the latter acts as a polled slave.

The use-case of a wired connection between two PCs was already solved years before USB --- with Ethernet.

genewitch · a month ago
There are USB 2.0 (and probably 1.x) devices with USB-A on both sides and a small box in the middle that acts as a network crossover between two machines; I've seen them in stores. I've never used one because I know how to set CIDR. And, as others have mentioned, this does just work with USB-C.
jrm4 · a month ago
Like so many possible networking/connection nice things that we can't have, you really can directly blame this one on "the companies."

Brought to you by the same people that made "peer-to-peer" a dirty word.

1vuio0pswjnm7 · a month ago
"I wonder why it's not standard that you can simply connect two PCs to each other with a USB cable and have them communicate/transfer files."

After TCP/IP became standard on personal computers, I used an Ethernet crossover cable to transfer large files between computers. I always have some non-networked computers. USB sticks were not yet available.

Today the Ethernet port is removed from many personal computers perhaps in hopes computer owners will send ("sync") their files to third party computers on the internet (renamed "the cloud") as a means of transferring files between the owner's computers.

Much has changed over the years. Expect replies about those changes. There are many, many different ways to transfer files today. Expect comments advocating those other methods. But the crossover cable method still works. With a USB-to-Ethernet adapter it can work even on computers with no Ethernet port. No special software is needed. No router is needed. No internet is needed. Certainly no third party is needed. Just TCP/IP which is still a standard.
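Once two machines share any link (crossover cable, ad hoc Wi-Fi, USB-C networking), moving a file really does need nothing special; a minimal loopback sketch of the idea, with illustrative paths and stdlib only:

```python
import os, socket, tempfile, threading

def receive_file(srv, out_path):
    # accept one connection and write everything it sends to a file
    conn, _ = srv.accept()
    with conn, open(out_path, "wb") as f:
        while chunk := conn.recv(65536):
            f.write(chunk)

def send_file(host, port, path):
    with socket.create_connection((host, port)) as s, open(path, "rb") as f:
        s.sendfile(f)

# demonstrated over loopback; over a crossover cable you'd use the
# peer's address on the link instead of 127.0.0.1
tmp = tempfile.mkdtemp()
src, dst = os.path.join(tmp, "src"), os.path.join(tmp, "dst")
with open(src, "wb") as f:
    f.write(b"some file contents")

srv = socket.create_server(("127.0.0.1", 0))
port = srv.getsockname()[1]
t = threading.Thread(target=receive_file, args=(srv, dst))
t.start()
send_file("127.0.0.1", port, src)
t.join()
```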

loloquwowndueo · a month ago
> Today the Ethernet port is removed from many personal computers

Pretty sure one can set up an ad hoc wifi network for this.

vineyardmike · a month ago
> Today the Ethernet port is removed from many personal computers perhaps in hopes computer owners will send ("sync") their files to third party computers on the internet (renamed "the cloud") as a means of transferring files between the owner's computers.

Oh come on, this isn't a conspiracy. For the last decade, every single laptop computer I've used has been thinner than an ethernet port, and every desktop has shipped with an ethernet port. I think the last few generations of MacBook Pros (which were famously thicker than prior generations) are roughly as thick as an ethernet port, but I'm not sure it'd practically fit.

And I know hacker news hates thin laptops, but most people prefer thin laptops over laptops with ethernet. My MacBook Air is thin and powerful and portable and can be charged with a USB-C phone charger. It's totally worth it for 99% of people to not have an ethernet port.

Bluecobra · a month ago
You used to be able to connect two PC’s together via the parallel port. I had to do this once to re-install Windows 95 on a laptop with a hard drive and floppy. It was painfully slow but it worked.
userbinator · a month ago
https://en.wikipedia.org/wiki/IEEE_1284#Characteristics

Up to 2MB/s effective throughput, better than 10M Ethernet. Likely it was slower for you due to other limitations.

exidy · a month ago
I believe this was pioneered by Laplink[0].

[0] https://en.wikipedia.org/wiki/Laplink

chriswarbo · a month ago
> You used to be able to connect two PC’s together via the parallel port.

This could be done on Amiga too, using parnet https://crossconnect.tripod.com/PARNET.HTML

I recall it being easier to set up than a dialup modem (since the latter also required installing a TCP/IP stack)

viraptor · a month ago
On Linux you can do it by creating an MTP endpoint, like mobile devices do https://github.com/viveris/uMTP-Responder

It looks like MS also had one, but only on Windows CE for some reason https://www.microsoft.com/en-us/download/details.aspx?id=933...

cozzyd · a month ago
Or an rndis gadget
rtpg · a month ago
You can plug an ethernet cable in between machines and send files over it! So that period where this would be useful already had a pretty good solution (I vividly remember doing this like 3 times in the same day with some family members for some reason (probably nobody having a USB drive at the moment!))
thebruce87m · a month ago
FireWire did this, IIRC. When buying a new Mac you would connect them via a single cable to do the data transfer.
ericwood · a month ago
Macs still have target disk mode but it requires rebooting. Highly recommend using thunderbolt to transfer over to a new computer!
0_____0 · a month ago
IIRC Apple computers can be put into Target Disk Mode, which lets a host computer rifle through its contents as if it is a dumb disk drive
pletnes · a month ago
This requires shutting down one computer (the mac) first, though.
incanus77 · a month ago
I realize you are asking for cross-OS, but Mac OS X was doing this in 2002 (and probably earlier) for PowerBook models with an ethernet cable between them. As I recall, iBooks didn't do this even if they had the port, but PowerBooks would do the auto-crossover, then Finder/AFP would support the machines showing up for each other.
dotancohen · a month ago
I actually have a USB-A to USB-A cable. It came with proprietary Windows software on an 80mm CD-ROM. It wasn't long enough to connect two desktops in the same room if not on the same table, and I just never tried with a laptop because all my laptops have run Debian or some variant thereof since 2005 or so.
tripdout · a month ago
The USB 3.0 spec does actually support A to A cables, but I'm not sure if any software makes use of it.
xandrius · a month ago
You mean a cross ethernet cable?

Or using Bluetooth? Or using local WiFi (direct or not).

deathanatos · a month ago
> You mean a cross ethernet cable?

If both machines have an Ethernet port.

> Or using Bluetooth?

Half the time I need a dumb pipe, it's from personal to work. Regrettably, work forces me to use macOS, and macOS's bluetooth implementation is just an utter tire fire, and doesn't work 90% of the time. I usually fall back to networks, for that reason.

Of course, MBPs also have the "no port" problem above.

> Or using local WiFi (direct or not)

If I'm home, yeah. But TFA is advertising the ability to hole-punch, and if I'm traveling, that'd be an advantage.

defraudbah · a month ago
Ethernet works out of the box; I used local LANs long before I knew how to program.

USB probably works too if you Google a bit.

meindnoch · a month ago
But then nobody could analyze your files... :/
kiitos · a month ago
> In the iroh world, you dial another node by its NodeId, a 32-byte ed25519 public key. Unlike IP addresses, this ID is globally unique, and instead of being assigned,

ok but my network stack doesn't speak NodeId, it speaks TCP/IP -- so something has to resolve your public keys to a host and port that I can actually connect to.

this is roughly the same use case that DNS solves, except that domain names are generally human-compatible, and DNS servers are maintained by an enormous number of globally-distributed network engineers

it seems like this system rolls its own public key string to actual IP address and port mapping/discovery system, and offers a default implementation based on dns which the authors own and operate, which is fine. but the authors kind of hand-wave that part of the system away, saying hey you don't need to use this infra, you can use your own, or do whatever you want!

but like, for systems like this, discovery is basically the entire ball game and the only difficult problem that needs to be solved! if you ignore the details of node discovery and name mapping/resolution like this, then of course you can build any kind of p2p network with content-addressable identifiers or whatever. it's so easy a cave man can do it, just look at ipfs

rklaehn · a month ago
We do use DNS, but we also have an option for node discovery that uses pkarr.org, which is using the bittorrent mainline DHT and therefore is fully decentralised.

And, as somebody else remarked, the ticket contains the direct IP addresses for the case where the two nodes are either in the same private subnet or publicly reachable. It also contains the relay URL of the listener, so as long as the listener remains in the same geographic region, dumbpipe won't have to use node discovery at all even if the listener ip changes or is behind a NAT.

kiitos · a month ago

    we also have an option for node discovery that uses pkarr.org, which is using the bittorrent mainline DHT and therefore is fully decentralised
if users access that bittorrent mainline DHT through a third-party server then it's obviously not decentralized, right? that server is the central point to which clients delegate trust

makeworld · a month ago
In practice, the "ticket" provided by dumbpipe contains your machine's IP and port information. So I believe two machines could connect without any need for discovery infra, in situations that use tickets. (And have UPnP enabled or something.)

See also https://www.iroh.computer/docs/concepts/discovery

kiitos · a month ago
OK so given

    $ ./dumbpipe listen
    ...
    To connect use: ./dumbpipe connect nodeecsxraxj...
that `nodeecsxraxj...` is a serialized form of some data type that includes the IP address(es) that the client needs to connect to?

forgive me for what is maybe a dumb question, but if this is the case, then what is the value proposition here? is it just the smushing together of some IPs with a public key in a single identifier?
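Roughly, yes. As an illustration only — the field names and encoding below are made up, not iroh's actual ticket format — a ticket can be thought of as a node id plus connectivity hints serialized into one copy-pasteable string:

```python
import base64, json

def make_ticket(node_id, direct_addrs, relay_url):
    # bundle the id plus connection hints into one opaque string
    blob = json.dumps({"id": node_id, "direct": direct_addrs,
                       "relay": relay_url}, sort_keys=True).encode()
    return "node" + base64.b32encode(blob).decode().lower().rstrip("=")

def parse_ticket(ticket):
    b32 = ticket.removeprefix("node").upper()
    b32 += "=" * (-len(b32) % 8)          # restore base32 padding
    return json.loads(base64.b32decode(b32))

ticket = make_ticket("ed25519-pubkey-hex", ["192.0.2.7:4433"],
                     "https://relay.example")
info = parse_ticket(ticket)
```

The value proposition is then less the smushing itself and more that the key in the bundle authenticates whatever peer you end up reaching, however you reached it.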

Liftyee · a month ago
I wonder how much reimplementation there is between this and Tailscale, as it seems like there are many needs in common. One would think that there are already low level libraries out there to handle going through NATs, etc. (but maybe this is just the first of said libraries!)
homebrewer · a month ago
Who cares at this point; Tailscale itself is the 600th reimplementation of the same idea, with predecessors like Nebula and tinc. They came at the right time, with WireGuard on the rise, and poured millions into advertising that their community "competitors" didn't have, since most of them aren't riding on VC money.
api · a month ago
I've met a lot of people who think Tailscale invented what it does.

Prior to Tailscale there were companies -- ZeroTier and before it Hamachi -- and as you say many FOSS projects and academic efforts. Overlay networks aren't new. VPNs aren't new. Automated P2P with relay fallback isn't new. Cryptographic addressing isn't new. They just put a good UX in front of it, somewhat easier to onboard than their competitors, and as you say had a really big marketing budget due to raising a lot when money was cheap.

Very few things are totally new. In the past ten years LLMs are the only actually new thing I've seen.

Shill disclosure: I'm the founder of ZeroTier, and we've pivoted a bit more into the industrial space, but we still exist as a free thing you can use to build overlays. Still growing too. Don't have any ill will toward Tailscale. As I said nobody "owns" P2P and they're doing something a bit different from us in terms of UX and target market.

These "dumb pipe" tools -- CLI tooling for P2P pipes -- are cool and useful and IMHO aren't exactly the same thing as ZT or TS etc. They're for a different set of use cases.

The worst thing about the Internet is that it evolved into a client-server architecture. I remain very cautiously optimistic that we might fix this eventually, or at least enable the other paradigm to a much greater extent.

benreesman · a month ago
Tailscale sells certificate escrow, painless SSO, high-quality integrations/co-sell with e.g. Mullvad, full-take netlogging, and "Enterprise Look and Feel" wrapped around the real technology. You can run WireGuard yourself, and sometimes I do, but certificate management is tricky to get right, the rest is a pain in the ass, and Tailscale is cheap. The hackers behind it (bfitz et al.) are world-class, and you can get it past most "Enterprise" gatekeeping.

It doesn't solve problems on my personal infrastructure that I couldn't solve myself, but it solves my work problem of getting real networking accepted by a diverse audience with competing priorities. And it's like 20 bucks a seat with all the trimmings. Idk, maybe it's 50; I don't really check because it's the cheapest thing on my list of cloud stuff by an order of magnitude or so.

It's getting more enterprise and less hackerish with time, big surprise, and I'm glad there's younger stuff in the pipe like TFA to keep it honest. But of all the necessary evils in The Cloud? I feel rather fondly towards Tailscale, rather than with cold rage like most everything else on the Mercury card.

senko · a month ago
I've managed a Wireguard-based VPN before Tailscale. It's pretty straightforward[0].

Tailscale makes it even more convenient and adds some goodies on top. I'm a happy (free tier) user.

[0] I also managed an OpenVPN setup with a few hundred nodes a few decades back. Boy do we have it easy now...

conradev · a month ago
Iroh is much better suited for the application layer. You can multiplex multiple QUIC streams over the same connection, each for a specific purpose. All you need is access to QUIC, no virtual network interface.

It’s a bit like gRPC except you control each byte stream and can use one for, say, a voice call while you use another for file transfer and yet another for simple RPC. It’s probably most similar to WebRTC, but you have more options than SCTP and RTP.
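Stripped of QUIC's flow control and loss recovery, the multiplexing idea is just per-stream framing over one byte pipe. A toy sketch (this is not iroh's or QUIC's actual wire format):

```python
import struct

def frame(stream_id, payload):
    # 4-byte stream id + 2-byte length, then the payload
    return struct.pack(">IH", stream_id, len(payload)) + payload

def deframe(buf):
    # split interleaved frames back into independent per-stream byte strings
    streams = {}
    while buf:
        sid, n = struct.unpack_from(">IH", buf)
        streams[sid] = streams.get(sid, b"") + buf[6:6 + n]
        buf = buf[6 + n:]
    return streams

# voice, file transfer, and more voice share one "connection"
wire = frame(1, b"voice-bytes") + frame(2, b"file-bytes") + frame(1, b"more")
channels = deframe(wire)
```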

mpalmer · a month ago
This is made using iroh, which aims to be a low level framework for distributed software. Involves networking but also various data structures that enable replication and consistency between networked nodes.
danenania · a month ago
Does it include reconnection logic? I presume that's not considered "low level", but it does always annoyingly have to be reimplemented every time you deal with long-lived socket connections in production.
rklaehn · a month ago
Iroh is one of these low level libraries. It is basically p2p QUIC, where p2p means 1. addressing by node id and 2. hole punching.

Dumbpipe is meant to be a useful standalone tool, but also a very simple showcase for what you can do with iroh.

TechDebtDevin · a month ago
Connecting phones on mobile/CGNAT with Tailscale is really one of the few software "Aha" moments I've had.
cr125rider · a month ago
Isn’t tailscale a wrapper around WireGuard? With some other hole-punch sprinkles?
odo1242 · a month ago
Well, WireGuard and WebRTC, but yes.

The real feature of Tailscale is being able to connect to devices without worrying about where they are.

scosman · a month ago
Nat punch is a big part of it, but so is key management/sync, and configuration management.
nine_k · a month ago
...and DNS, and host provisioning, and SSO, and RBAC, and other stuff you need to sell to enterprises.
kiitos · a month ago
tailscale is a wrapper around wireguard in the same way that dropbox is a wrapper around rsync
benreesman · a month ago
There's overlap, but I can see complementary uses as well. It uses some of the same STUN family of techniques. I have no plans to stop using Tailscale (or socat), but I think I'll use this every day now too.
max-privatevoid · a month ago
iroh is meant to be this library, but there is also libp2p, which existed before iroh.
binary132 · a month ago
Part of the problem with libp2p is that the canonical implementations are in Go which isn’t really well-suited to use from C++, JS, or Rust. The diversity of implementations in other languages makes for varying levels of quality and features. They really should have just picked one implementation that would be well-suited to use via C FFI and provided ergonomic wrappers for it.
zackmorris · a month ago
After writing a response about using this for games below, it occurred to me that most tunneling solutions have one or more fatal flaws that prevent them from being "the one true" tunnel. There are enough footguns that maybe we need a checklist similar to the "Why your anti-spam idea won’t work" checklist:

https://trog.qgl.org/20081217/the-why-your-anti-spam-idea-wo...

I'll start:

  Your solution..
  ( ) Can't punch through NAT
  ( ) Isn't fully cross-platform
  ( ) Must be installed at the OS level and can't be used standalone by an executable
  ( ) Only provides reliable or best-effort streams but not both
  ( ) Can't handle when the host or peer IP address changes
  ( ) Doesn't checksum data
  ( ) Doesn't automatically use encryption or default to using it
  ( ) Doesn't allow multiple connections to the same peer for channels or load balancing
  ( ) Doesn't contain window logic to emulate best-effort datagrams over about 1500 bytes
  ( ) Uses a restrictive license like GPL instead of MIT
Please add more and/or list solutions that pass the whole checklist!

rklaehn · a month ago
Nice list.

I think iroh checks all the boxes but one.

( ) Doesn't contain window logic to emulate best-effort datagrams over about 1500 bytes

So you want a way to send unreliable datagrams larger than one MTU. We don't have that, since we only support datagrams via https://datatracker.ietf.org/doc/html/rfc9221 .

You could just use streams - they are extremely lightweight. But those would then be reliable datagrams, which comes with some overhead you might not want.

So how hard would it be to implement window logic on top of RFC9221 datagrams?
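One way to sketch that window logic, purely as an illustration on top of whatever unreliable-datagram send is available: split the oversized message into MTU-sized fragments with a small header, and keep best-effort semantics by dropping the whole message when any fragment is missing. All sizes and header fields here are made up:

```python
import struct

MTU = 1200  # assumed per-fragment budget, including our 8-byte header

def fragment(msg_id, data, mtu=MTU):
    chunk = mtu - 8                       # room left after the header
    total = (len(data) + chunk - 1) // chunk
    return [struct.pack(">IHH", msg_id, i, total) + data[i*chunk:(i+1)*chunk]
            for i in range(total)]

def reassemble(frags):
    # order by fragment index; assumes all frags belong to one message
    frags = sorted(frags, key=lambda f: struct.unpack_from(">IHH", f)[1])
    _, _, total = struct.unpack_from(">IHH", frags[0])
    if len(frags) != total:
        return None                       # a fragment was lost: drop message
    return b"".join(f[8:] for f in frags)

parts = fragment(7, b"x" * 5000)
whole = reassemble(parts)
partial = reassemble(parts[:-1])          # simulate one lost fragment
```

The hard part QUIC already gives you for free is congestion control; the header games above are the easy half.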

flub · a month ago
I'm not sure I fully understand this window logic question. QUIC does MTU discovery, so if the link supports bigger datagrams the MTU will go up. Unreliable datagrams using RFC9221 can be sent up to the MTU size minus the QUIC packet overhead. So if your link supports >1500 bytes then you should be able to send datagrams >1500 bytes using iroh.
GoblinSlayer · a month ago
Also there's no solution to punch through NAT.
ilovefood · a month ago
iroh is fantastic tech.

I attended Rüdiger's (N0) workshop 2 weeks ago at the web3 summit in Berlin and was left super inspired. The code for building something like this is available here https://github.com/rklaehn/iroh-workshop-web3summit2025 and I highly recommend checking out the slides too :)

rklaehn · a month ago
Thank you for the praise! It is nice to hear that people enjoy these workshops.

I would love to see what people would build if they had a little bit more time with help from the n0 team. A one hour or even three hour workshop is too short.