rwmj · 2 days ago
passt (the network stack you might be using if you're running qemu or podman containers) also has no dynamic memory allocation. I've always thought that's quite an interesting achievement. https://blog.vmsplice.net/2021/10/a-new-approach-to-usermode... https://passt.top/passt/about/#security

Deleted Comment

rpcope1 · 2 days ago
It would be interesting to know why you would choose this over something like the Contiki uIP or lwIP that everything seems to use.
RealityVoid · 2 days ago
Not sure if they do for _this_ package, but the Wolf* people's business model is usually selling certification packages: you can put their code in products that need certification, and you offload liability. You also get the people who wrote it, whom you can pay for support. I kind of like them; I had a short project where I had to call on them to get WolfSSL working with an ATECC508 device, and the support was pretty good.
jpfr · 2 days ago
As the project is GPL'ed, I guess they sell a commercial license. GPL is toxic for embedded commercial software, but it can be good marketing for the commercial offering.

fulafel · a day ago
How does it deal with all the dynamic TCP buffering, where buffers may get quite large?
Ao7bei3s · a day ago
It has a fixed maximum number of concurrent sockets, and each socket has queues backed by per-socket fixed-size transmit and receive buffers (see `rxmem` and `txmem` in `struct tsocket` [1]). This is fine, because in TCP each side advertises its remaining buffer space via the window size header field [2] (possibly with its meaning modified by the window scale option during the initial handshake; see [3] and `struct PACKED tcp_opt_ws`), and possibly also how much it can maximally receive in one packet (via the MSS option on the initial handshake [4], possibly modified by intermediary systems via MSS clamping).

wolfip has unusually small buffer sizes, hardcoded via #define, and everything else (e.g. congestion control) is pretty rudimentary too, but otherwise it's pretty much the same as a "normal" implementation.

[1] https://github.com/wolfSSL/wolfip/blob/60444d869e8f451aa2dca... [2] https://github.com/wolfSSL/wolfip/blob/60444d869e8f451aa2dca... [3] https://github.com/wolfSSL/wolfip/blob/60444d869e8f451aa2dca... [4] https://github.com/wolfSSL/wolfip/blob/60444d869e8f451aa2dca...

fulafel · a day ago
Ok. So I guess it doesn't try to go very fast.
CyberDildonics · 2 days ago
Are there TCP/IP stacks out there in common use that are allocating memory all the time?
fulafel · a day ago
Yes, TCP is pretty hungry for buffers. The bandwidth*delay product can eat gigabytes of memory on a server: you have to be ready to retransmit anything that's in flight, i.e. anything you haven't received the ack for yet.
nly · a day ago
The bandwidth-delay product for a 10 Gbps stream at 300 ms RTT theoretically only requires ~384 MB.

One option is simply to keep buffers small and fixed, and to disconnect blocked clients on write() after some timeout.

CyberDildonics · a day ago
Needing memory doesn't have to mean allocating memory over and over. Memory allocation is expensive; if someone is doing that, reusing memory is going to be by far the best optimization.
wmf · 2 days ago
Packets and sockets have to be stored in memory somehow. If you have a fixed pool that you reuse, it's basically a slab allocator.
CyberDildonics · a day ago
You need some memory, but that doesn't mean you would constantly allocate it. There is a big difference between a few allocations and allocating in a hot loop.
bobmcnamara · a day ago
Yes, it is pretty common.

However, sometimes the buffers are pooled, so buffer-allocator contention only occurs within the network stack or within a particular NIC.

Deleted Comment

sedatk · 2 days ago
It only implements IPv4, which explains to a degree why IPv6 isn't ubiquitous: it's costly to implement.
hrmtst93837 · a day ago
If you want IPv6 without dynamic allocation, you end up rewriting half the stack anyway, so it's probably not what most embedded engineers are itching to spend budget on. The weird part is that a lot of edge gear will be stuck in legacy-v4 limbo just because nobody wants to own that porting slog, which means "ubiquitous IPv6" will keep being a conference slide more than a reality.
notepad0x90 · 2 days ago
It's just not worth it. The only thing keeping it alive is people being overly zealous about it. If the cost to implement is measured as '1', the cost to administer it is more like '50'.
sedatk · 2 days ago
> the only thing keeping it alive is people being overly zealous over it

Hard disagree. It turned out to be great for mobile connectivity and IoT (Matter + Thread).

> the cost to administer it is like '50'.

I'm not sure that's true. It feels like less work to me, because you don't need to worry about NAT or DHCP as much as you do with IPv4.

nicman23 · a day ago
What? Have you seen IPv4 block pricing?
toast0 · 2 days ago
Eh. IPv6 is probably cheaper to run than large-scale CGNAT. It's well deployed in mobile and in areas without a lot of legacy IPv4 assignments. Most of the high-traffic content networks support it, so if you're an eyeball network, you can shift costs away from CGNAT to IPv6. You still have to do both, though.

Is it my favorite? No. Is it well supported? Not everywhere. Is it going to win eventually? Probably, but maybe IPv8 will happen, in which case maybe they'll learn from this and it'll take 10 years to reach 50% of traffic instead of 30.

gnerd00 · 2 days ago
my 15 year old Macbook does IPv6 and IPv4 effortlessly
preisschild · 18 hours ago
Matter (a smart home connectivity standard used by many embedded devices) uses IPv6. Doesn't seem to be a problem there.