This is an outstanding blog post. Initially, the title did little to captivate me, but the blog post was so well written that I got nerd-sniped. Who knew this little adapter was so fascinating! I wonder if the manufacturer is buying the Mellanox cards used from data center tear-downs. The author claims they can be had for only 20 USD online. That seems too good to be true!
> The author claims they can be had for only 20 USD online. That seems too good to be true!
In my experience, the cheap eBay MLX cards are DellEMC/HPE/etc OEM cards. However I also encountered zero problems cross-flashing those cards back to generic Mellanox firmware. I'm running several of those cross-flashed CX-4 Lx cards going on six or seven years now and they've been totally bulletproof.
I believe the author is talking about the OCP (2.0) network card itself, which these adapters use internally. The OCP NICs are quite cheap compared to PCIe cards - here's 100GbE for 100!
https://ebay.us/m/HMQAph
This 100GbE card is an OCP 2.0 type 2 adapter, which will _probably_ not work with the PX PCB since that NIC has two of these mezzanine connectors, and PX only one.
What also may not work are Dell rNDC cards. They look like they have OCP 2.0 type 1 connectors, but may not quite fit (please correct me if I'm wrong). They do however have a nice cooling solution, which could be retrofitted to one of the OCP 2.0 cards.
I've also ordered a Chelsio T6225-OCP card out of curiosity. It should fit in the PX adapter but requires a 3rd-party driver on macOS (which then supports jumbo frames, etc.)
What also fits physically is a Broadcom BCM957304M3040C, but there are no drivers on macOS, and I couldn't get the firmware updated on Linux either.
Ha! Been running these for years on both linux and windows (on lenovo x1 laptops). Using cheap chinese thunderbolt-to-nvme adapters + nvme-to-pcie boards + mellanox cx4 cards (recently got one cx5 and a solarflare x2).
If you don’t mind me asking, what are you using these for? Saturating these seems like it would have reasonably few workloads outside of like cdn or multi-tenant scenarios. Curious what my lack of imagination is hiding here.
Officially: to access the NAS and get raw market-data files (tens to hundreds of gigabytes a day), which aren't needed on the laptop every day, only once in a while to fix or analyze something.
Really: because I can, and it is fun. I upgraded my home lan to 10G, because used 10G hardware is cheap (and now 25G enters the same price range).
I do media production, and sometimes move giant files (like ggufs) around my network, so 25 Gbps is more useful than 10 Gbps, if it's not too expensive.
Ha. I got one of the 10G Thunderbolt adapters several years ago, and eventually started having problems with Zoom calls around noon: dropped connections and stuttering. Zoom restarts usually fixed the problem.
After it happened 3-4 times, I started debugging. It turned out that we usually get at least a bit of sunlight around noon, as it burns away the morning clouds. And my Thunderbolt box was in direct sunlight, and eventually started overheating.
And a Zoom restart made it fall back onto the Wifi connection instead of wired.
I fixed that by adding a small USB-powered fan to the Thunderbolt box as a temporary workaround. I just realized that it's been like this for the last 3 years: https://pics.ealex.net/s/overheat
Thunderbolt is basically external PCIe, so this is not so surprising. High speed NICs do consume a relatively large amount of power. I have a feeling I've seen that logo on the board before.
I don't know how to measure the direct power impact on a MacBook Pro (since it's got a battery), but the typical power consumption of these cards is 9 W, not much more than Aquantia 10 GBit cards.
Also, if you remember where you saw that logo, please let me know!
JFYI, for measuring power draw, you might be able to use `macmon`[0] to see the total system power consumption. The values reported by the internal current sensor seem to be quite accurate.
[0] https://github.com/vladkens/macmon
Plus 1-2.5 W per active cable. You need the heatsinks, as the CX4 cards expect active airflow, and so do active transceivers.
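Adding up the figures quoted above gives a rough power budget for the enclosure (a sketch; the port count is an assumption, not something from the post):

    # Rough power budget for the adapter enclosure, using figures from the comments above.
    nic_w = 9.0            # typical NIC power quoted a few comments up
    cable_w = 2.5          # upper end of the 1-2.5 W per active cable
    active_ports = 2       # assumption: dual-port card with both ports in use
    total_w = nic_w + active_ports * cable_w
    print(f"~{total_w:.0f} W to dissipate in a small passive box")  # -> ~14 W

Not a lot in absolute terms, but in a sealed fanless box it's easy to see why things run hot.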
I have a 10gbit dual port card in a Lenovo mini pc. There is no normal way to get any heat out of there so I put a 12v small radial fan in there as support. It works great at 5v: silent and cool. It is a fan though so might not suit your purpose.
It isn't. There is no sense in which "Thunderbolt is basically external PCIe". Thunderbolt provides a means of encapsulating PCIe over its native protocols, which puts PCIe on the same footing as other encapsulated things like DisplayPort and, for TB4 and later, USB.
I'm surprised you are only getting 20gbit/s. I did not expect PCIe to be the limiting factor here. I've got a 100gbit cx4 card currently in a PCIe3 X4 slot (for reasons, don't judge) and it easily maxes that out. I would have expected the 25g cx4 cards to at least be able to get everything out of it. RDMA is required to achieve that in a useful way though.
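As an aside, rough numbers on where ~20 Gbit/s comes from, assuming the Thunderbolt controller hands the NIC a PCIe 3.0 x4 link and the commonly quoted ~22 Gbit/s cap on TB3 PCIe tunnelling applies (a back-of-envelope sketch, not a measurement):

    # Back-of-envelope bandwidth budget for a 25G NIC behind a Thunderbolt 3 tunnel.
    gen3_per_lane = 8.0 * 128 / 130   # 8 GT/s with 128b/130b encoding ~= 7.88 Gbit/s
    pcie_x4 = 4 * gen3_per_lane       # ~31.5 Gbit/s if the lanes were the only limit
    tb3_pcie_cap = 22.0               # assumed: commonly quoted TB3 PCIe payload cap
    print(f"PCIe 3.0 x4 usable:  {pcie_x4:.1f} Gbit/s")
    print(f"TB3 PCIe tunnel cap: {tb3_pcie_cap:.1f} Gbit/s")
    # After Ethernet/TCP framing overhead, ~20 Gbit/s of goodput is roughly what
    # you'd expect from a 25G card in this setup, even before thermals come into it.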
Yeah, it's because the network card adapter's heatsink is sandwiched between two PCBs. Not great, not terrible, works for me.
The placement is mostly determined by the design of the OCP 2.0 connector. OCP 3.0 has a connector at the short edge of the card, which allows exposing/extending the heat sink directly to the outer case.
If somebody has the talent, designing a Thunderbolt 5 adapter for OCP 3.0 cards could be a worthwhile project.
A flex PCB connecting to the OCP2 connector would make it possible to put the converter board behind the NIC board, exposing the NIC board to the aluminum case so the case itself could be used as a heatsink (it would need a split case, so the NIC board can be screwed to one side of the case, pressing the main chip against it via a thermal pad).
As a stop-gap, I'd see if there was any way to get airflow into the case - I'd expect even a tiny fan would do much more than those two large heatsinks stuck onto the case (since the case itself has no thermal connection to the chip heatsink).
Any idea why ethernet stagnated in terms of speed? There was a time it was so much faster compared to usb. Now even wifi seems to be faster.
Sure one can buy nice ethernet cards and cables, but the reality is that if you grab a random laptop/desktop from best buy and a cable, you are looking at best at a 2.5Gb/s speed.
The new low-power Realtek chipsets will definitely push 10 GbE forward because the chipset won't be much more expensive to integrate and run than the 2.5Gbps packages.
It all comes down to performance per Watt, the availability of cheap switching gear, and the actual utility in an office / home environment.
For 10 Gbps, cabling can be an issue. Existing "RJ45"-style Cat 6 cables could still work, but maybe not all of them.
Higher speeds will most likely demand a switch to fiber (for anything longer than a few meters) or Twinax DAC (for inter-device connects). Since Wifi already provides higher speeds, one may be inclined to upgrade just for that (because at some point, Wireless becomes Wired, too).
That change comes with the complexity of running new cabling, fiber splicing, worrying about different connectors (SFP+, SFP28, SFP56, QSFP28, ...), incompatible transceiver certifications, vendor lock-in, etc. Not a problem in the datacenter, but try to explain this to a layman.
Lastly, without a faster pipe to the Internet, what can you do other than NAS and AI? The computers will still get faster chips but most folks won't be able to make use of the bandwidth because they're still stuck on 1Gbps Internet or less.
But that will change. Swiss Init7 has shown that 25 Gbps Internet at home is not only feasible but also affordable, and China seems to be adding lots of 10G, and fiber in general.
Fun times ahead.
Ethernet did not stagnate. Ethernet on UTP did stagnate due to reaching the limits of the technology, but Ethernet continues to advance over fiber.
For 10 Gbps I find it simpler and cheaper to use fiber or DACs, but motherboards don't provide SFP+, only RJ45 ports. Above 10 Gbps, copper is a no-go. SFP28 and above would be nice to have on motherboards, but that's a dream with almost zero chance of happening. For most people RJ45 + WiFi 7 is good enough; computer manufacturers will not add SFP+ or SFP28 for a small minority of people.
> Any idea why ethernet stagnated in terms of speed? There was a time it was so much faster compared to usb. Now even wifi seems to be faster.
Practically speaking, a lot of the transfer speed advertised by wifi is marketing hogwash barely backed by reality, especially in congested environments.
> Sure one can buy nice ethernet cards and cables, but the reality is that if you grab a random laptop/desktop from best buy and a cable, you are looking at best at a 2.5Gb/s speed.
For both laptops and desktops: PCIe lanes. Intel doesn't provide many lanes, so manufacturers don't want to waste valuable lanes permanently for capabilities most people don't ever need.
For laptops in particular, power draw. The faster you push copper, the more power you need. And laptops have even fewer PCIe lanes available to waste.
For desktops, it's a question of market demand. Again - most applications don't need ultra high transfer rate, most household connectivity is DSL and (G)PON so 1 GBit/s is enough to max out the uplink. And those few users that do need higher transfer rates can always install a PCIe card, especially as there is a multitude of different options to provide high bandwidth connectivity.
> Practically speaking, a lot of the transfer speed advertised by wifi is marketing hogwash barely backed by reality, especially in congested environments.
Yes but a hogwash of several gigabits sometimes does give you real-world performance of more than a gigabit.
> Intel doesn't provide many lanes, so manufacturers don't want to waste valuable lanes permanently for capabilities most people don't ever need.
It's been years since a single lane could do 10 Gbps, and even more years since a single lane could do 5 Gbps.
Also don't ethernet ports tend to be fed by the chipset? So they don't really take lanes.
> Any idea why ethernet stagnated in terms of speed? There was a time it was so much faster compared to usb. Now even wifi seems to be faster.
wifi is not faster.
However ethernet is not as critical as it used to be, even at the office. People like the convenience of having laptops they can move around. Unless you are working from home, having a dedicated office space is now seen as a waste of space. If the speed of the wifi is good enough when you are in a meeting room or in your kitchen, there is no reason to plug in your laptop when you move to another place, especially if most connections are to the internet and not the local network. In the workplace, most NAS have been replaced by onedrive / gdrive; at home, NAS use has always been limited to a niche population: nerds/techies, photographers, music or video producers...
We have 400GbE, which is certainly faster than USB... but:
On consumer devices, I think part of the issue is that we’re still wedded to four-pair twisted copper as the physical medium. That worked well for Gigabit Ethernet, but once you push to 5 or 10 Gb/s it becomes inherently expensive. Twisted pair is simply a poor medium at those data rates, so you end up needing a large amount of complex silicon to compensate for attenuation, crosstalk, and noise.
That's doable, but the double whammy is that most people use the network for 'internet' and 1G is simply more than enough; 10G therefore becomes quite niche, so there's no enormous volume to overcome the inherent issues at low cost.
Wireless happened, I'd think. People started using wifi and cellular data for everything, so applications had to adapt to this lowest common denominator, and consumer broadband demand for faster-than-wifi speeds isn't there. Plus operators put all their money into cellular infra leaving no money to update broadband infra.
Wifi now can pretty realistically beat 2.5gbit/s while most Ethernet is still gigabit. It just seems strange to live in a world where the average laptop will get a faster connection speed over wifi than plugged in to Ethernet.
Small thing: I just checked Amazon.com: https://www.amazon.com/s?k=thunderbolt+25G&crid=2RHL4ZJL96Z9...
I cannot find anything for less than 285 USD. The blog post gave a price of 174 USD. I have no reason to disbelieve the author, but a bummer to see the current price is 110 USD more!
I think, tragically, the blog post has caused this price increase.
The offers on Amazon are most likely all drop shippers trying to gauge a price that works for them.
You might have better luck ordering directly from China for a fraction of the price: https://detail.1688.com/offer/836680468489.html
I'm going to try a couple other fan assisted cooling options, as I'd like to keep the setup reasonably compact.
I just ran fiber to my desk and I have a more expensive QNAP unit that does 10G SFP+, but this will let me max out the connection to my NAS.
Although I managed to panic the kernel a couple of times without the extra heatsinks on...
Pic of a previous cx3 (10 gig on tb3) setup: https://habrastorage.org/r/w780/getpro/habr/upload_files/d3c...
10gig can saturate full speed; 25G in my experience rarely even reaches the same 20G the author observed.
https://support.apple.com/guide/mac-help/ip-thunderbolt-conn... etc
You'd also mostly be limited to short cables (1-2m) and a ring topology.
Edit: forgot it isn't "true" PCIe but tunneled.
And while not every cat6 will do 10, it would still be worth a shot, and devices aren't using 5; instead they're using even less.
Not to mention that cat8 will happily do 40Gbps, as long as you can get from your switch to your end devices within 30 meters.
Servers had a reason to spend for the 10G, 25G and 40G cards which used 4 lanes.
There are 10 Gigabit chips that can run off of one PCI-E 4.0 lane now, and the 2.5G and 5G speeds are supported (802.3bz).
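Rough per-lane numbers behind that, counting only encoding overhead (a sketch; real-world throughput is a bit lower once protocol overhead is included):

    # Usable bandwidth of a single PCIe lane per generation (encoding overhead only).
    lanes = {
        "PCIe 2.0 (8b/10b)":    5.0 * 8 / 10,      # 4.0 Gbit/s
        "PCIe 3.0 (128b/130b)": 8.0 * 128 / 130,   # ~7.9 Gbit/s
        "PCIe 4.0 (128b/130b)": 16.0 * 128 / 130,  # ~15.8 Gbit/s
    }
    for gen, gbps in lanes.items():
        print(f"{gen}: {gbps:.1f} Gbit/s per lane -> carries 10GbE at line rate: {gbps > 10}")

So a Gen4 x1 link comfortably feeds a 10GbE port, while a Gen3 x1 link falls just short of line rate.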