using NBASE-T to get more out of older systems and cabling
In my home lab, my workstation has 20TB of raw storage - about 9.06TiB usable once the 4x 5TB disks are mirrored and striped. I use this for editing and storing raw 4K video for my YouTube channel.
A few times a week I'll dump several hundred GiB of data, and a scheduled ZFS replication task will begin transferring it soon after to a backup server that's just a basic AMD APU - no, not the fancy new Ryzen 2400G, but an AMD A4 7400. It's a dual-core box, and not much to write home about. It's got only 8GB of non-ECC memory, but I'll probably be swapping it out when my 200Mbps fibre is installed.
When I used a 1Gbps link for this transfer, it would run at about 112MiB/s and take quite some time to get through all of the data. The constant churn would increase the disks' access times, even though they had unutilised bandwidth remaining. I did some local testing first to verify that sending the same data to a local SSD finished in about a quarter of the time.
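As a rough sanity check, the arithmetic looks something like this (the 300GiB dump size below is a made-up example figure, not one of my actual transfers):

```python
# Back-of-the-envelope transfer-time estimate.
GIB = 2**30
MIB = 2**20

# Hypothetical replication dump size.
dataset_bytes = 300 * GIB

# ~112 MiB/s is roughly what a saturated 1Gbps link delivers
# after TCP/IP framing overhead.
gigabit_rate = 112 * MIB

transfer_seconds = dataset_bytes / gigabit_rate
print(f"1Gbps link:   {transfer_seconds / 60:.0f} minutes")       # -> 46 minutes

# The local SSD test finished in about a quarter of the time,
# suggesting the spinning disks could absorb roughly 4x the throughput.
print(f"disk-limited: {transfer_seconds / 4 / 60:.0f} minutes")   # -> 11 minutes
```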
With this in mind, I thought: if only I had a 5Gbps network interface on both ends, perhaps I could fully max out the disks during replication. Someone on IRC recommended the ASUS XG-C100C, a ~$99 USD card that supports NBASE-T. There are a few reasons this is so convenient:
CAT5e cabling works with the card.
With high enough quality 5e cables, you can even achieve a full 10Gbps link. I didn't want to buy new CAT6(a) cable - it's only a 50 foot run from my office to the basement - so I was curious to see how the link would negotiate and what kind of speed I could achieve. It should at least work at 2.5Gbps, and hopefully 5Gbps, if not the full 10.
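For reference, here's what each possible negotiation rate works out to in file-copy terms - a simple unit-conversion sketch; real-world throughput lands a bit below these line-rate figures once protocol overhead is accounted for:

```python
MIB = 2**20

# NBASE-T / 10GBASE-T negotiation rates, in Gbps.
rates_gbps = [1, 2.5, 5, 10]

# Convert each line rate (bits/s) to the MiB/s figure you'd see in a copy.
rates_mib_s = {gbps: gbps * 1e9 / 8 / MIB for gbps in rates_gbps}

for gbps, mib_s in rates_mib_s.items():
    print(f"{gbps:>4} Gbps ~ {mib_s:6.0f} MiB/s")
```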
$99 is very cheap for a brand new 10Gbps interface
Aquantia has seemingly been dedicated to bringing us a low-cost solution for 10Gbps, and these cards have sold for as low as $79 each during Black Friday sales. It's spectacular that companies will be able to retain their existing CAT6(a) cabling for full 10Gbps connectivity.
It doesn’t require any special equipment or cables
I'm using a simple point-to-point connection between my workstation and the storage server; all I needed was a single CAT5e run between the two, and I can hit 9.7Gbps with iperf.
even if it’s not ideal, you’ll still get something - which is better than nothing
The 5e cable may not give you 10Gbps, but it should give you 5Gbps, and this is much better than NIC teaming!
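To put that comparison in numbers - a sketch assuming the replication stream is a single TCP connection, which LACP/teaming will hash onto just one physical member link:

```python
MIB = 2**20

# A single flow over a 2x1Gbps team is still capped by one member link.
team_single_flow = 1e9 / 8 / MIB   # ~119 MiB/s

# A 5GBASE-T link carries the same single flow five times faster.
nbase_t_5g = 5e9 / 8 / MIB         # ~596 MiB/s

print(f"teamed 1Gbps (single flow): {team_single_flow:.0f} MiB/s")
print(f"5GBASE-T:                   {nbase_t_5g:.0f} MiB/s")
```

Teaming only helps when there are many concurrent flows to spread across links, which a single replication stream is not.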
there are some problems
If you want to connect multiple systems at 10Gbps you'll need an NBASE-T switch, and there are not many choices; this was the only model available on Amazon.ca at the time of writing, and it's really expensive - $530 CAD for a 5-port NBASE-T switch. That's C$106 per port. The 10-port unit on Amazon.com wasn't much better. Ridiculous - you may as well buy a few Aquantia single-PHY cards, build them into an old dual-CPU Xeon system for all 40 of those beautiful PCIe lanes, and make it into your own switch.
But it's better than the situation was just a few years ago. Before buying for your home lab, you should look into SFP+ to see whether prices for used equipment are comparable. For a business, these prices might be a no-brainer.