things aren’t always as simple as they seem
Background
The Big, Bad i7.
My good friend had an ancient 2nd generation i7 2600 desktop running Windows 7 with an equally old GTX 660. It had an undersized Corsair AIO unit that kept things nice and toasty for a few years, until the combination of dust and time caused one too many problems and it shut down one day with a bad smell.
It'd been up and down, with the occasional blue screen of death, for a while, so we knew it was only a matter of time before it'd need replacement.
The MONSTER BEHEMOTH (that lives in a cardboard box :/)
Back in 2015 in an era of cheap Xeon CPUs from 2012 and dirt cheap DDR3 ECC, I found a bunch of used server components to build what I thought was, at the time, a pretty rockin’ and beefy workstation.
Specifications
- (2x) Intel Xeon E5620 - $12.50 ea. => $25
- Supermicro X8DTN+ => $125
- 96GB DDR3 PC3L-10600R => $15 per 8GB DIMM => $180
- EVGA 650W (650 N1) => $68
- (2x) Seagate 4TB SATA 7200rpm
- (2x) Intel 320 Series 160GB SSD
- NVIDIA GT710 2GB PCIe x1 GPU (host OS)
- AMD RX460 4GB (guest OS) => $200
Some of it was already sitting around, and there’s more in the system, but these were the bare minimum we required (lol ok we could have done with less than 96GB..) and the total cost at this point was about $600.
This machine served me well until I replaced it with an E5 2670 (v1) build for VFIO (GPU passthrough), as I could never get GPU passthrough working on this motherboard - honestly, I wasn't even sure it was possible.
I gave it to my friend so he could learn how servers work, and it collected dust in his closet until the big bad i7 finally bit the dust, never to return again.
The perfect winter project (December)
Luckily the i7 chose to die at a perfect time of year, right at the beginning of winter, when there are fewer compelling reasons to go outside and enjoy the sunlight - not that there is any.
It’s the perfect time to hunker down and set up Gentoo with VFIO using KVM.
Trial by Fire
My friend is not familiar with Linux; if he were doing this all on his own from scratch (well - he wouldn't, probably), he would be using an "easy" Linux distribution like Debian, and it would take him a lot longer to set up (his words, not mine).
However, I’m the one making all the decisions, so we went with what I knew would allow us the utmost flexibility - a Gentoo installation. I’m sure you can do this on Ubuntu or Debian or Arch, and probably with the same or better results. I’m not here to convince you to use Gentoo.
So his main system had died, and he'd put things into my hands to get his new-to-him dual Xeon system into the most useful configuration possible. He was in the middle of several programming projects and wanted to get back to them ASAP - of course, they were in Visual Studio, so it wasn't as if we could just get by with VSCode running on Gentoo. Trust me - we tried.
Things looking good at first..
Getting the main OS up and running took no time at all with 16 threads for compilation and 96GB of RAM to do it all in a tmpfs mount - feeling overly ambitious, I thought maybe I could do it in a single evening.
Yeah..
By about 4 or 6am, the system had just started Xorg for the first time and I even had Windows installing in QEMU using the emulated SVGA adapter. I was able to attach the physical GPU for a single boot, but Windows then lost the static IP I’d assigned and I could no longer access the VNC server..
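For reference, the "compile everything in RAM" trick is only a couple of lines of Portage configuration - something along these lines (values are illustrative, not exactly what we used):

```
# /etc/portage/make.conf - use all 16 threads for builds
MAKEOPTS="-j16"

# /etc/fstab - keep Portage's build directory in RAM; with 96GB, a generous tmpfs is painless
tmpfs   /var/tmp/portage   tmpfs   size=32G,uid=portage,gid=portage,mode=775,noatime   0 0
```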
Then things got weird
I should mention that originally the passthrough GPU was an AMD RX550 made by MSI.
This is critical information; certain GPUs, once they enter a sleep state, can never be woken back up over the PCIe bus without some non-obvious reset mechanism.
If I rebooted the VM, it would just hang at 100% CPU use and the GPU would never initialise. If I removed the PCIe host device, the VM started with the SVGA adapter and I could access the network as if nothing were wrong. Rebooting the host allowed me to reattach the GPU once more, but after another reboot of the VM the GPU was back in its sleep state, unable to wake.
This was something we dealt with for a while simply by rebooting the host every time we wanted to reboot the VM, and avoiding rebooting or shutting down the VM as much as possible.. until it was suggested that only some AMD GPUs are prone to a reset bug on the PCI bus - and it's not clear how to distinguish good cards from bad. We knew my RX460 4GB worked for GPU passthrough on my X79 system, so I suggested we send the RX550 back and replace it with an RX460.
And Then The RX460 Solved All The Problems!
Not.
On the positive side, the reset bug left with the RX550 card. On the other hand, issues continued to crop up that I had never run into before.
Windows 7 Kinda Sucks Now, aka: UEFI? hardly knew ’er!
(boom!)
We wanted to run Windows 7 because that's what he was familiar with - and you can. But it seems it has a few rough edges with VFIO, so be prepared to work for it.
Ideally you'll be running Windows from the Q35 chipset with OVMF, but I couldn't get Windows 7 to even boot properly after installing in UEFI mode. The i440FX machine type was selected for us by default by libvirt during VM creation, so we just went with it and used SeaBIOS, since at least that worked. I hadn't needed to screw with this stuff on X79, where i440FX was working reasonably well for me.. so why would I on this 5520 board?
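For context, the difference boils down to which machine type and firmware the guest is started with. Stripped down to raw QEMU flags it looks roughly like this (machine version and OVMF paths are illustrative; libvirt hides all of it behind the chipset and firmware choices at creation time):

```
# what we ended up with: i440FX machine type + SeaBIOS (the default, no extra flags needed)
qemu-system-x86_64 -machine pc-i440fx-2.11 ...

# what we wanted: Q35 + OVMF, which means booting the guest in UEFI mode
qemu-system-x86_64 -machine q35 \
    -drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2-ovmf/OVMF_CODE.fd \
    -drive if=pflash,format=raw,file=/path/to/vm/OVMF_VARS.fd ...
```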
What the hell is with AMD getting Code 43?
I'm still unclear on how or why, but the AMD Catalyst GPU drivers are funky to install on this system. The installer fails at the end with an error, but you can go into Device Manager and manually associate the AMD display driver with the device. On either Q35 or i440FX, only sometimes do I see an AMD Catalyst menu item when right-clicking the desktop. For what it's worth, I haven't noticed these problems with Windows 10/Q35/OVMF, but that's almost an entirely different platform.
Random Xorg and VM host crashes
If you've ever owned the Supermicro X8DTN+ motherboard and used it as a desktop, you've probably noticed a few unfortunate things about it:
- There is no onboard sound card
- The onboard GPU is a total dog (ATI ES1000 - yes, an ATI chip!)
- It only has one physical x16 PCIe slot, and it operates at x8 speed
- The other x8 slot has a physical key that prevents a second x16 GPU from fitting
This matters especially if you intend to use Linux / X11 as more than a glorified hypervisor - the onboard ES1000 is awful at everything; even basic 2D transformations lack acceleration. There is just no raster chip.
Trying to plug in a second x16 GPU for Xorg led us down the path of PCIe x1-to-x16 risers, the kind Bitcoin miners use to plug more GPUs into a single board. Here, we ran into problems. While it may "work" to run an x16 GPU in an x1 slot, when you engage any of the 'fun hardware' on the board, things can go sideways. Running YouTube in a browser would randomly crash the X server, and in the worst cases it would hard-lock the whole host with stack traces printed to the console.
I searched all over for a PCIe x1 GPU, and surprisingly there is only one made (that I could find), by ZOTAC: the PCIe x1 2GB GT710. It's nothing to write home to mama about, but it does all of the hardware acceleration necessary to provide a decent experience in Xorg - with the open source nouveau driver.
Once we replaced the riser, NVIDIA proprietary driver, and 660 Ti with the GT710 and open source Mesa stack, all of the Xorg crashes went away and X is now usable with HDMI audio, as well. It isn’t good for gaming at all - basic 2D games, perhaps.. but we’ve not really tested it. Maybe in another post..?
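On Gentoo, the switch to the open source stack is mostly a one-line make.conf change plus a rebuild - roughly like this (a sketch; the exact package set to rebuild will vary):

```
# /etc/portage/make.conf - build Mesa/Xorg against nouveau instead of the proprietary blob
VIDEO_CARDS="nouveau"

# then rebuild anything that honours that setting
emerge --ask --changed-use --deep @world
```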
So everything is working, right?
We'd set up the VM using libvirt and virtio-input device objects for high-performance, low-latency keyboard and mouse passthrough. That seemed to be working, the desktop felt responsive, and we were able to get all of the system drivers installed with just a couple of minor unresolved ones.
Terrible audio, terrible performance, terrible everything
“Everything sucks, where am I?” - My friend, after I forced him to run Gentoo
Performance was downright awful and audio sounded terrible, so we enabled MSI (Message Signaled Interrupts) in the Windows guest using MSI Util v2, and also added "options snd-hda-intel enable_msi=1" to /etc/modprobe.d/snd-hda-intel.conf so that the Gentoo Linux host would also use MSI on its HDA codecs. Not sure how much this mattered, but it didn't hurt.
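The host-side half of that is just a one-line modprobe.d entry:

```
# /etc/modprobe.d/snd-hda-intel.conf - ask the HDA audio driver to use MSI for its interrupts
options snd-hda-intel enable_msi=1
```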
The passed-through USB2 controller (the two weirdly-placed ports on the side of the motherboard) doesn't support MSI, but it doesn't seem to matter. Actually, you know, I recall it reporting MSI support at one time, but now it doesn't.
Mysteries, man.
And then there were games
We were finally able to play some games - the first one on the list is usually the classic Grand Theft Auto V (GTAV) from 2015. Framerates were not stellar, but they did the job despite an odd glitching that resembled excessive lag. A lot of the time it looked 'acceptable' on my side, but the dual Xeon system was experiencing frustrating "ghosting" glitches that made the experience less enjoyable.
Another favourite is World of Tanks, and this one had some annoying input issues where the whole game would stop responding to input and the mouse would go haywire after just a short time in the game. Load times here were also awful but we’re just taking whatever we can get at this point.
PEBKAC, sorta. (January)
(Problem Existed Between Keyboard And Chair, sorta)
Oopsies, we had the -object but not the -device. Once the device XML was added, Windows recognised the HID devices and I installed the latest virtio drivers for Windows.
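In raw QEMU terms, the complete pairing looks roughly like this (evdev paths are illustrative - use whatever is under /dev/input/by-id/ on your host; we applied the equivalent through the libvirt XML rather than a hand-written script):

```
# the input 'objects' that grab the host's keyboard and mouse events
-object input-linux,id=kbd1,evdev=/dev/input/by-id/usb-XXXX-event-kbd,grab_all=on,repeat=on
-object input-linux,id=mouse1,evdev=/dev/input/by-id/usb-XXXX-event-mouse

# ...and the part we forgot: the virtio HID devices the guest actually sees
-device virtio-keyboard-pci
-device virtio-mouse-pci
```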
The input issues for World of Tanks and World of Warships were resolved, but the performance was still lacking and the GTAV glitches remained. Things didn't seem as bad once we capped the number of vCPUs for Windows at 4 and disabled threads. Enabling NUMA on the host helped, but experimenting with the libvirt configuration and benchmarking remotely on my friend's machine wasn't really an option, because he'd rather be using it. Fuckin' guy!
A case of Electrical Narcolepsy (February)
aka: disable them C-states, dummy!
I don't know why, and Intel sure isn't going to tell me - but it turned out that setting the maximum C-state to 0 was all it took to resolve the remaining WoT, WoS, and GTAV glitches.
Seriously.
The “excessive lag”, ghosting problems and others with GTAV were all eliminated with this change. Of course, there is an additive effect of all the various tweaks and things we’ve attempted over the last few months, but this is the most remarkable improvement observed from just a single kernel parameter.
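For anyone following along, this change lives on the kernel command line. Ours was something in this spirit (parameter spelling hedged - intel_idle.max_cstate=0 switches off the intel_idle driver's deep C-states, and guides often pair it with processor.max_cstate=1 to cover the ACPI fallback):

```
# /etc/default/grub (or wherever your bootloader reads its kernel command line from)
GRUB_CMDLINE_LINUX_DEFAULT="... intel_idle.max_cstate=0 processor.max_cstate=1"
```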
Further performance optimization (March - oh, er, today, actually)
aka: You can’t just leave well enough alone, can you?
I've written before about tuning vCPU performance in QEMU on Threadripper, which behaves as if it were a dual-socket system, and to be fair, I hadn't done a lot of CPU benchmarking in QEMU in the past. My focus has been primarily on storage for the last several years, so the experience taught me a lot about things I hadn't considered in depth before: NUMA scheduling and memory latency.
Once I knew what to look for, we started up Cinebench R15 and Geekbench and.. got a miserable 220 multi-core result in R15.. oops.. we'd been starving the VM by hiding most of the host cores and only providing it with 4 threads.
Once we boosted the count up to a more respectable 16 vCPUs (2 sockets, 4 cores, 2 threads), which more closely resembles my own Threadripper 1900X, the raw throughput improved by a wide margin - up to 440 in Cinebench - but the memory latency and other numbers in Geekbench were still unimpressive.
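In QEMU terms that bump is a single -smp argument (libvirt's vCPU count and topology settings boil down to the same thing):

```
# 16 vCPUs presented as 2 sockets x 4 cores x 2 threads
-smp 16,sockets=2,cores=4,threads=2
```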
For those who care about such stupid things, the CPU number in the “Windows Experience Index” was only 7.2.
NUMA support in libvirt
Following the instructions in my Threadripper post, we added a couple of -numa commandline arguments to his qemu script, exposing the CPU sockets and cores to Windows exactly as they were paired in lstopo.
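For the curious, the added arguments look roughly like this (the CPU ranges and memory split are illustrative - yours should mirror whatever lstopo reports on your host):

```
# present each guest socket as its own NUMA node, splitting the guest RAM between them
-numa node,nodeid=0,cpus=0-7,mem=8192
-numa node,nodeid=1,cpus=8-15,mem=8192
```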
With this change, Windows sees two distinct CPUs that each have 4 cores and 8 threads, and Cinebench reports a much more respectable 744 for multi-core performance. Memory latency and the other Geekbench numbers improved too, and World of Warships is finally stutter-free. Apparently the game is highly sensitive to memory locality.
Summary
1. Was it worth it?
Yes. BUT - that’s only because we already had the components lying around, and we considered the end result of a snapshottable ZFS VM running on Linux to be entirely worth the hassle. My friend has wanted to migrate to Linux for years, but it just hasn’t been possible to get everything working right out of the gate with WINE and native versions.
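That "snapshottable" part isn't hypothetical convenience, either - before anything risky (Windows Update, driver experiments), it's a one-liner against the VM's dataset (the dataset name below is made up):

```
# take a cheap point-in-time copy of the guest's disk before doing something regrettable
zfs snapshot tank/vm/win7@pre-update

# ...and undo the damage afterwards if needed
zfs rollback tank/vm/win7@pre-update
```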
If we were buying these parts today for the same build… well.. let’s just say I wouldn’t. The E5620 was all I could afford when I was building the workstation back in 2015 and when I upgraded to the E5 2670 (v1), I didn’t have any regrets.
2. Will it last?
Now that we’ve got the final issues resolved, there just remain some, uh, obvious physical problems. But I have to say yes. I think this build could be pushed along for another year or two if needed, provided the hardware doesn’t rot away.
The X8DTN+ is an Enhanced Extended ATX (EE-ATX) board and won't fit into, well, any sort of case that I was willing to pay for. There are some proprietary rackmount cases from Supermicro, but they were upwards of $900 including shipping due to the size and weight - and that wouldn't fit in my friend's cramped apartment, anyway.
The system lives in a cardboard box (for now). The original plan was for it to be mounted on the wall, à la Linus Tech Tips (it's a paid video), but I'm not in the same city anymore and can't go over to help, so it just hasn't happened yet.
3. Upgrade path?
The next upgrade for this system would be to drop in some X5675 or newer LGA1366 CPUs that have more threads and push multi-core performance into the 1200s.
4. What would I recommend for others?
Just don't buy this board. In general, avoid Supermicro stuff, as it's proprietary and usually missing features you'd like in a workstation. However, if you know that you really want a power-inefficient board with good IOMMU groupings, then this one might be for you.
Intel X79 (DDR3, cheap-ish) or X99 (DDR4, expensive) are alternate options I would strongly recommend looking into if a cheap virtualization host is what you’re after. A DDR3 option is far cheaper but the DDR4 will be reusable if you decide to later upgrade to Intel’s 8th gen or AMD’s Ryzen CPUs.
Of course, I’ll always recommend ECC memory and that users go with ZFS whenever possible, whatever CPU and chipset you decide to go with.
Ultimately, Windows should die.
Seriously, just kill it already.
I don’t like Microsoft, I don’t like Windows. Anything to help a friend move away from their clutches is worth it to me.