2022 Home Lab Rebuild – Part 2

Now that some time has elapsed since I built new storage and compute hosts for my lab, most of my recent work has been on the networking, VM, and functionality side of things, though I have made some hardware changes as well.

Hardware Updates

Storage:

  • Upgraded the CPU to a Ryzen 7 5700X, as the old CPU was causing Ceph bottlenecks
  • Upgraded RAM to 48 GB, also due to bottlenecks – the amount is not a power of two because spare RAM was installed
  • New motherboard with 6x SATA III ports instead of 4 – the HDDs and SSD attached through an expansion card on the old board could not reach full throughput. Now only the SSD hosting the block.db devices sits on an expansion card, and all drives are capable of full throughput
  • RADOS benchmarks have not changed, but in production, read/write speeds have nearly doubled (a quick spot-check sketch follows this list)
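
Since I'm judging the improvement from in-guest behavior rather than RADOS benchmarks, a quick way to spot-check the cluster from a script is the python3-rados bindings. The sketch below is only a sketch: the conf path and the scratch pool name are assumptions, and rados bench remains the proper benchmarking tool.

    import time
    import rados

    # Connect using the cluster conf and default admin keyring (paths are assumptions)
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    stats = cluster.get_cluster_stats()
    print(f"{stats['kb_used'] / 1048576:.1f} GiB used of {stats['kb'] / 1048576:.1f} GiB")

    # Crude single-threaded write check: 64 x 4 MiB objects into a scratch pool
    ioctx = cluster.open_ioctx('scratch')
    payload = b'\x00' * (4 * 1024 * 1024)
    start = time.time()
    for i in range(64):
        ioctx.write_full(f'bench-{i}', payload)
    elapsed = time.time() - start
    print(f"256 MiB written in {elapsed:.1f}s (~{256 / elapsed:.0f} MiB/s)")

    # Clean up the scratch objects
    for i in range(64):
        ioctx.remove_object(f'bench-{i}')
    ioctx.close()
    cluster.shutdown()

Being single-threaded, this understates what the cluster can really do, but it's enough to confirm the drives are no longer choked by the expansion card.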

Compute:

  • Removed the RX 5600 XT – between how often my testing breaks things in the lab and the issues with running non-Steam games remotely, I decided to forgo the original plan of virtualizing my gaming PC

Desktop (new):

  • AMD Ryzen 5 7600X w/ 120mm AIO cooler
  • 32 GB DDR5-4800 RAM
  • 2x 1 TB PCIe Gen 4 NVMes in RAID 0 – one was a spare from a different, abandoned project
  • Nvidia RTX 3060 Ti
  • Since most components of my gaming PC were now in the lab, I decided to build a new one from scratch. The Ryzen 7000 series had recently launched, so I was able to use the new chips in the build

Power

  • 2x UPSes – one in the lab, one in the garage. Mainly for keeping the routers and AT&T ONT online during power outages, since cell reception is pretty much unusable on my street. They provide about 4 hours of power to the networking equipment or 20 minutes to the lab, and also enable a clean shutdown of the lab during power outages (a rough sketch of that follows)
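
I haven't detailed the shutdown mechanism here, but whether it's NUT, apcupsd, or a vendor tool, the logic boils down to something like the sketch below (shown with NUT's upsc client). The UPS name, host names, and timings are all placeholders for illustration, not my real configuration.

    import subprocess
    import time

    UPS = 'labups@localhost'  # hypothetical NUT UPS name
    HOSTS = ['esxi-01.lab.local', 'esxi-02.lab.local']  # hypothetical lab hosts

    def on_battery() -> bool:
        # `upsc <ups> ups.status` prints e.g. 'OL', 'OB DISCHRG', or 'OB LB'
        out = subprocess.run(['upsc', UPS, 'ups.status'],
                             capture_output=True, text=True).stdout.strip()
        return out.startswith('OB')

    while True:
        if on_battery():
            # Give utility power five minutes to come back before shutting the lab down
            time.sleep(300)
            if on_battery():
                for host in HOSTS:
                    subprocess.run(['ssh', f'root@{host}', 'poweroff'])
                break
        time.sleep(30)

In practice, letting the UPS daemon drive this (and letting vSphere handle guest shutdown ordering) is the less fragile route; the script is only here to show the idea.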

3D Printer

  • Previously a stock Ender 3 with BLTouch auto bed level, custom Marlin firmware, and a glass bed added
  • Upgraded board from 8-bit v1.1.4 to 32-bit silent stepper v4.2.7
  • Upgraded Marlin to the October 10th build, with additional features enabled now that the 8-bit size limit is no longer an issue
  • New bed springs
  • Dragonfly All-Metal hot-end for better print quality
  • Metal Dual Gear Extruder upgrade
  • New thermistor and heater cartridge for printing up to 450 °C
  • Filament runout sensor
  • Dual Z-axis upgrade kit
  • 40mm ultra quiet mainboard fan swap

Networking Changes

  • New Segment – 10.0.0.0/24 (VLAN 1) – This acts as a passthrough between the NSX T0 Gateway and the production network. More on this later
  • New Segment – 192.168.2.0/24 (NSX Overlay) – This network will host most of the VMs in the lab, except for vCenter, Veeam systems, VCD, and DNS servers
  • Deployed NSX 4.0 (formerly NSX-T) – most components were deployed on VLAN 0, which is currently the production network. However, the external IP for the T0 gateway cannot reside on the same VLAN as the VTEPs, hence the need for a segment on VLAN 1. My router does not support VLAN tagging, so a pfSense server was used as the gateway for this passthrough segment (a segment-creation sketch follows this list)
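
For reference, creating a segment like the 192.168.2.X overlay network can also be done through the NSX Policy API rather than the UI. The sketch below is illustrative only – the manager address, credentials, Tier-1 gateway ID, and transport zone ID are placeholders, not values from my environment.

    import requests

    NSX = 'https://nsx-manager.lab.local'  # placeholder manager address
    AUTH = ('admin', 'CHANGE_ME')          # placeholder credentials

    # Overlay segment roughly matching 192.168.2.0/24, attached to a Tier-1 gateway;
    # the Tier-1 and overlay transport zone IDs below are hypothetical.
    segment = {
        'display_name': 'lab-overlay-192-168-2',
        'connectivity_path': '/infra/tier-1s/lab-t1',
        'transport_zone_path': '/infra/sites/default/enforcement-points/default'
                               '/transport-zones/overlay-tz',
        'subnets': [{'gateway_address': '192.168.2.1/24'}],
    }

    resp = requests.patch(f'{NSX}/policy/api/v1/infra/segments/lab-overlay-192-168-2',
                          json=segment, auth=AUTH, verify=False)
    resp.raise_for_status()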

Virtual Machine Changes

  • Deleted GamingPC since it was no longer needed
  • Deleted MarNovu, instead moving to a cloud-hosted Grafana instance that currently only provides Ceph metrics
  • Replaced MineOS with AMP, running Windows Server and CubeCoders’ game server hosting panel
  • Deployed NSX-Man-01, the NSX management server
  • Deployed NSX-Edge-01, the NSX edge node hosting the T0 and T1 gateways for the 192.168.2.X segment
  • Deployed pfSense, to bridge between VLANs 0 and 1 to allow NSX traffic to talk to the production network
  • Deployed vbr12beta23, a Windows Server 2022 VM for testing a new build of the Veeam Backup & Replication (VBR) 12 beta
  • Reduced resources on most machines – with the newer CPUs, they get the same performance from fewer cores. The RAM originally assigned to most VMs was also generous because the old memory was slower and there was more of it available, so allocations have now been set more conservatively (a reconfiguration sketch follows this list)
  • Re-IPed HomeBridge, PlexSvr, and Zabbix to reside on the new NSX segment
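
The right-sizing itself was done by hand in vCenter, but for completeness, here is a minimal pyVmomi sketch of the same kind of change. The vCenter address, credentials, and VM name are placeholders, and the VM needs to be powered off unless CPU/memory hot-add is enabled.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # The lab vCenter uses a self-signed certificate, hence the unverified context
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host='vcenter.lab.local', user='administrator@vsphere.local',
                      pwd='CHANGE_ME', sslContext=ctx)
    content = si.RetrieveContent()

    # Find the VM by name (placeholder name)
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == 'Zabbix')

    # Drop the VM to 2 vCPUs and 4 GiB of RAM
    spec = vim.vm.ConfigSpec(numCPUs=2, memoryMB=4096)
    vm.ReconfigVM_Task(spec=spec)

    Disconnect(si)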

Still to Come

  • Configuration of VMware Tanzu on the cluster
  • Zabbix configuration to monitor resources
  • Zabbix integration with Grafana
  • VMware Cloud Director configuration
  • Deployment of nested Hyper-V cluster
  • Deployment of nested RHV cluster
  • Deployment of nested Nutanix AHV Community cluster
  • Conversion of HomeBridge, piHole, and PlexSvr to Kubernetes containers
  • Backup of VMware Tanzu with Veeam
  • Backup of nested Hyper-V VMs with Veeam
  • Backup of nested RHV VMs with Veeam
  • Backup of nested AHV VMs with Veeam
  • Configuration of SureBackup jobs
  • Configuration of Veeam Enterprise Manager
  • Configuration of Veeam ONE and integration with Grafana
  • Configuration of Veeam Disaster Recovery Orchestrator
  • Auto power on/off of lab servers based on UPS status
  • Pi KVM for lab servers
  • Fan vent for 3D printer PSU to redirect noise
  • Better cable management for 3D printer
  • Hot-end fan upgrades for 3D printer
  • Relocate filament spool
  • Octoprint
