Tiernan's Comms Closet

Geek, Programmer, Photographer, Network Engineer…


VMware Workstation Pro and Fusion Pro now available for free for everyone

Back in May, VMware announced that Workstation Pro and Fusion Pro would be free for non-commercial use, which was fantastic news for home users. A few days ago, they made an even better announcement: the free edition is now available to everyone, including commercial users. You can still purchase a license if you need support, but the free version functions just as well, albeit without any support.


It appears that they are discontinuing Workstation Player and Fusion Player, as the functionality they offered is now included in the free versions of Workstation Pro and Fusion Pro.

How to Download Windows Server 2025 ARM edition (workaround)

In my previous post, I mentioned that Windows Server 2025 had reached general availability, but I had no information about the ARM64 version. It appears that 4sysops has found a workaround: you can download an ARM64 build of Windows Server 2025 from uupdump.net. Since I have a few M1/M2 Macs lying around, I’ll try downloading it and see if it works on them. I’m also curious how long it will take someone to get it running on a Raspberry Pi; I think they would make excellent little AD/DNS/DHCP servers.

Windows Server 2025 GA’ed, along with System Center 2025

Microsoft has just released Windows Server 2025 and System Center 2025 to general availability. You can find more information on the Microsoft Release status site.

Current status as of November 1, 2024


Windows Server 2025 is now generally available. It delivers security advancements and new hybrid cloud capabilities in a high performing, AI-capable platform. Windows Server 2025 is Microsoft’s latest Long-Term Servicing Channel (LTSC) release for Windows Server. To download a free 180-day evaluation, visit the Microsoft Evaluation Center.


To learn more about Windows Server’s Lifecycle Policy, see the Windows Server 2025 lifecycle article. 


One aspect that hasn’t been covered yet is ARM64 support. While there were some ARM64 builds during the Insider testing phase, there’s no official word on a GA version yet. NeoWin has also published the minimum CPU requirements.

The GodBoxV3, with its first-generation Xeon SP processor, will need an upgrade before it can move to Server 2025. Hmmm….

Building Cloud Images for Proxmox

To create an Ubuntu VM for a Kubernetes cluster using Proxmox, follow these steps: download and tweak the base image, sysprep it, create a template with specified configurations, and clone the VM. Adjust settings such as memory, storage, and IP configurations. Fix shared IP issues by resetting the machine ID.

I needed to create a few Ubuntu VMs for a Kubernetes cluster for testing, and I wanted to make the process as simple as possible using Proxmox and some minimal automation. Here’s what I’ve done:

First, download the base image:

wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
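Optionally, you can verify the download against Ubuntu’s published checksums before going any further (an extra step, not part of my original flow):

wget https://cloud-images.ubuntu.com/jammy/current/SHA256SUMS

sha256sum --ignore-missing -c SHA256SUMS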

Then, tweak the image. Since I’m using my apt-cacher-ng proxy here, I’ve set the proxy for all VMs; adjust it as needed, or drop the --append-line option entirely if you don’t want a proxy. I’m also installing qemu-guest-agent here, and you can add any other packages at this point if desired.

sudo virt-customize -a jammy-server-cloudimg-amd64.img --install qemu-guest-agent --append-line '/etc/apt/apt.conf.d/00proxy:Acquire::http { Proxy "http://10.244.71.182:3142"; };'
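If you want to confirm the proxy file landed where expected, virt-cat (from the same libguestfs tool set) can read it back out of the image; a quick sanity check, assuming you kept the proxy line:

sudo virt-cat -a jammy-server-cloudimg-amd64.img /etc/apt/apt.conf.d/00proxy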

Sysprepping the image resets it to a default state. If you skip this step and clone the machine multiple times, all the clones will end up with the same machine ID and IP address. [Note: This isn’t working fully for me. See below for the changes I made to the machine ID.]

sudo virt-sysprep -a jammy-server-cloudimg-amd64.img
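As an alternative, the machine-ID reset could probably be baked into the image itself by truncating the file with virt-customize; a sketch I haven’t fully tested, which should force a fresh ID (and a fresh DHCP lease) on first boot:

sudo virt-customize -a jammy-server-cloudimg-amd64.img --truncate /etc/machine-id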

Create the template. I used ID 9000 and assigned a name; you can modify both. I’ve also tagged mine with VLAN 72 (my Kubernetes VLAN); change or remove this tag as needed. I also grow the disk by 50GB. Replace any references to “godboxv2-tank” with your storage name.

sudo qm create 9000 --name "ubuntu-2204-cloudinit-template" --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0,tag=72

sudo qm importdisk 9000 jammy-server-cloudimg-amd64.img godboxv2-tank

sudo qm set 9000 --scsihw virtio-scsi-pci --scsi0 godboxv2-tank:vm-9000-disk-0

sudo qm set 9000 --boot c --bootdisk scsi0

sudo qm disk resize 9000 scsi0 +50G

sudo qm set 9000 --ide2 godboxv2-tank:cloudinit

sudo qm set 9000 --serial0 socket --vga serial0

sudo qm set 9000 --agent enabled=1

sudo qm template 9000
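At this point the template is ready. If you want to sanity-check what you’ve built, the config can be dumped back out (optional):

sudo qm config 9000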

Clone the template into a new VM.

sudo qm clone 9000 2001 --name k8s-01

sudo qm set 2001 --sshkey godboxv3.pub

sudo qm set 2001 --memory 4096

sudo qm set 2001 --ciuser tiernano

sudo qm set 2001 --ipconfig0 ip=dhcp

Change tiernano and godboxv3.pub to your own user and SSH key, and modify the names and memory as necessary.
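Then start the VM and, since the image has qemu-guest-agent installed, you should be able to query its IP address through the agent once it has booted (a sketch, using VM 2001 from above):

sudo qm start 2001

sudo qm agent 2001 network-get-interfaces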

As mentioned earlier, I’m still running into the issue of cloned VMs sharing the same IP address. To resolve this, log into each box and run the following commands:

echo -n > /etc/machine-id

rm /var/lib/dbus/machine-id

ln -s /etc/machine-id /var/lib/dbus/machine-id

Reboot the computer, and the problem should be resolved.
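Since I needed a few of these VMs for the cluster, the clone-and-configure steps above can also be wrapped in a quick loop (a sketch, assuming sequential VM IDs from 2001 and the same user and key as above):

for i in 1 2 3; do
  sudo qm clone 9000 $((2000 + i)) --name "k8s-0${i}"
  sudo qm set $((2000 + i)) --sshkey godboxv3.pub --memory 4096 --ciuser tiernano --ipconfig0 ip=dhcp
done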

Some network upgrades going on

I’m currently in the midst of a significant network upgrade for the CloudShed. I’ve purchased two Ubiquiti UniFi Hi-Capacity Aggregation Switches, a 24-port Switch Pro PoE, a Switch Enterprise 8 PoE, a couple of U7 Pro access points, and a U6 In-Wall access point.

The two Aggregation Switches each have four 25Gb ports and 28 10Gb ports. Two of the 25Gb ports will be used to link the house and the CloudShed. The U6 In-Wall will be installed in the office, while the two U7 Pros are already in the house, powered by the Switch Enterprise 8 PoE (which supports 2.5Gb Ethernet). The 24-port PoE switch will replace my older 16-port switch, which lacks 10Gb Ethernet. More details to come as I find time to install everything.

Day 61 of #100daysofhomelab – swapping disks in a Hetzner Dedicated Machine

It’s been a while… So, for Day 61 of #100daysofhomelab, I thought I should write up how to swap a disk in a Hetzner dedicated machine.

I have a dedicated server I rent from Hetzner in Germany. It has a Xeon E5-1650 V2 processor (6 cores, 12 threads, 3.5GHz base, 3.9GHz turbo), 128GB RAM, and a pretty impressive 15 6TB HDDs. All drives are hooked to a MegaRAID controller, but because I am running Proxmox, I left it in JBOD mode and set up the 15 drives in RAIDZ-2, all in a single pool (probably not ideal, but it works for me). Every now and again, I get a message from Proxmox telling me about bad blocks… and every time it happens, I have to remember what to do: find the bad drive, report it to Hetzner, wait for them to replace it, and then add the new one back to the pool. Today it happened again, so I thought I’d better document the process, to help future me, and hopefully someone else out there…

First, we need to find the drive in question. Usually, my alerts include the serial number of the drive causing problems. So, I ran the following command:

megacli -PDList -aAll | egrep "Enclosure Device ID:|Slot Number:|Inquiry Data:|Error Count:|state"

This gives a full list of drives along with the slot number (needed when reporting to Hetzner) and the serial number. Each drive’s details start with the “Enclosure Device ID:” line, so when you find the serial number, look above it for the slot number. In my case, the issue was with the disk in Slot 10, so I opened a support ticket with Hetzner requesting a replacement. This can take an hour or more, though sometimes it’s faster; it depends on their load…

Once you get confirmation that the disk has been replaced, you need to swap it into the zpool.

First, we must check that the new drive is set up correctly. Run the following:

megacli -PDList -a0 | grep Firmware

We are looking for “Firmware state: Online, Spun Up”. If the new drive shows anything else (unconfigured or foreign, for example), we need to run the following:

megacli -CfgForeign -Scan -a0

This shows us any foreign configurations. If it reports more than 0, we run:

megacli -CfgForeign -Clear -a0

This clears out that configuration. Next, we need the Enclosure ID and Slot number for the new drive from:

megacli -PDList -aAll | egrep "Enclosure Device ID:|Slot Number:|Inquiry Data:|Error Count:|state"

because we need them to run:

megacli -PDMakeGood -PhysDrv [<enclosure>:<slot>] -a0
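For example, with the Slot 10 drive from earlier, and assuming its enclosure device ID came back as 0 (substitute whatever values the PDList output showed for your drive):

megacli -PDMakeGood -PhysDrv [0:10] -a0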

Finally, run:

megacli -CfgEachDskRaid0 WB RA Direct CachedBadBBU -a0

Note: If that fails with a message about cache data, you may need to run:

megacli -DiscardPreservedCache -L"10" -a0

This clears the cache, and then you can rerun the CfgEachDskRaid0 command. That command sets up each new disk as its own single-disk volume (effectively JBOD), which is what I use for ZFS. If you have something different, check the docs from Hetzner below.

Next, we need to swap disks in ZFS. Run

zpool status

to get the info about the missing disk, which will show as unavailable. Next, find the ID of the newly added disk:

cd /dev/disk/by-id/

ls
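The new disk usually won’t have any partition entries yet, so one way to cut the noise (a rough filter) is to hide the partition entries, leaving just the whole-disk IDs:

ls /dev/disk/by-id/ | grep -v part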

Once you’ve found the new disk, it’s a matter of running the following:

zpool replace rpool /dev/disk/by-id/scsi-3600605b008f498802aa37da51674ea7e-part3 /dev/disk/by-id/wwn-0x600605b008f498802b2a3a683752e088

Swap the scsi-36xxx and wwn-0x6xxx parts for the IDs you found, and replace rpool with your ZFS pool name.

Finally, run

zpool status

to check on progress. For more detail, refreshed every second, run:

zpool status -v 1

ZFS will now resilver the new drive in the background. Since the old drive is missing, it waits until the new one is sorted and then removes the old one from the pool. This can take some time, depending on your disks and the amount of data.

Hopefully, this helps someone!

Some links for info:

LSI RAID Controller – Hetzner Docs

Day 60 of #100daysofhomelab

Day 60 of #100daysofhomelab, and I have been sick for most of the last two weeks, which is why I haven’t been posting much… Today is going to be links only too…

Day 59 of #100daysofhomelab – Proxmox Updates, LTT Hacked, New Framework Laptops

Day 59 of #100daysofhomelab, and Proxmox have released version 7.4 of their Virtual Environment. I have not upgraded any of my machines to it just yet, but that’s the plan for the weekend. Other than that, some links:

Day 58 of #100daysofhomelab

Day 58 of #100daysofhomelab, and today is mostly a retrospective of what I did over the last few days, with some links thrown in for good measure…

Given I am going to keep GodBoxV3 running Windows Server 2022 for the foreseeable future, I installed Veeam Availability Suite (through their NFR program) and got it to back up my Hyper-V VMs, along with my ESXi VMs, to both local and Backblaze B2 storage. So far, so good.

Also, Ubiquiti released UniFi OS 3.0 for the UDM Pro, which I upgraded to this morning. Links for that are below. Some nice bits in here, like:

  • Added Wireguard VPN Server support.
  • Added VPN Client Routing.
  • Added Ad-blocking feature.
  • Added support for OpenVPN tunnel in Traffic Routes.
  • Allow adding multiple VPN Clients.

The 2.5 release of the OS had the VPN Client option, but ALL traffic went over the VPN, whether you wanted it to or not. This release lets you specify that traffic from a given host or network, or traffic to a given IP or range, goes over the VPN link. The ad-blocking feature is nice too, but I have not tried it yet (still using Pi-hole for the moment), and the WireGuard VPN option is going to be VERY handy. More testing coming soon…

Anyway, on to the links.