Tiernan's Comms Closet

Geek, Programmer, Photographer, Network Engineer…


How to use Cloudflare Warp with a UDM Pro

If you're considering using Cloudflare Warp for specific machines on your network, you can easily install the Warp client directly on them. It supports various operating systems, including Windows, Linux, macOS, iOS, and Android. However, if you need to use it on devices that can't run the client, for example NAS devices or smart TVs, this tutorial may be helpful.

First, please note that this is not an officially supported option. Cloudflare might change their configuration at some point and break this setup, so consider yourself warned.

What you need:

  • UDM Pro (this should also work on other Ubiquiti UniFi gateways, but the UDM Pro is the one I have).
  • Wireguard Configuration File Generator (WGCF). This tool generates a WireGuard configuration file based on your Cloudflare settings.
  • Somewhere to run the commands below (see the sketch just after this list). I ran them on my MacBook Pro, but they should also work on Windows or Linux.
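
As a rough sketch, the whole command-line part boils down to the following (assuming wgcf is already installed and on your PATH; each step is explained below):

#!/bin/sh
# register a Warp client with Cloudflare (creates wgcf-account.toml)
wgcf register
# generate a WireGuard profile from that account (creates wgcf-profile.conf)
wgcf generate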

First, install WGCF. I installed it by running

brew install wgcf

on my MacBook Pro.

Next, run:

wgcf register

This will register a client on your machine and leave a wgcf-account.toml file in your working folder. Next, run:

wgcf generate

You'll be left with a wgcf-profile.conf file in your working folder. Open this file in a text editor to access the details you'll need for the next steps.

Go to your UniFi Network dashboard, click on "Settings," then select "VPN" and "VPN Client." Click "Create New," choose "WireGuard" as the protocol, and change the "Setup" to "Manual."

The configuration file you created earlier should resemble this:

[Interface]
PrivateKey = <PRIVATEKEY>
Address = <IPv4Address>, <IPV6Address>
DNS = 1.1.1.1, 1.0.0.1, 2606:4700:4700::1111, 2606:4700:4700::1001
MTU = 1280
[Peer]
PublicKey = <ServerPublicKey>
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = <ServerEndpoint>

Use the contents of PrivateKey to overwrite the existing Private Key; this will automatically fill in your Public Key. Next, set your Tunnel IP to the value listed for IPv4Address, dropping the trailing /nn and putting that number in the Netmask field instead (mine was a /32). Server Address is the host part of ServerEndpoint; check the port in that value and enter it as well. The Public Server Key is ServerPublicKey. Finally, add the IPv4 DNS settings from the configuration and click Apply Changes.
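
If you'd rather pull those values out on the command line than eyeball the file, a quick grep does the job (a simple sketch, assuming the profile is in your current folder and named wgcf-profile.conf):

grep -E 'PrivateKey|Address|DNS|PublicKey|Endpoint' wgcf-profile.conf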

After a few seconds, the status should change to “Connected”.

Next, you need to configure the Policy-Based Routes. This is located under the routing section, specifically under the heading “Policy-Based Routes.”

Here, you can name the rule and decide whether you want to send all traffic or specific traffic.

For all traffic, you can select a specific device or the entire network. For instance, in my case, all traffic from my Guest network is routed through Warp.

You can also set it to send only traffic to specific destinations.

Fallback allows traffic to fail back to one of your other connections if the Warp connection goes down.

Finally, click Add Entry at the bottom. Now, run some tests on that machine and see the traffic counts increase.
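
One easy check from a machine covered by the rule is Cloudflare's trace endpoint, which includes a warp= line that should read on when traffic is going over Warp:

curl https://www.cloudflare.com/cdn-cgi/trace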

That's it. You can choose which devices, networks, or even destinations you want to send over Cloudflare. Happy hunting.

How to Set Up Mac OS 9 on QEMU

Want to experience the classic Apple operating system on modern hardware? Emulating Mac OS 9 with QEMU is the way to go! This guide walks you through setting up Mac OS 9 in QEMU, from creating a virtual hard drive to installing the operating system. Let's get started!

Prerequisites

Before you begin, make sure you have these things:

A computer that can run QEMU (macOS, Linux, or Windows).

A Mac OS 9 installation ISO (such as the Mac OS 9.2.2 Universal Install; check Archive.org).

A version of QEMU with sound support (like qemu-screamer).

You should also know a bit about using the terminal.

Step 1: Install QEMU

 

Download and install a version of QEMU that supports PowerPC emulation. The qemu-screamer fork is recommended for better audio support.

Clone the repository:

 

git clone -b screamer https://github.com/mcayland/qemu qemu-screamer

cd qemu-screamer

 

Configure and compile:

./configure --target-list="ppc-softmmu" --audio-drv-list="coreaudio" --enable-libusb --enable-kvm --enable-hvf --enable-cocoa

make

 

The compiled binary will be located in qemu-screamer/ppc-softmmu/qemu-system-ppc.

 

Step 2: Create a Virtual Hard Drive

Use the qemu-img tool to create a virtual hard drive for Mac OS 9:

 

./qemu-img create -f qcow2 macos9.img 2G

 

Replace 2G with your desired size if needed. Mac OS 9 does not require much space, so 2 GB is generally sufficient.
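
You can sanity-check the image afterwards with qemu-img info, which reports the format and virtual size:

./qemu-img info macos9.img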

 

Step 3: Prepare the Installation Media

Ensure you have a bootable ISO of Mac OS 9. If you do not have one, download it from resources like “Mac OS 9 Lives.” Place the ISO in an accessible directory on your system.

Step 4: Start QEMU and Begin Installation

Run QEMU with the following command to boot into the Mac OS 9 installer:

./qemu-system-ppc \
-L pc-bios \
-cpu g4 \
-M mac99,via=pmu \
-m 512 \
-hda macos9.img \
-cdrom "/path/to/Mac_OS_9.iso" \
-boot d \
-g 1024x768x32 \
-device usb-mouse \
-device usb-kbd

 

Explanation of key flags:

 

    -cpu g4: Emulates a G4 processor.

    -M mac99,via=pmu: Sets the machine type to emulate a PowerMac G4.

    -m 512: Allocates 512 MB of RAM.

    -hda macos9.img: Specifies the virtual hard drive.

    -cdrom: Points to your Mac OS 9 installation ISO. Have a look on archive.org for the ISO.

    -boot d: Boots from the CD-ROM.

 

Step 5: Initialize and Install Mac OS 9

 

Once QEMU boots, open “Drive Setup” from the Utilities folder.

Select the uninitialized disk and click “Initialize.”

Choose “Mac OS Extended” as the file system and proceed.

After initializing, return to the installer and follow on-screen instructions to install Mac OS 9 onto your virtual hard drive.

 

The installation process typically takes about 7–10 minutes.

Step 6: Boot into Mac OS 9

After installation is complete:

 

Shut down QEMU.

Modify the boot command to boot from the hard drive instead of the CD-ROM:

./qemu-system-ppc \
-L pc-bios \
-cpu g4 \
-M mac99,via=pmu \
-m 512 \
-hda macos9.img \
-boot c \
-g 1024x768x32 \
-device usb-mouse \
-device usb-kbd

 

Start QEMU again, and it should boot into your newly installed Mac OS 9 environment.

 

Optional: Enable Audio Support

If using qemu-screamer, audio can be enabled by ensuring CoreAudio is configured during compilation (--audio-drv-list="coreaudio"). This setup allows sound output within Mac OS 9.

Tips and Troubleshooting

 

Backup Your Disk Image: After installation, back up your virtual hard drive (macos9.img) to avoid reinstalling if issues arise.
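
For example, a straight copy of the qcow2 file while the VM is shut down is enough (a simple sketch; adjust the file names to suit):

cp macos9.img macos9-fresh-install.img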

Adjust RAM: While Mac OS 9 can run on as little as 40 MB of RAM, allocating at least 512 MB ensures smoother performance.

Networking: Add networking support with flags like -netdev user,id=mynet and -device sungem,netdev=mynet.
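
As a sketch, the boot command from Step 6 with user-mode networking added would look something like this (the sungem device is the NIC model typically paired with the mac99 machine):

./qemu-system-ppc \
-L pc-bios \
-cpu g4 \
-M mac99,via=pmu \
-m 512 \
-hda macos9.img \
-boot c \
-g 1024x768x32 \
-device usb-mouse \
-device usb-kbd \
-netdev user,id=mynet \
-device sungem,netdev=mynet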

 

By following these steps, you’ll have a fully functional emulation of Mac OS 9 running on QEMU! Enjoy exploring this nostalgic operating system.

VMWare Workstation Pro and Fusion Pro now available for free for everyone

Back in May, VMware announced that VMware Workstation Pro and Fusion Pro would be free for non-commercial use. This was fantastic news for non-commercial users. However, a few days ago, they made an even better announcement: the free edition is now available to all users, including commercial users. While you can still purchase a license if you require support, the free version functions just as well, albeit without any support.

 

It appears that they are discontinuing Workstation Player and Fusion Player, as the functionality they offered is now included in the free version of Workstation and Fusion. 

How to Download Windows Server 2025 ARM edition (workaround)

In my previous post, I mentioned that Windows Server 2025 had reached general availability, but I had no information about the ARM64 version. It appears that 4sysops has found a workaround: you can download an ARM64 build of Windows Server 2025 from uupdump.net. Since I have a few M1/M2 Macs lying around, I'll try downloading it and see if it works on them. I'm also curious how long it would take someone to get this running on a Raspberry Pi; I believe they would make excellent little AD/DNS/DHCP servers.

Windows Server 2025 GA’ed, along with System Center 2025

Microsoft has just released Windows Server 2025 and System Center 2025 in General Availability. You can find more information on the Microsoft Release status site.

Current status as of November 1, 2024

 

Windows Server 2025 is now generally available. It delivers security advancements and new hybrid cloud capabilities in a high performing, AI-capable platform. Windows Server 2025 is Microsoft’s latest Long-Term Servicing Channel (LTSC) release for Windows Server. To download a free 180-day evaluation, visit the Microsoft Evaluation Center.

 

To learn more about Windows Server’s Lifecycle Policy, see the Windows Server 2025 lifecycle article. 

 

One aspect that hasn't been discussed yet is ARM64 support. While there were some ARM64 releases during the testing phase in the Insiders program, there's no official word on a GA version yet. The minimum CPU requirements are also worth a look (covered over on NeoWin).

The GodBoxV3, equipped with its first-generation Xeon SP processor, requires an upgrade to transition to Server 2025. Hmmm….

Building Cloud Images for Proxmox

To create an Ubuntu VM for a Kubernetes cluster using Proxmox, follow these steps: download and tweak the base image, sysprep it, create a template with specified configurations, and clone the VM. Adjust settings such as memory, storage, and IP configurations. Fix shared IP issues by resetting the machine ID.

I needed to create a few Ubuntu VMs for a Kubernetes cluster for testing, and I wanted to make the process as simple as possible using Proxmox and some minimal automation. Here’s what I’ve done:

First, Download the base image:

wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img

Then, tweak the image. Since I’m using my apt-cacher-ng proxy here, I’ve set the proxy for all VMs. You can remove it or adjust it as needed. If you want to remove it, simply remove the append-line option. Additionally, I’m installing qemu-guest-agent here. You can add any additional items at this point if desired.

sudo virt-customize -a jammy-server-cloudimg-amd64.img --install qemu-guest-agent --append-line '/etc/apt/apt.conf.d/00proxy:Acquire::http { Proxy "http://10.244.71.182:3142"; };'

Sysprepping the image resets it to a default state. If you skip this step and clone the machine multiple times, all the clones will end up with the same machine ID and IP address. [Note: this isn't working fully for me; see below for the changes I made to the machine ID.]

sudo virt-sysprep -a jammy-server-cloudimg-amd64.img

Create the template. I used ID 9000 and assigned a name; you can modify both. I've also tagged mine with VLAN 72 (my Kubernetes VLAN); feel free to change or remove this tag as needed. The disk is grown by an extra 50GB, and you should replace any references to "godboxv2-tank" with your own storage name.

sudo qm create 9000 --name "ubuntu-2204-cloudinit-template" --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0,tag=72

sudo qm importdisk 9000 jammy-server-cloudimg-amd64.img godboxv2-tank

sudo qm set 9000 --scsihw virtio-scsi-pci --scsi0 godboxv2-tank:vm-9000-disk-0

sudo qm set 9000 --boot c --bootdisk scsi0

sudo qm disk resize 9000 scsi0 +50G

sudo qm set 9000 --ide2 godboxv2-tank:cloudinit

sudo qm set 9000 --serial0 socket --vga serial0

sudo qm set 9000 --agent enabled=1

sudo qm template 9000
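
A quick way to confirm the template was created is to list the VMs; ID 9000 should show up with the template name (just a sanity check):

sudo qm list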

Clone the VM into a new VM.

sudo qm clone 9000 2001 --name k8s-01

sudo qm set 2001 --sshkey godboxv3.pub

sudo qm set 2001 --memory 4096

sudo qm set 2001 --ciuser tiernano

sudo qm set 2001 --ipconfig0 ip=dhcp

Change tiernano and godboxv3.pub to your own username and SSH key, and adjust the names and memory as necessary.
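
If you want a static address instead of DHCP, the same --ipconfig0 option takes an ip/gateway pair, for example (a sketch with made-up addresses; adjust them to your own subnet):

sudo qm set 2001 --ipconfig0 ip=10.0.72.50/24,gw=10.0.72.1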

As mentioned earlier, I'm still encountering the issue of clones sharing an IP address. To resolve this, log into each box and run the following commands:

echo -n > /etc/machine-id

rm /var/lib/dbus/machine-id

ln -s /etc/machine-id /var/lib/dbus/machine-id

Reboot the computer, and the problem should be resolved.

Some network Upgrades going on

I'm currently in the midst of a significant network upgrade for the CloudShed. I've purchased two Ubiquiti UniFi Hi-Capacity Aggregation Switches, a 24-port Switch Pro PoE, a Switch Enterprise 8 PoE, a couple of U7 Pro access points, and a U6 In-Wall access point.

The two Aggregation Switches each have four 25Gb ports and 28 10Gb ports. Two of the 25Gb ports will link the house and the CloudShed. The U6 In-Wall will be installed in the office, while the two U7 Pros are already in the house and powered by the Switch Enterprise 8 PoE (which supports 2.5Gb Ethernet). The 24-port PoE switch will replace my older 16-port switch, which lacks 10Gb Ethernet. More details to come as I find time to install everything.

Day 61 of #100daysofhomelab – swapping disks in a Hetzner Dedicated Machine

It's been a while… So, for Day 61 of #100daysofhomelab, I thought I should write up how to swap a disk in a Hetzner dedicated machine.

I have a dedicated server I rent from Hetzner in Germany. It has a Xeon E5-1650 V2 processor (6 cores, 12 threads, 3.5GHz base, 3.9GHz turbo), 128GB RAM, and a pretty impressive 15 x 6TB HDDs. All drives are hooked up to a MegaRAID controller, but because I am running Proxmox, I left it in JBOD mode and set up the 15 drives in RAIDZ-2. All 15 drives are in a single pool (probably not ideal, but it works for me). Now and again, I get a message from Proxmox telling me about bad blocks… and every time it happens, I have to remember what to do: find the bad drive, report it to Hetzner, wait for them to replace it, and then add the new one back to the pool. It happened again today, so I thought I'd better document the process to help future me, and hopefully someone else out there…

First, we need to find the drive in question. Usually, my alerts include the serial number of the drive causing problems, so I ran the following command:

megacli -PDList -aAll | egrep "Enclosure Device ID:|Slot Number:|Inquiry Data:|Error Count:|state"

This gives a full list of drives along with the Slot Number (needed when reporting to Hetzner) and the serial number. Each drive's output starts with "Enclosure Device ID:", so when you find the serial number, look above it for the Slot Number. In my case, the problem is the disk in Slot 10. I opened a support ticket with Hetzner requesting a replacement disk. This can take an hour or more, though sometimes it's faster; it depends on their load…

Once you get confirmation that the disk has been swapped, you need to bring the new one into the zpool.

First, check that the new drive is set up correctly. Run the following:

megacli -PDList -a0 | grep Firmware

We are looking for "Firmware state: Online, Spun Up". If anything is marked as configured, we need to run the following:

megacli -CfgForeign -Scan -a0

This shows us any foreign configurations. If the count is more than 0, we run:

megacli -CfgForeign -Clear -a0

This clears out that configuration. Next, we need the Enclosure Device ID and Slot Number for the new drive from:

megacli -PDList -aAll | egrep "Enclosure Device ID:|Slot Number:|Inquiry Data:|Error Count:|state"

because we need them to run:

megacli -PDMakeGood -PhysDrv [<enclosure>:<slot>] -a0

Finally, run:

megacli -CfgEachDskRaid0 WB RA Direct CachedBadBBU -a0

Note: If that fails with a message about cache data, you may need to run:

megacli -DiscardPreservedCache -L"10" -a0

This clears the cache, and then you can run the CfgEachDskRaid0 command again. It marks all new disks as JBOD-style disks, which is what I use for ZFS. If you have something different, check the Hetzner docs linked below.

Next, we need to swap disks in ZFS. Run

zpool status

to get info about the missing disk, which will show as unavailable. Next, find the ID of the disk that was added:

cd /dev/disk/by-id/

ls

Find the new disk (it usually won't have any partitions on it). Now it's a matter of running the following:

zpool replace rpool /dev/disk/by-id/scsi-3600605b008f498802aa37da51674ea7e-part3 /dev/disk/by-id/wwn-0x600605b008f498802b2a3a683752e088

Swap the scsi-36xxx and wwn-0x6xxx parts for the ones you found, and rpool for your ZFS pool name.
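
In other words, the general form is old device first, new device second:

zpool replace <pool> <old-device> <new-device>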

Finally, run

zpool status

to see the status. Running

zpool status -v 1

shows the status with more detail and refreshes every second. ZFS is now resilvering in the background and swapping out the old drive. Since the old one is missing, it will wait until the new drive is sorted and then remove the old one. This can take some time, depending on your disks and the amount of data.

Hopefully, this helps someone!

Some links for info:

LSI RAID Controller – Hetzner Docs

Day 60 of #100daysofhomelab

Day 60 of #100daysofhomelab, and I have been sick for most of the last two weeks, which is why I haven't been posting much… Today is going to be links only too…