Tiernan's Comms Closet

Geek, Programmer, Photographer, network engineer…


How to use Cloudflare Warp with a UDM Pro

If you’re considering using Cloudflare Warp for specific machines on your network, you can easily install the Warp client directly on them. It supports various operating systems, including Windows, Linux, Mac, iOS, and Android. However, if you need to use it on devices that can’t run the client, for example NAS devices or smart TVs, this tutorial may be helpful.

First, please note that this is not an officially supported option. Cloudflare might modify their configurations at some point, potentially causing this feature to break. You have been informed about this possibility.

What you need:

  • UDM Pro (it can work on any Ubiquiti Unifi gateways, but this is the one I have).
  • Wireguard Configuration File Generator (WGCF). This tool will generate a Wireguard configuration file based on the Cloudflare settings.
  • I’ve created a script that executes the following commands. It worked on my MacBook Pro, and it should also work on Windows or Linux.

First, install WGCF. I installed it by running

brew install wgcf

on my MacBook Pro.

Next, run:

wgcf register

This will register a client on your machine and leave a wgcf-account.toml file in your working folder. Next, run:

wgcf generate

You’ll be left with a wgcf-profile.conf file in your working folder. Open this file in a text editor to access the details you need for the next steps.
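For convenience, here is a rough version of the script mentioned earlier; it’s just a sketch assuming macOS with Homebrew (on Linux or Windows, install wgcf from its releases page instead):

#!/bin/bash
# Run the WGCF steps in one go (assumes Homebrew on macOS)
brew install wgcf
wgcf register    # creates wgcf-account.toml in the current folder
wgcf generate    # creates wgcf-profile.conf in the current folder
cat wgcf-profile.conf    # print the values needed for the UDM Pro setup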

Go to your Unifi Network Dashboard, click on “Settings,” and then select “VPN” and “VPN Client.” Click on “Create New” and choose “Wireguard” as the protocol. Then, change the “Setup” to “Manual.”

The configuration file you created earlier should resemble this:

[Interface]
PrivateKey = <PRIVATEKEY>
Address = <IPv4Address>, <IPV6Address>
DNS = 1.1.1.1, 1.0.0.1, 2606:4700:4700::1111, 2606:4700:4700::1001
MTU = 1280
[Peer]
PublicKey = <ServerPublicKey>
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = <ServerEndpoint>

Use the contents of PrivateKey to overwrite the existing Private Key; this will automatically fill in your Public Key. Next, set your Tunnel IP to the value listed for IPv4Address, dropping the trailing slash and entering that number as the Netmask (mine was a /32). Server Address is the host part of ServerEndpoint; note its port and enter that as well. The Public Server Key is ServerPublicKey. Finally, add the IPv4 DNS servers from the configuration and click Apply Changes.

After a few seconds, the status should change to “Connected”.

Next, you need to configure the Policy-Based Routes. This is located under the routing section, specifically under the heading “Policy-Based Routes.”

Here, you can name the rule and decide whether you want to send all traffic or specific traffic.

For all traffic, you can select a specific device or the entire network. For instance, in this example, all traffic from my Guest network will be routed through Warp:

You can also set it to send traffic to specific destinations:

Fallback allows it to fail back to one of the other connections if the Warp connection fails.

Finally, click Add Entry at the bottom. Now, run some tests on that machine and see the traffic counts increase.

That’s it. You can select which devices, networks, or even destinations you want to send over Cloudflare. Happy hunting.

How to Set Up Mac OS 9 on QEMU

Want to experience the classic Apple operating system on modern hardware? Emulating Mac OS 9 using QEMU is the way to go! This guide walks you through the process of setting up Mac OS 9 in QEMU, from creating a virtual hard drive to installing the operating system. Let’s get started!

Prerequisites

Before you begin, make sure you have these things:

  • A computer that can run QEMU (macOS, Linux, or Windows).
  • A Mac OS 9 installation ISO (like Mac OS 9.2.2 Universal Install; check Archive.org).
  • A version of QEMU with sound support (like qemu-screamer).
  • Some familiarity with using the terminal.

Step 1: Install QEMU

Download and install a version of QEMU that supports PowerPC emulation. The qemu-screamer fork is recommended for better audio support.

Clone the repository:

git clone -b screamer https://github.com/mcayland/qemu qemu-screamer

cd qemu-screamer

Configure and compile:

./configure --target-list="ppc-softmmu" --audio-drv-list="coreaudio" --enable-libusb --enable-kvm --enable-hvf --enable-cocoa

make

The compiled binary will be located in qemu-screamer/ppc-softmmu/qemu-system-ppc.

Step 2: Create a Virtual Hard Drive

Use the qemu-img tool to create a virtual hard drive for Mac OS 9:

./qemu-img create -f qcow2 macos9.img 2G

Replace 2G with your desired size if needed. Mac OS 9 does not require much space, so 2 GB is generally sufficient.

Step 3: Prepare the Installation Media

Ensure you have a bootable ISO of Mac OS 9. If you do not have one, download it from resources like “Mac OS 9 Lives.” Place the ISO in an accessible directory on your system.

Step 4: Start QEMU and Begin Installation

Run QEMU with the following command to boot into the Mac OS 9 installer:

./qemu-system-ppc \
  -L pc-bios \
  -cpu g4 \
  -M mac99,via=pmu \
  -m 512 \
  -hda macos9.img \
  -cdrom "/path/to/Mac_OS_9.iso" \
  -boot d \
  -g 1024x768x32 \
  -device usb-mouse \
  -device usb-kbd

Explanation of key flags:

  • -cpu g4: Emulates a G4 processor.
  • -M mac99,via=pmu: Sets the machine type to emulate a PowerMac G4.
  • -m 512: Allocates 512 MB of RAM.
  • -hda macos9.img: Specifies the virtual hard drive.
  • -cdrom: Points to your Mac OS 9 installation ISO. Have a look on archive.org for the ISO.
  • -boot d: Boots from the CD-ROM.

Step 5: Initialize and Install Mac OS 9

  • Once QEMU boots, open “Drive Setup” from the Utilities folder.
  • Select the uninitialized disk and click “Initialize.”
  • Choose “Mac OS Extended” as the file system and proceed.
  • After initializing, return to the installer and follow the on-screen instructions to install Mac OS 9 onto your virtual hard drive.

The installation process typically takes about 7–10 minutes.

Step 6: Boot into Mac OS 9

After installation is complete:

Shut down QEMU.

Modify the boot command to boot from the hard drive instead of the CD-ROM:

./qemu-system-ppc \
  -L pc-bios \
  -cpu g4 \
  -M mac99,via=pmu \
  -m 512 \
  -hda macos9.img \
  -boot c \
  -g 1024x768x32 \
  -device usb-mouse \
  -device usb-kbd

Start QEMU again, and it should boot into your newly installed Mac OS 9 environment.

Optional: Enable Audio Support

If using qemu-screamer, audio can be enabled by ensuring CoreAudio is configured during compilation (--audio-drv-list="coreaudio"). This setup allows sound output within Mac OS 9.

Tips and Troubleshooting

  • Backup your disk image: After installation, back up your virtual hard drive (macos9.img) to avoid reinstalling if issues arise.
  • Adjust RAM: While Mac OS 9 can run on as little as 40 MB of RAM, allocating at least 512 MB ensures smoother performance.
  • Networking: Add networking support with flags like -netdev user,id=mynet and -device sungem,netdev=mynet; see the sketch below.
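For example, here is the Step 6 boot command with user-mode networking added (a sketch combining the command above with the networking flags; sungem is the NIC used with the mac99 machine):

./qemu-system-ppc \
  -L pc-bios \
  -cpu g4 \
  -M mac99,via=pmu \
  -m 512 \
  -hda macos9.img \
  -boot c \
  -g 1024x768x32 \
  -device usb-mouse \
  -device usb-kbd \
  -netdev user,id=mynet \
  -device sungem,netdev=mynet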

By following these steps, you’ll have a fully functional emulation of Mac OS 9 running on QEMU! Enjoy exploring this nostalgic operating system.

Building Cloud Images for Proxmox

To create an Ubuntu VM for a Kubernetes cluster using Proxmox, follow these steps: download and tweak the base image, sysprep it, create a template with specified configurations, and clone the VM. Adjust settings such as memory, storage, and IP configurations. Fix shared IP issues by resetting the machine ID.

I needed to create a few Ubuntu VMs for a Kubernetes cluster for testing, and I wanted to make the process as simple as possible using Proxmox and some minimal automation. Here’s what I’ve done:

First, Download the base image:

wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img

Then, tweak the image. Since I’m using my apt-cacher-ng proxy here, I’ve set the proxy for all VMs; if you don’t want it, simply remove the --append-line option or adjust it as needed. I’m also installing qemu-guest-agent here, and you can add any other packages at this point if desired.

sudo virt-customize -a jammy-server-cloudimg-amd64.img --install qemu-guest-agent --append-line '/etc/apt/apt.conf.d/00proxy:Acquire::http { Proxy "http://10.244.71.182:3142"; };'

Sysprepping the image resets it to a clean default state. If you skip this step and clone the machine multiple times, all the clones will have the same machine ID and IP address. [Note: this isn’t working fully for me; see below for the changes I made to the machine ID.]

sudo virt-sysprep -a jammy-server-cloudimg-amd64.img

Create the template. I used ID 9000 and assigned a name; you can modify both. I’ve also tagged mine with VLAN 72 (my Kubernetes VLAN); feel free to change or remove this tag as needed. Finally, I grow the disk by 50GB. Please replace any references to “godboxv2-tank” with your storage name.

sudo qm create 9000 --name "ubuntu-2204-cloudinit-template" --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0,tag=72

sudo qm importdisk 9000 jammy-server-cloudimg-amd64.img godboxv2-tank

sudo qm set 9000 --scsihw virtio-scsi-pci --scsi0 godboxv2-tank:vm-9000-disk-0

sudo qm set 9000 --boot c --bootdisk scsi0

sudo qm disk resize 9000 scsi0 +50G

sudo qm set 9000 --ide2 godboxv2-tank:cloudinit

sudo qm set 9000 --serial0 socket --vga serial0

sudo qm set 9000 --agent enabled=1

sudo qm template 9000

Clone the template into a new VM.

sudo qm clone 9000 2001 --name k8s-01

sudo qm set 2001 --sshkey godboxv3.pub

sudo qm set 2001 --memory 4096

sudo qm set 2001 --ciuser tiernano

sudo qm set 2001 --ipconfig0 ip=dhcp

Change tiernano and godboxv3.pub to your settings. Modify the names and memory as necessary.
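If you are building several nodes, the clone commands wrap nicely in a loop. A minimal sketch, where the VM IDs and names are examples:

for i in 1 2 3; do
  # clone the template and apply the same cloud-init settings as above
  sudo qm clone 9000 200$i --name k8s-0$i
  sudo qm set 200$i --sshkey godboxv3.pub --memory 4096 --ciuser tiernano --ipconfig0 ip=dhcp
done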

As mentioned earlier, I’m still encountering the issue of shared IP addresses. To resolve this, log into the boxes and execute the following commands:

echo -n > /etc/machine-id

rm /var/lib/dbus/machine-id

ln -s /etc/machine-id /var/lib/dbus/machine-id

Reboot the computer, and the problem should be resolved.

Day 61 of #100daysofhomelab – swapping disks in a Hetzner Dedicated Machine

It’s been a while… So, for Day 61 of #100daysofhomelab, I thought I should write up how to swap a disk in a Hetzner dedicated machine.

I have a dedicated server I rent from Hetzner in Germany. It has a Xeon E5-1650 V2 processor (6 cores, 12 threads, 3.5GHz base, 3.9GHz turbo), 128GB RAM, and a pretty impressive 15 6TB HDDs. All drives are hooked to a MegaRAID controller, but because I am running Proxmox, I left it in JBOD mode and set up the 15 drives in RAIDZ-2. All 15 drives are in a single pool (probably not ideal, but it works for me). Now and again, I get a message from Proxmox telling me about bad blocks… and every time it happens, I have to remember what to do: find the bad drive, report it to Hetzner, wait for them to replace the drive, and then add it back to the pool… Today it happened, so I thought I’d better document it, to help future me, and hopefully someone else out there…

First, we need to find the drive in question. Usually, in my alerts, I get the serial number of the drive causing problems. So, I ran the following command:

megacli -PDList -aAll | egrep "Enclosure Device ID:|Slot Number:|Inquiry Data:|Error Count:|state"

This gives me a full list of drives along with the Slot Number (needed when reporting to Hetzner) and the serial number. Each drive’s output starts with “Enclosure Device ID:”, so when you find the serial number, look above it for the Slot Number. In my case, the issue was with the disk in Slot 10. I opened a support ticket with Hetzner requesting a replacement disk. This can take an hour or more, sometimes faster, depending on their load…
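To save scrolling through the output, awk can do the lookup for you. A rough sketch, where the serial number is a placeholder you would take from your alert:

SERIAL="Z1X2Y3ABC"   # placeholder: the serial from the Proxmox alert
megacli -PDList -aAll | awk -v s="$SERIAL" '
  /Enclosure Device ID:/ { enc = $NF }
  /Slot Number:/         { slot = $NF }
  /Inquiry Data:/ && index($0, s) { print "Enclosure " enc ", Slot " slot }'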

Once you get confirmation that the disk has been swapped, you need to bring it into the zpool.

First, we must check that the new drive is set up correctly. Run the following:

megacli -PDList -a0 | grep Firmware

We are looking for “Firmware state: Online, Spun Up”. If anything is marked as configured, we need to run the following:

megacli -CfgForeign -Scan -a0

This shows us any foreign configurations. If the count is more than 0, we run:

megacli -CfgForeign -Clear -a0

This clears out that configuration. Next, we need the Enclosure ID and Slot Number for the new drive from:

megacli -PDList -aAll | egrep "Enclosure Device ID:|Slot Number:|Inquiry Data:|Error Count:|state"

because we need to run:

megacli -PDMakeGood -PhysDrv [<enclosure>:<slot>] -a0

Finally, run:

megacli -CfgEachDskRaid0 WB RA Direct CachedBadBBU -a0

Note: If that fails with a message about cache data, you may need to run:

megacli -DiscardPreservedCache -L"10" -a0

This will clear the cache, and then you can run the CfgEachDskRaid0 command again. This marks all new disks as JBOD disks, which I use for ZFS. If you have something different, check the docs from Hetzner below.

Next, we need to swap disks in ZFS. Run

zpool status

to get the info about the missing disk; it will show as unavailable. Next, find the ID of the disk that was added:

cd /dev/disk/by-id/

ls

Find the new disk (it usually won’t have any partitions on it). Now, it’s a matter of running the following:

zpool replace rpool /dev/disk/by-id/scsi-3600605b008f498802aa37da51674ea7e-part3 /dev/disk/by-id/wwn-0x600605b008f498802b2a3a683752e088

Swap the scsi-36xxx and wwn-0x6xxx parts for the ones you found, and rpool for your ZFS pool name.

Finally, run

zpool status

to see the status. Running:

zpool status -v 1

shows the status with more info and refreshes every second. ZFS is now resilvering the drives in the background and swapping out the old one. Since the old drive is missing, it will wait until the new drive is sorted, then remove the old one. This can take some time, depending on your disks and data size.

Hopefully, this helps someone!

Some links for info:

LSI RAID Controller – Hetzner Docs

Bulk updating Tasmota Devices over MQTT #100daysofhomelab

I have a load of these smart plugs from GoSund around the house (currently around 11, but more are still in boxes). The handy part is that they can be re-programmed using Tuya Convert, and with the following config you can get power usage readings and an on/off switch. I have mine hooked up to an MQTT server, and with the MQTT integration in Home Assistant, I get all the details about power usage and can control each device I need to (hence the 11 of them!).

But MQTT can be used for more than monitoring; I can send commands to the devices. Given that all of them are on a locked-down network with access only to the NTP server, internal DNS, and the MQTT box, I needed to figure out how to get OTA updates to them. Luckily, you can change the OTA update URL in the web interface and download from a local endpoint… But, I am a Lazy Git, so I needed to figure out an automated way. Enter MQTT again.

First, log in to your Tasmota device, go to Configuration, then Configure MQTT, and enter your MQTT host. Also, get your topic name while you are there and keep it handy.

Hit save and wait a few seconds for it to update. You then need to watch the MQTT messages going through; I am using MQTT Explorer to see all messages.

For me, the tele topic has all my devices listed, plus stats around power, status, etc.

On the Tasmota site, they have documentation on sending commands over MQTT. I then installed the MQTT CLI on the Mac (so I could automate this later) and ran the following commands:

mqtt pub -t cmnd/tasmota_<deviceid>/OtaUrl -m "<internal url hosting tasmota updates>/tasmota.bin" -h <mqtt host> -p <mqtt port>

mqtt pub -t cmnd/tasmota_<deviceid>/Upgrade -m "1" -h <mqtt host> -p <mqtt port>

Update <deviceid>, URLs, host and port as required. For the internal URL, I just have a small copy of Nginx running in Docker, serving a folder with copies of the latest OTA files from the Tasmota releases page. I downloaded all the files and put them in the folder shared by Nginx. I need to automate that a bit better… maybe next time there is a new release?

The first command tells the device where the latest OTA file is; the second kicks off the update. If your devices are not segregated, you can just leave the existing OtaUrl there and kick off the upgrade task on its own… I wasn’t that brave…
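Since I have 11 of these, the two commands wrap nicely in a loop. A sketch, where the device IDs, URL, host and port are all placeholders:

for id in ABC123 DEF456 GHI789; do   # placeholder device IDs from the tele topic
  mqtt pub -t "cmnd/tasmota_${id}/OtaUrl" -m "http://10.0.0.10/tasmota.bin" -h 10.0.0.20 -p 1883
  mqtt pub -t "cmnd/tasmota_${id}/Upgrade" -m "1" -h 10.0.0.20 -p 1883
done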

Within a couple of seconds, you will start to see messages showing up in MQTT Explorer. After a couple of minutes, all devices will have been upgraded and rebooted (no power down, luckily) and all is good!

DNSControl and Github Actions #100daysofhomelab

I am participating in the #100daysofhomelab challenge and have been posting a lot on Twitter as @tiernano, but some posts and tasks I am doing require longer-form write-ups. So, some updates will be videos (published on my Youtube Channel) or blog posts, which will go here. This is the first of the blog posts.

DNSControl is a tool written by the Stack Overflow lads (back when they called themselves Stack Exchange). It is designed to update DNS records and can work with both DNS providers and registrars. I use it to update records in Cloudflare and Route53, but many providers are available. I wrote an article a while back about how to create reverse DNS records for IP space with Route53 and DNSControl; most of it is still relevant, and the main documentation site for DNSControl has a lot of useful tips.

Up till this morning, if I wanted to update a record, I checked out the DNS records from my private Github repo, made the change, and ran the DNSControl commands on my machine (check for syntax checking the file, preview to show what will change at the provider level, and push to make the changes). But I wanted to have some automation for this. So, enter Github Actions.

I did a bit of digging and found a Github Action from koenrh called dnscontrol-action. The docs on this are quite simple to go through, so I created 2 action files for my repo: preview and push. A Gist for preview is below:

and the one for push is as follows:

The important parts are as follows:

In both preview and push, the check command does a syntax check of your DNS config file. Then preview will check the providers to see if any records need an update. When push runs, it will make the changes.
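For reference, the same three commands can be run locally from the repo folder (dnsconfig.js is DNSControl’s default config file):

dnscontrol check     # syntax check the config file
dnscontrol preview   # show what would change at each provider
dnscontrol push      # make the changes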

All my required secrets are set in the Github repo as secrets, so when the action runs, it pulls the required keys out and puts them into environment variables. I use name.com as a registrar for some domains (though most have now moved to Cloudflare, and some, like my .ie domains, are with Blacknight, who are not supported by DNSControl). Cloudflare handles the majority of my domains, and Route53 is used for 2 domains currently. There are around 53 domains currently managed by this, and the plan is to add more. I also plan on adding some more automation around checking configs and sending alerts if anything changes.

So, enough “How it works” and show us it working!

Right. Let’s update my zt.tiernanotoole.net domain, which is used for Zerotier IPs internal to my network. It’s been a while since I did this, so most records will be removed and a few added… First, I create a new branch called zt-update and check it out in VSCode. I make my changes, git commit, and git push to the branch.

At this stage, the actions have NOT run, since this is neither checked in to master nor a PR against master.

I go into the create PR section, and I can see the changes I have made. In my case, I removed a load of unused records and added a few extras:

I now create my PR and wait for the checks to complete:

Within a short time, I get an alert that all checks have passed, and I can find the results of the changes in the build (it was meant to add a note to the PR with the details, but I might be missing something in my config…).

Also, not sure why it is redacting part of my name here…

I check the rest of the list, and other than the deletes and creates in Route53 for this domain, there are no other changes. So, being happy with that, I click Merge Pull Request; the code is checked into master, and the DNSControl push command runs:

If I now go into Route53, I can see the records on the site:

Happy days! Next challenges to fix:

  • fix the PR to include the output of check and preview
  • only run a check and push on the master branch, and no need to run preview again…
  • run preview once a week and send alerts if anything has changed

Till next time, good luck!

Ubiquiti UDM Pro Fail over to Speedify

So, this has been a blog post in the making for a while now but never got around to fully writing it up, so here goes nothing…

I run a UDM Pro in the house. It has 2 WAN links: a 1Gb link and a 10Gb link. I also run AS204994, my own ASN, with its own transit and peering connections, mostly in Europe. There is a VM in the house which acts as a connection to AS204994, giving me a full connection to the internet through my own ASN. More details are on my AS204994 blog here.

That connection is hooked up to the 10Gb link on the UDM Pro, which is listed as the primary internet link. Details on how this works were covered in this video on YouTube:

In the video above, I was using OpenMPTCPRouter to connect to the internet, but since it’s been causing some issues lately, I decided to try something else.

The new setup is an Intel NUC (i3 with 32GB RAM and 2x512GB SSDs… VERY overkill for the job at hand) running Ubuntu Linux. It has a USB hub with 3 USB ports and an Ethernet port, giving me 2 Ethernet ports on the box in total. 2 of the USB ports are connected to Huawei USB 4G modems, and the external Ethernet port is connected directly to my cable modem.

USB Hub with 1 Huawei Modem and connection to second

Both modems and the Ethernet port are connected to the NUC with full internet connections (the Huawei boxes hand out NATed IPs, but the cable modem gives a full public IP), and then Speedify takes those 3 connections and does some bonding magic. Speedify is a handy little VPN service that does connection bonding. You can use it to make sure your internet is rock solid using multiple links, to keep streams stable, etc. It can bond WiFi links, LTE modems, cable modems, DSL… anything that can connect can be bonded. The only issue I have with it, compared to OpenMPTCPRouter, is that you don’t control the upstream server…

Speedify is set to shared mode, so the internal port on the NUC shares its internet connection. This is hooked to the 1Gb WAN port on the UDM Pro, which is set for failover only (currently the only option on a UDM Pro), so if my AS204994 link goes down (VM reboots, VM host dies, cable modem connection goes out, etc.) I will still have a connection. If the cable goes out, it will use just the 4G links, but if everything is running, I get all 3 connections.

Connecting to my car over ZeroTier

I use ZeroTier on my network for a good few things, including internal network peering between BGP VMs, management of machines, and now, connecting to my car over LTE. This is one of those posts that sounds silly, but is very handy! First, the parts list:

  • Car…
  • 3G/4G/5G modem of some sort. I am using a Huawei Wingle… Can be used without the Router below, but I wanted Zerotier, so I have it in modem only mode…
  • A router that supports Zerotier. I am using a modified TP-Link TL-WR703N upgraded to 16MB ROM and 64MB RAM. This is required for newer OpenWRT builds
  • a dashcam that connects over Wifi. I am using a BlackVue DR750S-2CH
  • Latest ROOter software from Of Modems and Men
  • Patience…

After installing the latest copy of ROOter on the TP-Link (or router of your choice) and getting the modem configured correctly (this took a while), you need to install the Zerotier software through the dashboard. Once installed, I joined my Zerotier network using the CLI (SSH into the router) and then approved it through the my.zerotier.com dashboard. Once it’s approved and connected, you can go to the Zerotier IP and reach the router directly. From here, you can either set up a route in Zerotier to point at the internal network behind the router or, in my case, set up an SSH tunnel to the dashcam. I found the IP given to the dashcam and used SSH forwarding to get to it. Finally, I used the URLs from Digital-Nebula’s hackview repo to get to the different endpoints. I use this to download stuff like GPS logs, emergency videos, etc. I have to clean up some scripts for this, and plan to upload them at some stage.
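For reference, the SSH forward looks something like this (a sketch; the addresses are placeholders for the router’s Zerotier IP and the dashcam’s LAN IP, and the dashcam is assumed to serve HTTP on port 80):

# forward local port 8080 to the dashcam's web interface via the router
ssh -L 8080:192.168.8.100:80 root@10.147.17.25

With that running, the hackview URLs can be pointed at http://localhost:8080.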

If anyone has any questions, leave a comment!

Domain Joining a machine over VPN and Password Resets/Changes with Azure AD

With the whole Work From Home thing probably becoming more and more normal in the years to come (I can count on 2 hands how many times I have physically been in my main office in the last 7 months), there are a couple of certainties that people will come up against: passwords expiring and needing to be changed, passwords needing to be reset, and laptops or desktops needing to be domain joined or connected to the domain before they can be fully provisioned. As the (currently only) IT guy in our office, I have had to deal with these first hand, and decided to write this post to help both my fellow employees and possibly other IT admins stuck with this challenge.

So, as the IT person, there are a couple of assumptions:

  • You have on premises AD
  • You have Azure AD (P1 and above seems to be required if users are synced from on-prem AD; the free tier covers cloud-only users).
  • Azure AD Sync installed and enabled

If all of the above are set, you will need to follow the steps to enable Azure Active Directory Self Service Password Reset. I have enabled this on our domain. Next, you need to get your users to set up their secondary authentication methods for backup. All our users have a 2FA requirement, so most of them had that already; new users need to go through that setup. Finally, if a user needs to change or reset their password, they can do so through https://aka.ms/sspr. If all is done well, that reduces the number of support calls I (and you) get.

Now, the next task: domain joining over VPN. This is a bit more “fun” to play with.

First, you need a VPN connection. We use Meraki gear with Active Directory for RADIUS auth. I won’t go into too much detail on setting that part up, but the script we use to build the VPN connections for users is below. This will probably be different for different VPNs, but it is our starting point.

Lines you need to change are 8, 9, 10 and 47. Line 39 can also be modified to switch between split tunneling (only sending traffic to internal subnets) and full tunneling (all traffic over the VPN). If you have multiple internal subnets, line 49 can be copied for each.

The most important part, though, is line 34. The -AllUserConnection switch makes the connection available to all users on the machine, and also on the login screen. This is important.

So, with all that in place, you will need to connect to the VPN.

You should now be able to join the domain as if you were on your local network.

  • Enter the domain details and change the machine name if required.
  • When asked, enter your domain username and password.
  • You will be welcomed to the domain, and then asked to reboot.

Reboot your machine as usual, and when it boots, you should see a new option on the login screen.

VPN login option

Click this icon, and if you only have one VPN connection, the screen below will show up. If you have more than one, you will be given a list of options to use.

Login to VPN at the login screen

Enter your domain credentials. Since our AD and VPN use the same credentials, it will automatically log you in as well.

The machine is now domain joined and logged in, and in my case, finishing setup.

So, there you have it: how to domain join a machine outside the network. In reality, Azure Active Directory and Intune would probably be the better option, but that’s future work…

Auto deploying to multiple servers with GitHub and Webhooks

In yesterday’s post, I mentioned that I wanted to try to get auto deploy working for this site. It already builds auto-magically using Forestry and puts the static HTML into a Github repo, but I needed to manually update the servers hosting the site… Well, not any more!

Using the magic of Github’s webhooks, the Webhook project, and a small piece of bash shell script, I have managed to get this auto deploying…

First, download the Webhook project (it’s a Go application, so it works pretty much anywhere) and copy it somewhere on your machine. Next, you need a config. I used the Github sample config from the project site and made tweaks to which script to run and what I was passing in.

Next, the script to pull from Github was simple enough:
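Mine looks something like this; a sketch, assuming the folder mentioned below and a checked-out master branch:

#!/bin/bash
# pull the latest built HTML into the folder the web server serves
cd /var/www/localfolder || exit 1
git pull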

The repo should already be cloned into the folder /var/www/localfolder, and your web server should be pointing at that too. Then, it’s just a matter of running the command:

./webhook --hooks github.json --verbose

The --verbose flag gives you lots of info, so it’s handy for testing. Your app is then running and listening on the default port, 9000.
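Before wiring up Github, you can poke the endpoint locally with curl. A sketch, where the hook ID is whatever you named the hook in your config file:

curl -X POST http://localhost:9000/hooks/redeploy-webhook \
  -H "Content-Type: application/json" \
  -d '{"ref": "refs/heads/master"}'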

Next, head over to your project on Github and go to Settings:

Select Webhooks and add a new webhook.

Fill in the required details on the page, and click save.

Github will go out and have a chat with the webhook and verify it can send and receive stuff from the hook. You can see this in the Deliveries section:

Clicking on these will show you the headers that were sent, along with the payload, and you can also see the response from your server. Finally, you have the option of resending the payload, just in case anything goes wrong.

So, there you have it. A complete automated deploy across multiple servers! Any questions, leave a comment below!

[UPDATE] Yesterday I mentioned I had to modify the sample that was included on the webhook site. Well, I noticed something this morning. The reason I needed to modify it was that the trigger rule was checking the header and the reference for the branch, but any time I ran it, it would not trigger… The reason was simple: the webhook app expects application/json, but I had the hook set to application/x-www-form-urlencoded, which is the default… the webhook app then couldn’t parse the payload correctly… changing that fixed the problem! Happy days!