Tiernan's Comms Closet

Geek, Programmer, Photographer, Network Engineer…


VMware Workstation Pro and Fusion Pro now available for free for everyone

Back in May, VMware announced that VMware Workstation Pro and Fusion Pro would be free for non-commercial use, which was fantastic news. However, a few days ago, they made an even better announcement: the free edition is now available to all users, including commercial users. While you can still purchase a license if you require support, the free version functions just as well, albeit without any support.

It appears that they are discontinuing Workstation Player and Fusion Player, as the functionality they offered is now included in the free versions of Workstation and Fusion.

ESXi on Arm (and Raspberry Pi!)

A few days back (October 6th, 2020) VMware announced a new “Fling”: ESXi Arm Edition. I'm not completely sure what a Fling is, but anyway, I started reading, liked the idea, and managed to download a copy for testing. I have 2 Pi 4s in the house, both 4GB models, and I wanted to play around with the new tech.

So, after some messing with UEFI stuff, formatting Micro SD cards correctly, copying files, and working around some limitations, I managed to get 2 new ESXi servers running on Raspberry Pi!

There is a walkthrough video showing everything I did to get up and running, embedded below. Some of the hardware I used is also listed below.

Equipment list:

  • 2 x 4GB Raspberry Pi 4s
  • 2 x 16GB Micro SD cards (you could probably get away with 1GB cards… you only need a small 256MB partition for the UEFI stuff)
  • 2 x 64GB Kingston DataTraveler USB 3 sticks (this is where ESXi is installed; the rest of the storage, if configured correctly, can be used for VMs)
  • 2 x PoE to USB-C splitters. I used these so I can power both Pis through PoE and reboot them using the switch. You could use a USB power adapter like the Anker PowerPort 60W, which would give you 6 ports to run your Raspberry Pis. I would probably limit it to running 4 Pis though, since the Pi 4 needs a bit more power…
  • Some way of installing the ISO to the Pi. I used an iodd Mini 256GB for the task. I also did a video review of that here.
  • About an hour of your time.

As mentioned above, the USB key is used for storing ESXi once it's installed. It can also be used for storing VMs. There is a command you run when installing to partition the drive in two: 8GB for ESXi and the rest for storage. I managed to run this correctly on one Pi, but missed it on the second. I might reinstall that Pi and get it up and running again soon. You also have the option of installing to iSCSI. That might be useful too…
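
For reference (this is from memory, so double-check the Fling documentation for the exact option name): the trick is to cap the OSData partition at install time by pressing Shift+O at the installer boot screen and appending a boot option along the lines of

    autoPartitionOSDataSize=8192

which should leave roughly 8GB for ESXi and the remainder of the USB stick free to use as a datastore.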

Storage-wise, VMware recommends using USB 3, fast iSCSI, or NFS storage for VMs. I'm using NFS from my workstation, which seems to work OK, but you are still limited by the 1Gb/s NIC on the Raspberry Pi. They say it is possible to use extra USB network cards; it could be interesting to try that out.
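
If you would rather mount the NFS share as a datastore from the command line than through the UI, it is roughly the following (the hostname, share path, and datastore name are just placeholders):

    esxcli storage nfs add --host=nas.example.lan --share=/volume1/esxi --volume-name=pi-nfs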

So far I have managed to install a single VM on one of the Pis. I plan on migrating from a physical Pi-hole instance to a virtual one. I also plan on getting a few 8GB Pis and seeing where this rabbit hole gets me. The Pis can also be managed with vSphere, so let's see if I can get that working… Stay tuned!

If anyone has any questions, comments, etc., just shout. And if you're interested in videos like these, subscribe and like the video!

Network Update Info April 2019

So, this post has been a long time coming! There are a load of different things to talk about, so let's get started!

GodBox V3

So, for a long time, I have been thinking about GodBoxV3, the replacement for GodBoxV2. When planning this, I had some ideas of what it should be:

  • Minimum of 2 x 16 cores (double GodBoxV2)
  • About the same RAM, if not more
  • FAST STORAGE!
  • Able to drive my twin 30" 4K monitors
  • Would like 10Gb/s NICs

Well, it finally happened! I got the machine, built it, and, well, it's impressive! How did I do against the specs? Well…

All is good! Photos, more details and benchmarks coming soon… stay tuned!

Finally 10Gb/s Networking!

Since GodBoxV3 has a few 10Gb NICs, I needed to upgrade the network to support them. I ended up with a Ubiquiti Networks EdgeSwitch-XG: 16 ports (12 SFP+ and 4 RJ45). The SuperMicro board has 2 x RJ45 ports. Due to the lack of RJ45 ports on the switch, GodBoxV3 is connected to one, GodBoxV2 (which is getting a 10Gb card soon) will be connected to another, and a new Sun Microsystems server (details below) will get the last two… Of the SFP+ ports, 2 are connected to the EdgeSwitch Lite, 2 to the Synology (it got a 10Gig NIC recently too!), and 2 to the new NAS (again, more details below!).

Goodbye MikroTik, Hello EdgeRouter 4

Since I was going all-in on Ubiquiti gear (the Wi-Fi is UniFi gear), I got rid of the old MikroTik and replaced it with a Ubiquiti ER4. Happy days! I've got some plans for this; more details coming soon…

Updates to BGP Stuff, including IPv6

I lost one VPS in London, but replaced it with a new one from HostUS. I still use Vultr, Packet, and VServer.Site as providers too, and I am adding more and more IPv6 stuff… There is a post on AS204994 explaining a lot of this.

New NAS and more storage!

A new NAS got purchased: a QNAP TS-932X. I have 5 x 8TB spinning disks (shucked from 5 WD My Book 8TBs) plus 4 x 500GB WD Blue SSDs.

New Servers and cooling updates

I moved lots of stuff around the room… the servers run cooler and quieter! Happy days! I also got my hands on a very nice looking Sun Server X3-2. It's a dual Xeon E5 (currently with quad cores, which I'm going to upgrade to 8 cores), and I think it's got 16GB of RAM and 4 x 300GB SAS disks. It also has 4 x 10Gb NICs! ESXi will probably go on here!

VMware in the house

Up until recently, I ran Hyper-V all round. It's still on GodBox V2 and V3 (V1 has an HDD issue, so it's off…), but the main VM hosts (the C6100s) are being migrated to VMware ESXi… Why? It's a learning exercise… We'll see how it goes…

So, long update… Any questions, comments, etc… shout!

Network and HomeLab V.Next (Part 2)

So, in my last post I talked about the requirements for the home lab, and in this post I'm going to talk about a few more updates I have made in the last few weeks.

First, the processors: in the first post, I talked about the Xeon D or Xeon E3… Well, I missed one: the Xeon E5. I have 2 of these in GodBox 2, and you can get them into a micro-ATX board. There do seem to be some limits with the micro-ATX boards, but hopefully enough searching will find me what I am looking for. Ideally, I want it to take “normal” DDR3/4 memory (not SODIMMs like the ASRock one above) and also take enough of them to run 64 or 128GB of RAM (I'm thinking 8 slots would do the job!). I would also like it to have 4 GigE ports onboard and 1 management port. 4 onboard is not a hard requirement: if I can get one with 2 ports, I can always get a 4-port card for the PCI Express slot… Finally, I would like it to have at least 6 SATA ports and possibly an mSATA port. I'm thinking boot off mSATA (Windows Server 2016 Nano Server would be used), 2 SSDs, and 4 HDDs, and, using Storage Spaces, use the 2 SSDs as “fast” storage for the pool.

I also think I have moved off the idea of 10Gb. I like the idea of it, but given that a small 10Gb switch costs upwards of a grand, and the plan is to build a machine for that price, I would prefer a fifth machine, using my existing Cisco 48-port switch, and leave 10Gb as a future upgrade.

Also changed from last time around is the machine count. Originally I was saying 3-4 machines… now I am thinking 6-7: 5-6 of them would be Hyper-V boxes and the last one would be a media box.

I also think the Synology or SAN requirement is out… Hyper-V can be set up to do replication between hosts, and with a 4Gb link to the LAN, I think I should be OK. Also, if I have the media box separate, I should be OK there too. I will detail the media center in a later post.

So, any suggestions or thoughts on what should and shouldn’t be looked at?

Network and Homelab V.Next (Part 1)

So, it's that time again… HomeLab upgrade time… or at least the planning for it. I am in the process of rebuilding my home lab, which involves pulling all the old servers out of the rack and replacing them with new ones… It also means rewiring the network, possibly upgrading some existing gear, and hopefully getting the whole lot done on a budget of some sort…

So, why? Well, the biggest reason for all this is heat and power usage. We use about 4-6x more electricity than the average house here in Ireland, which means our electricity bill is fairly high. It also means that the lab, which is also my office/bedroom, gets quite warm and uncomfortable during the summer months. There is an air-con unit in the room, and, well, that's what is costing the most in electricity!

So, what I have is a basic overview of what I want from the homelab, and hopefully in the next post I will have an idea of what it will look like…

  • 3-4 machines running a hypervisor (Hyper-V, VMware ESXi, or other). I'm leaning more towards Hyper-V, purely because it's what I have currently and it's what we use in our main office.
  • Each machine should be connected to at least 2 networks: one for storage and migration, and one for “public” access to the LAN. There may be more VLANs for other networks, but 2 is a start.
  • Ideally, 10Gb connections would be nice, but multiple 1Gb connections would also work.
  • Shared storage (iSCSI, SMB3, etc.) would also be nice to have; it may bump up the server count (not actually a problem), but it would increase power and cooling costs. An off-the-shelf box, like a Synology, could do the job…
  • Lower power usage and less heat produced are also major requirements. Most of the boxes I am decommissioning are older Xeon hardware (5000-series up to 5200-series processors, and even an older Xeon P4!). The newer Xeon E3 and the even newer Xeon D are a lot more efficient, use less power, produce less heat, and are way faster than what I currently have. The E3 can use up to 32GB of RAM and the Xeon D tops out at 128GB… Me being me, I would like more than 32GB of RAM… 🙂
  • Smaller machines would also be nice. I have been looking at both Xeon D and Xeon E3 Mini-ITX boards and cases for them. I do have a half-height Dell rack in which I host these machines, and ideally these machines should be rack-mountable, but micro-ATX cases could work. Two per shelf would work grand.
  • Onboard IPMI and KVM support is something I want too… I do have a KVMoIP switch in the house, and it works most of the time, but getting a box that has this embedded on the board would be ideal… A lot of the server boards have it as standard or allow it to be specced, so that's all good.
  • I am also thinking of upgrading the router to a similar-spec board… possibly a Xeon E3, or even an i5… Ideally it should have IPMI and KVMoIP on board and should produce less heat. The biggest issue is getting enough network cards into the box…

These are my requirements as a high-level overview. Over time things may change, but let's see how we get on…

pfSense with Multiple Public IPs

So, a few weeks back, I got my hands on a Hetzner dedicated box. It has a quad-core Xeon, 32GB of RAM, 3 x 3TB HDDs, a RAID controller, and KVMoIP. One of the first things I did was get myself a /29 IP pool (8 addresses in total, 6 usable). There were already 3 IPs given to me: one for the KVM, one for the box itself, and one as the router for the IP block.

I needed to set up my own router, so I picked pfSense, since it's what I run in-house. I gave it 2 network connections: one connected to the main network adapter on the VMware ESXi box (public) and one to a virtual switch, which is only used by VMs. The public one is the WAN link, and it gets a static IP from Hetzner; the virtual switch is then my “LAN” link. This allows me to have standard NATed network connections to any VM I have, but then, what do I do with those extra IPs?

After a lot of digging, I found the answer, so this should help.

  • Under Firewall, click on Virtual IPs.
  • Click the plus. I selected IP Alias, selected the WAN interface, and set the IP to the first public IP I wanted to use. In my case, I was given a /29 block and my first address ended in .176, but that is the network address, so I used .177. Likewise, my last address ends in .183, but that cannot be used either, as it's the broadcast address (a quick sanity check of the /29 maths is below this list). Give it a description and then hit OK. Repeat for all the IPs you want to use. TIP: give each a meaningful description!
  • Next, click Firewall, NAT, and then 1:1. Click the add button and select WAN as your interface. Set the External Subnet IP to the public address you want to use and the Internal IP to the machine that will have it. That's all I did on that screen…
  • Then go to Firewall, NAT, Outbound… this is where things got complicated. Set the mode to “Manual outbound NAT rule generation (AON – Advanced Outbound NAT)” and click Save.
  • Then create a new rule: Interface: WAN; Source: Network, with the IP of the internal machine; and then under Translation, Address, select the public IP you want to give it. If you followed my tip in step 2, you should see the descriptions in here.
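
If you want to double-check which addresses in a block are actually usable, Python's ipaddress module will do the maths for you (the 203.0.113.176/29 range below is just a documentation example, not my real block):

    import ipaddress

    # Example /29 block (documentation range, not my real addresses)
    block = ipaddress.ip_network("203.0.113.176/29")

    print("Network address:  ", block.network_address)              # .176 - not usable
    print("Broadcast address:", block.broadcast_address)            # .183 - not usable
    print("Usable addresses: ", [str(ip) for ip in block.hosts()])  # .177 to .182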

After saving everything and reloading the firewall, visiting a page like WhatsMyIP or ICanHazIP should show you your new public IP. You can then create firewall rules to allow access. A quick example would be:

Go to Firewall/Rules, click Add, set the interface to WAN and the destination to the local IP you want to use, and add whatever “normal” rules you would (HTTP, locked down to a source address, etc.). Click Apply, and hitting that address using whatever method (SSH, HTTP, etc.) should work.

YMMV, but hopefully this helps! Any questions, leave a comment.

Quick tip for internet facing ESXi servers

Quick tip for all of you with internet-facing VMware ESXi hosts. I have just got my hands on a box on the Hetzner network (more on that later), and using their LARA system I installed ESXi on it. All was good, then I tried logging in a couple of hours later and kept getting errors about my password being wrong… So, I tried a few more times, got pissed off, and rebooted the box (it had to be a hard reboot, since I couldn't even get in over the KVM). I thought this was a hardware issue, or a config issue, and left it… Yesterday, I had the console open most of the day, and when looking at something I noticed this:

Well, that’s why I couldn’t login! So, tip: create a second user account, name it something other than root, give it a secure password and use that to login to your ESXi box. Ideally, your ESXi box should be behind a firewall, but in the case of a dedicated server, that may not be financially feasible… Hope this helps someone!

ZFS, iSCSI, NFS, SFTP, Hyper-V and more

As part of my new task to make my files safer and my backups faster, and, well, cheaper, I am looking into ZFS for my storage needs. My needs are as follows:

  • Allow me to store lots of different types of data (photos, videos, music, VMs) in different formats (RAW and JPG photos; MP4, AVI and DivX videos, with DVD and Blu-ray rips also a possibility; MP3 music; and VHD files from Hyper-V, including ISOs and snapshots). I also need to store different file systems using iSCSI (Mac and Windows clients will be mounting the storage).
  • Must be safe. DO NOT LOSE DATA!
  • Must be somewhat fast. I have VHDs weighing in at 100GB… my photo collection is 600GB. If I need to move or copy files to the storage system, it must be fast.

So, ZFS offers all these features. I can export storage over iSCSI, NFS, SMB, etc., and that all works well. But the replication stuff is the interesting part…
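
As a quick example of the sharing side (the dataset names here are made up): NFS and SMB exports are just properties you set on a dataset, and for iSCSI you carve out a zvol and point your iSCSI target software at it.

    zfs set sharenfs=on tank/photos    # export a dataset over NFS
    zfs set sharesmb=on tank/media     # export a dataset over SMB
    zfs create -V 100G tank/vhds       # block device (zvol) to expose over iSCSI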

The plan, which I am working on, is as follows:

  • Have 2 machines set up: one in-house and one in a datacenter (I have a dedicated box in the Hetzner datacenter). Both could be VMs (the one in the datacenter will more than likely be a VM).
  • Use the storage on the local system for whatever I need backed up.
  • Have a script which will take a snapshot of a given pool every 4 hours or so (a rough sketch of this is below the list).
  • That script should also dump the snapshot to a temporary location on the machine using zfs send.
  • That dump should be checked, compressed, broken up into little bits, and checked again… checking is important!
  • Take those little bits and send them to the datacenter, which will do lots more checking and import the files into the ZFS pool over there…
  • There may even be a two-way system to send from the datacenter back to the house…
  • Finally, the remote pool should be dumped to the SFTP backup space that Hetzner gives me… currently set at 100GB, but it can be increased as needed…
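
As a very rough sketch of the snapshot-and-dump steps (the pool name, paths, and chunk size are all placeholders, and the 4-hourly schedule would come from cron):

    #!/usr/bin/env python3
    # Rough sketch: snapshot a pool, dump it with zfs send, compress, split and checksum it.
    import subprocess
    from datetime import datetime

    POOL = "tank"                      # pool/dataset to back up (hypothetical name)
    DUMP_DIR = "/var/tmp/zfs-dumps"    # temporary local staging area
    CHUNK_SIZE = "1G"                  # size of the "little bits" to upload

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    stamp = datetime.now().strftime("%Y%m%d-%H%M")
    snapshot = f"{POOL}@backup-{stamp}"
    dump_file = f"{DUMP_DIR}/{POOL}-{stamp}.zfs.gz"

    run(["mkdir", "-p", DUMP_DIR])
    run(["zfs", "snapshot", snapshot])                     # 1. take the snapshot
    with open(dump_file, "wb") as out:                     # 2. zfs send | gzip to a file
        send = subprocess.Popen(["zfs", "send", snapshot], stdout=subprocess.PIPE)
        subprocess.run(["gzip"], stdin=send.stdout, stdout=out, check=True)
        send.wait()
    run(["split", "-b", CHUNK_SIZE, dump_file, dump_file + ".part-"])  # 3. break it into bits
    run(["sha256sum", dump_file])                          # 4. checksum to verify at the far end
    # Shipping the parts to the datacenter (rsync/scp) and importing them there
    # with "zfs receive" is left out of this sketch.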

That's the “plan”… Let's see how it actually works out…

Anyway, parts of the process I need to tweak:

  • Uploading and using as much of my upload bandwidth as possible (2 x 10Mb upload connections…). If I am backing up 800GB, which should be my first backup, I would like to use both pipes to the fullest. On a single connection, at 50% capacity, it would take 15.1 days to upload; if I can get both connections working at 80% capacity, giving me 16Mbit/s, it would be down to 4.7 days (the rough maths is below this list). With compression and deduplication, I can probably bring that down a bit more…
  • Backing up to SFTP… reading different things tells me this might not be such a good idea…
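
The back-of-the-envelope maths for those upload times looks something like this (decimal GB, ignoring protocol overhead, so the numbers come out slightly more optimistic than the ones above):

    # Rough upload-time estimate for the initial 800GB backup
    backup_bits = 800 * 10**9 * 8           # 800 GB expressed in bits

    single_link = 10 * 10**6 * 0.50         # one 10Mb/s link at 50%  ->  5 Mbit/s
    both_links  = 2 * 10 * 10**6 * 0.80     # two 10Mb/s links at 80% -> 16 Mbit/s

    for label, rate in [("one link @ 50%", single_link), ("both links @ 80%", both_links)]:
        days = backup_bits / rate / 86400   # 86400 seconds per day
        print(f"{label}: {days:.1f} days")  # roughly 14.8 and 4.6 days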

Some links which you might find useful:

Understanding Storage Spaces in Windows 8 and Windows Server 2012

So, Windows Server 2012 and Windows 8 have both RTMed in the last couple of weeks and will be available to the public in the next month or so (September for Server, October for Client). If you are an MSDN subscriber, you already have Client, and will (hopefully) get Server in the next couple of weeks… fingers crossed… Anyway, one of the interesting features I am waiting for is Storage Spaces. Tim Anderson's Gadget Writing blog has some information on how Storage Spaces works: handy notes on what to do and what not to do.