Tiernan's Comms Closet

Geek, Programmer, Photographer, Network Engineer…

Cloud Desktop becoming a reality

I have talked about the theory of the “Cloud Desktop” twice on my older blog (Rackspace’s Hosted Virtual Desktop and More on the desktop in the cloud), back in 2011. Since then, a few things have changed.

With all the increased bandwidth for mobile devices (4G and 5G, expanding Wi-Fi, etc.), the idea of having your desktop live in the cloud is getting nearer… Interesting times, my friend… interesting times…

[Update] Thinking a bit more about this: if this was to work correctly, your phone could be everything required. Get up in the morning and check your emails, calendar items, and news on your phone. Head off to the coffee shop and plug your phone into a “laptop”-style device like a NexDock to catch up on some emails, check more news sites, etc. When you get into the office, plug your phone into the docking station and Remote Desktop into your cloud desktop to do your development work, or whatever needs to be done. Basic Office apps and email can be run directly from the phone. When you get home, you can use the Microsoft Wireless Display Adapter to watch videos on the big TV, or show web pages. And if your cloud desktop is available outside of your work network, you could work anywhere too…

double speed Internet Part 9 – Going Back

[NOTE] This is part 9 in a series of posts. The rest can be found here.

Well, the double internet experiment is about ready to be finished… After 9 posts, 4 months, lots of sweating, many painful nights trying to figure out why something stopped working, shouting when Netflix did not work, wondering why my internet connection was so slow, and many, many other problems, I have decided to wind down the project. In the last 9 posts, I have learned a lot, and I hope I have helped someone figure out some stuff on their end. Even though this is the wind-down of the project, there are still new things I have to share.

  • I found another project that has potential for speeding up the internet: VTrunkd. After some testing, it does seem to manage to speed up the connection, but either limits on the hardware I have in the house, or limits of the hardware in the cloud, or even the software itself, stopped me in my tracks… I did see 400Mbit/s out of it at one stage, using 200Mbit/s from each modem… it's close, but it's not the full 720…
  • Messing with Quagga/Zebra, as mentioned in the previous post, has been, well, interesting… I did manage to get all OVH traffic sent through their server, Digital Ocean traffic sent over that box, and everything else over Hetzner. I added an Azure box to the mix for a while, as well as a Vultr box, but it got very messy, very quickly. If I had something automated, it would be better.
  • The idea of having a /29 IP range in Hetzner and forwarding it through the tunnels back to the house did work. My Meraki MX64 had one IP address, I had a mail server on a second, everything else on a third, and I was planning on using more… but it's just, well, again, messy. So, I will be going back to the idea of 2 IP addresses, and hoping whatever I put in front of the network can figure stuff out…

So, what am I moving to? Well, that's a good question… Currently, I have the Meraki MX64 plugged directly into the modems, protecting my LAN. So far, so good, but due to hardware limits, it maxes out at around 260Mbit/s. So, that's out of the question for the main network! I did at one stage have Sophos UTM Home Edition running. Sophos also have their XG Firewall available for home use, so I might try that… There is also pfSense, which I have used before… And there may be more… Maybe there will be a new series reviewing these home firewalls? We will see…

Meraki and Ubiquiti Networks Gear Update

In part 6 of my Double Internet Series I mentioned I was running a Meraki MX64 in the network, and said I would write up about it. I am taking this opportunity to also write up about the Ubiquiti gear in the house.

  • First on the list is my older Ubiquiti EdgeRouter PoE. It is currently in the process of being decommissioned, or used for something else. It was the main edge router for the network: it had both internet connections connected, and did routing, firewalls, etc., but with the Proliant taking over as a router, it is not required as much any more… It's still on, mainly because it's still a DHCP server, but not much else.
  • Next up are 2 Meraki MS220-8 switches. GodBox1 and GodBox2 both connect in here, and are bonded, as is everything else on the network. The MS220-8 has 8 GigE ports, but also has 2 SFP ports. I bought 4 SFP Ethernet adapters and have a short cable running between the switches. That uplink is also bonded. All going well so far!
  • All Meraki hardware can be managed through the Meraki Dashboard. Check out their site for more details and examples of how to use it.
  • I bought one of the MS220s from eBay a few months back, and loved it. Then I realized that you can get your hands on free gear (the MX64, an MS220 and a Wi-Fi access point) if you attend their webinars. Terms and conditions apply, but check them out!
  • I have 2 Ubiquiti UniFi APs, one in the front of the house, one in the back. They are connected to one of the MS220s, but don't work with its PoE (maybe the EdgeRouter could do that, since it's PoE-capable…), so there are injectors for them. Anyway, the network ports on there are VLANed to the MX64 (more on that later), and the default traffic goes to a management VLAN.
  • The MX64 has a static internal IP on my DMZ network, and uses the Proliant as an upstream connection. Upstream on the Hetzner server, all traffic coming from the MX64's IP uses one of the IPs from my /29 block. All traffic to that IP is also forwarded directly to the MX64.
  • I have 2 small, unmanaged switches (a cheap 8-port Linksys and an 8-port TP-Link) which are used for separate things: the Linksys has 4 Raspberry Pis, which run a GlusterFS cluster, plugged into it, and the TP-Link connects to my printers.
  • I also have a Mikrotik CRS226-24G-2S+IN, which has 2 10Gbit SFP+ ports, and plan on using this for higher-speed networking soon, as well as a Cisco 3560 48-port switch, which also has 4 SFP ports (GigE) and may come in handy for something soon…

So, that's the network currently. Any questions, please leave a comment.

double speed Internet Part 8 – Routing Around

[NOTE] This is part 8 in a series of posts. The rest can be found here.

At the end of my last post I asked the question about routing traffic to different servers based on their distances, etc… Well, after a bit of messing, I can say it kind of works! Here is a quick overview:

  • The server in the house now has multiple OpenVPN connections (2 to Hetzner, 1 to OVH (with a plan to double), 1 to Digital Ocean (again, to be doubled), and I am planning 2 to Azure as well).
  • Quagga/Zebra has static routes (currently static, planning on dynamic soon… more eventually) to different servers depending on where they are. For example, all traffic to the Hetzner network (including their Storage Boxes) goes through the Hetzner link, Hubic traffic goes through OVH, Azure (currently) and AWS traffic, as well as some CDNs, go direct over either WAN1 or WAN2 in the house, and some other stuff (CrashPlan currently) goes through Digital Ocean. Everything that has no static route goes through Hetzner… (There is a rough config sketch after this list.)
  • Ideally, the static side of things should be removed, and a more dynamic setup put in place. How that works, I have no idea… Spotify have 2 posts about their SDN Internet Router (part 1 and part 2), which is an interesting idea… More digging and research is required.
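
To give an idea of what that looks like, the per-destination routing is just static routes in zebra.conf pointing at the different tunnel endpoints. A rough sketch (the prefixes and next-hop addresses here are placeholders, not the real ones):

! example Hetzner-hosted prefix over the Hetzner tunnel
ip route 203.0.113.0/24 10.8.0.1
! example OVH-hosted prefix over the OVH tunnel
ip route 198.51.100.0/24 10.9.0.1
! everything with no specific route follows the default, which points at Hetzner
ip route 0.0.0.0/0 10.8.0.1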

So, there you have it. Everything currently seems to be working, mostly, and tweaks can be made easily… I have a couple of posts in my head, including something to do with automating bringing up new machines (probably with Ansible or something like it), more monitoring, and some other stuff too… Any questions, leave a comment, and I will get back to you.

[UPDATE] I wrote a quick and dirty app called WhoIsToZebraConfig, which takes an AS number, looks up the info in the Merit RADb (with the help of some code from Coder Buddy) and outputs what you need to put into your Zebra config… It should save me some time, and it might save you time too… Shout if you have questions!
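
The app does the lookups for you, but the rough idea can be shown from a shell: ask RADb for the route objects originated by an AS, and turn each prefix into a Zebra static route. This is only a sketch (AS14061 and the 10.8.0.1 next hop are just examples):

# list RADb route objects for an AS and print zebra static routes for them
whois -h whois.radb.net -- '-i origin AS14061' | awk '/^route:/ {print "ip route " $2 " 10.8.0.1"}'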

double speed Internet Part 7 – ECMP (kind of)

[NOTE] This is part 7 in a series of posts. The rest can be found here.

In the last post I mentioned I am now using Hetzner for hosting a dedicated box. That's still live, and going well. I have a /29 IP range (6 usable) and also 2 other IPs. So far, so good… But because I was using a SOCKS server, I was not fully able to use the /29 IPs… I use something like the following:
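
Something along these lines (the addresses are placeholders: 203.0.113.10 stands in for one of the public IPs, 10.0.1.10 for the internal host it maps to):

# map a public IP to an internal host (placeholder addresses)
iptables -t nat -A PREROUTING -d 203.0.113.10 -j DNAT --to-destination 10.0.1.10
iptables -t nat -A POSTROUTING -s 10.0.1.10 -j SNAT --to-source 203.0.113.10
# and let the forwarded traffic through
iptables -A FORWARD -d 10.0.1.10 -j ACCEPT
iptables -A FORWARD -s 10.0.1.10 -j ACCEPT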

Essentially, for each public IP I have that I want to map to an internal IP, I have a PREROUTING and POSTROUTING rule, plus the required forward rules… But if SOCKS is used, then that goes out the window, since TCP traffic will look like it's coming from the SOCKS server… So, I killed the SOCKS server, removed the iptables rules, and then realized that while outgoing traffic was being balanced somewhat (2 default rules on the internal box pointing at the OpenVPN IPs from the Hetzner box), incoming was a problem. Hetzner knew how to get to my internal network, but only through one IP… Enter Quagga and Zebra…

Quagga is a routing software suite, which can do protocols like OSPF, BGP and RIP, and Zebra is the component that does static routing. Using their documentation on static routes, I created a static route to my internal network with 2 next hops, the OpenVPN IPs from the internal box… and, after restarting Quagga, it all works! Happy days! Now I can forward IPs from outside the network to inside the network correctly, and they look like they are the public IP!
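
In zebra.conf, that boils down to two static routes for the same prefix, one per tunnel. Something like this (the LAN prefix and tunnel IPs are placeholders):

! 192.168.10.0/24 stands in for the internal network,
! 10.8.0.2 and 10.9.0.2 for the two OpenVPN IPs on the house side
ip route 192.168.10.0/24 10.8.0.2
ip route 192.168.10.0/24 10.9.0.2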

So, what's next then? Well, I now have a server in Germany (Hetzner) and one in France (OVH), and can spin one up in the UK or the US (Digital Ocean). Given that I have Quagga running on the box, I am now thinking of trying to see if it's possible to route traffic depending on distance or something similar… If I am trying to hit a server in Hetzner's DC, I should go through Germany. If it's in Digital Ocean, go through either the US or UK servers, and the same with OVH. Then figure out who has the fastest links to, say, Amazon, Azure, Netflix, BBC, Dropbox, etc., and add either static or dynamic routes to the router… Essentially, that's the theory… let's see how that works…

double speed Internet Part 6 – Hetzner Edition

[NOTE] This is part 6 in a series of posts. The rest can be found here.

It's been a while since I posted, and there are some, well, pretty major changes since the last time… Let's start at the beginning.

Last time, I was using Digital Ocean as my hosting provider. I was using their $20 a month server (2 cores, 2GB RAM, 40GB SSD, 3TB transfer), and it was all good… But I noticed that every now and again I would need to reboot the box. I also noticed that when transferring large files or using higher bandwidth (400Mbit/s+), both cores were pegged at 100%. So, I wanted to move to something with more power…

I was also limited on IP addresses. Yes, Digital Ocean do offer IPv6, but I could only get 1 IPv4 address… and I wanted more…

So, I went back to some old friends of mine, Hetzner, and bought a dedicated server with a Quad Core Xeon E3, 32GB RAM, 4*3 TB HDDs, 1Gbit/s network connection with 30TB transfer per month and a KVMoIP plugged in. I also got 2 extra standard IPs and a /29 (8 IPs, 6 usable). I will explain that next. I installed Debian 8.4 on the first disk, and I am planning on using the other 3 for storage of some sort. I then installed the MPTCP kernel, OpenVPN, Squid and a Socks server (same as the Digital Ocean box) and reconfigured the home machine… All good! Now when browsing the web, everywhere thinks I am in Germany, but so far, so good… Speed tests are about the same, but I have my theory about ECMP to try this weekend.

Because of the extra IPs, I am working on doing full IP forwarding, not just port forwarding. One of my IPs is pointing directly at my Meraki MX64 in the house (a post on the Meraki stuff is coming, eventually…) and another at the Proliant box, and I plan on pointing other IPs at machines in the DMZ or a firewall of some sort. The /29 is routed through the IP pointing at the Proliant, so that makes life easier. The original IP is only used for SSH and OpenVPN from the house; it should not do much else. All network traffic in the house comes from one of the other IPs.

Again, so far, so good. Hopefully the ECMP stuff works correctly, so I will do an updated post soon.

Useful Web and Desktop Apps 2016 edition

I have decided to do a post on some of my favourite tools to use for development, administration, etc. It's kind of like Hanselman's Ultimate Tools List, but not as popular and about 2 years newer… Anyway, the list is available here, and will be updated over time, much like my Daily Carry and Computers pages. If you are interested, you can add links through GitHub by editing the toolslist.yml data file.

(Mad) Max Speed – The Road Warrior (Internet connection) (double speed internet Part 5)

[NOTE] This is part 5 in a series of posts. The rest can be found here.

This post is going to be an update and a theoretical post. Probably very little “new” stuff going on here; mostly updates, and what I am planning on doing later on.

This week, I have been out of the office sick, so I have not done much work, but I have been surfing the web, watching videos, downloading stuff, etc., so I have an idea of how things are going. First, as mentioned in the previous post, I have MPTCP, Squid, SOCKS servers, OpenVPN and iptables doing their magic. There are 2 OpenVPN tunnels between the house and Digital Ocean. All TCP traffic (bar port 80) is sent over SOCKS to the box in the cloud using RedSocks. All UDP traffic is sent direct over OpenVPN. Since MPTCP is in the mix, all SOCKS traffic is actually split over the 2 connections. All port 80 traffic, and 443 (if the client is using the local Squid as their proxy), is sent round-robin between the 2 upstream IPs to Squid (the 2 OpenVPN endpoints).
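
For reference, the round-robin parents are just two cache_peer lines in the house Squid's config, one per tunnel endpoint. A minimal sketch (10.8.0.1 and 10.9.0.1 are placeholders for the OpenVPN endpoint IPs on the cloud box):

# squid.conf on the house box (placeholder endpoint IPs)
cache_peer 10.8.0.1 parent 3128 0 no-query round-robin
cache_peer 10.9.0.1 parent 3128 0 no-query round-robin
# force requests through the parents instead of going direct
never_direct allow all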

Things I have noticed:

  • Every now and again, RedSocks crashes… just full-on dies. It's just a matter of starting it again, but it's a pain…
  • I have had to restart Squid a couple of times… not too often though.
  • There was a power outage in the house a few days back… so, when everything came back online, it was a bit of a pain bringing all the connections back to life. I do have to figure out a better plan.

I still have to read more on this ECMP stuff. Hopefully it will do what I am hoping.

Now for the theoretical stuff. I started thinking: could this work outside the house? Could you build this into something smaller, like a Raspberry Pi, stick 2 or more USB modems in, connect it back to a server in the cloud, set up P2P OpenVPN connections and then get more than a single modem's speed? The problems I can see are around MPTCP. I am not sure if it has been ported to ARM to run on a Raspberry Pi. Second, the max you could ever get out of it is 100Mbit/s, given the 10/100Mbit network port on board… and you may need extra power for the USB dongles. Also, getting P2P connections may be complicated, given the non-static IPs on the modems, though, in theory, non-P2P OpenVPN could work… Again, it's a theory. I had the thought and that's where the title came from… anyway, throwing it out there…

2 Cable Modems = Double Speed? Part 4

[NOTE] This is part 4 in a series of posts. The rest can be found here.

So, this week I went in a completely different direction than I had been thinking recently…

So, the basic theory is as follows:

  • I am still using MPTCP kernels on both the upstream and the local machine
  • I now have 2 P2P UDP OpenVPN tunnels between the house and the cloud. An example config is here
  • All TCP traffic (bar port 80) that hits the router in the house is redirected to RedSocks (there is a config sketch after this list)
  • RedSocks uses a SOCKS server, Dante, as an upstream server on the cloud box
  • Since the SOCKS traffic is over TCP (inside the UDP OpenVPN tunnel), it uses MPTCP
  • Having SOCKS running gives me quite the download speed; turning it off does not, hence the following tweet

  • I am also noticing that I am starting to hit the limits of my upstream VM. If downloading or uploading at speed, the processor cores (2 in the case of the box I am currently running) are pegged at pretty much 100%… well, 80ish, but that's because the other 20% is being used by Dante. I am noticing I can hit a full 72Mbit/s up, but the max currently downloading is about 400, maybe 450… Need a faster box now…
  • I mentioned port 80 not being sent over SOCKS. That's because it's redirected to Squid. Squid (in house) then uses Squid (in cloud) as a parent. There are 2 round-robin parents for Squid, one on each OpenVPN connection IP address.
  • All other traffic (UDP, ICMP, etc.) is sent over the OpenVPN connection… currently only one is picked, but I have a cunning plan…
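
For the RedSocks piece mentioned above, the setup is basically a small redsocks config pointing at Dante, plus an iptables REDIRECT rule on the router. A rough sketch, with placeholder addresses, ports and interface names (10.8.0.1 standing in for the cloud box's tunnel IP):

base {
    log_info = on;
    daemon = on;
    redirector = iptables;
}
redsocks {
    local_ip = 127.0.0.1;
    local_port = 12345;
    ip = 10.8.0.1;
    port = 1080;
    type = socks5;
}

And the iptables side, pushing LAN TCP traffic (except port 80) into it:

# send TCP (bar port 80) arriving on the LAN interface to the local RedSocks port
iptables -t nat -N REDSOCKS
iptables -t nat -A REDSOCKS -p tcp -j REDIRECT --to-ports 12345
iptables -t nat -A PREROUTING -i eth0 -p tcp ! --dport 80 -j REDSOCKS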

The cunning plan? Well, if I am reading the internet correctly, and I would like to think I am, I think ECMP, or Equal-Cost Multi-Path routing, could help… Again, it's a fledgling idea currently, and I am still reading the documentation, but if it works… Well… I'm not sure… let's see…
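
To give an idea of what that could look like on Linux, iproute2 can install a multipath default route over the two tunnels. This is only a sketch with placeholder addresses and interface names, not something running here yet:

# spread the default route across both tunnels (placeholder IPs / interfaces)
ip route replace default scope global \
    nexthop via 10.8.0.1 dev tun0 weight 1 \
    nexthop via 10.9.0.1 dev tun1 weight 1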

Installing Jekyll on Bash On Ubuntu on Windows

At the 2016 Build conference, Microsoft announced that Bash on Ubuntu on Windows was coming. Well, it came out last week, and I installed it as soon as I could! My next challenge was to get Jekyll installed and running on it, so I can build and preview this site on my Surface Book.

So, first, I needed to install version 2.0 of Ruby. There is a bit of messing involved in this, but to start:

apt-get update
apt-get install ruby2.0

Now, when you run

ruby -v

you will still see ruby 1.9.x installed… and the github-pages gem, which includes Jekyll 3, requires Ruby 2.0… ugh. After reading this very long bug report, I got this:

# Rename original out of the way, so updates / reinstalls don't squash our hack fix
dpkg-divert --add --rename --divert /usr/bin/ruby.divert /usr/bin/ruby
dpkg-divert --add --rename --divert /usr/bin/gem.divert /usr/bin/gem
# Create an alternatives entry pointing ruby -> ruby2.0
update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby2.0 1
update-alternatives --install /usr/bin/gem gem /usr/bin/gem2.0 1

Now, when I run ruby -v, I am told I am on version 2.0! Happy days! Next, I installed bundler using

gem install bundler

which uses a Gemfile to install the required gems, so running

bundle install

should install all required gems, but no luck… I tried adding ruby2.0-dev to the mix

apt-get install ruby2.0-dev

but running jekyll from bash said it did not exist, and running

bundle exec jekyll build

failed with a memory issue… ugh…

Anyway, next, I tried

gem install github-pages -V

-V makes it verbose, so you can see what's going on… after a bit of time, and lots of output to the screen…

jekyll build
bash: /usr/bin/jekyll: No such file or directory

FECK! So, after a bit more digging, I found that jekyll is actually in /usr/local/bin/jekyll. Running

ln /usr/local/bin/jekyll /usr/bin/jekyll

solves that problem!

Now running

jekyll build

works perfectly, as does

jekyll serve

HAPPY DAYS! Mind you, I am only using this as a testing and writing system. I have not tried s3_website just yet, but that is being sorted elsewhere anyway. Maybe my next post will explain that…