Tiernan's Comms Closet

Geek, Programmer, Photographer, Network Engineer…

2 Cable modems = Double speed? Part 3

[NOTE] This is part 3 in a series of posts. The rest can be found here.

In Part 1 of this series I explained the why and the what of this “project”. In Part 2 I did some basic testing of both MPTCP and MLVPN. I also mentioned trying MLPPP using vtund, but it has been a while since I did that testing, and it had not been on bare metal. So, this post is a follow-up, where I am using bare metal.

So, first, the setup:

  • The ProLiant box is running Debian 8.3 x64, and has both vtund and ppp installed
  • The Digital Ocean box is also running Debian 8.3, with vtund and ppp installed
  • I walked through the guide from John Lewis and made some changes to the configs; the main ones are mentioned below

Once done, I installed both iperf and iftop on both boxes, and ran

iperf -s

on the Digital Ocean box and

iperf -c 192.168.10.1 -d

on the local box (the -d flag runs the test in both directions at once). And, well, the results were not as expected. Pretty poor, actually:

First, using Squid installed on the DO box, I tried using wget to download a file through it. If I did this on the DO box itself, I was getting 100MBytes/s… When I ran it over the MLPPP link, well, under 7MBytes/s was achieved.

Then I thought it might have been Squid. So, since the file had already been downloaded to the DO box, I SFTPed into it over the MLPPP link and tried again… Again, a pretty poor result. I think I saw it hit about 7MB a second at one stage.

Here is what was showing on the DO box while running the SFTP download. You can see 2 connections from the 2 WAN links at home hitting the box, and they are balanced. It’s just nowhere near the speed they are capable of.

I did not get a screenshot of this, but when I tried with iperf, thinking the slowdown might have been overhead from SFTP or Squid, I was getting results matching what I was seeing with SFTP: 55-60Mbit/s for download and 40ish Mbit/s for upload. 40 is still faster than 1 link, mind you…

I mentioned that I had made some minor tweaks to the configs from what John had written. Mostly they were changes to how routing is done. In John’s case, he is bonding a DSL and an HSDPA connection, so he had extra setup for logging into his PPP modem and connecting. Also, when he set up the interfaces, he put the routing tables in there. I have mine set up in a single config file, as follows:
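(Rough shape only: the addresses below are made up, standing in for my public IPs and gateways, and this glosses over exactly how each vtun session gets pinned to its own WAN link.)

#!/bin/sh
# made-up addressing: 203.0.113.10/.1 = WAN1 address/gateway, 198.51.100.20/.1 = WAN2
# keep traffic sourced from each WAN address on that link's own gateway
ip rule add from 203.0.113.10 table 101
ip route add default via 203.0.113.1 dev eth1 table 101
ip rule add from 198.51.100.20 table 102
ip route add default via 198.51.100.1 dev eth2 table 102
# start the two vtun client sessions (renamed from adsl1/adsl2 to WAN1/WAN2)
vtund -f /etc/vtund.conf WAN1 my.server.example.com
vtund -f /etc/vtund.conf WAN2 my.server.example.com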

I have changed the names from adsl1 and adsl2 to WAN1 and WAN2, and the IPs are changed from internal IPs to my public IPs. I run this manually when setting up my connection.

Nothing else in his config files has changed. I did not do any of the masquerading stuff, mainly because this was just testing; I only wanted a tunnel to start with. Reading the vtund.conf file, you can see that encryption and compression are both turned off, and the same goes for the ppp configuration. I also don’t think the issue is CPU performance, since these are the screenshots of top running on both boxes:

In both cases, CPU usage is sub-6% for vtund, and SSH seems to be using less than 10%. So now I’m baffled as to why this is not performing as expected… More testing required!
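For reference, the encryption and compression settings I mentioned look roughly like this (from memory, so treat it as a sketch rather than a copy of my files). In the vtund.conf session:

WAN1 {
  passwd SomeSecret;   # made-up shared secret
  type tty;            # tty session, so pppd runs over the tunnel
  proto tcp;
  encrypt no;          # encryption off
  compress no;         # compression off
  keepalive yes;
}

and in the pppd options, compression is switched off there too, with multilink doing the bonding:

noauth multilink noccp nobsdcomp nodeflate novj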

[update 4/4/2016] – fixing images so they are clickable…

Bash on Ubuntu on Windows

Microsoft Build 2016 is on this week, and there were a lot of interesting developments yesterday, but the one that interested me the most is Bash on Ubuntu on Windows. Dustin from Ubuntu has more details, and Scott Hanselman has posted a technical video about this. This is very interesting, and I CAN’T WAIT TO GET MY HANDS ON IT! But I do have some questions, which I thought I would put down in blog format:

  • Based on the post by Dustin, it seems that low-level Linux calls are being handled and translated to Windows system calls. Which makes me think: could any Linux distro work? Could Arch Linux, Red Hat or CentOS work in the same way?
  • Will this work on Windows Server 2016 when it launches?
  • Given that it is translating at such a low level, could GUI applications work too?
  • Shut up and take my money! I WANT IT NOW!

So, there are my questions… This is very cool, and I cannot wait to get my hands on it. I’m just wondering whether it will be available to Windows Insiders sooner rather than later.

MPTCP, SSH, Squid, OpenVPN (and 2 Cable modems) = Double Speed? Not quite… Part 2

[NOTE] This is part 2 in a series of posts. The rest can be found here.

In my previous post I explained what I was trying to do… This post explains what I have been working on recently, and the performance results.
So, first, what have I tried? There are 3 different things, and here are some of their details. Some will need to be updated in other parts of this series, and others I will try to get back to eventually.

Hardware and servers used

To test this, I am using my HP ProLiant ML110 G5 running either Ubuntu or Debian Linux, with 2 GigE connections directly to the cable modems, and 1 connection to the LAN (for SSH and testing). The LAN has no gateway set, and the 2 WAN connections have DHCP enabled; they get fully public IP addresses. Upstream, I am using either Digital Ocean or ScaleWay VPS boxes.

Digital Ocean has the advantage of allowing different kernels, so I have been using them for testing MPTCP. As for ScaleWay, their bare-metal C2S/M/L boxes have between 4 and 8 cores (4 for the S, 8 for the M and L) and between 8 and 32GB RAM (S=8, M=16, L=32GB). The L model also comes with a 256GB SSD (plus the boot disk, which seems to be a network disk of some sort), and they all come with lots of bandwidth (I use the L because it has about 800Mbit/s to the internet).

Ping-wise, Digital Ocean is about 20-30ms away from the house (I picked London to host the servers) and ScaleWay is a little further away at about 50ms (they are based in France).

MPTCP (MultiPath TCP)

MPTCP (their site is a bit wonky as of writing, so bear with me…) is a Linux kernel patch that allows TCP connections to use multiple paths. Essentially, if you have WiFi and 4G in a phone and MPTCP is enabled, it should allow you to use both connections for TCP traffic, as long as the server upstream supports it. It also allows for easy failover if, say, you lose your WiFi connection. There is an example video of it on YouTube which shows the failover, and this video shows how they managed to get 50Gbit/s out of 6 10Gb Ethernet connections.
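With the patched kernel installed on both ends, a quick sanity check that MPTCP is actually in play is to look at its sysctls (names as I remember them from the out-of-tree patch, so treat them as approximate):

sysctl net.mptcp.mptcp_enabled
sysctl net.mptcp.mptcp_path_manager

mptcp_enabled should be 1, and the fullmesh path manager is the one that opens a subflow across every pair of addresses.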

When I was using MPTCP, I had a copy of Squid on both boxes, and told Squid locally to use the Squid on the upstream box (over an SSH tunnel, which was in turn over the MPTCP link) as a parent cache. Using this method, I could see (using iftop) that both connections were being used. For more proper performance testing, I set up a RAM disk on both machines and copied a Linux ISO to the Digital Ocean box. Then, using wget and Axel, I downloaded the files from Nginx on the server and checked the results. I can max out 1 single connection, plus use about 60-80Mbit/s of the second: about 420-440Mbit/s total. Disk was not the bottleneck, since I was writing to RAM, so more tests are required.
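For anyone wanting to copy the Squid-over-SSH part, the rough shape of it is below; the port numbers and hostname are made up for the example:

# on the local box: forward a local port to the Squid running upstream;
# the SSH session itself is what rides the MPTCP-enabled link
ssh -N -f -L 3129:127.0.0.1:3128 user@upstream.example.com

# local squid.conf: send everything via the tunnelled upstream Squid
cache_peer 127.0.0.1 parent 3129 0 no-query default
never_direct allow all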

MLVPN

MLVPN is a pretty interesting project that caught my eye. The idea is quite simple: you configure the local box and server as mentioned in their example guide, and run the MLVPN program on the server, then on the client. It creates 2 VPN tunnels between the 2 boxes and bonds them… In my case, I was given an IP of 10.42.42.1 on my box in the house and 10.42.42.2 on the server, and any traffic over that tunnel is bonded… Problem is, it seems to be quite processor-intensive: my Digital Ocean box was showing one CPU core (out of 2) maxing out at around 80%, and my ProLiant in the house was maxing out around the 70% mark, all while transferring data at around 100Mbit/s. I tried iperf and got the following:

Getting 50Mbit/s upload is good, really, since in theory my max would be 72Mbit/s before overhead. But 116Mbit/s down is less than a third of the max speed of a single connection. So, I tried just uploads and downloads…

Upload Only (from local machine to server)

Download Only (from server to local machine)

As you can see, the download speed has increased a little, to 176Mbit/s, and the upload speed is now at over 60Mbit/s!

Still… download is as important as upload, and given I haven’t managed to get it to max out one connection, never mind 2, even more testing is required…

MLPPP (using VTUN)

This is one I need to come back to… I used the guide from John Lewis but was only managing to get about 100Mbit/s. I was originally using a VM (so disk may have been the issue) and also had the connection behind my EdgeRouter, so it might have been firewall rules causing a slowdown. But I do need to come back to this soon… Watch this space.

Conclusions?

Well, at the moment, all I can conclude is that more testing is required. Upload-wise, I can use most of my bandwidth with MLVPN, and I did see promising results with MPTCP. I gave up a bit too early with MLPPP, so more testing is required there too. Also, all tests so far are just iperf between boxes; I did use Squid with the MPTCP box for a while, but not for proper performance testing. And even once this is all sorted out, I will need to turn this into a proper “router” too… So, conclusion? This was originally meant to be a 2-parter… now it looks like it will require a lot more parts… Watch this space…

Continuous Integration and Blogging

Back in August of 2012, I started this site using Git and Jekyll. I hosted most of it at home, pushing to a server in the house. Then, a few years back, I moved to pushing the files to Amazon S3, with CloudFront doing distribution. The last move had me hosting the files on NearlyFreeSpeech.NET with CloudFlare doing the content distribution… Well, that changed over the last few days… again…

Currently, you are still hitting CloudFlare when you hit this site, but the backend is back to being hosted on Amazon S3. How the files get to S3 is the more interesting part now. All the “code” for this site is up in a GitHub repo, and any time something is checked in, Travis CI kicks off, builds the files using Jekyll and pushes to S3 using s3_website. All my “private” keys are hidden in Travis-CI, so no one can access them but me. This makes updating the site a lot easier: I can create a file in GitHub directly, preview it, make changes, etc., and then check in. Once checked in, Travis kicks off, builds and deploys. All good!
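For anyone wanting to do the same, the .travis.yml ends up being short, roughly along these lines (a sketch rather than my exact file; the S3 keys live in Travis’s encrypted environment variables, not in the repo):

language: ruby
rvm:
  - 2.2
script:
  - bundle exec jekyll build
after_success:
  - bundle exec s3_website push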

It also means that if “bugs” are found on the site (by you, my dear reader), or if you have queries about anything, a “bug report” can be opened on the issues log. I already have a bug open for making the site faster… Anything else you want me to change?

2 Cable modems = Double Internet Speed? Well… not really… Part 1

[NOTE] This is part 1 in a series of posts. The rest can be found here.

First, a bit of background, and then I will explain what I am currently running in Part 2

For the last 15 or so years, I have had at least 2 internet connections into the house… 2 of them have always been cable modems from NTL, which became UPC, and is now Virgin Media. When I started, I think the modems were 150/50kbit/s and 600/150kbit/s, and they have steadily increased in speed, currently at 360/36Mbit/s each… But they have always been somewhat separate, and single-thread downloads have always been limited to 1 of the connections… I have been looking for ways around this for years…

It started with a Linksys RV042 router, which allowed me to load-balance my connections… At the time, and I can’t even remember when this was, my total bandwidth would not exceed what the router could handle. The RV042 has 2 10/100Mbit WAN links and 4 100Mbit/s LAN ports… So, when the connection bandwidth increased, I moved to a new router…

The next router vendor I tried was Mikrotik. I tried a few different options, including an RB1100 and running their RouterOS on x86 hardware… Both worked, well, OK, and the load balancing with nth did what I needed, along with other things, like routing traffic destined for some sites (like BBC iPlayer) over a VPN. But in the end, because of hardware issues and performance problems with the x86 machine (Mikrotik at the time was limited to 2GB of RAM on x86 hardware), I ended up at PfSense.

PfSense was installed on the same hardware: an HP ProLiant ML110 G5 with 8GB RAM, a Core2Quad processor and 12 GigE network cards… And, on PfSense, things were good… Performance was stable, load balancing worked as expected, I could set some traffic to go over certain links, etc. All was good… But I lacked IPv6… Plus, the HP used a LOT of power…

The current instalment of my network uses a Ubiquiti Networks EdgeRouter POE. To show the difference in power, check out the graphs from my Ubnt mPower device. ProLiant first, EdgeRouter second:

Plus, the EdgeRouter does not produce as much heat, and it’s a LOT smaller than the ProLiant! It does all the same things I could get PfSense to do, in a much smaller package (I could, in theory, get a smaller box for PfSense).

So, where does that leave us? Well, I now have 720Mbit/s down and 72Mbit/s up, if I can spread transfers over multiple threads… But what if I don’t? What’s next? Well, in the second post, I will explain what I have been trying to do in recent weeks, and what I can do now…

Announcing B2 Uploader and Hubic Testing 2.0

I have 2 new side projects to announce on the site today. The first has been running for a while (first check-in was December 28th) and it’s called B2Uploader. It’s a fairly simple Windows application to upload files to BackBlaze B2. If you are not familiar with BackBlaze, they provide unlimited backup storage for the low price of a fiver a month. They are the guys who design the BackBlaze storage pods (I want one, by the way!) that allow them to provide unlimited storage for that fiver a month (I currently back up over 4TB to them!), and late last year they started offering B2, which is a storage platform on their pods with a (somewhat) easy to use API. AND IT’S CHEAP! Half a cent (0.5c) per gig stored per month! That’s crazy cheap!

B2Uploader uses the B2 API to upload files (it could do more, but currently, as the name suggests, it’s upload only). It’s quite simple, and all the code is available. More stuff is coming over the next few weeks. Some of the usual badges for open source applications are below. If you want to shout at me, shout in the Gitter chatroom and I will reply. You can see the latest builds over on Travis-CI, and the latest releases are available on GitHub.

Join the chat at https://Gitter.im/tiernano/b2uploader

Build Status

The second project is still in the planning phase, and it’s an update to an older project I was working on called HubicTesting. The name is, very cleverly, wait for it… HubicTesting 2.0! I have mentioned Hubic before here. Cheap (about a tenner a month) for lots of storage (10TB!), but an odd API… It uses Swift for storage, but has a weird(ish) API for authentication. Anyway, more details will be on the site once I write it up.

So, anyone needing to upload files to B2, check out B2Uploader. Want to work with stuff on Hubic? Check out HubicTesting 2.0. Any questions, drop me a mail or find me on the Gitter channel. Have a good one!

Edge Router, Sophos UTM, DMZ and LAN Networks

I have been using an EdgeRouter POE as my main router for most of the network for the last few weeks (some of the network still uses PfSense as a router, but that’s being removed soon), and I am quite happy with it. I also have a second router, a Sophos UTM VM, between my first LAN (essentially a DMZ) and my client LAN (there will be more “LANs” over there soon). The client LAN is NATed between the DMZ and the LAN, which means anything on the LAN I want to access from the DMZ has to be port forwarded… Ideally, not much on the LAN should be accessible from the DMZ, but in my initial setup, stuff like Plex, etc., is…

What I wanted to do was set up a proper firewall between both networks, without the use of NAT… To do this, I first had to disable the masquerading rules in Sophos:

Next, on the EdgeRouter, I added a static route to point at the new network:
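On the EdgeOS CLI, that is just a one-liner (the subnet and next-hop below are made-up examples, not my actual addressing):

configure
# route the client LAN (192.168.2.0/24 here) via the Sophos UTM's DMZ-side address
set protocols static route 192.168.2.0/24 next-hop 192.168.1.2
commit
save
exit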

And finally, under firewall rules, I allowed what I wanted to allow (in this case, SSH from any DMZ client (not advised) to my Mac Mini).

And that, as they say, is that! So far, so good!

Network and HomeLab V.Next (Part 4)

So, after some messing, tweaking and thinking, I have made some progress with the home lab… or at least broken some stuff… I mentioned previously that I had a Ubiquiti Networks EdgeRouter POE in the home lab. Originally, the plan was to use a virtual PfSense box for my core router. Given the power usage of the current PfSense box (I have 2 mPower Pros watching power in the lab), I am now thinking of moving to just the EdgeRouter for, well, edge routing… Below is the usage of the ProLiant for the last 12 hours or so:

And for the same period, here is the usage for the EdgeRouter:

I am also setting up a DMZ for front-facing services, and then a LAN for inside-facing machines. There will be a firewall (currently thinking Sophos UTM or similar) between the DMZ and the LAN. Some machines will be able to access the DMZ, and there may be machines allowed from the DMZ into the LAN, but only for some things… I’m not even sure if that will be done…

I also need to work out the VLAN side of things. I have currently thought of the following VLAN setup:

  • WAN 1 (connected directly to the Cable modem)
  • WAN 2 (again, direct to cable modem)
  • LAN Network
  • DMZ Network
  • VoIP Network
  • IoT (stuff for running the house, like Nest, the mPower devices and the like)
  • Media network (Plex, Roku, Apple TV, Chromecast, etc. Not sure if I need to separate this, but it might be done…)

The current Cisco 3560G switch should handle all that without problems, so no new switch is needed… Let’s see what I can break over the next while…
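As a rough idea of what that looks like on the 3560G (the VLAN IDs and port below are hypothetical, not a final plan), defining a couple of the VLANs and trunking them towards the EdgeRouter is just:

vlan 10
 name LAN
vlan 20
 name DMZ
interface GigabitEthernet0/1
 description trunk to EdgeRouter
 switchport trunk encapsulation dot1q
 switchport mode trunk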

Windows Server 2012 R2 returning to The GodBoxV2

After a few months of running Sabayon Linux on the GodBoxV2, I am going back to Windows Server. Back around October of last year, I installed the Windows 10 Preview on the GodBoxV2, and, well, there were issues with graphics drivers, etc. Then, some time after (I can’t remember off hand when), I moved to Sabayon Linux. It’s based on Gentoo, but has a lot of the components pre-built. Gentoo is a “build from scratch” sort of OS: you get a basic kernel and a basic set of components, but you build everything else from scratch… including rebuilding the kernel if you want. Sabayon, on the other hand, has all that mostly prebuilt, though you can still use Gentoo’s Portage to build stuff yourself.

Anyway, for the last few months, all was going mostly well… but I miss Windows. And, given I have pretty much always run a server OS on my main workstations, I am heading back to Server 2012 R2. I was tempted by 2016, but it’s still very early days… Maybe I will run it as a VM for a while, but we will see…

ZFS Home storage pool

Over the weekend, the BTRFS pool for my /home directory on Linux failed… Not sure what happened, but it made me do something I have wanted to do for a while: build a ZFS pool for my home dir.

First things first: the pool consists of 4 2TB hard drives and 1 128GB SSD. It’s set up in RAIDZ1 (the equivalent of RAID 5), and the SSD is set up for caching.

To create the pool, I ran

zpool create home raidz sda sde sdf sdg

then, to add the cache drive

zpool add home cache sdd
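To sanity-check the layout afterwards (not something I noted at the time, but worth doing), this should show the raidz1 vdev with the four disks, plus the SSD listed under cache:

zpool status home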

The pool (in my case) got mounted at /home, and then I restored my backup to it. To do some tests, I ran the following…
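I didn’t keep the exact commands, but a simple dd-based test along these lines gives the sort of numbers below (the file size is arbitrary):

# write ~10GB, forcing a flush at the end so the write figure is honest
dd if=/dev/zero of=/home/ddtest bs=1M count=10240 conv=fdatasync
# read it back (note: ZFS's ARC can serve a lot of this from RAM, which flatters the read figure)
dd if=/home/ddtest of=/dev/null bs=1M
rm /home/ddtest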

614MB/s write and 5.3GB a second read is nothing to be sniffed at! 🙂