Tiernan's Comms Closet

Geek, Programmer, Photographer, Network Engineer…

Send Emails as a Distribution Group in Office365

I am in the process of moving my email domains from one Office365 plan (a Professional plan) to an Enterprise plan. During the move, I set up some email aliases for older email addresses which are still in use, but which I don't send a lot of email from. When I do need to send email from these addresses, PowerShell makes it possible. The full details are listed in "how to send as an alias in Office 365". It works perfectly! And, as a bit of shameless self-promotion, if you are interested in Office365, why not drop me a mail at Tiernan at LimitedSlipNetworks dot com and I can set you up with a trial and more information.
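
For reference, the gist of the trick is to connect to Exchange Online with remote PowerShell, create a distribution group that owns the old address, and grant yourself Send As rights on it. A rough sketch (the group name and addresses below are just examples):

    # connect to Exchange Online remote PowerShell
    $cred = Get-Credential
    $session = New-PSSession -ConfigurationName Microsoft.Exchange `
        -ConnectionUri https://ps.outlook.com/powershell/ `
        -Credential $cred -Authentication Basic -AllowRedirection
    Import-PSSession $session

    # create a distribution group that owns the old address (example values)
    New-DistributionGroup -Name "OldAddress" -PrimarySmtpAddress "old@example.com"

    # give my main mailbox Send As rights on that group
    Add-RecipientPermission "OldAddress" -AccessRights SendAs -Trustee "me@example.com"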

IPv6 Firewall rules for MikroTik RouterOS

After yesterday's post on IPv6 networking in the house, I realized that all machines internally had publicly facing IPv6 addresses! I started to panic, then went looking online, and found the following script:
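
It is a short set of /ipv6 firewall filter rules, something along these lines (sit1 stands in for whatever your tunnel/WAN interface is called, so adjust to match):

    # allow established/related and outgoing traffic, drop unrequested inbound
    /ipv6 firewall filter
    add chain=forward connection-state=established,related action=accept
    add chain=forward protocol=icmpv6 action=accept comment="ICMPv6 for ND and path MTU"
    add chain=forward out-interface=sit1 action=accept comment="allow outgoing"
    add chain=forward in-interface=sit1 action=drop comment="drop unrequested inbound"
    add chain=input connection-state=established,related action=accept
    add chain=input protocol=icmpv6 action=accept
    add chain=input in-interface=sit1 action=drop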

This script, when run on your RouterOS board, will allow established and related connections, allow outgoing connections, and drop anything incoming that has not been requested… so now everything inside the network should be more secure. I am new to this IPv6 stuff and still learning… but I am getting there…

IPv6 + MikroTik + Linux + Windows

I have been wanting to set up an IPv6 network for a while now, but never had the hardware or network to support it. My broadband modem, a Cisco EPC3925, was pretty useless… but with the advent of bridging on the Cisco EPC3925, it now works!

The first thing I needed to do was set up a Tunnel Broker account with Hurricane Electric. I got a /64 block of IPv6 addresses, which should do me for a while… 🙂

Next, I followed the config example from the MikroTik Wiki page: My First IPv6 Network. In my case, I only ran through most of router 1's config, and did not create the "routing between segments" or the OSPFv3 backbone… I did give my internal LAN port an IPv6 address, as well as an IPv4 address.
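
The RouterBoard side of that looks something like the below (all the 2001:db8: prefixes and IPv4 addresses are placeholders; the real values come from the tunnel details page on the HE site):

    # 6in4 tunnel to Hurricane Electric (placeholder addresses)
    /interface 6to4 add name=sit1 disabled=no \
        local-address=203.0.113.10 remote-address=216.66.80.26

    # our side of the tunnel /64, plus a /64 on the internal LAN port
    /ipv6 address add address=2001:db8:1::2/64 interface=sit1
    /ipv6 address add address=2001:db8:2::1/64 interface=ether2 advertise=yes

    # send everything out the tunnel by default
    /ipv6 route add dst-address=::/0 gateway=2001:db8:1::1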

Next, on my Windows Server machine, I gave it a static IPv6 address (since I don't have IPv6 DHCP set up… yet…) and told it to use the IPv6 address I gave the RouterBoard as its gateway. Then I told it to use the OpenDNS public IPv6 address for DNS. Finally, I visited IPv6 Test and Google's IPv6 page to confirm connectivity… SUCCESS!!!
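
For anyone doing the same, the Windows side can be set from an elevated command prompt with netsh. The interface name and addresses below are examples (2620:0:ccc::2 is one of the OpenDNS IPv6 resolvers):

    :: static IPv6 address, default gateway and DNS server (example values)
    netsh interface ipv6 add address "Local Area Connection" 2001:db8:2::10
    netsh interface ipv6 add route ::/0 "Local Area Connection" 2001:db8:2::1
    netsh interface ipv6 add dnsserver "Local Area Connection" 2620:0:ccc::2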

On my Linux box, I followed SoftLayer's Adding an IPv6 IP tutorial.
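
The short version, on any box with iproute2, is just the following (example addresses again; the tutorial covers making it survive a reboot):

    # static IPv6 address and default route (example values)
    ip -6 addr add 2001:db8:2::20/64 dev eth0
    ip -6 route add default via 2001:db8:2::1 dev eth0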

So far, so good…

Compressing and Uncompressing Protobuf items in C#

Part of a project I am working on requires sending large amounts of data between different instances. To get this to work efficiently, we started using Protocol Buffers, via protobuf-net, in .NET, but the files were still quite large (17MB, give or take). So, we looked into compression…

Here are some examples of how we managed to compress the protobuf files. We got some decent compression: 3MB files, down from 17MB. Very happy.

To compress an object (obj) and write it to a temp file (tmpfile):
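
Roughly this, using protobuf-net's Serializer with a GZipStream in the middle (obj can be any [ProtoContract] type):

    using System.IO;
    using System.IO.Compression;
    using ProtoBuf;

    // serialize obj with protobuf-net, gzip-compressing it on the way to disk
    using (var file = File.Create(tmpfile))
    using (var gzip = new GZipStream(file, CompressionMode.Compress))
    {
        Serializer.Serialize(gzip, obj);
    }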

To decompress the object back to a known type:
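
And roughly this for the way back (MyData stands in for whatever your [ProtoContract] type actually is):

    // open the compressed file and deserialize straight from the gzip stream
    MyData obj;
    using (var file = File.OpenRead(tmpfile))
    using (var gzip = new GZipStream(file, CompressionMode.Decompress))
    {
        obj = Serializer.Deserialize<MyData>(gzip);
    }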

Git tips and tricks

I use Git a lot for different things, including this blog. So, here are a few tips and tricks I have found useful over time…

Symform – P2P Backup

I have previously posted about CrashPlan as my backup system. I also, a long time ago, talked about backing up SQL, MySQL and other stuff on my other blog. Well, CrashPlan is all good, but there are two "niggly" bits with it…

  • It's not FREE (well, this year I got it free on Black Friday…), but it is cheap ($120 a year to back up 10 machines to the cloud ain't bad).
  • It's NOT FAST! The CrashPlan datacenters all live in the US, and my servers live in Europe (either Dublin or Germany), so bandwidth is limited… I get less than 1Mbit/s most of the time, though I have seen it reach 3… I have 20Mbit/s upload… even half that would be nice…

So, that's where Symform comes in. Symform is a P2P backup service which runs on Windows, Linux and Mac OS X. In theory, it should run anywhere that has a Mono runtime, since it's written in .NET. Anyway, you start with 10GB of free storage, and you can increase that in one of two ways:

  • Pay money: for $0.15 per month, you get 1GB of storage in the cloud.
  • Pay bytes: for every 2GB "contributed" (which is actually more like a pledge than a contribution… more on that later), you get 1GB of storage in the cloud.

It works very well, and is nice and fast too. I have a few machines in the house contributing storage, a total of about 2TB, and I have been given 1TB of storage in "The Cloud". There is a lot more on how this works in the "How Symform Works" section of their site.

I mentioned "Contribution" vs "Pledge" above… I have a machine in the house where I have pledged 1TB of storage. In reality, Symform can use the full 1TB if it needs to, but it is currently only using 168GB. Now, that could just be that the machine is still getting files, and it will end up using the full 1TB eventually, but either way, it's all good.

Also, a couple of notes on contribution and backups:

  • The machine needs to be online and accessible on the internet at least 80% of the time, but 24/7 is ideal. If you drop below 80%, your account can be suspended.
  • Your machine needs to be publicly accessible, meaning port forwarded. I have a couple of contribution machines in the house, so they each have separate ports forwarded to them (see the RouterOS sketch after this list).
  • Given the P2P nature of the software, lots of connections to different machines are made… if you are behind a firewall, you may need to allow all or most outgoing connections. If you are on a really restrictive firewall, you may want to stick a contribution box in your DMZ, and probably use the Turbo Seeding feature.
  • Turbo Seeding is a handy feature, especially for laptops… the only problem is that it's Windows-only, so importing and exporting does not work on Linux or OS X.
  • The software can manage work and non-work hours, and will limit upload and download speeds during work hours. Also a nice feature…
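
For what it's worth, on a RouterBoard each of those port forwards is a single dst-nat rule, something like this (the port and internal address are just examples; use whatever port you set in the Symform client):

    # forward one external port to one contribution machine (example values)
    /ip firewall nat add chain=dstnat in-interface=ether1 protocol=tcp \
        dst-port=52300 action=dst-nat to-addresses=192.168.1.10 to-ports=52300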

So far, so good. I'm very happy with the software, but would like a nicer interface to see what's going on. At the moment, you are limited to either the web interface, which ain't bad but not great, or watching the log files… I would also like the ability to prioritize certain files or folders: for example, upload my documents folder before anything else, and if anything changes in there, even if something else is uploading, pause and upload the documents folder first… Just a thought…

Moving your TMG SQL Server logs DB and other TMG tips

In house, I have been using Microsoft TMG 2010 Server for a while now. I use it as a firewall for some of the machines on the network, and also as a proxy for most, if not all, machines. When acting as a firewall, all traffic flows through the machine, be it HTTP/HTTPS, SMTP/POP3/IMAP, or anything else for that matter. You can also lock down ports on the box, which is a feature of most firewalls, but I like TMG due to its relative ease of use…

Anyway, one problem with routing all traffic from different machines through TMG is that, after a while, the logging starts getting big. Since TMG is set to use SQL Server by default, it can start using lots of memory, hard drive space, etc. So, there are a couple of articles which should make moving your TMG SQL DB to a different machine easier…

Some other tips you may find useful

  • If you have Malware Inspection turned on, but you know certain sites won't serve malware (for example, the Ubuntu archives or YouTube.com), you can add them to the "Destination Exceptions" list. Under "Web Access Policy", click "Malware Inspection", then "Destination Exceptions". Double-click "Sites Exempt from Malware Inspection" and add your URL. I put *.ubuntu.com and *.youtube.com in here (Microsoft Update is already on the list). Now, when downloading files from these locations, they do not run through inspection, which saves CPU cycles. WARNING: you need to trust these sites!
  • There is a nice little app to add on to TMG called Bandwidth Splitter, which allows you to not only monitor what traffic is going through your network, but also put limits on different machine sets, users, etc. There is a free edition which works with only 10 clients, but it does what I need it to do to start with.

RouterOS Dynamic IP Updates

I have been using a MikroTik RouterBoard RB750 for a while now, and I love it! Over the weekend, I upgraded to an RB1100. It's the same software running on the device, but the device is faster (an 800MHz PowerPC chip vs a 680MHz MIPS-BE), has more memory (512MB, upgradable to 1.5GB, vs 32MB) and more storage (I think it's 512MB on board, plus a 4GB MicroSD card, vs 32MB…). It also has more ports (13 GigE vs 5) and two switch groups, though I have no idea what those do just yet…

Anyway, part of getting the RB1100 online and taking over from my existing router was getting Dynamic DNS updating working. I use both DynDNS and No-IP for DNS, but I also like the look of Amazon Route 53. For updating No-IP, I am using the alternative script from the MikroTik Wiki. To get this to work with DynDNS, it would just be a matter of changing the URL you point to… I am going to write a web script which will sit internally on the network, and use that instead of the No-IP URL. When that script gets called, I can log the info and update DynDNS, No-IP and Route 53 all in one go. I will be posting more about that soon…
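
For reference, the core of those update scripts is just a /tool fetch call against the provider's update URL, along these lines (the hostname and credentials are placeholders; No-IP uses the source address of the request if you don't pass an IP in the URL):

    # hit the No-IP update URL from the router itself (placeholder credentials)
    /tool fetch mode=http user="myuser" password="mypass" \
        url="http://dynupdate.no-ip.com/nic/update?hostname=myhost.no-ip.org"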

And while we are on the topic of scripting RouterOS, check out the MikroTik script examples page. Lots of good stuff up there!

Custom MSDeploy Overwrite Rules

I have a project for which we are trying to automate deployment. The plan is to automatically deploy the project to a staging server any time the build from SVN succeeds.

I have had a few problems with this, but here are some links which may come in handy for you.

There are still some tweaks needed to get this working… if I find any more links I will put them here… The problem we are having is that when a deploy happens, the Web Deployment tool cannot overwrite the log files directory, since the files are in use… one option would be to restart IIS, which would be OK in staging, but we want to keep the logs in test and production, so we need to figure out how to tell Web Deploy not to overwrite those files.
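
One option worth trying is msdeploy's skip directive, which should let the sync leave the logs directory alone entirely; something like this (the paths and server name are examples):

    rem sync the site, but skip anything under a Logs directory (example paths)
    msdeploy.exe -verb:sync ^
      -source:contentPath="C:\builds\MyProject" ^
      -dest:contentPath="Default Web Site/MyProject",computerName=staging ^
      -skip:objectName=dirPath,absolutePath="\\Logs$"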