Adventures in DNS

I just posted about my new PA-220 firewall and mentioned URL filtering.  I have a number of categories blocked, including web advertising, adult content, and malware.  But you can always make something better, right?

The PA-220 has a feature to enforce safe search with various search engines.  Unfortunately, it doesn’t seem to work very well on my iPhone, or in Safari on my Mac.  It could be the 8.0.2 firmware, or perhaps it’s something that I’m doing wrong.  In any case, I wanted to fix it, as it was annoying.

Both Google and Bing support a feature to enable Safe Search for your network via DNS.  What you have to do is make your DNS server return a CNAME record pointing to forcesafesearch.google.com whenever someone requests www.google.com (and similarly for Bing).  While this might sound easy, as I discovered, it’s a bit more complex than perhaps it should be.
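In zone-file notation, the records you want clients to receive look like this (forcesafesearch.google.com is Google’s documented SafeSearch host; strict.bing.com is Bing’s):

    ; what your resolver should hand back instead of the normal answer
    www.google.com.  IN  CNAME  forcesafesearch.google.com.
    www.bing.com.    IN  CNAME  strict.bing.com.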

First, the DNS proxy feature in my PA-220 does support configuring static entries, so I could add an entry for www.google.com, but only with IP addresses, not CNAMEs.  I would have to hard-code the IP address for forcesafesearch.google.com, which could change at any time and break things.

After a bit of research, I found my first candidate for truly doing the CNAME rewrite.

DNSmasq

On my UnRAID box, I installed a Docker container of Pi-Hole, a DNS-based system (meant for the Raspberry Pi, but capable of running on other platforms) that blackholes DNS queries to web advertising sites and the like.  It uses DNSmasq and can run DHCP as well as DNS, so it can resolve local hostnames to their DHCP-assigned addresses.  I could do that today by adding static entries to my DNS Proxy instance on the PA-220, but that wouldn’t pick up on DHCP leases.

But, alas, DNSmasq treats a manually added CNAME entry differently than I had hoped.  It ignores the entry unless the target record is defined locally, such as a static definition or a DHCP lease; it won’t resolve an external CNAME like a normal query and return it.  And if I were to define forcesafesearch.google.com as an A record in DNSmasq just to satisfy that requirement, it would defeat the whole purpose of using the CNAME.
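For the curious, here’s roughly the dnsmasq config I was hoping would work, with the catch noted in the comments (the pinned address is just an example of the workaround I wanted to avoid):

    # alias www.google.com to Google's SafeSearch endpoint
    cname=www.google.com,forcesafesearch.google.com
    # ...but dnsmasq only honors the alias when the target is known
    # locally (a DHCP lease, /etc/hosts, or a host-record like this):
    #host-record=forcesafesearch.google.com,216.239.38.120
    # and pinning the IP that way defeats the point of the CNAME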

Pi-Hole does have a very nice, modern web interface with statistics and graphs, and it looks extremely easy to whitelist or blacklist sites.  It gives you great visibility into which devices on your network are doing the most DNS lookups, and if you are wondering where your IoT devices go on the Internet, you can filter the logs to see what an individual device is looking up, assuming all your devices query Pi-Hole directly instead of being chained like I’m doing here.  You can even disable the blocking functionality if you like; with it disabled, it won’t block anything, but you still get all the statistics and logs it has to offer, including what it would have blocked.  Today it has blocked about 8.8 percent of my DNS queries, though I haven’t really noticed much difference compared to simply going through my PA-220.

Dingo

While looking for other DNS packages that could do this CNAME trick, I ran across one that looked very interesting for a different reason.  Dingo is effectively a DNS resolver that takes requests in on port 53 and resolves them over encrypted HTTP/2.  It can be used with both Google DNS and OpenResolve (by OpenDNS).  I installed it as another Docker container and it seems to work fine.  I did increase it to use 25 worker threads instead of the default 10.  I don’t know if I’ll keep using it or not, but I’ll see how it goes.
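If you want to try the same tweak, the invocation is roughly this; I’m going from memory on the flag names, so treat it as a sketch and check dingo -h on your build:

    # listen on port 53 and bump the worker count from 10 to 25
    # (flag names as I recall them; verify with dingo -h)
    ./dingo-linux-amd64 -bind=0.0.0.0 -port=53 -gdns:workers=25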

Bind

Other research turned up some settings for Bind that would let me add the CNAME records I needed for Google and Bing to enforce safe search, and yet another Docker container was installed.  The one I chose includes Webmin for easy administration of Bind.  It worked just fine.
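For reference, one common way to do this in Bind 9 is a response policy zone (RPZ); here’s a minimal sketch (the zone and file names are just examples, not necessarily what my container uses):

    // named.conf: attach a response policy zone
    options {
        response-policy { zone "safesearch.rpz"; };
    };
    zone "safesearch.rpz" {
        type master;
        file "/etc/bind/db.safesearch.rpz";
    };

    ; /etc/bind/db.safesearch.rpz: rewrite answers for these names
    $TTL 300
    @   IN  SOA localhost. root.localhost. ( 1 12h 15m 3w 2h )
        IN  NS  localhost.
    www.google.com  IN  CNAME  forcesafesearch.google.com.
    www.bing.com    IN  CNAME  strict.bing.com.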

So now the initial DNS queries point to the PA-220, taking advantage of the Threat/URL filtering there; it forwards to a Docker container running Bind to handle the Google and Bing domains; that forwards to Pi-Hole (which I may end up removing from this chain); and finally to Dingo, which performs the actual lookups over encrypted HTTP/2.
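Spelled out, the resolution chain now looks like this:

    client
      -> PA-220 DNS proxy  (Threat/URL filtering)
      -> Bind              (CNAME overrides for Google/Bing safe search)
      -> Pi-Hole           (ad blackholing; may get removed)
      -> Dingo             (lookups over encrypted HTTP/2 to Google/OpenResolve)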

Whew!

That sounds like a lot, but not including the PA-220 (which was doing this job before), I’ve added three hops that all exist on the same box.

May 21, 2017 at 7:48 pm

The PA-220 Firewall is here!

The PA-220 has 8 ports of Gigabit goodness on the front, aside from the management port.

The PA-220 supports some pretty high-end features, making it suitable for use in a small business office.  First, there is High Availability (HA) mode, if you have a pair of PA-220s and duplicate your connectivity (even to your WAN, so you’d need a switch between a cable/DSL modem and the pair of firewalls).  Another big feature is LACP (Link Aggregation Control Protocol) support, so you can have multiple connections between your firewall and an Ethernet switch.  This redundancy is something small offices will likely want; when the WAN connection is down, there’s probably work that can’t get done.

The PA-220 comes with a template and hardware to mount it sideways on a wall, something that I plan to do at some point but haven’t gotten around to yet.

Since the PA-220’s throughput is limited to about 500 Mbps firewalled, dropping to about 150 Mbps with Threat enabled, I recommend putting only relatively low-speed or low-volume devices directly on the firewall’s ports, if the primary thing they communicate with is also on the local LAN.  You could always add a rule allowing intrazone traffic with no Threat profiles attached, giving you the maximum 500 Mbps to the internal network.
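As a sketch, such a rule might look like this from the PAN-OS CLI (configure mode; the rule and zone names are mine, and leaving profile-setting off the rule is what keeps Threat scanning out of the path):

    # 'trust' here stands in for whatever your inside zone is called
    set rulebase security rules lan-fast-path from trust to trust source any destination any application any service any action allow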

I’ve got it in place, doing SSL decryption, Threat, URL filtering, Wildfire, and GlobalProtect VPN.  It seems to perform pretty well so far.

May 21, 2017 at 11:20 am

UnRAID experiences

Recently, NewEgg had a deal on an HP ML10 v2 server for about $170 after rebate.  It included an i3 processor at 3.5 GHz, a 500 GB hard drive, and 8 GB of ECC RAM.  I had a hard time passing up a deal that good, so I didn’t.

After playing with VMware ESXi 6.5 on it for a bit, I decided to try UnRAID.  I was interested in using Docker on it, something I have dabbled with on my Synology.

Having used UnRAID for more than a week, I think I’m about ready to get rid of my NAS and use this instead.

The initial setup was easy.  I loaded the software on a USB drive, put several low capacity drives in it (largest being 1 TB) and created the array using the web interface.  It began the parity process and I started setting up shares and using it.

Let me explain a bit about how UnRAID works.  It’s not your traditional RAID array.  You basically put in whatever disks you want, select the largest one as the Parity drive, and start using your array (there are some WebGUI steps involved, but it’s very easy).  I understand that you can even take drives that already have data on them (in a format UnRAID uses), and that data is preserved, with the exception of the parity drive.  With UnRAID, you get the advantage of parity protection, so if a single disk dies, just replace it and it rebuilds.  If there’s a problem with more than one of your disks at once, you only lose data on the failed drives.  Your remaining working drives have all their data intact.

Another difference is the way shares work.  It has your traditional disk-based shares, where you add a share for an individual disk and write files to it the usual way, with parity info maintained on the parity drive so your data is safe should that disk fail…  And it has what it calls “user” shares, which span your disks.  So you might have a media share, for example.  You copy a video over to it, and it gets dropped on disk 1.  Later, you copy another video, and it gets dropped onto disk 2.  When you view a directory listing of the share, though, you see a single view with all the files presented as one structured set of folders, so you don’t have to know which disk a specific file is on…  UnRAID tracks that for you and presents it all as a single, large share.

Anyhow, over the next few days, I set up three Time Machine shares, along with a couple others and copied over the majority of the data from my NAS to it.  (I have not been storing nearly as much on my NAS recently, having cleaned off tons of media some time ago.)

The Docker container functionality is great.  You can load a docker container based on templates, so there’s not much to do but point and click, though you may have to type in a path or two, here or there.  Think of it sort of as Plug-ins or Apps – there’s a Plex container, MythTV, SageTV, and many, many more.

After the initial parity calculation was done, I moved my 4 TB drive from my NAS over, replacing the parity drive in UnRAID.  It rebuilt the parity info after I adjusted the config in the WebGUI.  Then I proceeded to swap another drive with a 3 TB drive and let it rebuild that, and I’ve done the same with yet another 3 TB drive.  At this point, only one of the original hard drives is in the array.

And I actually want to remove that last 750 GB drive from the array.  With traditional RAID, that’s pretty much a no-go.  With Synology’s hybrid RAID, or a Drobo’s approach to RAID, I think you have to keep the same number of disks in the array, short of copying all the data off and recreating the array fresh with fewer disks.

With UnRAID, though, I’m now copying all the data from disk 3 to disk 2 using a simple rsync command.  Afterwards, according to what I’ve read, I can simply remove the disk, create a new array with one fewer disk, and it will recreate the parity information.
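For anyone curious, the command is nothing fancy; something like this from the UnRAID console (array disks are mounted under /mnt):

    # copy everything from disk 3 onto disk 2, preserving attributes,
    # with progress output; the trailing slashes matter to rsync
    rsync -avP /mnt/disk3/ /mnt/disk2/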

Why would I want to do this, you may wonder?  To add a cache drive.  UnRAID lets you add a cache drive (an SSD, or perhaps just a 10K or 7200 RPM drive) and set up your shares to take advantage of it.  When data is written, it goes to the cache drive; at 3:40 AM, the data is moved off the cache drive to the other drives in the array, at which time parity info is calculated.

Now, if you run a business and keep critical data on UnRAID, you shouldn’t entrust the safety of your data to a single cache drive: data sitting on the cache isn’t parity-protected until the nightly move runs, so there’s the potential to lose whatever has been written to it that day.  But if you are a home user, mainly using it for entertainment purposes, you can probably take the chance, for the performance improvement (especially with an SSD cache drive).

Although I’m still within my first 30 days of using UnRAID, it’s safe to say I’ll be buying it soon.

April 25, 2017 at 8:55 pm

Palo Alto PA-220

About a month ago, Palo Alto announced their new 8.0 firmware, along with some new hardware.  The most exciting new product to me, personally, is their new PA-220.

The PA-200 is a unit I have a lot of experience with.  It’s got 4 Gig ports for traffic, supports 100 Mbps of firewall throughput, dropping to 50 Mbps with Threat prevention enabled.  It’s a good unit for a small office.

The PA-220 is better, sporting 8 Gig ports for traffic and 500 Mbps of firewall throughput, dropping to 150 Mbps with Threat enabled.  It’s fanless, and since it uses eMMC for storage (32 GB), there shouldn’t be any moving parts to break down.

Basically, it’s got more power than a PA-500, the same number of ports, and it’s in an even smaller package than the PA-200.

Best of all, it’s at a much better price point than the PA-200.

March 7, 2017 at 11:20 pm

Sous Vide Flank Steak

Last week, I tried flank steak sous vide.  I cooked the steaks at 131°F.  The recipe I went by called for 90 minutes, though I may have gone a bit longer.  I seared them and cut them into strips.  We put some of the meat into tortillas and topped them with various taco/fajita toppings.  I wasn’t very impressed with the result.

Tip:  When you buy steak and the like from a warehouse club, you might get a good number in one big package.  When you get them home, portion them, season them, and seal them in individual sous vide bags (vacuum or water displacement method) just as if you were about to sous vide them, but then place them in the freezer.  This makes getting started super easy the next time you want to cook them.  There’s no need to thaw: if it’s a short cook, like an hour or so, add 50% more time; if it’s a long cook, like overnight, there’s no need to add any time at all.

Since I had four more flank steaks in the freezer, already prepped this way, I figured I’d try again this weekend, but with a longer cook time and a bit higher temperature.

Friday evening, I dropped a frozen pair of them into a water bath.  This time, I set it to 140 degrees.

Saturday evening came (probably around 20 hours in the bath, maybe more, maybe less).   I cut them into strips, then did a really quick sear of the strips in a pan set to about 6 (so, a bit higher than medium).  The resulting meat was very tender, but I wouldn’t say it had reached the mushy stage.  It was very good.  Next time I cook flank steak sous vide, this will probably be the way I do it.

March 5, 2017 at 11:52 am

MythTV on a Synology NAS

I have a DiskStation 1512+, which has a dual-core Atom D2700 CPU running at 2.13 GHz and 3 GB of RAM.  While it’s not speedy by today’s standards, DSM is easy to use and lets you do a lot with a few mouse clicks, including run Docker, which in itself gives you a lot of flexibility.  I also have an HDHomeRun Prime tuner, which seemed like a good match for MythTV, if I could get it running on this NAS.

I’ve looked into running MythTV in Docker in the past.  Searching around the net, I found people talking about it, and there are even some Docker images available for MythTV, but documentation hasn’t exactly been a strong point.  There is an image for UnRAID with an older version of MythTV, but I wanted to use 0.28.  Fortunately, someone made a newer container with that version!  It works on a Synology, if installed correctly…  with a few caveats.  And since I had trouble finding good instructions for getting this working on a Synology NAS, I thought I would post them here for anyone else who wants to try.

My Installation
1. DSM 6.1-15047
2. Docker 1.11.2-0316 installed via Package Center

To Install

  1. Login to DSM and start Docker
  2. Go to the Registry and search for MythTV
  3. Download mp54u/myth28:latest
  4. When done, go to the Image section, click on the image, and hit the Launch button
  5. In the Create Container window that pops up, hit the Advanced Settings button
  6. Click on the Network tab, then check the box Use the same network as Docker Host
  7. On the Volume tab, create three mount paths:
    1. Create/Select /media/MythTV and set the path to /home/mythtv
    2. Create/Select /media/MythTV/recordings and set the path to /var/lib/mythtv
    3. Create/Select /media/MythTV/db and set the path to /db
  8. On the Environment tab, add a variable called TZ and set the value to the appropriate time zone.  In my case, this was America/New_York.  There should be no spaces in the value.  Search the web for a list of Linux time zone names and make sure to use the right one.
  9. Launch your new MythTV container.  Give it a couple minutes before continuing.
  10. Open a VNC client.  Put in your DiskStation’s IP, and remote control it.
  11. You should be logged into your Docker now.

Note:  The username and password are both mythtv.
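For reference, steps 3 through 8 amount to roughly the following docker run; this is a sketch, since DSM hides the real host paths behind the GUI (I’m assuming the shares live under /volume1):

    # hypothetical CLI equivalent of the DSM GUI steps above
    docker run -d --net=host \
      -e TZ=America/New_York \
      -v /volume1/media/MythTV:/home/mythtv \
      -v /volume1/media/MythTV/recordings:/var/lib/mythtv \
      -v /volume1/media/MythTV/db:/db \
      mp54u/myth28:latest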

At this point, you should be able to run the MythTV Backend Setup tool and configure MythTV.  After it’s configured, MythWeb will be running on your DiskStation, port 6760.

This forum post, which is a little specific to UnRAID and involves the older Docker image with MythTV 0.25 or so, should help you set things up past this point.  Note: he talks about using RDP to control it, but that did not seem to work when using host networking; VNC did work from my Mac (using Chicken of the VNC).  Be aware that I have had issues exiting the MythTV Backend Setup tool, where VNC seems to lock up.  Another time it exited normally, but the backend didn’t seem to start afterward.  In both cases, a quick restart of the Docker container got it back up and running.  If anyone comes across these issues and figures out a good long-term fix, please leave it in the comments!

Front End
In my opinion, about the best front end that will run on a set-top box is MrMC.  It’s available for Apple TV and Fire TV and is pretty inexpensive.  It includes the MythTV PVR add-on and is easy to configure, especially if you have experience with Kodi.

February 28, 2017 at 11:33 pm

Switching to nYNAB – Web edition

In the past, I’ve been somewhat outspoken in my dislike for the changes made from YNAB4 to nYNAB.  It’s now had a year to improve…

I probably made too much of some of the issues with it, though there were two main issues that kept bothering me:

  1. Scheduled transactions don’t take effect until the scheduled date hits, and there’s no way to “force” them to take effect earlier, other than pre-dating them.  This means your category balance will show more than is truly available (since you’ve effectively spent some of that money), and your account balances won’t reflect your planned activity.
  2. The Red Arrow to the Right, which pushed negative category balances into the future.

For #1…  Well, this still bothers me.  I really do wish there were some way to mark a “scheduled” transaction as if it had already passed, other than pre-dating it.  Some people have basically said to treat it like a check: you generally write those out and mail them, but date them when you write them.  I don’t particularly like that, but it works.

As for #2…  The Red Arrow was a tool I used readily.  As it turns out, I think I used it far too much: one recent month had 11 categories pushing negative balances forward.  I do think that losing that easy out will actually be a good thing for me, in terms of budgeting.  The money will need to come from somewhere, forcing me to make decisions that I’ve pushed off in the past.

One other change I initially thought would be a bad thing is the single-month view, versus the previous multi-month view.  But now that I’ve really been using it for a bit, I don’t think it’s that big of a deal.

Anyhow, I’m really trying to switch this time.

January 5, 2017 at 7:15 pm
