Importance of Traffic Logs even for the home network

My little firewall logs just about everything that goes on. Blocked? Log it. Allowed? Log it. Most of the time, these logs roll over and I never even see the contents. However, every once in a while they come in very handy.

My wife usually spends a little while on Sunday evenings preparing attendance sheets for CCD (think Sunday school, but for Catholics). Our parish takes it very seriously, and they have given her a remote login to their data software so she can update the attendance online and they'll have accurate records. This software appears to be SaaS (Software as a Service). Unfortunately, it's not a web-based service. It is hosted on some remote system, and they provide her with something akin to a Citrix login to access the data. The software is PDS (Parish Data System) by ACS Technologies.

Recently the UPS on her computer started acting up. We had a quick blip tonight and her computer rebooted. When it came back up, she proceeded to connect back to this software, and was prompted with a small box asking for the Host. We don’t recall this being asked previously, as it usually just pops up a login box.

So, we checked the support website to see if they had any hints. A quick look around seems to show that to get to any real support info, you need a Site code and a PIN, which my wife doesn't know. Their Live Chat support didn't work. The only other options are Email (which also seems to require site details) and a toll-free number, but apparently they don't work weekends.

Thinking about the situation logically, I concluded that the system had somehow “forgotten” the remote hostname to which it normally connects. That's why it was prompting for connection details with a “Host” box.

It struck me that I might be able to find it in the logs, so off to my firewall I went. I filtered by my wife's IP address and tried filtering for the application “Citrix”. Zilch. Next, I started filtering out ports and applications that I knew it wouldn't be, and told the firewall to look up hostnames. Finally, after filtering out port 80, Facebook-base, Facebook-chat, iCloud-base, Twitter-base, and port 993 (secure Gmail in this case), I jumped from page 1 to page 10 (to get to a more appropriate time, prior to the power outage), and there it was. I recognized the name “”, so I tried that as the host. I believe at that point, I got a different error. So, we closed and restarted the application, and it popped up and worked just fine.

So, if you have lots of logging going on with your firewall at the house, don't bother trying to weed it down; just let it go. One day, it just might save you lots of time.

September 26, 2015 at 9:34 pm

PHP Rest Curl modified for use with CradlePoint routers

Here’s a modification for Jordi Moraleda’s excellent PHP Rest Curl class that I’m using with CradlePoint routers, which return results as JSON. I call it…

Please note, WordPress wasn't happy with me when I tried pasting the entire class file (with the header comments pointing to GitHub), so the header is here in a separate code block.

* (c) 2014 Jordi Moraleda

class RestCurl {
    public static function exec($method, $credentials, $url, $obj = array()) {

        //echo $url . "\n";
        $curl = curl_init();

        switch ($method) {
            case 'GET':
                // Append any parameters to the query string
                if (sizeof($obj) > 0) {
                    if (strrpos($url, "?") === FALSE) {
                        $url .= '?' . http_build_query($obj);
                    } else {
                        $url .= '&' . http_build_query($obj);
                    }
                }
                break;

            case 'POST':
                curl_setopt($curl, CURLOPT_POST, TRUE);
                curl_setopt($curl, CURLOPT_POSTFIELDS, $obj);
                break;

            case 'PUT':
            case 'DELETE':
                curl_setopt($curl, CURLOPT_POSTFIELDS, http_build_query($obj)); // body
                curl_setopt($curl, CURLOPT_CUSTOMREQUEST, strtoupper($method)); // method
                break;
        }

        curl_setopt($curl, CURLOPT_HTTPHEADER, array('Content-Type: application/x-www-form-urlencoded; charset=UTF-8'));
        curl_setopt($curl, CURLOPT_URL, $url);
        curl_setopt($curl, CURLOPT_HEADER, TRUE);
        // The CradlePoint routers use self-signed certificates, so SSL
        // verification is disabled here.
        ////curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, TRUE);
        curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, FALSE);
        curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, FALSE);
        //curl_setopt($curl, CURLOPT_VERBOSE, TRUE);

        // Optional Authentication:
        curl_setopt($curl, CURLOPT_USERPWD, $credentials);

        // Exec
        $response = curl_exec($curl);
        $info = curl_getinfo($curl);
        curl_close($curl);

        // Split the headers from the body, then decode the JSON body
        $header = trim(substr($response, 0, $info['header_size']));
        $body = substr($response, $info['header_size']);

        return array('status' => $info['http_code'], 'header' => $header, 'data' => json_decode($body, true));
    }

    public static function get($url, $cred, $obj = array()) {
        return RestCurl::exec("GET", $cred, $url, $obj);
    }

    public static function post($url, $cred, $obj = array()) {
        return RestCurl::exec("POST", $cred, $url, $obj);
    }

    public static function put($url, $cred, $obj = array()) {
        return RestCurl::exec("PUT", $cred, $url, $obj);
    }

    public static function delete($url, $cred, $obj = array()) {
        return RestCurl::exec("DELETE", $cred, $url, $obj);
    }
}

To use it, do something like this:

$ipAddr = ''; // Your CP router IP
$cred = "username:password"; // Your credentials
$product_info_url = "/api/status/product_info/";
$result = RestCurl::get('https://' . $ipAddr . $product_info_url, $cred);
if ($result['status'] == 200) {
    // Got good reply
}
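
Since exec() runs the response body through json_decode(), $result['data'] comes back as a plain PHP array. I won't promise the exact layout of the product_info payload (it can vary by firmware version), so it's worth dumping it once to see what you're working with:

// Inspect the decoded JSON; the structure depends on the
// router's firmware version.
print_r($result['data']);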

September 21, 2015 at 5:34 pm

Ad Blocking is stealing

I saw a quote the other day from someone in the online ad industry (I believe) who said that using an Ad Blocker is stealing.


I can see the argument related to movies and music.  I mean, for those items you have to buy a CD or a movie ticket (or buy a digital copy).  Downloading the content without legitimately purchasing it… Yeah, I can see that being stealing.

Running an ad blocker and visiting a website, though?

Sorry, but no.  It’s not even in the same realm.

The real question publishers and ad companies need to ask is:

Why has Ad Blocking risen so much recently?

A recent focus has been Apple, with their release of iOS 9, which supports “content blocking” and which thus far has mainly been used to create ad blockers.  Why is this the case?

When I first got an iPhone 3GS, my first iPhone, browsing was fast.  Over the years, more and more advertising has been injected into mobile websites.  Advertising web servers are notoriously slow.  Advertising on mobile platforms has become more aggressive.  All this while bandwidth usage has spiked and most carriers have forced bandwidth caps on their customers.

With all these factors combined, the user experience is very poor.  To see how big of a difference it makes in load time, I invite you to try an ad blocker on an iPhone.  Visit sites that you normally visit, and you'll see that each site pops up much faster than normal.  I expect that if you surf on your phone frequently, once you see the difference, you'll want to keep using it.

September 20, 2015 at 9:48 pm

Monitoring a network with EIGRP

Most network monitoring involves polling.

So, you have a server (or farm of them) going out across the WAN every minute or so, talking to every remote device to ensure that they are up and running.

There are a number of products out there that do this, but what if you could do it smarter?

At my day job, we have hundreds of remote sites connected via T1, and each has an alternate link, soon to be LTE across the company.  We run EIGRP across our links so our routers know which links are available for traffic.  Yes, even our LTE links.  They all terminate on GRE tunnels on one router.  We set the EIGRP Hello time to 20 seconds and the Hold time to 60 seconds.  If 60 seconds pass without seeing a Hello, the link gets marked down.

I wrote a PHP program to handle this monitoring in a very efficient way.  Every minute, it SSHes into this router and runs a “show ip eigrp neighbors” command to get a list of all active neighbors.  This tells me that each of those neighbors was up at the moment I ran the command.  I log this info to a database table.  I also run a command like “show ip route | inc Tu”.  Thanks to our database, my program knows which EIGRP neighbor belongs to which location and which route belongs to which location.  If I see a connected route to any Tunnel, I know we are actively running traffic across the LTE link to that location.  Since this is done every minute, I'm logging each time a remote device has an EIGRP connection to headquarters.  I track the state of all the locations and send SNMP traps to our central manager to create alarms when an EIGRP connection that should be there is missing, and when a route exists (meaning the LTE link is actively in use).
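
For the curious, here's a minimal sketch of that polling cycle. The real program does much more; the SSH library (PHP's ssh2 extension), hostname, credentials, and the parsing regex here are all my illustrative assumptions, not the production code:

// A rough sketch of the one-minute polling cycle described above.
// The hostname, credentials, and regex are placeholders.
$conn = ssh2_connect('hq-router.example.com', 22); // hypothetical HQ router
ssh2_auth_password($conn, 'monitor', 'secret');

// One command lists every active EIGRP neighbor...
$stream = ssh2_exec($conn, 'show ip eigrp neighbors');
stream_set_blocking($stream, true);
$neighborOutput = stream_get_contents($stream);

// ...and another shows which Tunnel (LTE) routes are carrying traffic.
$stream = ssh2_exec($conn, 'show ip route | inc Tu');
stream_set_blocking($stream, true);
$routeOutput = stream_get_contents($stream);

// Pull the neighbor IPs out of the table; each data line begins with
// an index number followed by the neighbor's address.
$activeNeighbors = array();
foreach (explode("\n", $neighborOutput) as $line) {
    if (preg_match('/^\d+\s+(\d+\.\d+\.\d+\.\d+)/', trim($line), $m)) {
        $activeNeighbors[] = $m[1]; // this site answered on this poll
    }
}

// From here, the real program maps neighbors and routes back to
// locations, logs each poll to the database, and raises SNMP traps
// when an expected neighbor is missing or an LTE route shows up.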

This database is tracking the total number of polls and the number of successful polls.  This lets me calculate an “Availability” number for that GRE Tunnel.  Note, this isn’t a real “Availability” number for the LTE link.  It’s an Availability number for the Tunnel, meaning it can easily be worse than the LTE link availability (if the remote router is down, perhaps).
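
The math itself is the easy part. Assuming a table of per-tunnel counters (the database, table, and column names here are hypothetical), the Availability number is just successful polls divided by total polls:

// Availability per tunnel = successful polls / total polls.
// Schema names are made up for illustration.
$db = new PDO('mysql:host=localhost;dbname=netmon', 'user', 'pass');
foreach ($db->query("SELECT site, successful_polls, total_polls FROM tunnel_polls") as $row) {
    if ($row['total_polls'] == 0) continue; // not polled yet
    $pct = 100.0 * $row['successful_polls'] / $row['total_polls'];
    printf("%s: %.3f%% tunnel availability\n", $row['site'], $pct);
}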

If you described this to me as a monitoring solution, I wouldn’t expect it to work well.  The fact is that we’ve been running with this sort of solution for several years.  The difference now is that I’ve reduced the polling cycle from every 5 minutes to every minute to give me better granularity.  And it still works great, even with 150+ sites.  The beauty of this system is that adding more sites doesn’t really add more time (technically, it does, but it’s such a small number that it’s pretty much irrelevant).

September 18, 2015 at 9:59 pm

Best Cell Carrier coverage in the Southeast US

Where I work, we wanted to put in LTE backup at all of our retail locations to handle communications in the event that our T1 circuit fails.  There are around 800 locations stretching from Louisiana, south to Key West, all the way to North Carolina.  We have relationships with the big three carriers, so we built survey boxes housing three CradlePoint cellular broadband adapters, one configured for each carrier, then took them around to our locations and ran a battery of Netperf tests, logging real-world results for each location into a database.

Armed with that database of over 7000 test results, we selected the best carrier at each location by looking at the raw data.  My general criteria?  Look for the carrier with the best SINR (Signal to Interference + Noise Ratio), along with the best speed.  We were less concerned with cost, since they are all under $30 a month for our limited, pooled data plan.  Our goal was a reliable backup at least as fast as the T1 circuit it would be “covering for” in the event of a T1 outage.  Most T1 outages are measured in hours, so it needs to be available when we need it, first and foremost.  That said, we want better than 1.5 Mbps in both directions so that it can be a true T1 backup.  Looking at the data and making the selection was sometimes difficult, but in those cases we made our best guess.
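
As a sketch of that selection rule (the field names are my own invention, not our actual schema): throw out any carrier that can't beat a T1 in both directions, then rank the rest by SINR with downstream speed as the tie-breaker.

// $tests holds one averaged row per carrier for a single location, e.g.
// array('carrier' => 'AT&T', 'sinr' => 12.5, 'down_mbps' => 8.2, 'up_mbps' => 3.1)
function pickCarrier(array $tests) {
    // A true T1 backup needs better than 1.5 Mbps both ways.
    $candidates = array();
    foreach ($tests as $t) {
        if ($t['down_mbps'] > 1.5 && $t['up_mbps'] > 1.5) {
            $candidates[] = $t;
        }
    }
    if (count($candidates) == 0) {
        return null; // nobody clears the bar; time for a judgment call
    }
    // Best SINR first; break ties on downstream speed.
    usort($candidates, function ($a, $b) {
        if ($a['sinr'] != $b['sinr']) {
            return ($b['sinr'] > $a['sinr']) ? 1 : -1;
        }
        if ($a['down_mbps'] != $b['down_mbps']) {
            return ($b['down_mbps'] > $a['down_mbps']) ? 1 : -1;
        }
        return 0;
    });
    return $candidates[0]['carrier'];
}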

I only have the actual numbers for the first 155 locations we have installed, which break down as follows:

AT&T was selected 50.9% of the time.
Verizon was selected 30.9% of the time.
Sprint was selected just over 18% of the time.

From the numbers I have seen (in passing), this pattern is pretty representative of the overall totals.

Now, I’m not much of an AT&T fan, but this is pretty impressive.


September 18, 2015 at 9:37 pm

Mass upgrading Palo Alto firewalls

My company just bought 900 PA-200 firewalls.  Unfortunately, they all are pre-loaded with firmware version 5.0.6.  The current version is 7.0.1.  To get from 5.0.6 to 7.0.1, you must install a newer content version, then upgrade to version 6.0, then to 6.1, and finally to 7.0.1. Oh, and we want to install A/V as well, in preparation for shipping them to the stores.

They have a product called Panorama that manages their firewalls (you can't manage hundreds of firewalls without it, if you ask me).  It can perform upgrades from one version to another, but it isn't smart enough to know what steps must be taken to get from 5.0.6 to 7.0.1.  Someone would need to know the process and direct Panorama through it, each step of the way.  Since I have 900 of them to upgrade, I needed to come up with a better way!  Waiting until they were at the store, connected via a T1 circuit, is not a good option either, as the content, A/V, and all the firmware upgrades add up to over 1.1 GB.

A great feature for Panorama would be a “base” template you could set for each Device Group.  That “base” template would specify which Content, A/V, and firmware versions all the devices in the group should run.  Whenever devices are added to the device group, Panorama should automatically bring them to the proper content, A/V, and firmware versions.

But, since Panorama isn’t that smart yet, the Palo Alto API and scripting magic to the rescue.

Since I've been writing a script to handle our installation process, I had already written a Palo Alto class to handle all the communications with the PA-200s and with Panorama.  I did have to add a few more routines to the class to handle everything this job needed, but it now works.

Our process works this way:
1.  A tech unpacks 10 PA-200 firewalls and attaches their Management port to a subnet on our corporate network.
2.  The tech scans the serial number bar codes on the back of the PA-200s, adding them to Panorama as “Managed Devices”.
3.  The tech adds them to the appropriate Template and a special device group that exists just for the upgrade process.
4.  The tech sets an IP address, Mask, and Gateway on each unit, pointing them to DNS servers and the Panorama server, then commits the change.  (This is a copy/paste process where the IP is different for each of the 10 units being upgraded.)
5. Finally, the tech performs a Commit in Panorama.
6.  The tech then gets back to other work, waiting for an email that will be sent once all the devices are upgraded.  This should arrive about 1:35 to 1:45 after the Panorama commit is done.

The real work gets done in a script that runs every 5 minutes (a rough sketch of its triage logic follows the list).  This script:
1.  Gets a list of all the devices in the special device group.
2.  Attempts to create an object of my custom PA class for each device.  If it can't communicate with the device, that one is discarded for now, since the script will retry in a few minutes.
3.  Panorama is checked to make sure there are no active jobs for this serial number.  If there are, it's removed from further checks.
4.  Each firewall is checked to make sure there are no active jobs.  If there are, it's removed from further checks.
5.  The content version is checked for each PA-200.  If one isn't found, its serial number is added to the Content queue and it's removed from further checks.
6.  The anti-virus version is checked for each PA-200.  If one isn't found, its serial number is added to the Anti-Virus queue and it's removed from further checks.
7.  If the firmware starts with “5”, its serial number is added to the 6.0 upgrade queue and it's removed from further checks.
8.  If the firmware starts with “6.0”, its serial number is added to the 6.1 upgrade queue and it's removed from further checks.
9.  If the firmware starts with “6.1”, its serial number is added to the 7.0.1 upgrade queue and it's removed from further checks.
10.  If 7.0.1 is installed, it sets the IP address back to the default and issues a commit.
11.  Finally, if 7.0.1 has been installed, and the box is unreachable (because the commit has taken effect), the device is removed from the special device group and moved to a Pending group.
12. All the various “queues” I mentioned get kicked off, with the serial numbers of the devices that need that step performed passed to Panorama via the XML API.  There’s additional logic to send emails when all the devices are out of the device group.
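
As promised above, here's a rough sketch of that triage logic.  The PaloAltoDevice class and every method name on it are hypothetical stand-ins for my custom PA class, not a published API:

// Sort each reachable, idle device into the right upgrade queue based
// on what it's currently running. Class and method names are illustrative.
function triageDevices(array $serials, $panorama) {
    $queues = array('content' => array(), 'av' => array(),
                    '6.0' => array(), '6.1' => array(), '7.0.1' => array());
    foreach ($serials as $serial) {
        try {
            $fw = new PaloAltoDevice($serial); // hypothetical wrapper class
        } catch (Exception $e) {
            continue; // unreachable right now; the next 5-minute run retries
        }
        if ($panorama->hasActiveJobs($serial) || $fw->hasActiveJobs()) {
            continue; // a job is already in flight; check again later
        }
        if (!$fw->contentInstalled())   { $queues['content'][] = $serial; continue; }
        if (!$fw->antivirusInstalled()) { $queues['av'][] = $serial; continue; }
        $ver = $fw->swVersion();
        if (strpos($ver, '5') === 0)   { $queues['6.0'][] = $serial; continue; }
        if (strpos($ver, '6.0') === 0) { $queues['6.1'][] = $serial; continue; }
        if (strpos($ver, '6.1') === 0) { $queues['7.0.1'][] = $serial; continue; }
        // Running 7.0.1: reset the IP to default, commit, and (once the
        // box drops offline) move it from the upgrade group to Pending.
    }
    return $queues; // each non-empty queue is handed to Panorama's XML API
}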

In practice, this is taking about 1:35 to fully upgrade 10 firewalls, though I suspect we could ramp this up to 20 or more, and it would likely take very close to the same time, since Panorama is upgrading all the devices in parallel.

This will have to do until Palo Alto upgrades Panorama to do it for me.

August 9, 2015 at 5:08 pm

Palo Alto and the power of an API

We recently bought Palo Alto PA-200 firewalls for our retail locations to replace our aging CheckPoint UTMs.  I didn’t investigate their API at all during the time we were looking at CheckPoint competitors.  I knew it had one, but hadn’t really given it a lot of thought.  Now that we have a massive roll-out ahead of us, I’ve started scripting parts of the process.  I must say that I love the flexibility that their API gives us.

In the past, for any major roll-out, I’ve scripted the process using telnet / SSH / HTTP (for web scraping), basically whatever interface the vendor allowed.  My goal is to make the installation fast and easy to support, while reducing the chance of human error as much as possible.  The hassle with CLI scripting for remote devices is always the parsing.  While it’s possible to do a good job parsing things manually, it’s time consuming and prone to error.  With an API, it’s faster and easier to code and you get data back in a predictable format.

If what you want to do can be done via SSH, Palo Alto has included a “Secret Decoder Ring” to help you figure out the API…  The secret is that the WebGUI and CLI both use the API whenever you do most anything.  So, in the CLI you can simply turn on “debug cli on”, and get most of the XML you need to pass to issue your API call by watching what the CLI does.  For example, if I do a “show jobs all”, I get this XML back:

<request cmd="op" cookie="8856737959639002" uid="500"><operations><show><jobs><all/></jobs></show></operations></request>

To make the same call via the API and get the status of all your jobs, use the value of the request's cmd attribute (“op”) as the type parameter, and pass the contents of the <operations> element as the cmd parameter:

http(s)://hostname/api/?type=op&cmd=<show><jobs><all/></jobs></show>&key=[Your API Key]

To reboot your firewall via the API:

http(s)://hostname/api/?type=op&cmd=<request><restart><system></system></restart></request>&key=[Your API Key]
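
And here's a quick example of making that jobs call from PHP.  The PAN-OS API returns XML, so SimpleXML handles the reply nicely; the hostname and API key below are placeholders:

// Ask the op API for all jobs and report whether the call succeeded.
// Hostname and API key are placeholders.
$host = 'firewall.example.com';
$key  = 'YOUR-API-KEY';
$url  = 'https://' . $host . '/api/?type=op&cmd='
      . urlencode('<show><jobs><all/></jobs></show>') . '&key=' . $key;

// These boxes often run self-signed certs, so skip verification here.
$ctx = stream_context_create(array('ssl' => array(
    'verify_peer' => false, 'verify_peer_name' => false)));
$xml = simplexml_load_string(file_get_contents($url, false, $ctx));
echo $xml['status'] . "\n"; // "success" when the call worked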

Granted, there are some things I’ve not been able to figure out how to do via the API, like checking for the existence of an imported config file.  Via the CLI, just enter “show config saved ” and hit TAB after the last space.  The auto-complete feature of the PA CLI will show you a directory listing of saved config files.  If you do this with debugging turned on, you’ll note that you don’t see any “debug” info, so the autocomplete function must not use the API (or debugging autocomplete is disabled for readability purposes).

I expect that everything I need to do relative to the installation process can be handled via the API:

1. Import a pre-generated configuration file
2. Load the imported configuration file
3. Issue a local Commit
4. Check the status of the Commit
5. Read the Serial Number of the remote device being installed
6. In Panorama move the device from the “Pending” device group to the “Production” device group
7. Issue a Panorama commit for this device (by Serial Number)

If you have any need to programmatically interact with a Palo Alto firewall, I encourage you to dig into the API.  There’s tons of very good data, just waiting to be accessed.  Very easily.


July 23, 2015 at 7:33 pm
