Posts filed under ‘Programming General’

Mass upgrading Palo Alto firewalls

My company just bought 900 PA-200 firewalls.  Unfortunately, they all come pre-loaded with firmware version 5.0.6, while the current version is 7.0.1.  To get from 5.0.6 to 7.0.1, you must install a newer content version, then upgrade to version 6.0, then to 6.1, and finally to 7.0.1.  Oh, and we want to install A/V as well, in preparation for shipping them to the stores.

They have a product called Panorama that manages their firewalls (you can’t manage hundreds of firewalls without it, if you ask me).  It can perform upgrades from one version to another, but it isn’t smart enough to know what steps must be taken to get from 5.0.6 to 7.0.1.  Someone would need to know the process and direct Panorama through it, each step of the way.  Since I have 900 of them to upgrade, I needed to come up with a better way!  Waiting until they are at the store, connected via a T1 circuit, is not a good option either, as the content, A/V, and all the firmware upgrades would total over 1.1 GB.

A great feature for Panorama would be to have a “base” template you set for each Device Group.  That “base” template would include things like what Content and A/V versions, and what firmware for all the devices in the group.  Whenever devices are added to this device group, Panorama should automatically set them to the proper content, A/V, and firmware versions.

But, since Panorama isn’t that smart yet, the Palo Alto API and scripting magic to the rescue.

Since I’ve been writing a script to handle our installation process, I had already written a Palo Alto class to handle all the communications to the PA-200s and to Panorama.  I did have to add a few more routines to the class to cover everything this upgrade needed, but it now works.

Our process works this way:
1.  A tech unpacks 10 PA-200 firewalls and attaches their Management port to a subnet on our corporate network.
2.  The tech scans the serial number bar codes on the back of the PA-200s, adding them to Panorama as “Managed Devices”.
3.  The tech adds them to the appropriate Template and a special device group that exists just for the upgrade process.
4.  The tech sets an IP address, Mask, and Gateway on each unit, pointing them to DNS servers and the Panorama server, then commits the change.  (This is a copy/paste process where the IP is different for each of the 10 units being upgraded.)
5. Finally, the tech performs a Commit in Panorama.
6.  The tech then gets back to other work, waiting for an email that will be sent once all the devices are upgraded.  This should arrive about 1 hour 35 minutes to 1 hour 45 minutes after the Panorama commit is done.

The real work gets done in a script that runs every 5 minutes.  This script:
1.  Gets a list of all the devices in the special device group.
2.  Attempts to create an object of my custom PA class for each device.  If it can’t communicate to it, that one is discarded for now, since this script will retry in a few minutes.
3.  Panorama is checked to make sure there are no active jobs for each serial number.  If there are, that device is removed from further checks.
4.  Each firewall is checked to make sure it has no active jobs.  If it does, it’s removed from further checks.
5.  The content version is checked on each PA-200.  If one isn’t found, its serial number is added to the Content queue and it’s removed from further checks.
6.  The anti-virus version is checked on each PA-200.  If one isn’t found, its serial number is added to the Anti-Virus queue and it’s removed from further checks.
7.  If the firmware starts with “5”, its serial number is added to the 6.0 upgrade queue and it’s removed from further checks.
8.  If the firmware starts with “6.0”, its serial number is added to the 6.1 upgrade queue and it’s removed from further checks.
9.  If the firmware starts with “6.1”, its serial number is added to the 7.0.1 upgrade queue and it’s removed from further checks.
10.  If 7.0.1 is installed, it sets the IP address back to the default and issues a commit.
11.  Finally, if 7.0.1 has been installed, and the box is unreachable (because the commit has taken effect), the device is removed from the special device group and moved to a Pending group.
12. All the various “queues” I mentioned get kicked off, with the serial numbers of the devices that need that step performed passed to Panorama via the XML API.  There’s additional logic to send emails when all the devices are out of the device group.
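The triage in steps 5 through 12 boils down to a small state machine.  Here's a minimal sketch in Python; the function names, queue names, and dict shapes are my own invention, and the real script hands the finished queues to Panorama via the XML API:

```python
def triage(device):
    """Return the queue a device belongs in, or None if it's done.

    `device` is a dict with 'content', 'antivirus', and 'firmware' keys
    (these shapes are invented; the real values come from the API).
    """
    if not device.get("content"):
        return "content"
    if not device.get("antivirus"):
        return "antivirus"
    firmware = device.get("firmware", "")
    if firmware.startswith("5"):
        return "upgrade-6.0"
    if firmware.startswith("6.0"):
        return "upgrade-6.1"
    if firmware.startswith("6.1"):
        return "upgrade-7.0.1"
    return None  # 7.0.1 installed: reset the IP and move to Pending

def build_queues(devices):
    """Group serial numbers into queues, mirroring steps 5 through 12."""
    queues = {}
    for serial, device in devices.items():
        queue = triage(device)
        if queue:
            queues.setdefault(queue, []).append(serial)
    return queues
```

Each run re-triages whatever devices are still in the group, so a box that was mid-upgrade last pass simply lands in the next queue on a later pass.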

In practice, this is taking about 1 hour 35 minutes to fully upgrade 10 firewalls, though I suspect we could ramp this up to 20 or more and it would take very close to the same time, since Panorama upgrades all the devices in parallel.

This will have to do until Palo Alto upgrades Panorama to do it for me.

August 9, 2015 at 5:08 pm Leave a comment

Palo Alto and the power of an API

We recently bought Palo Alto PA-200 firewalls for our retail locations to replace our aging CheckPoint UTMs.  I didn’t investigate their API at all during the time we were looking at CheckPoint competitors.  I knew it had one, but hadn’t really given it a lot of thought.  Now that we have a massive roll-out ahead of us, I’ve started scripting parts of the process.  I must say that I love the flexibility that their API gives us.

In the past, for any major roll-out, I’ve scripted the process using telnet / SSH / HTTP (for web scraping), basically whatever interface the vendor allowed.  My goal is to make the installation fast and easy to support, while reducing the chance of human error as much as possible.  The hassle with CLI scripting for remote devices is always the parsing.  While it’s possible to do a good job parsing things manually, it’s time consuming and prone to error.  With an API, it’s faster and easier to code and you get data back in a predictable format.

If what you want to do can be done via SSH, Palo Alto has included a "Secret Decoder Ring" to help you figure out the API…  The secret is that the WebGUI and CLI both use the API whenever you do most anything.  So, in the CLI you can simply turn on "debug cli on", and get most of the XML you need for your API call by watching what the CLI does.  For example, if I do a "show jobs all", I get this XML back:

<request cmd="op" cookie="8856737959639002" uid="500"><operations><show><jobs><all/></jobs></show></operations></request>

To make an API call that gets the status of all your jobs, take the cmd attribute value from above as the type parameter and the inner XML (everything inside <operations>) as the cmd parameter:

http(s)://hostname/api/?type=op&cmd=<show><jobs><all/></jobs></show>&key=[Your API Key]

To reboot your firewall via the API:

http(s)://hostname/api/?type=op&cmd=<request><restart><system></system></restart></request>&key=[Your API Key]
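If you'd rather drive these op commands from a script than a browser, the pattern is easy to wrap.  This is a hedged sketch: the URL format matches the examples above, and the job parsing assumes the usual <response><result>…</result></response> envelope that op calls return:

```python
# Minimal helpers for PAN-OS "op" API calls, as shown above.
import urllib.parse
import xml.etree.ElementTree as ET

def op_url(host, cmd_xml, api_key):
    """Build the op-command URL for a PAN-OS device."""
    query = urllib.parse.urlencode(
        {"type": "op", "cmd": cmd_xml, "key": api_key})
    return "https://%s/api/?%s" % (host, query)

def parse_jobs(response_xml):
    """Pull (id, status) pairs out of a 'show jobs all' response."""
    root = ET.fromstring(response_xml)
    return [(job.findtext("id"), job.findtext("status"))
            for job in root.iter("job")]
```

From there, polling "show jobs all" until every job reports finished is just a loop around an HTTP GET of that URL.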

Granted, there are some things I’ve not been able to figure out how to do via the API, like checking for the existence of an imported config file.  Via the CLI, just enter "show config saved " and hit TAB after the last space.  The auto-complete feature of the PA CLI will show you a directory listing of saved config files.  If you do this with debugging turned on, you’ll note that you don’t see any "debug" info, so the autocomplete function must not use the API (or debugging of autocomplete is disabled for readability purposes).

I expect that everything I need to do relative to the installation process can be handled via the API:

1. Import a pre-generated configuration file
2. Load the imported configuration file
3. Issue a local Commit
4. Check the status of the Commit
5. Read the Serial Number of the remote device being installed
6. In Panorama move the device from the “Pending” device group to the “Production” device group
7. Issue a Panorama commit for this device (by Serial Number)
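Strung together, those seven steps make a short driver.  The helper names below are all placeholders of mine (each would wrap one API call: a type=import upload for step 1, op commands and commits for the rest), with a tiny recorder class so the sketch is self-contained:

```python
class Recorder:
    """Stand-in for a firewall or Panorama handle; records each call."""
    def __init__(self, log):
        self.log = log
    def __getattr__(self, name):
        def call(*args):
            self.log.append(name)
            return name
        return call

def install(device, panorama):
    """Steps 1-7 in order; every method here wraps an API call."""
    device.import_config("store-config.xml")    # 1. import the config file
    device.load_config("store-config.xml")      # 2. load it
    job = device.commit()                       # 3. local commit
    device.wait_for_job(job)                    # 4. poll the commit status
    serial = device.serial_number()             # 5. read the serial number
    panorama.move_device(serial, "Pending", "Production")  # 6. regroup
    panorama.commit_for(serial)                 # 7. Panorama commit
```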

If you have any need to programmatically interact with a Palo Alto firewall, I encourage you to dig into the API.  There’s a ton of very good data just waiting to be accessed, very easily.


July 23, 2015 at 7:33 pm Leave a comment

CradlePoint API info

Every CradlePoint router (with at least a reasonably recent firmware) includes a very nice API.

However, if you search looking for documentation on their website about it, you’ll only find information on the API for ECM, their central management service.

Here are a few very useful URLs that you can call with the RESTful client of your choice:

Figure out what model of CradlePoint you’ve reached, and/or the serial number:
https://[CradlePoint IP]/api/status/product_info/

{
  "data": {
    "company_name": "Cradlepoint, Inc.",
    "company_url": "",
    "copyright": "Cradlepoint, Inc. 2015",
    "mac0": "REDACTED",
    "manufacturing": {
      "board_ID": "050000",
      "mftr_date": "20150401",
      "serial_num": "REDACTED"
    },
    "product_name": "CBA850"
  },
  "success": true
}

Get your firmware version (major.minor.patch):
https://[CradlePoint IP]/api/status/fw_info

{
  "data": {
    "build_date": "Thu Feb 19 12:00:07 MST 2015",
    "manufacturing_upgrade": false,
    "major_version": 5,
    "custom_defaults": false,
    "minor_version": 3,
    "fw_update_available": false,
    "patch_version": 4,
    "upgrade_minor_version": 0,
    "build_version": 13953,
    "upgrade_major_version": 0,
    "upgrade_patch_version": 0,
    "build_type": "RELEASE"
  },
  "success": true
}

Find out if you’re connected:

https://[CradlePoint IP]/api/status/wan/connection_state

{
  "data": "connected",
  "success": true
}
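Checking that state from a script is a one-liner once you account for the {"data": …, "success": true} envelope these replies share.  A small sketch (the HTTP fetch itself is left out, since you'd typically have to skip certificate verification for the device's self-signed cert):

```python
# Parse a /api/status/wan/connection_state reply body.
import json

def is_connected(body):
    """True if the reply envelope says the WAN is connected."""
    reply = json.loads(body)
    return bool(reply.get("success")) and reply.get("data") == "connected"
```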

Get your WAN interface IP:
https://[CradlePoint IP]/api/status/wan/ipinfo

{
  "data": {
    "netmask": "",
    "dns": [],
    "ip_address": "",
    "primary": "lte-REDACTED",
    "gateway": ""
  },
  "success": true
}

Too much good diag stuff to mention: 

Please note, I REDACTED most of the unique identifying info, but these fields are all available on your gear.  To get the portion of the URL that’s redacted, look in the "primary" key of the result of your WAN IP info, shown just above.

https://[CradlePoint IP]/api/status/wan/devices/lte-REDACTED/diagnostics

{
  "data": {
    "HM_PLMN": "310410",
    "CELL_ID": "176898562 (0xa8b4202)",
    "CS": "UP",
    "PRD": "MC400LPE (SIM1)",
    "VER_PKG": ",005.010_002",
    "MDL": "MC400LPE (SIM1)",
    "TXCHANNEL": "20576",
    "MODEMOPMODE": "Online",
    "ROAM": "1",
    "VER": "SWI9X15C_05.05.16.02 r21040 carmd-fwbuild1 2014/03/17 23:49:48",
    "CFGAPNMASK": "65534",
    "MODEMPSSTATE": "Attached",
    "RXCHANNEL": "2576",
    "VER_PREF_PKG": ",005.010_002",
    "RSRQ": "-7",
    "RSRP": "-90",
    "DBM": "-69",
    "SCRAPN": "16",
    "SS": "100",
    "LAST_PIN": "",
    "BANDULFRQ": "824-849",
    "TX_LTE": "-6.5",
    "RFBAND": "Band 5",
    "SELAPN": "1",
    "SINR": "21.2",
    "EMMSTATE": "Registered",
    "CHIPSET": "9X15C",
    "MODEMTEMP": "40",
    "HW_VER": "1.0",
    "IS_LTE": "true",
    "MFG_MDL": "MC7354-CP",
    "MFG": "CradlePoint Inc.",
    "PRLV": "1",
    "LAST_PIN_VALID": "False",
    "PRI_VER": "05.03",
    "DEFAPN": "1",
    "DORMANT": "Dormant",
    "PUK_RETRIES": "10",
    "EMMSUBSTATE": "Normal Service",
    "MODEMIMSSTATE": "No service",
    "CUR_PLMN": "310410",
    "BANDDLFRQ": "869-894",
    "RFCHANNEL": "2576",
    "PRI_ID": "9903437"
  },
  "success": true
}

My favorite (so far) is a bit difficult to explain in this blog post, but I’ll try:

https:// [CradlePoint IP]/api/control/netperf

To use this, you need 5.4.0 or newer firmware, and you’ll really need your own NetPerf server, but if you get that set up, you should be able to initiate your own speed tests across the LTE link.  You’ll need to pass data to this one, though, so it’s a bit harder.  Here’s my data template, with words surrounded by percent signs as variables.

$json_template = '{"input":{"options":{"limit":{"size":%size%,"time":%timeout%},"port":"","host":"%host%","ifc_wan":"","recv":%recv%,"send":%send%,"tcp":true,"udp":false},"tests":null},"run":1}';

After customizing this for the test that I want to perform, I do an HTTP PUT of this data.  In my case, with PHP, I have to pass my $json like this: array('data' => $json).

Anyhow, doing this kicks off a speed test that runs for %timeout% seconds.  You can then do a GET to the /api/control/netperf URL and get a status, like so:

https://[CradlePoint IP]/api/control/netperf

{
  "data": {
    "input": {
      "tests": null,
      "options": {
        "udp": false,
        "limit": {
          "size": 0,
          "time": 10
        },
        "tcp": true,
        "recv": true,
        "port": null,
        "send": false
      }
    },
    "output": {
      "results_path": null,
      "status": "idle",
      "command": null,
      "error": null,
      "progress": 0,
      "guid": -1
    }
  },
  "success": true
}

In the “output” section above, had I just performed a test, I could look at the value of “results_path”, which is a URL to the results of the test.
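Putting the kickoff and the status check together: the sketch below fills in the %placeholder% template and reads the status and results_path back out of a reply.  The template string is the one above; the helper names are mine, and the actual PUT/GET calls are left to whatever HTTP client you use:

```python
# Fill the netperf template and parse a status reply.
import json

TEMPLATE = ('{"input":{"options":{"limit":{"size":%size%,"time":%timeout%},'
            '"port":"","host":"%host%","ifc_wan":"","recv":%recv%,'
            '"send":%send%,"tcp":true,"udp":false},"tests":null},"run":1}')

def fill(template, **values):
    """Replace each %name% placeholder with its value."""
    for name, value in values.items():
        template = template.replace("%%%s%%" % name, str(value))
    return template

def netperf_status(body):
    """Return (status, results_path) from a GET of /api/control/netperf."""
    output = json.loads(body)["data"]["output"]
    return output["status"], output["results_path"]
```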

There is a TON of great info you can get from the CradlePoint API.  CradlePoint built their web interface on top of the API, so pretty much anything you see in the web interface can be accessed via the API.  In fact, if you simply use a tool like HttpWatch to look at the interaction between your web browser and the remote CradlePoint device, you’ll be able to learn how to do all this yourself.


June 30, 2015 at 8:29 pm 10 comments

Migrating an app from one database to another

We have a bunch of apps that run against a database, dating back about 10 years.  These apps have grown over the years.  The original database was done in MSSQL, and the apps that access it are mostly written in PHP.  We recently needed to add a few fields, but instead of trying to get them added to the MSSQL database, I took the approach of building a class to access the data.  This class reads some fields from the original MSSQL database and others from a MySQL database.  The update logic is built into the class, so no matter which database contains a given field, the class performs the update against the appropriate DB.

So, this has all been working well now for the last 5 or 6 months, and now we want to take the next step.  We want to move all the fields that are still relevant to the MySQL database, and just stop using the MSSQL database altogether.  This is made more complicated by the fact that numerous apps access the MSSQL database directly.  In particular, there is one old VB.NET app that we want to eliminate.  I’ve already created a web page that does the majority of the things this old VB.NET app does, and it uses my new class.  There’s still a little work to do to get everything, but it’s close.

I’ve been trying to think of how to best handle the DB migration.  I see a couple of options:

1.  Duplicate the fields exactly (column name included) from the MSSQL DB into the MySQL DB, and have the class stop using the MSSQL DB.  This would require all the apps that access the MSSQL DB directly to be switched to MySQL.  That’s a lot of work, and will likely take a few days, perhaps even a week or so (especially given this time of year and the vacation days people take).  During that coding time, some apps are using the records from MySQL and others are using the MSSQL DB.  Since these apps manage the configuration of hundreds of routers and thousands of switches in about 700 remote sites, we really don’t want anything going wrong, as it could be a major pain to fix.

2. Create new fields in the MySQL DB to match the required fields from the MSSQL DB, but name them differently.  This gives us the ability to search easily through source code for the various field names (the MSSQL versions) to make the MySQL changeover.  This would be nice, but I believe it would also require us to code the class so that it synchronizes each “duplicate” field in the MySQL DB with the corresponding field in the MSSQL DB.  If a record is updated using the MySQL field name, it would need to also update the corresponding MSSQL field with the same value.  Similarly, if the update used the MSSQL field name, it would need to update the MySQL field.  Finally, upon each load of data into the class, we’d need the class to compare the corresponding fields, and if the values weren’t equal, make a decision on which one “wins” and is then copied into the “loser” field.  To start with, we’d probably want the MSSQL side to win.  When we are certain we are done with all the code changes across a few servers, then it would be time to drop the MSSQL code altogether.
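To make option #2 concrete, here's a sketch of the reconcile step with made-up column names (the real schema is internal): the map pairs each new MySQL column with its legacy MSSQL twin, and on load the winner's value is copied into the loser's field.

```python
# Hypothetical mapping: MySQL column name -> legacy MSSQL column name.
FIELD_MAP = {"site_router_ip": "RouterIP"}

def reconcile(mysql_row, mssql_row, field_map=FIELD_MAP, mssql_wins=True):
    """Copy the winning value into the losing field; mutates both rows."""
    for my_name, ms_name in field_map.items():
        if mysql_row.get(my_name) != mssql_row.get(ms_name):
            if mssql_wins:
                mysql_row[my_name] = mssql_row.get(ms_name)
            else:
                mssql_row[ms_name] = mysql_row.get(my_name)
    return mysql_row, mssql_row
```

Flipping mssql_wins to False at cut-over time would reverse the direction of the sync without touching any other code.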

I’ll probably give this a good bit more thought, and look through the source code a while before I come to a 100% answer, but I’m leaning toward option #2 at this point.  Once the coding is done to the class to keep everything in sync, this should allow us more time to finish the other coding changes.

November 27, 2013 at 12:21 am Leave a comment

Managing thousands of network devices

I’m a network designer, and the tools I’ve written manage 700+ networks for one company.  Each of these networks contains a Cisco router (various models), between two and six Cisco switches (mostly various 2960 models), and a CheckPoint UTM.  Altogether, it’s over 3000 switches.  We have heard hints from management that we are growing to around 1000-1100 sites in the near future.  With our network design, expanding to that many sites will be almost effortless.

My day to day job involves a lot of configurations and a lot of data.  We don’t use any software from Cisco or any 3rd party to manage our configurations.  The tools that generate and manage the configurations were all written by me.  I have not yet seen a piece of COTS software that can manage router & switch configurations in a manner suitable for our business.

Generating Configurations

Each device config is generated from a template.  Most hardware models have a unique template.  These templates contain placeholders for items that are unique for each location.  Various database tables track these unique values, and my tools drop the right values in the right spot.

In the case of the routers, these templates are almost 1000 lines of commands.  Routers have very complex configurations and the widest range of variables.  Over 125 variable substitutions occur for every router configuration.
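The substitution itself doesn't need to be fancy.  Here's a minimal sketch using Python's stdlib Template class; the template text and variable names are invented, since the real templates are internal:

```python
# Render a device config from a template plus per-site variables.
from string import Template

ROUTER_TEMPLATE = Template(
    "hostname $site_id-rtr\n"
    "interface Serial0/0/0\n"
    " description WAN to $wan_carrier circuit $circuit_id\n"
    " ip address $wan_ip $wan_mask\n")

def generate_config(variables):
    """Render the config; raises KeyError if a variable is missing."""
    return ROUTER_TEMPLATE.substitute(variables)
```

With the real router templates, the `variables` dict would carry the 125+ values pulled from the database tables for that site.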

The switch templates are a little simpler, but are still over 400 lines of commands each.  One database table tracks the admin status of each switch port, along with the speed and duplex setting.  Those settings are tracked so that if that switch config gets regenerated, the admin status, speed and duplex settings are retained in the resultant config.

Oh, the switch configurations can be a bit tricky, partially because we have different switch designs at different sites.  Tracking that is no problem, though, thanks again to the database.  We also name our configurations, and each location has a slot in the database to track that as well.

The UTMs are somewhat unique, in that they house a 3G connection.  This is backed by a Ruby on Rails web app that lets the Network Operations team pair the UTM with a 3G modem from one of two vendors and assign a unique static IP.  This database also tracks serial numbers and phone numbers for modems and SIM cards.  Once a device is assigned to a location, a configuration is generated within a couple of minutes.  This database is versioned, and I’ve provided a web interface so the NOC can “go back in time”, so to speak, and see exactly when and what changes have been made to this database.  This is very important, as it helps track the actual hardware, since people make mistakes when faced with an easy-to-alter database.  Other tools coordinate the configuration of routers with each UTM, so that a backup WAN link can run over this 3G connection.

Managing Configurations

Notice that the title of this post contains the word Managing.  My job isn’t done at just generating complex config files that work together.  We have a team of network operations folks that handle the day to day care of these devices.  They need to have a high level of access to do their job, but they occasionally change things, in the course of troubleshooting, and don’t always put everything back.

I hate it when that happens.

So, I audit.  I have a series of tools that works hand-in-hand with the generation tools that I mentioned above.  Every day, these audit tools read the configuration of every router, switch, and UTM.  Configurations are generated and diffs are performed.  When differences are found, they are pinpointed (down to the interface, or sub-section of the config) and emailed out to the team, highlighting the lines that are either missing (part of the template, but not in the actual config of the device) or present but not expected (extra lines in the config that don’t exist in the template).  This allows us to quickly find and clean up the human error that slips in whenever humans are involved.
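The diff half of that audit is straightforward with a stock diff library.  A sketch using Python's difflib (the real tools also pinpoint the interface or config sub-section the difference falls in):

```python
# Split a template-vs-device diff into "missing" and "extra" lines.
import difflib

def audit(template_lines, device_lines):
    """Return (missing, extra): template lines absent from the device,
    and device lines the template doesn't call for."""
    missing, extra = [], []
    for line in difflib.unified_diff(template_lines, device_lines,
                                     lineterm="", n=0):
        if line.startswith("-") and not line.startswith("---"):
            missing.append(line[1:])
        elif line.startswith("+") and not line.startswith("+++"):
            extra.append(line[1:])
    return missing, extra
```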

This is vitally important.

About a year and a half ago, I was tasked with incorporating our network with that of another company.  While the network team of that company had kept things running, their configurations were far from standard.  If an issue arose at one site, a band-aid was applied to work around the problem.  In many cases, problems were forgotten and a proper solution was never implemented.  Rinse.  Repeat.  This appeared to have happened for years, with various technicians implementing their “fixes” on their own.  There was a “port standard” for VLANs, but that was at least partially abandoned in most locations.  The result was that practically every site was “a one-off”, a unique non-standard configuration.  This makes standardization a nightmare.

By performing daily audits, we can catch these sorts of problems.  Network techs who might come from an environment where they could change whatever they felt like become more conscientious, knowing that their actions are being monitored to ensure that the configurations stay standardized.  While only a few issues are caught each week by these audits, it’s easy to see how it keeps our network constantly snapping back to a desired state.

Remember, above, when I mentioned the switch configurations having names?  That goes hand-in-hand with the auditing tools.  A switch is audited against the switch configuration style that shows up in the database for that location.  So, if you implement a new config style in 10 locations, the audit tools will be auditing the switches at those locations against the new templates, not against the templates driving the configurations in the other locations.

Audits also serve another important purpose.  Once every couple of months or so (sometimes much more frequently), configuration templates change.  The audit tools are written such that these differences can be programmed for.  If a particular set of differences are found, the audit tool itself will actually perform the commands to get the device configured properly.  In the event that hardware is being upgraded, routers may be configured months ahead of deployment.  If configuration changes happen in the meantime (like they often do), the next audit after the device installation will bring the config up to the standard.

In addition to auditing each of the remote site devices, another tool audits the various central routers that the remote sites connect into.  These routers literally contain many thousands of lines of configuration, all of which must be exactly correct in order to properly work.

Making Big Changes

I’m currently involved in a project to consolidate two networks together.  Essentially, we have a pair of central routers with high-speed links connecting to them.  These routers connect to an old network that is slowly going away.  While technicians are on-site at the location, we will be implementing changes to swing these locations to the portion of the network that will remain.  We’ve attached these central routers to the new network via a Gig interface, which is part of a new VRF.  Once techs are on-site to strip the old gear out, leaving only our new gear, we can make the required configuration changes to swing them from the old network to the new network, just by running a script.  It’s actually much more complex than that, but that’s all the Operations team will have to know, as the intricacies of the changes are mostly hidden from them.  These hidden changes include not just the central routers that the circuits terminate on, but database changes, and another pair of core switches and firewalls that require route changes at cut-over time.  By using the template approach, all of the network-side changes are possible without significant programming, planning, scheduling, or implementation effort.

Data, Data, and Even More Data

Managing these networks doesn’t just mean managing their configurations.  In addition, there’s lots of data collection that goes on.

Every router, switch, and UTM has various information polled each day.  The model of device and level of firmware is pulled from each, along with other hardware specific data.  This ensures that, for example, if a device experiences a hardware failure and gets replaced with a piece of hardware running another version of IOS, it gets noticed reasonably soon so that it can be corrected.  In the case of the UTMs, this daily data pull includes the firewall policy that is active on the UTM.  The CheckPoint management server occasionally has issues where not all devices get updated, and a simple sortable html table showing this data lets us easily see which devices haven’t been updated to the latest policy yet.  A simple html table for the routers and switches gives totals of what model of hardware is running which version of IOS, as well as how many of each model are in the field.

The above paragraph just hits the easy stuff.  In addition, each day the entire MAC address table is pulled from every switch, along with the ARP table from the corresponding routers.  By cross-referencing the MAC addresses (associated with the ports) with the router’s ARP table, we can tell which IP device is attached to which physical switch port.  CDP info is also pulled.  The result is a web page where you can enter a site ID and get a chart of all switches at that site, the state of each switch port (including how many days since the last status change), the cross connections between switches (and other CDP-capable devices), and every IP device attached to the switch, right down to the port.  (Devices that don’t communicate across the router much may not be caught in the ARP table, so a couple of devices might be missing, but this catches about 95+% of devices.)  Various people across different areas of I.T. use this data daily to help them quickly locate and troubleshoot equipment.  This info is also very valuable if you are trying to move toward a standard layout of equipment to specific ports on specific switches.
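The cross-reference itself is a simple join once the two tables are parsed.  A sketch with simplified table shapes (the real data obviously comes from the switches and routers, not from literals):

```python
# Join the switch MAC table against the router ARP table to map
# switch ports to the IP devices behind them.

def port_to_ip(mac_table, arp_table):
    """mac_table: {port: [mac, ...]}; arp_table: {mac: ip}.
    Returns {port: [ip, ...]} for ports with at least one known IP."""
    result = {}
    for port, macs in mac_table.items():
        ips = [arp_table[mac] for mac in macs if mac in arp_table]
        if ips:
            result[port] = ips
    return result
```

Ports whose MACs never show up in the ARP table simply drop out, which matches the "about 95+%" caveat above.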

Another relatively recent addition to our data tracking is WAN interface errors.  We poll for interface errors on the WAN link throughout the day.  Sites with any errors get polled more frequently.  If sites with previous errors continue to rack up more on subsequent polling, emails are sent to alert the NOC of a continuing issue with the WAN link.  A beautiful dot chart created with this data lets us see trends in these errors over the course of the month, with a different background color for weekends, when we’d expect fewer vendor changes on the MPLS network.  This has even helped us find problems with the uplink from a Central Office to the MPLS cloud, when we noticed that numerous sites in the same vicinity all started having a similar pattern of errors.

To be clear on these WAN interface errors, these are problems that we were not tracking at all until very recently, but they are very real issues.  By looking into the WAN link when a location is getting a few thousand errors in a day, we might head off a T1 circuit outage.

3G Link Monitoring

Most monitoring systems don’t have a great method of monitoring a secondary link that’s really a backup WAN link.  I’ve seen them implemented by having the route for monitoring go across the backup link, but that is dependent on routes being configured properly.

In our case, we chose a different path.  I wrote a monitoring tool that logs into the central router for these backup links every 5 minutes.  It pulls the EIGRP neighbor table to see which locations have operational 3G links.  Further, it pulls the route table to find out which locations are actively routing across the 3G links.  Some database and parsing magic combine to give us a monitoring system that sends SNMP traps to our NMS station that will give us an alarm that “3G link is active” (when the normal WAN link is down) and another alarm “3G link is inoperable” when the 3G link itself is down.  The data this tool collects is also available on a web page, where timestamps are displayed showing the “Last Active on 3G”, “Last Contact”, and other similar fields.
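The decision logic behind those two alarms is tiny once the neighbor and route tables are parsed.  A sketch, with the parsing of the actual router CLI output omitted:

```python
# Per-site 3G status: an EIGRP neighbor on the 3G hub router means the
# backup link is up; a site route pointing at that neighbor means the
# site is actually running over 3G (i.e. its primary WAN is down).

def site_status(has_3g_neighbor, routing_via_3g):
    if routing_via_3g:
        return "3G link is active"        # primary WAN link is down
    if not has_3g_neighbor:
        return "3G link is inoperable"    # the backup itself is down
    return "normal"
```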

To be completely honest with you, the above method of monitoring the 3G link doesn’t sound like it would be very effective.  I had some doubts when writing it.  To my happy surprise, it’s extremely efficient, taking only a couple of seconds to do all that, once every 5 minutes.  It has been monitoring our 3G links now for about 3 years (since soon after installing them), and it works amazingly well.

Managing Big Networks

You can make things easier if you have control over the entire design from the beginning, but who is that lucky?

Liberal use of databases, combined with competent programming, are the key to managing networks of any significant size without losing all your hair.

September 9, 2013 at 11:54 pm Leave a comment

NetFlix Watch Instantly on IOS devices via Proxy

It looks like NetFlix Watch Instantly on IOS devices uses a trick to try to avoid proxy servers.  More on that in a bit.

I found this out using a great Mac application called GlimmerBlocker.  It’s a proxy server that blocks ads, enhances web sites, and more.  Kind of like GreaseMonkey for Firefox, except in proxy-server form.  It gives you the opportunity to transform the content of everything going through it.  Various programmers create filters and publish them for anyone else to use.  The site lists ones as specific as a “Facebook ‘Like’ Disabler” and the “Facebook 3rd party block”, to ones such as “SSL:ify Sites”, which forces all communications to Facebook, LinkedIn, Dropbox, and Google to their SSL versions.  If you have kids, you might want “SafeSearch”, which enforces safe search on Google, Yahoo, and Bing.  You can even chain GlimmerBlocker to a caching proxy to save bandwidth.

Unfortunately, my iPhones and iPad can’t use NetFlix’s Watch Instantly feature through GlimmerBlocker, or any other proxy for that matter.  Today, using the debug logging built into GB, I was able to figure out why.  Somewhere just before the video starts to play, the NetFlix IOS application tries to perform a CONNECT to the address on port 4343.  Oddly enough, this IP address resolves to, which fails since the proxy server is trying to talk to itself.  It appears that the IOS application is listening on port 4343 for this in some sort of strange attempt to keep you from running this through a proxy server.  I don’t know if they are sending this through to the client unencrypted and don’t want the video cached, or what they are trying to do here, but there is a way around it, though not very elegant…

On the proxy server itself, set a hosts file entry for to be the local IP address of the IOS device.  You might have to set a static IP on your iPad, or use a static DHCP reservation, but it works. The problem here is if you have more than one IOS device.

A far more elegant way to handle it would be via a GlimmerBlocker filter that looks for as the destination host, then changes the request to the IP address of the requesting client machine.  This way, you could have multiple IOS devices going through the proxy server simultaneously, and it would translate all the “” requests back to the original IOS device that made the request.  I’ve tried my hand at creating a GB filter to do this, but so far I’ve been unable to.  I’m not experienced at this, so perhaps it is possible without changes to GB, but it appears that there isn’t a variable containing the requesting client’s IP Address.

If you have any ideas on how to make this work with multiple IOS devices via proxy at the same time, feel free to chime in.

January 7, 2011 at 9:24 pm

Blogging about programming, networking, and computers in general

Welcome to my new blog. Previously, I blogged about PHP on a separate blog. As it happens, I’m doing somewhat less PHP programming now. I’ve also begun to do some Ruby programming. I work with just about anything network related (Cisco Routers, Cabletron Switches, Cisco Switches, etc.). Since then, I’ve also switched from a Windows machine as my primary box to a Mac Mini… Then I switched to an Intel iMac, just after they were introduced. So, there’s no telling what I’ll find interesting to blog about… That’s why I started the Jack of All I.T. blog.

May 2, 2007 at 4:49 pm

An Important Programming Truth, re-discovered

This is mostly a tip, but it does deal directly with my FAQ project… Tonight, I re-discovered an important programming truth…

All programmers who have looked at code they wrote 6 months or more earlier know that documentation is an important thing.. I mean, I have a hard time remembering having WRITTEN code for some projects, much less remembering the specifics of how said code actually works… All serious programmers should know that good documentation is absolutely essential for all but the simplest of code.

But, the programming documentation that is perhaps the most important of all is documentation for RECURSIVE FUNCTIONS… Recursion is so powerful if used properly… In a scant few lines of code you can accomplish a lot of complex work. But recursive code is also very difficult to scan and figure out what is going on. (At least, with any code that actually performs a real-world job…) I re-discovered this important programming truth tonight as I worked out a bug in my recursion code…

My recursive function consists of a grand total of 6 lines of code. After having difficulty reading this code (which I wrote only about a week ago!) I now have written seven full lines of text (consisting of 102 words total) to explain how this code works.
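To illustrate the point (this is a made-up example, not my actual project code), here’s what a short recursive function looks like once the documentation that explains it is attached:

```python
# A roughly 6-line recursive function, plus the documentation that
# makes it readable six months from now.

def count_entries(category):
    """Count FAQ entries in a category tree.

    `category` is a dict: {"entries": [...], "children": [category, ...]}.
    Base case: a category with no children contributes only len(entries).
    Recursive case: add each child subtree's count.  The recursion
    terminates because every child is strictly deeper in a finite tree.
    """
    total = len(category["entries"])
    for child in category["children"]:
        total += count_entries(child)
    return total

faq = {"entries": ["a", "b"],
       "children": [{"entries": ["c"], "children": []}]}
print(count_entries(faq))  # → 3
```

The docstring is longer than the function body, and that’s exactly the point.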

Of course, now that I’ve written this documentation, we all know that this will be the most bug-free portion of code in the entire project, and I won’t need to go back and look at it for years, if ever… 🙂

August 22, 2004 at 4:00 pm

Image Optimization

While reviewing a site of mine, I found that I was guilty of something that I only thought everyone else was guilty of…

Non-optimized images.

There. Now you know.

Image optimization is something that I have suggested to others many times as a way to speed up their website, while conserving bandwidth at the same time. Why not kill two birds with one stone?

Well, while I was doing a bit of fiddling in my favorite image editor tonight, I discovered that I had an image on one of my sites that was not optimal in terms of size. It was a header for a page… Prior to my CSS layout, it was broken into three separate images, each about 9-10K… With CSS, I found that it would be best to have it as a single image, but it was around 40K. (I did it in a bit of a hurry, though 40K seems huge in retrospect!) I messed around with various settings, looking at the final file size and the quality of the image, and finally settled on a variation that was about 20K in size. This reduction is enough to make a significant improvement in download speed, while only slightly lowering image quality.

The thought then crossed my mind that I should probably check some other image files, as I might be able to shave off a little more. Well, I didn’t keep an exact count, but I’m guessing that I shaved another 15-25K off of the front page of the site alone, as I was able to cut off a few hundred bytes on up to a few thousand bytes on some images.

Here are a few image optimization tips:

1. Try different formats! You may have everything on your site in GIF today, but that might not be best. In my testing, some images looked best and were most efficient in GIF format, but PNG may have an advantage with some image files. JPEG was a big favorite of mine as well. For certain types of images, JPEG is superior to GIF and PNG.

2. For GIF and PNG formats, don’t forget to lower the color depth. I found that in some cases I was able to lower the color depth to as few as 8 colors. Simply dropping from 256 to 128 colors in many cases resulted in an almost unnoticeable reduction in quality, while dropping a significant percentage of the file’s size. Depending on the individual image, you might be able to drop down to 64, 32, or even fewer colors.

3. For JPEG-formatted images, use the quality scale! Again, depending on the image, you can often get away with a much lower quality setting without noticeably affecting the actual image quality. This can result in BIG size savings.

4. While some people who offer ready-to-download icons (for their products, etc) are putting out highly optimized images for others to use, not all ready-to-download images are optimized. The result? Image files that are 2-3 times larger than they need to be, if not more. So, before you use that new snazzy version of Tux you found on the web, load it into your favorite optimizer to see if it’s ready for prime time.
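Tips 2 and 3 are easy to experiment with from a script. Here’s a quick sketch using Python and the Pillow imaging library (assuming it’s installed; the gradient image is just synthetic test data so compression has something to chew on):

```python
# Sketch of tips 2 and 3, assuming the Pillow library is available.
import io
from PIL import Image

# Build a small synthetic gradient image to experiment on.
img = Image.new("RGB", (128, 128))
img.putdata([(x % 256, y % 256, (x + y) % 256)
             for y in range(128) for x in range(128)])

# Tip 3: the JPEG quality scale -- lower quality, smaller file.
jpeg_sizes = {}
for quality in (90, 60, 30):
    buf = io.BytesIO()
    img.save(buf, "JPEG", quality=quality)
    jpeg_sizes[quality] = buf.tell()

# Tip 2: lower the color depth for palette formats like PNG/GIF.
full = io.BytesIO()
img.save(full, "PNG")
reduced = io.BytesIO()
img.quantize(colors=32).save(reduced, "PNG")  # 32-color palette

print("JPEG bytes by quality:", jpeg_sizes)
print("PNG bytes, full vs 32-color:", full.tell(), "vs", reduced.tell())
```

Loop a script like this over a whole images directory and you’ll spot the offenders quickly.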

August 12, 2004 at 3:53 pm

My Programming History

I recently realized that I hadn’t adequately explained what my programming, PHP, or web development experience level is.

I’ve been programming in various languages on multiple platforms now for about 20 years. Languages that I’ve worked with extensively (roughly in order) include: Turbo Pascal, C, C++, Rexx, C#, and now PHP. I’ve written applications in these languages on platforms including DOS, OS/2, Palm OS, Solaris, and Windows.

Applications that I’ve written include a BBS Door game, a Palm Address book, an SNMP Trap-mapper, numerous Rexx utility programs, various Windows command line applications, a Windows GUI application to certify Frame Relay circuits, and (within the last 6 months) numerous small command-line PHP utilities for Solaris.

The vast majority of my PHP experience (at this point, at least) is with a Solaris system, with command-line PHP apps… Early this year I realized the power that PHP had in the form of the many extensions that are easily enabled. After explaining the advantages, my managers allowed for a trial run of PHP to see how well it worked in our environment. The ability to read and update a MS-SQL database from Solaris has proven to be a huge time-saver, not to mention the ability to easily script telnet sessions, and even perform SNMP queries directly from PHP. We have been slowly converting from Rexx to PHP for quite a few functions.

Sounds fairly impressive, right?

Now, I have very little web development experience…

1. My first web application was written in C# and simply accepted input from a Solaris application via HTTP POSTs, populated a MS-SQL database, and displayed filtered, sortable result sets on various web pages. It has been in use for about 2 years now, so it must not be that terrible.

2. More recently I’ve written a C# app to manage an APC remote reboot switch, which only supports a single user account. My C# application takes a username and password from the user, determines which ports they can manage, shows their status, and lets them reboot the device(s).

3. My most recent web development project was related to MRTG. It was written in PHP and essentially restricts a customer’s view of MRTG to only the ports they are utilizing.

So, my main PHP experience thus far is writing database related command-line utility applications for Solaris. Oh, and by the way, my PHP web experience is with IIS under Windows, which throws another monkey wrench into things.

There you have it. I am by no means an expert on PHP for the web, but do have a good bit of PHP experience with unusual uses of PHP, and I hope to be able to share some of those experiences with you here, as well as share in the building-up of my web-related skills.

July 23, 2004 at 3:11 pm
