Archive for October, 2010

CheckPoint/Sofaware FlashForward

Tonight, 10-30-2010, at 09:57 PM EST, it appears that all CheckPoint Sofaware-based UTM boxes worldwide running at least 8.0 firmware rebooted. (Much like the plot of the TV series FlashForward, where practically every human worldwide blacked out at the same time.)

This hardware includes the CheckPoint Safe@Office series (probably Safe@Home too), the UTM-1 series, the ZoneAlarm Z100G, along with at least one re-branded product from D-Link (the DFL-CPG310) and perhaps others.

My company has hundreds of UTM-1 Edge units that are centrally managed.  We started having very unusual problems sometime before 8:30 PM EST. We were unable to communicate via SNMP or via the WebGUI to the majority of our UTMs.  They seemed to be intermittently dropping traffic, but mostly passing data as usual.  Oddly, while these problems were happening, we did find at least one that seemed unaffected.  It was still reachable via WebGUI and SNMP.

Of all our corporate UTMs, one isn’t managed by a central management server.  Oddly enough, this UTM was exhibiting the same problems as the rest.  Just after 10PM, we saw that the problem appeared to have cleared up.  Checking the logs on our UTMs, we saw that they all appeared to reboot at about 9:57 PM EST.  I even checked the UTM that seemed to have been unaffected by the problem.  It had also rebooted.  We initially thought it was an issue with the central management server, but then I looked at the location with the UTM that isn’t centrally managed.  It also rebooted.  At just about the SAME TIME.  I was completely floored at this finding, as I couldn’t see how this could have happened.

But wait, there’s more.  Another technician and I were trying to troubleshoot this issue from our homes.  Both of us experienced issues connecting to our corporate desktops.  What do we have in common?  We both use UTMs like those in our corporate locations for our home connections to the Internet.  Rebooted at about the same time!

Further, I have another, older model CheckPoint Safe@Office unit (small business version of the UTM hardware) that I use essentially as an access point.  Guess what?  It rebooted too, just a bit before 10 PM!

We contacted CheckPoint support, and they stated that they have reports trickling in from other customers with the same symptoms.

Let me remind everyone reading – I really, really like these little firewalls.  They are just packed with features and are (normally) very, very reliable boxes.

As I mentioned, everything seems to be back to normal now.  But, this really has us unnerved, as just like the characters from the TV series, we don’t know if or when another FlashForward event will occur.

Update 1: This problem is confirmed on the latest firmware 8.1.46, as well as on 8.1.37, and 8.0.39.  I have access to UTMs running these firmware revisions, and they all had the problem.  The CheckPoint support technician we spoke with stated that, based on initial reports, it wasn’t tied to any particular firmware.

Update 2: Gave a bit more detail above.

Update 3: In the case of my company, our hundreds of UTM-1 Edge boxes do not directly connect to the Internet.  This pretty much rules out any sort of public worm causing this issue.

Update 4: According to other sources, this appears to have been related to a time change event that took place in some parts of the world yesterday.  Here’s hoping that this isn’t a preview of next weekend, when we move our clocks here in the U.S.

Update 5: Final word from CheckPoint sounds like a timer overflowed.  They said it won’t happen again for 13.6 years.
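CheckPoint didn’t say what the timer actually was, but 13.6 years lines up almost exactly with a 32-bit counter that ticks every 100 ms. Both the counter width and the tick interval are my guesses, not anything CheckPoint confirmed:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# Hypothetical: an unsigned 32-bit counter incremented every 100 ms
wrap_seconds = 2**32 * 0.1
wrap_years = wrap_seconds / SECONDS_PER_YEAR

print(f"Counter wraps after {wrap_years:.1f} years")  # ~13.6 years
```

If that guess is right, it would also mean the counter was anchored to some shared epoch rather than per-box uptime, since every unit wrapped at nearly the same moment. But that, too, is speculation.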

October 30, 2010 at 10:22 pm

NexentaStor Community Edition

Recently, I posted about a few new NAS distributions that I wanted to look into further.  Today, I put together some spare parts to try out the NexentaStor Community Edition (version 3.03).

My test machine consisted of a Core2Quad processor, 2 GB of RAM, a 160 GB drive (for the OS) and two 500 GB drives for data.

Installation went smoothly.  One thing to note is that when it offers to set up “up to three” drives during the OS install, it’s just referring to the OS.  It is nice that it will automatically set itself up in a redundant mode.

After the OS is installed and you’ve set your IP, you bring up the web GUI and finish the install.  I had trouble getting my two 500 GB drives into a mirror.  Using Chrome, the “Redundancy Type” pull-down is empty.  I tried Firefox and Safari as well.  There, the pull-down showed options, but all were grayed out except “None”.  After trying various methods to unlock the Mirror option, I went ahead and created the volume using the “Force creation” checkbox.  Update: Through some more testing, I found that you must highlight all the disks that you want in a mirror or RAID set at once, then hit the pull-down, and the appropriate options will be available (in Safari at least); then add them.

After this, I created the folder, and the “Wizard” that was guiding me through these steps was about done.  In order to actually share anything, you must go to the Data Management Menu, select Shares, then check the box in the CIFS column.  The other options on this screen are NFS, FTP, RSYNC, WebDAV and Index (to create an indexer for that folder).

I next attempted to connect from my Mac, without success.  Even though the CIFS settings for this share had “Anonymous Read-Write” checked, I couldn’t connect in Guest mode.  Update: Umm… Found out that their idea of “Anonymous Read-Write” isn’t the same as mine…  In Nexenta, it means you use a username of “smb” and password of “nexenta”.  Not what I was expecting, but it worked.  Finally, I added a user and was able to connect, but still could not write.

Through the WebGUI, I went to edit the Folder and “Added Permissions for Group” for the staff group (the one my new user was a member of).  After that, I could write to the share and delete files without a problem.

I copied 1.06 GB to the share.  This took much, much longer than I anticipated.  About 10 minutes later, the copy (5 files, about 200 MB each) was complete.  At roughly 1.5 – 2 MB/second, that isn’t exactly speedy.  I had enabled deduplication and compression, so that may have something to do with the speed, but this was shockingly slow to me.

I deleted those files, disabled compression, and recopied them over.  This time it was somewhat faster, at about 6 minutes.  That’s still excruciatingly slow compared to my ReadyNAS Pro or even my Drobo-FS.
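For the record, the throughput math on those two copies (sizes and times as measured above, treating a GB as 1024 MB):

```python
def mb_per_sec(size_gb: float, minutes: float) -> float:
    """Average throughput for a copy of size_gb gigabytes taking `minutes`."""
    return size_gb * 1024 / (minutes * 60)

# 1.06 GB with dedup and compression enabled, ~10 minutes:
print(f"{mb_per_sec(1.06, 10):.1f} MB/s")  # ~1.8 MB/s
# Same files with compression disabled, ~6 minutes:
print(f"{mb_per_sec(1.06, 6):.1f} MB/s")   # ~3.0 MB/s
```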

I’ve read that Realtek network chips aren’t the best with Nexenta, and since I’m not sure what my motherboard has, I grabbed an Intel 1000GT and dropped it in.  But, it didn’t help.

Perhaps my issues are related to my inexperience with the product, or perhaps my spare hardware has some actual hardware problems, or it isn’t quite compatible with OpenSolaris (on which NexentaStor is based).  I plan to try unRAID on this same hardware, so we’ll see if it works out better.

Update: I installed a fresh VM of Nexenta to try to take the hardware out of the equation.  This VM had 1 core (running at a faster clock speed than the C2Q I originally tested with) and 2 GB of RAM.  It took almost 7 minutes to copy my test files (the same ones as before).  This was with a mirror set and dedup set to “verify”, which uses a weaker (faster) hash algorithm to look for duplicate data, but when it finds what it thinks is a duplicate, it actually verifies that both blocks contain the same data.  I wasn’t going for the exact settings of my previous test, just trying to put together something to see if it was reasonably fast.  About 2.5 MB/sec is still very disappointing…
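As I understand the “verify” behavior, the idea is: use a fast, weak checksum to find candidate duplicates, then do a full byte comparison before actually deduplicating. A toy sketch of that logic (`DedupStore` is my own illustration, not Nexenta’s code; `adler32` stands in for whatever fast checksum is really used):

```python
import zlib

class DedupStore:
    """Toy block store: dedupe on a weak checksum, verify on collision."""
    def __init__(self):
        self.blocks = {}  # checksum -> list of stored blocks

    def write(self, data: bytes) -> bytes:
        key = zlib.adler32(data)  # fast, weak checksum (stand-in for the real one)
        for stored in self.blocks.setdefault(key, []):
            if stored == data:    # "verify": confirm the bytes really match
                return stored     # reference the existing block instead
        self.blocks[key].append(data)
        return data

store = DedupStore()
a = store.write("same payload".encode())
b = store.write("same payload".encode())  # identical content, separate object
assert a is b                             # second write was deduplicated
```

The verify step is what protects you from checksum collisions: a weak hash can match on different data, so the byte comparison is the safety net, at the cost of an extra read.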

October 10, 2010 at 6:56 pm

Other storage options

I’ve been doing some reading tonight and found that there are a few options in network storage that I was unaware of…

1. FlexRAID – This is an interesting concept…  Basically, you set up directories you want protected and you set up “parity” directories.  (Sorry, directories is probably not the right word for it.)  Anyhow, they can be spread across multiple drives, or even multiple PCs on a network.  It can run under Windows or Linux, or even with some machines on Windows and others on Linux, working together across the network.  From what I gather, you schedule it to sync at particular times, and when it does, all your data (as of that time) is protected.  So, after the sync, if you lose a drive, the only data that is really lost is whatever changed since the last sync.  This makes it unsuitable for data that changes continuously, like a database or your OS partition, but it sounds perfect for music, movies, photos, and other large sets of data that don’t change much.  With the parity data, you can lose a single disk but keep all your data.  An advantage over RAID5 is that if you lose more than one disk, the data on your remaining disks is still available; with RAID5, everything is gone if you lose more than one drive, while with FlexRAID you only lose the data on the disks that died.  Oh, and this is FREE.  They have plans for FlexRAID Live and FlexRAID NAS versions, which would continuously keep the data in sync.

2. NexentaStor Community Edition – This basically looks like a slightly limited free edition of Nexenta, with a nice WebGUI front end.  Nexenta is an enterprise storage solution, so it costs big bucks if you need their for-pay product, but this edition is limited in that you can’t use add-ons, and it only allows up to 12 TB of storage.  Most people don’t have 12 TB of stuff, so that’s not a major hassle.  A big feature for me, personally, is that it’s a bare-metal install, so there’s no messing around with installing an OS layer and then an application atop it.  The headline features are ZFS, with in-line deduplication and compression, and the ability to automatically expand pools.  One minus for me (and most other Mac users) is that it doesn’t natively support AFP.  But it does support running as an iSCSI target, so you could set up your Mac to connect to the iSCSI target and use that for Time Machine.

3. unRAID – Ok, actually, I had heard of this one, but see that it’s still alive and kicking.  They sell complete systems that take up to 20 drives, or you can buy the software and install it yourself on your own hardware.  You can even try it for free with up to 3 hard drives.  Their “unRAID” approach uses a parity drive, but the data isn’t striped across the other drives in the set.  Not all drives have to be the same size, or even speed.  It’s easy to expand, and since each drive actually has its own file system, it’s very unlikely you’ll have a catastrophic failure where you lose everything.  Since the data isn’t striped, you can be reading files off of one drive while other drives are sleeping, reducing your power consumption.
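Both FlexRAID and unRAID lean on the same single-parity math that RAID5 uses: XOR all the data blocks together, and any one missing block can be rebuilt from the survivors plus the parity. A toy illustration (my own sketch, not either product’s code):

```python
def xor_parity(blocks: list[bytes]) -> bytes:
    """Byte-wise XOR of equal-length blocks; the result is the parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

disks = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(disks)

# Disk 1 dies; XOR the survivors with the parity to rebuild its contents:
rebuilt = xor_parity([disks[0], disks[2], parity])
assert rebuilt == b"BBBB"
```

XOR is its own inverse, which is why the same function both computes the parity and rebuilds a lost block. It also shows why single parity only survives one failure: lose two blocks and the equation has two unknowns.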

All three of these solutions are compelling in their own ways, and I can see situations where each one would be a good fit.  They all look good enough to at least investigate further.  Best of all, they each have at least some FREE component, so you can try them out yourself.

October 8, 2010 at 10:32 pm

Drobo-FS Drive Upgrade

Last night I bravely removed a 500 GB drive from my Drobo-FS and replaced it with a 1 TB drive I had just removed from another machine.  Before this, my Drobo-FS was home to three 1 TB drives, a 500 GB drive, and a 1.5 TB drive.  Altogether, this amounted to a little over 3 TB of redundant storage, and was using just about half of it.

Though it may not sound like that brave of an action, consider what actually happened:

1. A member of the RAID array was removed, taking the redundancy you get from a Drobo-FS away.
2. An empty, larger capacity drive replaced the missing drive.
3. The Drobo stabilized the data across all of the new drives, ensuring redundancy.
4. The Drobo expanded the capacity of the RAID array by 500 GB.

And it did all of the above while I could continuously access all of my data.  In fact, my son was watching streaming videos stored on the Drobo while this was happening.  There were no dropped frames or other issues.  It was just smooth playback, as usual.
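Drobo’s BeyondRAID is proprietary, but with single-drive redundancy the usable space works out to roughly the sum of all drives minus the largest one. That approximation (mine, not Drobo’s formula; real numbers come in lower after formatting overhead and decimal-vs-binary terabytes) matches the swap above:

```python
def drobo_usable_tb(drives_tb):
    """Rough single-redundancy capacity: total space minus the largest drive."""
    return sum(drives_tb) - max(drives_tb)

before = drobo_usable_tb([1.0, 1.0, 1.0, 0.5, 1.5])  # three 1 TB, a 500 GB, a 1.5 TB
after = drobo_usable_tb([1.0, 1.0, 1.0, 1.0, 1.5])   # 500 GB swapped for a 1 TB

print(f"Before: {before} TB, after: {after} TB, gained: {after - before} TB")
```

3.5 TB raw squares with the “little over 3 TB” reported before the swap, and the upgrade adds exactly the 500 GB noted in step 4.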

That is a significant feat in the personal storage world.

October 6, 2010 at 5:44 pm

The ReadyNAS Upgrade Dilemma

In a recent blog post, I quickly mentioned the dilemma that you face when you go to upgrade a ReadyNAS box.  I don’t think I made my case very well in that post, so I’m going to go through the process for real, as I’m actually considering upgrading just to get a bit more room.

To understand why there is a dilemma, consider this:

1. To add to an existing array, you must add drives that are the same model as the existing drives.  Usually, these are older, smaller drives than the current “high-end” drives, but since they are rarer now, they may cost more than the original drives did.

2. To easily upgrade an entire ReadyNAS array, you must replace all the drives in the array.  So, if you have a 3-drive array, you must replace all three drives, and if you have a 6-drive array, your only easy upgrade path is to replace all 6 drives.  NOTE: You could rebuild your entire array with fewer, higher-capacity drives, but that’s not normally possible for home users, as you’d need to back up the entire array first.  And you’d back up your array to… what?

So, ideally, you want as few drives as possible, with the highest capacity per drive.

Current Configuration:

3 Western Digital RE3 750 GB Hard Drives out of 6 bays.  This gives me 1.5 TB of redundant storage. (These were originally purchased for $93.02 each on 08-19-2009 from Amazon)

I could add up to three more of these drives at about $105 each, or $315 total, giving me 3.75 TB of redundant storage.  Notice how the price per drive is now more (not less, as you’d expect) than I paid for the first three.

Or, I could buy 3 WD RE4 2 TB drives, which go for about $290 each, or $870 total.  That’s a lot…

ReadyNAS’s website contains their official compatibility list, which does include some desktop-class drives with 3-year warranties.  The 2 TB Hitachi 7K2000 goes for about $130 each, or $390 for a set of three, giving me a redundant capacity of 4 TB.

So, if I’m okay with desktop-class drives, for about $75 more than adding 750 GB drives ($390 versus $315), I’d get 250 GB more capacity, but more importantly, I’d get an easier upgrade path.  Instead of being stuck with six 750 GB drives that I’d have to replace to add capacity, I could then choose between adding drives 2 TB at a time, or possibly upgrading to higher-capacity drives, should the ReadyNAS support anything larger.
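Laying the three options out by what each added terabyte actually costs (prices and capacities as quoted above, ignoring any resale value of drives a swap would free up):

```python
current_tb = 1.5  # usable capacity today, with three 750 GB drives

options = {
    "add three more 750 GB RE3": (315, 3.75),  # (total cost, resulting usable TB)
    "three 2 TB WD RE4":         (870, 4.0),
    "three 2 TB Hitachi 7K2000": (390, 4.0),
}

for name, (cost, usable) in options.items():
    added = usable - current_tb
    print(f"{name}: ${cost} for +{added} TB = ${cost / added:.0f} per added TB")
```

The Hitachi route costs a little more per added terabyte than more 750 GB drives ($156 vs. $140), but it buys the upgrade path; the RE4s run more than double either option.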

Decisions, decisions…  Of course, I don’t NEED to go from 1.5 TB of capacity all the way to 4 TB right now…  And I do feel more comfortable with the Enterprise class drives…

Ultimately, I think I’ll hold out on upgrading for another month or so, as I did what is probably the most cost effective thing….  I went through and did some cleanup.

October 6, 2010 at 4:32 pm

More Drobo-FS/ReadyNAS Love

My Drobo-FS and ReadyNAS Pro Pioneer complement each other very well.  As I’ve mentioned in a recent post, the Drobo-FS is great for mass storage that grows as you need it (or as you find good sales), since you can put just about any SATA drive you find lying around in it, and it will work.  I’ve also recently mentioned that the ReadyNAS Pro is extremely fast, but requires drives that are more expensive, plus they have to be the same model, etc.

I’ve recently reconfigured my SageTV server to record TV shows across the network to my ReadyNAS Pro.  Since making this change, I’ve had multiple instances of two OTA HD shows recording at the same time.  When I watched them, they looked and sounded perfect.

For shows that I want to keep long term (like all the shows my kids watch), SageTV is set to automatically transcode them to a more storage friendly format (about 25% of the original size, or even smaller), with the resulting file automatically being placed on the Drobo-FS.

For shows that I just watch once and delete, I leave them in their original full size and resolution on the ReadyNAS Pro, since they’ll usually only be there for a few days.

As a side effect of this distributed, redundant storage, I now have the option to move my SageTV server into a very, very small box.  A Mac mini would be just about perfect.  If I decide to go home-built (what with the economy and all), it wouldn’t need anything more than built-in video, a reasonably fast dual-core processor (for transcoding), a couple gigs of RAM, and something as small as a 30-40 GB hard drive for the OS and application.

October 5, 2010 at 8:27 pm

