Archive for June, 2007

non-IT related (Honeywell Thermostat)

I’ll admit it. I may code applications that read details out of bridge tables, build massive databases of technical info, and build my own computers, but I’ve never thought of myself as mechanically inclined. Especially around the house.

Anyhow, I was checking some stuff out at the EnergyStar website (stupid electric bills!) and one of their recommendations is to use a programmable thermostat. That could be a very good fit for us, since we live in Florida, the land of near-constant AC usage. Our old thermostat is programmable, but we have power outages every two weeks or so, and it doesn’t keep its settings across them. The result? We pretty much left it on a single setting, all the time.

This isn’t the kind of project I usually take on, but I read up on thermostats a bit. One of the products I was considering was billed as a “Universal” thermostat: it works with conventional systems, heat pumps, whatever you’ve got, so I decided to try it. This particular unit is a Honeywell RTH7500. If you can make it through the entire installation manual, you can probably handle swapping out your thermostat too.

After opening it, I looked through the manual and was extremely surprised at how easy it looked. Basically: disconnect the old wires, labeling each one with the paper labels that came with the new thermostat (using the lettering on the old thermostat’s terminals) as you remove them, then attach them to the new thermostat in the appropriate spots. If you have a heat pump, you use one set of labels; if you have a conventional system, you use a different set. Sounds pretty easy. The manual lists several gotchas you could run into along the way, and if you hit one, it tells you to stop and call a professional, which is exactly what I did not want to do. Overall, I was pretty pumped.

Anyhow, I wired it up, mounted the new thermostat where my old one was, popped the batteries in (power outages will not stop me!), and the unit was up and running. I set it to run and turned it on, only to be suddenly reminded of why I don’t usually do things like this: even though the temperature was 80 and the thermostat was set for 76, the HEAT was on (though the display said it was cooling).

I shut off the breaker, pulled the thermostat out again to check my connections, and went back to the wiring diagram in the book. I found one line stating that if I had a W1 wire but not a W2 wire, I should make a small jumper and hook two of the terminals together. So I did. Powered the unit back up. Got hot air.

Back to the manual I went. I turned the page beyond the wiring section and saw “Setup”. Yeah, I had already set the date, time, etc. Next page? More setup… Waitaminute! I didn’t do this part.

So, the lesson here is read the entire installation manual. Once I finished the setup procedure, telling it that this was a heat-pump system, it worked fine. Hopefully, I’ll end up saving enough money to pay for the thermostat within a few months or so.

June 17, 2007 at 5:51 pm Leave a comment

Eager Loading

I have an unusual Rails application. This web app loads 1,000+ rows of data each time someone hits the main page, which is pretty much the entire database. It takes around 9 seconds to load this data and draw the page; about 2 seconds of that is database access (roughly 400 SQL queries). Fortunately, this isn’t a public web app, as those load times would kill it.

I recently read about eager loading, a Rails feature that pulls your data in with a single database call. By default, Rails lazy-loads associations: it only runs the queries for associated data at the moment you actually use it. This is normally for the best, so you don’t waste time loading data you never touch. With this particular web application, though, the entire database is brought up when you hit the main page, so eager loading seemed to make sense.

(Note: For those of you worried about the users of this app, most operations that they need to perform are done using AJAX after the initial page load, so the users aren’t suffering needlessly.)

Here’s the data model I’m using (though I’ve pulled the validation-related code):

class CoreRouter < ActiveRecord::Base
  has_many :ports
end

class Port < ActiveRecord::Base
  belongs_to :core_router
  has_many :addresses
end

class Address < ActiveRecord::Base
  belongs_to :port
end
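
To put the default lazy loading described above in concrete terms, here’s a rough sketch of the kind of traversal the main page performs (the loop itself is hypothetical; the real rendering code isn’t shown here). Each association access fires its own query, which is where those roughly 400 queries come from:

# Default (lazy) loading: one query for the routers, then one query per
# router for its ports and one query per port for its addresses.
core_routers = CoreRouter.find(:all)   # SELECT * FROM core_routers
core_routers.each do |router|
  router.ports.each do |port|          # SELECT * FROM ports WHERE core_router_id = ...
    port.addresses.each do |address|   # SELECT * FROM addresses WHERE port_id = ...
      # format the row for the page here
    end
  end
end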

My first attempt at eager loading:
@core_routers = CoreRouter.find(:all, :include => [:ports, :addresses])

But that fails:

ActiveRecord::ConfigurationError (Association named 'addresses' was not found; perhaps you misspelled it?):

After a bit of thought, it makes sense. The CoreRouter class doesn’t have a direct association with the Address class, so it doesn’t know how to associate the data.

Fortunately, you can nest this:

@core_routers = CoreRouter.find(:all, :include => {:ports => :addresses})

This tells Rails to pull the routers, their ports, and each port’s addresses back together, essentially joining the core_routers table to the ports table and then to the addresses table in a single query.
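
Once the data is loaded this way, walking the associations shouldn’t trigger any additional queries, since everything is already in memory. A quick sketch (again, the loop is hypothetical, not the app’s actual rendering code):

# With the nested :include, the association data came back with the
# original find, so this traversal issues no further SQL.
@core_routers.each do |router|
  router.ports.each do |port|
    port.addresses.each do |address|
      # work with router, port, and address here
    end
  end
end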

That shaved the time for SQL on this web app from about 2 seconds to less than 0.5 seconds.

Now, I just have to find the time to go back and rewrite the code that takes about 6 seconds to format the data for the website…

June 15, 2007 at 7:42 pm Leave a comment

WPA Enterprise security

Stop the presses!

WEP is broken! It’s been broken for years!

You aren’t still using WEP are you?

Tell me you are at least using WPA-PSK (Pre-Shared Key). WPA-PSK is aimed at average consumers because it is relatively easy to set up and relatively secure. Sure, it’ll probably keep out your gray-haired neighbors, but if your pre-shared key is a word or a phrase, you are hardly invulnerable. In fact, if someone is persistent enough, it doesn’t matter what your pre-shared key is; it can be cracked.

That’s where WPA Enterprise security comes in. This is the technology used by companies that want to make sure their cash registers on wheels can take credit cards without revealing private customer data to anyone within 100 yards who has a laptop.

Call me crazy. Call me paranoid. Perhaps it’s because I’m a programmer and guard most of my source code like it’s my first born child. Whatever the case, I didn’t want wireless on the same segment of my network with my main programming machine unless it was secured with WPA Enterprise.

The down side of WPA Enterprise security?

  • Time consuming to implement
  • Most documentation for free software is extremely complex
  • Requires a RADIUS server

A what, you say? A RADIUS server! You know, those are the servers that are about half the size of DIAMETER servers. 🙂

The most popular RADIUS server is FreeRADIUS. Probably due to the price. But, have you tried reading any of the tutorials?

Not to brag, but I’m proficient at programming in Pascal, C, C++, C#, Rexx, PHP, and Ruby. I design networks and configure routers and switches. I even write programs that interact with network gear via SNMP, telnet, and SSH.

Even with all the technical knowledge I have, the documentation found on the Internet seems as if it were written by a hundred clowns with bees in their pants. (No offense meant, if you happen to have bees in your pants 😛 ) Most of the docs are written as if the reader is already familiar with RADIUS, EAP, PEAP, Certificates, 802.1X and the myriad of other terms associated with WPA Enterprise security.

So what’s my point? Well, after trying multiple times over the course of months using various methods, I found a Linux distribution that I thought would do the trick: a web-administered distro called Zeroshell.

I thought, “Great! A distro of Linux that should be super easy to use!”

Well.

After initially loading it, I was surprised. Yes, it was what it claimed to be: a distro that could be administered entirely via a web interface. However, I was still no closer to figuring out how to get WPA Enterprise security working with it. The distro has web pages for creating users and certificates, configuring RADIUS, and so on, but there was no documentation laying out which steps to go through to get it all up and working. Asking on the forum was pretty much fruitless.

I dropped this distro and kept looking around.

About two weeks later, I came back to it. I ended up vaguely following some documentation through the web interface, and was amazed. I got it working!

So, maybe Zeroshell isn’t so bad after all. In fact, as I’ve picked through it, I’m learning to like it more and more all the time.

I believe it was the night after I first got WPA Enterprise working that I decided to start over and document the process I had gone through to get it working in the first place. Since there wasn’t a WPA Enterprise configuration document for Zeroshell yet, I decided I would give back to the community and write one. The resulting PDF is now available via the Documentation link on Zeroshell’s site. Judging from the posts in the forums, lots of others have had success following it, so it seems like a decent document.

I admit, though, that I could add a lot more detail. I should probably go back and document things better, ultimately covering what happens when your certificates start expiring, along with other options like integrating an Active Directory domain. Hopefully, I’ll be able to go through that sort of thing right here in this blog over the coming weeks. (Reminder to self: get an Active Directory domain.)

Actually, Zeroshell could be used for a lot more than just a RADIUS server for WPA Enterprise security. It can serve as your firewall, and it even has a Captive Portal feature. Just today, I ran across a configuration screen that lets you specify password complexity, minimum length, the maximum number of days before a password must be changed, and so on. I haven’t tried the majority of these features, but it sounds like some really fancy stuff, especially for free software!

June 13, 2007 at 9:38 pm 3 comments

iTunes and a network share

After getting my ReadyNAS NV+, I decided to alleviate some of the storage issues I had on my iMac. I have recently become a fan of the show Monk, and have purchased several seasons of it from the iTunes Store. As such, I was down to under 20 GB of free space.

After a quick Google search, I found this article on Apple’s site that explains exactly how to move your iTunes library. I followed that procedure and moved my entire library to the media share on my NV+.

Within a day or so, I was ready to buy a new season of Monk, so off to the iTunes Store I went. The episodes started downloading and suddenly iTunes complained that there was not enough space available (error -34, I believe). It was obviously wrong, as I have a few hundred gigabytes of unused space on my NV+. I ended up disconnecting the share, quitting iTunes, reconnecting the share, then restarting iTunes, and it worked fine. Later, I found this thread on the Infrant forums that mentions the same problem. It seems there is a bug in the AFP support that causes the ReadyNAS to sometimes report that it is full.

Since then, I have made a few other purchases and hit this problem only once more. According to the forum, switching to SMB fixes it completely, but I’d rather use AFP if I can.

Note: Infrant has released an add-on here that upgrades the AFP software to fix a few minor issues with their implementation. I’ve not tried it yet, and there are no reports as to whether this fixes the iTunes problem, but it sounds like a good thing to try, at least until version 4.0 of the ReadyNAS software comes out.

June 10, 2007 at 5:54 pm Leave a comment

New OS X Native Word Processor!

Allow me to introduce Bean. It’s a new, free, open-source word processor designed to be a lightweight Cocoa application.

As for features, it has a Page Layout mode, a slider to let you easily zoom in and out, a live word count, AutoSave, and much more. Plus, it’s fast.

It’s not going to be a replacement for MS Word or OpenOffice if you are serious about your word processing, but it is a big step up from TextEdit. Plus, the majority of people don’t need the full power of MS Word or OpenOffice. For this majority, Bean will probably be just enough.

June 1, 2007 at 7:38 pm Leave a comment

ReadyNAS NV+ Read/Write performance test

All of these tests were performed using a ReadyNAS NV+ with 256 MB of RAM. All machines mentioned were directly connected to a Netgear 8 port Gigabit Ethernet switch. Jumbo frame support was not enabled.

In each instance, a 2.02 GB file (2,169,841,445 bytes to be exact) was copied from or to the ReadyNAS NV+. The web interface of the ReadyNAS was not being used during these tests. Anti-virus was disabled on all machines for the duration of the test.

The ReadyNAS has an “Optimize for OS X” option for the SMB protocol. Be aware that this option is known to cause issues with Windows NT clients; it did not seem to cause issues with Windows XP clients during my testing. To see how this option affects performance, both read and write tests were performed with it enabled and again with it disabled.

For comparison, the maximum theoretical speed of 100 Megabit Ethernet (with absolutely no protocol overhead) can be calculated this way:

100 Megabit = 100 million bits per second = 100,000,000 bits per second
There are 8 bits in a byte, so:
100,000,000 / 8 = 12,500,000 bytes per second
Convert bytes to Megabytes:
12,500,000 / 1024 / 1024 = 11.92 MB/second
11.92 MB/second = 715.2 MB/minute = 41.9 GB/hour
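
The speed figures for each test below appear to follow directly from the exact file size and the measured copy time. Here’s a quick Ruby sketch of that arithmetic, using the 2:20 AFP read test as the example (the variable names are mine, not from any test script):

# Throughput from file size and elapsed copy time.
file_bytes = 2_169_841_445           # exact size of the 2.02 GB test file
seconds    = 2 * 60 + 20             # 2:20, the AFP read time below

megabytes  = file_bytes / 1024.0 / 1024.0
mb_per_min = megabytes / seconds * 60
gb_per_hr  = megabytes / 1024.0 / seconds * 3600

printf("%.1f MB/minute (%.1f GB/hour)\n", mb_per_min, gb_per_hr)
# prints a figure within rounding of the 886.8 MB/minute (51.9 GB/hour) quoted below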

The following tests were performed with this client:
20″ Intel iMac Core Duo at 2.0 Ghz w/ 2 GB RAM running Mac OS X 10.4.9

Test: READ via AFP
Time: 2:20
Speed: 886.8 MB/minute (51.9 GB/hour)

Test: WRITE via AFP
Time: 3:35
Speed: 577.5 MB/minute (33.8 GB/hour)

Test: READ via SMB w/ “Optimize for OS X” disabled on ReadyNAS
Time: 4:20
Speed: 477.5 MB/minute (27.9 GB/hour)

Test: WRITE via SMB w/ “Optimize for OS X” disabled on ReadyNAS
Time: 4:10
Speed: 496.6 MB/minute (29.1 GB/hour)

Test: READ via SMB w/ “Optimize for OS X” enabled on ReadyNAS
Time: 2:24
Speed: 862.2 MB/minute (50.5 GB/hour)

Test: WRITE via SMB w/ “Optimize for OS X” enabled on ReadyNAS
Time: 3:30
Speed: 591.2 MB/minute (34.6 GB/hour)

The remaining tests were performed with this client:
Dell 1.8 Ghz Celeron w/ 1.5 GB RAM and dual 36 GB 10000 RPM Raptor drives using an onboard Intel 1000 NIC running Windows XP Pro SP2.

Test: READ via SMB w/ “Optimize for OS X” enabled on ReadyNAS
Time: 2:23
Speed: 868.2 MB/minute (50.8 GB/hour)

Test: WRITE via SMB w/ “Optimize for OS X” enabled on ReadyNAS
Time: 4:37
Speed: 448.2 MB/minute (26.2 GB/hour)

Test: READ via SMB w/ “Optimize for OS X” disabled on ReadyNAS
Time: 2:19
Speed: 893.2 MB/minute (52.3 GB/hour)

Test: WRITE via SMB w/ “Optimize for OS X” disabled on ReadyNAS
Time: 4:23
Speed: 472.0 MB/minute (27.6 GB/hour)

I find it very interesting that the WRITE performance from a Windows client was so poor regardless of the “Optimize for OS X” setting. I expected it to be at least on par with the AFP performance for a Mac client. Perhaps this was simply a problem with this individual client, but I do not have any other Windows machines that are connected via Gigabit to test with currently.

Conclusion:
If you want to use SMB for your mixed network (with no Windows NT clients), go ahead and enable the “Optimize for OS X” option. It appears to have a minimal negative effect on real Windows clients and provides a major boost in performance for Mac clients.

If you do have Windows NT clients or you don’t mind using both AFP and SMB, then leave the “Optimize for OS X” option off and just use AFP for Mac clients and SMB for Windows clients.

As for speed, when working with large files, read speeds just north of 50 GB per hour and write speeds of around 30 GB per hour are typical for clients of a ReadyNAS NV+ when all parties are connected at Gigabit speed. Since the theoretical maximum for 100 Megabit Ethernet is 41.9 GB per hour, having a Gigabit connection to a ReadyNAS NV+ is worth it, at least for the read performance, though it’s not a huge improvement over what I would expect to see (just a bit south of 40 GB per hour) from plain old 100 Meg Ethernet.

June 1, 2007 at 7:26 pm Leave a comment

Why does copying over a network take so long?

First, some background:
I use SageTV, a great application that can turn your Windows machine into a real TiVo-killing PVR. To top it off, it is networked, so you can record everything to a single server (with multiple TV tuners in that box) and then connect from a remote client machine (or even a network-attached set-top box they sell) and watch it around the house. Awesome functionality.

The downside? Well, with any PVR solution, you are going to have to deal with absolutely Ginormous (Gigantic + Enormous) files. In my situation, I have around a hundred shows that have been saved off for the kids, and probably another hundred that my wife either hasn’t watched yet or wants to keep. Most of these files are in the 2 GB range. Normally, this isn’t a problem, since you aren’t copying them all over the place. If you do need to copy them off for any sort of system maintenance, though, it can be a problem.

I recently went through this myself, copying literally hundreds of files across the network to my new ReadyNAS NV+, then later copying them back, and it took SOOO long. I’m talking hours and hours.

To illustrate why, I did a bit of math:

The theoretical maximum speed of 100 Megabit Ethernet (with absolutely no protocol overhead) can be calculated this way:

100 Megabit = 100 million bits per second = 100,000,000 bits per second
There are 8 bits in a byte, so 100,000,000 / 8 = 12,500,000 bytes per second
Convert bytes to Megabytes:
12,500,000 / 1024 / 1024 = 11.92 MB/second
11.92 MB/second = 715.2 MB/minute = 41.9 GB/hour

So, without the overhead of any protocol at all, the maximum speed you could hope for with 100 Megabit Ethernet is about 41.9 GB/hour.
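
To get a feel for what that ceiling means for a PVR library, here’s a rough back-of-the-envelope sketch in Ruby. The counts are the approximate ones mentioned above, not exact figures:

# Rough best-case copy time at the 100 Megabit ceiling.
shows         = 200      # ~100 saved for the kids + ~100 my wife is keeping
gb_per_show   = 2.0      # most recordings are in the 2 GB range
ceiling_gb_hr = 41.9     # theoretical max for 100 Megabit Ethernet

total_gb = shows * gb_per_show
hours    = total_gb / ceiling_gb_hr
printf("%.0f GB takes at least %.1f hours\n", total_gb, hours)
# about 9.5 hours, and that is before any protocol overhead or disk bottlenecks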

That’s why copying over a network is so slow: because I have GINORMOUS files…

June 1, 2007 at 1:31 pm Leave a comment

