Adventures in Mac Data Recovery

A few days ago, I attempted to update my wife’s aging iMac (2009 era) to High Sierra.  I ended up at a screen telling me something about a disk error.  After that, the machine seemed to be stuck in a boot loop, simply returning to a similar screen with a slightly different message each time.

I found some CLI commands to make my own USB installer.  When I booted from it, I ended up with a message saying it could not install.  Thinking that the internal hard drive might be failing, I attached an external drive and tried to install to that instead, ending with the same failure.  At some point in this process I made the colossal mistake of formatting the internal hard drive.  We can get back to that later.
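For reference, the usual command-line way to build a bootable High Sierra installer is Apple’s createinstallmedia tool inside the installer app.  A minimal sketch, assuming the installer app is in /Applications and the USB stick is mounted as /Volumes/MyUSB (both names are placeholders for your setup):

    # note: this erases the target USB volume
    sudo /Applications/Install\ macOS\ High\ Sierra.app/Contents/Resources/createinstallmedia \
        --volume /Volumes/MyUSB \
        --applicationpath /Applications/Install\ macOS\ High\ Sierra.app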

I knew I needed to get my wife a new machine, as this one is quite old and the next release of Mac OS X won’t support this hardware.  Fortunately, Best Buy had a sale on the higher-end iMac with the smaller screen, which was just what I thought she needed.

Got it home, booted it up, and connected to my NAS to restore her Time Machine backup.

Uh oh.  It saw her backup there, but it said something about No Volumes.  I tried booting from the same USB stick to reinstall her new iMac, and ended up with the same sort of generic failure messages I had been getting ever since the original disk error on the first iMac.

Apparently, my installer was bad.

I moved her old iMac to my desk and ran a variety of tools to scan the hard drive, trying to recover whatever data I could.  The one that seemed to get the best results was Disk Drill, which appeared to recover the HFS directory structure and everything.  About $80 later, I could try restoring it.  Unfortunately, it was unclear whether it could simply restore the files to their original locations, so I tried having it restore to an external drive.  About 30 GB into the copy, it seemed to hang.  The iMac was still working, but no more data seemed to get moved.  I thought that perhaps I should try another method.

While researching my problem, I had previously found a link that talked about fixing Time Machine backups.  This involved running some CLI commands that seemed likely to break things, so I took a few minutes to figure out how to back up my Time Machine sparsebundle.  After looking around a bit, I found a page recommending SuperDuper for the task.  Using SuperDuper, I created a new sparsebundle as the destination and let it copy.  I think somewhere north of 19 hours later, it was done.

I followed the steps found on this blog entry on my copy of the data, leaving the original unaltered.
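I won’t repeat the linked post here, but the widely circulated repair procedure for a NAS-hosted Time Machine sparsebundle looks roughly like this (the paths and the disk identifier are examples, not the exact ones from my run):

    # clear the immutable flag so the bundle can be modified
    sudo chflags -R nouchg /Volumes/TimeMachine/hostname.sparsebundle
    # attach the bundle without mounting, verifying, or auto-checking it
    hdiutil attach -nomount -noverify -noautofsck /Volumes/TimeMachine/hostname.sparsebundle
    # repair the HFS+ volume on the slice that attach reports (e.g. /dev/diskXs2)
    sudo fsck_hfs -drfy /dev/diskXs2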

Everything went well for the repair portion, but the final steps involved editing a .plist file that should have been sitting in the root of the sparsebundle, and for some reason it was missing.

So, I tried running the repair steps on the original Time Machine backup.  It failed.

In a last-ditch effort to get Time Machine working, I copied the com.* files from the original TM backup over to my SuperDuper copy.  I figured that since the repair had worked on the copy, perhaps I could just take the files that didn’t seem to get copied, move them over as well, and finish the process.

I built a new Mac OS X installer (using this great little tool that I sorta wish I had found originally), reinstalled OS X on the new iMac, then tried the TM restore.  I pointed it to the new TM backup I had made and was happy to see that it found it, and that it saw backup data there.  I started the restore process, which completed successfully about 3 hours later.

I rebooted, and it came up and worked.  I was able to log in to my wife’s account, and her data seemed to be intact.

I have since set up SuperDuper to clone her drive on a schedule to an external drive.  I’ll probably start the Time Machine backup process as well in a few days or so, once we feel secure that her files are fine, so she’ll have a few backups just in case.


June 15, 2018 at 12:50 pm

Channels DVR

Many years ago, I used SageTV for my DVR.  After it was sold to Google, I tried MythTV with some success.  I’ve since used Plex briefly, and the HD Homerun DVR for several months.  Recently I tried SageTV again, the open source version.  It’s still very much like it was years ago, though the AndroidTV app works very well now.  Controlling the app with the ShieldTV remote could be better, though.  There’s also the lack of an iOS version, and I don’t think it plays well with other apps sharing HD Homerun tuners, though this may have been addressed.

Plex has a few issues.  No grid guide is a big one.  Another is that you can’t watch a show while it’s still recording.  Commercials are removed from recordings, not just marked.

While looking at a Plex forum, I read about Channels, an iOS app for watching TV with an HD Homerun tuner, which also has a DVR component.  I had seen this app before but didn’t try it due to the cost.  After reading more about it, I found that it gets rid of all my complaints about Plex.  It also allows for remote connections, so you can even stream TV from it remotely, like Plex.

The cost is a bit steep: $24.99 for the AppleTV app, plus another $14.99 for the iPhone/iPad app.  I think there are similar prices for the AndroidTV, Amazon FireTV, and other versions.  The DVR feature is $8 per month after your first month.  So it’s a bit expensive, but it’s very good (I think the Android versions have varying levels of support for the DVR feature).

Regardless of the price, I can say this is about the best DVR experience I’ve run across yet on AppleTV.  The user interface is very intuitive.  It integrates well with the AppleTV, including support for adding shows to the top bar on the AppleTV home screen.

The DVR component runs on just about anything, even the ShieldTV.  It can use the hardware acceleration on the ShieldTV for transcoding, and if your processor supports Quick Sync, that is supported as well.

I’ve chosen to run mine in a Docker container on UnRAID.  It works very well, even when transcoding.
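As a rough sketch, standing up the DVR server under Docker looks something like this.  The fancybits/channels-dvr image is the publicly available one; the host appdata path and the /channels-dvr data path inside the container are assumptions here, so check the image documentation before copying this verbatim:

    # host networking keeps HDHomeRun discovery working on the LAN
    docker run -d \
      --name channels-dvr \
      --network host \
      --restart unless-stopped \
      -v /mnt/user/appdata/channels-dvr:/channels-dvr \
      fancybits/channels-dvr:latest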

It’s still early days of my trying it, but I’m pretty happy with it so far.

May 20, 2018 at 8:22 pm

Running a PA-VM on KVM under UnRAID

Getting PanOS up and running on KVM under UnRAID was not easy.

I started with the KVM version of the PA-VM firewall and copied that to my ISOs directory.

Through trial and error, I finally got it working.  In the UnRAID UI, I selected two separate CPUs, about 9.5 GB of memory, and the i440fx-2.11 machine type, with SeaBIOS and the USB controller left at the default 2.0 (EHCI).

I manually copied the image to the appropriate directory for this VM.  I did have to add multiple brX interfaces (via the network settings page) to be able to add multiple NICs to the VM.

When booting up, I ran into a problem where the VM would reboot multiple times on its own.  Finally, I was given the maintenance prompt.  I did a factory reset, and afterward it booted normally.  At that point, I could log in and set the management IP address, and everything seemed to be working as expected.

Posting here to help anyone else who wants to try running a PA-VM on UnRAID.

Edit:  Also, set the NICs to type e1000 in the XML (not in the form).  I think VirtIO is supposed to be supported, but the interfaces didn’t seem to work until I set them to e1000.
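For reference, a sketch of what that looks like in the VM’s libvirt XML; edit each <interface> block that the UnRAID form generated (the bridge name br1 is just an example):

    <interface type='bridge'>
      <source bridge='br1'/>
      <!-- use the emulated Intel NIC instead of virtio -->
      <model type='e1000'/>
    </interface>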

May 8, 2018 at 2:41 pm

Adventures in DNS

I just posted about my new PA-220 firewall and mentioned URL filtering.  I have a number of categories blocked, including web-advertising, adult content, malware, etc.  But you can always make something better, right?

The PA-220 has a feature to enforce safe search with various search engines.  Unfortunately, it seems to not work very well on my iPhone, or in Safari on my Mac.  It could be the 8.0.2 firmware, or perhaps it’s something that I’m doing wrong.  In any case, I wanted to fix it, as it was annoying.

Both Google and Bing support a feature to enable Safe Search for your network via DNS.  What you have to do is, when someone requests google.com, make your DNS return a CNAME record for forcesafesearch.google.com.  While this might sound easy, as I discovered, it’s a bit more complex than perhaps it should be.
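In record terms, the goal is for lookups of the search hostnames to come back as CNAMEs pointing at the SafeSearch endpoints, roughly like this (strict.bing.com is Bing’s equivalent):

    www.google.com.   IN  CNAME  forcesafesearch.google.com.
    www.bing.com.     IN  CNAME  strict.bing.com.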

First, the DNS proxy feature in my PA-220 does support configuring static entries, so I could add an entry for www.google.com, but I can’t point it at a CNAME, only at IP addresses.  I would have to hard-code the IP address for forcesafesearch.google.com, which could change at any time, breaking things.

After a bit of research, I found my first candidate for actually doing the CNAME rewrite.

DNSmasq

On my UnRAID box, I installed a Docker container of Pi-Hole, which is a DNS-based system (meant for the Raspberry Pi, but capable of running on other platforms) that blackholes DNS queries to web advertising sites, etc.  It uses DNSmasq and has the ability to run DHCP as well as DNS.  With this integration, it can resolve local hostnames to their DHCP-assigned addresses.  I could do that now by adding static entries to my DNS Proxy instance on the PA-220, but it wouldn’t pick up on DHCP entries.  But, alas, DNSmasq treats a manually added CNAME entry differently than I had hoped.  It will ignore the entry unless the target is a record it has defined somewhere, such as a static definition or via DHCP.  It won’t resolve an external CNAME target like a normal query and return it.  And defining forcesafesearch.google.com as a static A record in DNSmasq would really defeat the whole purpose of using the CNAME.
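For what it’s worth, the DNSmasq syntax itself is trivial; the catch is the resolution behavior described above.  A sketch:

    # /etc/dnsmasq.d/safesearch.conf (sketch)
    # dnsmasq only honors cname= when the target is a name it already knows
    # locally (from /etc/hosts, DHCP leases, or another cname), so an external
    # target like forcesafesearch.google.com is effectively ignored.
    cname=www.google.com,forcesafesearch.google.com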

Pi-Hole does have a very nice modern web interface with statistics and graphs, and it looks extremely easy to whitelist or blacklist sites.  It gives you great visibility into which devices on your network are doing the most DNS lookups, and if you are wondering where your IoT devices go on the Internet, you can even filter the logs to see what an individual device is performing lookups against, assuming you have all your devices querying Pi-Hole directly instead of chained like I’m doing here.  In fact, you can even disable the blocking functionality if you like.  With it disabled, it won’t block, but you’ll still get all the statistics and logs it has to offer, including what it would have blocked.  Today, it has blocked about 8.8 percent of my DNS queries, though I haven’t really noticed much difference compared to when I simply go through my PA-220.

Dingo

While looking for other DNS packages that could do this CNAME trick, I ran across one that looked very interesting for a different reason.  Dingo is effectively a DNS resolver that takes requests in on port 53 and resolves them over encrypted HTTP/2.  It can be used with both Google and OpenResolve (by OpenDNS).  I installed it as another Docker container and it seems to work fine.  I did increase it to use 25 worker threads instead of the initial 10.  I don’t know if I’ll keep using this or not, but I’ll see how it goes.

Bind

Other research turned up some settings for Bind that would let me add the CNAME records I needed for Google and Bing to enforce safe search, and yet another Docker container was installed.  The one I chose included Webmin for easy administration of Bind.  It worked just fine.
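I won’t reproduce the exact settings I used, but one common way to get this behavior out of Bind is a response policy zone (RPZ) that rewrites the answers for just these names, along these lines (the zone and file names are examples):

    // named.conf (sketch)
    options {
        // rewrite answers using the policy zone below
        response-policy { zone "safesearch.rpz"; };
    };

    zone "safesearch.rpz" {
        type master;
        file "/etc/bind/db.safesearch.rpz";
    };

    ; /etc/bind/db.safesearch.rpz (sketch)
    $TTL 300
    @               IN SOA  localhost. root.localhost. ( 1 3600 600 86400 300 )
                    IN NS   localhost.
    www.google.com  IN CNAME forcesafesearch.google.com.
    www.bing.com    IN CNAME strict.bing.com.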

So, now I have the initial DNS queries pointing to the PA-220, taking advantage of the Threat/URL Filtering there, which forwards to a Docker container running Bind to handle the Google and Bing domains, which forwards to Pi-Hole (which I may end up removing from this chain), and finally to Dingo to perform the actual DNS lookups over encrypted HTTP/2.

Whew!

That sounds like a lot, but not including the PA-220 (which was doing this job before), I’ve added three hops that all exist on the same box.

May 21, 2017 at 7:48 pm

The PA-220 Firewall is here!

The PA-220 has 8 ports of Gigabit goodness on the front, aside from the management port.

The PA-220 supports some pretty high-end features, making it suitable for use in a small business office.  First, there is High Availability (HA) mode, if you have a pair of PA-220s and duplicate your connectivity (even to your WAN, so you’d need a switch between a Cable/DSL modem and the pair of firewalls).  Another big feature is LACP (Link Aggregation Control Protocol) support, so you could have multiple connections between your firewall and an Ethernet switch.  This redundancy is something small offices will likely want, as when the WAN connection is down, there is probably work that can’t be done.

The PA-220 comes with a template and hardware to mount it sideways on a wall, something that I plan to do at some point but haven’t gotten around to yet.

Since the PA-220 is limited to about 500 Mbps of firewalled throughput, and down to about 150 Mbps with Threat enabled, I recommend only putting relatively low-speed or low-volume devices directly on the ports of the firewall itself, if the primary thing they communicate with is also on the local LAN.  You could always add a rule allowing intrazone traffic with no Threat profiles attached, giving you the maximum 500 Mbps speed to the internal network.
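As a sketch, that kind of rule could be added from the PAN-OS CLI in configure mode roughly like this (the rule and zone names are placeholders, and the same thing is easy to build in the web UI instead):

    set rulebase security rules Allow-Intrazone rule-type intrazone from LAN to LAN source any destination any application any service any action allow
    commit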

I’ve got it in place, doing SSL decryption, Threat, URL filtering, Wildfire, and GlobalProtect VPN.  It seems to perform pretty well so far.

May 21, 2017 at 11:20 am

UnRAID experiences

Recently, NewEgg had a deal on an HP ML10 v2 server for about $170 after rebate.  It included an i3 processor at 3.5 GHz, a 500 GB hard drive, and 8 GB of ECC RAM.  I had a hard time passing up that good of a deal, so I didn’t.

After playing with VMware ESXi 6.5 on it for a bit, I decided to try UnRAID.  I was interested in using Docker on it, something I have dabbled with on my Synology.

Having used UnRAID for more than a week, I think I’m about ready to get rid of my NAS and use this instead.

The initial setup was easy.  I loaded the software on a USB drive, put several low capacity drives in it (largest being 1 TB) and created the array using the web interface.  It began the parity process and I started setting up shares and using it.

Let me explain a bit about how UnRAID works.  It’s not your traditional RAID array.  You basically put in whatever disks you want, select the largest one as the Parity drive, and start using your array (there are some WebGUI steps involved, but it’s very easy).  I understand that you can even take drives that already have data on them (in a format UnRAID uses), and that data is preserved, with the exception of the parity drive.  With UnRAID, you get the advantage of parity protection, so if a single disk dies, just replace it and it rebuilds.  If there’s a problem with more than one of your disks at once, you only lose data on the failed drives.  Your remaining working drives have all their data intact.
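The single-parity scheme is just bitwise XOR across the data disks, which is why any one missing disk can be reconstructed from the rest.  A tiny worked example with three data disks and four bits each:

    disk1:  1 0 1 1
    disk2:  0 1 1 0
    disk3:  1 1 0 0
    parity = disk1 XOR disk2 XOR disk3 = 0 0 0 1

    # if disk2 dies, its contents come back from the survivors:
    disk2 = disk1 XOR disk3 XOR parity = 0 1 1 0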

Another difference is the way shares work.  It has your traditional disk-based shares, where you add a share for an individual disk and write files to it the usual way, with parity info maintained on the parity drive so your data is safe should that disk fail…  And it has what it calls “user” shares.  These shares span your disks.  So, you might have a media share, for example.  You copy a video over to it, which gets dropped on disk 1.  Later, you copy another video, and it gets dropped onto disk 2.  When you view a directory listing of the share, though, you see a single view with all the files presented as if they were in a single structured set of folders, so you don’t have to know which disk a specific file is on…  UnRAID tracks that for you and presents it all as if it’s a single, large share.

Anyhow, over the next few days, I set up three Time Machine shares, along with a couple others and copied over the majority of the data from my NAS to it.  (I have not been storing nearly as much on my NAS recently, having cleaned off tons of media some time ago.)

The Docker container functionality is great.  You can load a Docker container from a template, so there’s not much to do but point and click, though you may have to type in a path or two here or there.  Think of them sort of as plug-ins or apps – there’s a Plex container, MythTV, SageTV, and many, many more.

After the initial parity calculation was done, I moved my 4 TB drive from my NAS over, replacing the parity drive in UnRAID.  It rebuilt the parity info after I adjusted the config in the WebGUI.  Then I swapped another drive with a 3 TB drive and let it rebuild that, and I’ve since done the same with yet another 3 TB drive.  At this point, only one of the original hard drives is in the array.

And, I actually want to remove that last 750GB drive from the array.  With traditional RAID, that’s pretty much a no-go.  With Synology’s hybrid RAID, or a Drobo’s approach to RAID, I think you have to stay with the same number of disks in the array, short of copying all the data off and recreating the array fresh with fewer disks.

With UnRAID, though, I’m now copying all the data from disk 3 to disk 2 using a simple rsync command.  Afterwards, according to what I’ve read, I can simply remove the disk, then create a new array with one less disk and it will recreate the parity information.
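The copy itself is nothing fancy.  UnRAID mounts each data disk under /mnt, so from the console or an SSH session it’s roughly this (the exact flags are my choice, not gospel):

    # copy everything from disk 3 to disk 2, preserving permissions and
    # extended attributes; -P shows progress and lets the copy resume
    rsync -avPX /mnt/disk3/ /mnt/disk2/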

Why would I want to do this, you may wonder?  To add a cache drive.  UnRAID lets you add a cache drive (an SSD, or perhaps just a 10K RPM or 7200 RPM drive), and set up your shares to take advantage of the cache drive.  When data is written, it goes to the cache drive, and at 3:40 AM, data is moved off the cache drive to the other drives in the array, at which time parity info is calculated.

Now, if you run a business and keep critical data on UnRAID, you shouldn’t entrust the safety of your data to a single cache drive, as data sitting on the cache drive isn’t parity-protected until the nightly move, so there is the potential to lose whatever has been written to the cache drive that day.  But if you are a home user, mainly using it for entertainment purposes, you can probably take the chance for the performance improvement (especially with an SSD cache drive).

Although I’m still within my first 30 days of using UnRAID, it’s safe to say I’ll be buying it soon.

April 25, 2017 at 8:55 pm

Palo Alto PA-220

About a month ago, Palo Alto announced their new 8.0 firmware, along with some new hardware.  The most exciting new product to me, personally, is their new PA-220.

The PA-200 is a unit I have a lot of experience with.  It’s got 4 Gig ports for traffic, supports 100 Mbps of firewall throughput, dropping to 50 Mbps with Threat prevention enabled.  It’s a good unit for a small office.

The PA-220 is better, sporting 8 Gig ports for traffic and 500 Mbps of firewall throughput, dropping to 150 Mbps with Threat enabled.  It is fanless, and since it uses eMMC for storage (32 GB), there shouldn’t be any moving parts to break down.

Basically, it’s got more power than a PA-500, the same number of ports, and it’s in an even smaller package than the PA-200.

Best of all, it’s at a much better price point than the PA-200.

March 7, 2017 at 11:20 pm


