How To

To split or not to split your Mac’s Fusion Drive

In a recent post, I wrote about upgrading the original (and failing) blade SSD in my iMac to a bigger and faster NVMe module. During that upgrade process, I wondered whether splitting my Mac’s Fusion Drive would result in better performance, but decided against it for simplicity’s sake.

Even though I decided against splitting my Fusion Drive at that time, I read articles that advocated for it and suggested even better performance was to be had by allowing the SSD and HDD to run as separate volumes. The idea is to install the OS and select files and folders on the SSD, with the bulk of the files on the HDD. For the sake of experimentation and learning something new, I decided to tinker with my iMac and see if I could squeeze out some extra speed.

For those who are wondering what I’m talking about, Fusion Drive is an Apple technology built into macOS that creates what is essentially a hybrid drive, by combining an SSD module (NAND flash) with a traditional HDD (platter drive) and presenting the two as a single volume to the user. The protocols that govern the data I/O are called Core Storage. Apple writes: “Presented as a single volume on your Mac, Fusion Drive automatically and dynamically moves frequently used files to flash storage for quicker access, while infrequently used items move to the high-capacity hard disk. As a result, you enjoy shorter startup times and — as the system learns how you work — faster application launches and quicker file access.”

I’ve been using Fusion Drive since it came out, retrofitting my iMac at the time with a new SSD and thus making it run faster than its original specs. I love this technology, because it offers significant performance improvements for a fraction of the cost of a large SSD, which was quite expensive a few years ago.

The long and the short of it is that it’s not worth it to split your Mac’s Fusion Drive. If you’re currently running Fusion Drive on your Mac, keep doing that; you won’t see any significant performance improvement if you split it. In fact, some things may run slower than before, and you’ll also have to deal with a few inconveniences, as detailed below.

I’ll present both scenarios here and you can decide for yourself. There are multiple ways to go about this; these are the methods I chose. The number of Terminal commands you have to run for either scenario is minimal, and most of the time involved goes to backing up your computer, waiting for the OS to reinstall and restoring your data from backup. For example, if you’ve got a 3TB drive at about 50-60% usage (and you should be at that threshold or lower on any hard drive), figure on 4-5 hours for either of the two scenarios.

How to split your Fusion Drive

First and foremost, did you back up your computer? If you did, go ahead and create a bootable installer drive using Apple’s instructions, then boot into it by pressing the Option key as soon as your Mac restarts and holding it down until the list of startup disks appears. You need to boot from a separate drive because you’ll be deleting your internal drives entirely, including the boot and recovery partitions.

Once you’re in, open Terminal and get a listing of your disks and volumes.

diskutil list

Your Fusion Drive presents itself as a logical volume group that appears as a separate disk with an HFS+ or APFS partition. If your SSD is disk0 and your HDD is disk1, your Fusion Drive will be disk2 or disk3. In my case, it was disk3 (disk2 being the bootable recovery drive). Now unmount your internal disks.

diskutil unmountDisk disk0

diskutil unmountDisk disk1

You’ll want to delete the entire disk containing the Fusion Drive. Be forewarned, this deletes all your data. Did you back up your computer?

diskutil apfs deleteContainer disk3

Now that Fusion Drive has been nuked, you’ll still have your separate drives that you’ll want to make sure are erased. The eraseDisk command requires that you offer a new name for each disk, so I chose to name them SSD and HDD, to keep things simple.

diskutil eraseDisk JHFS+ SSD disk0

diskutil eraseDisk JHFS+ HDD disk1

Now you’ll want to do a fresh install of macOS onto the SSD. After that’s complete, boot into your fresh install and go to Utilities > Migration Assistant to do a selective data restore. Here you’ll have to decide for yourself, based on the total size of your SSD and your data set, how much of it you want to restore onto the SSD. The rest you’ll need to copy manually from the backup drive onto the HDD. In my case, I restored my user settings and the system and library folders onto the SSD, and I copied the following folders onto the HDD: Documents, Downloads, Movies, Music, Parallels (in case you’re running some kind of VM software) and Pictures. Each of those folders was too big to keep on the SSD, even though I have a 512GB module (remember the rule about keeping your drive at or below 50-60% usage).
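The manual copy step can be scripted if you prefer. Here’s a minimal sketch; the backup and destination paths are placeholders for your own backup volume and data drive, and the folder list is the one from my setup, so adjust both to yours:

```shell
# copy_folders SRC DEST: copy each of the big folders from the backup (SRC)
# to the data drive (DEST), skipping any folder that doesn't exist in SRC.
copy_folders() {
  src="$1"; dest="$2"
  for folder in Documents Downloads Movies Music Parallels Pictures; do
    # -R copies recursively, -p preserves permissions and timestamps
    [ -d "$src/$folder" ] && cp -Rp "$src/$folder" "$dest/"
  done
  return 0
}

# On the Mac, after booting the fresh install (both paths are placeholders):
# copy_folders /Volumes/Backup/Users/yourusername /Volumes/HDD
```

rsync would also work here and can resume an interrupted copy, but plain cp is enough for a one-shot transfer.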

Once you complete all that work, you’ll need to create links to these folders on the HDD in place of your folders on the SSD. Mojave won’t let you do this when you’re logged into your account, so you’ll need to boot up into recovery mode and open Terminal once more.

Go to your home folder on the SSD.

cd /Users/yourusername

Delete the folders that are now present on the HDD. You’ll need to do this for each folder that you’ve moved there. Hopefully you’ve written down their names ahead of time.

sudo rm -rf foldername

In your home folder on the SSD (same location as above), make links to the folders on the HDD. I chose to put mine at the drive’s root level. You may choose to put them in a folder; just don’t give it the same name as your username, as I hear that may cause problems. You’ll need to do this for each folder.

ln -s /Volumes/HDD/foldername
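If you moved several folders, the delete-and-link dance can be done in one loop. A sketch, assuming the same folder names as in my setup and a data volume named HDD (both placeholders); in Recovery mode Terminal already runs as root, so sudo isn’t needed there:

```shell
# link_folders HDD_PATH: for each moved folder, delete the leftover copy in
# the current directory (your home folder on the SSD) and symlink the copy
# on the data drive in its place.
link_folders() {
  hdd="$1"
  for folder in Documents Downloads Movies Music Parallels Pictures; do
    rm -rf "$folder"                # remove the duplicate left on the SSD
    ln -s "$hdd/$folder" "$folder"  # point the name at the HDD copy instead
  done
}

# On the Mac, from /Users/yourusername (placeholder):
# link_folders /Volumes/HDD
```

Double-check the folder list before running it; rm -rf on the wrong name is unrecoverable.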

That’s it, restart and use your computer. However, you may find a few inconveniences — these are the ones I experienced:

  • I noticed no performance improvements. There wasn’t even an improvement in the bootup time. Nothing, nada, zilch.
  • While apps may open faster, if they’re still accessing files on the HDD, editing will still be sluggish. In order to see the much-touted SSD performance boost, both the apps and their files need to be on the SSD.
  • In my case, I had to keep the Photos library on the HDD, because it was too big to keep on the SSD, and while Photos may have opened up fast, loading up the library took forever, until enough of the recent photos were cached on the SSD to allow me to work with my library. So things were a LOT slower with this app.
  • I kept my mailboxes on the SSD so I was hoping for better performance from Mail, but I didn’t get it. I have a lot of mail stored locally, so in theory, things should have worked faster because everything was on the SSD, but they didn’t. I also experienced odd issues, like when moving messages between mailboxes, it took a lot longer and sometimes didn’t register. I’d drag and drop them, then come back to the app a little while later and find them in the same place, just as if I hadn’t moved them.
  • iCloud would display an odd notification icon, but when I’d go into it, there was no message. This icon was displayed continually for as long as my Fusion Drive was split. See the screenshot below.
  • While Time Machine will back up both internal drives, a restore will only bring back the files from the SSD. I don’t know why and I don’t know how to fix that, so keep this limitation in mind. You can go into the Time Machine drive manually and copy the files over afterward, but if you run a restore operation on your computer and wonder where most of your stuff is after it’s completed, don’t freak out; just know you’ll need to get it manually from the drive.
See that “1” over iCloud? It was there all the time.
This is the kind of performance the SSD provided when my Fusion Drive was split. It looks impressive, but hold on until you see the same test with Fusion Drive enabled, later down in this post.

How to enable your Fusion Drive

After about a week of running my Mac with a split Fusion Drive, I’d had enough and decided to re-enable it. Here’s how I did it. Before you proceed with this, I’ll ask you again, did you do a full backup of your computer? This will wipe all your data.

Using the same bootable drive, I booted into it and opened up Terminal. Since you’ll be wiping all your internal drives again, you need to be booted from an external drive.

Apple recommends this single Terminal command that is supposed to do everything in one fell swoop. It didn’t work for me, perhaps because my SSD module was a newer NVMe running off an adapter card, not the Apple-approved blade SSD manufactured specifically for this kind of thing.

diskutil resetFusion

I had to do it with a few more commands. First, find out your disk IDs.

diskutil list

Now unmount your internal disks.

diskutil unmountDisk disk0

diskutil unmountDisk disk1

Then create a merged virtual hard drive with Core Storage.

diskutil coreStorage create Macintosh\ HD disk0 disk1

Now get its logical volume group name (the very long alphanumeric string that appears in Terminal after you type this command).

diskutil coreStorage list

Now format and create the JHFS+ volume that will run Fusion Drive.

diskutil coreStorage createVolume yourlogicalvolumegroupname jhfs+ Macintosh\ HD 100%
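If you’d rather not retype that long UUID by hand, you can pull it out of the listing with awk. A sketch: the listing and UUID below are made-up stand-ins for what `diskutil coreStorage list` actually prints, and on your Mac you’d pipe the real command instead of the sample text:

```shell
# Made-up sample of `diskutil coreStorage list` output, for illustration only.
sample='CoreStorage logical volume groups (1 found)
|
+-- Logical Volume Group 11111111-2222-3333-4444-555555555555
    =========================================================
    Name:         Macintosh HD'

# On a real Mac: LVG=$(diskutil coreStorage list | awk '...')
LVG=$(printf '%s\n' "$sample" | awk '/Logical Volume Group/ {print $NF; exit}')
echo "$LVG"

# Then feed it straight to the next command:
# diskutil coreStorage createVolume "$LVG" jhfs+ Macintosh\ HD 100%
```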

Don’t worry about formatting the drive as APFS; the Mojave installer will convert the volume automatically during installation. That’s it. Now quit Terminal and do a full restore from Time Machine, but prepare yourself for an incomplete data restore (see the reasons given in the previous section). Once the restore is complete, you’ll need to manually copy the missing folders from the Time Machine drive. Or, as I did, you can do a full restore from a backup set that existed before you split your Fusion Drive, which means you’ll get all your old data back in all the right places, but you’ll still need to get your newer files manually from the Time Machine drive.

In my case, I needed to copy the mailboxes, which are located in ~/Library/Mail/V6 from the newest backup set (the one with the split drive) to my computer, and that gave me all my mail, including the interim stuff. I also copied the latest Photos library, and that gave me all my photos, including the interim stuff. Then I went through the Documents and Downloads folders on the Time Machine drive, sorted by date modified and copied the interim files onto my computer. I didn’t need to go through the other folders because I knew I hadn’t worked on other stuff. And once I did this, my data restore was complete. Mail and Photos still needed to rebuild their libraries though, and that took a while.

And because I use Backblaze to backup my computer offsite, I also needed to uninstall and reinstall that, then inherit a previous backup state (don’t worry about this if you’re not using Backblaze).

When that was done, Backblaze told me it had “made” my computer inherit my backup state, as if it had forced it to do this, in a non-consensual way. Kind of a funny way to word things, but their service works well.

Here’s the kicker. I ran another drive performance test after all this, and these were the results.

Actually a little bit faster than before 🧐

Everything runs fast now, and it runs as expected, without hiccups.

As I said at the start of this article, if you’re already running Fusion Drive, do yourself a favor and leave it running. You’ll avoid headaches you don’t need, unless you like complications.

The only way I can see to speed up my iMac even more is to purchase a large 3-4TB SSD and run it as my only internal drive. That might be a little faster. But as you can see from the test screenshot shown above, my iMac is no slouch right now. And 4TB SSDs are still fairly expensive. It might actually be cheaper (and possibly faster) to get a 2TB SSD and a 512GB NVMe module and run them together as a Fusion Drive, although the overall capacity wouldn’t be the same. Food for thought.

Reviews

Data loss on the Drobo 5D

I bought a Drobo 5D on the 29th of December, 2012, after experiencing catastrophic data loss with the 1st Gen and 2nd Gen Drobo. During multiple phone conversations with Data Robotics’ CEO at the time, Tom Buiocchi, he convinced me that the new units were much better engineered than previous-generation Drobos, with built-in batteries and circuits that would automatically shut them down safely in case of power loss. I was also told the new firmware running inside them would check the data constantly to guard against file corruption or data loss. All of these were problems I’d experienced with my existing Drobos, so even though I was exhausted after my ordeal and weary of storage technology, I went ahead and purchased the new model, and also agreed not to publish an account of what the Drobos had done to my data at the time. I want to make it clear that I paid for my Drobos, so I didn’t feel that I owed him anything, but I did want their company to do well, because back then they were new and deserved a second chance. Now, though, there is no excuse for the multiple times their Drobos have lost my precious data. They’ve been around for 13 years and have had plenty of time to make their technology stable.

What’s probably kept them on the market is the willingness of paying customers like me to take a chance on the uniqueness of their proprietary RAID: as far as I know, they are (unfortunately) the only RAID array that lets you store a large amount of data on a single volume that grows automatically as you add drives and also protects (except when it doesn’t) against hard drive failures.

However, after five years of using my Drobo 5D on a daily basis, I can tell you without any doubt that the Drobo 5D does not keep your data safe. Look elsewhere for safe data storage devices. I certainly cannot trust it with my data anymore, so I’ve elected to publish my account of data loss from 2012, as well as an account of my present data loss. Caveat emptor, lest you also lose your data. I’ve also had multiple problems with my two Drobo 5Ns. Because of these problems, some of which have led to significant data loss and to significant time and effort expended in order to restore my data from on-site and off-site backups, I cannot place my trust in the new Drobo models that are available, either: I’m talking about the 5D3, 5N2 and the 8D. I see no reason at all to spend more money on more empty promises from Data Robotics.

I can’t say exactly what happened with my Drobo 5D. Drobo Support could not or would not tell me, even though I sent them multiple diagnostic logs from the Drobo and asked them what happened. My best guess is that the 5D kept “healing” a 6TB WD drive with bad sectors instead of asking me to replace it. Then a different drive in the array failed. Once the Drobo told me to replace that drive, I did. But during the process of rebuilding the data set, the Drobo 5D decided it didn’t like the 6TB WD drive it had been healing, and told me I needed to replace it now. When I did, it told me I shouldn’t have taken it out and should put it back in. I’d put it back in, it’d go through its internal processes, only to decide it didn’t like that drive after all. And it didn’t stop there: it’d reboot.

At first this reboot cycle would take 10-15 minutes, allowing me to copy some data off it, but then it began rebooting every 5 minutes. Since it takes a good 3-4 minutes to boot up, this meant I had only 1-2 minutes to copy data off before it rebooted again. This was not workable. After I opened a case with Drobo Support, they told me to put it in Read Only mode: you press Ctrl+Opt+Shift+R while you’re in Drobo Dashboard, and this reboots the unit in a mode where it won’t try to rebuild any data internally; it simply presents the volume to you as it is. This also turned sour quickly, because after allowing me to copy a small amount of data for a few hours, it began the same 5-minute reboot cycle. So I had no way of getting the data off the damned thing unless I mounted it through Disk Warrior and used the Preview app built into that software, which meant putting up with USB 1.1 transfer speeds. More on the reduced transfer speeds below.

My take on the situation is that it’s a failure in Drobo’s firmware design. It should have asked me to replace the 6TB WD drive instead of working around its bad sectors. Because it didn’t ask me to replace it in time, it then failed when rebuilding its data set after the second drive went bad. That’s not two drives going bad at the same time, that’s a drive going bad and a few weeks later another drive going bad. The Drobo had plenty of time to fix the ongoing situation if its internals had been programmed correctly, but it didn’t, because of inadequate firmware running the device. That’s bad technology at work, causing me repeated data losses.

The 6TB WD drive had 9 bad sectors, but the Drobo 5D kept insisting on healing it until it was too late.

Here’s another example of Drobo’s crappy firmware: for the past three years, I have had to force my iMac not to go to sleep, because every single time I’d wake it up, the Drobo 5D would refuse to mount, forcing me to reboot the iMac and/or the Drobo and disconnect/reconnect the Thunderbolt cable in order to get the computer to see it. Data Robotics tried to fix this horribly annoying problem (which can also cause data loss) through multiple firmware updates, but I can safely tell you that they still haven’t fixed it. Before I stopped using my Drobo 5D, I was on Drobo Dashboard 3.3.0 and Drobo 5D firmware 4.1.2, with my iMac on macOS Mojave 10.14.2, and the problem still very much occurred. Well, it definitely occurred before my 5D crapped itself. Oh, how I’d like to be back to those simpler times when all I had to deal with was keeping my iMac from going to sleep! But no, now I have to deal with massive data loss, once again. For the goddamned umpteenth time, Drobo!

Except I didn’t cause the disconnect; Drobo’s crappy firmware did, over and over and over…

What do you think would happen after enough improper disconnects? That’s right, volume corruption!

Look at the difference in size between the two directories. The first is a backup, the second is supposed to be the working directory. That’s the Drobo 5D, actively losing your data…
Only 54,890 missing photographs? Thanks, Data Robotics! (This was in 2017.)

Here’s more proof of data loss from 2017. Those yellow triangles are indications of missing video clips.
Here’s proof of more data loss from 2015, when even Disk Warrior couldn’t get my data back.

Data Robotics’ marketing speak tells you of super-fast transfer speeds and protection against data loss when you buy their devices. The things you most need to remember are that you will experience data loss (that’s a given) and that you won’t be recovering your data at super-fast transfer speeds such as USB 3.0/USB-C or Thunderbolt 1/2/3. You will instead be forced to use Disk Warrior’s Preview application (if you’re on a Mac) and you will be recovering your data at USB 1.1 speeds. That’s right, take a moment to think about that! In order to recover data from my Drobo 5D, I have to use Disk Warrior (a Mac app known for its ability to recover failed volumes), because it won’t mount any other way. It certainly doesn’t mount through the Finder or through Disk Utility.

“Work” is the Drobo 5D volume that refuses to mount

The size of my Drobo volume is about 12 TB. Thank God some of it was backed up locally with Resilio Sync and some of it I recovered from online backups with Backblaze, so I only needed to recover part of my data from the Drobo 5D itself, but for a single 3 TB Final Cut Pro library, it took roughly 300 hours to transfer it to another drive! 300 HOURS! It might be somewhat tolerable (in some masochistic sort of way) if I knew the damned thing would stay on for all that time, allowing me to copy the data in one go, but you never know when it’s going to restart. It can go at any time.

I’ve posted illustrative screenshots below. Feel free to do the math yourselves as well. Also think about how much worse this problem gets when your Drobo volume is much, much bigger. The Drobo 5D and 5D3 can go to 64 TB, while the Drobo 8D can go to 128 TB. Do you really want to be stuck copying 50-60 TB of data at USB 1.1 speeds? How about 100 TB of data? Think about that before you click on the Buy button and get one of those shiny black boxes.
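The arithmetic behind those numbers sketches out easily. Assuming a sustained rate of roughly 3 MB/s through Disk Warrior’s Preview (in line with what I observed) versus ~300 MB/s for a healthy Thunderbolt array (a ballpark figure, not a measurement):

```shell
# hours TB RATE: rough copy time in hours for TB terabytes at RATE MB/s
# (binary terabytes: 1 TB = 1024*1024 MB).
hours() { awk -v tb="$1" -v rate="$2" 'BEGIN { printf "%.0f\n", tb*1024*1024/rate/3600 }'; }

hours 3 3     # 3 TB at ~3 MB/s: about 291 hours, close to the ~300 I saw
hours 3 300   # the same copy at ~300 MB/s: about 3 hours
hours 64 3    # a full 64 TB Drobo 5D3 at USB 1.1-class speeds: ~6,200 hours
```

That last figure works out to the better part of a year of continuous copying.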

The hard truth is that you can’t put all your eggs in one basket. Having all your important data on a single volume, which is what the Drobo lets you do, is a dumb idea. If that volume goes, all your data goes along with it, and it can take MONTHS to recover it from backups.

After more than 12 hours…
Transfer speeds vary between 100 kB/s and 4 MB/s, but mostly stay around 100-200 kB/s
Two days for 350 GB… Great…

I don’t know about you, but I don’t want to spend anywhere between $5,000-$10,000 with a data recovery firm every few years, when a Drobo unit decides to fail and lose my data. But that’s what Drobo Support advises you to do from the very beginning. They advise against trying to recover the data yourself, even while working with them, and instead try to convince you to send your Drobo to a data recovery firm. For a device that’s supposed to protect your data and a company that brags about “protecting what matters”, that’s disgusting. How exactly are they protecting your data when their devices fail miserably every few years? I guess “protecting what matters” really means “protecting their bottom line” by ensuring suckers keep buying their product.

Do you want to know what happens when I try to open a ticket on the Drobo Support website? This:

For months now, all I get is this grey page with a rotating status ball when I visit Drobo Support

If a company can’t even fix its support page so that it loads up for its customers, that’s cause for worry. I tried explaining the issue to them, I even sent them screenshots, but the techs I spoke with seemed unable to comprehend why the website wouldn’t load. My guess is they’ve got a geofence that stops visits from Romanian IP addresses.

When it comes to their marketing people, I cannot describe them as anything but a bunch of slime buckets. Twice now they’ve ignored me in dire situations when I reached out to them for help, hoping they would redirect my messages internally and get someone to pay proper attention to my case. Back in 2012, when my two DAS Drobo units lost over 30,000 files, they began ignoring me after I told them what had happened. Now, in 2018, they pretended to help, just so it looked good at first glance on social media, but they didn’t follow through on their promise and ignored me afterward. See my comments on their tweet here. I also wrote a message to some guy who calls himself the Drobo CTO but gives no real name, asking him to have a look at my case, but he ignored me. If it’s a fictitious account then it’s understandable, but if it isn’t, then he’s a turd as well for ignoring a legitimate and polite request for help from a customer.

I also reached out to Data Robotics’ current CEO, Mihir Shah, to ask for his help, and after an initial reply that said, “I am looking into this case for you. We will get back to you. Thanks.” — his line went dead. I am left to conclude that he’s of the same breed as his marketing people, who did the same thing when I reached out to them.

You might be asking yourselves why I chose to reach out to these people when I had already opened a case with Drobo Support. Because I felt the technician handling my case wasn’t doing a good job. This case wasn’t a freebie; I paid for it, and I wanted to get real help, not bullshit. Here are a few examples of the kind of “support” I received:

  • He told me to clone a drive using Data Rescue (a Mac app). This was supposed to fix the situation, but it didn’t, because he told me to clone the wrong drive. That’s right, he had me waste an entire day cloning the wrong drive, before I pointed it out and he came back with sheepish excuses, asking me to clone another.
  • He let me buy two new 8 TB hard drives in order to go through the data recovery process, without saying a damned thing, before another tech stepped in to tell me I should have bought a 6 TB hard drive instead, because the cloning process needed for my scenario requires a drive of the exact same size as the 6 TB WD drive that the Drobo 5D refused to use anymore. Why couldn’t he tell me that from the start? Besides, isn’t the whole point of BeyondRAID, which is Drobo’s proprietary technology, to let people use drives of varying sizes? There goes that advantage when you need it…
  • Having worked in IT for a long time, and having worked my way up from the help desk, I could tell the technician didn’t give a crap about the case. It was clear to me that he wasn’t interested in helping me or solving the case, he was simply posting daily case updates of 1-2 sentences with incomplete and unclear replies to my questions on how to proceed and dragging the case on, probably hoping I’d give up.

This time around, I’m faring better when it comes to data recovery than I fared in 2012. Unfortunately, I don’t have a full local backup of my data, even though the Resilio Sync software was supposed to mirror it from my Drobo 5D to my Drobo 5N. I do have a full online backup with Backblaze, but getting back about 12 TB of data through online downloads will prove to be a cumbersome and slow affair. They only let you download up to 500 GB at a time, which means that if I want to download all of it, I’d have to create at least 24 restore jobs on their servers, each of which would generate a ZIP file that would need to be downloaded and then unzipped locally. I could also choose their HDD recovery option which ships your data to you on 3.5TB drives, but I’d need 4 of them, which means it would cost $189*4 or $756. It would probably work out to something like $850-1,000 for me in the end, because I would incur a high shipping cost from the US to Romania and I would also be responsible for customs fees. Theoretically, Backblaze offers a money back guarantee if the drives are returned within 30 days, but I doubt I could return them in that time span, given they’d have to make it to Romania and back. They’re supposed to be working on a European data center and it might open this year, and while that’s going to be nice in the future, it’s not going to work for my situation at this time. No, the much more workable solution would have been for me to have a full local backup of the files on the Drobo 5D. But thanks to Data Robotics, that option got shot down and I didn’t even find out until it was too late…
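The restore math above sketches out like this (ceil_div is just a helper name I made up; the 12 TB, 500 GB and 3.5 TB figures come from my case, and I’m using decimal terabytes of 1,000 GB, which is how storage vendors count):

```shell
# ceil_div A B: divide and round up (works for fractional sizes too).
ceil_div() { awk -v a="$1" -v b="$2" 'BEGIN { print int((a + b - 1) / b) }'; }

ceil_div 12000 500   # 12 TB split into 500 GB ZIP restores: 24 jobs
ceil_div 12 3.5      # 12 TB shipped on 3.5 TB recovery drives: 4 drives
echo $((4 * 189))    # 4 drives at $189 each: $756 before shipping and customs
```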

You see, the Resilio Sync software was set to mirror my data from the Drobo 5D to a Drobo 5N. On my gigabit network, that would have worked just fine. The app was running on my iMac and it was also running on the 5N through the DroboApps platform. The software had a few months to do a proper job syncing my data between the two devices and it indicated to me that it had done it. Unfortunately, after my Drobo crashed, I found out that it hadn’t. The reasons offered by Resilio Support were the following:

  • The Drobo 5N’s processor wasn’t powerful enough
  • The Drobo 5N’s RAM wasn’t enough
  • The Drobo 5N ended up using swap memory and somehow it messed up the sync logs, which caused it to think it had finished the sync when it hadn’t

If I’m to take them at their word, the Drobo 5N is too underpowered to handle Resilio Sync. That’s not to say there isn’t plenty of blame to be assigned to the Resilio Sync software as well. After all, it should have kept track of the data sync accurately. What good are they if they can’t perform at their main advertised task?

If I am to look at what Drobo is saying about the successor to the 5N, the 5N2, namely that it “provides up to 2x performance boost with an upgraded processor and port bonding option”, that would certainly mean the 1st gen 5N is underpowered. Finally, who do you think recommended Resilio Sync to me? It was none other than Data Robotics, whom I called to ask for details on DroboDR, their own data sync application. They said it was too barebones of an app for my needs and suggested I look into Resilio Sync, which was much more robust and full-featured. Thanks once again to my trust in the Drobo, my backup plans got sabotaged, forcing me to waste weeks of my time recovering data from a failed Drobo volume.

But enough about the Drobo 5N, I’ll have a separate post where I’ll talk about how that device has also lost all of my data recently… Back to the Drobo 5D.

To recap, the Drobo 5D lost my data in 2015, 2017 and 2018. That’s three separate, significant data-loss events, the last of which wasted more than a month of my time until I was back on my feet.

I am back on my feet now. It’s the 26th of January 2019 and I finally got the last of my data recovered. I’ve been down, unable to do my work, since the beginning of December 2018. I can’t even remember what day it was the Drobo 5D failed. It’s a blur now.

I ended up buying a third new hard drive, specifically a 6TB hard drive, to match the size of the drive that the Drobo didn’t want to use any more, and I cloned that drive, which turned out to have 5 clusters of bad sectors, onto the new drive, using Data Rescue software for the Mac. I had to use its “segment” cloning mode, which attempts to get around the bad sectors, and this meant the cloning operation took more than 120 hours!

A screenshot taken after 117 hours…

Around hour 122, my iMac crashed and I had to reboot it, so I don’t know whether the cloning operation completed or not. My screen went black and stayed that way. It could be that the cloning ended and then my iMac crashed, or simply that my iMac had had enough of staying up through a lengthy cloning process and went poof. I don’t know, and I wasn’t about to start cloning the damn drive again. I took my Drobo 5D out of Read-Only Mode, stuck the cloned drive into it and booted it up, expecting some data loss. Besides, even if the cloning operation had completed successfully, I’d still have lost some data. If you look above, you’ll see that there were 298.2 MB of bad, unrecovered data. That wasn’t from one portion of the drive, but from five different portions, because there were five significant slow-downs in the cloning process.

The 5D booted up and started its data protection protocol

Once it booted up, I started copying my data off it, not knowing how long it might stay on. For all I knew, it could go into a vicious reboot cycle at any moment. I’d already recovered all I could from Backblaze and from my local backups, and I needed about 6-7 TB of data from the 5D before I was back up to normal… “normal” being a loose term given how unreliable the Drobo is and how much data I’d already lost.

Sure enough, data loss soon reared its ugly head. Thanks, Drobo, you bunch of slime buckets!

One of the damaged FCPX libraries with lots of missing video clips

So far, two FCPX libraries containing the videos I’d made in 2017 and 2018 are damaged. I don’t know if I’ll be able to get those videos back.

I consoled myself with the idea that at least I was able to recover most of my data at Thunderbolt speeds, not at USB 1.1 speeds, so that saved me about 2-3 more weeks of painful waiting.

All of my data is now off the Drobo 5D, and I don’t plan on using it again. I’ve put it in a storage closet. I’m done with it, and I recommend you stop using your Drobos too, if you have one, because it’s only a matter of time before they lose your data. In my opinion, they offer no more protection than individual drives; they just cost more, force you to buy more hard drives and make more noise. My office is so much quieter now that the Drobo 5D is gone. I’m using individual drives, which I plan on cloning locally. I’ll continue to use Backblaze for cloud backups, and I may also do periodic local backups to a NAS of some sort. I have found out during these past few months that I also cannot rely on the Drobo 5N NAS to store my data, because one of my Drobo 5N units has just lost all of the data I’d entrusted to it. I am now in the process of recovering that… It’s non-stop torture, time and data loss with a Drobo!

I have been a Drobo customer since December 2007, when I bought my first Drobo. I was also among the first Drobo “Evangelists”, as they called their enthusiastic customers back then. It’s now January 2019, and I am done with Drobo. I was a loyal customer for 12 years and I stuck with them through an incredibly disheartening amount of data loss, problematic units and buggy firmware. That’s enough of that. Caveat emptor. Drobo no more.

Thoughts

The flip side of digital photography

If you’re old enough, you’ll remember how different photography was before the arrival of digital cameras. Not only was it difficult to get great photos, the kind that were good enough for publication, but it was difficult to develop and reproduce them. There were real barriers to entry and to success in the field. They weren’t insurmountable, but they were there.

Nowadays, digital cameras make it so easy for us. Even a novice can occasionally get a great photo simply by clicking the shutter button, because modern cameras can handle pretty much any situation. They don’t do everything; you still need to know what you’re doing in some scenarios, but they’ll get you pretty close to your desired result by themselves, most of the time. Not only is it easy to take photos, but it’s also easy to “develop” them on your computer, and you can reproduce them endlessly. The barriers to entry and success in the field are now almost gone.

However, one thing we all learn as we age is that everything comes with pluses and minuses. Just like film photography had certain minuses, digital photography comes with plenty of unpleasantries on its flip side.

Publications that used to hire photographers and pay them good wages are dwindling. How many do you know of that still have on-staff photographers, or hire photographers for their stories? And how do their salaries compare with those of photographers in the past if they’re adjusted for inflation?

Stock agencies are decreasing their payouts to photographers. There is a lot of competition in that market, paired with a real glut of photographs, and when supply constantly outstrips demand, prices fall. There are but a few traditional stock agencies left. There are a ton of microstock agencies which sell photos for piddly sums and pay cents on the dollar to photographers, and they’re also getting bought out and merging with each other in order to survive. If it wasn’t clear a few years ago, it’s becoming painfully clear now that a photographer cannot make a living selling microstock. A few manage to do it, but on average, microstock yields a non-livable income.

There are so many photographs being made that people don’t truly appreciate them anymore. Do you remember how we used to admire photographs in the past? We’d stare at them for 5-10 minutes at a time, taking in each detail. We’d cut them out of magazines and paste them in scrapbooks. We’d look at them and look at them and look at them… Now we’re lucky if a photo gets 5 seconds of someone’s time. There are so many of them that people just gloss right over a photo that took days or hundreds of tries to make. Perhaps you’ll understand this better if I compare it to a periodical cicada emergence. In just a few days, animals that would eagerly consume the cicadas as they came out become so glutted that they simply lie on the ground and watch them crawl around and over them, unable to eat a single morsel. That’s what’s going on with photographs now. Each of us has a rhythm, a rate of “ingesting” digital content, and we’ve all reached our max, but the photographs just keep coming. They keep coming, and their rate of production is actually increasing. We cannot keep up.

Digital photography gear is made to become obsolete, causing you to spend more money every few years. Remember how you could use the same film camera for 10-20 years, even a lifetime, if you took care of it? That’s not the case with digital cameras, which typically last about 4-5 years before something goes bad. Even if you’re willing to pay a repair shop to have it fixed, camera manufacturers stop stocking parts for older cameras after a certain number of years, because they want to force you to buy a new model. I wanted to send my Canon 5D in for repairs last year, but I couldn’t. The repair shop said I shouldn’t bother, because Canon no longer allows them to work on the 1st gen 5D and has stopped stocking parts. Not that Canon repair experiences were so great to begin with, but at least they got the job done. I also sent my Olympus PEN E-P2 in for repairs last year, but it didn’t get repaired. It came back just as I sent it, with a message that offered apologies for the inconvenience and explained that they’d stopped stocking parts for that model just a few months back; support had been discontinued by Olympus. I don’t understand it: there’s money to be made with service and repairs, so why stop supporting a model? Why not keep servicing it for as long as the customer is willing to use it? The car industry proved long ago that that business model works.

Cameras, lenses and flashes are getting more expensive each year. Manufacturers can call them inflation adjustments all they want, but they still feel very much like price hikes. And when they’re coupled with no real way to make money from your photos anymore, what are you left with? Doing weddings? Yuck. I don’t know how photographers are coping with all of this. I have a nagging feeling that wedding photographers are pretty much the only ones making money from photography these days. They’re certainly the bulk of the paying customers for camera manufacturers. It’s them and the online “experts” that have sprouted like mushrooms after rain, offering “advice” on YouTube and other video sites about which camera model to buy. It’s a new model/brand each week, of course, unless they’re getting paid by a manufacturer to promote a certain brand.

There are real costs associated with processing, storing and archiving digital photographs. We’re told that digital photographs are pretty much free and there’s never been a better time to take many, many photos in order to learn the craft, but there are significant costs that come into play when you add the price of a good computer and good software and the storage and backup solutions that you will absolutely need unless you want your photos and your hard work to go up in a puff of virtual smoke. I’d like to challenge you to add up the costs of your camera gear (camera, lenses, flashes, adapters, tripods, etc.) and computer equipment (laptop/desktop, external hard drives, backup equipment/services) and once you have a total, divide it by the number of photographs you’ve taken with your camera so far. That’ll give you a pretty good idea of the cost per image, and you’ll see that digital photographs are not free. Granted, that cost per image will go down the longer you keep your current equipment and the more photos you take with it, although the cost of storage and backup will still be there for your larger collection of photographs. Do you realize you’ll likely need to pay for a backup subscription for the rest of your life? It’s no wonder that more and more people choose to take photos with their smartphones and edit them directly on those devices, forgoing the cost of computer equipment. And when smartphone manufacturers also offer direct and almost instantaneous cloud backup of the images and videos taken with the phones (at somewhat reasonable prices) it becomes a very attractive offer.
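
If you want to run that cost-per-image exercise quickly, here’s a minimal sketch; every figure in it is a made-up placeholder, not my actual gear list:

```python
# Rough cost-per-image estimate: one-time gear + computing costs divided by
# the number of photos taken so far. All figures below are placeholders.
gear = {
    "camera body": 1200.00,
    "lenses": 2500.00,
    "flashes and accessories": 600.00,
    "computer": 2000.00,
    "external drives and backup": 800.00,
}
photos_taken = 85_000  # hypothetical shutter count

total_cost = sum(gear.values())
cost_per_image = total_cost / photos_taken
print(f"Total outlay: ${total_cost:,.2f}")
print(f"Cost per image: ${cost_per_image:.4f}")
```

The per-image figure drops as you keep shooting with the same equipment, but recurring costs (a backup subscription, for instance) would need to be added as a yearly term on top of this one-time total.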

It’s so easy to reproduce digital photographs that it’s actually a problem, because anyone can steal and plagiarize them. Theft of online photographs is rampant. It’s one thing for a fan to repost your photos on another site — I’d go so far as to say that’s fine… but it’s quite another thing for someone to download your photos, enlarge them in Photoshop and repost them on a stock site or use them in ad campaigns, and this is happening quite a lot.

There is no consistent way to attribute photographs online, which means a photographer’s name is likely to get lost in the shuffle. Sure, you can use a caption that lists the photographer’s name, but that only works if you’re the primary publication and you’ve worked with the photographer. Most software used to export and compress images for online publication strips EXIF and IPTC copyright information. And most online platforms also have no consistent way of keeping that information inside the photographs, instead offering excuses about file size and compression algorithms which sound very empty given how far we’ve come with computer technology. Have you ever tried to find a photographer’s name for a photo reposted on social media? Good luck… Unless they’ve got a tasteful watermark somewhere on the photo, the metadata’s been wiped clean by these sites. Even Flickr still does not keep a photographer’s name in the metadata of a photo. If you download a photo from a Flickr contact, you’ll get a link to the page where it was found and maybe a caption, but you will not get something as basic as the photographer’s name, much less the rest of the copyright information.

I’m not saying we should go back to film and analog equipment. I love digital cameras and their ease of use. And I love the various advances being made in digital camera gear. Some of the minuses listed above can even be fixed. I’m just not enthusiastic about the flip side. When photographs were harder to make, we appreciated them more and good photographers stood a good chance of making good money with them. Now that photographs are easy to make, we don’t appreciate them, and income from photographs has gone down to pennies on the dollar, if that. Thank goodness I take photographs for the sake of it, as a creative endeavour that relaxes me after working on my various projects, but I wonder how others are coping with these changes. That’s not to say I’d mind making money from my photographs on my own terms, though.

Reviews

A follow-up to my review of Google’s Backup and Sync

I reviewed Google’s Backup and Sync service back in December. There were several issues with the service that I outlined there, such as the app backing up files it was not supposed to back up, and files counting toward the quota even though the service was supposed to compress them and allow unlimited free storage. I thought I’d do a follow-up because, as you may have guessed already, there are more issues I want to point out, as well as a few pieces of advice that might help you in your use of the app.

One issue that occurs over and over is that the app crashes. It pops up an error message and says it needs to quit. That would be somewhat okay, but when you start it back up, it re-checks all the files it’s supposed to back up, and that is an energy-hungry process. You can see it at the top of the active apps in Activity Monitor (on the Mac), eating up processor cycles as it iterates through its list of files. Even when it does its regular backup in the background, it still climbs toward the top of the active apps. To be fair, Flickr’s own Uploadr is also an energy-hungry app. Neither app lets you throttle it (to use less energy, work slower, etc.), so they churn away at your computer’s resources even though they’re supposed to work quietly in the background.

Another issue that still occurs is that the app backs up files that it’s not supposed to back up. I have it set to back up only photos and videos and yet it backs up a lot of files with strange extensions that end up counting toward my storage quota on Google Drive. Have a look at the screenshots taken from my settings for the app below.

I had it set to back up RAW files too, but it wasn’t backing up anything except CR2 (Canon RAW files) and DNG (Adobe RAW files, or digital negatives). And it had problems backing those up as well: when a DNG file was over a certain size limit (somewhere around 50 MB, I think), it backed it up but didn’t compress it, so the file counted toward the storage quota.

It wasn’t compressing ORF (Olympus RAW files) while backing them up either, so they counted toward my quota. Since I shoot only with Olympus gear these days, that was no good to me. So I chose not to let it back up any RAW files at all and set my camera to shoot in ORF + JPG. I work with the ORF files in Lightroom, and the unedited JPG files get backed up by the app.

Here’s a list of file extensions that it might be helpful for you to add to the app’s settings, so it won’t put those files on your Google Drive. Of course, as mentioned above, the app backs up all sorts of files it’s not set to back up. It’s like it ignores the settings and just does what it wants, so your mileage may vary.

  • cmap
  • data
  • db
  • db-wal
  • graphdb
  • graphdb-shm
  • graphdb-wal
  • heic
  • heif
  • ithmb
  • lij
  • lisj
  • orf
  • plist
  • psd
  • skindex
  • tif
  • tmp
  • xmp
  • zip
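
If you’d like to audit a folder before pointing the app at it, a short script can list the files matching the extensions above. This is just a local sketch; the folder path in the example is hypothetical:

```python
from pathlib import Path

# Extensions (lowercase, no dot) that I ended up excluding in Backup and Sync.
EXCLUDED = {
    "cmap", "data", "db", "db-wal", "graphdb", "graphdb-shm", "graphdb-wal",
    "heic", "heif", "ithmb", "lij", "lisj", "orf", "plist", "psd",
    "skindex", "tif", "tmp", "xmp", "zip",
}

def files_to_exclude(folder):
    """Return paths under `folder` whose extension is in EXCLUDED."""
    return [p for p in Path(folder).rglob("*")
            if p.is_file() and p.suffix.lower().lstrip(".") in EXCLUDED]

# Example (path is hypothetical):
# for p in files_to_exclude("/Users/me/Pictures"):
#     print(p)
```

The comparison lowercases each suffix, so `.TMP` and `.tmp` are treated the same, which matches how these apps usually handle extensions.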

You may have noticed HEIF and HEIC in the list above. Those are the new image formats used by Apple (alongside HEVC for video) because they offer much better compression than JPG and H.264 at comparable or higher quality. And even though it makes no sense for Google not to know how to compress them and back them up properly, it doesn’t. The app will simply copy them to Google Drive, uncompressed, and they’ll count toward your quota. So all of you who have iPhones and iPads and use the Backup and Sync app or the Google Photos app: you are currently backing up the photos taken with your devices to Google Drive, but they count toward your storage quota even if you don’t want them to. Keep in mind that this may be temporary, and Google may choose to rectify the issue in the coming months.

The storage options on Google Drive are another issue I want to talk about. I had to upgrade my storage because of all these issues. At one point, I had over 400 GB of unexplained files taking up space in there, so I had to move to the 1 TB plan, which costs $10/month. Now I don’t know about you, but that pisses me off. It’s one thing if I choose to upgrade my storage plan because I want to, and it’s another thing altogether to be forcibly upsold because the Backup and Sync app might be used as a funnel to generate gullible leads for Google Drive’s storage plans. Notice I said “might be”; I have no proof of this. It could be that the app is just full of bugs and not well-maintained.

So I did two things: one was to downgrade my storage plan to the minimum of 100 GB at $2/month, and the second was to start looking through my Google Drive in order to see what files were taking up space. I found them but let me tell you, getting rid of them is like pulling teeth. It’s like Google doesn’t want you to get rid of them, so they keep on taking space there and you keep on paying. It’s not right. Let me show you: first you go to Google Drive, and at the bottom of the sidebar on the left, you’ll see how much space you’ve got. My storage quota is under control now, but this is my second day of working on this. Can you believe it? Google has made me waste almost two work days in order to correct a problem that it created.

If you click directly on the space used, in my case the 76.6 GB, it’ll take you to a page that lists all of the files taking up space on your Google Drive, in descending order of file size. Here’s where it might be confusing for some: the files that are compressed and don’t count toward your quota are listed with a file size of 0 bytes. This is not an error; those files aren’t really 0 bytes, but they’ve been compressed, and as far as your quota is concerned, they’re okay. The files that do count toward your quota will be listed at the top. That’s how I found out that Google doesn’t compress PSD files, TIF files or large DNG files. I had images that were over 100 MB in size, some close to 1 GB, that it wasn’t compressing, so I had to delete those. If you want to bring down your storage requirements on Google Drive, you’ll have to do the same. Here’s a screenshot of the page I’m talking about, but keep in mind that I’ve already done the work, so I have no more uncompressed files taking up space. Whatever’s left is in the Trash.

So this part is like pulling teeth. Even though I was using Google’s own browser, Google Chrome, and working on Google’s own service, Google Drive, it was excruciatingly slow to list the files I needed to delete. The page would only pull something like 50 files to display, and if you wanted to see more, you had to scroll down and wait for it to pull up more… and then the browser would almost freeze and give you a warning to let you know the page was eating up too many resources… ugh… what a nasty thing to do to your customers, Google!
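
If you’d rather not page through the web UI, the same triage can be done on raw file metadata. This is only a sketch, assuming you’ve already fetched a list of file records some other way (the Drive API exposes a quotaBytesUsed field on each file); the sample records below are made up for illustration:

```python
def files_counting_toward_quota(records):
    """Split Drive file records into (counted, free), biggest first.

    Each record is a dict with "name" and "quotaBytesUsed"; files that
    Google compressed report 0 bytes and don't count toward the quota.
    """
    counted = [r for r in records if int(r.get("quotaBytesUsed", 0)) > 0]
    free = [r for r in records if int(r.get("quotaBytesUsed", 0)) == 0]
    counted.sort(key=lambda r: int(r["quotaBytesUsed"]), reverse=True)
    return counted, free

# Made-up sample records, shaped like Drive API metadata:
sample = [
    {"name": "IMG_0001.jpg", "quotaBytesUsed": "0"},
    {"name": "scan.tif", "quotaBytesUsed": "120000000"},
    {"name": "pano.psd", "quotaBytesUsed": "913000000"},
]
counted, free = files_counting_toward_quota(sample)
for r in counted:
    print(r["name"], r["quotaBytesUsed"])
```

The point is simply that the 0-byte entries are the compressed, quota-free ones; everything with a nonzero count is a deletion candidate.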

Have a look at the resources Chrome was eating up during this whole thing:

This “fun activity” took up most of my two days. Not only did it work like this when I needed to identify the files that I needed to delete, but once they were in the Trash, that page also worked the same way. In the web browser, it would only pull up about 50 files or so for me to delete at once. Even though the “Empty Trash” option was supposed to clear the Trash of all of the files in it, it would only delete the 50 or so files that it pulled up. Sure, you can scroll down, wait for it to pull up 50 more files, scroll down again, etc. until Chrome gives you a warning that the page isn’t working properly anymore, then you can empty the trash, deleting a few hundred files, then go again and again and again. I tell you, I suspect that Google is doing this on purpose so you don’t clean up your Drive and are forced to upgrade your storage plan…

I looked this thing up, and some people had more luck emptying the trash by using the mobile app (for iOS or Android). I tried it on my iPhone and it hung, then crashed. I tried it on my iPad and it would hang, the little Googley kaleidoscope wheel going on and on for hours, and then it would either crash or keep on twirling. I left my iPad with the app open all night after issuing the Empty Trash command and when I came back to it in the morning, it was still twirling away and the files hadn’t been deleted.

So now it’s back to the browser interface for me until I clean up all the files. See the screenshot below with the twirly blue thing in the middle? That’s me waiting on Google to list those files in the Trash… By the way, I bought a 2 TB storage plan on Apple’s iCloud to back up my phones, tablets and computers, and I can share that plan with my family. It costs the same as Google’s 1 TB plan: $10/month.

Will I keep using the Backup and Sync app? Yes, at least for now. The promise of unlimited storage for all my compressed images is a tempting thing. I realize there’s a loss in resolution and quality, but should something ever happen to my files and my backups, at least I’ll have them stored somewhere else and will be able to recover them; they might not be their former selves, but I’ll have something.

Just FYI, I back up locally and remotely. For local backups I use Mac Backup Guru and for the remote backups I use Backblaze, which I love and recommend. Their app is amazing: blazing fast, low energy footprint, works quietly in the background and has backed up terabytes of data in a matter of 1-2 weeks for me. And as for my hardware, I still use Drobos and I love and recommend them as well. I’ve been using them since 2007 and while I’ve had some issues, I still think they’re the best and most economical expandable redundant storage on the market. I use a Drobo 5D next to my iMac and two Drobo 5N units on the network.

I hope this was helpful to you!

Reviews

A review of Google’s Backup and Sync

[Image: from Google Drive to Backup and Sync]

Google launched this new service in the second half of 2017. I remember being prompted by the Google Drive app to install an upgrade, and after it completed, I noticed a new app called “Backup and Sync” had been installed, and the Google Drive app had become an alias.

[Screenshot: the new Backup and Sync app, with the old Google Drive app now an alias]

The new app sat there unused for some time, until I discovered its new capability: backing up and syncing other folders on my computer, not just the Google Drive folder. This was, and is, good new functionality for Google, because it ties in very nicely with its Photos service, which has long offered the ability to back up all of the photos and videos taken with mobile devices to the cloud through the Google Photos mobile app. I’ve been using Google Photos for several years, going back to when it was called Picasa Web Albums.

I set it to back up all of my photos and videos, allowing Google to compress them so I could back up the whole lot. (It’s the “High quality (free unlimited storage)” option selected in the screenshot posted below.)

[Screenshot: the “High quality (free unlimited storage)” option in Backup and Sync preferences]

I already back up all of my data with Backblaze, which I love and recommend, but it doesn’t hurt to have a second online backup of my media, even if it gets compressed. Having lost some 30,000 images and videos a few years back, I know full well the sting of losing precious memories and when it comes down to it, I’d rather have a compressed backup of my stuff than none at all.

[Screenshot: Backup and Sync settings]

The thing is, there are shortcomings and errors with this new service from Google, which I will detail below. The backup itself was fast. Even though I have several terabytes of personal media, they were uploaded within a week. So that’s not the issue. After all, Google has a ton of experience with uploads, given how much video is uploaded to YouTube every single day.

[Screenshot: Backup and Sync reporting files it was unable to upload]

As you can see from the screenshot posted above, it was unable to upload quite a few files. The app offers the option of uploading RAW files in addition to the typical JPG, PNG and videos, but it couldn’t upload RAW files from Olympus (ORF), Adobe (DNG) and Canon (CR2). They were listed among the over 2700 files that couldn’t be backed up.

[Screenshot: some of the files that couldn’t be backed up]

I ended up having to add the extensions of RAW, PSD, TIFF and other files to an “ignore” list in the app preferences. This is the full list I’ve added there so far: DNG, TIFF, RAF, CRW, MOV, PSD, DB, GRAPHDB, PLIST, and LIJ. There also seems to be a file size limit on images and videos, because most of my large images (stitched panoramas) and videos of several GB or more didn’t get uploaded. That’s a problem for an app that promises to back up all your media.
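
To find out in advance which files might trip that apparent size limit, you can list everything over a threshold before the app ever sees the folder. The 50 MB cutoff below is my guess from observation, not a documented limit, and the example path is hypothetical:

```python
from pathlib import Path

SIZE_LIMIT = 50 * 1024 * 1024  # ~50 MB; an observed guess, not documented

def oversized_files(folder, limit=SIZE_LIMIT):
    """Yield (path, size_in_bytes) for files larger than `limit`."""
    for p in Path(folder).rglob("*"):
        if p.is_file() and p.stat().st_size > limit:
            yield p, p.stat().st_size

# Example (hypothetical path):
# for path, size in oversized_files("/Users/me/Pictures"):
#     print(f"{size / 1e6:8.1f} MB  {path}")
```

Anything it prints (big panoramas, long videos) is worth backing up through another channel rather than trusting the app with it.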

There were also quite a few crashes. The app crashed daily during the upload process, and even now it crashes every once in a while. I set up my computer to send crash reports to Apple and to the app developers, so I assume Google got them and will at some point issue an update that fixes those bugs.

I also kept running out of space on my Google account. Given that I’d set the app to compress my images so I’d get “free unlimited storage”, and I’d also set it to back up only my images and videos, this didn’t and doesn’t make sense. Add to this the fact that it keeps trying, unsuccessfully, to back up all sorts of other non-image files (see the paragraph above, where I had to add all sorts of extensions to the ignore list), and once again, this app seems like it’s not fully baked. I ended up having to upgrade my storage plan with Google to 1 TB, so it’s costing me $9.99/month to back up most (not all) of my images and videos, compressed, to a service that offers “free, unlimited storage”. The app says I’ve now used up 408 GB of my 1 TB plan. Before I started backing up my media, I was using about 64 GB, adding together Gmail and Google Drive. So about 340 GB are getting mysteriously used by invisible files that I can’t see in Google Photos or Google Drive, but that are obviously stored somewhere by the Backup and Sync app.

Remember, this is Google. They have a ton of experience with apps, with images and with videos, so why did they push this out when it still has all these issues?

Thoughts

Fun with technology

I’ve had multiple Drobo units since 2007. To this day, I still enjoy adding a hard drive to a Drobo. It’s one of those things that can be an ordeal with other tech, but on a Drobo, it’s been made fun through proper planning and design.

It lets you know when it’s low on space, you order a drive, and when it arrives, you look at the app, which tells you exactly what size drive is in each bay. Pressing a small lever on the side of the bay releases the drive, which slides out. You put the new one in, the Drobo immediately checks and formats it, then begins striping the data set across it. By the way, the screenshot below shows my Drobo 5D.

[Screenshot: Drobo Dashboard showing my Drobo 5D]

I love this process. It’s so simple and so fun! The Drobo doesn’t care what hard drive you buy, as long as it’s larger than the one you already had. It allows you to grow the capacity of your Drobo over time, as prices for newer, bigger hard drives decrease, without any sort of headaches. This is technology done right.


We need to focus our efforts on finding more permanent ways to store data. What we have now is inadequate. Hard drives are susceptible to failure, data corruption and data erasure (see effects of EM pulses for example). CDs and DVDs become unreadable after several years and archival-quality optical media also stops working after 10-15 years, not to mention that the hardware itself that reads and writes to media changes so fast that media written in the past may become unreadable in the future simply because there’s nothing to read it anymore. I don’t think digital bits and codecs are a future-proof solution, but I do think imagery (stills or sequences of stills) and text are the way to go. It’s the way past cultures and civilizations have passed on their knowledge. However, we need to move past pictographs on cave walls and cuneiform writing on stone tablets. Our data storage needs are quite large and we need systems that can accommodate these requirements.

We need to be able to read/write data to permanent media that stores it for hundreds, thousands and even tens of thousands of years, so that we don’t lose our collective knowledge, so that future generations can benefit from all our discoveries, study us, find out what worked and what didn’t.

We need to find ways to store our knowledge permanently in ways that can be easily accessed and read in the future. We need to start thinking long-term when it comes to inventing and marketing data storage devices. I hope this post spurs you on to do some thinking of your own about this topic. Who knows what you might invent?

How To

How to create a Fusion Drive on a mid-2011 iMac

Yes, you can enable Fusion Drive on older Macs. I’m not sure how this method will work with Macs older than 2011, but I know for sure that it works on mid-2011 iMacs, and quite possibly on other Macs made since then. I have just completed this process for my iMac and I thought it would help you if I detailed it here.

I like Fusion Drive because it’s simple and automated, like Time Machine. Some geekier Mac users will likely prefer to install an SSD and manually separate the system and app files from the user files which take up the most space, which is something that gives them more control over what works faster and what doesn’t, but that’s a more involved process. Fusion Drive works automatically once you set it up, moving the files that are used more often onto the SSD and keeping the ones that are accessed less often on the hard drive. This results in a big performance increase without having to fiddle with bash commands too much.
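
To make the idea concrete, here’s a toy model of the tiering behavior. To be clear, this is not Apple’s actual algorithm (Core Storage works at the block level and its heuristics aren’t public); it just illustrates the “promote what’s used often” idea:

```python
from collections import Counter

class FusionToyModel:
    """Toy illustration of Fusion Drive-style tiering -- NOT Apple's algorithm.

    Files start on the HDD; once a file has been accessed more than
    `threshold` times, it gets promoted to the (smaller, faster) SSD tier.
    """

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.hits = Counter()
        self.ssd = set()
        self.hdd = set()

    def add(self, name):
        """New files land on the high-capacity HDD tier."""
        self.hdd.add(name)

    def access(self, name):
        """Record an access; hot files migrate to the SSD tier."""
        self.hits[name] += 1
        if name in self.hdd and self.hits[name] > self.threshold:
            self.hdd.remove(name)
            self.ssd.add(name)

model = FusionToyModel()
model.add("report.pages")
model.add("old-movie.mov")
for _ in range(5):
    model.access("report.pages")  # frequently used -> promoted to SSD
model.access("old-movie.mov")     # rarely used -> stays on the HDD
```

The real thing migrates data in both directions (cold data is demoted back to the HDD), which is why the single-volume illusion holds even as the SSD fills up.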

The hardware

My machine is a 27″ mid-2011 iMac with a 3.4 GHz processor and 16 GB of RAM. I bought it with a 1 TB hard drive, which I recently considered upgrading to a 3 TB hard drive but decided against, given the fan control issues with the temperature sensor and the special connector used on the factory drive.

[Image: the iMac’s basic specs]

I purchased a 128 GB Vertex 4 SSD from OCZ. It’s a SATA III (6 Gbps) drive, and when I look in System Information, my iMac sees it as such and is able to communicate with it at 6 Gbps, which is really nice.

[Image: OCZ Vertex 4 128 GB SSD]

[Image: the SSD’s specs]

The hardware installation is somewhat involved, as you will need to not only open the iMac but also remove most of the connections and unseat the motherboard so you can get at the SATA III connector on its back. You will also need a special SATA cable, which is sold as part of a kit by both OWC and iFixit. The kit includes the suction cups used to remove the screen (which is held in place with magnets) and a screwdriver set.

[Image: the second drive/SSD installation kit]

You can choose to do the installation yourself if you are so inclined, but realize that you may void the warranty on the original hard drive if something goes wrong, and this is according to Apple Tech Support, with whom I checked prior to ordering the kit. Here are a couple of videos that show you how to do this:

In my case, it just so happened that my iMac needed to go in for service (the video card, SuperDrive and display went bad) and while I had it in there, I asked the technicians to install the SSD behind the optical drive for me. This way, my warranty stayed intact. When I got my iMac back home, all I had to do was to format both the original hard drive and the SSD and proceed with enabling the Fusion Drive (make sure to back up thoroughly first). You can opt to do the same, or you can send your computer into OWC for their Turnkey Program, where you can elect to soup it up even more.

The software

Once I had backed up everything thoroughly through Time Machine, I used the instructions in this Macworld article to proceed. There are other articles that describe the same method, and the first person to realize this was doable and blog about it was Patrick Stein, so he definitely deserves a hat tip. I’ll reproduce the steps I used here; feel free to also consult the original articles.

1. Create a Mountain Lion (10.8.2) boot disk. Use an 8 GB or 16 GB stick for this; it will allow you to reformat everything on the computer, just to clean things up. Otherwise you may end up with two recovery partitions when you’re done. I used the instructions in this Cult of Mac post to do so. The process involves re-downloading 10.8.2 from the Mac App Store (if you haven’t bought it yet, now is the time to do so) and an app called Lion DiskMaker.

2. Format both the original HD and the SSD, just to make sure they’re clean and ready to go. Use Disk Utility to do this, or if you’re more comfortable with the command line, you can also do that (just be aware you can blow away active partitions with it if you’re not careful).

3. List the drives so you can get their correct names. In my case, they were /dev/disk1 and /dev/disk2.

diskutil list

4. Create the Fusion Drive logical volume group. When this completes, you’ll get a Core Storage LVG (logical volume group) UUID. Copy that string; you’ll need it for the next step.

diskutil coreStorage create myFusionDrive /dev/disk1 /dev/disk2

5. Create the Fusion Drive logical volume. I used the following command:

diskutil coreStorage createVolume paste-lvg-uuid-here jhfs+ "Macintosh HD" 100%

6. Quit Terminal and begin a fresh install of Mountain Lion onto the new disk called “Macintosh HD”.

7. Restore your apps, files and system settings from the Time Machine backup using Migration Assistant once you’ve booted up. Here’s an article that shows you how to do that. When that completes, you’re done!
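
One fiddly manual step above is copying the LVG UUID out of Terminal by hand. If you’re scripting the setup, a regex can pull it out of diskutil’s output; note that the sample string below is illustrative only, not verbatim diskutil output:

```python
import re

# Standard 8-4-4-4-12 uppercase-hex UUID, as diskutil prints them.
UUID_RE = re.compile(r"\b[0-9A-F]{8}(?:-[0-9A-F]{4}){3}-[0-9A-F]{12}\b")

def extract_lvg_uuid(diskutil_output):
    """Return the first Core Storage UUID found in the output, or None."""
    m = UUID_RE.search(diskutil_output)
    return m.group(0) if m else None

# Illustrative sample only; real diskutil wording may differ:
sample = "Core Storage LVG UUID: 11111111-2222-3333-4444-555566667777"
lvg = extract_lvg_uuid(sample)
```

You could then feed `lvg` straight into the createVolume command instead of pasting it by hand.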

The result

Was it worth it? Yes. The boot-up time went from 45-60 seconds to 15 seconds, right away. And over time, the apps and files I use most often will be moved onto the SSD, thus decreasing the amount of time it’ll take to open and save them.

At some point, I expect Apple to issue a utility, like Boot Camp, that will allow us to do this more easily and automatically. Until then, that’s how I set up Fusion Drive on my iMac, and I hope it’s been helpful to you!

Reviews

Hardware preview: ioSafe N2 NAS

ioSafe, the company famous for its line of rugged external drives that can withstand disasters such as floods, fires and even crushing weight, has come up with a new product: the N2 NAS (Network Attached Storage) device.

The N2 device comes at the right time. The market for NAS devices is maturing and demand is growing. Western Digital has even come out with a line of hard drives, the WD Red, specifically targeted at NAS enclosures. To my knowledge there is no other disaster-proof NAS device out there, so ioSafe’s got the lead on this.

The N2 appliance is powered by Synology® DiskStation Manager (DSM) and is aimed at the SOHO, SMB and Remote Office Branch Office (ROBO) markets.

The high-performance 2-bay N2 provides up to 8TB of storage capacity and is equipped with a 2GHz Marvell CPU and 512MB of memory. The N2 uses redundant hard drives as well as ioSafe’s patented DataCast, HydroSafe and FloSafe technologies to protect data from loss in fire up to 1550°F and submersion in fresh or salt water up to a 10-foot depth for 3 days.

Features:

  • Local and Remote File Sharing: Between virtually any device from any location online
  • Cloud Station: File syncing between multiple computers and N2 (like Dropbox)
  • iTunes Server
  • Surveillance Station: Video surveillance application
  • Media Server: Stream videos and music
  • Photo Sharing: Photo sharing with friends and family
  • Mail Server: Email server
  • VPN Server: Manage Virtual Private Network
  • Download Station: Download manager for HTTP, FTP and BitTorrent transfers
  • Audio Station: Stream audio to smartphone (iOS/Android)
  • FTP Server: Remote file transfers
  • Multi-platform compatibility with Mac/PC/MS Server/Linux

Hardware:

  • Dual Redundant Disk, RAID 0/1, Up to 8TB (4TB x 2)
  • 2GHz Marvell CPU and 512MB memory
  • Gigabit Ethernet Port
  • Additional ports for USB 3, SD Memory Card
  • User replaceable drives
  • Protects Data From Fire: DataCast Technology. 1550°F, 1/2 hr per ASTM E119 with no data loss.
  • Protects Data From Flood: HydroSafe Technology. Full immersion, 10 ft. 3 days with no data loss.
  • FloSafe Vent Technology: Active air cooling during normal operation. FloSafe Vents automatically block destructive heat during fire by water vaporization – no moving parts
  • Physical theft protection (optional floor mount, padlock door security – coming Q1 2013)
  • Kensington® Lock Compatible

Support and Data Recovery Service (DRS):

  • 1 Year No-Hassle Warranty (for N2 Diskless)
  • 1 Year No-Hassle Warranty + Data Recovery Service (DRS) Standard (for loaded N2)
  • DRS included $2500/TB for forensic recovery costs for any reason if required
  • DRS and Warranty are upgradeable to 5 years ($.99/TB per month)
  • DRS Pro available includes $5000/TB + coverage of attached server ($2.99/TB per month)

Operating Environment:

  • Operating: 0-35°C (32-95°F)
  • Non-operating: 0-1550°F, 1/2 hr per ASTM E119
  • Operating Humidity: 20% – 80% (non-condensing)
  • Non-operating Humidity: 100%, Full immersion, 10 feet, 3 days, fresh or salt water

Physical:

  • Size: 5.9″W x 9.0″H x 11.5″L
  • Weight: 28 lbs

The N2 appliance is being brought to market with funding obtained through Indiegogo. I know it’s hard to believe when you look at their products, but ioSafe only has about 20 employees. Sometimes they have to get creative about how they fund their R&D.

The ioSafe N2 will begin shipping in January 2013 and will be available in capacities up to 8TB. If you want to supply your own hard drives, introductory pricing for the diskless version is $499 on Indiegogo ($100 off the retail price of $599.99).

I’ve also written about ioSafe Solo, the ioSafe Rugged Portable and the ioSafe SSD devices.

Thoughts

What’s next in data storage?

My recent musings on high definition and the state of the technology behind it have spurred me to think about data storage (not that it’s a new subject for me). But so far, I’ve commented only on what’s already been developed and haven’t taken the time to think about what’s next.

What’s the motivation behind this post? It’s simple: for Ligia’s Kitchen, it costs me about 10.5GB of storage for 5 minutes of final, edited footage of a show, with a one-camera setup. What goes into the 10.5GB? There’s the raw footage (and sound files, if I use a standalone mic), the edits, and the final, published footage. When I use two cameras, the space needed can easily go up by 1.5-2.5x, depending on the shots I need to get. I shoot and edit in 1080p, and output to 720p.
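To put those figures in perspective, here’s a quick back-of-the-envelope calculation. The 20-minute episode length and the 2x two-camera factor are my own assumptions for illustration; only the 10.5GB-per-5-minutes figure comes from my actual projects.

```shell
# Back-of-the-envelope storage math using the figures above.
# 10.5GB covers 5 minutes of finished show with one camera;
# two cameras roughly doubles it (mid-point of the 1.5-2.5x range).
awk 'BEGIN {
  gb_per_5min = 10.5
  minutes = 20                         # hypothetical episode length
  cameras = 2
  factor = (cameras == 2) ? 2.0 : 1.0
  printf "%.1f GB\n", gb_per_5min * (minutes / 5) * factor
}'
# prints: 84.0 GB
```

At that rate, a season of ten such episodes would approach a terabyte, which is why I’m thinking ahead about storage.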

My storage needs are okay for now. I’ve got plenty of space, and if I keep going at this rate, I should be fine. But… and there’s always a but, isn’t there… I have more show ideas in mind. And there’s the hypothetical possibility of shooting with a RED camera at some point in the future, if certain factors come together to allow it. So I’m thinking ahead.

Current hard drive technology (bits of data on disks) has certainly come a long way. Those of us who’ve been in the business long enough know what prices used to be like for capacities that are laughable by today’s standards. Back in 1999, I paid $275 for a 27GB hard drive. My laptop’s drive in college could store a grand total of 120MB. And when I began to learn programming, I’d load the code into memory from tape…

I remember being really excited about Hitachi’s new Perpendicular Magnetic Recording Technology, which came out in early 2006. They even had an animation about it on their website, which they’ve since taken down. That technology is behind all of the new hard drives on the market today, by the way. Hitachi came up with a way to get the bits of data to stand up (hence the term perpendicular) instead of lying down on the hard drive platters, thus doubling the amount of data that could be stored on them.

There are two roads ahead when it comes to data storage, one of which is more likely to succeed than the other:

  • Optical storage (this is probably the future of storage)
  • Biological storage

Let’s first look at biological storage. One particular article made the rounds lately: researchers at the Chinese University of Hong Kong have managed to store 90GB of data in 1g of bacteria. While it sounds exciting, the idea of storing my data in petri dishes on my desk doesn’t readily appeal to me, and certain complications come up:

  • 1g of bacteria is about 10 million cells (that’s a LOT); you have to start thinking about the potential for biohazards when working with that much bacteria.
  • The data is stored in the bacteria’s DNA, which means it’s encrypted (a good thing), but it’s also subject to significant mutation (a bad thing), and retrieval takes a long time because you need a gene sequencer, which is tedious and expensive (another bad thing).

I’m not against this. Hey, if they can make it safe and fast, okay. But I believe this is going to be relegated to special applications. The article suggests the technique is currently used to store copyright information for newly created organisms (I wonder how many new bacteria researchers have created altogether; is it any wonder antibiotics have such a hard time working against them when we keep playing God?). I also see this sort of data storage as a way for spies to operate, or for governments to keep certain secrets.

Okay, onto more cheery stuff, like optical storage. I’ve always thought there was massive potential here, and am glad to see significant work has already been done to make this a reality. There are two technologies which are feasible, according to research that’s already been done:

  • HDSS (Holographic Data Storage Systems), which so far can store up to 1TB of data in a crystal the size of a sugar cube, but doesn’t yet allow rewrites
  • 3D optical data storage, which so far can store up to 1TB of data onto a 1.2mm thick optical disc

These developments are very encouraging. Optical storage is safe, and its potential capacities are huge, possibly endless. And when you think about computer hardware, and how manufacturers are looking at using optical technology in the bridges and buses and wires inside the hardware, because it’s incredibly fast, you start to see how optical makes sense. Let’s also not forget fiber optic cabling, and its incredible capacity to carry data. It certainly looks like optical is the future!

So what’s going to happen to the standard 3.5″ form factor of today’s hard drives? Well, it’s likely to stay the same, even though the storage technology inside it might change. We’ll have crystals and lasers instead of platters and heads, but they’ll likely be able to fit them in there somehow. I don’t think we’ll need to start keeping crystal libraries on our desks, like in Superman’s Crystal Cave, and sticking various-sized crystals into our computers any time soon, although it did look pretty cool when Christopher Reeve did it in the movie.

It really all depends on how soon this new technology will come to market. Right now, there’s clearly enough vested interest in the 3.5″ and 2.5″ form factors to motivate drive manufacturers to shoehorn the new technologies into those shapes, but if optical hard drives won’t be here for the next 5-10 years, then it’s possible that the form factor will change as well. We are after all moving to smaller, sleeker shapes for most computers, notebooks and desktops alike.
