A review of the Stellar Phoenix Photo Recovery software

Having lost photos and videos in the past, I am fairly cautious about my media these days. I keep local and remote backups and I use hardware that writes my data redundantly onto sets of drives, so that I don’t lose anything if one of the drives goes down. I have also purchased data recovery software, just in case something goes bad: I own both Disk Warrior and Data Rescue.

When someone from Stellar Phoenix contacted me to see if I’d be interested in looking at their Photo Recovery software, I agreed. I wanted to see how it compared with what I have. In the interest of full disclosure, you should know they gave me a license key for their paid version of the software.

I put it to the test right away, on what I deemed the hardest task for data recovery software: seeing if it could get anything at all from one of the drives I had pulled out of one of my Drobo units.

As you may (or may not) know, Data Robotics, the company that makes the Drobo, uses its own proprietary version of RAID called BeyondRAID. While this is fine for the Drobo and simple for Drobo owners to use, it also means that data recovery software typically can’t get anything off a single drive pulled from a Drobo drive set. Indeed, after several hours of checking, Stellar Phoenix’s software couldn’t find any recoverable files on the drive. I expected as much, because I know specialized, professional-grade software is needed for this, but I gave it a shot because who knows, someday we may be able to buy affordable software that can do this.

[Screenshot: The Seagate 8TB drive is the one I pulled out of the Drobo]
[Screenshot: What the software found is data gibberish; there were no MP3 or GIF files on that drive]

Now onto the bread and butter of this software: recovering photos and videos from SD cards. I made things harder for it again, because I wanted to see what I’d get. I put a single SD card through several write/format cycles by using it in one of my cameras. I took photos until I filled a portion of the card, downloaded them to my computer, put the card back in the camera, formatted it and repeated the cycle. After I did this, I put the software to work on the card.

Before I tell you what happened, I need to be clear about something: no camera or SD card that I know of follows any hard and fast rules about where (more precisely, in which sectors) to write new data after you’ve formatted the card. The camera may very well write the bits of new photos/videos right over the bits of the photos/videos you took before formatting the card, which makes recovering those specific, overwritten files virtually impossible. What I’m trying to tell you is that what I did results in a file recovery crapshoot: you don’t know what you’re going to get until you run the software on the card.
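
To make that concrete, here’s a toy sketch in Python (my own illustration, not any vendor’s actual algorithm) of why a quick format doesn’t erase your photos, and how recovery software can still find them by scanning the raw bytes for file signatures:

    # Toy model of an SD card: a quick format wipes the allocation table,
    # but the data blocks themselves stay put until they're overwritten.
    disk = bytearray(1024)     # pretend this is the raw SD card
    table = {}                 # the "allocation table": file name -> offset

    def write_file(name, data, offset):
        disk[offset:offset + len(data)] = data
        table[name] = offset

    def format_card():
        table.clear()          # a quick format: only the index is wiped

    JPEG_SOI = b"\xff\xd8\xff" # JPEG start-of-image signature

    write_file("IMG_0001.JPG", JPEG_SOI + b"...pixel data...", offset=128)
    format_card()

    # "File carving": ignore the (now empty) table and scan the raw bytes
    # for known signatures, the way recovery tools find deleted photos.
    hits = [i for i in range(len(disk) - 2) if disk[i:i + 3] == JPEG_SOI]
    print(hits)                # [128] -> the photo is still there, until new data lands on it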

When I did run it, it took about 40 minutes to check the card and it found 578 RAW files, 579 JPG files and 10 MOV files. Since I write RAW+JPG to the card (I have my camera set to record each photo in both RAW and JPG format simultaneously), I knew those files should be the same images, and they were.

[Screenshot: The software found photos and videos from several sessions and dates]
[Screenshot: As you can see from the dates, they ranged from February 13 to March 11]

I then told the software to save the media onto an external drive, so I could check what it found.

[Screenshot: It took about 30-40 minutes to recover the data]

When I checked the files, I saw that it had recovered two sets of JPG files, each containing 579 files; one of the sets began its file names with “T1-…”, and those were the thumbnails of the images. All of the JPG files were readable on my Mac. It was a different story with the RAW files. It recovered three sets of RAW files, each containing 578 files. The first set was readable by my Mac. The second set, marked with “T1-…”, wasn’t readable at all, and the files were tiny, around 10KB each; those were the thumbnails of the RAW files. The third set, marked with “T2-…”, was readable, but the files were around 1MB apiece; those were the mRAW files written automatically by the camera, at a resolution of 3200×2400 pixels. A typical RAW file from the camera I used for my testing ranges from 12-14MB in size, at a resolution of 4032×3024 pixels. It’s kind of neat that the mRAW (or sRAW) files were recovered as well.

Now, I took 3,328 photos with that camera between February 13th and March 11th. It recovered 578 of them, so that’s a 17% recovery rate. Granted, I made it very hard for it by writing to the card in several cycles and reformatting after each cycle. When I look only at the last set of photos recorded to the card, before the last reformat, I see that I took 523 photos on March 10th and 3 photos on March 11th. The software recovered 525 photos from March 10th (so there’s some doubling up of images somewhere) and 2 photos from March 11th. The missing image, however, was among the recovered JPG files, so for that last session, the recovery rate was effectively 100%.
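
If you want to check my math, here’s the arithmetic in a few lines of Python (the counts are the ones quoted above):

    taken_total = 3328
    raw_recovered = 578
    print(f"overall: {raw_recovered / taken_total:.1%}")   # 17.4% -> the ~17% above

    taken_last = 523 + 3   # photos taken after the final format (March 10th + 11th)
    raw_last = 525 + 2     # RAW files recovered, including at least one duplicate;
                           # the one missing frame turned up among the JPGs,
                           # which is why I call the last session 100% recovered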

In all fairness, there is free software out there that can do basic recovery of images from SD cards and other media, so the quality of a piece of software of this nature is determined by how much media it recovers when the free stuff doesn’t work. I believe I made things hard enough for it, and it still recovered quite a bit of data. That’s a good thing.

Let’s not forget about the video files. Those were written to the card with another camera and were dated November 3-6, 2017. I’m surprised it recovered any at all. It gave me 10 video files, of which 5 were readable, so that’s a 50% recovery rate.

Just for kicks, I decided to run Data Rescue on the SD card as well. It also found 579 JPG files and 578 RAW files, all readable by my Mac. It also found 10 video files, but none were readable. However, I have Data Rescue 3, which is a few versions old by now. Data Rescue 5 is out, but I haven’t upgraded yet. It’s possible the new version would have found more files.

Price-wise, Stellar Phoenix Photo Recovery comes in three flavors: $49 for the standard version (this is the one I got), $59 for the professional version (it repairs corrupt JPG files) and $99 for the premium version (it repairs corrupt video files in addition to the rest).

The one thing I didn’t like is that the Buy button didn’t disappear from the software even after I entered the license key they gave me. As for the rest, it’s fine. I think it crashed once during testing, and it didn’t happen while actually recovering data. The design is intuitive, and at $49, this is software you should definitely have around in case something bad happens to your photos or videos. It may not recover all of what you lost, but whatever you get back is much better than nothing, which is exactly what you’ll get if you don’t have it. It’s also a good idea to have multiple brands of this kind of software if you can afford them, because you never know which one will help you more until you try them all. And believe me, when you’re desperate to get your data back, you’ll try almost anything…

Remember, back up your data and have at least one brand of data recovery software in your virtual toolbelt. Stay safe!

Permanent data storage

We need to focus our efforts on finding more permanent ways to store data. What we have now is inadequate. Hard drives are susceptible to failure, data corruption and data erasure (see the effects of EM pulses, for example). CDs and DVDs become unreadable after several years, and even archival-quality optical media stops working after 10-15 years. Not to mention that the hardware that reads and writes this media changes so fast that media written in the past may become unreadable in the future simply because there’s nothing left to read it. I don’t think digital bits and codecs are a future-proof solution, but I do think imagery (stills or sequences of stills) and text are the way to go. It’s the way past cultures and civilizations have passed on their knowledge. However, we need to move past pictographs on cave walls and cuneiform writing on stone tablets. Our data storage needs are quite large, and we need systems that can accommodate these requirements.

We need to be able to read/write data to permanent media that stores it for hundreds, thousands and even tens of thousands of years, so that we don’t lose our collective knowledge, so that future generations can benefit from all our discoveries, study us, find out what worked and what didn’t.

We need to find ways to store our knowledge permanently in ways that can be easily accessed and read in the future. We need to start thinking long-term when it comes to inventing and marketing data storage devices. I hope this post spurs you on to do some thinking of your own about this topic. Who knows what you might invent?

A comparison of CrashPlan and Backblaze

I’ve been a paying CrashPlan customer since 2012 and my initial backup still hasn’t finished. I’ve been a paying Backblaze customer for less than a month and my initial backup is already complete. 

I’m not a typical customer for backup companies. Most people back up about 1 TB of data or less. The size of my minimum backup set is about 9 TB. If I count all the stuff I want to back up, it’s about 12 TB. And that’s a problem with most backup services.

First, let me say this: I didn’t write this post to trash CrashPlan. Their backup service works and it’s worked well for other members of my family. It just hasn’t worked for me. This is because they only offer a certain amount of bandwidth to each user. It’s called bandwidth throttling and it saves them money in two ways: (1) they end up paying less for their monthly bandwidth (which adds up to a lot for a company offering backup services) and (2) they filter out heavy users like me, who tend to fill up a lot of their drives with unprofitable data. My guess (from my experience with them) is that they throttle heavy users with large backup sets much more than they throttle regular users. The end result of this bandwidth throttling is that, even though I’ve been a customer since 2012 — at first, I was on the individual backup plan, then I switched to the family plan — my initial backup never completed and I was well on track to never completing it.

When I stopped using CrashPlan’s backup services, out of the almost 9 TB of data that I need to back up constantly, I had only managed to upload 0.9 TB in FOUR YEARS. Take a moment and think about that, and then you’ll realize how much bandwidth throttling CrashPlan does on heavy users like me.
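
To put a number on that throttling, here’s the average upload rate those figures imply (a back-of-the-envelope calculation in Python, using my rounded totals):

    uploaded_bits = 905.7e9 * 8           # 905.7 GB, in bits
    seconds = 4 * 365 * 24 * 3600         # four years, ignoring leap days
    print(f"{uploaded_bits / seconds / 1e3:.0f} kbps")   # ~57 kbps: dial-up territory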

[Screenshot: After four years of continuous use, I backed up a grand total of 905.7 GB to CrashPlan]

To be exact, counting the various versions of my data that had accumulated on the CrashPlan servers in those four years, I had a total of 2.8 TB stored on their servers, but even if you count that as the total, 2.8 TB in FOUR YEARS is still an awfully small amount.

[Screenshot: Space used on CrashPlan’s servers: 2.8 TB]

Tell me honestly, which one of you wants this kind of service from a backup company? You pay them for years in a row and your initial backup never finishes? If a data loss event occurs and your local backup is gone (say a fire, flood or burglary), you’re pretty much screwed and you’ll only be able to recover a small portion of your data from their servers, even though you’ve been a faithful, paying customer for years… That just isn’t right.

I talked with CrashPlan techs twice in those four years about this very problematic throttling. Given that they advertise their service as “unlimited backup”, this is also an ethical issue: the backup isn’t truly unlimited if it’s heavily throttled and you can never back up all of your data. The answer was the same both times; even the wording was the same, making me think it was scripted: they said that in an effort to keep costs affordable, they have to limit the upload speeds of every user. The first time I asked, they suggested their Business plan had higher upload speeds; in other words, they tried to upsell me. Both times, they also advertised their “seed drive service”, a paid product (they stopped offering it this summer). The gist of it was that they shipped customers who asked for it a 1 TB drive; you’d back up to it locally, then send it back to them to jumpstart your backup. Given my need to back up at least 9 TB of data, this wasn’t a useful option.

[Screenshot: This is false advertising]
[Screenshot: This is also false advertising]

Some of you might perhaps suggest that I didn’t optimize my CrashPlan settings so that I could get the most out of it. I did. I tried everything they suggested in their online support notes. In addition to tricking out my CrashPlan install, my computer has been on for virtually all of the last four years, in an effort to help the CrashPlan app finish the initial backup, to no avail.

Another thing that bothered me about CrashPlan is that it would go into “maintenance mode” very often, and given the size of my backup set, this would take days, sometimes weeks, during which it wouldn’t back up. It would endlessly churn through its backup versions and compare them to my data, pruning out stuff, doing its own thing and eating up processor cycles with those activities instead of backing up my data.

[Screenshot: Synchronizing block information…]
[Screenshot: Compacting data… for 22.8 days…]
[Screenshot: Maintaining backup files…]

I understand why maintenance of the backups is important. But what I don’t understand is why it took so long. I can’t help thinking that the cause may be the Java-based backup engine that CrashPlan uses. It’s not a Mac-native or Windows-native app; it’s a Java app wrapped in Mac and Windows versions. And most Java apps aren’t known for their speed. True, Java apps can be fast, but developers often don’t optimize the code, or so some people claim in online forums.

Another way to look at this situation is that CrashPlan has a “freemium” business model. In other words, their app is free to use for local (DAS or NAS) backup or offsite backup (such as to a friend’s computer). And one thing I know is that you can’t complain about something that’s given freely to you. If it’s free, you either offer constructive criticism or you shut up about it. It’s free and the developers are under no obligation to heed your feedback or to make changes because you say so. As a matter of fact, I used CrashPlan as a free service for local backup for a couple of years before I started paying for their cloud backup service. But it was only after I started paying that I had certain expectations of performance. And in spite of those unmet expectations, I stuck with them for four years, patiently waiting for them to deliver on their promise of “no storage limits, bandwidth throttling or well-engineered excuses”… and they didn’t deliver.

Here I should also say that CrashPlan support is responsive. Even when I was using their free backup service, I could file support tickets and get answers. They always tried to resolve my issues. That’s a good thing. It’s important to point this out, because customer service is an important aspect of a business in the services industry — and online backups are a service.

About three weeks ago, I was talking with Mark Fuccio from Drobo about my issues with CrashPlan and he suggested I try Backblaze, because they truly have no throttling. So I downloaded the Backblaze app (which is a native Mac app, not a Java app), created an account and started to use their service. Lo and behold, the 15-day trial period wasn’t yet over and my backup to their servers was almost complete! I couldn’t believe it! Thank you Mark! 🙂

I optimized the Backblaze settings by allowing it to use as much of my ISP bandwidth as it needed (I have a 100 Mbps connection), and I also bumped the number of backup threads to 10, meaning the Backblaze app could run 10 upload threads at once, each pushing data to their servers simultaneously. I did have to put up with a slightly sluggish computer during the initial backup, but for the first time in many years, I was able to back up all of my critical data to the cloud. I find that truly amazing in and of itself.
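
Why do more threads help so much? On a high-latency link, a single upload stream spends most of its time waiting on round-trips, so several parallel streams keep more data in flight. Here’s a minimal sketch of the idea in Python (my own illustration, with an assumed 200 ms round-trip; this is not Backblaze’s actual client code):

    from concurrent.futures import ThreadPoolExecutor
    import time

    RTT = 0.2   # assumed round-trip time, Romania to California

    def upload_chunk(chunk_id: int) -> int:
        time.sleep(RTT)        # stand-in for "send a chunk, wait for the ack"
        return chunk_id

    chunks = range(100)
    start = time.time()
    with ThreadPoolExecutor(max_workers=10) as pool:   # the "10 backup threads" setting
        list(pool.map(upload_chunk, chunks))
    elapsed = time.time() - start
    print(f"100 chunks, 10 threads: ~{elapsed:.0f}s (a single thread would take ~20s)")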

[Screenshot: This is what I did to optimize my Backblaze installation]

As you can see from the image above, I got upload speeds over 100 Mbps when I optimized the backup settings. During most of the days of the initial upload, I actually got speeds in excess of 130 Mbps, which I think is pretty amazing given my situation: I live in Romania and the Backblaze servers are in California, so my data had to go through a lot of internet backbones and through the trans-Atlantic cables.

The short of it is that I signed up for a paid plan with Backblaze and my initial backup completed in about 20 days. Let me state that again: I backed up about 9 TB of data to Backblaze in about 20 days, and I managed to back up only about 1 TB of data to CrashPlan in about 4 years (1420 days). The difference is striking and speaks volumes about the ridiculous amount of throttling that CrashPlan puts in place for heavy users like me.
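
The same back-of-the-envelope arithmetic, applied to both services with the sizes and durations above, makes the gap plain:

    def sustained_mbps(terabytes: float, days: float) -> float:
        return terabytes * 1e12 * 8 / (days * 86400) / 1e6

    backblaze = sustained_mbps(9, 20)      # ~41.7 Mbps, sustained for 20 days
    crashplan = sustained_mbps(1, 1420)    # ~0.065 Mbps (~65 kbps) over 4 years
    print(f"Backblaze was ~{backblaze / crashplan:.0f}x faster")   # ~639x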

I also use CrashPlan for local network backup to my Drobo 5N, but I may switch to another app for that as well, for two reasons: it’s slow (it spends a lot of time doing maintenance on the backup set), and it doesn’t let me use Drobo shares mapped through the Drobo Dashboard app, which is the more stable way of mapping a Drobo’s network shares. CrashPlan refuses to see those shares and requires me to map network shares manually, which isn’t as stable a connection and leads to share disconnects and multiple mounts, something that trips up CrashPlan. I’m trying out Mac Backup Guru, which is a Mac-native app, is pretty fast and does allow me to back up to Drobo Dashboard-mapped shares. If this paragraph doesn’t make sense to you, it’s okay; you probably haven’t run into this issue. If you have, you know what I’m talking about.

Now, none of this stuff matters if you’re a typical user of cloud backup services. If you only have about 1 TB of data or less, any cloud backup service will likely work for you. You’ll be happy with CrashPlan and you’ll be happy with their customer service. But if you’re like me and you have a lot of data to back up, then a service like Backblaze that is truly throttle-free is exactly what you’ll need.

The value of a good backup

While working on the fifth episode of RTTE, I learned firsthand the value of a good backup. The hard drive on my editing computer (my MacBook Pro) died suddenly and without warning. Thankfully, my data was backed up in two geographically different locations.

The day my hard drive died, I’d just gotten done with some file cleanups, and was getting ready to leave for a trip abroad. I shut down my computer, then realized I needed to check on a couple things, and booted it up again, only this time, it wouldn’t start. I kept getting a grey screen, meaning video was working, but it refused to boot into the OS. And I kept hearing the “click of death” as the hard drive churned. I tried booting off the Snow Leopard DVD, but that didn’t work either. I’d tested the hard drive’s SMART status just a couple of weeks before, and the utility had told me the drive had no problems whatsoever.

I was worried, for a couple of reasons:

  1. The laptop refused to boot up from the OS X DVD, potentially indicating problems beyond a dead hard drive. I do push my laptop quite a bit as I edit photos and video, and I’d already replaced its motherboard once. I was worried I might have to spend more than I wanted to on repairs.
  2. All of the footage for the fifth episode of RTTE was on my laptop. Thankfully, it was also backed up in a couple of other places, but still, I hadn’t had reason to test those backups until now. What if I couldn’t recover it?

I had no time for further troubleshooting. I had to leave, and my laptop was useless to me. I left it home, and drove away, worried about what would happen when I returned.

A week later, I got home and tried to boot off the DVD again. No luck. I had to send it in, to make sure nothing else was wrong. In Romania, there’s only one Apple-authorized repair shop. They’re in Bucharest, and they’re called Noumax. I sent it to them for a diagnosis, and a couple of days later, I heard back from them: only the hard drive was defective, from what they could tell.

I was pressed for time. I had to edit and release the fifth episode of RTTE, and I also had to shoot some more footage for it. I didn’t have time to wait for the store to fix the laptop, so I asked them to get it back to me, while I ordered a replacement hard drive from an online store with fast, next-day delivery (eMag).

The hard drive and the laptop arrived the next day. I replaced the hard drive, using this guide, and also cleaned the motherboard and CPU fans of dust, then restored the whole system from the latest Time Machine backup. This meant that I got back everything that was on my laptop a few hours before it died.

I’d have preferred to do a clean OS install, then install the apps I needed one by one, then restore my files, especially since I hadn’t reformatted my laptop since I bought it a few years ago, but that would have been a 2-3 day job, and I just didn’t have the time. Thankfully, OS X is so stable that even a 3-year-old install, during which I installed and removed many apps, still works fairly fast and doesn’t crash.

Some might say, what’s the big deal? The laptop was backed up, and you restored it… whoopee… Not so fast, grasshopper! The gravity of the situation doesn’t sink in until you realize it’s your work — YEARS of hard work — that you might have just lost because of a hardware failure. That’s when your hands begin to tremble and your throat gets dry, and a few white hairs appear instantly on your head. Even if the data’s backed up (or so you think), until it’s restored and it’s all there, you just don’t know if you can get it back.

I’ve worked in IT for about 15 years. I’ve restored plenty of machines, desktops and servers alike. I’ve done plenty of backups. But my own computer had never gone down; I’d never had a catastrophic hardware failure like this one until now. So even though I’ve been exposed to this kind of thing before, I just didn’t realize how painful it is until now. And I didn’t appreciate the value of a good backup until now.

So, here’s my advice to you, as if you didn’t hear it plenty of times in the past… BACK UP YOUR COMPUTER!

If you have a Mac, definitely use Time Machine. It just works. It’s beautifully simple. I’ve been backing up my laptop with Time Machine to the same reliable drive for years. It’s this little LaCie hard drive.

But the LaCie drive might fail at some point, which is why I also back up my data with CrashPlan. For this second backup, I also send my data to a geographically different location. Since we live in Romania these days, I back up to my parents’ house in the US, where the backup gets stored on a Drobo. The backup is also encrypted automatically by CrashPlan, which means it can’t be read even if it’s intercepted along the way.

It’s because of my obsessive-compulsive backup strategy that I was able to recover so quickly from the hardware failure. Thankfully, these days backups are made so easy by software like Time Machine and CrashPlan that anyone can keep their work safe. So please, back up your data, and do it often!

One more thing. You know the old saying, every cloud has a silver lining? It was true in my case. When I ordered the new drive for my laptop, I was able to upgrade from its existing 250GB SATA hard drive with an 8MB buffer and 5400 rpm to a spacious 750GB SATA hard drive with a 32MB buffer and 7200 rpm, which means my laptop now churns along a little faster, and has a lot more room for the 1080p footage of my shows. 🙂

Save the data!

Some of the most important technology programs that keep Washington accountable are in danger of being eliminated. Data.gov, USASpending.gov, the IT Dashboard and other federal data transparency and government accountability programs are facing a massive budget cut, despite only being a tiny fraction of the national budget.

Help save the data and make sure that Congress doesn’t leave the American people in the dark.

What’s next in data storage?

My recent musings on high definition and the state of the technology behind it have spurred me to think about data storage (not that it’s a new subject for me). But so far, I’ve commented only on what’s already been developed, and didn’t take the time to think about what’s next.

What’s the motivation behind this post? It’s simple. For Ligia’s Kitchen, it costs me about 10.5 GB of storage for 5 minutes of final, edited footage of the show, with a one-camera setup. What goes into the 10.5 GB? There’s the raw footage (and sound files, if I use a standalone mic), the edits, and the final, published footage. When I use two cameras, the space needed can easily go up by 1.5-2.5x, depending on the shots I need to get. I shoot and edit in 1080p, and output to 720p.
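
In case you want to play with these numbers yourself, here’s a small estimator based on the figures above (the episode length and the flat 2x two-camera multiplier are my own round assumptions):

    GB_PER_5MIN_ONE_CAM = 10.5   # raw footage + edits + published file, one camera

    def episode_storage_gb(minutes: float, cameras: int = 1) -> float:
        base = GB_PER_5MIN_ONE_CAM / 5 * minutes     # ~2.1 GB per finished minute
        return base * (2.0 if cameras > 1 else 1.0)  # two cameras: ~1.5-2.5x, call it 2x

    print(f"30-minute, two-camera episode: ~{episode_storage_gb(30, 2):.0f} GB")  # ~126 GB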

My storage needs are okay for now. I’ve got plenty of space, and if I keep going at this rate, I should be fine. But… and there’s always a but, isn’t there… I have more show ideas in mind. And there’s the hypothetical possibility of shooting with a RED camera at some point in the future, if certain factors come together to allow it. So I’m thinking ahead.

Current hard drive technology (bits of data on disks) has certainly come a long way. Those of us who’ve been in the business long enough know what prices used to be like for capacities that are laughable by today’s standards. Back in 1999, I paid $275 for a 27GB hard drive. My laptop’s drive in college could store a grand total of 120MB. And when I began to learn programming, I’d load the code into memory from tape…

I remember being really excited about Hitachi’s new Perpendicular Magnetic Recording technology, which came out in early 2006. They even had an animation on their website, which they’ve since taken down. That technology is behind all of the new hard drives on the market today, by the way. Hitachi came up with a way to get the bits of data to stand up (hence the term perpendicular) instead of lying down on the hard drive platters, thus doubling the amount of data that could be stored on them.

There are two roads ahead when it comes to data storage, of which one is more likely to succeed:

  • Optical storage (this is probably the future of storage)
  • Biological storage

Let’s first look at biological storage. One particular article made the rounds lately: researchers at the Chinese University of Hong Kong have managed to store 90GB of data in 1g of bacteria. While it sounds exciting, the idea of storing my data in petri dishes on my desk doesn’t readily appeal to me, and certain complications come up:

  • 1g of bacteria is on the order of a trillion cells (that’s a LOT); one must start thinking about the potential for biohazards when working with bacteria.
  • The data is stored in the bacterium’s DNA, which means it’s encrypted (a good thing), but it’s also subject to significant mutation (a bad thing), and retrieving it takes a long time because you need a gene sequencer, which is tedious and expensive (another bad thing).

I’m not against this. Hey, if they can make it safe and fast, okay. But I believe this is going to be relegated to special applications. The article suggests the technique is currently used to store copyright information for newly created organisms (I wonder how many new bacteria researchers have created altogether, and is it any wonder antibiotics have such a hard time working against them when we keep playing God). I also see this sort of data storage as a way for spies to operate, or for governments to keep certain secrets.

Okay, onto more cheery stuff, like optical storage. I’ve always thought there was massive potential here, and am glad to see significant work has already been done to make this a reality. There are two technologies which are feasible, according to research that’s already been done:

  • HDSS (Holographic Data Storage Systems), which so far can store up to 1TB of data in a crystal the size of a sugar cube, but doesn’t yet allow rewrites
  • 3D optical data storage, which so far can store up to 1TB of data onto a 1.2mm thick optical disc
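
To get a rough feel for what those capacities mean, compare volumetric densities; the sugar-cube volume (~1 cm³), the 3.5″ drive volume (~390 cm³) and its 2 TB capacity are my assumptions for the sake of the comparison:

    hdss_tb_per_cm3 = 1.0 / 1.0      # 1 TB in a ~1 cm^3 crystal (assumed cube volume)
    hdd_tb_per_cm3 = 2.0 / 390.0     # assumed 2 TB drive in a ~390 cm^3 3.5" enclosure
    print(f"HDSS is ~{hdss_tb_per_cm3 / hdd_tb_per_cm3:.0f}x denser")   # ~195x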

These developments are very encouraging. Optical storage is safe, and its potential capacities are huge, possibly endless. And when you think about computer hardware, and how manufacturers are looking at using optical technology in the bridges and buses and wires inside the hardware, because it’s incredibly fast, you start to see how optical makes sense. Let’s also not forget fiber optic cabling, and its incredible capacity to carry data. It certainly looks like optical is the future!

So what’s going to happen to the standard 3.5″ form factor of today’s hard drives? Well, it’s likely to stay the same, even though the storage technology inside it might change. We’ll have crystals and lasers instead of platters and heads, but they’ll likely be able to fit them in there somehow. I don’t think we’ll need to start keeping crystal libraries on our desks, like in Superman’s Crystal Cave, and sticking various-sized crystals into our computers any time soon, although it did look pretty cool when Christopher Reeve did it in the movie.

It really all depends on how soon this new technology will come to market. Right now, there’s clearly enough vested interest in the 3.5″ and 2.5″ form factors to motivate drive manufacturers to shoehorn the new technologies into those shapes, but if optical hard drives won’t be here for the next 5-10 years, then it’s possible that the form factor will change as well. We are after all moving to smaller, sleeker shapes for most computers, notebooks and desktops alike.

CrashPlan works for transatlantic backups

Updated 11/01/16: I’ve revised my opinion of CrashPlan. See here for the details.

Last week, I wrote an article called “What’s On Your Drobo”, and in it, I mentioned that I was going to try to use an app called CrashPlan to do backups from my photo library in Romania to my backup location in the US. I’m happy to say that it works as expected, and no, this isn’t an April Fool’s joke. Here’s a screenshot of an active backup. At the time, I was getting 2.7 Mbps throughput.

There is a bandwidth bottleneck somewhere, though I’m not sure where it is. My broadband connection in Romania sits at 30 Mbps up and down, as I mentioned here, and my parents’ broadband connection clocks in around 16 Mbps down and 4 Mbps up. Theoretically, since I’m uploading and they’re downloading, I should be getting at least 15 Mbps, but I’m not. So it looks like there’s a bottleneck either as my data exits Romania or as it goes through the transatlantic fiber optic cables. If someone can chime in on this, I’d love to find out more. I do know that I hit roughly the same 2.5-2.7 Mbps ceiling when I upload to SmugMug, YouTube and blip.tv.
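
To show what that ceiling costs in practice, here’s the transfer-time arithmetic (the 100 GB figure is just an example size, not my actual backup set):

    def days_to_upload(gigabytes: float, mbps: float) -> float:
        return gigabytes * 8e9 / (mbps * 1e6) / 86400

    print(f"100 GB at 2.7 Mbps: {days_to_upload(100, 2.7):.1f} days")  # ~3.4 days
    print(f"100 GB at 15 Mbps:  {days_to_upload(100, 15):.1f} days")   # ~0.6 days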

Bottlenecks aside, I’m just happy I can do off-site backups, and at least given my current setup, it’s free! CrashPlan works as advertised! I have to admit I was a skeptic when I downloaded it and installed it. I figured it would work on the local network, which is where I did the initial backups, but it would surely run into some firewall issues when I tried it from another location. Nightmares of re-configuring my parents’ firewall remotely flashed before my eyes… Amazingly enough, I didn’t have to do any of that! It just works!

So, if you’re interested in doing this sort of thing, download CrashPlan (it’s multi-platform), install it on both computers where you want to use it, configure it (use the help section), test it, then let it do its thing!

One thing I need to mention is that if one of the computers falls asleep, the backup will be paused until it wakes up. Even though I set my parents’ iMac to wake up for network traffic, CrashPlan doesn’t seem to be able to wake it up when I try to start the backup from my end. Keep that in mind and plan your backups accordingly.
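
If you run into the same problem, one workaround is to send the sleeping machine a Wake-on-LAN “magic packet” yourself before kicking off the backup. Here’s a minimal sketch in Python; it assumes the target machine is on Ethernet with Wake on LAN enabled, and the MAC address below is a placeholder you’d replace with your own:

    import socket

    def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        """Send a Wake-on-LAN magic packet: 6 bytes of 0xFF, then the MAC 16 times."""
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        if len(mac_bytes) != 6:
            raise ValueError("MAC address must be 6 bytes")
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast, port))

    wake("00:11:22:33:44:55")   # placeholder MAC: use the sleeping machine's address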