On Friday, I wrote about my plan to gradually replace the external spinning hard disks I use for backups with external, Flash-based Solid State Disks (SSDs): "My Classic Hard Disks Are Spinning Right into the Graveyard." This is over and above the internal SSDs that hold OS X on my Macs; I haven't purchased a Mac with an internal hard disk as the boot drive since 2009.
This is still a very good idea (for me), but in the process of writing the article, I glossed over some technical considerations that come into play. You should know about them. Here they are.
1. Supplier affirmation. I asked Other World Computing, the maker of the Envoy Pro EX that I already use, about its suitability for use with Apple's Time Machine app. That's a particularly stressful scenario because Time Machine backs up hourly. Here's what the OWC tech people wrote me. (The "as well" text below refers to also using these drives as boot drives for OS X.)
The Envoy Pro EX will work fine as a Time Machine back up as well. The write cycle limits on drives nowadays are so high it doesn’t make a difference. Time Machine is not going to wear the drive out to a point where the end user should be concerned about the drive life being affected significantly.
2. Published Endurance Tests. Back in 2013, the Tech Report published a detailed report entitled: "Introducing the SSD Endurance Experiment: Just how long do they last, anyway?"
The article explains, in detail, how Flash memory works and why it fails over time. And all SSDs eventually do. (But so do hard disks.) The limiting factor is known as the write cycle limit: for technical reasons, explained in the article, SSDs literally wear out. Some relevant quotes are as follows.
Clearly, many factors affect SSD endurance. Perhaps that's why drive makers are so conservative with their lifespan estimates. Intel's 335 Series 240 GB is rated for 20 GB of writes per day for three years, which works out to just under 22 TB of total writes. If we assume modest write amplification and a 3,000-cycle write/erase tolerance for the NAND, this class of drive should handle hundreds of terabytes of flash writes.
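The endurance arithmetic quoted above can be sketched in a few lines. The daily-write figure, warranty period, and 3,000-cycle tolerance come from the article; the write amplification factor is a hypothetical "modest" value chosen for illustration.

```python
# Back-of-envelope SSD endurance arithmetic, using the figures quoted above.
GB_PER_DAY = 20          # Intel's rated daily writes for the 335 Series 240 GB
YEARS = 3                # the drive's rated lifespan
rated_tb = GB_PER_DAY * 365 * YEARS / 1000
print(f"Rated endurance: about {rated_tb:.1f} TB of writes")  # just under 22 TB

# Theoretical NAND ceiling: capacity x P/E cycles / write amplification.
CAPACITY_GB = 240
PE_CYCLES = 3000         # write/erase tolerance cited in the article
WRITE_AMP = 1.5          # hypothetical "modest" write amplification (assumption)
nand_limit_tb = CAPACITY_GB * PE_CYCLES / WRITE_AMP / 1000
print(f"Theoretical NAND limit: about {nand_limit_tb:.0f} TB")  # hundreds of TB
```

The gap between the conservative rating (~22 TB) and the theoretical ceiling (hundreds of TB) is exactly what the Tech Report's testing went on to explore.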
Manufacturers use several different technologies to offset the inherent weaknesses of SSDs. The article continues....
To offset this block-rewrite penalty, the TRIM command and garbage collection routines combine to move data around in the flash, ensuring a fresh supply of empty pages for incoming writes. Meanwhile, wear-leveling routines distribute writes and relocate static data to spread destructive cycling more evenly across the flash cells.... SSD makers tune their algorithms to minimize write amplification and to make the most efficient use of the flash's limited endurance. They also lean on increasingly advanced signal processing and error correction to read the flash more reliably. Some SSD vendors devote more of the flash to overprovisioned spare area that's inaccessible to the OS but can be used to replace blocks that have become unreliable and must be retired. SandForce goes even further, employing on-the-fly compression to minimize the flash footprint of host writes.
(Note that this test was done with now obsolete SSDs acquired in the summer of 2013, almost three years ago.)
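The wear-leveling idea in that passage can be illustrated with a toy model. Everything here — the block count, the write count, the least-worn-block policy — is a simplified assumption for illustration, not how any real controller works.

```python
# Toy model of wear leveling. Without it, writes to a "hot" logical block
# hammer the same physical cells; with it, the controller remaps each write
# to the least-worn physical block, spreading erase cycles evenly.
PHYSICAL_BLOCKS = 8
WRITES = 10_000

def hot_spot_wear():
    wear = [0] * PHYSICAL_BLOCKS
    for _ in range(WRITES):
        wear[0] += 1  # naive mapping: the hot logical block always hits block 0
    return wear

def wear_leveled():
    wear = [0] * PHYSICAL_BLOCKS
    for _ in range(WRITES):
        coldest = wear.index(min(wear))  # remap to the least-worn block
        wear[coldest] += 1
    return wear

print("No leveling:", hot_spot_wear())  # one block absorbs all 10,000 cycles
print("Leveled    :", wear_leveled())   # cycles spread to 1,250 per block
```

In the naive case one block burns through its entire cycle budget while the others sit idle; with leveling, every block ages at the same rate, which is why the drive as a whole lasts so much longer.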
A lifetime of hundreds of terabytes written is exactly what the testing bore out. However, after about a petabyte of data was written, just about every drive had failed. And so the trade-off is the following:
"Given the current cost of external SSDs and their lifetimes, is it cost-effective and technically realistic in 2016 to phase out one's hard disks?" That's a question that each user has to answer. As for me, I'm going to bet that by the time my 2016 SSDs fail in 2019, the replacement cost will be very palatable.
3. Forensics. SSDs store their data electrically, in a fashion that isn't amenable to customary data recovery tools. Once an SSD fails, it's literally a black box. On the other hand, there are some instances in which the failure of a hard disk still allows data recovery services to extract a subset of the data on the drive. It's expensive, but it can be done.
Accordingly, there may be some enterprise applications in which high-quality, less expensive hard disks are the better choice.
4. Redundancy, Cost and Optimization. Every article I've ever read about Apple's Time Machine suggests that it not be the only backup you have of your boot drive. That includes my own advice: "Why Apple’s Time Machine Utterly Fails User Needs." In my own case, I also back up my Mac three times a week to a 240 GB SSD with Prosoft's Data Backup 3. These backups include only the files that have changed or been added.
And so I have a trade-off. One (planned) SSD will receive Time Machine updates every hour, so it will wear out sooner than the SSD that receives Data Backup 3 updates only three times a week. Finally, every few weeks I do a Carbon Copy Cloner backup to a hard disk, because I have one handy, just to be sure.
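The trade-off between those tiers comes down to how often each destination gets written. A quick, hedged tally of backup runs per year (the Carbon Copy Cloner interval is my assumption of roughly every three weeks; per-run data volumes vary too much to estimate here):

```python
# Rough comparison of write activity across the three backup tiers.
# This counts only backup runs per year, not data volume per run.
HOURS_PER_YEAR = 24 * 365

tm_runs = HOURS_PER_YEAR   # Time Machine: one incremental run per hour
db3_runs = 3 * 52          # Data Backup 3: three runs per week
ccc_runs = 52 // 3         # Carbon Copy Cloner: assumed every ~3 weeks

print(f"Time Machine      : {tm_runs} runs/year")   # 8760
print(f"Data Backup 3     : {db3_runs} runs/year")  # 156
print(f"Carbon Copy Cloner: ~{ccc_runs} runs/year")
print(f"The Time Machine drive sees ~{tm_runs // db3_runs}x more runs")
```

Even ignoring data volume, the Time Machine destination sees more than fifty times as many backup runs as the Data Backup 3 destination, which is why it's the drive expected to wear out first.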
And so, as you can see, a backup strategy has to be multi-tiered. One could argue for reversing the logic I've casually started with: use SSDs for the backup destinations that are least often updated, and use cheap HDDs, updated every hour, for Time Machine. That's a safe, low-cost way to go. (But not for me; I like to push the limits.)
The bottom line, however, is that if it suits your backup plan, a major supplier confirms that modern SSDs will work just fine as external backup destinations, even with hourly updates from Time Machine. Just maintain a multi-tiered backup strategy, and be prepared to replace the SSDs when they eventually fail.