I don't know about you, but every time I turn around I see a new way of doing storage that I have to sit down and think about. New protocols, new products, and new disks make our 'standard' processes subject to revision at any time. One thing I've learned over the last few years about disk-based backups is that there are plenty of things that can go wrong. So, I thought it worthwhile to share a few points I've picked up along the way so you can provision your storage resources to avoid some of the pitfalls I've seen (and experienced myself!).
NFS vs. CIFS
The first storage question I usually address is, "Should I use NFS or CIFS?" CIFS (Common Internet File System) is an older dialect of SMB (Server Message Block), and modern Windows versions have effectively replaced CIFS with newer SMB versions; the CIFS name lives on mostly out of habit. NFS (Network File System), by contrast, has deep UNIX and Linux roots. Many of us naturally prefer CIFS/SMB because it is easy. I spend most of my time in Windows, and using NFS from Windows is really not pleasant (though it is much better with Windows Server 2012).
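If you do want to try NFS from a Windows Server 2012 system, the built-in NFS client makes it reasonably painless. Here's a minimal sketch; the server name (filer01) and export path are hypothetical, and the anonymous-access option assumes the export on the other end allows it:

    # Install the NFS client feature (PowerShell, run as administrator)
    Install-WindowsFeature NFS-Client

    # Mount a hypothetical NFS export as drive Z: with anonymous access
    mount -o anon \\filer01\exports\backups Z:

    # Run mount with no arguments to confirm the mount and its options
    mount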
Because of this, many people choose to put their backups on a CIFS or SMB resource. That's all fine until one critical situation arises: the backup that needs to be restored is part of the Active Directory domain. Accessing that storage over CIFS or SMB may require Active Directory to be up and running, and the moment you can't log into the storage resource holding your backup, your restore scenario is hosed. In that situation, where the storage resource's CIFS or SMB implementation requires Active Directory to access the backup data, NFS may be a better choice. Because NFS typically uses UNIX-style authentication (trusting hosts and user IDs rather than domain credentials), Active Directory isn't required. This is especially helpful should the restore be of Active Directory itself. Trust me: I've learned that one the hard way.
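As a rough illustration, an NFS export on a Linux-based storage target can be restricted by source network rather than by domain credentials. This is a minimal sketch; the export path and the 10.0.10.0/24 backup network are hypothetical:

    # /etc/exports on the storage server: allow the backup network
    # read/write access, trusted by source address rather than by any
    # directory-service login
    /exports/backups  10.0.10.0/24(rw,sync,no_subtree_check)

    # Apply the export table without restarting the NFS service
    exportfs -ra

    # On the recovery host, mount the export -- no AD login involved
    mount -t nfs filer01:/exports/backups /mnt/restore

The tradeoff is that address-based trust is weaker than credentialed access, so keep that backup network isolated from general traffic.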
SAN gotchas
Another tip related to disk-based backups I'd like to share is to consider domains of failure on block-based storage resources such as a SAN. I've seen a number of situations where one SAN controller provides multiple disk tiers or arrays and the backup data is simply put on a different disk channel. This could be as simple as the SAS drives (higher speed, higher price, lower capacity) holding the running data profile (VMs, servers, LUNs, etc.) while the SATA drives (lower speed, lower cost, higher capacity) hold the backups of that same data. That's great until the SAN controller fails, making both drive arrays inaccessible at once. I know most storage systems are built as dual-controller systems, but if it can go wrong, it just may. The same goes for the storage network in place: if the storage network itself (Ethernet, iSCSI, Fibre Channel, etc.) fails, does that remove access to the storage you need for the recovery scenario?
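One quick sanity check is to confirm that the paths to your production LUNs and backup LUNs don't all funnel through the same controller or fabric. On a Linux host using dm-multipath, a sketch like the one below will show the paths behind each LUN; device names and output will of course vary by environment:

    # List each multipath device with its active and standby paths;
    # every LUN should show paths through more than one controller port
    multipath -ll

    # For iSCSI, list the sessions to verify they land on separate
    # target portals (ideally on separate switches or subnets)
    iscsiadm -m session -P 1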
There are plenty of things that can go wrong, but what do you do with your storage for disk-based backups to ensure it stays available? Any tips you can share? Leave a comment below!