ZFS In Action
Well, we’ve now had a few months of full production ZFS usage, and we’ve had our first drive failure, which exposed the oddities of drive failures under ZFS. It works REALLY hard to cover them up, so much so that it never quite gave up on the dead drive until I ran zpool offline against it. That said, there was NO effect on users at all as far as I can tell. Despite the drive producing errors and generally not responding, the only commands that suffered were zpool commands that actually went to access the affected drive directly; overall ZFS performance and function didn’t degrade. Once I told ZFS to offline the offending drive, the zpool commands stopped hanging. They never hung indefinitely anyway, just for a timeout period.

Later we actually replaced the physical drive, part of which was figuring out how to get the new drive visible, since we’re using a RAID card but aren’t doing any RAID with it, and finally running zpool replace. (The whole sequence, roughly, is sketched after the rant below.)

Side rant – LSI‘s MegaCli is… sometimes hard to understand. Forget about the convention that [] means optional. When it’s talking about [E0:S0] it means literally that, except E0 and S0 are just numeric, so enclosure 252 slot 1 in, say, the -pdInfo command is expressed as MegaCli -pdInfo -PhysDrv[252:1] -a0
So in Bash and Bash-like shells you need to actually type MegaCli -pdInfo -PhysDrv\[252:1\] -a0 to make sure you don’t run afoul of the fact that [ starts a glob pattern in Bash. Why can’t they just follow the normal man page conventions for SYNOPSIS sections, and plain old good practice with regards to command line arguments?
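To see why the escaping matters: [252:1] is a perfectly valid glob character class, so if any file in the current directory happens to match the pattern, Bash silently rewrites the argument before MegaCli ever sees it. A contrived demonstration (the file name here is made up purely to trigger the match):

$ touch -- -PhysDrv2        # a file whose name the pattern happens to match
$ echo MegaCli -pdInfo -PhysDrv[252:1] -a0
MegaCli -pdInfo -PhysDrv2 -a0

If nothing matches, Bash passes the pattern through untouched, which is why the unescaped form usually works right up until the day it doesn’t.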
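And for the record, here’s roughly what the whole replacement dance looked like. Treat this as a sketch rather than a transcript: the pool name (tank), the device name (c1t2d0), and the enclosure/slot numbers are placeholders, and the exact MegaCli incantation for presenting a fresh disk varies by controller and firmware.

# 1. Stop ZFS from retrying the dead drive (this is what unstuck
#    the hanging zpool commands).
zpool offline tank c1t2d0

# 2. Swap the drive, then make it visible through the RAID card.
#    With no true JBOD mode, the usual trick on these LSI cards is
#    a single-drive RAID0 logical disk.
MegaCli -pdInfo -PhysDrv\[252:1\] -a0    # confirm the new drive is seen
MegaCli -CfgLdAdd -r0\[252:1\] -a0       # wrap it in a one-disk RAID0

# 3. Hand the device back to ZFS and let it resilver.
zpool replace tank c1t2d0

# 4. Watch the resilver progress and the error counters.
zpool status -v tank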
The quirkiest bit is that a simple zpool status would indicate everything was A-OK and healthy. You really do have to have “something” monitoring the READ/WRITE/CSUM columns, because ZFS only goes into a DEGRADED state if the device becomes completely absent, and even then, thanks to its fault tolerance, that might not stop the filesystem. Depending on the pool’s failure-mode settings, it’ll either try to keep working from cache, block, return I/O errors, or panic the system. This is actually a Good Thing. Say you’re moving a fibre channel loop or an iSCSI cable? Well, ZFS will handle that gracefully even if ALL the drives disappear. Folks, don’t try that with EXT2/3 or ReiserFS. Trust me, just Don’t Ask How I Know.
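Since zpool status happily reports ONLINE while the error counters climb, the “something” we want is basically a cron job watching those columns. A minimal sketch of the idea, assuming the counters show up as plain numbers in the status output:

#!/bin/sh
# Flag any pool member with nonzero READ/WRITE/CSUM counters, since
# the pool can stay ONLINE while a drive is quietly throwing errors.
# Device lines in "zpool status" look like: NAME STATE READ WRITE CKSUM
zpool status | awk '
  $2 ~ /^(ONLINE|DEGRADED|FAULTED)$/ && ($3 + $4 + $5) > 0 {
    printf "WARNING: %s (%s) errors R:%s W:%s C:%s\n", $1, $2, $3, $4, $5
  }'

On builds that expose it, zpool get failmode <pool> will also tell you which of those failure behaviors (block and wait, keep going and return I/O errors, or panic) the pool is set to.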