My “Ultimate Storage Box” has been in service 18 months now, so I thought I'd do a status report, particularly as ZFS-FUSE still kind of has a “bad rap” as being a kludge. Which, you know, it kind of is… But it's the only option Linux users have for a modern file system (ext4 is, IMHO, basically a polishing of the Berkeley FFS from 1983). I wish we could have ZFS in the kernel, but the powers that be have decided that will never happen. And btrfs is still years away (though in the last month it finally gained the ability to remove snapshots, yay!).

Read on for my experiences over the last 18 months with ZFS-FUSE.

The short answer is that it has not been trouble-free, but the benefits have outweighed the issues for my specific use: A large storage server.

First of all, let me say that I am running a very old version of ZFS-FUSE. When I deployed the system I picked up the code at the head of the ZFS-FUSE source repository, as it had many important fixes over the then-current 0.4.0 release (IIRC). I have not kept up with new releases because I just didn't want to risk having to start over from scratch.

I haven't had any data loss. The file system has worked well; the only issue has been that roughly once a month the zfs-fuse daemon has died and needed to be restarted. It has always picked up right where it left off after a restart.
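
When the daemon has died, recovery has been simple. Something like the following, assuming the init script your zfs-fuse package installed is named “zfs-fuse” (names vary by distribution):

    # Restart the crashed daemon (init-script name varies by distribution)
    /etc/init.d/zfs-fuse restart

    # Confirm the pools came back healthy; -x prints only pools with problems
    zpool status -x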

In addition to the file storage, I had also been backing up a number of the systems at my house, plus a couple of servers at our hosting facility, to this box. For each one I would do an rsync and then take a snapshot. I had tools that would delete old snapshots so that I kept a year of monthly backups, 6 weeks of weeklies, and 2 weeks of dailies.
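
The retention scripts themselves were homegrown, but the pattern is easy to sketch. Here is roughly what a daily run would look like, with hypothetical pool and host names, showing only the daily-tier pruning (the weekly and monthly tiers work the same way):

    #!/bin/sh
    # Hypothetical sketch of the rsync-then-snapshot backup pattern;
    # "tank/backups" and the host name are placeholders.
    HOST=server1.example.com
    DEST=/tank/backups/$HOST

    # Mirror the remote system into its own ZFS filesystem
    rsync -a --delete "root@$HOST:/" "$DEST/"

    # Date-stamped snapshot names sort naturally for retention
    zfs snapshot "tank/backups/$HOST@daily-$(date +%Y%m%d)"

    # Keep the newest 14 dailies (2 weeks of daily copies), destroy the rest
    zfs list -t snapshot -o name -s creation -H |
        grep "^tank/backups/$HOST@daily-" |
        head -n -14 |
        xargs -r -n1 zfs destroy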

I had to disable the backups around 2 months ago, because all these snapshots started causing the zfs-fuse daemon to die every few days whenever I tried to manipulate them. The snapshots are still there; I'm just not actively writing to those filesystems or creating and deleting snapshots. Since disabling the backups, I don't believe I've had a single instance of zfs-fuse dying.

Note that I don't consider this a ZFS-FUSE issue: when I was running native ZFS (before this storage server), I also had problems with this same usage pattern, though in that case the whole system would reboot rather than just needing a stop and restart of ZFS.

One rather interesting thing has been that ZFS has occasionally detected and corrected errors on the discs. Every Sunday I have it set up to do a “zpool scrub”, which goes through and checks the data on the discs against the expected checksums. A couple of times over the 18 months I've also noticed “zpool status” showing checksum errors during normal operation. So in that respect I feel ZFS has been beneficial, preventing what would otherwise likely have been silently corrupted data.
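
If you want to set up the same weekly check, a cron entry along these lines does it (the pool name “tank” is a placeholder, and the zpool path depends on where your zfs-fuse install put it):

    # /etc/cron.d/zpool-scrub: verify all data against checksums, Sundays at 03:00
    0 3 * * 0  root  zpool scrub tank

Afterwards, the CKSUM column of “zpool status” shows any checksum errors that were found (and repaired from redundant copies).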

A word about performance: it's not that good, but for my use it is definitely acceptable. I really don't have serious performance demands for this box. The biggest issue I've noticed is that it doesn't seem to cache directory metadata. If I do a “du” of my photo collection and then run it again (or a find over those same files), it re-reads all that metadata and takes several minutes. So “locate” is definitely my friend here…
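
The effect is easy to see: run the same command twice, and the second pass is barely faster than the first (the path here is illustrative):

    time du -sh /tank/photos    # first run: minutes spent reading metadata
    time du -sh /tank/photos    # second run: about the same; nothing was cached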

ZFS on crypto has been working very well; I seem to have had absolutely no issues running ZFS on encrypted partitions. This was the primary reason I was running ZFS-FUSE on my storage server. It's been nice having the peace of mind that if the system gets stolen, my personal data (like scanned documents) is relatively secure.
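
I won't rehash the full build here, but the general shape, assuming LUKS/dm-crypt underneath ZFS (the device names, pool name, and raidz layout are placeholders, not necessarily my actual configuration):

    # One-time setup: encrypt each underlying partition with LUKS
    cryptsetup luksFormat /dev/sdb1
    cryptsetup luksOpen /dev/sdb1 crypt-sdb1
    # ...repeat for the other discs...

    # Build the pool on the decrypted device-mapper nodes
    zpool create tank raidz /dev/mapper/crypt-sdb1 \
        /dev/mapper/crypt-sdc1 /dev/mapper/crypt-sdd1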

The external eSATA port multiplier has been working well in this case. However, on another system I have concluded that eSATA cabling is just too wimpy to be relied upon: it's extremely easy to disconnect the cables by bumping them. I can't think of any cable I've used that is quite so easy to accidentally disconnect (the round power-brick connectors are a distant second place). And one of the 3 external eSATA bays I've got sporadically freaks out completely and needs to be power-cycled. Luckily, not the one on my main storage server…

Memory use could have been a problem, but I had planned for it: I expected zfs-fuse to use a lot of memory, so I put 8GB of RAM in the storage system. The zfs-fuse process is currently using 2.6GB of that.
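
If you're curious what yours is using, ps will tell you:

    # Resident set size of the zfs-fuse daemon, in kilobytes
    ps -C zfs-fuse -o rss=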

One issue I was worried about was whether ZFS-FUSE would continue to be maintained and upgraded after the Summer of Code project that funded its initial development ended. While there weren't many releases of the code, there was definitely active development going on, so this fear did not materialize. Of course, I didn't do any updates, so it didn't really impact me anyway.

In conclusion, for my use I would declare ZFS-FUSE a success, though with some bumps along the way. Considering how long it takes file systems to mature, I expect ZFS-FUSE will continue to be the only real option for Linux users who want a modern file system for at least the next 2 years.
