ZFS
ZFS is a radical departure from the standard file system handling that has been provided with Solaris in the past; ZFS provides not only a new filesystem for the OS, but a volume manager and software RAID facilities too.
Gone are fixed partitions; instead, you can create millions (well, 2^64, to be more precise) of filesystems within a pool, and resize them on demand. Linux users will be somewhat familiar with some of these concepts via LVM, but ZFS leaves LVM for dead. Having spent an entire weekend putting new disks in my LVM-on-RAID1 Linux systems and wasting much time trying to copy across the OS and data without resorting to a complete reinstall, I can see how much easier ZFS would have made the process.
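To make that concrete, here is a minimal sketch of what pool and filesystem creation looks like; the pool name, device names, and sizes are all hypothetical, and the commands need root on a ZFS-capable system:

```shell
# Build a pool named "tank" from two disks (hypothetical devices).
zpool create tank c1d0 c2d0

# Filesystems are carved out of the pool on demand - no partitioning.
zfs create tank/home
zfs create tank/home/projects

# "Resizing" is just changing a property; it takes effect immediately.
zfs set quota=50g tank/home/projects
zfs set quota=80g tank/home/projects
```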
Mirroring the system disk is simple; no more messing around with DiskSuite and having to reboot, it's just a matter of running
zpool attach rpool c3d0s0 c3d0s1
and bang, your mirror is running.
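The attach kicks off a background resilver; a quick sketch of keeping an eye on it (same device names as above):

```shell
# Show mirror health and resilver progress.
zpool status rpool

# If you ever need to break the mirror again:
zpool detach rpool c3d0s1
```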
That said, it's not all goodness and light with ZFS. One major issue I have with it is that it has no support for user or group quotas - by design. The idea is that you're meant to create a completely separate ZFS filesystem for every user (which, of course, ZFS can easily handle) and then impose a limit on the size of each filesystem. This means that if you have 10000 users on your system, you end up with 10000 filesystems, which makes the df command next to useless for getting a quick overview of the state of your storage. I guess that's just a matter of aesthetics and something I'll have to put up with.
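For the record, the filesystem-per-user scheme looks something like this (the username and quota size are made up):

```shell
# One filesystem per user; the filesystem's quota is the user's quota.
zfs create rpool/export/home/alice
zfs set quota=2g rpool/export/home/alice
```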
More important, however, are the speed issues that arise from this approach. As a test, I tried creating 10000 users and a filesystem for each. Now, I'll freely admit that I don't have the fastest computer around (a 2005-era Athlon 3200 with 1GB of memory), but even so, I should reasonably expect to serve files to that many users from this system with adequate performance. Yet after about five minutes, it had only created around 300 filesystems. I hope this is just a bug rather than typical ZFS performance, because if I have to do without user quotas, it will need to do better than that.
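The test described above boils down to a loop like the following (pool and filesystem names are hypothetical); each zfs create is a synchronous administrative operation, which is where the per-filesystem overhead comes from:

```shell
# Create one filesystem per user and time the whole run.
time sh -c '
    i=1
    while [ $i -le 10000 ]; do
        zfs create tank/home/user$i
        i=`expr $i + 1`
    done
'
```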
I decided to run Russell Coker's Bonnie++ benchmarking tool on this system against a range of Linux (ext2, ext3, xfs, reiserfs) and Solaris (ufs, zfs) filesystems:
Version 1.03c ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
linux-ext2 2G 39096 91 49561 8 21307 4 39380 82 47634 5 169.7 0
linux-ext3 2G 36127 87 41850 12 21125 5 38900 82 48127 5 163.6 0
linux-reiserfs 2G 35495 89 46021 16 21645 6 38810 83 48127 7 163.5 0
linux-xfs 2G 41750 92 50572 11 22701 5 42736 86 49102 5 168.7 0
solaris-ufs 2G 42781 50 41972 9 7216 2 47894 60 49450 9 141.9 0
solaris-zfs 2G 25941 26 27586 6 17993 4 47917 56 50466 3 208.3 0
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
linux-ext2 16 4804 96 +++++ +++ +++++ +++ 4894 93 +++++ +++ 17388 99
linux-ext3 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
linux-reiserfs 16 +++++ +++ +++++ +++ 24398 83 +++++ +++ +++++ +++ 27329 100
linux-xfs 16 344 1 +++++ +++ 311 1 365 1 +++++ +++ 246 1
solaris-ufs 16 8169 32 +++++ +++ 15370 55 14912 52 +++++ +++ 1694 9
solaris-zfs 16 24000 93 +++++ +++ +++++ +++ 18323 99 +++++ +++ +++++ +++
The +++++ symbols indicate where an operation completed in less than 500ms, so bonnie++ can't report an accurate figure. As can be seen, ZFS takes a bit of a performance hit compared to UFS when writing files, although the two are fairly even for reading.
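For anyone who wants to reproduce these numbers, a typical Bonnie++ invocation looks roughly like this (the directory is a placeholder; -s 2048 matches the 2G file size and -n 16 the 16*1024 small files shown in the tables above):

```shell
# Run the I/O tests as an unprivileged user against the target filesystem.
bonnie++ -d /tank/bench -s 2048 -n 16 -u nobody
```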