
Linux.com

Feature: System Administration

Use xfs_fsr to keep your XFS filesystem optimal

By Ben Martin on July 18, 2008 (9:00:00 AM)


The XFS filesystem is known to give good performance when storing and accessing large files. The design of XFS is extent-based, meaning that the bytes that comprise a file's contents are stored in one or more contiguous regions called extents. Depending on your usage patterns, some of the files contained in an XFS filesystem can become fragmented. You can use the xfs_fsr utility to defragment these files, thus improving system performance when it accesses them.

When you copy a file onto an XFS filesystem, you usually end up with a file whose entire contents fit in a single extent. If you later extend the file or overwrite its contents with new data, the disk area immediately after that extent might already be in use, so the file might be split into two extents at different locations on disk. Applications accessing files do not need to worry about this; they can read the contents from start to end and lseek(2) around in the file as though it were a linear range of bytes. There is, however, a performance penalty for storing a file's contents scattered over the disk in many extents.

You can use the xfs_bmap utility to see the extent map for a file that is stored on an XFS filesystem. If you execute it with the -v verbose mode you can see the mapping of file offsets to blocks in the filesystem. In the case of the file shown below, I was unlucky; the filesystem split the 300MB tarball file over two extents.

# xfs_bmap -v sarubackup-june2008.tar.bz2
sarubackup-june2008.tar.bz2:
 EXT: FILE-OFFSET        BLOCK-RANGE           AG AG-OFFSET            TOTAL
   0: [0..350175]:       264463064..264813239  10 (2319064..2669239)   350176
   1: [350176..615327]:  265280272..265545423  10 (3136272..3401423)   265152

If you want to see what the fragmentation is like for the whole filesystem, use the xfs_db utility. The -r option tells xfs_db to operate in read-only mode, which lets you use it on a mounted and in-use filesystem, and is probably a good idea anyway unless you really want to modify the filesystem. The utility's frag command causes disk activity for a number of seconds and then reports the fragmentation of the filesystem, as shown below.

# xfs_db -r /dev/mapper/raid2008-largepartition2008
xfs_db> frag
actual 117578, ideal 116929, fragmentation factor 0.55%
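The frag report is easy to post-process if you want a script to decide whether defragmentation is worthwhile. The sketch below parses a captured line of frag output with sed and awk; on a live system you would feed it the real xfs_db output instead, and the 5% threshold is purely illustrative.

```shell
# Parse a captured frag report line; on a real system, substitute the
# live output of:  xfs_db -r /dev/yourdevice -c frag
frag_line='actual 117578, ideal 116929, fragmentation factor 0.55%'
factor=$(printf '%s\n' "$frag_line" | sed 's/.*factor \([0-9.]*\)%.*/\1/')
echo "fragmentation: ${factor}%"

# Flag filesystems above an arbitrary 5% threshold (awk handles the
# floating-point comparison that plain test(1) cannot).
if awk -v f="$factor" 'BEGIN { exit !(f > 5) }'; then
    echo "consider running xfs_fsr"
else
    echo "fragmentation is fine"
fi
```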

The xfs_fsr(1) program is contained in the xfsdump package in Fedora 9 and in Debian-based distributions. This is a real shame: xfs_fsr is an extremely useful tool, and placing it in xfsdump makes it far less likely to be installed and used than it would be if it were shipped in the xfsprogs package alongside mkfs.xfs. xfs_fsr is a filesystem reorganizer, designed to be run regularly from a cron job to defragment XFS filesystems while they are mounted.

You can run xfs_fsr in two ways. Either pass it a duration, and it will loop through all your XFS filesystems, attempting to optimize the most fragmented files on each filesystem until that duration has passed; or explicitly defragment a specific XFS filesystem, or a file on an XFS filesystem. When you run xfs_fsr with a duration and it runs out of time, it stores information about where it stopped in a file in /var/tmp so that it can continue from the same point the next time it is executed with a duration. This way a cron job can perform a little bit of optimization every day while your machine is experiencing a period of low activity.
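For example (a sketch: the ten-minute duration is arbitrary, and the xfs_fsr invocation needs root, so it is shown commented out):

```shell
# Run the reorganizer for at most ten minutes across all mounted XFS
# filesystems; the state file in /var/tmp (by default
# /var/tmp/.fsrlast_xfs -- see xfs_fsr(8)) lets the next run resume
# where this one stopped.
#
#   /usr/sbin/xfs_fsr -t 600 -v
#
# Deriving the -t argument from hours keeps cron entries readable:
hours=6
duration=$((hours * 3600))
echo "xfs_fsr -t $duration"
```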

To optimize a file, xfs_fsr creates a new copy of the existing fragmented file with fewer extents (fragments) than the original had. Once the file contents are copied to the new file, the filesystem metadata is updated so that the new file replaces the old one. This implies that you need enough free space on the filesystem to store another complete copy of anything you want to defragment. The free-space issue extends to disk quotas as well: you cannot defragment a file if storing a second complete copy of it would exceed the disk quota of the user who owns it.
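A quick pre-flight check along these lines can save a wasted run. The sketch below creates a small stand-in file and compares its size against the free space on its filesystem; GNU stat and df are assumed, and the filename is illustrative.

```shell
# Create a 10KB stand-in file; in practice this would be the fragmented
# file you intend to pass to xfs_fsr.
file=./sample.dat
dd if=/dev/zero of="$file" bs=1024 count=10 2>/dev/null

need=$(stat -c %s "$file")                     # bytes a second copy needs
free=$(df --output=avail -B1 . | tail -n 1)    # bytes free on this fs

if [ "$free" -gt "$need" ]; then
    echo "enough free space to defragment $file"
else
    echo "not enough free space for a second copy of $file"
fi
rm -f "$file"
```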

Because xfs_fsr will by default defragment all your XFS filesystems when you give it a duration, there are a few subtle issues that might pop up extremely rarely. If you are using a boot loader like LILO that relies on its configuration file being at a fixed location on disk, xfs_fsr might break it by moving the file to defragment it. For such cases you can flag specific files or directories with a special no-defrag flag using the command xfs_io so that xfs_fsr will never attempt to defragment those files. If you mark a directory as no-defrag, files and directories created in that directory will inherit the no-defrag flag. See the xfs_fsr manual page for information about the no-defrag flag and how to set it.
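The no-defrag flag is set through xfs_io's chattr command; on the xfsprogs versions I have seen, the flag letter is f, but check xfs_io(8) on your system. The real invocations need root and files on an XFS filesystem, so the sketch below only builds and prints the command, with a hypothetical LILO config path as the target.

```shell
# Illustrative only: requires root and files on an XFS filesystem,
# so the command is built and printed rather than executed.
target=/etc/lilo.conf                  # hypothetical boot-loader config
cmd="xfs_io -c \"chattr +f\" $target"  # 'f' = no-defrag flag
echo "$cmd"

# Marking a directory instead makes newly created files inherit the flag:
#   xfs_io -c "chattr +f" /var/spool/news
# Verify with:
#   xfs_io -c "lsattr" /etc/lilo.conf
```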

Because the sarubackup-june2008.tar.bz2 file shown in the xfs_bmap output above contains two extents, we can use it to demonstrate invoking xfs_fsr explicitly on a file on an active XFS filesystem. Note that after running xfs_fsr below, only a single extent is used to store the file.

# xfs_bmap -v sarubackup-june2008.tar.bz2
sarubackup-june2008.tar.bz2:
 EXT: FILE-OFFSET        BLOCK-RANGE           AG AG-OFFSET            TOTAL
   0: [0..350175]:       264463064..264813239  10 (2319064..2669239)   350176
   1: [350176..615327]:  265280272..265545423  10 (3136272..3401423)   265152
# md5sum sarubackup-june2008.tar.bz2
123b9db92b31bea5f60835920dee88d5  sarubackup-june2008.tar.bz2
# xfs_fsr sarubackup-june2008.tar.bz2
... 300MB file; takes a few seconds of grunting on the RAID ...
# xfs_bmap -v sarubackup-june2008.tar.bz2
sarubackup-june2008.tar.bz2:
 EXT: FILE-OFFSET        BLOCK-RANGE           AG AG-OFFSET            TOTAL
   0: [0..615327]:       267173832..267789159  10 (5029832..5645159)   615328
# md5sum sarubackup-june2008.tar.bz2
123b9db92b31bea5f60835920dee88d5  sarubackup-june2008.tar.bz2

To run xfs_fsr regularly from cron, you can simply invoke it without any arguments, perhaps redirecting its output so that you do not get regular email from it. The only parameter you are likely to want is -t, which specifies how long (in seconds) xfs_fsr should run. The default is 7200 (two hours); for a desktop machine you might raise it to six hours and schedule the run during your regular sleep time, as shown below:

# cd /root
# mkdir -p mycron
# cd mycron
# vi xfs-fsr.cron
30 0 * * * /root/mycron/xfs-fsr.sh
# vi xfs-fsr.sh
/usr/sbin/xfs_fsr -t 21600 >/dev/null 2>&1
# cat *.cron >|newtab
# crontab newtab
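A more direct route, without the intermediate script, is to append the job to root's existing crontab. This is a sketch: the install step replaces the live crontab, so it is left commented out.

```shell
# Build the crontab line: 00:30 every night, a six-hour run, output
# discarded so cron does not mail it to you.
job='30 0 * * * /usr/sbin/xfs_fsr -t 21600 >/dev/null 2>&1'
echo "$job"

# To install it (commented: this rewrites root's live crontab):
#   ( crontab -l 2>/dev/null; echo "$job" ) | crontab -
```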

It is a shame that Linux distributions tuck this utility away with the filesystem dump and restore tools rather than installing it as prominently as mkfs.xfs, perhaps in the same xfsprogs package. If you have been running an XFS filesystem for a few years and did not know about the xfs_fsr utility, you could get improved filesystem performance by running it over your system a few times.

Ben Martin has been working on filesystems for more than 10 years. He completed his Ph.D. and now offers consulting services focused on libferris, filesystems, and search solutions.


Comments on Use xfs_fsr to keep your XFS filesystem optimal


Any chance of some ext3 articles

Posted by: Anonymous [ip: 10.241.128.10] on July 18, 2008 10:37 AM
Ben,

Thanks for your series of articles. I'm in no position to judge, but you seem to know what you are talking about.

Given that many of us use ext3, any chance you could post some articles on how to get the most from this filesystem? Any tips on setting up partitions, caveats or gotchas?

TIA

NMP

#

Re: Any chance of some ext3 articles

Posted by: Anonymous [ip: 92.245.42.1] on July 18, 2008 12:15 PM
Hey-hey, many of us use xfs! :)
Great article. Thanks.

#

Use xfs_fsr to keep your XFS filesystem optimal

Posted by: Anonymous [ip: 161.231.132.17] on July 18, 2008 12:56 PM
I am extremely happy with XFS for my /home partition, where I do have huge files (DVD Isos for my home videos). I did have lots of trouble using it as the root directory (boot loader issues, etc), so now I am sticking to ext3 for "/". And yes, I do run this utility (fsr) frequently.

What I wonder is what kind of performance gain you can get. I just run it 'cause there is no harm. Has anyone heard of benchmarks of disk performance before/after xfs_fsr? (I know this would be entirely case-specific, but it'd be nice to see some examples)

#

Use xfs_fsr to keep your XFS filesystem optimal

Posted by: Anonymous [ip: 80.6.186.72] on July 18, 2008 01:02 PM
I did wonder why it was so slow...

mythtv ~ # xfs_db -r /dev/hda4
xfs_db> frag
actual 609732, ideal 6144, fragmentation factor 98.99%
xfs_db>

Whoops :p

#

Re: Use xfs_fsr to keep your XFS filesystem optimal

Posted by: Anonymous [ip: 24.180.36.90] on July 21, 2008 07:04 PM
Mine is bad too. MythTv has nearly filled it too many times :-)

xfs_db> frag
actual 1717744, ideal 69990, fragmentation factor 95.93%

#

Use xfs_fsr to keep your XFS filesystem optimal

Posted by: Anonymous [ip: 77.83.206.223] on July 18, 2008 06:52 PM
/boot on ext3
All others on xfs. RAM happy fs and love it.

#

Re: Use xfs_fsr to keep your XFS filesystem optimal

Posted by: Anonymous [ip: 206.73.209.94] on July 21, 2008 01:15 AM
my desktop

/ on xfs. I tend to try all kinds of new applications, so I need constant defragging to keep system performance up.

/boot on ext2; there is no point running a journaling filesystem on /boot, since the files under /boot rarely change.
BTW, I had a lot of problems with the latest version of GRUB when /boot was on XFS. It works sometimes, but breaks easily for no obvious reason. Never put /boot on XFS with GRUB, even though GRUB claims the problem has been fixed, unless you want to experiment. Some distros only allow LILO when they detect that /boot is on XFS.

Everything else on JFS. It is good at handling big files, best for MythTV, with low CPU usage, though not particularly fast overall. Most importantly, from reading a lot of posts, it seems to be the most power-failure resistant, which is critical for my data.

#

Re(1): Use xfs_fsr to keep your XFS filesystem optimal

Posted by: Anonymous [ip: 206.73.209.94] on July 21, 2008 01:22 AM
In addition to my post

fsck on a large XFS filesystem requires a lot of memory. I read a post where an admin was unable to repair a large XFS filesystem because he ran out of memory.
In contrast, fsck on JFS requires a fixed amount of memory; its memory usage is not correlated with filesystem size. This is another reason I opted for JFS on my large data partition.

#

Use xfs_fsr to keep your XFS filesystem optimal

Posted by: Anonymous [ip: 64.115.215.203] on July 18, 2008 09:35 PM
Thanks for this article. I store MP3s on a ReiserFS filesystem now, and with Hans Reiser's conviction and the project dying, it looks like I will have to move to XFS or ext4.

#

Use xfs_fsr to keep your XFS filesystem optimal

Posted by: Anonymous [ip: 190.139.107.26] on July 20, 2008 12:27 AM
I'm bookmarking this, I use a XFS partition to store movies :-).

Thank you!

#

Use xfs_fsr to keep your XFS filesystem optimal

Posted by: Anonymous [ip: 61.68.237.226] on July 20, 2008 03:05 PM
Not wanting to troll or anything, but.....
I formatted a 500GB Maxtor external drive with XFS, used for MythTV and backup duties.
It was fast for MythTV, that's for sure, but after a power failure during the night I noticed that some backup directories had vanished (not good). The volume continued to mount correctly and record Judge Judy... (OK, it was a test :-) ). I ran the fsck... or something, and was told the superblock was missing. I tried to repair it (can't remember the command) but it was in vain; I lost my backups. I am now testing JFS on multiple machines, from 733MHz/20GB boxes to dual-core 640GB machines... and I like it a lot.

#

Re: Use xfs_fsr to keep your XFS filesystem optimal

Posted by: Anonymous [ip: 24.62.146.88] on July 21, 2008 11:09 PM
Don't blame power failures and file systems for your own inability to create and restore backups properly.

#

Use xfs_fsr to keep your XFS filesystem optimal

Posted by: Anonymous [ip: 61.68.237.226] on July 22, 2008 11:05 AM
It WAS holding the backups - and they vanished - I've no problem creating or restoring them (as long as XFS hasn't fu**d them; it may be fast, but the xfs_repair took hours... and failed)... just my 2 cents

#

Use xfs_fsr to keep your XFS filesystem optimal

Posted by: Anonymous [ip: 192.88.168.35] on July 22, 2008 03:04 PM
Wow! Thanks. That'll teach me to use a non-default tech without researching it.
Over the years, my MythTV box had gotten inexplicably slow with lots of disk access for many operations like starting playback of a recorded video. With the advice here, I measured 98.6% fragmentation. A few sample files I checked that were 1-6GB captured MPEG-2 videos were stored on over 30,000 extents!
After 2 nights of de-fragmenting, I'm down to 17% fragmented and startup time of playing a video is noticeably faster with less hard drive activity.

#

Use xfs_fsr to keep your XFS filesystem optimal

Posted by: Anonymous [ip: 61.68.237.226] on July 22, 2008 03:43 PM
With all this talk of defrag we should mention that the best way ( if u have the space and can handle the downtime ) is to rsync the files to another drive, reformat, then rsync them back again.... sorted

#


