
Linux.com

Feature: Reviews

30 days with JFS

By Keith Winston on September 14, 2007 (9:00:00 AM)


The Journaled File System (JFS) is a little-known filesystem open sourced by IBM in 1999 and available in the Linux kernel sources since 2002. It originated inside IBM as the standard filesystem on the AIX line of Unix servers, and was later ported to OS/2. Despite its pedigree, JFS has not received the publicity or widespread usage of Linux filesystems like ext2/3 and ReiserFS. To learn more about JFS, I installed it as my root filesystem. I found it to be a worthy alternative to the bigger names.

To give JFS a try, I installed Slackware 12 on a laptop, choosing JFS as the filesystem during installation. I performed no special partitioning and created one JFS filesystem to hold everything. Installation was uneventful and the system booted normally from GRUB. Not all distributions offer JFS as an install option, and some may not have JFS compiled into their default kernels. While Fedora and SUSE users can use JFS, they both default to ext3. Slackware, Debian, Ubuntu, and their derivatives are good choices for anyone who wants to try JFS.

One of the first things I noticed about my new system was the absence of a lost+found directory, which is a relic of lesser filesystems.

JFS is a fully 64-bit filesystem. With a default block size of 4KB, it supports a maximum filesystem size of 4 petabytes (less if you use smaller block sizes). The minimum filesystem size supported is 16MB. The JFS transaction log has a default size of 0.4% of the aggregate size, rounded up to a megabyte boundary. The maximum size of the log is 32MB. One interesting aspect of the layout on disk is the fsck working space, a small area allocated within the filesystem for keeping track of block allocation if there is not enough RAM to track a large filesystem at boot time.
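The log-sizing rule works out to small numbers in practice. As a rough illustration of the defaults just described (0.4% of the aggregate, rounded up to a megabyte boundary, capped at 32MB) -- this is back-of-the-envelope arithmetic, not JFS source code, and the helper function name is invented for the sketch:

```shell
# Sketch of the default JFS transaction log sizing described above.
log_size_mb() {
    agg_mb=$1                                # aggregate (filesystem) size in MB
    log_kb=$(( agg_mb * 1024 * 4 / 1000 ))   # 0.4% of the aggregate, in KB
    log_mb=$(( (log_kb + 1023) / 1024 ))     # round up to a megabyte boundary
    if [ "$log_mb" -gt 32 ]; then log_mb=32; fi   # 32MB maximum
    if [ "$log_mb" -lt 1 ]; then log_mb=1; fi
    echo "$log_mb"
}

log_size_mb 1024     # 1GB aggregate  -> 5 (0.4% is ~4.2MB, rounded up)
log_size_mb 50000    # ~49GB aggregate -> 32 (hits the cap)
```

Even a large filesystem never pays more than 32MB for its in-line log, which is why the external journal option matters mainly for performance, not space.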

JFS dynamically allocates space for disk inodes, freeing the space when it is no longer required. This eliminates the possibility of running out of inodes due to a large number of small files. As far as I can tell, JFS is the only filesystem in the kernel with this feature. For performance and efficiency, the contents of small directories are stored within the directory's inode. Up to eight entries are stored in-line within the inode, excluding the self (.) and parent (..) entries. Larger directories use a B+ tree keyed on name for faster retrieval. Internally, JFS uses extents to allocate blocks to files, leading to efficient use of space even as files grow in size. This is also available in XFS, and is a major new feature in ext4.

JFS supports both sparse and dense files. Sparse files allow data to be written to arbitrary offsets within a file without writing the intervening blocks. JFS reports the file size as the highest byte offset written, while allocating only the blocks that actually contain data. Sparse files are useful for applications that require a large logical space but use only a portion of it. With dense files, blocks are allocated for the entire file size, whether data is written to them or not.
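The sparse/dense distinction is easy to demonstrate by hand. The sketch below is not JFS-specific -- any Linux filesystem that supports holes behaves this way -- and the temporary file name is arbitrary:

```shell
# Write a single byte at a 10MB offset; everything before it becomes a hole.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1 count=1 seek=10485760 2>/dev/null

# The logical size reflects the highest offset written (10MB + 1 byte)...
stat -c 'size: %s bytes' "$f"              # size: 10485761 bytes

# ...while far fewer blocks are allocated on a sparse-capable filesystem.
stat -c 'allocated: %b blocks of %B bytes' "$f"

rm -f "$f"
```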

In addition to the standard permissions, JFS supports basic extended attributes, such as the immutable (i) and append-only (a) attributes. I was able to successfully set and test them with the lsattr and chattr programs. I could not find definitive information on JFS access control list support under Linux.

Logging

The main design goal of JFS was to provide fast crash recovery for large filesystems, avoiding the long filesystem check (fsck) times of older Unix filesystems. That was also the primary goal of filesystems like ext3 and ReiserFS. Unlike ext3, journaling was not an add-on to JFS, but baked into the design from the start. For high-performance applications, the JFS transaction log file can be created on an external volume if one is specified when the filesystem is first created.

JFS only logs operations on meta-data, maintaining the consistency of the filesystem structure, but not necessarily the data. A crash might result in stale data, but the files should remain consistent and usable.

Here is a list of the filesystem operations logged by JFS:

  • File creation (create)
  • Linking (link)
  • Making directory (mkdir)
  • Making node (mknod)
  • Removing file (unlink)
  • Rename (rename)
  • Removing directory (rmdir)
  • Symbolic link (symlink)
  • Truncating regular file

Utilities

JFS provides a suite of utilities to manage its filesystems. You must be the root user to use them.

  • jfs_debugfs -- Shell-based JFS filesystem editor. Allows changes to the ACL, uid/gid, mode, time, and so on. You can also alter data on disk, but only by entering hex strings -- not the most efficient way to edit a file.
  • jfs_fsck -- Replays the JFS transaction log, then checks and repairs a JFS device. Should be run only on an unmounted or read-only filesystem. Run automatically at boot.
  • jfs_fscklog -- Extracts a JFS fsck service log into a file. jfs_fscklog -e /dev/hda6 extracts the binary log to the file fscklog.new; to view it, use jfs_fscklog -d fscklog.new.
  • jfs_logdump -- Dumps the journal log to a plain text file that shows data on each transaction in the log file.
  • jfs_mkfs -- Creates a JFS-formatted partition. Use the -j journal_device option to create an external journal (jfsutils 1.0.18 or later).
  • jfs_tune -- Adjusts tunable filesystem parameters on JFS. I didn't find options that looked like they might improve performance. The -l option lists the superblock information.

Here is what a dump of the superblock information looks like:

root@slackt41:~# jfs_tune -l /dev/hda6
jfs_tune version 1.1.11, 05-Jun-2006

JFS filesystem superblock:

JFS magic number:       'JFS1'
JFS version:            1
JFS state:              mounted
JFS flags:              JFS_LINUX  JFS_COMMIT  JFS_GROUPCOMMIT  JFS_INLINELOG 
Aggregate block size:   4096 bytes
Aggregate size:         12239720 blocks
Physical block size:    512 bytes
Allocation group size:  16384 aggregate blocks
Log device number:      0x306
Filesystem creation:    Wed Jul 11 01:52:42 2007
Volume label:           ''

Crash testing

White papers and man pages are no substitute for the harsh reality of a server room. To test the recovery capabilities of JFS, I started crashing my system (forced power off) with increasing workloads. I repeated each crash twice to see if my results were consistent.

  • Console (no X) running a text editor with one open file -- about 2 seconds to replay the journal log. Changes I had not saved in the editor were missing, but the file was intact.
  • X Window System with KDE, GIMP, Nvu, and a text editor in an xterm, all with open files -- about 2 seconds to replay the journal log. All open files were intact; unsaved changes were missing.
  • The same X session with open files, plus a shell script that inserted records into a MySQL (ISAM) table. The script I wrote was an infinite loop, and I let it run for a couple of minutes to make sure some records were flushed to disk -- about 3 seconds to replay the journal log. All open files were intact, and the database was intact with a few thousand records inserted, but the timestamp on the table file had been rolled back one minute.

In all cases, these boot messages appeared:

**Phase 0 - Replay Journal Log
-|----- (spinner appeared for a couple of seconds, then went away)
Filesystem is clean

Throughout the crash testing, I saw no filesystem corruption, and the longest log replay time I experienced was about 3 seconds.

Conclusion

While my improvised crash tests were not a good simulation of a busy server, JFS held up well, and recovery was fast. All file-level applications I tested, such as tar and rsync, worked flawlessly, and lower-level programs like TrueCrypt also worked as expected.

After 30 days of kicking and prodding, I have a high level of confidence in JFS, and I am content trusting my data to it. JFS may not have been marketed as effectively as the alternatives, but it is a solid choice in the long list of quality Linux filesystems.


Comments on 30 days with JFS


700 days with JFS

Posted by: Anonymous [ip: 62.225.112.236] on September 14, 2007 09:44 AM
I have been running JFS on my MythTV box for over 2 years. The server crashes once in a while, as I like to try out bleeding-edge features... JFS has NEVER had any problem recovering. The speed at which it deletes gigabyte-sized files is amazing and very useful for big digital TV recordings. I can only recommend it!

#

30 days with JFS

Posted by: Anonymous [ip: 76.185.112.148] on September 14, 2007 10:08 AM
That's the longest anybody has ever spent with JFS

#

6 years with with JFS

Posted by: Anonymous [ip: 91.84.39.186] on September 14, 2007 10:48 AM
I've used JFS every day since 2001 on several machines with no problems: it's fast and reliable.
The only drawback is that you can't shrink a JFS filesystem so you lose some flexibility when using LVM.

#

Why would I want this over ext3?

Posted by: Anonymous [ip: 77.100.5.174] on September 14, 2007 12:32 PM
I personally used to use reiser3 for my servers, but eventually moved back to ext3 due to better support with the distros I use.


Just what advantages does JFS have over ext3 which would make me want to use it? When you compare ext3 with a filesystem like ZFS on Solaris the differences are easy to notice, but what makes JFS better?

#

Re: Why would I want this over ext3?

Posted by: Anonymous [ip: 161.231.132.16] on September 14, 2007 01:02 PM
For large filesystems and/or large files. See the "Some Tips" post.

#

Re: Why would I want this over ext3?

Posted by: Anonymous [ip: 85.48.227.128] on September 14, 2007 01:23 PM
Hi,
As the first commenter posted, it's really much faster than ext3 when deleting big files (1GB and above), so it's recommended by MythTV for the storage partition. I use it for the same reason myself.

#

Re: Why would I want this over ext3?

Posted by: Anonymous [ip: 68.83.193.172] on September 16, 2007 03:43 AM
A few things...

1) Large numbers of files in a directory

We've used JFS with over 600,000 files in a directory with good performance. Performing an "ls" on it is evil, but if you know the filenames, or use something other than "ls" to get a directory listing, it is very responsive. Ext2/3 and ReiserFS all start having performance problems after you get 10K-30K files or so in a directory -- it gets bad in a hurry. The only other filesystem I've seen come close is NTFS, and that doesn't do well under Linux. I'd love to try a ZFS port, though.

2) Ext2/3 still occasionally wants to fsck with your disk.

That defines slow - well, almost. CHKDSK/Scandisk truly defined slow... Normally, with crash recovery, ext3 will recover using the journal. Periodically though, its ext2 underpinnings show through and it insists on performing a full check. That's not fun on a large disk. You don't see that with JFS or even ReiserFS.

3) Speed

For our application, it was the fastest filesystem we found running under Linux.

For me, the configuration I'd recommend is a small /boot partition using ext2, and JFS for everything else.

#

Re(1): Why would I want this over ext3?

Posted by: Anonymous [ip: 127.0.0.1] on September 17, 2007 02:17 AM
Athlon 1600 Xp
Ext3 File system

Try this with JFS :
[admin@one test]$ time seq 1 600000 | xargs touch

real 1m15.083s
user 0m2.160s
sys 0m47.947s
[admin@one test]$ time ls > /dev/null

real 0m15.272s
user 0m11.573s
sys 0m1.368s
[admin@one test]$ time echo $RANDOM*10+$RANDOM | bc | xargs ls
155941

real 0m7.245s
user 0m4.544s
sys 0m1.100s
[admin@one test]$ time echo $RANDOM*10+$RANDOM | bc | xargs ls
43537

real 0m6.829s
user 0m4.308s
sys 0m1.096s
[admin@one test]$ time echo $RANDOM*10+$RANDOM | bc | xargs ls
269262

real 0m6.957s
user 0m4.616s
sys 0m1.124s
[admin@one test]$


Is it better with JFS ?

#

Re(1): Why would I want this over ext3?

Posted by: Anonymous [ip: 10.249.1.245] on September 17, 2007 09:37 AM
The routine fsck with ext3 is annoying, but completely unnecessary. Turn it off with "tune2fs -c 0 /dev/whatever". Problem solved.

#

Some Tips

Posted by: Anonymous [ip: 161.231.132.16] on September 14, 2007 12:53 PM
I switched to JFS this year (all my partitions). It is extremely fast with large files (DVD ISOs, etc.) and large partitions. Formatting a large partition is a snap. Using it as a root partition is, in principle, allowed, but there are some incompatibilities (google for them, specifically issues with GRUB). I had problems booting, even from LILO, on and off.

My next setup, whenever I have some time to do it, will be ext3 for the root partition and JFS everywhere else (/home and /data partitions).

#

30 days with JFS

Posted by: Anonymous [ip: 68.2.221.244] on September 14, 2007 01:23 PM
My experience with it and Slackware was that I had many, many problems moving files from ext3 boxes, and I often had kernel panics resulting in reboots. I switched my JFS box back to ext3 and haven't had an issue since.

#

About nine months with JFS

Posted by: Anonymous [ip: 67.90.11.226] on September 14, 2007 08:51 PM
My main home desktop is a Debian machine that is set up with two disks RAID1/LVM with JFS for all the LVM partitions (/, /usr, /var, /tmp, /home) and another small RAID1 non-LVM partition for /boot (also JFS). I use GRUB. I set it up around nine months ago and haven't had a single problem - really "set it and forget it". Just my experience, but JFS seems to be a really solid, professional, production-quality file system.

#

Re: 30 days with JFS

Posted by: Anonymous [ip: 85.141.73.231] on October 15, 2007 03:01 AM
You did something wrong, Luke. Blame yourself. JFS itself is reliable and rock-solid. I'm dealing with 200+GB files on it and checking MD5 sometimes. No crashes, no panics, fast file operations, and files are not corrupted (checked with md5sum).

#

30 days with JFS

Posted by: Anonymous [ip: 132.250.112.46] on September 14, 2007 02:54 PM
It is incorrect to say that JFS was ported from AIX to OS/2.

The OS/2 JFS was a version 2, rewritten from the original design, because the original AIX JFS also had volume management built in.

The OS/2 JFS was then used as the base for JFS on Linux and for the new JFS on AIX.

#

Good for Laptops

Posted by: Anonymous [ip: 75.34.17.86] on September 14, 2007 03:47 PM
JFS needs low processor resources & hard drive access remains quick. It also does not get corrupted easily & recovery from corruption works well.

All of these point to an excellent file system for notebooks & any other machine that needs low power usage.

#

Minimum filesize

Posted by: Anonymous [ip: 75.33.142.86] on September 14, 2007 05:45 PM
The minimum filesystem size supported is 16MB.

Does this mean that every file takes up at least 16MB of space? That sounds quite wasteful for things like /etc....

#

Re: Minimum filesize

Posted by: Anonymous [ip: 89.216.171.97] on September 14, 2007 06:13 PM
No, it's for the whole filesystem, not a single file.

#

Re: Minimum filesize

Posted by: Anonymous [ip: 189.30.50.224] on September 14, 2007 06:48 PM
"The minimum filesystem size supported is 16MB."

It says that the smallest partition size allowed is 16MB, not the smallest file.

#

Re: Minimum filesize

Posted by: Anonymous [ip: 68.83.193.172] on September 16, 2007 03:46 AM
That's a partition size. The minimum filesize is going to be tied to the blocksize, which I think defaults to 4K. It's that way with other filesystems as well.

#

Little note

Posted by: Anonymous [ip: 83.24.6.7] on September 15, 2007 12:47 AM
While Fedora and SUSE users can use JFS, they both default to ext3. Slackware, Debian, Ubuntu, and their derivatives are good choices for anyone who wants to try JFS.


But keep in mind that AFAIK JFS does not support extended attributes (XATTR), which are used in Linux for access control lists (ACLs), and Fedora uses them for SELinux -- so you will have to disable SELinux if you want to use JFS with Fedora.

Also, JFS is not "supported" in Fedora, which means you need to manually add some parameters when booting the installer or there won't even be an option to choose this filesystem.

Also, I would rather see some serious benchmarks comparing filesystems in common usage scenarios.

#

Re: Little note

Posted by: Anonymous [ip: 190.24.79.213] on September 15, 2007 02:20 AM
Hmm, obsolete knowledge. JFS has supported ACLs and extended attributes for several years now, so in theory you can run SELinux, AppArmor, GrSec, or whatever on top of a JFS filesystem. The fact that Fedora doesn't have support for it is an irrelevant piece of trivia (regardless of what the Fedora community leaders say or may want you to believe, Fedora *is* still the testing ground of RHEL, and that won't change for a long while yet, perhaps until Fedora 10). The real fact is that no one has been motivated enough to make sure that SELinux runs on top of JFS. If Red Hat sticks to the "only ext2/3 is supported" mantra, it is because of two things: first, it matches its business model and appeals to its (corporate) customers, who are used to UNIX(R), where each vendor has one and only one filesystem; and second, it is justifiable considering the enormous amount of money and human resources it has thrown into that project in the last ten years (who do you think has paid for ext3 and is paying for ext4? The tooth fairy?)

#

Re(1): Little note

Posted by: Anonymous [ip: 84.90.182.83] on September 16, 2007 12:03 AM
Oh... I see, so Red Hat *paid* for ext3 and ext4, did it? I guess the community contributions to ext3/ext4 don't count for much, do they?

#

My filesystems benchmark including JFS ;)

Posted by: Anonymous [ip: 69.60.241.238] on September 15, 2007 03:30 AM
hey there, I did do a benchmark that includes JFS, XFS, Ext3 and Reiser3
http://sidux.com/PNphpBB2-viewtopic-t-5275.html
Yes I posted it in the sidux forums since my primary distro is sidux, enjoy!! :)

#

Bad Blocks?

Posted by: Anonymous [ip: 201.231.127.79] on September 15, 2007 02:38 AM
How is bad block management in JFS?
Once upon a time, I switched to XFS and was amazed at its speed.
Considering I was running it on a pretty old server (a K6-2 400 or so), it
performed much, much better than Reiser3 or Ext3 at the time.
However, the drive started presenting bad blocks, and to my surprise,
XFS had no bad block support, which made me lose quite a bunch of data.
Does JFS deal well with this?

#

jfs features

Posted by: Anonymous [ip: 67.171.70.68] on September 15, 2007 03:39 AM
Just a couple corrections to your article:

Extended ACLs are supported under JFS. I googled and found patches dating back to at least early 2002 that provided this. From my kernel config:
CONFIG_JFS_POSIX_ACL:

Posix Access Control Lists (ACLs) support permissions for users and
groups beyond the owner/group/world scheme.

Also, JFS is not the only filesystem to dynamically allocate inodes. At least one other filesystem, XFS, does this as well -- and did so for years before JFS was even around.


#

Debian Administration articles

Posted by: Anonymous [ip: 84.12.25.220] on September 15, 2007 10:42 AM
http://www.debian-administration.org/articles/388

It's an interesting article as is the discussion thread.

#

30 days with JFS

Posted by: Anonymous [ip: 59.145.121.3] on September 16, 2007 09:43 PM
Should have been "35 days with JFS" -- I think that's as long as JFS lasted on my laptop. I left it running once without the plug in, and the next morning I booted to find some kind of double block error; it told me the journal could not fix it, so I was left with a filesystem I could never put back into a good state due to a bad block that couldn't be repaired. I had to put everything on a portable drive and reinstall. I'm back on ext3: tried, tested, stable, slightly slower, but I'm now happy to pay that price. An fsck every 50-odd boots is time worth spending compared to the time it takes to back up and reinstall a whole system. And by the way, there was no trick I missed; the problem was unfixable. There's something to be said for ext's support, given how long it's been around.

#

2 years with jfs but its EOL

Posted by: Anonymous [ip: 76.108.130.138] on September 16, 2007 11:31 PM
I use JFS on very large clusters (16TB+) for very high-traffic, high-profile sites. Its performance and maintenance couldn't be beat. Under high load it never falters and keeps data flowing. We evaluated XFS vs. JFS for a while. JFS won because of its ability to deal with failing devices better, not bringing a system to its knees under high load, not needing 2TB of swap space for an fsck, and an fsck not taking 4 hours per 5TB of data. XFS was a little faster with larger files.

I've heard that IBM "end-of-lifed" JFS for Linux, but I've yet to get a solid confirmation.

#

case IN-sensitivity

Posted by: Anonymous [ip: 74.97.79.184] on September 17, 2007 05:59 AM
One interesting feature of JFS is that it optionally supports case INsensitivity (i.e., just like Windows/NTFS). This has been useful for me in consulting with companies moving from DOS and Windows that didn't want to change all their code to ignore case. Instead, we just flipped the switch on JFS and it "just worked."

#

Re: case IN-sensitivity

Posted by: Anonymous [ip: 66.203.47.166] on September 18, 2007 01:41 PM
That feature is also useful for serving Macs with Netatalk since Netatalk doesn't do the automatic case handling that Samba does for smb.

#

Don't use GRUB with JFS on /boot

Posted by: Anonymous [ip: 195.135.221.2] on September 17, 2007 08:58 AM
GRUB has some limited support for JFS, but it will fail to boot if JFS thinks the filesystem must be checked. Unfortunately, GRUB cannot read any data from an unchecked JFS filesystem, yet it needs to read all the necessary information from disk.

#

Re: Don't use GRUB with JFS on /boot

Posted by: Anonymous [ip: 70.130.160.170] on December 03, 2007 07:40 AM
put "ro" on the kernel line, that should fix it.

#

JFS + Deadline rocks

Posted by: Anonymous [ip: 212.18.162.33] on September 17, 2007 10:52 AM
echo deadline > /sys/block/sda/queue/scheduler; echo 1024 > /sys/block/sda/queue/nr_requests; echo 250 > /sys/block/sda/queue/iosched/read_expire. I tried JFS for the first time; with CFQ it seems slower than ext3, but with deadline it really shows its muscle. I googled a bit beforehand, and it seems that in every benchmark explicitly using the deadline scheduler, JFS wins over XFS and the rest in almost every respect. Common feedback also says it is less prone to power-loss problems than XFS. I've never seen my machine as fast as it is now. Granted, a newly tuned scheduler and reorganized files after restoring the backup -- YMMV, but google around for JFS and deadline. I wish someone would run a serious benchmark again, but also involving schedulers. Google results point to the deadline scheduler as great for all filesystems, especially for database usage. My laptop is really responsive, and JFS uses less CPU than other filesystems according to most benchmarks.

#

It's not EOL

Posted by: Anonymous [ip: 212.18.162.33] on September 17, 2007 11:31 AM
Check http://jfs.sourceforge.net/ : last version 1.1.12 dated 2007-08-24 .

#

Problems with JFS Utils 1.1.12

Posted by: Anonymous [ip: 84.90.182.83] on September 23, 2007 12:09 AM
I have not been able to use JFS Utils 1.1.12 to manage JFS formatted partitions. Its checker and even mkfs binaries don't seem to work properly (fsck.jfs always fails and mkfs.jfs fails on very large partitions).
Everything worked fine as soon as I reverted to 1.1.11. Just posting this so that anyone thinking of using JFS after having read this article doesn't spend hours trying to figure out why the damn filesystem won't check at boot with jfsutils 1.1.12 installed ;)

#

Re: Problems with JFS Utils 1.1.12

Posted by: Anonymous [ip: 70.113.82.32] on October 03, 2007 10:33 PM
jfsutils-1.1.12 was built with a broken version of autoconf that doesn't work with glibc-2.3 systems. (Large file support is broken.) You may want to try this version: http://www.kernel.org/pub/linux/kernel/people/shaggy/jfs/jfsutils-1.1.12.tar.gz

#

30 days with JFS

Posted by: Anonymous [ip: 75.152.236.18] on December 04, 2007 05:04 AM
I found that it came up with some bizarre error when installed on a 32-bit system on any AMD64-based machine. It would lockup, complain about errors during mount, and then mount readonly.

This was my first exposure to JFS and it got the boot pretty quick!

#

30 days^H^H^H^Hminutes with JFS

Posted by: Anonymous [ip: 75.152.236.18] on December 04, 2007 05:05 AM
More appropriate title?

#

1 plus years with JFS

Posted by: Anonymous [ip: 70.77.110.98] on February 04, 2008 07:52 AM
I have been using JFS for all partitions, except for a small ext2 boot partition, for over one year.
In order for the JFS filesystem to be repaired, all one has to do is add ro (read-only) on the kernel line in GRUB, which allows fsck.jfs to fix the filesystem before fstab mounts the partitions read/write. I have tested this by turning off the power while the system is running, just after a large file write. The file check is over in a few seconds, even on large partitions, without any file corruption or loss.
The deadline scheduler allows "ls" to read the partition in a quick and responsive manner. Adding elevator=deadline on the kernel line in GRUB will set up the deadline scheduler. Of course, if one compiles one's own kernel, the deadline scheduler can be made the default.
I do like JFS's performance, with its low CPU load. JFS's ability to fix files after power loss is something XFS cannot accomplish. I would recommend JFS over XFS to anyone because of this. I believe that at this time, JFS is the filesystem to use.

#
