This is a read-only archive.

I think dbench takes a better approach

Posted by: Anonymous [ip:] on July 02, 2008 07:17 AM
Unfortunately, Bonnie++ is single-threaded for every test except the random-seek test. It shows how fast files can be read and written, but it doesn't exercise SMP scaling or reflect CPU speed. On my 2-way AMD64 system with 2 GB of RAM and SATA drives, the default number of files for the create/stat/delete tests was far too low: stat() was almost always served from cache, and the test finished in under half a second, producing no speed results. To overcome this, I had to specify a file count so high that the run became almost unbearably long for a personal research project.
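For reference, an invocation along these lines is what I mean; the path, size, and file count below are illustrative placeholders, not the exact values I used:

```shell
# Run Bonnie++ against /mnt/test with a 4096 MB I/O test (about twice
# the RAM on a 2 GB box, so caching can't satisfy the whole run) and
# 128*1024 files in the create/stat/delete phase, so the small-file
# tests run long enough to report actual numbers.
bonnie++ -d /mnt/test -s 4096 -n 128
```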

Contrast this with the approach dbench takes: start N clients, have each perform an interesting mix of create, stat, delete, rename, read, write, and other operations, and count the operations completed over the test duration. Not only were filesystem differences greatly magnified, but I could also decide how long a test should run instead of fiddling with an iteration count.
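To illustrate, a dbench run looks roughly like this (the duration and client count are whatever suits your hardware; -t and -D are dbench's standard runtime and directory options):

```shell
# Run 8 dbench clients against /mnt/test for 300 seconds and report
# aggregate throughput. The wall-clock duration is fixed up front,
# unlike an iteration-count benchmark whose runtime depends on how
# fast the filesystem happens to be.
dbench -t 300 -D /mnt/test 8
```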

(Long story short: use an external journal, preferably on a different controller; SMP helps a lot in this regard. With all other factors equal, XFS won hands-down, with >400 MB/sec throughput using the noop I/O scheduler. ext3 won when using an internal journal, but was still not as fast as XFS with an external journal. JFS lost every test. NB: an XFS filesystem with an external journal *cannot* be the very first mount Linux does during boot; initrd hacking is necessary to make XFS with an external journal the root mount. YMMV, caveat emptor.)
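For anyone wanting to try the external-journal setup, it looks roughly like this; the device names are placeholders for your own data and journal devices, and the logdev option and its mount-time counterpart are standard XFS features:

```shell
# Create an XFS filesystem on /dev/sda1 with its journal (log) on a
# separate device, ideally attached to a different controller.
mkfs.xfs -l logdev=/dev/sdb1,size=128m /dev/sda1

# The external log device must be named again every time you mount.
mount -t xfs -o logdev=/dev/sdb1 /dev/sda1 /mnt/test
```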


Return to Using Bonnie++ for filesystem performance benchmarking