This is a read-only archive.


Linux needs better network file systems

By Mark Stone on December 03, 2004 (8:00:00 AM)


In a previous article we looked at local file systems in Linux. In this article we'll examine the range of choices available for Linux network file systems. While the choices are many, we'll see that Linux still faces significant innovation challenges; yesterday's network paradigm isn't necessarily the best approach to the network of tomorrow.
The Traditional Paradigm

Our current model of the network file system is defined by the paradigm of the enterprise workstation. In this model, a large enterprise has a number of knowledge workers based at a single campus, all using individual workstations that are tied together on a single local area network (LAN).

In this model, it makes sense to centralize certain services and files so that those services and files reside on only one (or a few) servers rather than replicating them on every single workstation. The resulting efficiencies fall into three categories:
  • Administration. The fewer machines the IT staff has to touch, the more efficiently they can operate. File backup and restore is a simple example. Having a backup/recovery plan for a central file server for critical files is much easier than having a backup/recovery plan for every workstation on the LAN.
  • Resources. Not all resources need to be used all the time. Making infrequently used resources available to all on a central server is more efficient. Printing is a simple example. The cost, maintenance, and management overhead of attaching a printer to every workstation would be prohibitive, and indeed most printers would sit idle most of the time. A central, shared print server makes much more sense.
  • Collaboration. Groups working on a common project need to share and exchange files regularly. Dispersing group data to individual workstations makes it more difficult to share files, and also leads to confusion over which copy of a file is the master copy. Better to have a central file server for the work group to which each group member has access.
Not all knowledge workers fit the traditional paradigm. Companies have multiple campuses. Some workers work remotely. But for the era in which standard network file systems were developed, the single campus-single LAN model was fine.

Traditional Solutions: NFS and Samba

By their very nature, network file systems are superimposed on top of the local file system; without a local file system already in place, there is nothing the network file system can identify to mount over the network. Linux really doesn't have a native network file system, no network equivalent of ext2/ext3. In the LAN environment, Linux's file system capabilities have been born of the necessity to get along with other operating systems.

NFS, then, is the main network file system used by Linux in Unix environments. Samba is the main network file system used by Linux in Windows environments that depend on Microsoft's SMB protocol for network file sharing. Born of different operating system environments, NFS and Samba also use somewhat different metaphors.

NFS borrows its terminology from that of local file systems. Accessing a directory on another computer over the network looks like mounting a partition on a local file system. Once network-mounted, the directory is accessible as if it were another directory on the local machine.
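The mount metaphor can be seen in a short session. The server name and paths below are hypothetical, and exact options vary by distribution; this is a sketch, not a definitive recipe.

```shell
# Mount a directory exported by a (hypothetical) server "fileserver"
# onto the local mount point /mnt/projects; requires root and a
# configured NFS client.
mount -t nfs fileserver:/export/projects /mnt/projects

# Once mounted, the remote directory is used like any local one:
ls /mnt/projects

# And it is unmounted the same way a local partition would be:
umount /mnt/projects
```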

Samba's metaphor is based on the notion of services. "Share," as in sharing a file or directory, is one possible service. Once sharing is authorized, Samba's behavior toward the end user looks similar to NFS. Samba understands other services, however, such as "print," which lets you access another machine's printer but not its files.
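The service metaphor shows up directly in smbclient, Samba's command-line client. The server, share, printer, and user names below are invented for illustration.

```shell
# Connect to a file "share" service on a hypothetical server and list it:
smbclient //smbserver/projects -U alice -c 'ls'

# A "print" service is addressed the same way, but it only accepts jobs;
# it exposes the printer, not the server's files:
smbclient //smbserver/laserjet -U alice -c 'print report.ps'
```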

Both NFS and Samba were created in a world where the dominant network paradigm was the LAN on a single campus. While both file systems have adapted to changing network conditions, that adaptation has at times been awkward.

The New Paradigm of Occasional Connectivity

Two innovations have dramatically changed the requirements for network file systems subsequent to the initial development of NFS and the SMB protocol:

  • The first, most obvious change is the widespread proliferation of Internet connectivity in the mid to late 90s, transforming corporate LANs from isolated to interconnected networks. This changed security demands dramatically; suddenly outside intrusion over the network was a serious concern. The Internet also changed use profiles; suddenly knowledge workers expected corporate network access from home or from on the road.

  • The second, more subtle change has been the proliferation of wireless network technology and portable computing devices that use wireless technology. The result is a paradoxical notion in which connectivity is both pervasive and sporadic: pervasive, in that we are now accustomed to thinking of network access as never more than a hotspot or cell phone call away; sporadic, in that users at the end of a wireless tether are still at best occasionally connected.
To understand how these changes impact file systems, consider a simple model: the original Palm handheld. Sitting in its cradle, it was one computing device networked (in a limited sense) to another. Removed from the cradle, it became a roaming device only occasionally connected. It shared files with a desktop computer, and those files had to be synchronized. An address book or calendar entry could be changed on the Palm, on the desktop, or independently and differently on both. All of these changes had to be kept in proper synchronization.

Palm's simple approach to synchronization was to update files from whichever device had a change since last synchronization, and, when in doubt, to duplicate entries. That taught users to treat the Palm as much as possible as a read-only device and do their data entry on the desktop. The complexities that arose from this simple network structure foreshadowed many of the challenges of network file systems today.

Once your address book and calendar could go with you everywhere, knowledge workers expected to be able to access and update them everywhere. Pre-Palm, you accepted that calendar and address book updates that arose away from the office would have to wait until you returned to the office. Now Palm has spoiled us all; we expect such changes and updates to be available on demand, any time, from anywhere.

Add to that the notebook computer, at most a novelty device when NFS and SMB were born. Now not just address books and calendars are on the road, but all of a knowledge worker's digital work. To that mix we now add cell phones that act like PDAs, and a current generation of PDAs that include much of a notebook's functionality. Finally, none of these devices now need to depend on any kind of cable or wire to access a network. Fixed-point access is becoming a thing of the past.

What's emerging is a network of computing devices where any device could be connected from anywhere at any time, but where connectivity can also be lost at any time. This kind of network environment introduces three main challenges:
  • Authentication
  • Data Transport
  • Synchronization
Traditional network file systems often prove ill-adapted for these challenges. In the original design of NFS, authentication was done for hosts, not users. Thus anyone who could gain access to a given machine could also gain access to all of the machines for which that one was a valid NFS host. The addition of access control lists and privilege limiting has mitigated this problem, but these are ad hoc fixes for a system not designed for the current network environment.
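On Linux, those privilege-limiting fixes typically live in /etc/exports. The sketch below shows the general shape; the host and path are invented for the example.

```shell
# Example /etc/exports entry: export to a single named client only, and
# map remote root to an unprivileged user (root_squash), so compromising
# the client does not grant root access to the server's files.
#
#   /export/projects  workstation1.example.com(rw,root_squash)
#
# After editing, re-export without restarting the NFS server:
exportfs -ra
```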

Further, both NFS and the SMB protocol send data in clear text over the network. At a time when LANs were mostly isolated rather than interconnected this wasn't a problem. Today it's a major security risk.

Of course, not all problems necessarily need to be solved at the file system level. NFS can run over an ssh tunnel, allowing ssh to provide encrypted data transport and an extra level of authentication. Similarly, in a Windows environment Microsoft's VPN provides an encrypted tunnel.
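One common recipe for the ssh tunnel looks like this. It is a sketch: hosts and port numbers are illustrative, and NFSv3 setups also need the mountd port forwarded.

```shell
# Forward a local port to the NFS server's TCP port 2049 over ssh:
ssh -f -N -L 3049:localhost:2049 alice@fileserver

# Mount through the tunnel; NFS traffic between the two hosts is now
# encrypted by ssh. (The server must allow NFS over TCP.)
mount -t nfs -o port=3049,tcp localhost:/export/projects /mnt/secure
```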

What none of these approaches handle very well is synchronization. Think of someone copying a file onto a laptop, working on it on the plane, then reconnecting to a home or corporate server later. Now suppose that in the interim someone else in the group has been making different changes to the same file.

Some of these issues can be dealt with at the application level rather than the file system level. Rsync, a powerful program that came out of the Samba project, provides remote file synchronization over the network. Tackling integration problems at the application level, however, leaves either the user or IT staff responsible for setting up, managing, and tracking synchronization. To accomplish all of this seamlessly at the file system level, we aren't talking about just a network file system. We're talking about a distributed file system.
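A typical rsync invocation looks like the following (host and paths are examples); rsync compares source and destination and transfers only the differences.

```shell
# Push local changes in ~/project to a hypothetical server over ssh;
# -a preserves permissions and times, -v is verbose, -z compresses:
rsync -avz ~/project/ fileserver:/home/alice/project/

# Add -n (--dry-run) first to preview what would be transferred:
rsync -avzn ~/project/ fileserver:/home/alice/project/
```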

New Tricks from an Old Approach: Coda

Much of the theoretical work done on modern file systems stems from research at Carnegie Mellon University (CMU). An alternative to NFS, for example, is AFS, derived from the Andrew File System research project at CMU.

Perhaps the most ambitious file system project at CMU is Coda. Coda is a distributed file system derived originally from AFS2. It is the brainchild of Professor Satyanarayanan. Coda is designed for mobile computing in an occasionally connected environment, is designed to work in an environment of partial network failure, and is designed to respond gracefully to total network failure. Encryption is built in for data transport, with additional security provided by authentication and access control.

The basic ideas behind Coda are:
  • The master copy of a file is kept on a Coda server
  • Coda clients maintain a persistent cache of copies of files from the server
  • Coda checks the network both for the availability of connections between client and server, and for the approximate bandwidth of the connection
  • The client cache is updated intelligently based on available bandwidth; the less bandwidth, the smaller the update increments, all the way down to a worst case of zero bandwidth, i.e. no connection
  • Updates from the client to the master must be complete; no partial file changes are ever written in the master copy
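In practice, a Coda client session looks roughly like the sketch below. The realm and paths are invented, and the exact commands and arguments vary between Coda releases; treat this as an illustration of the model, not a recipe.

```shell
# Obtain Coda credentials for a (hypothetical) realm:
clog alice@coda.example.com

# Files live under /coda, maintained by the venus cache manager;
# reading them populates the persistent local cache:
ls /coda/coda.example.com/projects

# Ask venus to keep a directory cached (with a priority) so it
# remains usable while disconnected:
hoard add /coda/coda.example.com/projects 600
```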
All of this sounds like a big step forward in solving the problems of a distributed file system. The technical challenges are not small, however, and Coda is still very much a work in progress. Work on Coda began in 1987, and the FAQ for the project reports, "a small userbase (20-30 users) and a few servers are pretty workable. Such a setup has been running here at CMU for the past couple of years without significant disasters. Don't expect to easily handle terabytes of data or a large group of non-technical oriented users."

Coda's Descendants: Intermezzo

Keep in mind that Coda is a research project. It aims to solve the distributed file system problem in a fundamental and comprehensive way. In the real world, an 80% solution will often do. Toward that end, a lighter-weight descendant of Coda has been designed for Linux: Intermezzo.

Intermezzo has been developed by kernel hacker, file system guru, and former Coda project member Peter Braam.

Intermezzo follows an architectural philosophy similar to Coda's. There is a server element to the file system and a client element, with the client side relying on a persistent cache to keep files in sync. Communication between client and server is handled by a separate program, InterSync.

Intermezzo has been included as a file system option for the Linux kernel since kernel version 2.4.15. Like Coda, it is far from a finished project, but still represents an important future direction for Linux file systems.

A Word About Clusters

The haphazard network world of the Internet and mobile users may seem the very opposite of the tightly structured network of Linux clusters. Surprisingly, the file system challenges are quite similar.

Think of the Internet as a cluster in slow motion. In a cluster environment of fibre channel interconnects, the lag time associated with disk access can look like a server failure or network outage does on the Internet. What might look like continuous availability in another context looks like intermittent connectivity in the high demand cluster context.

Thought of in this way, it should come as no surprise that the most direct application of Intermezzo is not for mobile users, but for clusters. In fact, Peter Braam and his team are working on a commercial version of their file system architecture, called Lustre, that is available through Braam's company, ClusterFS. Lustre has been used at Lawrence Livermore National Laboratory, the National Center for Supercomputing Applications (NCSA), and other supercomputing centers.

The Future of Network File Systems

In today's network paradigm, the network file system challenge has become the distributed file system challenge, as we have moved from self-contained LAN environments to a world of occasionally connected computing. To be competitive in this environment, an operating system must have a file system that handles distribution and synchronization problems smoothly and securely.

Apple understands this. Apple's relentless focus on the "digital lifestyle" has led them to work hard at getting a wide array of devices, from cell phones to iPods to video cameras, to connect and communicate. MacOS X gets high marks for its capabilities in this area.

Microsoft certainly understands the challenge as well. While Windows-based networks today are still mostly locked into a complex of VPNs and SMB, the plans for Longhorn are quite different. The whole .Net infrastructure, and the way Avalon aims to leverage it, should address many distributed file system issues in a way that is transparent to the user.

Will Linux compete? The potential is there, and projects like Intermezzo show that many of the right building blocks are in place. What remains is for a high profile company or project to step forward and make distributed file problems a priority. So far, that hasn't happened.



Comments on "Linux needs better network file systems"

Note: Comments are owned by the poster. We are not responsible for their content.

Productized Lustre is at HP

Posted by: Anonymous Coward on December 04, 2004 03:25 AM
It's called HP StorageWorks Scalable File Share (SFS), based on Lustre™ technology.


NFS v4

Posted by: Anonymous Coward on December 04, 2004 04:06 AM
Future of NFS (v4) looks bright.



Posted by: Anonymous Coward on December 04, 2004 04:39 AM
early next year you'll be able to get the novell nss 64 bit file system on linux.



Posted by: Anonymous Coward on December 04, 2004 07:27 AM
Your link is broken; perhaps you meant this: nss


SFS? davfs2?

Posted by: ruiner on December 04, 2004 07:21 AM

Do your homework before posting.


Don't forget to mix in SE Linux into the pot...

Posted by: Anonymous Coward on December 04, 2004 09:11 AM
SE Linux and the future of networking must go hand in hand.

How well, or not so well, do the systems discussed above fit with the SE Linux direction?


Re:Don't forget to mix in SE Linux into the pot...

Posted by: Anonymous Coward on December 04, 2004 09:39 AM
What the heck does SE Linux have to do with the above article?! Ground control to Major Tom......


Re:Don't forget to mix in SE Linux into the pot...

Posted by: Anonymous Coward on December 04, 2004 12:19 PM

(Lameness filter encountered. Post aborted! Lameness filter encountered. Post aborted! Lameness filter encountered. Post aborted! Lameness filter encoun... *ERROR ENCOUNTERED*)


Samba is a networking technology?

Posted by: Anonymous Coward on December 04, 2004 09:39 AM
IIRC Samba is a software suite for *NIX to connect to Windows networking, hardly a networking technology in itself. (If someone wrote an NFS client called foo, you wouldn't refer to NFS as foo.) SMB would be the correct term for the "Samba network."


Re:Samba is a networking technology?

Posted by: flacco on December 05, 2004 11:25 PM
IIRC Samba is a software suite for *NIX to connect to Windows Networking, hardly a networking technology in itself.

it's a stand-alone implementation of SMB, both client *and* server. if "windows networking" would magically cease to exist, there would still remain a networking technology called SAMBA.


Re:Samba is a networking technology?

Posted by: Anonymous Coward on December 07, 2004 04:36 AM
Actually, if Samba continues to exist, "Windows networking" also continues to exist. Windows networking is SMB/CIFS, regardless of the platform that is doing the actual client or server work.


Has "author" heard of a old saying.......

Posted by: Anonymous Coward on December 04, 2004 01:10 PM
That goes "If it ain't broke don't fix it?"


Re:Has "author" heard of a old saying.......

Posted by: Anonymous Coward on December 05, 2004 12:21 AM
That is a typical reaction of a lazy person.
Take another approach: it is good to change if the change is an improvement.
Also consider why almost everything that comes from Japan is technologically superior to what comes from the USA. Reason: they don't stick to the 'works-don't-fix' nonsense; they improve when/where they see a possibility.

And if it hurts to read that Japanese products are better and more reliable, too bad for you.


Re:Has "author" heard of a old saying.......

Posted by: flacco on December 05, 2004 11:18 PM
i believe the author made a case that it is, in fact, broken.


This is why Linux will never take off...

Posted by: Anonymous Coward on December 08, 2004 01:47 AM
People spend way too much time redesigning file systems for every minute feature, instead of trying to iron the crinkles out of half-baked applications. I'd wager that a good 75% of all OSS projects out there are reinventing the wheel. "Hey, it's been 6 months, why don't we create a new file system?"

Come on, guys. Only a *small* percentage of users really care about the underlying file system. How many people really gripe about Microsoft's file system? Only a fraction of those making complaints, which usually focus on the UI or on applications. Get the kinks out of some major apps, and watch as Linux expands its desktop share.


Digital Had it Then

Posted by: Anonymous Coward on December 05, 2004 08:42 AM
All of this network file system stuff makes me nostalgic for the ancient days of DEC computers. DECnet on RSX-11M and VMS had a distributed file system in 1980 that in some ways is way ahead of what the Linux and Unix worlds usually have today.

Using DECnet, it was possible to simply prepend the node name ahead of the filename on the remote system, for example:


You could then open such a file in an application across the network and it would sort of work like a local file. How "sort of" depended on the application, but often it was quite impressive. DECnet worked with the local ODS-1 (RSX) or ODS-2 (VMS) file systems, including (optional in RSX, standard in VMS) the built-in indexed file capability.

NFS has nice transparency, but only within the relatively simple Unix file semantics (I know, Unix folks think that's the correct way). But it's done atop UDP, which is ugly and has bad congestion properties. (That's fixed within the NFS V3 application itself, but it took years. DECnet was designed for WAN use, not LAN only as with the early NFS.)

I'm not saying that DECnet in 1980 was something we should go back to, but I am concerned that the past decade and a half has seen too much reinvention of wheels and too little original research.


Re:Digital Had it Then

Posted by: Anonymous Coward on December 06, 2004 01:39 AM
"Using DECnet, it was possible to simply prepend the node name ahead of the filename on the remote system"

You mean like /net/host/foobar with NFS or //host/mount with SMB?

The reason people don't like that (in particular in the UNIX world) is because encoding host names in paths is an administrative nightmare for a large site. DECNET was too simplistic in this area (as in many other areas).

"but I am concerned that the past decade and a half has seen too much reinvention of wheels and too little original research."

There has been a lot of research in this area: CODA and Intermezzo, Plan9, user-mode file systems, etc. People just need to implement and use the research.


Why no mention of OpenAFS?

Posted by: David Mohring on December 05, 2004 10:46 PM
<A HREF="" title="">What is AFS?</a>
AFS is a distributed filesystem product, pioneered at Carnegie Mellon University and supported and developed as a product by Transarc Corporation (now IBM Pittsburgh Labs). It offers a client-server architecture for file sharing, providing location independence, scalability and transparent migration capabilities for data.

IBM branched the source of the AFS product, and made a copy of the source available for community development and maintenance. They called the release OpenAFS.

There are a few OpenAFS success stories, and lots more details about successful deployments are available from the recent OpenAFS Best Practices Workshop.


Re:Why no mention of OpenAFS?

Posted by: Anonymous Coward on December 07, 2004 12:01 AM
Highlighting CODA rather than OpenAFS is just plain dumb.

Really, the Linux kernel hackers should kick Intermezzo and start working on OpenAFS instead. The AFS client from Red Hat, which is currently included in kernel 2.6, is just sad: it's read-only and has no authentication support.

Only problem with OpenAFS is that it's rather complicated to setup.


Re:Why no mention of OpenAFS?

Posted by: Anonymous Coward on December 08, 2004 07:46 AM
kafs will probably be fixed now that the proper infrastructure is there (fs-cache and in-kernel key-ring).

Sure, it could have been hacked together without those bits, but it would have been an ugly hack.

But there are already two working AFS clients. If code needs to be written, it's not yet another AFS client, but rather a new distributed file system, like AFS, but better. AFS is very useful, but far from perfect.


windows clients

Posted by: flacco on December 05, 2004 11:31 PM
whatever one comes up with, a transparent virtual filesystem windows client would help uptake.


OpenAFS has Windows 2000/XP/2003 clients plus ...

Posted by: David Mohring on December 06, 2004 04:39 AM
Aside from the portable source code, OpenAFS has clients for AIX, Darwin, Debian GNU/Linux, Digital UNIX, Fedora, Irix, Mac OS, Red Hat, Solaris, and Windows 2000/XP/2003.


it's already here

Posted by: Anonymous Coward on December 06, 2004 01:43 AM
We already have an excellent next-generation network file system protocol: WebDAV. Properly implemented, it's been shown to be more efficient than SMB and NFS, there are lots of servers, and it has all the features (locking, versioning, metadata, security, authentication, etc.) you would want.

It doesn't work well in either Linux or Windows kernels, but that's not a problem with WebDAV, it's a problem with the kernels: any wide area networked file system is going to have lots of failures and the Linux and Windows kernels don't deal well with failures in their network file system code. They also don't have the hooks for dealing with something as general and powerful as WebDAV.

The real problem is that the kernel hackers have been driving network file systems, as opposed to people like the WebDAV developers. The kernel needs to become more general and robust, and the kernel network file system support needs to get better before anything is going to improve.


Re:it's already here

Posted by: Anonymous Coward on December 06, 2004 02:14 PM
I don't think so. Even if it has those features, it's too HTTP-like: textual, generating a lot of network overhead, and stateless, so it's not really adequate for LANs.


NFS does NOT send the data in the clear

Posted by: Anonymous Coward on December 06, 2004 04:45 AM
Everything you asked for in the security attributes of NFS is already there. NFS runs over RPC, and you can choose which RPC security mechanism is used, for example RPCSEC_GSS using Kerberos as the mechanism. With Kerberos you can choose among authentication, integrity protection, and privacy, or all three.

How to do this is all documented in publicly available IETF RFCs for NFSv3. Solaris has had this for over 10 years. There is a lot of work going on in various places to bring Linux up to this level and to implement NFSv4, where this support is mandatory.

Repeat after me: NFS is NOT insecure; you just need to read the documentation and not choose the default RPC AUTH_SYS mechanism, which is only suitable for a trusted network.

As for the delegation of files, that is in NFSv4 as well.


I had a look at the alternatives to NFS...

Posted by: Anonymous Coward on December 06, 2004 05:55 AM
...but wasn't impressed. Intermezzo seems to be an utterly dead project (I believe the author(s) are working on Lustre now), and Coda is just ridiculously complex to set up (and has some serious deficiencies).

What I like about NFS is that it's a) standard with *every* Linux/BSD/UNIX out there (so no messing with kernel rebuilds), b) it's very easy to set up and c) you can play around with automounting/rsync/soft-links to create a failover environment (I'll leave you to think about how you do that..).


Re:I had a look at the alternatives to NFS...

Posted by: Anonymous Coward on December 06, 2004 03:04 PM
.. and I totally agree.

Of course, Intermezzo was doomed to failure: who in their right mind would put FS over the wire as Web traffic? Sorry, but my FS needs something a bit better than the application space.

Lustre isn't a distributed file-system. It's another doomed iteration of someone trying to make a coda-lite fs. Replicating journal blocks via an application is just shooting yourself in the foot.

Some part of that argument, I'm sure, applies to the webDAV people, but I haven't seen just how good their aim is.


This story has been archived. Comments can no longer be posted.
