

Get to know Ubuntu's Logical Volume Manager

By Benjamin Mako Hill, Corey Burger, Jonathan Jesse, and Jono Bacon on July 30, 2008 (9:00:00 AM)


Hard drives are slow and fail often, and though abolished for working memory ages ago, fixed-size partitions are still the predominant mode of storage space allocation. As if worrying about speed and data loss weren't enough, you also have to worry about whether your partition size calculations were just right when you were installing a server, or whether you'll wind up in the unenviable position of having a partition run out of space even though another partition may be mostly unused. And if you might have to move a partition across physical volume boundaries on a running system, well, woe is you.

This article is excerpted from the newly published book The Official Ubuntu Book, Third Edition, published by Prentice Hall Professional, June 2008. Copyright 2008 Canonical, Ltd.

RAID helps to some degree. It'll do wonders for your worries about performance and fault tolerance, but it operates at too low a level to help with the partition size or fluidity concerns. What we'd really want is a way to push the partition concept up one level of abstraction, so it doesn't operate directly on the underlying physical media. Then we could have partitions that are trivially resizable or that can span multiple drives, we could easily take some space from one partition and tack it on another, and we could juggle partitions around on physical drives on a live server. Sounds cool, right?

Very cool, and very doable via logical volume management (LVM), a system that shifts the fundamental unit of storage from physical drives to virtual or logical ones. LVM has traditionally been a feature of expensive, enterprise Unix operating systems or was available for purchase from third-party vendors. Through the magic of free software, a guy by the name of Heinz Mauelshagen wrote an implementation of a logical volume manager for Linux in 1998, which we'll refer to as LVM. LVM has undergone tremendous improvements since then and is widely used in production today, and, just as you'd expect, the Ubuntu installer makes it easy for you to configure it on your server during installation.

LVM theory and jargon

Wrapping your head around LVM is a bit more difficult than with RAID because LVM rethinks the whole way of dealing with storage, which, as you'd expect, introduces a bit of jargon that you need to learn. Under LVM, physical volumes, or PVs, are seen just as providers of disk space without any inherent organization (such as partitions mapping to a mount point in the OS). We group PVs into volume groups, or VGs, which are virtual storage pools that look like good old cookie-cutter hard drives. We carve those up into logical volumes, or LVs, that act like the normal partitions we're used to dealing with. We create filesystems on these LVs and mount them into our directory tree. And behind the scenes, LVM splits up physical volumes into small slabs of bytes (4MB by default), each of which is called a physical extent, or a PE.

You take a physical hard drive and set up one or more partitions on it that will be used for LVM. These partitions are now physical volumes (PVs), which are split into physical extents (PEs) and then grouped in volume groups (VGs), on top of which you finally create logical volumes (LVs). It's the LVs, these virtual partitions, and not the ones on the physical hard drive, that carry a filesystem and are mapped and mounted into the OS. If you're confused about what possible benefit we get from adding all this complexity only to wind up with the same fixed-size partitions in the end, hang in there. It'll make sense in a second.
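If it helps to see those layers as commands, here is a minimal sketch of building the same stack by hand with the standard LVM tools. The device names, the volume group name storage, and the logical volume name projects are placeholders for illustration, not anything the installer creates for you.

  # Turn two spare partitions into physical volumes (PVs)
  sudo pvcreate /dev/sdb1 /dev/sdc1

  # Pool their physical extents into one volume group (VG)
  sudo vgcreate storage /dev/sdb1 /dev/sdc1

  # Carve a 10GB logical volume (LV) out of the pool
  sudo lvcreate -L 10G -n projects storage

  # The LV behaves like any other partition: filesystem, mount point, done
  sudo mkfs.ext3 /dev/storage/projects
  sudo mkdir -p /srv/projects
  sudo mount /dev/storage/projects /srv/projects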

The reason LVM splits physical volumes into small, equally sized physical extents is that the definition of a volume group (the space that'll be carved into logical volumes) then becomes "a collection of physical extents" rather than "a physical area on a physical drive," as with old-school partitions. Notice that "a collection of extents" says nothing about where the extents are coming from and certainly doesn't impose a fixed limit on the size of a volume group. We can take PEs from a bunch of different drives and toss them into one volume group, which addresses our desire to abstract partitions away from physical drives. We can take a VG and make it bigger simply by adding a few extents to it, maybe by taking them from another VG, or maybe by tossing in a new physical volume and using extents from there. And we can take a VG and move it to different physical storage simply by telling it to relocate to a different collection of extents. Best of all, we can do all this on the fly, without any server downtime.
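On a running system, that fluidity boils down to a few commands. The following is a rough sketch that reuses the made-up storage volume group from above; the new drive /dev/sdd1 and the drive being retired are assumptions for the example.

  # Grow the volume group by handing it a brand-new physical volume
  sudo pvcreate /dev/sdd1
  sudo vgextend storage /dev/sdd1

  # Migrate every extent off the old drive while the filesystems stay mounted,
  # then drop that drive from the volume group
  sudo pvmove /dev/sdb1
  sudo vgreduce storage /dev/sdb1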

Setting up LVM

Surprisingly enough, setting up LVM during installation is no harder than setting up RAID. Create partitions on each physical drive you want to use for LVM just as you did with RAID, but tell the installer to use them as physical space for LVM. Note that in this context, PVs are not actual physical hard drives; they are the partitions you're creating.

You don't have to devote your entire drive to partitions for LVM. If you like, you're free to create actual filesystem-containing partitions alongside the storage partitions used for LVM, but make sure you're satisfied with your partitioning choice before you proceed. Once you enter the LVM configurator in the installer, the partition layout on all drives that contain LVM partitions will be frozen.

Consider a server with four drives, which are 10GB, 20GB, 80GB, and 120GB in size. Say we want to create an LVM partition, or PV, using all available space on each drive, and then combine the first two PVs into a 30GB volume group and the latter two into a 200GB one. Each VG will act as a large virtual hard drive on top of which we can create logical volumes just as we would normal partitions.

As with RAID, arrowing over to the name of each drive and pressing Enter will let us erase the partition table. Then pressing Enter on the FREE SPACE entry lets us create a physical volume -- a partition that we set to be used as a physical space for LVM. Once all four LVM partitions are in place, we select Configure the Logical Volume Manager on the partitioning menu.

After a warning about the partition layout, we get to a rather spartan LVM dialog that lets us modify VGs and LVs. According to our plan, we choose the former option and create the two VGs we want, choosing the appropriate PVs. We then select Modify Logical Volumes and create the LVs corresponding to the normal partitions we want to put on the system -- say, one for each of /, /var, /home, and /tmp.
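For reference, the same layout expressed as LVM commands rather than installer menus would look roughly like the sketch below. The partition names, the volume group names small and large, and the way the logical volumes are split between the two groups are all assumptions for this example.

  # The four LVM partitions become physical volumes
  sudo pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

  # A 30GB group from the two small drives, a 200GB group from the two large ones
  sudo vgcreate small /dev/sda1 /dev/sdb1
  sudo vgcreate large /dev/sdc1 /dev/sdd1

  # Logical volumes standing in for the usual partitions
  sudo lvcreate -L 10G  -n root large
  sudo lvcreate -L 25G  -n var  small
  sudo lvcreate -L 100G -n home large
  sudo lvcreate -L 2G   -n tmp  small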

You can already see some of the partition fluidity that LVM brings you. If you decide you want a 25GB logical volume for /var, you can carve it out of the first VG you created, and /var will magically span the two smaller hard drives. If you later decide you've given /var too much space, you can shrink the filesystem and then simply move over some of the storage space from the first VG to the second. The possibilities are endless.
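Here is a sketch of what that resizing looks like once the server is running, assuming the /var logical volume lives in a volume group named small and carries an ext3 filesystem; the sizes are placeholders. Growing can be done online, but ext3 must be unmounted and checked before it can be shrunk, and the filesystem must always be shrunk before the logical volume is reduced.

  # Grow /var by 5GB: extend the LV, then grow the filesystem to fill it
  sudo lvextend -L +5G /dev/small/var
  sudo resize2fs /dev/small/var

  # Shrink /var to 15GB: unmount, check, shrink the filesystem, then the LV
  sudo umount /var
  sudo e2fsck -f /dev/small/var
  sudo resize2fs /dev/small/var 15G
  sudo lvreduce -L 15G /dev/small/var
  sudo mount /var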

Remember, however, that LVM doesn't provide redundancy. The point of LVM is storage fluidity, not fault tolerance. In our example, the logical volume containing the /var filesystem is sitting on a volume group that spans two hard drives. This means that either drive failing will corrupt the entire filesystem, and LVM intentionally doesn't contain functionality to prevent this problem.

When you need fault tolerance, build your volume groups from physical volumes that are sitting on RAID. In our example, we could have made a partition spanning the entire size of the 10GB hard drive and allocated it to physical space for a RAID volume. Then, we could have made two 10GB partitions on the 20GB hard drive and made the first one also a physical space for RAID. Entering the RAID configurator, we would create a RAID 1 array from the 10GB RAID partitions on both drives, but instead of placing a regular filesystem on the RAID array as before, we'd actually designate the RAID array to be used as a physical space for LVM. When we get to LVM configuration, the RAID array would show up as any other physical volume, but we'd know that the physical volume is redundant. If a physical drive fails beneath it, LVM won't ever know, and no data loss will occur. Of course, standard RAID array caveats apply, so if enough drives fail and shut down the array, LVM will still come down kicking and screaming.
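Outside the installer, the same arrangement can be sketched with mdadm and the LVM tools. The device names and the volume group name safe are illustrative; the point is simply that the finished RAID array shows up to LVM as one more physical volume.

  # Mirror the two 10GB partitions into a RAID 1 array
  sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

  # Use the array as an LVM physical volume like any other
  sudo pvcreate /dev/md0
  sudo vgcreate safe /dev/md0
  sudo lvcreate -L 8G -n var safe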

If you've set up RAID and LVM arrays during installation, you'll want to learn how to manage the arrays after the server is installed. We recommend the respective how-to documents from The Linux Documentation Project at http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html and http://www.tldp.org/HOWTO/LVM-HOWTO. The how-tos sometimes get technical, but most of the details should sound familiar if you've understood the introduction to the subject matter here.
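Day to day, a handful of standard status commands will show you most of what those how-tos discuss in detail; /dev/md0 below stands in for whichever array you created.

  # Summarize physical volumes, volume groups, and logical volumes
  sudo pvs
  sudo vgs
  sudo lvs

  # Check on any RAID arrays sitting underneath the LVM layer
  cat /proc/mdstat
  sudo mdadm --detail /dev/md0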


Comments on Get to know Ubuntu's Logical Volume Manager

Note: Comments are owned by the poster. We are not responsible for their content.

Get to know the Linux Logical Volume Manager

Posted by: Anonymous [ip: 74.222.216.191] on July 30, 2008 12:27 PM
Are there any overhead percentage figures to show the additional disk space used for the LVM directory "pointers"? And what about additional time spent to access the additional directory information?

#

Get to know the Linux Logical Volume Manager

Posted by: Anonymous [ip: 81.155.91.100] on July 30, 2008 12:36 PM
Actually, LVM can do mirroring too - you don't need to set up RAID 1.

#

A picture is worth...

Posted by: Anonymous [ip: 192.35.35.35] on July 30, 2008 01:24 PM
From a newb who just discovered LVM this weekend: I found this article (and the picture it contains) to be very helpful: http://www.howtoforge.com/linux_lvm

#

Get to know the Linux Logical Volume Manager

Posted by: Anonymous [ip: 66.217.88.194] on July 30, 2008 03:19 PM
I have been using lvm since fc4 through fc6 and now fc8. Prior to using lvm, I was able to access earlier versions of the OS by using mount. There doesn't seem to be any documentation that I have been able to find on how to mount, say, fc6 from fc8. Dual boot isn't even an option as grub seems to get confused. Bottom line: if I want to copy files from an earlier release, I have to boot into that version and copy files to another computer via ethernet or copy onto optical media. PITA

#

Re: Get to know the Linux Logical Volume Manager

Posted by: Anonymous [ip: 128.104.255.12] on July 31, 2008 12:17 AM
Mounting Logical Volumes from the Volume Groups for older versions of Linux should work, but may take a little digging. Look up the vgscan and maybe lvchange commands. Be careful, though, about LVM versions. Sufficiently old Linux versions may be using the LVM version 1 format as opposed to the current LVM2.

As for GRUB, I must admit I still use the ~100M /boot partition (not in LVM) for each version of a linux OS I'm setting up for multi-booting. It's just been a lot easier to deal with.

#

Re: Get to know the Linux Logical Volume Manager. Accessing previous versions.

Posted by: Anonymous [ip: 129.33.49.251] on July 31, 2008 08:54 PM
If you keep multiple installations on your system, use a different volume group for system data (/, /usr, /opt, /var....) for each installation. Then you can vary on each volume group as needed and mount the LVs wherever you need.

As for dual booting it, you either disable the distro's automatic updating of grub's menu.lst or equivalent and stick the images and initrds in the same one with different parameters for each, or you set up a /boot partition for each installation and have grub chain-load grub.

Or you could just stop wasting time dual booting. If you like hand-holding when managing virtual machines, there are VirtualBox and VMware. If you're more of the DIY type, kvm, kqemu, and xen work well enough.

#

Get to know the Linux Logical Volume Manager

Posted by: Anonymous [ip: 66.142.248.194] on July 30, 2008 03:42 PM
Another friendlier article (no ads, all one page, some pictures).

http://www.ntlug.org/Articles/LVM

#

Why is every tutorial these days about Ubuntu?

Posted by: Anonymous [ip: 68.90.69.174] on July 30, 2008 03:55 PM
Ubuntu may be the most popular distro for the Desktop, but it doesn't come close on the server level. A tutorial/article about LVM would be much more appropriate for CentOS or Fedora than Ubuntu. While I respect the author's right to talk about his distro of choice, a much better title would have been "Logical Volume Manager in Ubuntu" rather than a generic title that suggests it is distro-neutral.

#

For those who do not like to read...

Posted by: Anonymous [ip: 66.167.106.34] on July 31, 2008 03:09 PM
This was a presentation at Lugradio Live on LVM.

http://www.archive.org/details/LRL_USA_2008_lvm

#

Get to know Ubuntu's Logical Volume Manager

Posted by: Anonymous [ip: 66.93.183.244] on July 31, 2008 05:35 PM
LVM on your root fs slices is scary. Perhaps I have been bitten too much by vxvm.

#

Re: Get to know Ubuntu's Logical Volume Manager

Posted by: Anonymous [ip: 204.214.3.243] on July 31, 2008 08:04 PM
Yepper... placing a root under an LVM with an expiring license is painful (I've been down the Veritas route in the past).

#

Re: Get to know Ubuntu's Logical Volume Manager

Posted by: Anonymous [ip: 129.33.49.251] on July 31, 2008 08:29 PM
Having / on a logical volume isn't particularly scary. AIX has been doing it since 1990. Aside from one incident where upgrading AFS corrupted the JFS log in rootvg, it's been rather unexciting aside from "If the disk starts having problems, you'll have problems accessing what's on it."
LVM on Linux, including putting /, /usr, and /var all on a volume group, has been similarly unexciting since the distros worked out how to generate initrds that can successfully detect hardware and vary on volume groups. I do see youngins make the same mistakes I saw people making on AIX back in 1995 (extend the volume group containing / to an external hard drive, then try to reboot the machine with that hard drive removed), but most of this decade has been watching people make the same mistakes made back in the 1990s.

#

Get to know Ubuntu's Logical Volume Manager

Posted by: Anonymous [ip: 198.247.174.254] on August 01, 2008 01:31 AM
How exactly is this "Ubuntu's" LVM? The title makes it sound like this is some kind of distro-specific tool, then the article links to the tldp howto that I use every time I forget the commands.

Personally, I don't like to use LVM for the / partition; I much prefer to put filesystems that might need to actually *grow* on LVM instead. The whole practice of /boot, swap, and / being the only partitions for a server just doesn't sit well with me; you're better off having at least some kind of idea how your server is going to be used and partitioning appropriately. Running a web server? Put most of your drive space in /var/www. File server? How about a mount called /files? DB server? /data would be a good place to start. LVM works quite well here; I use it frequently when building test virtual machines for various installs.

#

Re: Get to know Ubuntu's Logical Volume Manager

Posted by: Anonymous [ip: 68.88.48.126] on August 01, 2008 04:00 PM
Well...

"This article is excerpted from the newly published book The Offical Ubuntu Book, Third Edition published by Prentice Hall Professional, June 2008, Copyright 2008 Canonical, Ltd."

That might be why.

#

Get to know Ubuntu's Logical Volume Manager

Posted by: Anonymous [ip: 99.252.235.116] on August 01, 2008 01:46 PM
I love lvm; it brings a whole new set of possibilities to disk management. Most Unixes use it these days (AIX, HP-UX, Solaris), so getting to know it, or at least its concepts, is definitely useful if you are looking for a UNIX admin position somewhere. LVM2 on Linux is based on HP's LVM implementation, but there are definite differences in the way that it works. I am an HP-UX admin, and I have trouble understanding why you can export a volume group while it is still active in Linux, and why you still see it after you do?!?
Also, the mirroring implementation isn't fully there yet; you can only create a mirrored LV when you first create it, not vgextend it in, as that is not supported yet. This sucks for me, because this is used on all of our HP-UX implementations, and it also means that I can't break the mirror, reduce out a disk, and then recreate it after.

I think Linux LVM has some way to go before it can actually be used in a data center.

#




 