CentOS 7 uses the Logical Volume Manager (LVM) to organize the structure and available capacity of your partitions. It is a very dynamic and flexible system that can be extended or rearranged over time, and which is essential in today’s most demanding and ever-changing environments. At the moment, buzzwords such as big data or cloud computing can be heard everywhere. Since massive amounts of data get produced all the time, storage requirements and disk space have to grow at the same steady pace. In this process, you will learn how to work with the LVM system and how to extend your physical drives, and also how to shrink and extend the capacity of your filesystems.
To Start With: What Do You Need?
To complete this process, you will require a working installation of the CentOS 7 operating system with root privileges. We will use virtual block devices instead of real disk devices to show you from scratch how to set up an LVM first and afterward how to work with it. Please read the Creating a virtual block device process and create three 1 gigabyte virtual block devices with a GPT partition table, which will be labeled /dev/loop0, /dev/loop1, and /dev/loop2 in this example.
Again, feel free to use real disk devices if you feel ready for it.
The Process
First, we will start by creating an LVM test environment similar to the standard CentOS 7 LVM structure, which is set up during the installation of every server system:
- First, let’s log in as root and show information about our virtual block devices:
lsblk -io NAME,SIZE
- Next, create new partitions spanning the whole disk on each of the three virtual block devices (without a filesystem label):
parted -a optimal /dev/loop0 mkpart primary 2048KiB 100%
parted -a optimal /dev/loop1 mkpart primary 2048KiB 100%
parted -a optimal /dev/loop2 mkpart primary 2048KiB 100%
- Now, let’s create LVM physical volumes on each of the loop devices (type yes to remove the gpt label):
pvcreate /dev/loop0p1
pvcreate /dev/loop1p1
pvcreate /dev/loop2p1
- Next, show information about our physical volumes:
pvdisplay
- Next, we will create a new LVM volume group on our first physical volume:
vgcreate myVG1 /dev/loop0p1
- Now, show information about the created group:
vgdisplay myVG1
- Afterward, let’s create some logical volumes on our first volume group, which will be treated as virtual partitions in our Linux system:
lvcreate -L 10m -n swap myVG1
lvcreate -L 100m -n home myVG1
lvcreate -L 400m -n root myVG1
- Next, show information about the logical volumes:
lvdisplay myVG1
- Now, display how much free space our underlying volume group has left, which becomes important if you want to expand some logical volumes (see the section Free PE / Size in the output):
vgdisplay myVG1
- Afterward, let’s create the filesystems on those new logical volumes:
mkswap /dev/myVG1/swap
mkfs.xfs /dev/myVG1/home
mkfs.xfs /dev/myVG1/root
- Now, after we have created our test LVM system (which is very similar to the real CentOS LVM standard layout, but with smaller sizes), let’s start working with it.
- First, let’s shrink the root partition, which is currently 400 megabytes (M) in size, by 200 megabytes, and afterward, let’s increase the home partition by 500 megabytes (confirm the possible data loss):
lvresize -L -200m /dev/myVG1/root
lvresize -L +500m /dev/myVG1/home
- Use vgdisplay myVG1 again to see how the volume group’s free space changes after running the previous commands (see Free PE / Size).
- Now, let’s expand the XFS filesystem on the grown logical volume:
mkdir /media/home-test
mount /dev/myVG1/home /media/home-test
xfs_growfs /dev/myVG1/home
Note
It is very important not to use resize2fs for growing XFS filesystems, because it’s incompatible and can corrupt them.
- Now, let’s say that after some time your data has grown again, and you need the home partition to be 1.5 gigabytes (G), but you only have 184.00 MiB left on the underlying volume group. First, we need to add our two prepared physical volumes from the beginning of this process to our volume group:
vgextend myVG1 /dev/loop1p1 /dev/loop2p1
vgdisplay myVG1
- Afterward, we have enough free space in our volume group (see Free PE / Size) to expand our home logical volume (the volume must stay mounted):
lvresize -L +1500m /dev/myVG1/home
xfs_growfs /dev/myVG1/home
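At this point, as an optional sanity check that is not part of the original steps, you can summarize the resulting layout with the standard reporting commands; /media/home-test is the mount point created earlier in this process:
lsblk -io NAME,SIZE,TYPE,MOUNTPOINT
lvs myVG1
df -h /media/home-test
The df output should now show a filesystem size of roughly 2G for the home volume.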
How Does It Work?
Here, in this process, we have shown you how to work with LVM for XFS partitions. LVM has been developed with the purpose of managing disk space on several hard disks dynamically. You can easily merge many physical hard disks so that they appear as a single virtual hard disk to the system. This makes it a flexible and very scalable system in comparison to working with plain old static partitions. Traditional partitions are bound to, and cannot grow beyond, the total capacity of the disk they reside on, and their static layout cannot be changed easily. We have also introduced some important LVM technical terms that provide different abstraction layers on top of a hard disk and which will be explained in this section so that you can understand the concepts behind them: physical volume (pv), volume group (vg), and logical volume (lv).
So, what did we learn from this experience?
We started this process by creating three virtual block devices of 1 gigabyte (G) each and then one partition spanning the whole device on each of them. Afterwards, we defined these single-partition devices as physical volumes (pvs) using the pvcreate command. A pv is the LVM term for a basic storage unit in the LVM world; it can be defined on a partition, a full drive, or a loop device. A pv is just an abstraction of all the space available in the underlying partition so that we can work with it on an LVM basis. Next, we created a volume group (vg) with the vgcreate command, where we had to define a volume group name of our choice and put the first pv in it as a basic storage volume. As you can see, a vg is a container for at least one pv (we add more pvs later). Adding or removing pvs to or from a vg is the heart of the whole scalability concept of the LVM system. The pvs don’t all have to be the same size, and it is possible to grow your vg over time by adding dozens of new physical drives, all defined as pvs. You can have more than one vg on your system, and you can identify them by the unique names you give them. So, in summary, to extend the space of your vg, you create pvs out of physical drives and then add them to the vg, as the short sketch below illustrates.
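The following is a minimal sketch of these first two layers; /dev/sdb1 here stands for a hypothetical partition on a newly installed disk and is not part of our loop device setup:
pvcreate /dev/sdb1
vgextend myVG1 /dev/sdb1
pvs
vgs
The short-form commands pvs and vgs print a compact summary of all physical volumes and volume groups, which is often quicker to read than the more verbose pvdisplay and vgdisplay output.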
Finally, we created logical volumes (lvs) on our vg, which can be seen and used like real physical partitions within a vg. Here, we created three lvs using the lvcreate command, for which we need to define the name of the vg (remember, there can be more than one vg on your system) that we want to put our target lv on, along with the size of the volume, as well as a name for it as the last parameter. You can put multiple lvs into a vg, and you don’t need to use up all the free space of the underlying vg, so you can be very flexible with it. The best part is that your decision about your volumes’ size and layout doesn’t have to be fixed for all time; you can change it anytime later. It is a very dynamic system that can be extended and shrunk, deleted and created, without having to unmount the volume beforehand. But you have to remember that all lvs are bound to a vg, and it is not possible to create them without one or outside its spatial boundaries. If you need to extend an lv’s space over the borders of the underlying vg, you have to extend the vg first, as shown in this process.
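As an illustration of this flexibility (only a sketch; the lv name data is made up for this example and not used anywhere else in this process), you could create an additional volume that takes half of the vg’s remaining free space and remove it again later:
lvcreate -l 50%FREE -n data myVG1
lvremove /dev/myVG1/data
The lowercase -l option sizes the volume in extents or percentages (such as 50%FREE) instead of absolute units, and lvremove asks for confirmation before deleting the volume.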
Note
As you may have seen, for every LVM term, there is a “display” and “create” command, so it’s easy to remember: pvdisplay, vgdisplay, lvdisplay, pvcreate, vgcreate, lvcreate.
After you have successfully created your lvs, you can work with them as you would with any other block device partition on your system. The only difference is that they reside under special device paths: /dev/<vg name>/<lv name> or /dev/mapper/<vg name>-<lv name>.
For example, the home volume created in this example has the name /dev/myVG1/home. Finally, in order to use them as normal mount points, we created some test filesystems on them.
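If you want such a volume to be mounted automatically at boot, you can add it to /etc/fstab like any other block device. The following line is only a sketch that reuses the home volume and the temporary test mount point from this process:
/dev/mapper/myVG1-home /media/home-test xfs defaults 0 0
Running mount -a afterward lets you verify the entry without rebooting.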
In the second part of this process, we showed you how to extend our vg and how to shrink and expand the lvs of our test system.
We started by using the vgdisplay myVG1 command to show the currently available space on the vg. In the command output, we saw that our current volume group has a total of 996M (VG Size), the allocated size from our lvs (swap, home, root) is 512M (Alloc PE / Size), and the free size is 484M (Free PE / Size). Next, we used the lvresize command to shrink and expand the root and home logical volumes. The -L parameter sets the new size of the volume; with a + or - sign, the value is added to or subtracted from the current size of the logical volume, and without it, the value is taken as an absolute size. Remember that we could only increase the home partition because the current volume layout does not occupy the vg’s complete space. After resizing, if we use the vgdisplay command again, we see that we now occupy more space in the vg; its free size has decreased to 184M. Since we expanded the home volume from 100M to 600M in total, we need to remember to expand its XFS filesystem too, since expanding a volume does not automatically expand its filesystem; otherwise, 500M of the volume would remain unallocated without any filesystem on it. We used the xfs_growfs command, which, if no size parameter is given, grows the XFS filesystem to use the complete unallocated area. If you want to resize any other filesystem type, such as ext4, you would use the resize2fs command instead.
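As a side note, lvresize and lvextend also accept a -r (--resizefs) option, which calls the matching filesystem resize tool (via fsadm) for you, so growing the volume and its filesystem can be combined into one step. The following sketch is equivalent to the two separate commands we used above for the home volume:
lvresize -r -L +500m /dev/myVG1/home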
Finally, we wanted to grow the home volume by another 1.5G, but we only had 184M left on our vg to expand into. This is where LVM really shines, because we can just add some more physical volumes to it (in the real world, you would install new hard disks in your server and use them as pvs). We showed you how to extend the capacity of your vg by adding the two prepared 1G pvs to it using the vgextend command. Afterward, we used vgdisplay to see that our vg has now grown to roughly 3G in total size, so we could finally extend our home lv, as it now fits into it. As a last step, we expanded the XFS filesystem once again to fill up the whole home volume, which is now roughly 2.1G in size.
Please always remember that if you use vgs spanning several physical hard disks, your data will be distributed among them. LVM is not a RAID system and has no redundancy, so if one hard disk fails, your complete vg will fail too and your data will be lost! One way to deal with this problem is to use a physical RAID system for your hard disks and create the LVM on top of that.
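To see how the data of each lv is actually distributed across the pvs in a vg, you can display the underlying device mapping; a quick sketch using the standard reporting commands:
lvs -a -o +devices myVG1
pvdisplay -m
The devices column of lvs and the --maps output of pvdisplay show which physical volumes (and therefore which disks) back each logical volume, which makes it easy to see what you would lose if one of those disks failed.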