There are quite a few changes in OEL 7 (and its twins CentOS 7, RHEL 7, etc.). One of them is that the default filesystem is now xfs instead of ext4. Well, I’d just got used to the latter, and in comes xfs, which you cannot shrink?!
I asked for and got a nice new VM to install one of our products on, but the OEL7 installer, in a wisdom that remains its own, decided to create one massive /home partition filling up the whole disk alongside the normal /boot and / partitions.
# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/ol-root   50G  1.2G   49G   3% /
devtmpfs             992M     0  992M   0% /dev
tmpfs               1002M     0 1002M   0% /dev/shm
tmpfs               1002M  8.4M  994M   1% /run
tmpfs               1002M     0 1002M   0% /sys/fs/cgroup
/dev/mapper/ol-home  248G   33M  248G   1% /home
/dev/sda1            497M  125M  372M  26% /boot
However, we needed separate /u01, /u02, /u03 and /u04 volumes. That should be easy: a quick resize of the logical volumes, create the new ones, and job done. But the new default filesystem cannot be reduced, only increased, so we’ll have to do it the hard way using ssm, the new tool for managing volumes in OEL 7. Let’s install that first; it is part of the system-storage-manager package:
yum -y install system-storage-manager
And run it (widen your terminal window first):
ssm list
------------------------------------------------------------
Device        Free       Used       Total      Pool  Mount point
------------------------------------------------------------
/dev/sda                            300.00 GB        PARTITIONED
/dev/sda1                           500.00 MB        /boot
/dev/sda2     64.00 MB   299.45 GB  299.51 GB  ol
------------------------------------------------------------
---------------------------------------------------
Pool  Type  Devices  Free      Used       Total
---------------------------------------------------
ol    lvm   1        64.00 MB  299.45 GB  299.51 GB
---------------------------------------------------
-------------------------------------------------------------------------------
Volume        Pool  Volume size  FS   FS size    Free       Type    Mount point
-------------------------------------------------------------------------------
/dev/ol/root  ol    50.00 GB     xfs  49.98 GB   49.05 GB   linear  /
/dev/ol/swap  ol    2.00 GB                                 linear
/dev/ol/home  ol    245.57 GB    xfs  247.32 GB  247.32 GB  linear  /home
/dev/sda1           500.00 MB    xfs  496.67 MB  396.82 MB  part    /boot
-------------------------------------------------------------------------------
That 247 GB /home is not what we need, so here’s the plan:
- move or back up /home/* into some folder on the / volume
- drop /home
- recreate the new volumes using ssm
- restore /home/* into the new volume of the same name
Backup
The -r option copies recursively, and -p preserves ownership, permissions and timestamps:
mkdir -p /backup
cp -rp /home/* /backup/
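As a side note, tar is another way to take such a backup: it also preserves ownership, permissions and timestamps, and the resulting archive is easy to copy off the box. A minimal sketch against a throwaway directory (the paths here are illustrative; on the real system you would archive /home into /backup):

```shell
# Build a scratch /home lookalike so the commands can be tried safely
demo=$(mktemp -d)
mkdir -p "$demo/home/alice" "$demo/backup"
echo "export EDITOR=vi" > "$demo/home/alice/.bashrc"

# -c create, -p preserve permissions, -f archive file;
# -C changes directory first so the archive holds relative paths
tar -C "$demo/home" -cpf "$demo/backup/home.tar" .

# List the archive contents
tar -tf "$demo/backup/home.tar"
```

On the real machine that would be `tar -C /home -cpf /backup/home.tar .`, restored later with `tar -C /home -xpf /backup/home.tar` once the new volume is mounted.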
Drop the /home using ssm
umount /home/
ssm remove /dev/ol/home
Do you really want to remove active logical volume home? [y/n]: y
  Logical volume "home" successfully removed
Recreate the volumes
Required mount points
This neat trick allows you to create all four folders in one command:
mkdir -p /u0{1,2,3,4}
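The braces are expanded by the shell before mkdir ever runs, so the one-liner is exactly equivalent to passing the four paths individually. A quick way to see this, using a scratch directory rather than /:

```shell
# echo shows what the shell actually hands to mkdir after brace expansion
echo mkdir -p /u0{1,2,3,4}
# prints: mkdir -p /u01 /u02 /u03 /u04

# Try it against a temporary directory instead of the filesystem root
tmp=$(mktemp -d)
mkdir -p "$tmp"/u0{1,2,3,4}
ls "$tmp"   # lists u01 u02 u03 u04
```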
Recreate /home
Here I use the label ‘home’ instead of calling my volume ‘lvhome’ like the rest of them: with ‘lvhome’ it would fail to mount for some reason, whether using the traditional mount command or the ssm variant (ssm mount /dev/ol/lvhome /home).
ssm create -s 10G -n home --fstype xfs -p ol /dev/sda2 /home
You’ll get the following warning message; you can answer yes to wipe the signature:
WARNING: xfs signature detected on /dev/ol/home at offset 0. Wipe it? [y/n]:
  Wiping xfs signature on /dev/ol/home.
  Logical volume "home" created.
meta-data=/dev/ol/home           isize=256    agcount=4, agsize=655360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Create all others
ssm create -s 20G -n lvorabin --fstype xfs -p ol /dev/sda2 /u01
ssm create -s 100G -n lvoradat --fstype xfs -p ol /dev/sda2 /u02
ssm create -s 100G -n lvorafra --fstype xfs -p ol /dev/sda2 /u03
ssm create -s 10G -n lvoraexp --fstype xfs -p ol /dev/sda2 /u04
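Since the four calls differ only in size, volume name and mount point, they can also be driven from a small table in a loop. The echo makes this a dry run that merely prints each command; remove it to create the volumes for real (same sizes and names as above, run as root):

```shell
# Dry run: print one ssm create command per line of the table.
# Drop the leading 'echo' to actually execute each command.
while read -r size name mnt; do
  echo ssm create -s "$size" -n "$name" --fstype xfs -p ol /dev/sda2 "$mnt"
done <<'EOF'
20G lvorabin /u01
100G lvoradat /u02
100G lvorafra /u03
10G lvoraexp /u04
EOF
```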
Check the results with ssm
ssm list
-----------------------------------------------------------
Device      Free      Used       Total      Pool  Mount point
-----------------------------------------------------------
/dev/sda                         300.00 GB        PARTITIONED
/dev/sda1                        500.00 MB        /boot
/dev/sda2   7.51 GB   292.00 GB  299.51 GB  ol
-----------------------------------------------------------
--------------------------------------------------
Pool  Type  Devices  Free     Used       Total
--------------------------------------------------
ol    lvm   1        7.51 GB  292.00 GB  299.51 GB
--------------------------------------------------
-----------------------------------------------------------------------------------
Volume            Pool  Volume size  FS   FS size    Free       Type    Mount point
-----------------------------------------------------------------------------------
/dev/ol/root      ol    50.00 GB     xfs  49.98 GB   49.05 GB   linear  /
/dev/ol/swap      ol    2.00 GB                                 linear
/dev/ol/home      ol    10.00 GB     xfs  9.99 GB    9.99 GB    linear  /home
/dev/ol/lvorabin  ol    20.00 GB     xfs  19.99 GB   19.99 GB   linear  /u01
/dev/ol/lvoradat  ol    100.00 GB    xfs  99.95 GB   99.95 GB   linear  /u02
/dev/ol/lvorafra  ol    100.00 GB    xfs  99.95 GB   99.95 GB   linear  /u03
/dev/ol/lvoraexp  ol    10.00 GB     xfs  9.99 GB    9.99 GB    linear  /u04
/dev/sda1               500.00 MB    xfs  496.67 MB  396.82 MB  part    /boot
-----------------------------------------------------------------------------------
Modify /etc/fstab
Otherwise the mount points won’t survive a reboot. Note that we use the /dev/mapper names rather than the volume names shown in the ssm list:
/dev/mapper/ol-home      /home  xfs  defaults  1 2
/dev/mapper/ol-lvorabin  /u01   xfs  defaults  1 2
/dev/mapper/ol-lvoradat  /u02   xfs  defaults  1 2
/dev/mapper/ol-lvorafra  /u03   xfs  defaults  1 2
/dev/mapper/ol-lvoraexp  /u04   xfs  defaults  1 2
Restore backup
And reboot afterwards, just to make sure everything is OK
cp -rp /backup/* /home/
reboot
1 thought on “Managing partitions in Oracle Enterprise Linux 7”