We were patching Exadata and one of the YUM pre-checks failed due to an invalid LVM size. We hadn't done any resizing lately, so the issue had most likely been there for a while and simply gone unnoticed.

For backing up and rolling back OS updates, Exadata takes a snapshot of the / volume onto an LVM volume of identical size.
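The failing pre-check essentially compares the size of the active system volume (LVDbSys1) against its inactive backup counterpart (LVDbSys2). You can run roughly the same comparison yourself; a minimal sketch, assuming the standard VGExaDb naming:

[root@exa1dbadm01 ~]# lvs --noheadings --units g -o lv_name,lv_size VGExaDb | grep LVDbSys

If the two reported sizes differ, the backup would fail, and so the pre-check fails.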

This is what we saw when we checked:

[root@exa1dbadm01 ~]# lvs

  LV                 VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  LVDbOra1           VGExaDb -wi-ao---- 200.00g
  LVDbSwap1          VGExaDb -wi-ao----  24.00g
  LVDbSys1           VGExaDb -wi-ao----  35.00g
  LVDbSys2           VGExaDb -wi-a-----  30.00g
  LVDoNotRemoveOrUse VGExaDb -wi-a-----   1.00g

LVDbSys2 was incorrectly sized: it was 30G while LVDbSys1 was 35G, so it had to be resized to 35G to match.
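Since the backup volume must match the active one exactly, the target size can be read straight from LVDbSys1 rather than guessed; a sketch, using the same device names:

[root@exa1dbadm01 ~]# lvs --noheadings --units g -o lv_size /dev/VGExaDb/LVDbSys1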

First, verify which device holds the root partition and check that the volume group has enough free space to extend the backup LVM.

[root@exa1dbadm01 ~]# imageinfo

Kernel version: 2.6.39-400.277.1.el6uek.x86_64 #1 SMP Wed Feb 24 16:13:42 PST 2016 x86_64
Image kernel version: 2.6.39-400.277.1.el6uek
Image version: 12.1.2.3.1.160411
Image activated: 2016-08-14 00:21:09 +0200
Image status: success
System partition on device: /dev/mapper/VGExaDb-LVDbSys1

[root@exa1dbadm01 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VGExaDb 2 5 0 wz--n- 1.63t 1.35t
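With 1.35t free in VGExaDb, there is plenty of room for the 5G extension. If you prefer to see the headroom expressed as physical extents, vgdisplay shows the same information (a sketch; it prints a "Free PE / Size" line among other details):

[root@exa1dbadm01 ~]# vgdisplay VGExaDb | grep -i free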

Then activate the LVM volume, extend it by 5G, and create a filesystem on it:

[root@exa1dbadm01 ~]# lvchange -a y /dev/mapper/VGExaDb-LVDbSys2
[root@exa1dbadm01 ~]# lvm lvextend -L+5G --verbose /dev/mapper/VGExaDb-LVDbSys2
Finding volume group VGExaDb
Archiving volume group "VGExaDb" metadata (seqno 15).
Extending logical volume VGExaDb/LVDbSys2 to 35.00 GiB
Size of logical volume VGExaDb/LVDbSys2 changed from 30.00 GiB (7680 extents) to 35.00 GiB (8960 extents).
Loading VGExaDb-LVDbSys2 table (252:1)
Suspending VGExaDb-LVDbSys2 (252:1) with device flush
Resuming VGExaDb-LVDbSys2 (252:1)
Creating volume group backup "/etc/lvm/backup/VGExaDb" (seqno 16).
Logical volume LVDbSys2 successfully resized

[root@exa1dbadm01 ~]# mkfs -t ext3 -b 4096 /dev/mapper/VGExaDb-LVDbSys2
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
2293760 inodes, 9175040 blocks
458752 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
280 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
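Before trusting the new filesystem as a rollback target, a read-only check does no harm; a sketch, assuming LVDbSys2 is still unmounted:

[root@exa1dbadm01 ~]# fsck -n /dev/mapper/VGExaDb-LVDbSys2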

And now we are done! You can see LVDbSys1 and LVDbSys2 are both 35G.

[root@exa1dbadm01 ~]# lvs

  LV                 VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  LVDbOra1           VGExaDb -wi-ao---- 200.00g
  LVDbSwap1          VGExaDb -wi-ao----  24.00g
  LVDbSys1           VGExaDb -wi-ao----  35.00g
  LVDbSys2           VGExaDb -wi-a-----  35.00g
  LVDoNotRemoveOrUse VGExaDb -wi-a-----   1.00g

There is also a My Oracle Support note describing the whole process:

Exadata YUM Pre-Checks Fails with ERROR: Inactive lvm not equal to active lvm. Backups will fail. (Doc ID 1988429.1)

After this we were able to run the pre-checks without issues and proceed to patching.
