Resizing Veritas volumes with vxresize

We came close to running out of space on one of our database volumes last week, and I needed to add storage to keep things running smoothly. The admin who originally created the VxVM database volume used only half of each of the five disks associated with the volume and file system that were at capacity, which left me roughly 18GB of free space on each device to work with:

$ vxdg free | egrep '(D01|D02|D03|D04|D05)'

GROUP        DISK         DEVICE       TAG          OFFSET    LENGTH    FLAGS
datadg       D01          c2t0d0s2     c2t0d0       35547981   35547981  -
datadg       D02          c2t1d0s2     c2t1d0       35547981   35547981  -
datadg       D03          c2t2d0s2     c2t2d0       35547981   35547981  -
datadg       D04          c2t3d0s2     c2t3d0       35547981   35547981  -
datadg       D05          c2t4d0s2     c2t4d0       35547981   35547981  -
datadg       D06          c2t5d0s2     c2t5d0       35547981   35547981  -

There are a number of ways to resize volumes and file systems with VxVM and VxFS. You can use vxassist to grow or shrink a volume and then use the fsadm utility to extend the file system, or you can perform both operations in a single step with vxresize. vxresize takes the name of the volume to resize, the disk group the volume belongs to, and a size parameter: prefix the size with "+" to grow the volume by that amount, or with "-" to shrink it by that amount. Since I wanted the most efficient method, I fired up vxresize and told it to extend the volume named datavol01 by 35547981 blocks:

$ /etc/vx/bin/vxresize -g datadg -F vxfs datavol01 +35547981

Instead of specifying blocks, you can also use unit specifiers such as "m" to denote megabytes and "g" to denote gigabytes. As with all operations that change the structure of storage, you should test any resizing operations on non-production systems before changing production systems.
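As a quick sanity check on the numbers above, the 35547981-sector LENGTH that vxdg free reported for each disk (VxVM sectors are 512 bytes) does work out to roughly 18GB:

```shell
# Convert a VxVM free-space LENGTH (512-byte sectors) to decimal gigabytes.
sectors=35547981
bytes=$((sectors * 512))
echo "$((bytes / 1000 / 1000 / 1000)) GB"
```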

VxFS clear blocks mount option

While reading through the VxFS administrators guide last week, I came across a cool mount option that can be used to zero out file system blocks prior to use:

“In environments where performance is more important than absolute data integrity, the preceding situation is not of great concern. However, for environments where data integrity is critical, the VxFS file system provides a mount -o blkclear option that guarantees that uninitialized data does not appear in a file.”

This is pretty cool, and a useful feature for environments that care deeply about data integrity.
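The option is passed at mount time like any other VxFS mount option; a minimal sketch (the device and mount point below are illustrative, not from the original post):

```shell
# Mount a VxFS file system with blkclear so uninitialized data never
# appears in files (trades some write performance for integrity).
mount -F vxfs -o blkclear /dev/vx/dsk/oradg/oravol01 /u01
```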

Preallocating files sequentially on VxFS file systems

One cool feature that is built into VxFS is the ability to preallocate files sequentially on disk. This capability can benefit sequential workloads, and will typically result in higher throughput since disk seek times are minimized (LBA addressing, disk drive defect management and storage array abstractions can sometimes obscure this, so this may not always be 100% accurate).

To use the VxFS preallocation features, a file first needs to be created:

$ dd if=/dev/zero of=oradata01.dbf count=2097152
2097152+0 records in
2097152+0 records out

In this example, I created a 1GB file (2097152 blocks * 512 bytes per block gives us 1GB) named oradata01.dbf, and double-checked that it was 1GB by running ls with the "-h" option:

$ ls -lh

total 3.1G
-rw-r--r--  1 root root 1.0G Aug 25 09:06 oradata01.dbf
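The block arithmetic above is easy to verify in the shell (dd writes 512-byte blocks by default when no bs= is given):

```shell
# 2097152 blocks * 512 bytes/block = 1073741824 bytes = 1 GB
blocks=2097152
bytes=$((blocks * 512))
echo "$bytes bytes = $((bytes / 1024 / 1024 / 1024)) GB"
```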

After a file of the correct size has been allocated, the setext utility can be used to reserve blocks for that file, and to create an extent that matches the number of blocks allocated to the file:

$ setext -r 2097152 -e 2097152 oradata01.dbf

To verify the settings that were assigned to the file, the getext utility can be used:

$ getext oradata01.dbf

oradata01.dbf:  Bsize  1024  Reserve 2097152  Extent Size 2097152

This is an awesome feature, and yet another reason why VxFS is one of the best file systems available today!

Defragmenting VxFS file systems

I came across Scott Kaiser’s script a while back, and have found it useful for determining if the VxFS free extent map is fragmented. The script takes a file system as an option, and prints a one-line string to indicate if the file system should be defragmented:

$ /u01
/u01 is badly fragmented. Defragmentation is recommended.

If the script determines that the free extent map is fragmented (the script doesn't report on directory entry fragmentation), you can perform an online defragmentation by invoking fsadm with the "-e" (reorganize extents) and "-d" (reorganize directory entries) options:

$ /usr/lib/fs/vxfs/fsadm -v -e -d /u01

UX:vxfs fsadm: INFO: V-3-20287: using device /dev/vx/rdsk/oradg/oravol01
UX:vxfs fsadm: INFO: V-3-20223: directory reorganization complete
UX:vxfs fsadm: INFO: V-3-20261: extent reorg pass 1
AU: aun =  16, tfree =  32064, sfree =    384
aun =  16, seg =   0, nfrag =   95, fblks = 1690, devid =    0 start =   524288, len =  2048
req[0] fset 999 ino 32 blocks 1  off 0x0 len 1
req[0] fset 999 ino 33 blocks 1  off 0x0 len 1
UX:vxfs fsadm: ERROR: V-3-24364: reorg failed for fset 999 ino 33
req[0] fset 999 ino 34 blocks 1  off 0x0 len 1
req[0] fset 999 ino 35 blocks 1  off 0x0 len 1
req[0] fset 999 ino 36 blocks 1  off 0x0 len 1
req[0] fset 999 ino 37 blocks 1  off 0x0 len 1
req[0] fset 999 ino 38 blocks 1  off 0x0 len 1
[ ... ]
aun =  16, seg =   0, nfrag =   86, fblks = 5397, devid =    0 start =   524288, len =  6144
UX:vxfs fsadm: INFO: V-3-20262: extent reorg complete

I prefer to avoid fragmentation, and try to preallocate all files to avoid this issue (for dynamic applications this isn’t always possible). For further information please refer to the fsadm_vxfs(1m) manual page.

Enabling large file support dynamically with VxFS

I recently encountered a VxFS file system that didn’t support largefiles. This issue was causing one of our Oracle databases to complain, which was preventing us from using datafiles optimized for our application access patterns. Since the file system was a Veritas File System (VxFS), I was able to fix this problem with the fsadm utility:

$ /usr/lib/fs/vxfs/fsadm -F vxfs -o largefiles /u01

$ mount -p | grep u01
/dev/vx/dsk/oradg/oravol01 - /u01 vxfs - no rw,suid,delaylog,largefiles,ioerror=mwdisable

This operation can be run against mounted live file systems, which is great for production environments.
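If memory serves, you can also check a file system's current largefiles setting without changing anything by running fsadm against the mount point with no conversion option; verify the exact behavior against the fsadm_vxfs(1m) manual page for your release:

```shell
# Report the current largefiles / nolargefiles setting (read-only check).
/usr/lib/fs/vxfs/fsadm -F vxfs /u01
```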

Growing a Veritas File System

The Veritas File System (VxFS) allows file systems to be grown and shrunk with the fsadm(1m) utility. This can occur while a file system is online, and is relatively safe (I have personally grown dozens of file systems and have yet to encounter a single problem). To display the current size of a file system in blocks, we can use the df(1m) utility:

$ df -t /u05
/u05 (/dev/vx/dsk/oradg/oravol05): 209158736 blocks 3268092 files
total: 209698816 blocks 3268096 files
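In this Solaris df -t output, the first line shows free blocks and files and the "total" line shows capacity, so the space actually in use can be derived with simple arithmetic:

```shell
# Used blocks = total blocks - free blocks (values from the df -t output above).
total=209698816
free=209158736
echo "$((total - free)) blocks in use"
```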

To shrink /u05 to 50000000 blocks, we can invoke fsadm with the desired block count, and the file system to shrink:

$ /usr/lib/fs/vxfs/fsadm -b 50000000 /u05
UX:vxfs fsadm: INFO: V-3-23586: /dev/vx/rdsk/oradg/oravol05 is currently 209698816 sectors - size will be reduced

We can verify that the volume was shrunk with the df(1m) utility:

$ df -t /u05
/u05 (/dev/vx/dsk/oradg/oravol05): 49464784 blocks 772859 files
total: 50000000 blocks 772864 files
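As a sanity check, the used-block counts before and after the shrink should come out roughly the same (small differences are expected, since metadata structures are resized along with the file system):

```shell
# Used blocks before and after the shrink, from the two df -t runs above.
before=$((209698816 - 209158736))
after=$((50000000 - 49464784))
echo "before: $before, after: $after"
```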

We could have grown this file system instead of shrinking it by adjusting the number of blocks passed to the "-b" option. As with all operations that modify the structure of storage, you should test this on a non-production system before implementing it on production servers.