Sunday, November 27, 2005

RHCE: Run-time RAID Configuration

Here is a link to an excellent article on creating RAID arrays using the mdadm tool.

Here is an excerpt:

Creating an Array

Create (mdadm --create) mode is used to create a new array. In this example I use mdadm to create a RAID-0 at /dev/md0 made up of /dev/sdb1 and /dev/sdc1:

# mdadm --create --verbose /dev/md0 --level=0 \
    --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: chunk size defaults to 64K
mdadm: array /dev/md0 started.

The --level option specifies which type of RAID to create in the same way that raidtools uses the raid-level configuration line. Valid choices are 0, 1, 4, and 5 for RAID-0, RAID-1, RAID-4, and RAID-5, respectively. Linear (--level=linear) is also a valid choice for linear mode. The --raid-devices option works the same as the nr-raid-disks option when using /etc/raidtab and raidtools.
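
Similarly, a two-disk RAID-1 mirror could be created by changing the level; the partitions below are only placeholders for two unused devices:

# mdadm --create --verbose /dev/md1 --level=1 \
    --raid-devices=2 /dev/sdd1 /dev/sde1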

In general, mdadm commands take the format:

mdadm [mode] <raid-device> [options] <component-devices>

Each of mdadm's options also has a short form that is less descriptive but shorter to type. For example, the following command uses the short form of each option and is equivalent to the example shown above, except that -c128 explicitly sets the chunk size to 128K instead of leaving it at the 64K default.

# mdadm -Cv /dev/md0 -l0 -n2 -c128 /dev/sdb1 /dev/sdc1
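
Once the array has been started, its state can be checked from /proc/mdstat or with mdadm's detail option:

# cat /proc/mdstat
# mdadm --detail /dev/md0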

Saturday, November 12, 2005

RHCE: Installing RPMs from an NFS share

Using an NFS share to install packages is a very convenient means of ensuring that all of your systems are running similar packages and always having those packages available. I will not cover configuring an NFS share and will assume that the reader is already familiar with that function or has the capacity to figure it out.

1. Verify that the NFS share is available and mount it on the local filesystem, if it is not already mounted.

mount -t nfs nfs.server:/path/to/share /local/install/point
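
If you are not sure what the server exports, showmount can list the available shares before you mount anything (the hostname here is only a placeholder):

showmount -e nfs.server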

2. If using the command line, simply use the rpm command to install the application:

rpm -Uvh /path/to/share/application.rpm

3. If using the package manager, use the following command from the command line:

system-config-packages --tree=/path/to/nfs/share &

You will now be able to select the applications that you would like to add or remove.
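
Either way, you can confirm afterwards that a given package made it onto the system by querying the RPM database (the package name here is only an example):

rpm -q application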

RHCE: Logical Volume Manager

Red Hat Enterprise Linux 4.0 uses a logical volume manager to facilitate efficient management of disks and partitions. With the Logical Volume Manager, logical volumes can be created that span multiple physical disks and partitions; in LVM terms, a physical volume is typically a whole disk or a disk partition. With this ability, an administrator can easily expand a volume or create an efficient partitioning scheme. RHEL 4.0 uses LVM2 by default.

When using LVM, it may be difficult to remember all of the commands that are possible and necessary to create a Logical Volume. An easy way to get a list of all of the related commands is to enter the lvm console by typing 'lvm' on the command line. Once in the LVM console, type 'help' and all of the available commands will be listed with a short description.
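
For example:

# lvm
lvm> help
lvm> quit

The same commands (pvcreate, vgcreate, lvcreate, and so on) can also be run directly from the normal shell prompt, which is why some of the samples below show a root prompt rather than the lvm> prompt.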

CAUTION: using the logical volume manager can and probably will destroy data. Verify that you have created backups of all of your data before trying the samples below.

Using the Logical Volume Manager

1. Partition the physical hard disks that will be used as part of the Logical Volume(s)

Use 'fdisk' as appropriate. Remember to set the system type as 'Linux LVM', which is type '8e'.
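
As a rough sketch (assuming /dev/hdd is a spare disk, and that you accept the partition number and size prompts fdisk offers along the way), the interactive session looks something like this:

# fdisk /dev/hdd
n    (create a new partition)
t    (change the partition type; enter 8e for 'Linux LVM')
w    (write the partition table and exit)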

2. Create physical volume(s)

From within the lvm console, use pvcreate on each partition that will participate in the logical volume group.

pvcreate 'physical partition'

Sample:

lvm> pvcreate /dev/hdd1
Incorrect metadata area header checksum
Physical volume "/dev/hdd1" successfully created
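
The new physical volume can then be checked with pvdisplay:

lvm> pvdisplay /dev/hdd1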

3. Create a volume group

Using the vgcreate command, create a volume group which consists of the physical volumes created previously.

vgcreate 'volume group name' 'physical volume' ['physical volume'] ...

Sample:

lvm> vgcreate test1 /dev/hdd1 /dev/hdd2
Incorrect metadata area header checksum
Volume group "test1" successfully created

Verification of the volume group creation can be done with the vgdisplay command:

lvm> vgdisplay
--- Volume group ---
VG Name test1
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 93.15 GB
PE Size 4.00 MB
Total PE 23846
Alloc PE / Size 0 / 0
Free PE / Size 23846 / 93.15 GB
VG UUID znEYOy-n4oJ-zmXq-QARI-cRvD-YmY5-Gq6qpd

4. Create a logical volume in an existing Logical Volume Group

Using the lvcreate command, create a logical volume:

lvcreate [-L 'size'] [-n 'logical volume name'] 'logical volume group'

Sample:

lvcreate -L 200M -n vol1 test1
Incorrect metadata area header checksum
Logical volume "vol1" created
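
The size can also be given as a number of physical extents with the lowercase -l option; with the 4.00 MB extent size reported by vgdisplay above, the following command would create an equivalent 200 MB volume (vol2 is just an example name):

lvm> lvcreate -l 50 -n vol2 test1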

Verify that the logical volume is as desired with the lvdisplay command:

lvm> lvdisplay
Incorrect metadata area header checksum
--- Logical volume ---
LV Name /dev/test1/vol1
VG Name test1
LV UUID HPqCiY-58NT-X1ae-5vk3-1hLw-f2no-AYe52O
LV Write Access read/write
LV Status available
# open 0
LV Size 200.00 MB
Current LE 50
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:0

5. Format the logical volume with the filesystem desired (ext3 shown)

[root@primary ~]# mke2fs -j /dev/test1/vol1
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
51200 inodes, 204800 blocks
10240 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
25 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
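
If those periodic checks are not wanted on this volume, tune2fs can disable them, as the output suggests:

# tune2fs -c 0 -i 0 /dev/test1/vol1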

6. In good Red Hat form, create a label for the filesystem with the e2label command

[root@primary ~]# e2label /dev/test1/vol1 test1

7. Create an entry in the /etc/fstab file for this logical volume so that it will be mounted on subsequent boots. Use a meaningful mount point:

LABEL=test1 /test1 ext3 defaults 0 0
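
The mount point must exist before the volume can be mounted, so create it and mount the new filesystem:

# mkdir /test1
# mount /test1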


These steps have covered how to create a logical volume and use it. You can also perform other maintenance tasks on an existing logical volume; the only one covered here is expanding it.

Expand a logical volume

1. Add desired disk space to your volume group, if necessary

a. use fdisk to create a new partition of type '8e'
b. use pvcreate to initialize the new partition as a physical volume

[root@primary ~]# pvcreate /dev/hdd3
Physical volume "/dev/hdd3" successfully created

c. use vgextend to add the physical volume to the volume group

[root@primary ~]# vgextend test1 /dev/hdd3
Incorrect metadata area header checksum
Incorrect metadata area header checksum
Volume group "test1" successfully extended

CAUTION: Re-formatting the logical volume in step (e) below will remove all data on it. Verify that you have backups.

d. use lvextend to expand the logical volume to the desired size

[root@primary ~]# lvextend -L 300M /dev/test1/vol1
Incorrect metadata area header checksum
Extending logical volume vol1 to 300.00 MB
Logical volume vol1 successfully resized

e. re-format your logical volume, relabel it, and remount it

umount 'logical volume'
mke2fs -j 'logical volume'
e2label 'path to volume' 'label'
mount 'mount point from /etc/fstab'
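
Using the names from the walkthrough above, that would look something like this:

# umount /test1
# mke2fs -j /dev/test1/vol1
# e2label /dev/test1/vol1 test1
# mount /test1

If the existing data needs to be preserved, the ext3 filesystem can usually be grown in place instead (for example with resize2fs while the volume is unmounted) rather than re-created, but the re-format approach is what these steps assume.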


Troubleshooting

1. 'physical partition' not identified as an existing physical volume

lvm> vgcreate 'volume group' 'physical partition' 'physical partition'
Incorrect metadata area header checksum
Incorrect metadata area header checksum
No physical volume label read from 'physical partition'
not identified as an existing physical volume
Unable to add physical volume 'physical partition' to volume group 'volume group'.

To correct this problem, use lvm to create a physical volume on each partition with the pvcreate command:

lvm> pvcreate 'physical partition'
Incorrect metadata area header checksum
Physical volume "physical partition" successfully created

Then create the volume group again with the vgcreate command, as shown in step 3 above.