RHCE Red Hat Certified Engineer Linux Study Guide (Exam RH302), Fourth Edition [Electronic resources]

Michael Jang

Certification Objective 3.05: RAID Configuration and Data Recovery



A Redundant Array of Independent Disks (RAID) is a series of disks that can preserve your data even after a catastrophic failure on one of the disks. While some levels of RAID make complete copies of your data, others use parity information that allows your computer to rebuild the data from a lost disk.
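To see why parity information is enough, consider a toy example (a sketch; real arrays work on much larger blocks of data): if Disk 1 holds the bits 1010 and Disk 2 holds 0110, the parity disk stores their bitwise XOR, 1100. If Disk 2 then fails, XORing Disk 1 with the parity data recovers its contents: 1010 XOR 1100 = 0110.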

Linux RAID has come a long way. A substantial number of hardware RAID products support Linux, especially those from name-brand PC manufacturers. Dedicated RAID hardware can ensure the integrity of your data even if there is a catastrophic physical failure on one of the disks.

Depending on your definitions, RAID has nine or ten different levels, which accommodate different degrees of data redundancy. Only three levels of RAID are supported directly by RHEL 3: levels 0, 1, and 5. Hardware RAID uses a RAID controller connected to an array of several hard disks; a driver must be installed before the controller can be used. Most RAID is hardware based, and when properly configured, the failure of one drive in a RAID 1 or RAID 5 array does not destroy the data in the array. Linux, meanwhile, offers a software implementation of RAID. Once RAID is configured on a sufficient number of partitions, Linux can use those partitions just as it would any other block device. However, to ensure real redundancy, in practice it's up to you to make sure that each partition in a Linux software RAID array resides on a different physical hard disk.





On The Job

The RAID md device is a meta device. In other words, it is a composite of two or more other devices, such as /dev/hda1 and /dev/hdb1, that serve as components of a RAID array.


The following sections describe the basic RAID levels supported on Red Hat Enterprise Linux 3.


RAID 0


This level of RAID makes it faster to read and write to the hard drives. However, RAID 0 provides no data redundancy. It requires at least two hard disks.

Reads and writes are done in parallel; in other words, data is read from and written to two or more hard disks simultaneously. All hard drives in a RAID 0 array are filled equally. But since RAID 0 provides no data redundancy, a failure of any one of the drives results in total data loss. RAID 0 is also known as 'striping without parity.'
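For reference, here is a minimal /etc/raidtab sketch for a two-disk RAID 0 array, in the same format used later in this chapter; the member partitions /dev/hdb1 and /dev/hdc1 are hypothetical examples:

raiddev /dev/md0
raid-level 0
nr-raid-disks 2
nr-spare-disks 0
persistent-superblock 1
chunk-size 4
device /dev/hdb1
raid-disk 0
device /dev/hdc1
raid-disk 1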


RAID 1


This level of RAID mirrors information across two or more disks. In other words, the same set of information is written to each of two different hard disks. If one disk is damaged or removed, you still have all of the data on the other hard disk. The disadvantage of RAID 1 is that data has to be written twice, which can reduce performance. You can come close to maintaining the same level of performance if each mirrored disk has its own hard disk controller; that prevents any single controller from becoming a bottleneck.

RAID 1 is also expensive: to support it, you need an additional hard disk for every hard disk's worth of data. RAID 1 is also known as disk mirroring.


RAID 4


While this level of RAID is not directly supported by current Linux distributions associated with Red Hat, it is still supported by the current Linux kernel. RAID 4 requires three or more disks. As with RAID 0, data reads and writes are done in parallel to all disks. One of the disks maintains the parity information, which can be used to reconstruct the data. Reliability is improved, but since parity information is updated with every write operation, the parity disk can be a bottleneck on the system. RAID 4 is known as disk striping with parity.


RAID 5


Like RAID 4, RAID 5 requires three or more disks. Unlike RAID 4, RAID 5 distributes, or 'stripes,' parity information evenly across all the disks. If one disk fails, the data can be reconstructed from the parity information on the remaining disks. The array does not stop; all data is still available even after a single disk failure. RAID level 5 is the preferred choice in most cases: the performance is good, data integrity is ensured, and only one disk's worth of space is lost to parity data. RAID 5 is also known as disk striping with parity.





On The Job

Hardware RAID systems should be 'hot-swappable.' In other words, if one disk fails, the administrator can replace the failed disk while the server is still running. The system then automatically rebuilds the data onto the new disk. If you configure several partitions from the same physical disk as members of a software RAID array, you give up this protection: the failure of that one disk takes out multiple members of the array at once. Alternatively, you may be able to set up 'spare disks' on your servers; RAID can automatically rebuild data from a lost hard drive onto properly configured spare disks.



RAID in Practice


RAID is associated with a substantial amount of data on a server. It's not uncommon to have a couple dozen hard disks working together in a RAID array. That much data can be rather valuable.








Inside The Exam

Creating RAID Arrays

During the Red Hat Installation and Configuration exam, it's generally easier to do as much as possible during the installation process. If you're asked to create a RAID array, it's easiest to do so with Disk Druid, which is available only during installation. You can create RAID arrays once RHEL is installed, but as you'll see in the following instructions, it is more time-consuming and involves a process that is harder to remember.

However, if you're required to create a RAID array during your exam and forget to create it during the installation process, not all is lost. You can still use the tools I describe in this chapter to create and configure RAID arrays during the exam. And the skills you learn here can serve you well through your career.

The exam may use examples from RAID levels 0, 1, and/or 5. However, if the PC that you're using on the exam includes only one physical hard disk, you may have to configure multiple RAID partitions on the same disk.











If continued performance through a hardware failure is important, you can assign additional disks for 'failover,' which sets up spare disks for the RAID array. When one disk fails, it is marked as bad. The data is almost immediately reconstructed on the first spare disk, resulting in little or no downtime. The next example demonstrates this practice in both RAID 1 and RAID 5 arrays. Assuming your server has four physical drives, with the OS loaded on the first, it should look something like this:


All four drives (hda, hdb, hdc, hdd) should be approximately the same size.

This first example shows how to mirror both the /home and the /var directories (RAID 1) on Drive 2 and Drive 3, leaving Drive 4 as a spare.

You need to create nearly identically sized partitions on Drives 2 and 3. In this example, all four disks are configured with four partitions of the same size. If you use the Linux fdisk program, use the t command to change a partition's type; set each RAID member partition to type fd, which corresponds to Linux raid autodetect. You'll get to test this for yourself shortly in an exercise, as well as in a lab at the end of this chapter.

The partition table of the first drive includes /dev/hda3 (currently mounted as /home) and /dev/hda4 (currently mounted as /var). The second drive includes /dev/hdb3 and /dev/hdb4. The third drive is set up with /dev/hdc3 and /dev/hdc4, while the last drive has /dev/hdd3 and /dev/hdd4. All of these partitions have been marked with partition IDs of type 0xFD.
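To double-check your work, list each drive's partition table; every RAID member should show an Id of fd, with the System column reading 'Linux raid autodetect':

# fdisk -l /dev/hdb

Repeat for /dev/hda, /dev/hdc, and /dev/hdd.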





Exam Watch

If you need to create a raidtab configuration file during the exam, it may be faster to start with one of the sample raidtab configuration files. There are several available in the following directory: /usr/share/doc/raidtools-1.00.3.
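For example, you might list that directory and copy one sample over as a starting point. This is a sketch; the exact sample filenames, such as the raid1.conf.sample shown here, depend on the package version:

# ls /usr/share/doc/raidtools-1.00.3
# cp /usr/share/doc/raidtools-1.00.3/raid1.conf.sample /etc/raidtab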


Next, update the configuration file /etc/raidtab. As shown in the following code, you'll see two different RAID 1 arrays (/dev/md0 and /dev/md1):

raiddev /dev/md0
raid-level 1
nr-raid-disks 2
nr-spare-disks 1
persistent-superblock 1
chunk-size 4
device /dev/hdb3
raid-disk 0
device /dev/hdc3
raid-disk 1
device /dev/hdd3
spare-disk 0

raiddev /dev/md1
raid-level 1
nr-raid-disks 2
nr-spare-disks 1
persistent-superblock 1
chunk-size 4
device /dev/hdb4
raid-disk 0
device /dev/hdc4
raid-disk 1
device /dev/hdd4
spare-disk 0

Table 3-4 lists some of these commands, along with a brief description of what each does. If you haven't already done so, it's time to format these partitions and convert them to the default ext3 filesystem.

Table 3-4: Commands in raidtab

Command                   Description
nr-raid-disks             Number of RAID disks to use
nr-spare-disks            Number of spare disks to use
persistent-superblock     Required for autodetection
chunk-size                Amount of data to read/write
parity-algorithm          How RAID 5 should use parity







Exam Watch

Take special note that raid-disk and spare-disk entries start counting at 0, while nr-raid-disks and nr-spare-disks specify the total number of drives. For example, if nr-raid-disks = 3, the raid-disk entries are numbered 0, 1, and 2.


The Linux format command is mkfs; with the -j switch, it sets up the ext3 filesystem with a journal. For example, the following command formats the /dev/hda4 partition:

# mkfs -j /dev/hda4

If the partitions in /etc/raidtab are new, repeat this command for each of the other partitions.
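If you have several member partitions to format, a short shell loop saves typing. This sketch assumes the member partitions from the sample /etc/raidtab shown earlier:

# for part in /dev/hdb3 /dev/hdc3 /dev/hdd3 /dev/hdb4 /dev/hdc4 /dev/hdd4
> do
>   mkfs -j $part
> done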

Partitions from older Linux computers may be formatted to the ext2 filesystem, which is essentially the same as ext3 without a journal. You can add journaling to an older partition with a command such as:

# tune2fs -j /dev/hda4

When a journal is added to the ext2 filesystem, it upgrades that partition to the ext3 filesystem.
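To confirm that a journal is now present, you can inspect the filesystem's feature list; this is a sketch, and your full output will vary:

# dumpe2fs -h /dev/hda4 | grep -i features

Look for has_journal in the resulting Filesystem features line.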





On The Job

There are advantages to the ext3 journaling filesystem. If your system suffers a sudden power failure, it does not have to check every inode for file data; the information is already available in the journal.


The aforementioned /etc/raidtab file includes two RAID devices, /dev/md0 and /dev/md1. To start RAID 1 on those devices, run the following commands:

# mkraid -R /dev/md0
# mkraid -R /dev/md1

If it works, you'll see the result in the dynamic /proc/mdstat file. You can now format the device with the appropriate mkfs command and mount it on the Linux directory of your choice. You can even set it up to be automatically mounted through /etc/fstab, as described in Chapter 4.
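For example (the output here is illustrative; the block counts and device names depend on your system):

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdc3[1] hdb3[0]
      505920 blocks [2/2] [UU]
md1 : active raid1 hdc4[1] hdb4[0]
      505920 blocks [2/2] [UU]

The [UU] notation indicates that both members of each mirror are up.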





On The Job

Yes, when you configure a RAID device, you're formatting the same space twice. First, you format the partitions that make up the array. Once you've made RAID devices such as /dev/md0, you can then format those devices as if they were new partitions.


For a RAID 5 array on the /var partition (in order to preserve mail, print spools, and log files), the /etc/raidtab file should be modified as follows:

raiddev /dev/md0
raid-level 5
nr-raid-disks 3
nr-spare-disks 1
persistent-superblock 1
chunk-size 32
parity-algorithm right-symmetric
device /dev/hda4
raid-disk 0
device /dev/hdb4
raid-disk 1
device /dev/hdc4
raid-disk 2
device /dev/hdd4
spare-disk 0

Now you can run mkraid /dev/md0 to initialize this RAID 5 device. You can then format and mount this RAID array on the Linux directory of your choice.


Formatting the RAID Array


Now you can run the mkfs command to format each RAID array. It's fairly simple: now that you've created arrays such as /dev/md0 and /dev/md1, you can work with them as if they were any other hard drive partition. For example, you can format these arrays to the ext3 filesystem with the following commands:

# mkfs -j /dev/md0
# mkfs -j /dev/md1

The process is straightforward. For example, if you wanted to mount the /home/mj directory on the first RAID array, you'd run the following commands (assume the /hometmp directory exists):

# cp -r /home/mj /hometmp
# mount /dev/md0 /home/mj
# cp -r /hometmp/mj /home

Setting up RAID on a critical set of files such as a /boot directory partition is a bit trickier. Because of the importance of this data, manually copy the contents of the /boot directory (as well as the boot loader file, /etc/grub.conf or /etc/lilo.conf) to a different physical drive.
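As a hedged sketch, assuming /backup is a directory on a different physical drive (the path is hypothetical):

# cp -a /boot /backup/
# cp /etc/grub.conf /backup/

The cp -a switch copies recursively while preserving ownership, permissions, and timestamps.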


Implementing the RAID Array


But that's not the last step. You may not get full credit for your work on the exam unless the directory gets mounted on the RAID array when you reboot your Linux computer. Based on a standard RHEL 3 /etc/fstab configuration file, you might add the following line to that file:

LABEL=/home/mj   /home/mj    ext3     defaults    1 2

Before this line can work, you'll need to set the label for this directory with the following command:

# e2label /dev/md0 /home/mj
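You can verify both settings before rebooting. Running e2label without a label argument prints the current label, and mount -a mounts anything listed in /etc/fstab that isn't already mounted:

# e2label /dev/md0
/home/mj
# mount -a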

I describe the /etc/fstab file, including the meaning of the data in each of these columns, in more detail in Chapter 4.

Exercise 3-1: Mirror the /home Partition with Software RAID






Don't do this exercise on a production computer. If you have a computer with Red Hat Enterprise Linux already installed with several different physical hard drives that you can use for testing, that is best. One alternative is to use virtual machine technology such as VMware, which allows you to set up these exercises with minimal risk to a production system. You can also set up several IDE and SCSI hard disks on a VMware machine. When you're ready, use the Linux fdisk techniques discussed earlier in this chapter to configure the following two-drive partition scheme:

Drive 1 (sizes in MB):
hda1    256   /
hda2     64   swap
hda3    500   /home
hda4    256   /var

Drive 2 (sizes in MB):
hdb1   1200   /usr
hdb2     64   swap
hdb3    100   /tmp
hdb4    500   (not allocated)

Now with the following steps, you can create a mirror of hda3, which stores the /home directory, to the hdb4 partition. (The partition sizes do not have to be identical.)

If you're making fdisk changes on a production computer, back up the data in the /home partition first. Otherwise, all data on the current /dev/hda3 will be lost.



Mark the two partition IDs as type fd using the Linux fdisk utility.

# fdisk /dev/hda
Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Command (m for help): w
# fdisk /dev/hdb
Command (m for help): t
Partition number (1-4): 4
Hex code (type L to list codes): fd
Command (m for help): w



Update the configuration file /etc/raidtab with these lines of code:

# vi /etc/raidtab
raiddev /dev/md0
raid-level 1
nr-raid-disks 2
nr-spare-disks 0
persistent-superblock 1
chunk-size 4
device /dev/hda3
raid-disk 0
device /dev/hdb4
raid-disk 1



Now initialize the RAID device /dev/md0 and format it this way:

# mkraid -R /dev/md0
# mkfs -j /dev/md0



All that's left is to mount the new device and restore the files to it.
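As a sketch, assuming you backed up the contents of /home to a temporary directory such as /hometmp before running mkraid (the backup location is hypothetical):

# mount /dev/md0 /home
# cp -a /hometmp/. /home/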



However, for the exam, you may not get full credit for your work unless your Linux system mounts the directory on the RAID device. Make sure to do so in the /etc/fstab configuration file. Run the e2label command as required to make sure that the LABEL that you add to /etc/fstab is read properly the next time you boot Linux.
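For example, following the pattern shown earlier in the chapter, you might label the new array and add a matching line; this is a sketch based on this exercise's layout:

# e2label /dev/md0 /home

Then, in /etc/fstab:

LABEL=/home   /home    ext3     defaults    1 2

Remember to remove or comment out the old /home entry for /dev/hda3 so that only the RAID device is mounted there.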













