NOTE: CentOS Enterprise Linux 5 is built from the Red Hat Enterprise Linux source code. Other than logo and name changes, CentOS Enterprise Linux 5 is compatible with the equivalent Red Hat version. This document applies equally to both Red Hat Enterprise Linux 5 and CentOS 5.

22.3. Using mdadm to Configure RAID-Based and Multipath Storage

Like the other tools comprising the raidtools package set, the mdadm command can be used to perform all of the functions needed to administer multiple-device sets. This section explains how mdadm can be used to:

  • Create a RAID device

  • Create a multipath device

22.3.1. Creating a RAID Device With mdadm

To create a RAID device, edit the /etc/mdadm.conf file to define appropriate DEVICE and ARRAY values:

DEVICE /dev/sd[abcd]1
ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1

In this example, the DEVICE line uses traditional file name globbing (refer to the glob(7) man page for more information) to match the following SCSI devices:

  • /dev/sda1

  • /dev/sdb1

  • /dev/sdc1

  • /dev/sdd1

The ARRAY line defines a RAID device (/dev/md0) composed of the SCSI devices matched by the DEVICE line.
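
The DEVICE line also accepts an explicit list of names, which can be clearer when the member devices do not share a common pattern. The following sketch is equivalent to the configuration above:

DEVICE /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1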

Before any RAID devices are created or used, the /proc/mdstat file shows no active RAID devices:

Personalities :
read_ahead not set
Event: 0
unused devices: <none>

Next, use the above configuration and the mdadm command to create a RAID 0 array:

mdadm -C /dev/md0 --level=raid0 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 \
/dev/sdd1
Continue creating array? yes
mdadm: array /dev/md0 started.
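
Once started, /dev/md0 behaves like any other block device. As a minimal sketch (the mount point /mnt/raid is hypothetical), the array could be formatted with an ext3 file system, mounted, and its definition recorded so it is assembled consistently in the future:

# create an ext3 file system on the new array and mount it
mke2fs -j /dev/md0
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid

# append the running array's definition to the configuration file
mdadm --detail --scan >> /etc/mdadm.conf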

Once created, the RAID device can be queried at any time to provide status information. The following example shows the output from the command mdadm --detail /dev/md0:

/dev/md0:
Version : 00.90.00
Creation Time : Mon Mar  1 13:49:10 2004
Raid Level : raid0
Array Size : 15621632 (14.90 GiB 15.100 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Mon Mar  1 13:49:10 2004
State : dirty, no-errors
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
         0       8        1        0      active sync   /dev/sda1
         1       8       17        1      active sync   /dev/sdb1
         2       8       33        2      active sync   /dev/sdc1
         3       8       49        3      active sync   /dev/sdd1
           UUID : 25c0f2a1:e882dfc0:c0fe135e:6940d932
         Events : 0.1
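
Because the superblock is persistent and the array is described in /etc/mdadm.conf, the device can also be stopped and re-assembled later without being re-created. A brief sketch, assuming no file systems on /dev/md0 are mounted:

# stop the running array
mdadm --stop /dev/md0

# re-assemble it from the ARRAY definition in /etc/mdadm.conf
mdadm --assemble /dev/md0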

22.3.2. Creating a Multipath Device With mdadm

In addition to creating RAID arrays, mdadm can also be used to take advantage of hardware that supports more than one I/O path to an individual SCSI LUN (disk drive). The goal of multipath storage is continued data availability in the event of a hardware failure or the saturation of an individual path. Because this configuration contains multiple paths (each acting as an independent virtual controller) to a common SCSI LUN (disk drive), the Linux kernel detects the shared drive once through each path. In other words, the SCSI LUN (disk drive) known as /dev/sda may also be accessible as /dev/sdb, /dev/sdc, and so on, depending on the specific configuration.

To provide a single device that remains accessible if an I/O path fails or becomes saturated, mdadm includes an additional parameter to its level option. This parameter, multipath, directs the md layer in the Linux kernel to re-route I/O requests from one path to another in the event of an I/O path failure.

To create a multipath device, edit the /etc/mdadm.conf file to define values for the DEVICE and ARRAY lines that reflect your hardware configuration.
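
For example, if one LUN is presented as four SCSI devices, the file might contain entries such as the following (the device names are illustrative and depend on how the paths were detected):

DEVICE /dev/sd[abcd]1
ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1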

Note

Unlike the previous RAID example (where each device specified in /etc/mdadm.conf must represent a different physical disk drive), each device listed in this file refers to the same shared disk drive.

The command used for the creation of a multipath device is similar to that used to create a RAID device; the difference is the replacement of a RAID level parameter with the multipath parameter:

mdadm -C /dev/md0 --level=multipath --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 \
/dev/sdd1
Continue creating array? yes
mdadm: array /dev/md0 started.

Due to the length of the mdadm command line, it has been broken into two lines.

In this example, the hardware consists of one SCSI LUN presented as four separate SCSI devices, each accessing the same storage by a different pathway. Once the multipath device /dev/md0 is created, all I/O operations referencing /dev/md0 are directed to /dev/sda1, /dev/sdb1, /dev/sdc1, or /dev/sdd1 (depending on which path is currently active and operational).

The configuration of /dev/md0 can be examined more closely using the command mdadm --detail /dev/md0 to verify that it is, in fact, a multipath device:

/dev/md0:
Version : 00.90.00
Creation Time : Tue Mar  2 10:56:37 2004
Raid Level : multipath
Array Size : 3905408 (3.72 GiB 3.100 GB)
Raid Devices : 1
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Tue Mar  2 10:56:37 2004
State : dirty, no-errors
Active Devices : 1
Working Devices : 4
Failed Devices : 0
Spare Devices : 3

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       17        1      spare   /dev/sdb1
       2       8       33        2      spare   /dev/sdc1
       3       8        1        3      spare   /dev/sda1
           UUID : 4b564608:fa01c716:550bd8ff:735d92dc
         Events : 0.1

Another feature of mdadm is the ability to force a device (be it a member of a RAID array or a path in a multipath configuration) to be removed from an operating configuration. In the following example, /dev/sda1 is flagged as being faulty, is then removed, and finally is added back into the configuration. For a multipath configuration, these actions would not affect any I/O activity taking place at the time:

# mdadm /dev/md0 -f /dev/sda1
mdadm: set /dev/sda1 faulty in /dev/md0
# mdadm /dev/md0 -r /dev/sda1
mdadm: hot removed /dev/sda1
# mdadm /dev/md0 -a /dev/sda1
mdadm: hot added /dev/sda1
#
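
To confirm that the path has rejoined the configuration after being re-added, the device can be queried again:

mdadm --detail /dev/md0
cat /proc/mdstat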

 
 