SUSE Linux Enterprise Server (SLES 10) Installation and Administration

10.3 Software Configuration

10.3.1 Configuring multipath-tools

If you are using a storage subsystem that is automatically detected (see Section 10.1, Supported Hardware), no further configuration of the multipath-tools is required. Otherwise create /etc/multipath.conf and add an appropriate device entry for your storage subsystem. See /usr/share/doc/packages/multipath-tools/multipath.conf.annotated for a template with extensive comments.
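As a sketch, a device entry in /etc/multipath.conf might look like the following. The vendor and product strings and the option values here are purely illustrative; take the actual settings for your array from the annotated template mentioned above.

```
devices {
        device {
                # Illustrative values only; match the strings your array
                # reports in /sys/block/sd*/device/vendor and .../model.
                vendor                 "ACME"
                product                "StorArray"
                path_grouping_policy   failover
                path_checker           tur
                no_path_retry          5
        }
}
```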

After having set up the configuration, you can perform a dry-run with multipath -v2 -d, which only scans the devices and prints what the setup would look like. The output is similar to the following:

3600601607cf30e00184589a37a31d911
[size=127 GB][features="0"][hwhandler="1 emc"]
\_ round-robin 0 [first]
  \_ 1:0:1:2 sdav 66:240  [ready ]
  \_ 0:0:1:2 sdr  65:16   [ready ]
\_ round-robin 0
  \_ 1:0:0:2 sdag 66:0    [ready ]
  \_ 0:0:0:2 sdc  8:32    [ready ]

In this output:

First line: name of the device
[size=...]: size of the device
[features=...]: features of the device
[hwhandler=...]: hardware handlers involved
First round-robin block: priority group 1
Second round-robin block: priority group 2

Paths are grouped into priority groups. There is only ever one priority group in active use. To model an active/active configuration, all paths end up in the same group. To model active/passive, the paths that should not be active in parallel are placed in several distinct priority groups. This normally happens completely automatically on device discovery.
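If the automatic grouping is not what you want, it can be influenced with the path_grouping_policy option. A minimal sketch (placement within a device or multipath section, and the chosen value, are illustrative):

```
# In /etc/multipath.conf:
#   multibus      -> all paths in one priority group (active/active)
#   failover      -> one path per priority group (active/passive)
#   group_by_prio -> group paths by their priority values
path_grouping_policy    multibus
```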

The output shows the order, the scheduling policy used to balance IO within the group, and the paths for each priority group. For each path, its physical address (host:bus:target:lun), device node name, major:minor number, and state are shown.

10.3.2 Enabling the Components

To start the multipath IO services, run the following commands:

/etc/init.d/boot.multipath start
/etc/init.d/multipathd start
   

The multipath devices should now show up automatically under /dev/disk/by-name/. By default, a device is named after the WWN (World Wide Name) of its logical unit. To use persistent, human-readable aliases instead, set user_friendly_names to yes in /etc/multipath.conf; the mapping between WWNs and aliases is then maintained in /var/lib/multipath/bindings.
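A minimal sketch of the /etc/multipath.conf fragment that enables the friendly naming scheme:

```
defaults {
        user_friendly_names yes
}
```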

To permanently add multipath IO services to the boot sequence, run the following command:

insserv boot.multipath multipathd
   

10.3.3 Querying the Status

To query the current MPIO status, run multipath -l. This outputs the current state of the multipath maps.

The output is very similar to the dry-run output shown in Section 10.3.1, Configuring multipath-tools, but includes additional information about the state of each priority group and path:

3600601607cf30e00184589a37a31d911
[size=127 GB][features="0"][hwhandler="1 emc"]
\_ round-robin 0 [active][first]
  \_ 1:0:1:2 sdav 66:240  [ready ][active]
  \_ 0:0:1:2 sdr  65:16   [ready ][active]
\_ round-robin 0 [enabled]
  \_ 1:0:0:2 sdag 66:0    [ready ][active]
  \_ 0:0:0:2 sdc  8:32    [ready ][active]
   

10.3.4 Tuning the Failover with Specific Host Bus Adapters

Host bus adapter time-outs are typically set up for non-multipath IO environments, because the only alternative would be to error out the IO and propagate the error to the application. However, with Multipath IO, some faults (like cable failures) should be propagated upwards as fast as possible so that the multipath IO layer can quickly take action and redirect the IO to another, healthy path.

To configure time-outs for your host bus adapter, add the appropriate options to /etc/modprobe.conf.local. For the QLogic 2xxx family of host bus adapters, for example, the following settings are recommended:

options qla2xxx qlport_down_retry=1 ql2xfailover=0 ql2xretrycount=5

10.3.5 Managing IO in Error Situations

In certain scenarios where the driver, the host bus adapter, or the fabric experiences errors leading to loss of all paths, all IO should be queued instead of being propagated upwards.

This can be achieved with the following setting in /etc/multipath.conf:

defaults {
          default_features "1 queue_if_no_path"
}

Because this leads to IO being queued forever unless a path is reinstated, make sure that multipathd is running and works for your scenario. Otherwise, IO might be stalled forever on the affected MPIO device until reboot or until you manually issue

dmsetup message <NAME> 0 fail_if_no_path

This immediately causes all queued IO to fail (replace <NAME> with the correct map name). You can reactivate queueing by issuing the following command:

dmsetup message <NAME> 0 queue_if_no_path

You can also use these two commands to switch between modes for testing before committing the command to /etc/multipath.conf.


Published Courtesy of Novell, Inc.