After identifying the cluster hardware components described in Section 2.1 Choosing a Hardware Configuration, set up the basic cluster
hardware and connect the nodes to the optional console switch and
network switch or hub. Follow these steps:

1. In each node, install the required network adapters and host
bus adapters. Refer to Section 2.3.1 Installing the Basic Cluster Hardware for more
information about performing this task.

2. Set up the optional console switch and connect it to each
node. Refer to Section 2.3.3 Setting Up a Console Switch for more
information about performing this task. If a console switch is not
used, connect each node to its own monitor, keyboard, and mouse.

3. Set up the network switch or hub and use network cables to
connect it to the nodes and the terminal server (if
applicable). Refer to Section 2.3.4 Setting Up a Network Switch or Hub for
more information about performing this task.

After performing the previous tasks, install Red Hat Enterprise Linux as described in
Section 2.4 Installing and Configuring Red Hat Enterprise Linux.
Nodes must provide the CPU processing power and memory
required by applications.
In addition, nodes must be able to accommodate the SCSI or Fibre
Channel adapters, network interfaces, and serial ports that the
hardware configuration requires. Systems have a limited number of
pre-installed serial and network ports and PCI expansion slots. Table 2-10 helps determine the
capacity that the node systems require.
| Cluster Hardware Component | Serial Ports | Ethernet Ports | PCI Slots |
| SCSI or Fibre Channel adapter to shared disk storage | | | One for each bus adapter |
| Network connection for client access and Ethernet heartbeat pings | | One for each network connection | |
| Point-to-point Ethernet connection for 2-node clusters (optional) | | One for each connection | |
| Terminal server connection (optional) | One | | |

Table 2-10. Installing the Basic Cluster Hardware
Most systems come with at least one serial port. If a system has
graphics display capability, it is possible to use the serial console
port for a power switch connection. To expand your serial port
capacity, use multi-port serial PCI cards. For multiple-node
clusters, use a network power switch.
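To gauge whether a multi-port serial card is needed, you can count the serial ports the kernel has registered. This is a minimal sketch that assumes sysfs is mounted at /sys, as is standard on Red Hat Enterprise Linux:

```shell
# Count the ttyS (serial) devices the kernel knows about.
# Note: a registered ttyS entry does not guarantee working hardware
# behind it; consult the system documentation for the actual port count.
ls /sys/class/tty/ | grep '^ttyS' | wc -l
```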
Also, ensure that local system disks are not on the
same SCSI bus as the shared disks. For example, use two-channel SCSI
adapters, such as the Adaptec 39160-series cards, and put the internal
devices on one channel and the shared disks on the other
channel. Using multiple SCSI cards is also possible.
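To confirm that local and shared disks really sit on different channels, it helps to read the SCSI address that the kernel reports for each disk (in host:channel:id:lun form, as shown under /sys/block/sdX/device or by tools such as lsscsi). A minimal sketch of parsing that address, with illustrative example addresses:

```shell
# Extract the channel field from a SCSI address of the form
# host:channel:id:lun, so you can verify that local system disks and
# shared disks use different channels of a two-channel adapter.
scsi_channel() {
    # $1 is an address such as "0:1:3:0"; print the second field (channel).
    echo "$1" | cut -d: -f2
}

scsi_channel 0:0:0:0    # a local disk on channel 0 (example address)
scsi_channel 0:1:4:0    # a shared disk on channel 1 (example address)
```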
Refer to the system documentation supplied by the vendor for
detailed installation information. Refer to Appendix A Supplementary Hardware Information
for hardware-specific information about using host bus adapters in a cluster.
In a cluster, shared disks can be used to store cluster service
data. Because this storage must be available to all nodes running the
cluster service configured to use the storage, it cannot be located on
disks that depend on the availability of any one node.
There are some factors to consider when setting up shared
disk storage in a cluster:
It is recommended to use a clustered filesystem such as
Red Hat GFS to configure Red Hat Cluster Manager storage resources, as it offers shared
storage that is suited for high-availability cluster services. For
more information about installing and configuring Red Hat GFS, refer
to the Red Hat GFS Administrator's Guide.
Whether you are using Red Hat GFS, local, or remote (NFS) storage,
it is strongly recommended that you connect
any storage systems or enclosures to redundant UPS systems for a
highly-available source of power. Refer to Section 2.5.3 Configuring UPS Systems for more information.
The use of software RAID or Logical Volume
Management (LVM) for shared
storage is not supported. This is because these products do not
coordinate access to shared storage from multiple
hosts. Software RAID or LVM may be used on non-shared storage on
cluster nodes (for example, boot and system partitions, and
other file systems that are not associated with any cluster service).
An exception to this rule is CLVM, the
daemon and library that support clustering of LVM2. When used in
conjunction with the CMAN cluster manager and the Distributed Lock
Manager (dlm), CLVM allows administrators to configure shared storage
for use as a resource in cluster services; the lock manager prevents
simultaneous node access to the data, which could otherwise cause
corruption.
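Enabling CLVM amounts to switching LVM from local file locking to built-in cluster locking (locking_type = 3 in /etc/lvm/lvm.conf, which the lvmconf --enable-cluster helper sets for you). The sketch below demonstrates the change against a local example copy of the file, so nothing on the system is modified; the file path is illustrative:

```shell
# Demonstrate the lvm.conf change that enables cluster locking,
# applied to a local example file rather than /etc/lvm/lvm.conf.
conf=./lvm.conf.example
printf 'global {\n    locking_type = 1\n}\n' > "$conf"
# Switch from local file-based locking (1) to built-in cluster locking (3):
sed -i 's/locking_type = 1/locking_type = 3/' "$conf"
grep locking_type "$conf"
```

On a real node you would edit /etc/lvm/lvm.conf itself (or run lvmconf --enable-cluster) and then start the clvmd daemon, with CMAN and the lock manager already running.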
For remote filesystems such as NFS, you may use gigabit
Ethernet for improved bandwidth over 10/100 Ethernet
connections. Consider redundant links or channel bonding for
improved remote file system availability. Refer to Section 2.5.1 Configuring Ethernet Channel Bonding for more information.
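On Red Hat Enterprise Linux, channel bonding is configured through ifcfg files and a bonding driver alias. The sketch below writes the configuration to local example files rather than /etc, so it is safe to run; the interface names (bond0, eth0), IP address, and bonding options are illustrative assumptions:

```shell
# Example bonded-interface configuration, written to local files.
# On a real node these would be /etc/sysconfig/network-scripts/ifcfg-bond0,
# .../ifcfg-eth0, and entries in /etc/modprobe.conf.
cat > ./ifcfg-bond0.example <<'EOF'
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
EOF

cat > ./ifcfg-eth0.example <<'EOF'
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
EOF

# Bonding driver options: active-backup mode (mode=1) with 100 ms
# link monitoring, suited to failover rather than load balancing.
echo 'alias bond0 bonding' > ./modprobe.conf.example
echo 'options bonding mode=1 miimon=100' >> ./modprobe.conf.example
```

A second slave file (ifcfg-eth1) would follow the same pattern as ifcfg-eth0.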
Multi-initiator SCSI configurations are not supported due to
the difficulty in obtaining proper bus termination. Refer to Appendix A Supplementary Hardware Information for more information.
A shared partition can be used by only one cluster service.
Do not include any file systems used as a resource for a
cluster service in the node's local
/etc/fstab files, because the cluster
software must control the mounting and unmounting of service file systems.
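As an illustrative check, you can scan a node's fstab for mount points that the cluster software should be managing instead. The sketch below runs against a local example file, and the mount point /mnt/service1 is hypothetical:

```shell
# Check a (local example) fstab for a cluster-service mount point that
# should not be listed there. On a real node the file is /etc/fstab.
fstab=./fstab.example
printf '/dev/sda1 / ext3 defaults 1 1\n/dev/sdc1 /mnt/service1 ext3 defaults 0 0\n' > "$fstab"
if grep -q '/mnt/service1' "$fstab"; then
    echo "WARNING: cluster service file system listed in $fstab"
fi
```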
For optimal performance of shared file systems, make sure to
specify a 4 KB block size with the mke2fs -b
option. A smaller block size can cause long
fsck times. Refer to the Creating File Systems section.
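A minimal sketch of creating a file system with the recommended 4 KB block size, run against a small local image file so no real disk is touched; on a real node the target would be the shared partition device instead of the image path shown here:

```shell
# Create an 8 MB image file and make an ext2 file system on it with a
# 4 KB block size (-b 4096). -F forces mke2fs to operate on a plain file.
dd if=/dev/zero of=./disk.img bs=1M count=8 2>/dev/null
mke2fs -b 4096 -F ./disk.img >/dev/null 2>&1
# Confirm the block size that was written to the superblock:
dumpe2fs -h ./disk.img 2>/dev/null | grep 'Block size'
```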
After setting up the shared disk storage hardware, partition the
disks and create file systems on the partitions. Refer to the Partitioning Disks and Creating File Systems sections for more information on configuring shared disk storage.
Although a console switch is not required for cluster operation,
it can be used to facilitate node management and eliminate
the need for separate monitors, mice, and keyboards for each cluster
node. There are several types of console switches.
For example, a terminal server enables connection to serial
consoles and management of many nodes from a remote location. For a
low-cost alternative, use a KVM (keyboard, video, and mouse) switch,
which enables multiple nodes to share one keyboard, monitor, and
mouse. A KVM switch is suitable for configurations in which GUI access
to perform system management tasks is preferred.
Set up the console switch according to the documentation provided
by the vendor.
After the console switch has been set up, connect it to each cluster
node. The cables used depend on the type of console switch. For
example, a Cyclades terminal server uses RJ45 to DB9
crossover cables to connect a serial port on each node to
the terminal server.
A network switch or hub, although not required for operating a
two-node cluster, can be used to facilitate cluster and client system
network operations. Clusters of more than two nodes require a network switch or hub.
Set up a network switch or hub according to the documentation
provided by the vendor.
After setting up the network switch or hub, connect it to
each node by using conventional network cables. A
terminal server, if used, is connected to the network
switch or hub through a network cable.
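After cabling, it is worth confirming on each node that the expected network interfaces are present and up. A minimal sketch using the ip utility (the interface names shown by the command depend on the hardware installed):

```shell
# List the network interfaces the kernel has registered, one per line.
# The loopback interface (lo) always appears; cluster interfaces such
# as eth0/eth1 should also be listed once the adapters are installed.
ip -o link show | awk -F': ' '{print $2}'
```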