6.4. Updating a Configuration

Updating the cluster configuration consists of editing the cluster configuration file (/etc/cluster/cluster.conf) and propagating it to each node in the cluster.
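
The configuration version is carried by the config_version attribute on the top-level <cluster> element. For reference, here is a minimal sketch of the file's skeleton, using the cluster and node names that appear in this section's examples; a real file also defines fence devices, resources, and services:

    [root@example-01 ~]# cat /etc/cluster/cluster.conf
    <?xml version="1.0"?>
    <cluster name="mycluster" config_version="3">
       <clusternodes>
          <clusternode name="node-01.example.com" nodeid="1"/>
          <clusternode name="node-02.example.com" nodeid="2"/>
          <clusternode name="node-03.example.com" nodeid="3"/>
       </clusternodes>
    </cluster>

You can update the configuration using either of the following procedures: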

6.4.1. Updating a Configuration Using cman_tool version -r

To update the configuration using the cman_tool version -r command, perform the following steps:
  1. At any node in the cluster, edit the /etc/cluster/cluster.conf file.
  2. Update the config_version attribute by incrementing its value (for example, changing from config_version="2" to config_version="3"). A scripted way to increment this value is sketched after this procedure.
  3. Save /etc/cluster/cluster.conf.
  4. Run the cman_tool version -r command to propagate the configuration to the rest of the cluster nodes.
  5. Verify that the updated configuration file has been propagated; for example, compare the configuration version that each node reports (see the verification sketch after this procedure).
  6. You may skip this step (restarting cluster software) if you have made only the following configuration changes:
    • Deleting a node from the cluster configuration—except where the node count changes from greater than two nodes to two nodes. For information about deleting a node from a cluster and transitioning from greater than two nodes to two nodes, refer to Section 6.2, “Deleting or Adding a Node”.
    • Adding a node to the cluster configuration—except where the node count changes from two nodes to greater than two nodes. For information about adding a node to a cluster and transitioning from two nodes to greater than two nodes, refer to Section 6.2.2, “Adding a Node to a Cluster”.
    • Changes to how daemons log information.
    • HA service/VM maintenance (adding, editing, or deleting).
    • Resource maintenance (adding, editing, or deleting).
    • Failover domain maintenance (adding, editing, or deleting).
    Otherwise, you must restart the cluster software as follows:
    1. At each node, stop the cluster software according to Section 6.1.2, “Stopping Cluster Software”. For example:
      [root@example-01 ~]# service rgmanager stop
      Stopping Cluster Service Manager:                          [  OK  ]
      [root@example-01 ~]# service gfs2 stop
      Unmounting GFS2 filesystem (/mnt/gfsA):                    [  OK  ]
      Unmounting GFS2 filesystem (/mnt/gfsB):                    [  OK  ]
      [root@example-01 ~]# service clvmd stop
      Signaling clvmd to exit                                    [  OK  ]
      clvmd terminated                                           [  OK  ]
      [root@example-01 ~]# service cman stop
      Stopping cluster: 
         Leaving fence domain...                                 [  OK  ]
         Stopping gfs_controld...                                [  OK  ]
         Stopping dlm_controld...                                [  OK  ]
         Stopping fenced...                                      [  OK  ]
         Stopping cman...                                        [  OK  ]
         Waiting for corosync to shutdown:                       [  OK  ]
         Unloading kernel modules...                             [  OK  ]
         Unmounting configfs...                                  [  OK  ]
      [root@example-01 ~]#
      
    2. At each node, start the cluster software according to Section 6.1.1, “Starting Cluster Software”. For example:
      [root@example-01 ~]# service cman start
      Starting cluster: 
         Checking Network Manager...                             [  OK  ]
         Global setup...                                         [  OK  ]
         Loading kernel modules...                               [  OK  ]
         Mounting configfs...                                    [  OK  ]
         Starting cman...                                        [  OK  ]
         Waiting for quorum...                                   [  OK  ]
         Starting fenced...                                      [  OK  ]
         Starting dlm_controld...                                [  OK  ]
         Starting gfs_controld...                                [  OK  ]
         Unfencing self...                                       [  OK  ]
         Joining fence domain...                                 [  OK  ]
      [root@example-01 ~]# service clvmd start
      Starting clvmd:                                            [  OK  ]
      Activating VG(s):   2 logical volume(s) in volume group "vg_example" now active
                                                                 [  OK  ]
      [root@example-01 ~]# service gfs2 start
      Mounting GFS2 filesystem (/mnt/gfsA):                      [  OK  ]
      Mounting GFS2 filesystem (/mnt/gfsB):                      [  OK  ]
      [root@example-01 ~]# service rgmanager start
      Starting Cluster Service Manager:                          [  OK  ]
      [root@example-01 ~]#
      
      Stopping and starting the cluster software ensures that any configuration changes that are checked only at startup time are included in the running configuration.
  7. At any cluster node, run the cman_tool nodes command to verify that the nodes are functioning as members in the cluster (signified as "M" in the status column, "Sts"). For example:
    [root@example-01 ~]# cman_tool nodes
    Node  Sts   Inc   Joined               Name
       1   M    548   2010-09-28 10:52:21  node-01.example.com
       2   M    548   2010-09-28 10:52:21  node-02.example.com
       3   M    544   2010-09-28 10:52:21  node-03.example.com
    
  8. At any node, using the clustat utility, verify that the HA services are running as expected. In addition, clustat displays the status of the cluster nodes. For example:
    [root@example-01 ~]# clustat
    Cluster Status for mycluster @ Wed Nov 17 05:40:00 2010
    Member Status: Quorate
    
     Member Name                             ID   Status
     ------ ----                             ---- ------
     node-03.example.com                         3 Online, rgmanager
     node-02.example.com                         2 Online, rgmanager
     node-01.example.com                         1 Online, Local, rgmanager
    
     Service Name                   Owner (Last)                   State         
     ------- ----                   ----- ------                   -----           
     service:example_apache         node-01.example.com            started       
     service:example_apache2        (none)                         disabled
    
  9. If the cluster is running as expected, you are done updating the configuration.
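
The config_version increment in step 2 can be scripted rather than edited by hand. A minimal sketch, assuming the attribute appears exactly once in the file (on the <cluster> tag, as is normal); the helper name is hypothetical and is not part of the cluster suite:

    #!/bin/sh
    # bump-config-version.sh - hypothetical helper, not part of the cluster suite.
    # Read the current config_version from cluster.conf (assumes one occurrence).
    cur=$(sed -n 's/.*config_version="\([0-9]*\)".*/\1/p' /etc/cluster/cluster.conf)
    # Increment it and write the new value back in place.
    new=$((cur + 1))
    sed -i "s/config_version=\"$cur\"/config_version=\"$new\"/" /etc/cluster/cluster.conf
    echo "config_version bumped from $cur to $new"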
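
For step 5, one way to confirm propagation is to compare the configuration version that cman reports on every node. A minimal sketch, assuming passwordless ssh between the nodes and using the node names from the examples above; the exact output format of cman_tool version may vary:

    #!/bin/sh
    # check-config-version.sh - hypothetical helper.
    # Print the configuration version seen by each node; after cman_tool
    # version -r has run, every node should report the new value.
    for n in node-01.example.com node-02.example.com node-03.example.com; do
        echo -n "$n: "
        ssh "$n" cman_tool version
    done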

 
 