6.4.2. Updating a Configuration Using scp

To update the configuration using the scp command, perform the following steps:
  1. At each node, stop the cluster software according to Section 6.1.2, “Stopping Cluster Software”. For example:
    [root@example-01 ~]# service rgmanager stop
    Stopping Cluster Service Manager:                          [  OK  ]
    [root@example-01 ~]# service gfs2 stop
    Unmounting GFS2 filesystem (/mnt/gfsA):                    [  OK  ]
    Unmounting GFS2 filesystem (/mnt/gfsB):                    [  OK  ]
    [root@example-01 ~]# service clvmd stop
    Signaling clvmd to exit                                    [  OK  ]
    clvmd terminated                                           [  OK  ]
    [root@example-01 ~]# service cman stop
    Stopping cluster: 
       Leaving fence domain...                                 [  OK  ]
       Stopping gfs_controld...                                [  OK  ]
       Stopping dlm_controld...                                [  OK  ]
       Stopping fenced...                                      [  OK  ]
       Stopping cman...                                        [  OK  ]
       Waiting for corosync to shutdown:                       [  OK  ]
       Unloading kernel modules...                             [  OK  ]
       Unmounting configfs...                                  [  OK  ]
    [root@example-01 ~]#
    
  2. At any node in the cluster, edit the /etc/cluster/cluster.conf file.
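    For example, to open the file in the vi editor (any text editor works):
    [root@example-01 ~]# vi /etc/cluster/cluster.conf
    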
  3. Update the config_version attribute by incrementing its value (for example, change config_version="2" to config_version="3").
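    For example, if the cluster is named mycluster, as in the clustat output shown later in this procedure, the opening tag of the updated file might read:
    <cluster name="mycluster" config_version="3">
    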
  4. Save /etc/cluster/cluster.conf.
  5. Validate the updated file against the cluster schema (cluster.rng) by running the ccs_config_validate command. For example:
    [root@example-01 ~]# ccs_config_validate 
    Configuration validates
    
  6. If the updated file is valid, use the scp command to propagate it to the /etc/cluster/ directory on each cluster node.
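    For example, assuming the node names shown in the cman_tool output below and root ssh access between the nodes, the transfer might look like this (adjust the names to match your cluster):
    [root@example-01 ~]# scp /etc/cluster/cluster.conf root@node-02.example.com:/etc/cluster/
    [root@example-01 ~]# scp /etc/cluster/cluster.conf root@node-03.example.com:/etc/cluster/
    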
  7. Verify that the updated configuration file has been propagated to each cluster node.
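    One way to check, again assuming root ssh access between the nodes, is to compare the checksum of the file at each node; the sums must be identical:
    [root@example-01 ~]# md5sum /etc/cluster/cluster.conf
    [root@example-01 ~]# ssh node-02.example.com md5sum /etc/cluster/cluster.conf
    [root@example-01 ~]# ssh node-03.example.com md5sum /etc/cluster/cluster.conf
    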
  8. At each node, start the cluster software according to Section 6.1.1, “Starting Cluster Software”. For example:
    [root@example-01 ~]# service cman start
    Starting cluster: 
       Checking Network Manager...                             [  OK  ]
       Global setup...                                         [  OK  ]
       Loading kernel modules...                               [  OK  ]
       Mounting configfs...                                    [  OK  ]
       Starting cman...                                        [  OK  ]
       Waiting for quorum...                                   [  OK  ]
       Starting fenced...                                      [  OK  ]
       Starting dlm_controld...                                [  OK  ]
       Starting gfs_controld...                                [  OK  ]
       Unfencing self...                                       [  OK  ]
       Joining fence domain...                                 [  OK  ]
    [root@example-01 ~]# service clvmd start
    Starting clvmd:                                            [  OK  ]
    Activating VG(s):   2 logical volume(s) in volume group "vg_example" now active
                                                               [  OK  ]
    [root@example-01 ~]# service gfs2 start
    Mounting GFS2 filesystem (/mnt/gfsA):                      [  OK  ]
    Mounting GFS2 filesystem (/mnt/gfsB):                      [  OK  ]
    [root@example-01 ~]# service rgmanager start
    Starting Cluster Service Manager:                          [  OK  ]
    [root@example-01 ~]#
    
  9. At any cluster node, run the cman_tool nodes command to verify that the nodes are functioning as members of the cluster (indicated by "M" in the status column, "Sts"). For example:
    [root@example-01 ~]# cman_tool nodes
    Node  Sts   Inc   Joined               Name
       1   M    548   2010-09-28 10:52:21  node-01.example.com
       2   M    548   2010-09-28 10:52:21  node-02.example.com
       3   M    544   2010-09-28 10:52:21  node-03.example.com
    
  10. At any node, use the clustat utility to verify that the HA services are running as expected. In addition, clustat displays the status of the cluster nodes. For example:
    [root@example-01 ~]# clustat
    Cluster Status for mycluster @ Wed Nov 17 05:40:00 2010
    Member Status: Quorate
    
     Member Name                             ID   Status
     ------ ----                             ---- ------
     node-03.example.com                         3 Online, rgmanager
     node-02.example.com                         2 Online, rgmanager
     node-01.example.com                         1 Online, Local, rgmanager
    
     Service Name                   Owner (Last)                   State         
     ------- ----                   ----- ------                   -----           
     service:example_apache         node-01.example.com            started       
     service:example_apache2        (none)                         disabled
    
  11. If the cluster is running as expected, you are done updating the configuration.

 
 