Solaris Volume Manager Administration Guide

Backing Up Data on a RAID-1 Volume

Solaris Volume Manager is not meant to be a backup product. It does, however, provide a means for backing up mirrored data without causing any of the following to occur:

  • Unmounting the mirror

  • Taking the entire mirror offline

  • Halting the system

  • Denying users access to data

Solaris Volume Manager backs up mirrored data by first taking one of the submirrors offline. During the backup, mirroring is temporarily unavailable. As soon as the backup is complete, the submirror is then placed back online and resynchronized.

Note - The UFS Snapshots feature provides an alternative way to back up a system without taking the file system offline. You can perform the backup without detaching the submirror and incurring the performance penalty of resynchronizing the mirror later. Before performing a backup using the UFS Snapshots feature, make sure you have enough space available on your UFS file system. For more information, see Chapter 26, Using UFS Snapshots (Tasks), in System Administration Guide: Devices and File Systems.
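As a sketch of the UFS Snapshots alternative, the commands below create a snapshot of a mounted file system, back it up, and delete it. The mount point /home1, the backing-store directory /var/tmp, and the tape device /dev/rmt/0 are assumptions; the block is written as a dry run that only prints each command.

```shell
#!/bin/sh
# Dry-run sketch of a UFS Snapshots backup. Assumed names: /home1
# (mounted UFS file system), /var/tmp (snapshot backing store),
# /dev/rmt/0 (tape device). run() echoes and records each command;
# replace its body with "$@" to execute on a real Solaris host.
CMDS=""
run() { CMDS="${CMDS}$*; "; echo "$@"; }

# Create the snapshot; fssnap prints the snapshot device, e.g. /dev/fssnap/0.
run fssnap -F ufs -o bs=/var/tmp /home1

# Back up the read-only snapshot through its raw device.
run ufsdump 0ucf /dev/rmt/0 /dev/rfssnap/0

# Delete the snapshot when the backup is complete.
run fssnap -d /home1
```

Because the snapshot is read-only and point-in-time, the mirror stays fully redundant for the entire backup.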

How to Perform an Online Backup of a RAID-1 Volume

You can use this procedure on any file system except the root (/) file system. Be aware that this type of backup creates a “snapshot” of an active file system. Depending on how the file system is being used when it is write-locked, some files on the backup might not correspond to the actual files on disk.

The following limitations apply to this procedure:

  • If you use this procedure on a two-way mirror, be aware that data redundancy is lost while one submirror is offline for backup. A multi-way mirror does not have this problem.

  • There is some overhead on the system when the reattached submirror is resynchronized after the backup is complete.

The high-level steps in this procedure are as follows:

  • Write-locking the file system (UFS only). Do not lock the root (/) file system.

  • Flushing all data from cache to disk.

  • Using the metadetach command to take one submirror off the mirror.

  • Unlocking the file system.

  • Using the fsck command to check the file system on the detached submirror.

  • Backing up the data on the detached submirror.

  • Using the metattach command to place the detached submirror back in the mirror.

Note - If you use these procedures regularly, put them into a script for ease of use.

Tip - The safer approach to this process is to attach a third or fourth submirror to the mirror, allow it to resynchronize, and use it for the backup. This technique ensures that data redundancy is maintained at all times.
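Following the note above, the high-level steps can be collected into a small script. This is only a sketch: the mirror name d1, submirror d3, mount point /home1, and tape device /dev/rmt/0 are assumptions, and the script is written as a dry run that prints each command rather than executing it.

```shell
#!/bin/sh
# Dry-run sketch of the online-backup procedure. Assumed names:
# mirror d1, submirror d3, UFS mount point /home1, tape /dev/rmt/0.
MIRROR=d1
SUBMIRROR=d3
MNT=/home1

# run() echoes and records each command instead of executing it;
# replace its body with "$@" to run on a real Solaris host.
CMDS=""
run() { CMDS="${CMDS}$*; "; echo "$@"; }

run metastat $MIRROR                                 # 1. verify the mirror is Okay
run /usr/sbin/lockfs -w $MNT                         # 2. flush and write-lock (never root)
run metadetach $MIRROR $SUBMIRROR                    # 3. detach one submirror
run /usr/sbin/lockfs -u $MNT                         # 4. unlock; writes resume
run fsck -y /dev/md/rdsk/$SUBMIRROR                  # 5. check the detached submirror
run ufsdump 0ucf /dev/rmt/0 /dev/md/rdsk/$SUBMIRROR  # 6. back up the raw volume
run metattach $MIRROR $SUBMIRROR                     # 7. reattach; resync begins
```

A production script would also need to check the exit status of each command, and in particular abort (and reattach) if the backup itself fails, rather than blindly continuing.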

  1. Verify that the mirror is in the “Okay” state.

    A mirror that is in the “Maintenance” state should be repaired first.

    # metastat mirror
  2. Flush data and UFS logging data from cache to disk and write-lock the file system.
    # /usr/sbin/lockfs -w mount-point 

    Only a UFS volume needs to be write-locked. If the volume is set up as a raw device for database management software or some other application, running the lockfs command is not necessary. You might, however, want to run the appropriate vendor-supplied utility to flush any buffers and lock access.

    Caution - Do not write-lock the root (/) file system. Write-locking the root (/) file system causes the system to hang. If you are backing up your root (/) file system, skip this step.

  3. Detach one submirror from the mirror.
    # metadetach mirror submirror 

    mirror

    Is the volume name of the mirror.

    submirror

    Is the volume name of the submirror (volume) being detached.

    Reads continue to be made from the other submirror. The mirror is out of sync as soon as the first write is made. This inconsistency is corrected when the detached submirror is reattached in Step 7.

  4. Unlock the file system and allow writes to continue.
    # /usr/sbin/lockfs -u mount-point 

    If you ran a vendor-supplied utility to flush buffers and lock access in Step 2, use the corresponding utility to unlock access now.

  5. Use the fsck command to check the file system on the detached submirror. This step ensures a clean backup occurs.
    # fsck /dev/md/rdsk/name
  6. Perform a backup of the offlined submirror.

    Use the ufsdump command or your usual backup utility. For information on performing the backup using the ufsdump command, see Performing Mounted Filesystem Backups Using the ufsdump Command.

    Note - To ensure a proper backup, use the raw volume name, such as /dev/md/rdsk/d4. Using the raw volume name also allows access to storage that is greater than 2 Gbytes.

  7. Attach the submirror.
    # metattach mirror submirror

    Solaris Volume Manager automatically begins resynchronizing the submirror with the mirror.
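The progress of that resynchronization can be watched with the metastat command. A dry-run sketch, assuming the mirror is named d1:

```shell
#!/bin/sh
# Dry-run sketch: check resync progress on mirror d1 (an assumed name).
# On a real Solaris host, drop the echo and run the command directly;
# while the resync runs, metastat reports a line similar to
# "Resync in progress: 15 % done" for the reattached submirror.
CMD="metastat d1"
echo "$CMD"
```

Once the resync completes, the submirror returns to the "Okay" state and full redundancy is restored.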

Example 11-24 Performing an Online Backup of a RAID-1 Volume

This example uses a mirror, d1. The mirror consists of submirrors d2, d3 and d4. The submirror d3 is detached and backed up while submirrors d2 and d4 stay online. The file system on the mirror is /home1.

# metastat d1
d1: Mirror
    Submirror 0: d2
      State: Okay        
    Submirror 1: d3
      State: Okay        
    Submirror 2: d4
      State: Okay        

# /usr/sbin/lockfs -w /home1
# metadetach d1 d3
# /usr/sbin/lockfs -u /home1
# /usr/sbin/fsck /dev/md/rdsk/d3
(Perform backup using /dev/md/rdsk/d3)
# metattach d1 d3

  Published under the terms of the Public Documentation License Version 1.01.