Backup and recovery to a separate system
The N series storage systems provide a feature called SnapVault. It uses Snapshot principles to copy data from the primary storage system onto a secondary system. With this method, the secondary system can replace tape backup for normal backup operations.
However, if tape is required, for example, for long data retention periods, tape backups can be taken from the secondary system. This task does not require a special off-hours backup window, because the backups do not affect the primary system, as explained in this chapter.
11.1 Licensing the SnapVault locations
To use SnapVault, you must license the primary (the one from where data will be replicated) and secondary (the one receiving that data) SnapVault locations.
To license the SnapVault locations, perform the following steps:
1. From N series System Manager, navigate to Configuration → System Tools → Licenses, and click Add. Type your license key and click Add. You should receive a message stating that the license was installed successfully.
2. Repeat these steps on the secondary system, entering the license details into the SnapVault ONTAP Secondary field.
 
Tip: The primary and secondary licenses serve different purposes, so you cannot install a primary license on both storage systems to establish data replication between them.
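The same licenses can also be installed from the Data ONTAP command line instead of System Manager. The following sketch assumes 7-Mode systems named N6070A (primary) and N6070B (secondary), as in the later examples; the license codes are placeholders for your own keys:

```
N6070A> license add <sv_ontap_pri_license_code>
N6070B> license add <sv_ontap_sec_license_code>
```

Running the license command without arguments on each system lists the installed licenses so that you can verify the SnapVault primary and secondary entries.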
11.2 Setting up the primary storage
When setting up a new environment, you can plan your primary storage allocation based upon the backup schedule that you require. Where possible, co-locate data with similar backup requirements in the same volumes. For example, make sure that your transient data is stored on separate volumes from your vital data.
The steps for setting up primary storage are similar to setting up any N series storage for Virtual Infrastructure 5. The difference is that storage that has to be replicated by using SnapVault requires an extra level between the volume and the LUN called a qtree. A qtree provides additional flexibility to assign the specific LUNs to be backed up and restored.
 
Volumes without LUNs: Volumes without LUNs do not require a qtree on the primary storage. Snapshots are taken at the volume level.
11.3 Creating a qtree
After you create your volumes (or if you have existing volumes), each of them will need at least one qtree. To create a qtree, perform the following steps:
1. From N series System Manager, navigate to Storage → Qtrees, and click Create (Figure 11-1).
Figure 11-1 Adding a qtree
2. Type a name for the qtree and browse to select the volume to be replicated (Figure 11-2), and click Create.
Figure 11-2 qtree properties
3. If the qtree was created successfully, it will display on the qtree list, as shown in Figure 11-3.
Figure 11-3 qtree created
4. If you did not yet create LUNs in the volume, create them now. Specify the qtree in the path by using the following syntax:
/vol/<vol_name>/<qtree_name>/<lun_name>
If creating a LUN using IBM N series System Manager, browse to the qtree instead of the volume, as shown in Figure 11-4.
Figure 11-4 Creating a LUN in the qtree
5. If your LUN already exists in the volume, change its path to place it in the qtree. This change must be made from the command line by using the lun move command, as shown in Example 11-1.
Example 11-1 The lun move command
N6070A> lun show
/vol/VMware_SAN/iSCSI_LUN01 150g (161061273600) (r/w, online, mapped)
 
N6070A> lun move /vol/VMware_SAN/iSCSI_LUN01 /vol/VMware_SAN/qtree_iSCSI-LUNs/iSCSI_LUN01
 
N6070A> lun show
/vol/VMware_SAN/qtree_iSCSI-LUNs/iSCSI_LUN01 150g (161061273600) (r/w, online, mapped)
11.4 Setting up auxiliary storage
After the primary storage is configured, set up the auxiliary storage, which is where the backups will be stored. The auxiliary storage must be configured with a volume at least as large as each primary volume that you intend to back up. You must set the Snapshot Reserve policy on the volume to 0.
To set up auxiliary storage, implement the following steps:
1. Disable scheduled Snapshots on the secondary filer, because SnapVault will be used to back up the data. Select Storage → Volumes, select the volume, and click Snapshot → Configure.
2. In the Configure Snapshots pane (Figure 11-5), select the secondary volume that you just created. For Scheduled Snapshots, clear the Scheduled check box.
Figure 11-5 Disabling Snapshot schedule
You do not need to set up any qtrees on the secondary volume. SnapVault creates the qtrees for you.
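The same secondary volume preparation can also be done from the command line. This sketch assumes the secondary volume is named vol_snap, as in the later examples; it sets the Snapshot reserve to 0 and disables the weekly, daily, and hourly scheduled Snapshots:

```
N6070B> snap reserve vol_snap 0
N6070B> snap sched vol_snap 0 0 0
```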
11.5 Configuring SnapVault
To configure backups using SnapVault, you must perform an initial backup to put the data on the secondary system. Then you must set up a schedule for ongoing SnapVault Snapshots. You can configure this schedule for as often as once each hour, depending on your backup needs.
11.5.1 Setting permissions
SnapVault configuration is done by using the N series command line interface (CLI). To run the CLI, use telnet to access the IP address of the N series server.
Set the permissions to allow the secondary system to access SnapVault on the primary system by using the following command on the primary system (Example 11-2):
options snapvault.access host=<secondary>
Example 11-2 Setting SnapVault permissions
N6070A> options snapvault.access host=9.155.66.103
N6070A>
Enter the same command on the secondary system, specifying the primary as the host, as shown in Example 11-3:
options snapvault.access host=<primary>
Example 11-3 Setting SnapVault permissions on the secondary system
N6070B> options snapvault.access host=9.155.66.113
N6070B>
This configuration allows the primary system to perform restore operations from the secondary system later.
11.5.2 Performing an initial SnapVault transfer
To perform the initial SnapVault transfer, follow these steps:
1. Set up the initial backup by entering the following command on the secondary system (Example 11-4):
snapvault start -S <primary>:<primary_qtree> <secondary>:<secondary_qtree>
The secondary qtree does not exist yet. It is created with the name you provide in the command.
Example 11-4 Initial SnapVault
N6070B> snapvault start -S 9.155.66.113:/vol/VMware_SAN/qtree_iSCSI-LUNs N6070B:/vol/vol_snap/qtree_iSCSI-LUNs-B
 
Snapvault configuration for the qtree has been set.
Transfer started.
Monitor progress with 'snapvault status' or the snapmirror log.
The initial SnapVault transfer might take some time to complete, depending on the amount of data on the primary volume and the speed of the connection between the N series systems.
2. Use the snapvault status command to check whether the SnapVault transfer is complete (Example 11-5).
Example 11-5 Checking the SnapVault Status: Initial SnapVault in progress
N6070B> snapvault status
Snapvault secondary is ON.
Source Destination
State Lag Status
9.155.66.113:/vol/VMware_SAN/qtree_iSCSI-LUNs N6070B:/vol/vol_snap/qtree_iSCSI-
LUNs-B Uninitialized - Transferring (12 GB done)
After the initial SnapVault transfer is complete, the snapvault status command shows the relationship as idle (Example 11-6).
Example 11-6 Check SnapVault Status - Initial SnapVault complete
N6070B> snapvault status
Snapvault secondary is ON.
 
Source Destination
State Lag Status
9.155.66.113:/vol/VMware_SAN/qtree_iSCSI-LUNs N6070B:/vol/vol_snap/qtree_iSCSI-
LUNs-B Snapvaulted 00:11:43 Idle
N6070B>
3. Check the volumes on the secondary system to ensure that they are using the expected amount of space. They need about the same amount of space as on the primary system.
4. Check that the qtree created by the initial SnapVault is listed in FilerView.
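The checks in steps 3 and 4 can also be made from the secondary system's CLI. This sketch assumes the secondary volume is named vol_snap:

```
N6070B> df -h vol_snap
N6070B> qtree status vol_snap
```

The df output shows the space used by the volume, and qtree status lists the qtree that SnapVault created.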
You are now ready to set up the SnapVault schedule for automated snapshot transfers for the future.
11.5.3 Configuring the schedule
Unlike the initial setup of SnapVault, the schedules are configured at the volume level rather than at the qtree level. The schedule must be configured on both the primary and auxiliary storage systems. This way, the primary system creates the Snapshot copy locally, and the secondary system then transfers the data across to itself.
Setting up the primary schedule
Set up the SnapVault schedule on the primary system by typing the following command on it:
snapvault snap sched <volume_name> <snap_name> <sched_spec>
where <sched_spec> is <copies>[@<hour_list>][@<day_list>]
For example, you might want to schedule snapshots to run three times a day at 8 a.m., 4 p.m., and midnight, retaining two days' worth of backups (that is, six copies). Example 11-7 shows the command and resulting output for this configuration.
Example 11-7 Scheduling SnapVault snapshots on the primary system
N6070A> snapvault snap sched VMware_SAN 8_hourly 6@0,8,16
N6070A> snapvault snap sched
create VMware_SAN 8_hourly 6@0,8,16
Use the snapvault snap sched command to check the newly created schedule.
Setting up the secondary schedule
You must also configure the schedule for the auxiliary storage system in a similar way. However, the secondary needs to transfer the snapshot from the primary system. Therefore, the command is different:
snapvault snap sched -x <volume_name> <snap_name> <sched_spec>
where <sched_spec> is <copies>[@<hour_list>][@<day_list>]
The -x option tells the secondary system to transfer the snapshot from the primary system.
In the previous example, where three backups are taken per day, you might want to retain backups on the secondary system for a longer period. For example, you might want to retain backups for a week (that is, 21 backups in total). Example 11-8 shows the command and resulting output in this situation.
Example 11-8 Scheduling SnapVault snapshot transfers on the secondary system
N6070B> snapvault snap sched -x vol_snap 8_hourly 21@0,8,16
N6070B> snapvault snap sched
xfer vol_snap 8_hourly 21@0,8,16 preserve=default,warn=0
N6070B>
11.6 Tape backups from the SnapVault secondary system
Where off-site backup is required, or if longer retention periods exist than are economical to store on disk, snapshots from the auxiliary storage system can be written to tape. You can perform this task by using the N series dump command with a local tape system. Alternatively, you can use an NDMP-enabled backup application, such as IBM Tivoli Storage Manager.
The volumes of the auxiliary storage system can be mapped directly by the backup server, and the snapshots are stored as subdirectories. Therefore, you can perform backup to tape of the required snapshots at any convenient time before the snapshot retention period expires.
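As a sketch of the dump approach, the following command writes the backed-up qtree from the auxiliary system to a locally attached tape drive. The tape device name rst0a is an assumption, and the qtree path is taken from the earlier examples; adjust both for your environment:

```
N6070B> dump 0f rst0a /vol/vol_snap/qtree_iSCSI-LUNs-B
```

In this command, 0 indicates a level-0 (full) backup, and f specifies that the tape device name follows.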
For details about using Tivoli Storage Manager to back up an N series storage system, see Using the IBM System Storage N series with IBM Tivoli Storage Manager, SG24-7243.
11.7 Restoring SnapVault snapshots
As with regular snapshots, the type of recovery is determined by the level of restoration that is required. This section explains how to recover a qtree from a SnapVault snapshot. The concepts for recovering a virtual machine or file within a virtual machine are the same as for regular snapshots.
11.7.1 Preparation
If not configured already, set the permissions on the secondary storage to allow the primary system to perform the restore by entering the following command on the secondary system (Example 11-3):
options snapvault.access host=<primary>
Before recovering SnapVault snapshots to Virtual Infrastructure 4.x, the ESXi host must be configured to allow Volume Resignaturing.
11.7.2 Restoring the qtree
Performing a LUN restore from SnapVault places the restored LUN on a volume on the primary storage system. Enter the following command (Example 11-9) on the primary system:
snapvault restore -S <secondary>:<secondary_qtree> <destination_qtree>
The destination qtree does not yet exist. It is created with the name you provide in the command. This command restores all LUNs from the secondary qtree to the new qtree. The new qtree can be in the same volume or in a different volume from the original source data.
Example 11-9 SnapVault restore command
N6070A> snapvault restore -S 9.155.66.103:/vol/vol_snap/qtree_iSCSI-LUNs-B N6070A:/vol/VMware_SAN/qtree_restore
Transfer started.
Monitor progress with 'snapvault status' or the snapmirror log.
N6070A>
The CLI of the primary system is unavailable for other commands until the restore is complete. You can press Ctrl+C to cancel the restore. To view the status, use the snapvault status command on the secondary system, as shown in Example 11-10.
Example 11-10 SnapVault status: Restore underway
N6070B> snapvault status
Snapvault secondary is ON.
Source Destination
State Lag Status
9.155.66.113:/vol/VMware_SAN/qtree_iSCSI-LUNs N6070B:/vol/vol_snap/qtree_iSCSI-
LUNs-B Snapvaulted 01:18:05 Idle
N6070B:/vol/vol_snap/qtree_iSCSI-LUNs-B N6070A:/vol/VMware_SAN/qtree_rest
ore Source - Transferring (321 MB done)
N6070B>
As with the initial transfer, the restore might take some time, depending on how much data in the qtree has to be restored. When it completes, the primary CLI shows a success message and becomes available again (Example 11-11).
Example 11-11 Successful restore
N6070A> Wed Nov 14 23:47:42 CET [N6070A:vdisk.qtreeRestoreComplete:info]: Qtree restore is complete for /vol/VMware_SAN/qtree_restore.
11.7.3 Mapping the LUN
After the restore is completed, the restored LUNs are displayed in the new qtree on the primary system. You must map the required LUNs to allow them to be accessed by the VMware host.
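As an illustrative sketch, the following commands list the restored LUNs and map one of them. The initiator group name esx_hosts is an assumption and must match the igroup of your VMware hosts; the LUN path is taken from the earlier examples:

```
N6070A> lun show
N6070A> lun map /vol/VMware_SAN/qtree_restore/iSCSI_LUN01 esx_hosts
```

If a restored LUN is reported as offline in the lun show output, bring it online with the lun online command before mapping it.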
11.7.4 Mounting a restored image in the VMware host
After the LUN is mapped, rescan the adapters on the VMware hosts. The data is now accessible. Depending on the restoration you require, perform one of the following actions:
Start the restored guests from the restored location:
a. Check that the original guests are no longer running, or stop them.
b. Open the recovered datastore on an ESXi host.
c. Add each guest to the inventory.
d. Start the recovered guests.
Copy the required guests to an existing datastore:
a. Open the original and restored datastores in vCenter.
b. Copy the required guest folders from the restored datastore into the original datastore.
c. Start the guests in the original datastore.
d. Delete the restored qtree with data.
Temporarily mount a guest to recover individual guest files:
a. Connect the .vmdk file of the restored datastore to a temporary guest.
b. Copy the required files from the restored .vmdk to the original guest.
c. Disconnect and remove the restored qtree with data.
 
 