Chapter 8. Building a corporate backup and recovery strategy 305
► Verify that the output paths specified in the control file for the staging and
output file areas exist and are accessible to the DB2 instance user ID.
► Throttle the resources used by Optim High Performance Unload with the
nbcpu configuration parameter, which reduces the number of threads created,
to mitigate the effect of db2hpu on other workloads in a production environment.
► Control the amount of memory used by Optim High Performance Unload
with the bufsize parameter.
Issue the db2hpu command
The db2hpu command is issued as shown in Example 8-3, where -i is the DB2
instance name and -f specifies the control file to be used.
Example 8-3 Issuing the db2hpu command from the command line
# db2hpu -i instname -f controlfile
The control file settings override the defaults set in the product configuration file,
db2hpu.cfg, which is located in the installation directory, typically /opt.
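As a rough illustration, a minimal control file that unloads a single table to a delimited file might look like the following sketch. The database name, schema, table, and output path are all illustrative, and the exact clause syntax should be verified against the Optim High Performance Unload documentation for your release:

```
GLOBAL CONNECT TO SAMPLE;
UNLOAD TABLE
  SELECT * FROM DWADMIN.SALES;
  OUTFILE("/hpu/output/sales.del")
  FORMAT DEL;
```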
Ensure that any backup, ingest, or other operations scheduled that might affect
the unload process have been altered for the duration of the unload exercise.
Avoid contention for tape drives and TSM resources by ensuring that no other
operations are active at the time of running Optim High Performance Unload.
8.4.7 Install and configure Optim High Performance Unload
Install and configure Optim High Performance Unload on each node where you
intend to use the product. Avoid copying the installation package to each
individual node by locating it on the shared /db2home file system.
When installed in a DB2 10.1 environment, Optim High Performance Unload can
unload data from a backup image for DB2 9.5 or above.
The configuration of Optim High Performance Unload consists of three tasks:
► Creating and sharing a single configuration file across all nodes
► Creating a directory structure for Optim High Performance Unload staging
and output files
► Setting configuration file parameters
Share a single configuration file across all nodes
Minimize administration of configuration files by creating a shared configuration
file on the administration node and modifying the local configuration file on each
node to reference it. To create a single configuration file that is shared across all
nodes on which Optim High Performance Unload is installed:
1. Make a copy of the default configuration file, customize it as required, and
save it on a file system that is accessible across all nodes; using /db2home is
recommended.
2. Add the dir_cfg parameter to the default Optim High Performance Unload
configuration file on each data node where Optim High Performance Unload
is installed to reference the shared Optim High Performance Unload
configuration file.
3. Make further customizations to the referenced configuration file only.
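For example, after step 2 the local db2hpu.cfg on each data node might be reduced to a single dir_cfg entry. The shared path shown here is illustrative; confirm in the product documentation whether dir_cfg takes a directory or a file path:

```
# Local db2hpu.cfg on each data node; all other settings live in the
# shared configuration referenced here (path illustrative)
dir_cfg=/db2home/hpu
```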
Create a directory structure for Optim High Performance
Unload staging and output files
Unloading a table from a table space that is 100 GB in size can require up to
100 GB of disk space for staging data. Additional disk space is required for
staging when incremental backup images are referenced.
Staging area
You can specify only one directory for the staging area. Create the same
directory structure on each data node to accommodate staging of data. Where
possible, locate the output and staging directories for each node on a separate
file system on separate logical unit numbers (LUNs). Size the staging area to
accommodate the largest table space in your database, and also take into
account the incremental and delta backup policy.
Output area
To enable parallelism when using Optim High Performance Unload, create an
output directory for each database partition. Size the output area to
accommodate the largest table space in your database. Avoid using the same
file systems as used by your database; you do not want DB2 to run out of space
on the production database.
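The staging and output layout described above can be sketched with a short script. The base path, directory names, and the number of partitions (four) are illustrative; substitute the file systems and partition count of your own environment, and run the script on each data node:

```shell
#!/bin/sh
# Sketch only: BASE, directory names, and the partition list are
# illustrative, not product defaults.
BASE="${BASE:-${TMPDIR:-/tmp}/hpu_demo}"

# One staging directory per data node
mkdir -p "$BASE/stage"

# One output directory per database partition, to allow the unload to
# write output files in parallel
for p in 0000 0001 0002 0003; do
  mkdir -p "$BASE/output/NODE$p"
done
```

Keeping the staging and output trees on file systems separate from the database LUNs, as recommended above, is what makes this layout safe to fill during a large unload.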
Set configuration file parameters
The product configuration file holds default parameter values that are referenced
each time the Optim High Performance Unload utility is used. Recommendations
apply to these parameters:
► nbcpu
Determine the number of work threads to be initiated by assessing the
recovery time objective required and the resources available. Change the
configuration parameter as required before running the db2hpu command.
► stagedir
Use separate directories for the staging area and output area where the load
formatted output files are written. The stagedir parameter in the db2hpu.cfg
configuration file is set to /tmp by default. Change this entry to reference a
directory that exists on each data node and has enough disk space to
accommodate your Optim High Performance Unload usage strategy.
► bufsize
Default memory usage of 8 MB per database partition is recommended for
most implementations, which is calculated as 4 MB for each Optim High
Performance Unload buffer pool. Increasing this configuration parameter
value increases the amount of memory reserved by Optim High Performance
Unload with only a marginal increase in throughput and is not recommended.
► maxselects
Determine the number of tables to process in parallel based on testing and
the amount of resources allocated to recovery.
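Putting the four parameters together, a fragment of the shared db2hpu.cfg might look like the following. The values are illustrative starting points rather than product defaults (except where noted), and the exact value syntax, for example for bufsize, should be checked against the comments in the shipped configuration file:

```
# Fragment of a shared db2hpu.cfg (values illustrative)
nbcpu=8              # worker threads; lower this to throttle db2hpu
stagedir=/hpu/stage  # replace the /tmp default with a sized file system
bufsize=8388608      # 8 MB per partition, the recommended default
maxselects=4         # tables processed in parallel; tune through testing
```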
Optim High Performance Unload disk requirements
The staging area must be sized to accommodate the temporary files created
when data is unloaded. The minimum size needed for the staging area depends
on the DB2 software version and table space type, as noted:
► The DB2 software version
In DB2 10.1, the entire table space is staged up to the High Water Mark
(HWM) for database-managed space (DMS) table spaces. Use the ALTER
TABLESPACE <TSNAME> REDUCE statement to avoid an unnecessarily large
HWM on DMS table spaces.
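For example, the statement can be issued from the DB2 command line before the backup is taken. The table space name TS_DATA is illustrative, and the exact REDUCE clause that applies (and whether a prior LOWER HIGH WATER MARK step is needed) depends on the table space's storage type, so check the ALTER TABLESPACE documentation for your configuration:

```
db2 "ALTER TABLESPACE TS_DATA REDUCE"
```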
► The table space type
System-managed space (SMS) table spaces do not require the entire table to
be staged; staging is complete when all table initialization data has been
found.
It is best to allocate space equivalent to the size of the table space.
When multiple tables are unloaded in parallel, the Optim High Performance
Unload utility attempts to minimize the number of times the backup image is
accessed. Additional processing is required for objects that include LOB, XML,
LONG VARCHAR, and LONG VARGRAPHIC data.