► A measure is added that will be queried against and that you want included in aggregates. Note the following information:
– Updates to in-database aggregates for a new measure can be made without rerunning the Aggregate Advisor. The DBA can modify the database aggregate creation and refresh scripts to include the measure. Then, using Cognos Cube Designer, the modeler can update the aggregate cube definition and publish the cube.
– Updates to in-memory aggregates require rerunning the Aggregate Advisor and saving a different set of in-memory aggregates from the new advisor results.
► Data skew changes significantly, such as when moving from a small-scale development system to a larger QA or production system.
► Query performance becomes unsatisfactory.
► Workload characteristics change significantly. In this case, recapture the workload logs and rerun the Aggregate Advisor to get new, relevant recommendations.
8.4 In-memory aggregates
As described in 2.3, “Cognos Dynamic Cubes caching” on page 27, Cognos Dynamic Cubes
supports two types of precomputed aggregate values: those stored in database tables and
those stored in its in-memory aggregate cache.
This section describes how the in-memory aggregates that were recommended by the Aggregate Advisor and saved in the Content Store with the cube in 8.3, "Overview of the Aggregate Advisor" on page 203, are loaded into the aggregate cache and used by the query engine.
8.4.1 Applying in-memory aggregates
In “Results populated in the Advisor Results view” on page 209, the Aggregate Advisor
recommended 22 in-memory aggregates with an estimated total size of less than 90 MB.
After opening and reviewing the advisor result, the set of in-memory aggregate recommendations was saved to the Content Store and associated with the gosldw_sales cube.
With the in-memory aggregate definitions saved and associated with the cube, the next time
gosldw_sales is started or restarted, the in-memory aggregates are loaded if the aggregate
cache is not disabled. An aggregate cache size of zero disables the aggregate cache.
8.4.2 Loading in-memory aggregates into the aggregate cache
This section describes how to enable and load the aggregate cache.
Enabling the aggregate cache
The size of the in-memory aggregates reported by the Aggregate Advisor is an estimate made at the time of the advisor run. The actual amount of memory that is consumed might differ, and as the data warehouse grows, the amount of memory that is needed to hold the in-memory aggregates increases.
The size of the aggregate cache that is specified in the properties of a dynamic cube is a
maximum. The aggregate cache, though, is loaded on a first-come basis: if an aggregate
result will not fit into the cache, it is discarded.
Setting the value to a very large number does not increase or waste memory. Only the memory that is required to hold the defined aggregates is used. For example, if it takes 90 MB to
hold the aggregates for gosldw_sales, and the aggregate cache size is set to 1 GB, only
90 MB of memory is used. Over time, if the underlying fact tables grow, the aggregates are
allowed to grow to the specified maximum 1 GB.
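The first-come loading behavior can be pictured with a short sketch. The following Python sketch is purely illustrative and is not Cognos code; the aggregate names and sizes are hypothetical, chosen to mirror the example above of 22 aggregates totalling roughly 90 MB loaded into a 1 GB cache.

# Illustrative sketch of first-come aggregate cache loading (not Cognos internals).
# Aggregates are admitted in the order they finish loading; an aggregate whose
# result does not fit in the remaining space is discarded, not queued or swapped.

CACHE_LIMIT_MB = 1024  # "Maximum amount of memory to use for aggregate cache"

def load_aggregate_cache(aggregates, limit_mb=CACHE_LIMIT_MB):
    """aggregates: iterable of (name, size_mb) pairs in completion order."""
    cache, used_mb = {}, 0
    for name, size_mb in aggregates:
        if used_mb + size_mb <= limit_mb:
            cache[name] = size_mb          # admitted: usable by the query engine
            used_mb += size_mb
        else:
            print(f"Discarded {name}: {size_mb} MB exceeds the "
                  f"{limit_mb - used_mb} MB remaining")
    return cache, used_mb

# Hypothetical sizes: 22 aggregates of about 4 MB each fit easily, so only
# about 90 MB of the 1024 MB limit is actually consumed.
aggregates = [(f"agg_{i}", 4) for i in range(22)]
cache, used = load_aggregate_cache(aggregates)
print(f"{len(cache)} aggregates loaded, {used} MB used of {CACHE_LIMIT_MB} MB limit")

In the sketch, an aggregate that does not fit is skipped rather than evicting aggregates that are already loaded, which mirrors the discard behavior described above.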
To verify that the aggregate cache is enabled and has a sufficient amount of memory to load
all the in-memory aggregates, set the Maximum amount of memory to use for aggregate
cache value in the cube properties:
1. Go to IBM Cognos Administration.
2. Select Status.
3. Select System.
4. Select the server.
5. Select the dispatcher.
6. Select QueryService.
7. In the QueryService context menu, select Set properties.
8. Select the Settings tab.
9. In Dynamic cube configurations, select Edit in the Value column.
10. Each cube has its own aggregate cache, so select the pencil icon next to gosldw_sales to edit the configuration properties of the cube.
11. Enter a value for Maximum amount of memory to use for the aggregate cache (MB). Specify a value greater than the advisor-estimated size so that all the in-memory aggregates can be loaded. For gosldw_sales, specifying a value of 200 is more than sufficient for now. See Figure 8-18 on page 223.
12. Click OK.
Note: Set the Maximum amount of memory to use for aggregate cache cube property to a
value that is greater than the advisor-estimated size so that all the in-memory aggregates
can be loaded. Remember that, as the data warehouse grows, the amount of memory
needed to hold the in-memory aggregates also grows based on a sliding scale that is
relative to the size of the member cache.
Figure 8-18 Set Maximum amount of memory to use for aggregate cache in the cube properties
Wait several minutes for the updated cube configuration settings to refresh and take effect in
the QueryService. Then, start or restart the cube to initiate loading of the in-memory
aggregates.
Loading the aggregate cache
The following cube administrative actions result in the loading of in-memory aggregates into
the aggregate cache:
? Cube start or cube restart
? Refresh data cache
? Refresh member cache
After the cube is started and available, the in-memory aggregates are loaded. The queries to
load the aggregates are run concurrently and asynchronously. As each individual aggregate
finishes loading, it becomes available for use by the query engine.
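The concurrent, asynchronous load can be illustrated with a small thread pool sketch. This is only a sketch of the behavior described above, not the QueryService implementation; run_aggregate_query, the aggregate names, and the sleep-based timing are hypothetical stand-ins.

# Illustrative sketch: aggregates load in parallel and each one becomes usable
# as soon as its own load query completes (not Cognos internals).
import os
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_aggregate_query(name):
    """Hypothetical stand-in for the query that computes one aggregate."""
    time.sleep(0.1)            # pretend the database is doing the work
    return name, 4             # (aggregate name, size in MB)

aggregate_names = [f"agg_{i}" for i in range(22)]
available = {}                 # aggregates the query engine can already route to

# Concurrency sized at twice the processor count, matching the default
# described in "Controlling the number of aggregates loaded in parallel".
with ThreadPoolExecutor(max_workers=2 * (os.cpu_count() or 1)) as pool:
    futures = [pool.submit(run_aggregate_query, n) for n in aggregate_names]
    for future in as_completed(futures):
        name, size_mb = future.result()
        available[name] = size_mb      # usable immediately, before the rest finish
        print(f"{name} loaded ({size_mb} MB); {len(available)} aggregates now available")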
Underlying database considerations
The DBA should be aware of the aggregate cache-load activities and consider the impact to
the underlying relational database. The queries that are run to populate the aggregate cache
go against the underlying relational database, and the retrieval and processing of a large
number of cells can be resource-intensive for the underlying relational database.
The aggregate load automatically begins after the cube is running and available. The cube is
open to user queries at this time. User queries can still be processed during the aggregate
load; however, until each in-memory aggregate completes its loading, the aggregate cannot
be used and query performance will not be optimal. Because loading the in-memory aggregates can
take some time, the cube start should be initiated during, or immediately after, the underlying
database ETL process to allow enough time for the load and provide optimal query
performance to users.
These queries are planned by Cognos Dynamic Cubes in the same manner as data queries,
so if there are any in-database aggregate tables available, the engine can take advantage of
these database aggregates and can route to them. An approach to increase the speed of the
aggregate cache load is to have supporting database aggregates. These database
aggregates can be created by the DBA, based on specific needs.
To help determine which supporting database aggregates can be used to increase the speed
of the aggregate cache load, view the information in the in-database recommendation output. If the
Aggregate Advisor was run to include both in-memory and in-database recommendations,
information in the in-database recommendation textual output indicates how many
corresponding in-memory aggregate recommendations can be satisfied by the database
aggregate recommendation. For example, the header section for a logical database
aggregate might contain a line similar to the following:
Number of recommended in-memory aggregates that this aggregate can cover: 5
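When many logical database aggregates are recommended, these coverage counts can help the DBA decide which ones to build first. The following Python sketch assumes the in-database recommendation output was saved to a plain text file and that each aggregate header contains a coverage line in the format shown above; the file name and the "Aggregate:" header convention are assumptions made for illustration.

# Illustrative sketch: rank recommended in-database aggregates by how many
# in-memory aggregate recommendations each one can cover.
# Assumptions: the advisor text output is in advisor_in_database.txt and each
# aggregate's header starts with an "Aggregate: <name>" line (hypothetical).
import re

COVER_LINE = re.compile(
    r"Number of recommended in-memory aggregates that this aggregate can cover:\s*(\d+)"
)

def rank_database_aggregates(path="advisor_in_database.txt"):
    coverage = []
    current_name = None
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.startswith("Aggregate:"):          # assumed header convention
                current_name = line.split(":", 1)[1].strip()
            match = COVER_LINE.search(line)
            if match and current_name:
                coverage.append((current_name, int(match.group(1))))
    # Build the aggregates that cover the most in-memory recommendations first.
    return sorted(coverage, key=lambda item: item[1], reverse=True)

for name, count in rank_database_aggregates():
    print(f"{name}: covers {count} in-memory aggregate recommendations")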
Controlling the number of aggregates loaded in parallel
By default, the number of aggregates that are loaded concurrently is twice the number of processors on the QueryService system. For example, if the QueryService system has four processors, eight threads load the aggregates in parallel.
The number of queries posed concurrently to populate the in-memory aggregate cache can
be controlled by an advanced property of Cognos Dynamic Cubes to ensure the underlying
relational database is not saturated with concurrent requests computing summary values.
To reduce the number of aggregate load queries that are run against the underlying database at one time, reduce the value of the qsMaxAggregateLoadThreads property. Note that fewer threads require more overall time to load the aggregates. To set the property, complete the following steps:
1. Go to IBM Cognos Administration.
2. Select Status.
3. Select System.
4. Select the server.
5. Select the dispatcher.
6. Select QueryService.
7. In the QueryService context menu, select Set properties.
8. Select the Settings tab.
9. In Advanced settings, select Edit in the Value column.
10. Select the Override the settings from the parent entry check box.
11. In an available row, enter qsMaxAggregateLoadThreads in the Parameter field, and a value in the corresponding Value field.
12. Click OK.
The setting is global to the QueryService: each cube gets the specified number of threads to load its aggregates in parallel.
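As a quick illustration of this sizing, the following Python sketch computes the number of parallel aggregate load queries a cube would use; the assumption that an unset qsMaxAggregateLoadThreads falls back to twice the processor count reflects the default described earlier and is not taken from Cognos code.

# Illustrative sketch of the load-thread sizing described above (not Cognos code).
import os

def aggregate_load_threads(qs_max_aggregate_load_threads=None):
    """Return the number of concurrent aggregate load queries per cube."""
    default = 2 * (os.cpu_count() or 1)    # default: twice the processor count
    if qs_max_aggregate_load_threads is None:
        return default                     # e.g. 8 threads on a 4-processor system
    return qs_max_aggregate_load_threads   # advanced setting caps the concurrency

print(aggregate_load_threads())            # default sizing
print(aggregate_load_threads(4))           # reduced to lighten database load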
Note: The DBA should be aware of the impact that aggregate cache load activities can have on the underlying relational database resources, and therefore of the cube administration tasks that result in an aggregate cache load, such as cube start, cube restart, data cache refresh, and member cache refresh.