CHAPTER 5
Measures of Risk and Performance

Foundational concepts in alternative assets include risk measurement and performance analysis.

5.1 Measures of Risk

Standard deviation of returns, also known as volatility, is the most common measure of total financial risk. If the return distribution is a well-known distribution such as the normal distribution, then the standard deviation reveals much of or even all of the information about the width of the distribution. If the distribution is not well-known, then standard deviation is usually a first pass at describing the dispersion. However, standard deviation can be an ineffective measure of risk when a distribution is nonsymmetrical. Standard deviation incorporates dispersion from both the right-hand side (typically profit) and the left-hand side (typically loss) of the distribution. The two sides are identical in a symmetrical distribution, but in a nonsymmetrical distribution the sides differ; and in the case of risk, the analyst is primarily concerned with the left, or downside, half of the distribution.

The following section includes risk measures that focus on the left or loss side of the return distribution, as well as other popular measures. This section is not intended as a comprehensive listing; it does not discuss the computation of systematic risk measures (betas) or other less frequently used measures.

5.1.1 Semivariance

Some risk measures focus entirely on the downside of the return distribution, meaning that they are computed without use of the above-mean outcomes other than to compute the mean of the distribution. One of the most popular downside risk measures is the semivariance.

Variance, as a symmetrical calculation, is an expected value of the squared deviations, including both negative and positive deviations. The semivariance uses a formula otherwise identical to the variance formula except that it considers only the negative deviations. Semivariance is therefore expressed as:

$$\text{Semivariance} = \frac{1}{T}\sum_{\substack{t=1 \\ R_t < \mu}}^{T}\left(R_t - \mu\right)^2 \tag{5.1}$$

where $T$ is the total number of observations and $\mu$ is the mean return. Semivariance's summation includes only the observations with values below the mean. Semivariance provides a sense of how much variability exists among losses or, more precisely, among lower-than-expected outcomes. The equation for the semivariance of a sample is given as:

$$\text{Semivariance} = \frac{1}{T-1}\sum_{\substack{t=1 \\ R_t < \bar{R}}}^{T}\left(R_t - \bar{R}\right)^2 \tag{5.2}$$

where $\bar{R}$ is the sample mean.

5.1.2 Semistandard Deviation

Semistandard deviation, sometimes called semideviation, is the square root of semivariance. Most statisticians define T in the computation of the semivariance and semistandard deviation as the total number of observations for a series. Some practitioners define T as the number of observations that have a negative deviation. Defining T as including all observations has desirable statistical properties and is the standard in statistics. Defining T as including only the number of negative deviations tends to scale semistandard deviation and standard deviation comparably, allowing easier comparisons of semistandard deviations with standard deviations. Both specifications of T should provide identical rankings when comparing samples with equal numbers of total observations and with equal numbers of negative observations.

The semivariance and semistandard deviation for a return series are rather easily computed. The idea is to include only those observations that have a deviation (return minus its mean) that is negative. All of the negative deviations are squared and summed.
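As a minimal sketch of that computation (the divisor follows the full-sample convention discussed above, with $T-1$ for the sample version; the return series in the usage note is illustrative):

```python
import math

def semivariance(returns, sample=True):
    """Mean of squared negative deviations from the mean return.

    T is the total number of observations (the standard statistical
    convention noted above); the sample version divides by T - 1.
    """
    t = len(returns)
    mean = sum(returns) / t
    neg_sq_devs = [(r - mean) ** 2 for r in returns if r < mean]
    return sum(neg_sq_devs) / ((t - 1) if sample else t)

def semistandard_deviation(returns, sample=True):
    """Square root of the semivariance."""
    return math.sqrt(semivariance(returns, sample))
```

For the illustrative series [0.04, -0.02, 0.02, -0.03, 0.04], with a mean of 1%, only the below-mean returns enter the sum, giving a sample semistandard deviation of 2.5%.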

5.1.3 Shortfall Risk, Target Semivariance, and Target Semistandard Deviation

In addition to measuring return risk relative to a mean return or an expected return, some analysts measure risk relative to a target rate of return (such as 5%), chosen by the investor based on the investor's goals and financial situation. Generally, the target return is a constant. Shortfall risk is simply the probability that the return will be less than the investor's target rate of return.

The concept of a target return can also be used in measures of downside dispersion. Target semivariance is similar to semivariance except that target semivariance substitutes the investor's target rate of return in place of the mean return. Thus, target semivariance is the dispersion of all outcomes below some target level of return rather than below the sample mean return. Target semistandard deviation (TSSD) is simply the square root of the target semivariance.

When the target is the mean, target semivariance equals semivariance. A very high target return eliminates only the highest outcomes, whereas a very low target eliminates most of the outcomes. The target should typically be set equal to the investor's target rate of return, such as the minimum return consistent with the investor's goals.
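A sketch of the target versions, using a 5% target like the one mentioned in the text as an illustrative input:

```python
def target_semivariance(returns, target):
    """Dispersion of outcomes below a fixed target return.

    Identical to semivariance except that the investor's target rate of
    return replaces the mean; T here is the total number of observations.
    """
    below_target = [(r - target) ** 2 for r in returns if r < target]
    return sum(below_target) / len(returns)

def target_semistandard_deviation(returns, target):
    """TSSD: the square root of the target semivariance."""
    return target_semivariance(returns, target) ** 0.5
```

Setting the target equal to the sample mean reproduces the (full-sample-divisor) semivariance, consistent with the observation above that the two measures coincide in that case.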

5.1.4 Tracking Error

Tracking error indicates the dispersion of the returns of an investment relative to a benchmark return, where a benchmark return is the contemporaneous realized return on an index or peer group of comparable risk. Although tracking error is sometimes used loosely simply to refer to the deviations between an asset's return and the benchmark return, the term tracking error is usually defined as the standard deviation of those deviations, as shown in Equation 5.3:

$$\text{Tracking Error} = \sqrt{\frac{1}{T-1}\sum_{t=1}^{T}\left[\left(R_t - R_{\text{Bench},t}\right) - \bar{d}\right]^2} \tag{5.3}$$

where $R_{\text{Bench},t}$ is the benchmark return in time period $t$, and $\bar{d}$ is the mean of $(R_t - R_{\text{Bench},t})$, which is often assumed to be zero.

Note that the benchmark return in Equation 5.3 is subscripted by t, denoting that it differs from period to period. As a standard deviation, tracking error has the advantage of being able to be roughly viewed as a typical deviation, as discussed earlier in this chapter. Since tracking error is formed based on deviations from a benchmark rather than deviations from its own mean, it is an especially useful measure of the dispersion of an asset's return relative to its benchmark. Therefore, whereas standard deviation of returns might be used for an asset with a goal of absolute return performance, tracking error might be used more often for an asset with a goal of relative return performance.

5.1.5 Drawdown

Drawdown is defined as the maximum loss in the value of an asset over a specified time interval and is usually expressed in percentage-return form rather than currency. For example, an asset reaching a high of $100 and then falling to a subsequent low of $60 would be said to have suffered a drawdown of 40%. Maximum drawdown is defined as the largest decline over any time interval within the entire observation period. Smaller losses during smaller intervals of the observation period are often referred to as drawdowns or individual drawdowns. For example, an asset might be said to have experienced a maximum drawdown of 33% since 1995 (for example, between 2000 and 2002), with individual drawdowns of 23% in 2000 and 14% in 2007.

The measured size of a drawdown can vary based on the frequency of the valuation interval, meaning the granularity of the return and price data. For example, if only quarter-end valuations and quarterly returns are used, the true highest values and lowest values of an asset would not be included unless the high and low happened to coincide with dates at the end of a quarter. Thus, a March 31 quarter-ending value of $60 to an asset may be the lowest quarter-ending figure, but the asset may have traded well below $60 sometime during that quarter. A drawdown figure based on only end-of-quarter values would almost always miss the true highs and lows. More frequent observations have a greater likelihood of capturing the true highs and lows. Thus, using monthly, daily, or even tick-by-tick data generally produces higher measures of drawdown.
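The running-peak logic described above can be sketched as follows; as the text notes, feeding in finer-grained prices (daily rather than quarterly) will generally report deeper drawdowns:

```python
def max_drawdown(prices):
    """Largest percentage decline from any running peak to a
    subsequent trough over the observation period."""
    peak = prices[0]
    worst = 0.0
    for price in prices:
        peak = max(peak, price)                    # highest value seen so far
        worst = max(worst, (peak - price) / peak)  # decline from that peak
    return worst
```

For the example in the text, a fall from a high of $100 to a subsequent low of $60 is a 40% drawdown.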

5.1.6 Value at Risk

Value at risk (VaR) is the loss figure associated with a particular percentile of a cumulative loss function. In other words, VaR is the maximum loss over a specified time period within a specified probability. The specification of a VaR requires two parameters:

  1. The length of time involved in measuring the potential loss
  2. The probability used to specify the confidence that the given loss figure will not be exceeded

Thus, we might estimate the VaR for a 10-day period with 99% confidence as being $100,000. In this case, the VaR is a prediction that over a 10-day period, there is a 99% chance that performance will be better than the scenario in which there is a $100,000 loss. Conversely, there is a 1% chance that there will be a loss in excess of $100,000, but VaR does not estimate the expected loss or maximum possible loss in extreme scenarios. Additional VaR values could be obtained for other time horizons and with other probabilities, such as a VaR for a one-day period with 90% confidence.

The time horizon selected is often linked to how long the decision maker thinks it might be necessary to take an action, such as to liquidate a position. The probability is linked to whether the manager wants to analyze extremely bad scenarios or more likely scenarios. Longer time horizons generally produce larger VaRs because there is more time for the financial situation to deteriorate further. Higher confidence probabilities produce larger VaRs because they force the loss estimate to be based on more unusual circumstances. There is nothing to prevent management from analyzing a number of VaRs based on multiple time periods and/or confidence levels.

Exhibit 5.1 illustrates the concept of a $100,000 VaR for a portfolio based on a confidence level of 99%. The investor can be 99% confident that the portfolio will not lose more than $100,000 over the specified time interval. Thus, there is a 1% probability that the investor will suffer a loss of $100,000 or more over that time interval.


Exhibit 5.1 Example of the Distribution of a $100,000 VaR for a Portfolio Based on a Confidence Level of 99%

The VaR summarizes potential loss in a condensed and easy-to-understand way to facilitate understanding and comparison. However, as a single measure of potential loss, the information that it can contain is limited unless the user knows the shape of the distribution of the potential losses.

Variations of VaR exist, such as conditional value-at-risk. Conditional value-at-risk (CVaR), also known as expected tail loss, is the expected loss of the investor given that the VaR has been equaled or exceeded. Thus, if the VaR is $1 million, then the CVaR would be the expected value of all losses equal to or greater than $1 million. The CVaR provides the investor with information about the potential magnitude of losses beyond the VaR.
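A sketch of both measures estimated by ranking a sample of returns (this anticipates the historical estimation approach discussed later in this chapter; the figures in the test data are illustrative, and the exact cutoff convention varies across practitioners):

```python
def var_and_cvar(returns, confidence=0.99):
    """Historical-sample VaR and CVaR (expected tail loss).

    VaR is the loss at the chosen confidence level; CVaR is the average
    of the losses that equal or exceed the VaR.
    """
    losses = sorted((-r for r in returns), reverse=True)  # largest loss first
    n_tail = max(1, round(len(losses) * (1 - confidence)))
    tail = losses[:n_tail]                  # losses at or beyond the VaR
    return tail[-1], sum(tail) / len(tail)  # (VaR, CVaR)
```

By construction, the CVaR is always at least as large as the VaR.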

5.1.7 Strengths and Weaknesses of VaR

The VaR provides a first glance at risk. It can be computed for a single risk exposure (such as a single security), for a portfolio, for an entire division, or for the entire firm. The VaR is a simplified risk measure that can be relatively uniformly computed and interpreted across divisions within a fund or across funds. Numerous entities request or require the reporting of VaR, and they test through time whether a fund's reported VaR is consistent with the actual risk that is experienced. The VaR is especially useful in situations in which a worst-case analysis makes no sense, such as in derivatives, where some positions have unlimited downside risk.

As a single risk measure, VaR provides rather limited information. Further, in some circumstances, VaR can be extremely deceptive. For example, consider a situation in which there is a 1 in 60 chance that a fund will lose $1 million, and under all other situations, the fund will make $30,000 (such as a fund writing an out-of-the-money binary option with a probability of being exercised of 1/60). The VaR using 90%, 95%, or 98% confidence is $0. But the VaR using 99% confidence is $1 million. A manager seeing only the 99% confidence number will perceive a very different risk exposure than a manager seeing VaR from a lower probability.

The VaR is an important risk measure and can be estimated in a variety of ways based on a variety of circumstances. The estimation of VaR is sufficiently important to warrant an entire section.

5.2 Estimating Value at Risk (VaR)

Consider JAC Fund, which has accumulated a position of 50,000 shares of an exchange-traded fund (ETF) that tracks the S&P 500. This hypothetical ETF trades at $20 per share for a total holding of $1 million. JAC Fund wishes to know how much money could be lost if the ETF fell in value. The theoretical answer is that the fund could lose the entire $1 million, under the highly unlikely scenario that the ETF becomes completely worthless.

JAC Fund's management realizes that to make this number meaningful, they must specify a length of time and a probability of certainty. Thus, they might ask how much money, at most, could be expected to be lost 99% of the time over a 10-business-day interval. A reasonable answer to that question is $100,000. In other words, 99 times out of 100, the position in the ETF will do better than losing $100,000 over 10 business days. But on average, during one two-week period out of every 100 such periods, JAC Fund should expect to lose $100,000 or more. It could compute other VaR estimates using time horizons other than 10 days (1, 2, 5, and 30 days are also common) or probabilities other than 99% (90%, 95%, and 98% are also common).

It is easy to assume that VaR analysis is based on the normal probability distribution, because most VaR applications use the statistics of that distribution. However, VaR computation does not require the use of a normal probability distribution or any other formal probability distribution. It merely requires some method or model of predicting the magnitudes and probabilities of various loss levels.

For example, debt securities do not offer a payout at maturity that is normally distributed. Rather, there is usually a high probability that the debt will be paid off in full and various probabilities that only partial payments will be received. If the potential losses of a position or a portfolio of positions cannot be modeled accurately using the normal distribution or another common distribution, the VaR can be estimated in other ways. If the potential losses form a normal distribution, then a parametric approach can be used.

The following section details the parametric estimation of VaR when the losses are normally distributed. Later sections discuss other methods of estimating VaR.

5.2.1 Estimating VaR with Normally Distributed Returns

If the potential losses being analyzed follow a normal distribution, a parametric approach can be used (i.e., VaR can be based on the parameters of the normal distribution). A VaR computation assuming normality and using the statistics of the normal distribution is known as parametric VaR. Computing parametric VaR begins with estimating a standard deviation and inserting it into the following formula based on daily price changes:

$$\text{VaR} = N \times \sigma \times \sqrt{\text{Days}} \times \text{Value} \tag{5.4}$$

The formula can use time periods other than days by adjusting Days and $\sigma$ accordingly. For simplicity, the formula assumes that the expected return of the investment is zero. The four components of this formula are:

  1. N is the number of standard deviations, which depends on the confidence level that is specified. The value 2.33 should be used if the user wants to be 99% confident, 1.65 should be used if the user wants to be 95% confident, and so forth, with values that can be found using tables or spreadsheets of confidence intervals based on the normal distribution.
  2. σ is the estimated daily standard deviation expressed as a proportion of price or value (return standard deviation). The standard deviation is a measure of the volatility of the value. For example, the ETF discussed earlier might be viewed in a particular market as having a daily standard deviation of perhaps 1.35%. Given a stock price of $20, we can think of the ETF's daily standard deviation measured in absolute terms as being about $0.27. If the standard deviation is expressed as a dollar value of the entire position, then the formula would omit the last term. The standard deviation can be estimated using historical data, observed through option volatilities, or forecasted in some other way, such as with fundamental analysis.
  3. $\sqrt{\text{Days}}$ is the square root of the number of days used for the VaR analysis, such as 1, 2, 5, or 10 days. The reason we use the square root is that risk as measured by VaR often grows proportionally with the square root of time, assuming no autocorrelation of returns. Thus, a two-day VaR is only 41.4% bigger than a one-day VaR, since $\sqrt{2} \approx 1.414$.
  4. Value is the market value of the position for which the VaR is being computed. For example, it might be the value of a portfolio.

In many cases, such as in this case of a single position, these four inputs are simply multiplied together to find the VaR. In other cases, a further adjustment might be necessary, such as subtracting the collateral that is being held against the potential loss to find the amount that is at risk, or adjusting for the expected profit on the position over the time interval.
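Putting the four components together for the JAC Fund figures used in this chapter (N = 2.33 for 99% confidence, a 1.35% daily standard deviation, a 10-day horizon, and a $1 million position; no collateral or expected-profit adjustment):

```python
import math

def parametric_var(n_std_devs, daily_sigma, days, value):
    """Parametric VaR assuming normality and zero expected return:
    VaR = N x sigma x sqrt(Days) x Value (Equation 5.4)."""
    return n_std_devs * daily_sigma * math.sqrt(days) * value

# 99% confidence, 10 business days, $1 million ETF position:
jac_var = parametric_var(2.33, 0.0135, 10, 1_000_000)  # roughly $99,500
```

The result lands close to the $100,000 10-day, 99% VaR quoted for JAC Fund.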

5.2.2 Estimating VaR with Normally Distributed Underlying Factors

The approach just described is the simplest case of the analytic approach to computing VaR. Note that the standard deviation used in the equation was the standard deviation of the value or returns of the position being studied. In more complex examples of the analytic approach, the values being studied (e.g., security prices) are modeled as functions of one or more underlying economic variables or factors, such as when an option price is modeled as a function of five or more variables. In these instances, the VaR equation is expressed using the volatilities and correlations of the underlying factors, as well as the sensitivity of the security prices to those factors. In the case of highly nonlinear price functions, such as options, the sensitivities include terms to capture the nonlinearity of large movements (e.g., by using convexity). Thus, the parametric VaR equation that is rather simple for a single position with value changes that are normally distributed can become quite complex for positions with highly nonlinear relationships to underlying factors and/or positions that depend on several factors.

5.2.3 Two Primary Approaches to Estimating the Volatility for VaR

In most parametric VaR applications, the biggest challenge is estimating the volatility of the asset containing the risk. A common approach is to set the estimate equal to the asset's historical standard deviation of returns. Much work has been and continues to be devoted to developing improved forecasts of volatility from past data. These efforts focus on the extent to which more recent returns should be given a higher weight than returns from many periods ago. Models such as ARCH and GARCH, discussed in the preceding chapter, emphasize more recent observations in estimating volatility from past data.

Another method of forecasting volatility is based on market prices of options. Estimates of volatility are based on the implied volatilities from option prices. These estimates, when available and practical, are typically more accurate than estimates based on past data, since they reflect expectations of the future. For example, in our case of estimating the VaR on a position linked to an ETF tracking the S&P 500, the analyst may use implied volatilities from options on products that track the S&P 500 or may examine the CBOE Volatility Index (VIX) futures contract that reflects S&P 500 volatility.

5.2.4 Two Approaches to Estimating VaR for Leptokurtic Positions

The VaR computations are sensitive to misestimation of the probabilities of highly unusual events. If a position's risk is well described by the normal distribution, then the probabilities of extreme events are easily determined using an estimate of volatility. But leptokurtic positions have fatter tails than the normal distribution, so VaR is sensitive to the degree to which the position's actual tails exceed the tails of a normal distribution.

One solution is to use a probability distribution that allows for fatter tails. For example, the t-distribution not only allows for fatter tails than the normal distribution but also has a parameter that can be adjusted to alter the fatness of the tails. Also, the lognormal distribution is often viewed as providing a more accurate VaR for skewed distributions. Some applications involve rather complicated statistical probability distributions, such as mixed distributions, to incorporate higher probabilities of large price changes. In these cases, the parametric VaR (Equation 5.4) must be modified to reflect the new probability distribution.

A second and potentially simpler approach to adjusting for fat tails is simply to increase the number of standard deviations in the formula for a given confidence level. The increase should be based on analysis (typically historical analysis) of the extent of the kurtosis. An analyst computing VaR for a 99% confidence level might adjust the number of standard deviations in the VaR computation from the 2.33 value that is derived using a normal distribution to a value reflecting fatter tails, such as 2.70. The higher value would likely be based on empirical analysis of the size of the tails in historical data for the given position or similar positions. It is usually necessary to adjust the number of standard deviations only in the cases of very high confidence levels, since most financial return distributions are reasonably close to being normally distributed within 2 standard deviations of the mean. The adjustment may need to be large for very high confidence levels that focus on highly unusual outcomes. Extreme value theory is often used to provide estimates of extremely unlikely outcomes.
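The second approach amounts to replacing the normal-based N with a larger empirical value. Because parametric VaR is linear in N, the effect on the VaR is just the ratio of the two values; the sketch below uses the illustrative 2.70 figure from the text:

```python
from statistics import NormalDist

n_normal = NormalDist().inv_cdf(0.99)  # about 2.33 for 99% confidence
n_fat_tailed = 2.70                    # empirically adjusted value from the text

# The fat-tail adjustment scales the parametric VaR by the ratio of the
# two N values (about a 16% increase in this illustration).
var_scale_factor = n_fat_tailed / n_normal
```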

5.2.5 Estimating VaR Directly from Historical Data

Rather than using a parameter such as the standard deviation to compute a parametric VaR, a very simple way to estimate VaR is to examine a large collection of previous price changes and find the size of the price change for which the specified percentage of outcomes was lower.

For example, consider a data set with a long-term history of deviations of an ETF's return from its mean return. We wish to estimate a five-day 99% VaR. We might collect the daily percentage price changes of the ETF for the past 5,000 days and use them to form 1,000 periods of five days each. We then rank the five-day deviations from the highest to the lowest. Suppose we find that exactly 10 of these 1,000 periods had price drops of more than 6.8%, and all the rest of the periods (99%) had better performance. The 99% five-day VaR for our ETF position could then be estimated at 6.8% of the portfolio's current value, under the assumption that past price changes are representative of future price changes.
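The ranking procedure just described can be sketched as follows (the exact cutoff convention, i.e., which ranked observation the VaR is read from, varies slightly across practitioners):

```python
def historical_var(period_returns, confidence=0.99):
    """Estimate VaR by ranking past period returns and reading off the
    loss exceeded in only (1 - confidence) of the periods."""
    ranked = sorted(period_returns)               # worst outcomes first
    cutoff = int(len(ranked) * (1 - confidence))  # e.g., 10 of 1,000 periods
    return -ranked[cutoff]                        # loss as a positive number
```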

The value of this approach is its conceptual simplicity, its computational simplicity, and the fact that it works even if the underlying probability distribution is unknown. The approach requires the process to be stable, meaning that the risk of the assets hasn't changed and that the number of past observations is sufficiently large to make an accurate estimate. The requirement of unchanging asset risk throughout the many previous observation periods usually disqualifies this approach for derivatives and some alternative investments with dynamically changing risk exposure, such as hedge funds. The requirement of sufficient past observations is a challenge for illiquid alternative investments, such as private real estate and private equity.

5.2.6 Estimating VaR with Monte Carlo Analysis

Monte Carlo analysis is a type of simulation in which many potential paths of the future are projected using an assumed model, the results of which are analyzed as an approximation to the future probability distributions. It is used in difficult problems when it is not practical to find expected values and standard deviations using mathematical solutions.

An example outside of investments illustrates the method. An analyst might be trying to figure out the best strategy for playing blackjack at a casino, such as whether a gambler should “stay” at 16 or 17 when the dealer has a face card showing. Solving this problem with math and statistics can get so complex that it may be easier to simulate the potential strategies. To perform a Monte Carlo analysis, a computer program is designed to simulate how much money the gambler would win or lose if the gambler played thousands and thousands of hands with a given strategy. The computer simulates play for thousands of games, one at a time, using the known probabilities of drawing various cards. The strategy that performs best in the simulations is then viewed as the strategy that will work best in the future.

In finance, it can be very complex to use a model to solve directly for the probability of a given loss in a complex portfolio that experiences a variety of market events, such as interest rate shifts. To address the problem with Monte Carlo simulation, the risk manager defines how the market parameters, such as interest rates, might behave over the future and then programs a computer to project thousands and thousands of possible scenarios of interest rate changes and other market outcomes. Each scenario is then used to estimate the financial outcome for the portfolio being analyzed. These results are then used to form a probability distribution of value changes and estimate a VaR. A Monte Carlo simulation might project one million outcomes for a portfolio. In that case, a 99% VaR would be the loss that occurred in the 10,000th worst outcome.
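A minimal sketch under simplifying assumptions: normally distributed, independent daily returns; additive daily P&L rather than compounding; and volatility and position size reusing the JAC Fund illustration. A real application would replace the return model with scenarios for the underlying market factors.

```python
import random

def monte_carlo_var(n_paths=20_000, confidence=0.99, daily_sigma=0.0135,
                    days=10, value=1_000_000, seed=7):
    """Simulate many multi-day P&L paths from an assumed daily return
    model, then read the VaR off the simulated loss distribution."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_paths):
        # Sum of daily returns approximates the multi-day return
        # (compounding is ignored for simplicity).
        path_return = sum(rng.gauss(0.0, daily_sigma) for _ in range(days))
        outcomes.append(path_return * value)
    outcomes.sort()  # worst outcomes first
    return -outcomes[int(n_paths * (1 - confidence))]
```

With these inputs the estimate should land near the roughly $100,000 parametric figure, since the simulated model is itself normal.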

5.2.7 Three Scenarios for Aggregating VaR

Once VaR has been computed for each asset or asset type, how are the VaRs aggregated into a VaR for the entire portfolio? For example, consider a hedge fund with equally weighted allocations to its only two positions. The fund's analyst reports a VaR of $100,000 for position #1 and a VaR of $100,000 for position #2. The critical question is the VaR of the combined positions. Let's consider three scenarios based on correlations between the returns of the two positions:

  1. PERFECT POSITIVE CORRELATION: If the two positions are identical, or have perfectly positively correlated and identical risk exposures, then the VaR of the combination is simply the sum of the individual VaRs: $200,000.
  2. ZERO CORRELATION: If the two positions have statistically independent risk exposures, then under some assumptions, such as normally distributed outcomes, the VaR of the combination might be the sum of the individual VaRs divided by the square root of 2, or $141,421, which can be derived from the equation for the variance of uncorrelated normally distributed returns and the formula for parametric VaR based on the normal distribution.
  3. PERFECT NEGATIVE CORRELATION: If the two positions completely hedge each other's risk exposures, then the VaR of the combination would be $0.

Thus, VaRs should be added together to form a more global VaR only when the risks underlying the individual VaRs are perfectly correlated and have identical risk exposures. In other words, the addition of VaRs assumes that every asset or position will experience a highly abnormal circumstance on the same day. If the risks of the assets or positions are imperfectly correlated with each other, the VaRs should be combined using a model that incorporates the effects of diversification using statistics and the correlation between the risks.
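For parametric VaRs at the same confidence level, all three scenarios fall out of the two-asset portfolio-variance formula. A sketch, assuming normally distributed returns:

```python
import math

def combine_two_vars(var_1, var_2, correlation):
    """Combine two parametric VaRs using the two-asset portfolio
    standard deviation formula:
    VaR_p = sqrt(VaR1^2 + VaR2^2 + 2 * rho * VaR1 * VaR2)."""
    return math.sqrt(var_1 ** 2 + var_2 ** 2
                     + 2 * correlation * var_1 * var_2)
```

With $100,000 VaRs on each position, correlations of +1, 0, and -1 reproduce the $200,000, $141,421, and $0 figures from the three scenarios above.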

5.3 Ratio-Based Performance Measures

There are two major types of performance measures. The first uses ratios of return to risk. With this method, return can be expressed in numerous ways in the numerator, and risk can be expressed in numerous ways in the denominator. This section discusses the most useful and common return-to-risk ratios. A second method for measuring performance involves estimating the risk-adjusted return of an asset that can be compared with a standard. This and other approaches are discussed in section 5.4.

The numerator of ratio-based performance measures is based on the expected return or the average historical return of the given asset. The numerator usually takes one of three forms: (1) the asset's average return, (2) the asset's average return minus a benchmark or target rate of return, and (3) the asset's average return minus the riskless rate.

The denominator of the ratio can be virtually any risk measure, although the most popular performance measures use the most widely used risk measures, such as volatility (standard deviation) or beta. The risk measure may be an observed estimate of risk or the investor's belief regarding expected risk. This section discusses the most common ratio-based performance measures in alternative investment analysis.

5.3.1 The Sharpe Ratio

The most popular measure of risk-adjusted performance for traditional investments and traditional investment strategies is the Sharpe ratio. The Sharpe ratio has excess return as its numerator and volatility as its denominator:

$$SR_p = \frac{E(R_p) - R_f}{\sigma_p} \tag{5.6}$$

where $SR_p$ is the Sharpe ratio for portfolio $p$, $E(R_p)$ is the expected return for portfolio $p$, $R_f$ is the riskless rate, and $\sigma_p$ is the standard deviation of the returns of portfolio $p$. The numerator is the portfolio's expected or average excess return, defined as its expected or average total return minus the riskless rate.


The Sharpe ratio facilitates comparison of investment alternatives and the selection of the opportunity that generates the highest excess return per unit of total risk. However, the denominator of the Sharpe ratio (the standard deviation) does not reflect the marginal contribution of risk that occurs when an asset is added to a portfolio with which it is not perfectly correlated. In other words, the actual additional risk that the inclusion of an asset causes to a portfolio is less than the standard deviation whenever that asset helps diversify the portfolio. Accordingly, it can be argued that the Sharpe ratio should be used only on a stand-alone basis and not in a portfolio context.

It should be obvious that both the numerator and the denominator of the Sharpe ratio should be measured in the same unit of time, such as quarterly or annual values. The resulting Sharpe ratio, however, is sensitive to the length of the time period used to compute the numerator and the denominator. Note that the numerator is proportional to the unit of time, ignoring compounding. Thus, the excess return expressed as an annual rate will be two times larger than a semiannual rate and four times larger than a quarterly rate, ignoring compounding. However, the denominator is linearly related to the square root of time, assuming that returns are statistically independent through time:

$$\sigma_T = \sigma_1\sqrt{T} \tag{5.7}$$

where $\sigma_T$ is the standard deviation over $T$ periods; $\sigma_1$ is the standard deviation over one time period, such as one year; and $T$ is the number of time periods.

This formula assumes that the returns through time are statistically independent. Thus, a one-year standard deviation is only $\sqrt{2}$ times a semiannual standard deviation, and only twice ($\sqrt{4} = 2$) the quarterly standard deviation. Switching from quarterly returns to annualized returns therefore roughly quadruples the numerator but only doubles the denominator, approximately doubling the ratio.
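The scaling argument can be checked numerically. The quarterly figures below are illustrative (a 2% quarterly excess return and 5% quarterly volatility), and annualization ignores compounding, as in the text:

```python
import math

def sharpe_ratio(mean_excess_return, volatility):
    """Sharpe ratio (Equation 5.6): excess return per unit of volatility."""
    return mean_excess_return / volatility

quarterly_sr = sharpe_ratio(0.02, 0.05)
# Annualizing multiplies the excess return by 4 but the volatility by
# only sqrt(4) = 2 (assuming returns are independent through time), so
# the annualized Sharpe ratio is twice the quarterly one.
annualized_sr = sharpe_ratio(0.02 * 4, 0.05 * math.sqrt(4))
```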

If returns were perfectly correlated through time, the Sharpe ratio would not be sensitive to the time unit of measurement; it would be dimensionless. However, in a perfect financial market, returns are expected to be statistically independent through time, and in practice, returns are usually found to be somewhat statistically independent through time. The point is that Sharpe ratio comparisons must be performed using the same return intervals.

Sharpe ratios should be computed and compared consistently with the same unit of time, such as annualized data. They can then be intuitively interpreted and easily compared across investments. However, Sharpe ratios ignore diversification effects and are primarily useful for comparing returns on a stand-alone basis. This means that Sharpe ratios should typically be used when examining total portfolios rather than when evaluating components that will be used to diversify a portfolio. Of course, if the investments being compared are well-diversified portfolios, then the Sharpe ratio is appropriate, since systematic risk and total risk are equal in well-diversified portfolios. It should be noted that in the field of investments, the term well-diversified portfolio is traditionally interpreted as any portfolio containing only trivial amounts of diversifiable risk.

Finally, a Sharpe ratio is only as useful as volatility is useful in measuring risk. In the case of normally distributed returns, the volatility fully describes the dispersion in outcomes. But in the many alternative investments with levels of skew and kurtosis that deviate from the normal distribution, volatility provides only a partial measure of dispersion. Thus, the Sharpe ratio is a less valuable measure of risk-adjusted performance for asset returns with non-normal distributions.

5.3.2 Four Important Properties of the Sharpe Ratio

As detailed in the previous section, the Sharpe ratio has the following four important properties:

  1. It is intuitive. Using annual or annualized data, the Sharpe ratio reflects the added annual excess return per percentage point of annualized standard deviation.
  2. It is a measure of performance that is based on stand-alone risk, not systematic risk. Therefore, it does not reflect the marginal risk of including an asset in a portfolio when there is diversifiable risk.
  3. It is sensitive to dimension. The Sharpe ratio changes substantially if the unit of time changes, such as when quarterly rates are used rather than annualized rates.
  4. It is less useful in comparing investments with returns that vary by skew and kurtosis.

The Sharpe ratio should be used with caution when measuring the performance of particular alternative investments, such as hedge funds. Research has shown that the Sharpe ratio may be manipulated (to the benefit of a hedge fund manager) using optionlike strategies.

5.3.3 The Treynor Ratio

Another popular measure of risk-adjusted performance for traditional investments and traditional investment strategies is the Treynor ratio, which differs from the Sharpe ratio by using systematic risk rather than total risk. The Treynor ratio has excess return as its numerator and beta, as the measure of risk, as its denominator:

(5.11)  TR = [E(Rp) − Rf] / βp

where TR is the Treynor ratio for portfolio p; E(Rp) is the expected return, or mean return, for portfolio p; Rf is the riskless rate; and βp is the beta of the returns of portfolio p.

The Treynor ratio offers the intuition of estimating the excess return of an investment relative to its systematic risk. The Treynor ratio can be directly compared to the equity risk premium discussed in Chapter 8.

Unlike the Sharpe ratio, the Treynor ratio should not be used on a stand-alone basis. Beta is a measure of only one type of risk: systematic risk. Therefore, selecting a stand-alone investment on the basis of the Treynor ratio might tend to maximize excess return per unit of systematic risk but not maximize excess return per unit of total risk unless each investment were well diversified. Beta does, however, serve as an appropriate measure of the marginal risk of adding an investment to a well-diversified portfolio. In this way, the Treynor ratio is designed to compare well-diversified investments and to compare investments that are to be added to a well-diversified portfolio. But the Treynor ratio should not be used to compare poorly diversified investments on a stand-alone basis. The Treynor ratio is less frequently applied in alternative investments, as beta is not an appropriate risk measure for many alternative investment strategies.

The Treynor ratio depends on the unit of time used to express returns. Generally, the beta of an asset (the denominator of the ratio) would be expected to be quite similar, regardless of the unit of time used to express returns. However, ignoring compounding, the quarterly returns would be expected to be one-quarter the magnitude of annual returns, and monthly returns would be expected to be one-twelfth the magnitude of annual returns. Thus, the numerator is proportional to the time unit, and the denominator is roughly independent of the time unit, meaning that the ratio is proportional to the unit of time.
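As a sketch of Equation 5.11, the following function computes the Treynor ratio from annualized inputs; the portfolio figures and the beta estimate are hypothetical.

```python
def treynor_ratio(mean_return, riskless_rate, beta):
    """Excess return per unit of systematic risk (Equation 5.11)."""
    return (mean_return - riskless_rate) / beta

# Hypothetical portfolio: 9% mean return, 3% riskless rate, beta of 1.2.
print(treynor_ratio(0.09, 0.03, 1.2))
```

Note that, per the discussion above, the result is proportional to the unit of time of the returns, since beta is roughly unchanged while the excess return scales with the period length.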

5.3.4 Four Important Properties of the Treynor Ratio

As detailed in the previous section, the Treynor ratio has the following four important properties:

  1. It is highly intuitive. Using annual or annualized data, the Treynor ratio reflects the added annual excess return per unit of beta.
  2. It is a measure of performance that is based on systematic risk, not stand-alone risk. Therefore, it does not reflect the marginal total risk of including an asset in a portfolio that is poorly diversified.
  3. It is directly proportional to its dimension. The Treynor ratio varies directly with the unit of time used, such that ratios based on annualized rates tend to be four times larger than ratios based on quarterly rates.
  4. It is less useful in comparing investments with returns that vary by skew and kurtosis, because beta does not capture higher moments.

5.3.5 The Sortino Ratio

A measure of risk-adjusted performance that tends to be used more in alternative investments than in traditional investments is the Sortino ratio. The Sortino ratio subtracts a benchmark return, rather than the riskless rate, from the asset's return in its numerator and uses downside standard deviation as the measure of risk in its denominator:

(5.12)  Sortino ratio = [E(Rp) − RTarget] / TSSD

where E(Rp) is the expected return, or mean return in practice, for portfolio p; RTarget is the user's target rate of return; and TSSD is the target semistandard deviation (or downside deviation), discussed earlier in the chapter.

As a semistandard deviation, the TSSD focuses on the downside deviations. As a target semistandard deviation, TSSD defines a downside deviation as the negative deviations relative to the target return, rather than a mean return or zero. Thus, the Sortino ratio uses the concept of a target rate of return in expressing both the return in the numerator and the risk in the denominator.

Even if the target return is set equal to the riskless rate, the Sortino ratio is not equal to the Sharpe ratio. Although they would share the same numerator, the denominator would be the same only when distributions were perfectly symmetrical and the mean return of the asset equaled the riskless rate. The point is that the emphasis of the Sortino ratio is the use of downside risk rather than the use of a target rate of return. To the extent that a return distribution is nonsymmetrical and the investor is focused on downside risk, the Sortino ratio can be useful as a performance indicator.
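A minimal sketch of Equation 5.12 follows; the return series and the 1% target are hypothetical, and the target semistandard deviation here divides by the full number of observations, which is one common convention (others divide by the count of downside observations only).

```python
import math

def target_semistd(returns, target):
    # Square only the deviations that fall below the target; divide by the
    # full sample size (an assumed convention for this sketch).
    downside_sq = [(r - target) ** 2 for r in returns if r < target]
    return math.sqrt(sum(downside_sq) / len(returns))

def sortino_ratio(returns, target):
    # Mean return in excess of the target, per unit of downside deviation.
    mean_return = sum(returns) / len(returns)
    return (mean_return - target) / target_semistd(returns, target)

# Hypothetical periodic returns and a 1% target rate.
returns = [0.04, -0.02, 0.06, 0.01, -0.05, 0.03]
print(sortino_ratio(returns, 0.01))
```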

5.3.6 The Information Ratio

The information ratio provides a sophisticated view of risk-adjusted performance. The information ratio has a numerator formed by the difference between the average return of a portfolio (or other asset) and its benchmark, and a denominator equal to its tracking error:

(5.13)  IR = [E(Rp) − RBenchmark] / TE

where E(Rp) is the expected or mean return for portfolio p, RBenchmark is the expected or mean return of the benchmark, and TE is the tracking error of the portfolio relative to its benchmark return.

Tracking error, which was discussed earlier in this chapter, may be approximately viewed as the typical amount by which a portfolio's return deviates from its benchmark. Technically speaking, tracking error is the standard deviation of the differences through time of the portfolio's return and the benchmark return.

The numerator is the average amount by which the portfolio exceeds its benchmark return (if positive). Thus, the information ratio is the amount of added return, if positive, that a portfolio generates relative to its benchmark for each percentage by which the portfolio's return typically deviates from its benchmark.

Like the Sharpe ratio, the information ratio is sensitive to whether it is computed using annualized returns or periodic (e.g., quarterly) returns. The information ratio is higher when the portfolio's average return is higher, and lower when the portfolio deviates from its benchmark by larger amounts. Accordingly, the use of the information ratio is an attempt to drive the portfolio toward investments that track the benchmark well but consistently outperform the benchmark.
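The computation can be sketched as follows; the five annual return pairs are hypothetical, and the tracking error is taken as the sample standard deviation of the return differences.

```python
import statistics

# Hypothetical annual returns for a portfolio and its benchmark.
portfolio = [0.08, 0.12, 0.05, 0.10, 0.07]
benchmark = [0.07, 0.10, 0.06, 0.09, 0.06]

# Active return each year: portfolio return minus benchmark return.
active = [p - b for p, b in zip(portfolio, benchmark)]

avg_active = statistics.mean(active)        # numerator of Equation 5.13
tracking_error = statistics.stdev(active)   # std. dev. of the differences

information_ratio = avg_active / tracking_error
print(information_ratio)
```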

5.3.7 Return on VaR

Value at risk (VaR) was detailed earlier in this chapter as a measure of potential risk for a specified time horizon and level of confidence. Return on VaR (RoVaR) is simply the expected or average return of an asset divided by a specified VaR (expressing VaR as a positive number):

(5.14)  RoVaR = E(Rp) / VaR

In cases in which VaR is a good summary measure of the risks being faced, RoVaR may be a useful metric. In such cases, the risks of the investment alternatives typically share similarly shaped return distributions that are well understood by the analysts using the ratio.
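A sketch under simple assumptions: VaR is estimated directly from a hypothetical sample of 20 periodic returns as the loss at the order statistic nearest the 5% tail, and RoVaR divides the mean return by that VaR (expressed as a positive number).

```python
def historical_var(returns, confidence=0.95):
    # VaR as a positive number: the loss at the (1 - confidence) tail,
    # taken here as the nearest order statistic of the sorted sample.
    ordered = sorted(returns)
    index = int(round((1 - confidence) * len(ordered)))
    return -ordered[index]

def return_on_var(returns, confidence=0.95):
    # Equation 5.14: mean return divided by the (positive) VaR.
    mean_return = sum(returns) / len(returns)
    return mean_return / historical_var(returns, confidence)

# Hypothetical sample of 20 periodic returns.
returns = [-0.08, -0.03, -0.01, 0.00, 0.01, 0.01, 0.02, 0.02, 0.02, 0.03,
           0.03, 0.03, 0.04, 0.04, 0.05, 0.05, 0.06, 0.06, 0.07, 0.08]
print(historical_var(returns), return_on_var(returns))
```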

5.4 Risk-Adjusted Return Measures

The previous section focused on ratio-based performance measures. This section discusses three performance measures that are not return-to-risk ratios. Other performance measures exist, and some firms use performance measures unique to their particular firm. In practice, a variety of performance measures should be viewed in a performance review, each of which is selected to view performance from a relevant perspective.

5.4.1 Jensen's Alpha

Jensen's alpha is based on the single-factor market model discussed in Chapter 6. In terms of expected returns, Jensen's alpha may be expressed as the difference between its expected return and the expected return of efficiently priced assets of similar risk. The return of efficiently priced assets of similar risk is usually specified using the single-factor market model, as shown in the following equation:

(5.15)  αp = E(Rp) − {Rf + βp[E(Rm) − Rf]}

The right-hand side expresses the alpha as the expected return of the portfolio in excess of the riskless rate and the required risk premium. Any return above the riskless rate and the required risk premium is alpha, which represents superior performance.

Jensen's alpha is a direct measure of the absolute amount by which an asset is estimated to outperform, if positive, the return on efficiently priced assets of equal systematic risk in a single-factor market model. It is tempting to describe the return in the context of the CAPM, but strictly speaking, no asset offers a nonzero alpha in a CAPM world, since all assets are priced efficiently. In practice, expected returns on the asset and the market, as well as the true beta of the asset, are unobservable. Thus, Jensen's alpha is typically estimated using historical data as the intercept (a) of the following regression equation adapted from Chapter 9.

(5.16)  Rt − Rf = a + b(Rmt − Rf) + et

where Rt is the return of the portfolio or asset in period t, Rmt is the return of the market portfolio in time t, a is the estimated intercept of the regression, b is the estimated slope coefficient of the regression, and et is the residual of the regression in time t. The error term et estimates the idiosyncratic return of the portfolio in time t, b is an estimate of the portfolio's beta, and a is an estimate of the portfolio's average abnormal or idiosyncratic return. Since the intercept, a, is estimated, it should be interpreted subject to levels of confidence.
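Estimating the intercept of Equation 5.16 can be sketched with a small ordinary least squares calculation; the return series below are constructed for illustration so that the true alpha and beta are known in advance.

```python
def jensens_alpha(asset_returns, market_returns, riskless_rate):
    """OLS estimates of a (alpha) and b (beta) in Equation 5.16."""
    y = [r - riskless_rate for r in asset_returns]   # asset excess returns
    x = [r - riskless_rate for r in market_returns]  # market excess returns
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    b = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))      # slope: beta estimate
    a = mean_y - b * mean_x                          # intercept: alpha estimate
    return a, b

# Illustrative series constructed so that alpha = 0.5% and beta = 1.5.
market = [0.01, 0.02, -0.01, 0.03, 0.00]
asset = [0.005 + 1.5 * m for m in market]
print(jensens_alpha(asset, market, riskless_rate=0.0))
```

In practice the estimated intercept would be reported with a standard error, consistent with the text's caution that it be interpreted subject to levels of confidence.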

5.4.2 M2 (M-Squared) Approach

The M2 approach, or M-squared approach, expresses the excess return of an investment after its risk has been normalized to equal the risk of the market portfolio. The first step is to leverage or deleverage the investment so that its risk matches the risk of the market portfolio. The superior return that the investment offers relative to the market when it has been leveraged or deleveraged to have the same volatility as the market portfolio is M2. A fund is leveraged to a higher level of risk when money is borrowed at the riskless rate and invested in the fund, and a fund is deleveraged when money is allocated to the riskless asset rather than invested in the fund.

Consider three funds with excess returns and volatilities as expressed in the second and third columns of Exhibit 5.2. Note that the three funds differ in volatility (column 3), so their returns cannot be directly compared.

Exhibit 5.2 Sample Computations of M2

(1)   (2)            (3)              (4)           (5)               (6)                   (7)                      (8)
Fund  Excess Return  Fund Volatility  Sharpe Ratio  Portfolio Weight  Portfolio Volatility  Portfolio Excess Return  Fund M2
A     3%             5%               .60           200%              10%                   6%                       6% + Rf
B     5%             10%              .50           100%              10%                   5%                       5% + Rf
C     6%             15%              .40           67%               10%                   4%                       4% + Rf

The Sharpe ratio in column 4 reveals that Fund A provides the best excess return per unit of standard deviation. The M2 approach shows Fund A's superior potential with a different metric in light of the opportunity provided by the market portfolio. Assuming that the volatility of the market portfolio is estimated to be 10%, the first step of the M2 approach is to leverage or deleverage each of the funds into a total portfolio that has the same volatility as the market portfolio, which is 10%. Columns 5, 6, and 7 indicate leveraging (Fund A) and deleveraging (Fund C) to create risk levels equal to that of the market. To invest in Fund A, which has a volatility of 5%, with a total volatility of 10%, a manager would use 2:1 leverage, effectively allocating a weight of +200% to Fund A and –100% to the riskless asset, as indicated in column 5. To invest in Fund B with a total volatility of 10%, the manager can simply allocate 100% of a portfolio to Fund B. Finally, to invest in Fund C with a total volatility of 10%, the manager allocates 67% of the portfolio to Fund C and the remaining 33% to the riskless asset. Using leverage and deleveraging, all three alternatives can be used to generate portfolios with the same expected volatility as the market, or 10%, as indicated in column 6. The excess returns of the portfolios, found by multiplying the excess returns of the funds by the weight of the fund in the portfolio, are shown in column 7.

The most attractive alternative, using Fund A with leverage, is the alternative with the highest excess return, since all three portfolios have the same volatility. The expected return of each portfolio is M2, which is shown in column 8 by adding the riskless rate to the excess return in column 7; M2 provides an estimate of the expected return that an investor can earn using a specified investment opportunity and taking a level of total risk equal to that of the market portfolio. Equation 5.17 provides the formula for M2:

(5.17)  M2 = Rf + [(σm/σp) × (E(Rp) − Rf)]

where Rf is the riskless rate, σm is the volatility of the market portfolio, σp is the volatility of the portfolio or asset for which M2 is being calculated, and E(Rp) is the mean or expected return of the portfolio.

It should be noted that there is an alternative formula for M2, sometimes called M2-alpha, which is slightly different from Equation 5.17. This alternative formula can be found in Modigliani and Modigliani's original paper and in subsequent analyses by other authors.1 However, this text focuses on the M2 formula in Equation 5.17, which is more consistent with the original work.

The formula for M2 is an expected return or, in the case of an estimation using sample data, the mean return. Specifically, it is an estimated expected return on a strategy that uses borrowing or lending to bring the total volatility of the position equal to the volatility of the market portfolio. The first term on the right-hand side of the formula for M2 is the riskless rate, the compensation for the time value of money. The term in brackets is a risk premium specific to the portfolio or fund being analyzed. The ratio inside the first set of parentheses is the leverage factor that brings the volatility of the portfolio to the same level as the volatility of the market. That leverage factor is multiplied by the excess return of the underlying fund to form the excess return of the leveraged position.
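The calculation can be sketched as follows, reproducing Fund A from Exhibit 5.2; the 2% riskless rate is an assumed value for illustration.

```python
def m_squared(mean_return, riskless_rate, fund_vol, market_vol):
    # Leverage factor that scales the fund's volatility to the market's.
    leverage = market_vol / fund_vol
    # Riskless rate plus the levered excess return (Equation 5.17).
    return riskless_rate + leverage * (mean_return - riskless_rate)

rf = 0.02  # assumed riskless rate for illustration
# Fund A: 3% excess return (so a 5% total return), 5% volatility;
# market volatility is 10%, so the fund is levered 2:1.
print(round(m_squared(rf + 0.03, rf, 0.05, 0.10), 4))  # Fund A's M2: 6% + Rf
```

With a 2:1 leverage factor, the 3% excess return doubles to 6%, matching column 7 of the exhibit, and M2 equals 6% plus the riskless rate, matching column 8.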

5.4.3 Average Tracking Error

An important concept in all investing is tracking error, discussed earlier in this chapter. Some sources use the term tracking error to refer generally to the differences through time between an investment's return and the return of its benchmark. Most applications, however, refer to tracking error as the standard deviation of those differences, and tracking error is most commonly viewed as a standard deviation. When tracking error is used in the former, more general sense, the term average tracking error refers to the average difference between an investment's return and its benchmark return. In other words, it is the numerator of the information ratio.

Review Questions

  1. What are the two main differences between the formula for variance and the formula for semivariance?

  2. What is the main difference between the formula for semistandard deviation and the formula for target semistandard deviation?

  3. Define tracking error and average tracking error.

  4. What is the difference between value at risk and conditional value at risk?

  5. Name the two primary approaches for estimating the volatility used in computing value at risk.

  6. What are the steps involved in directly estimating VaR from historical data rather than through a parametric technique?

  7. When is Monte Carlo analysis most appropriate as an estimation technique?

  8. What is the difference between the formulas for the Sharpe and Treynor ratios?

  9. Define return on VaR.

  10. Describe the intuition of Jensen's alpha.

Note
