2006 INTERTEK STUDY

The next study we discuss is based on survey responses and conversations with industry representatives in 2006.7 Although this predates the subprime mortgage crisis and the resulting impact on the performance of quantitative asset managers, the insights provided by this study are still useful. In all, managers at 38 asset management firms managing a total of $4.3 trillion in equities participated in the study. Participants included individuals responsible for quantitative equity management and quantitative equity research at large- and medium-sized firms in North America and Europe.8 Sixty-three percent of the participating firms were among the largest asset managers in their respective countries; they clearly represented the way a large part of the industry was going with respect to the use of quantitative methods in equity portfolio management.9

The findings of the 2006 study suggested that the skepticism about the future of quantitative management expressed at the end of the 1990s had given way by 2006 and that quantitative methods were playing a large role in equity portfolio management. Of the 38 survey participants, 11 (29%) reported that more than 75% of their equity assets were being managed quantitatively. These firms spanned a wide spectrum, with equity assets under management ranging from $6.5 billion to over $650 billion. Another 22 firms (58%) reported that they had some equities under quantitative management, though for 15 of these 22 firms the percentage of equities under quantitative management was less than 25%—often under 5%—of total equities under management. Five of the 38 participants in the survey (13%) reported no equities under quantitative management.

Relative to the period 2004–2005, the amount of equities under quantitative management was reported to have grown at most firms participating in the survey (84%). One reason given by respondents to explain the growth in equity assets under quantitative management was flows into existing quantitative funds. A source at a large U.S. asset management firm with more than half of its equities under quantitative management said in 2006, "The firm has three distinct equity products: value, growth, and quant. Quant is the biggest and is growing the fastest."

According to survey respondents, the most important factor contributing to a wider use of quantitative methods in equity portfolio management was the positive result obtained with these methods. Half of the participants rated positive results as the single most important factor contributing to the widespread use of quantitative methods. Other factors contributing to a wider use of quantitative methods in equity portfolio management were, in order of the importance attributed to them by participants, (1) the computational power now available on the desktop, (2) more and better data, and (3) the availability of third-party analytical software and visualization tools.

Survey participants identified the prevailing in-house culture as the most important factor holding back a wider use of quantitative methods (this evaluation obviously does not hold for firms that can be described as quantitative): more than one third (10/27) of the respondents at firms other than quant-oriented ones considered this the major blocking factor. This positive evaluation of models in equity portfolio management in 2006 was in contrast with the skepticism of some 10 years earlier. A number of changes had occurred. First, expectations at the time of the study had become more realistic. In the 1980s and 1990s, traders were experimenting with methodologies from advanced science in the hope of making huge excess returns. The experience of the prior 10 years had shown that models were capable of delivering, but that their performance had to be compatible with a well-functioning market.

More realistic expectations had brought more perseverance in model testing and design and had favored the adoption of intrinsically safer models. Funds using hundredfold leverage had become unpalatable following the collapse of LTCM (Long-Term Capital Management). This, per se, had reduced the number of headline failures and had a beneficial impact on the perception of performance results. We can say that models worked better in 2006 because model risk had been reduced: simpler, more robust models delivered what was expected. Other technical reasons that explained improved model performance included a manifold increase in computing power and more and better data. By 2006, modelers had available on their desktops computing power that, at the end of the 1980s, was available only from multimillion-dollar supercomputers. Cleaner, more complete data, including intraday data and data on corporate actions and dividends, had become obtainable. In addition, investment firms (and their institutional clients) had learned how to use models throughout the investment management process. Models had become part of an articulated process that, especially in the case of institutional investors, involved satisfying a number of different objectives, such as superior information ratios.

Changing Role for Models in Equity Portfolio Management

The 2006 study revealed that quantitative models were now used in active management to find sources of excess returns (i.e., alphas), either relative to a benchmark or absolute. This was a considerable change with respect to the 2003 Intertek European study where quantitative models were reported as being used primarily to manage risk and to select parsimonious portfolios for passive management.

Another finding of the study was the growing amount of funds managed automatically by computer programs. The once futuristic vision of machines running funds automatically, without the intervention of a portfolio manager, was becoming a reality on a large scale: 55% (21/38) of the respondents reported that at least part of their equity assets were being managed automatically with quantitative methods; another three planned to automate at least a portion of their equity portfolios within the next 12 months. The growing automation of the equity investment process suggests that there was no longer a missing link in the technology chain that leads to automatic quantitative management. From return forecasting to portfolio formation and optimization, all the needed elements were in place. Until recently, optimization had represented the missing technology link in the automation of portfolio engineering: considered too brittle to be safely deployed, it was eschewed by many firms, which limited the use of modeling to stock ranking or risk control functions. Advances in robust estimation methodologies and in optimization now allow an asset manager to construct portfolios of hundreds of stocks chosen from universes of thousands of stocks with little or no human intervention beyond supervising the models.

Modeling Methodologies and the Industry's Evaluation

At the end of the 1980s, academics and researchers at specialized quant boutiques experimented with many sophisticated modeling methodologies including chaos theory, fractals and multifractals, adaptive programming, learning theory, complexity theory, complex nonlinear stochastic models, data mining, and artificial intelligence. Most of these efforts failed to live up to expectations. Perhaps expectations were too high. Or perhaps the resources or commitment required were lacking. Emanuel Derman provides a lucid analysis of the difficulties that a quantitative analyst has to overcome. As he observed, though modern quantitative finance uses some of the techniques of physics, a wide gap remains between the two disciplines.10

The modeling landscape revealed by the 2006 study was simpler and more uniform. Regression analysis and momentum modeling were the most widely used techniques: respectively, 100% and 78% of the survey respondents said that these techniques were being used at their firms. With respect to the regression models in use, the survey suggests that they had undergone a substantial change since the first multifactor models, such as those based on the Arbitrage Pricing Theory (APT), were introduced. Classical multifactor models of this kind are static models embodied in a linear regression of returns on factors observed at the same time. Static models are nonetheless forecasting models insofar as the factors at time t are predictors of returns at time t + 1. In these static models, individual return processes might exhibit zero autocorrelation but still be forecastable from other variables. Predictors might include financial and macroeconomic factors as well as company-specific parameters such as financial ratios. Predictors might also include human judgment, for example, analyst estimates, or technical factors that capture phenomena such as momentum. A source at a quant shop using regression to forecast returns said,

Regression on factors is the foundation of our model building. Ratios derived from financial statements serve as one of the most important components for predicting future stock returns. We use these ratios extensively in our bottom-up equity model and categorize them into five general categories: operating efficiency, financial strength, earnings quality (accruals), capital expenditures, and external financing activities.
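
To illustrate the structure of such a static forecasting model, the following is a minimal sketch, not taken from the study, in which returns at time t + 1 are regressed on factor values observed at time t; the data, factor names, and coefficients are purely illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative only: random data standing in for a cross section of stocks.
rng = np.random.default_rng(0)
n_obs = 500
factors_t = rng.normal(size=(n_obs, 3))     # hypothetical predictors observed at time t
true_beta = np.array([0.02, -0.01, 0.015])  # assumed premia used only to simulate data
returns_t1 = factors_t @ true_beta + rng.normal(scale=0.05, size=n_obs)  # returns at t + 1

# Static model: linear regression of returns at t + 1 on factors observed at t.
X = sm.add_constant(factors_t)
model = sm.OLS(returns_t1, X).fit()
print(model.params)    # estimated intercept and factor premia
print(model.rsquared)  # in-sample explanatory power

# Forecast: apply the estimated coefficients to the most recent factor observations.
latest_factors = sm.add_constant(rng.normal(size=(10, 3)), has_constant="add")
forecasts = model.predict(latest_factors)
```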

Momentum and reversals were the second most widely diffused modeling technique among survey participants. In general, momentum and reversals were being used as a strategy, not as a model of asset returns. Momentum strategies form portfolios by selecting the stocks with the highest or lowest returns, where returns are measured over specific time windows. Survey participants gave these strategies overall good marks but noted that (1) they do not always perform well; (2) they can result in high turnover (though some were using constraints or penalties to deal with this problem); and (3) identifying the timing of reversals is tricky.
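
As a rough illustration of this kind of strategy, the sketch below ranks stocks by their trailing return over a formation window and forms an equally weighted long-short portfolio of past winners and losers; the simulated prices, window lengths, and quantile cutoff are all assumptions made for the example.

```python
import numpy as np
import pandas as pd

# Illustrative price panel (dates x tickers) generated as random walks.
rng = np.random.default_rng(1)
dates = pd.bdate_range("2004-01-01", periods=300)
tickers = [f"S{i:03d}" for i in range(100)]
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0, 0.01, (len(dates), len(tickers))), axis=0)),
    index=dates, columns=tickers,
)

formation_window = 126   # roughly six months of trading days
skip = 5                 # skip the most recent week to avoid short-term reversal
top_quantile = 0.1

# Trailing return over the formation window, excluding the most recent days.
past_ret = prices.shift(skip).pct_change(formation_window).iloc[-1]

# Long the past winners, short the past losers, equally weighted within each leg.
winners = past_ret.nlargest(int(top_quantile * len(tickers))).index
losers = past_ret.nsmallest(int(top_quantile * len(tickers))).index
weights = pd.Series(0.0, index=tickers)
weights[winners] = 1.0 / len(winners)
weights[losers] = -1.0 / len(losers)
```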

Momentum in the U.S. market was first reported in 1993 by Jegadeesh and Titman.11 Nine years later, they confirmed that momentum had continued to exist in the U.S. market during the 1990s.12 Two years after that, Karolyi and Kho examined different models for explaining momentum and concluded that no random walk or autoregressive model is able to explain the magnitude of momentum empirically found;13 they suggested that models with time-varying expected returns come closer to explaining the empirical magnitude of momentum. Momentum and reversals are presently explained in the context of local models updated in real time. For example, momentum as described in the original Jegadeesh and Titman study is based on the fact that stock prices can be represented as independent random walks when considering periods on the order of one year. However, it is fair to say that there is no complete agreement on an econometrics of asset returns that would justify momentum and reversals as stylized facts on a global scale rather than as local models. It would be beneficial to know more about the econometrics of asset returns that sustains momentum and reversals.

Other modeling methods that were widely used by participants in the 2006 study included cash flow analysis and behavioral modeling. Seventeen of the 36 participating firms said that they modeled cash flows; behavioral modeling was reported as being used by 16 of the 36 participating firms.14 Behavioral modeling was considered to play an important role in asset return predictability: 44% of the survey respondents said that they used it to try to capture phenomena such as departures from rationality on the part of investors (e.g., belief persistence), patterns in analyst estimates, and corporate executive investment/disinvestment behavior. Behavioral finance is related to momentum in that the latter is often attributed to various phenomena of persistence in analyst estimates and investor perceptions. A source at a large investment firm that had incorporated behavioral modeling into its active equity strategies commented,

The attraction of behavioral finance is now much stronger than it was just five years ago. Everyone now acknowledges that markets are not efficient, that there are behavioral anomalies. In the past, there was the theory that was saying that markets are efficient while market participants such as the proprietary trading desks ignored the theory and tried to profit from the anomalies. We are now seeing a fusion of theory and practice.

As for other methodologies used in return forecasting, sources cited nonlinear methods and cointegration. Nonlinear methods were being used to model return processes at 19% (7/36) of the responding firms. The nonlinear method most widely used among survey participants was classification and regression trees (CART). The advantage of CART is its simplicity and the fact that its results can be cast in an intuitive framework. A source in the survey that reported using CART as a central part of the portfolio construction process in enhanced index and longer-term value-based portfolios said,

CART compresses a large volume of data into a form that identifies its essential characteristics, so the output is easy to understand. CART is nonparametric—which means that it can handle an infinitely wide range of statistical distributions—and nonlinear—so as a variable selection technique it is particularly good at handling higher-order interactions between variables.
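
As an illustration of how a regression tree of this kind might be used for return forecasting and variable selection, the sketch below fits a shallow tree to hypothetical fundamental scores; the predictors, data, and tree settings are assumptions, not the source firm's actual setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Illustrative data: hypothetical fundamental scores as predictors of next-period return.
rng = np.random.default_rng(2)
n_stocks = 1000
X = rng.normal(size=(n_stocks, 4))   # e.g., value, quality, accruals, momentum scores
y = 0.02 * X[:, 0] - 0.01 * X[:, 0] * X[:, 2] + rng.normal(scale=0.05, size=n_stocks)

# A shallow regression tree: nonparametric, captures interactions, easy to read.
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=50)
tree.fit(X, y)

# The fitted tree compresses the data into a small set of interpretable split rules.
print(export_text(tree, feature_names=["value", "quality", "accruals", "momentum"]))
print(tree.feature_importances_)     # crude variable-selection signal
```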

Only 11% (4/36) of the respondents reported using nonlinear regime-shifting models; at most firms, judgment was being used to assess regime change. Participants identified the difficulty of detecting the precise timing of a regime switch and the very long time series required to estimate shifts as obstacles to modeling regime shifts. A survey participant at a firm where regime-shifting models had been experimented with commented,

Everyone knows that returns are conditioned by market regimes, but the potential for overfitting when implementing regime-switching models is great. If you could go back with fifty years of data—but we have only some ten years of data and this is not enough to build a decent model.

Cointegration was being used by 19% (7/36) of the respondents. Cointegration models capture both the short-term dynamics (direction) and the long-run equilibrium (fair value). A perceived plus of cointegration is the transparency it provides: the models are based on economic and finance theory and are estimated from economic data.
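
A minimal sketch of how a cointegration-based model separates the long-run fair-value relation from short-term deviations is shown below, using an Engle-Granger test on a simulated pair of prices; the data and the signal rule are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

# Illustrative pair: two price series sharing a common stochastic trend.
rng = np.random.default_rng(3)
common = np.cumsum(rng.normal(size=1000))
p1 = common + rng.normal(scale=0.5, size=1000)
p2 = 0.8 * common + rng.normal(scale=0.5, size=1000)

# Engle-Granger cointegration test: a low p-value suggests a long-run equilibrium.
t_stat, p_value, _ = coint(p1, p2)

# Long-run (fair-value) relation estimated by regressing one price on the other.
hedge = sm.OLS(p1, sm.add_constant(p2)).fit()
spread = p1 - hedge.predict(sm.add_constant(p2))   # deviation from fair value

# Short-term signal: act when the spread is far from its historical mean.
zscore = (spread - spread.mean()) / spread.std()
```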

Optimization

Another area where much change was revealed by the 2006 study was optimization. According to sources, optimization was being performed at 92% (33/36) of the participating firms, albeit in some cases only rarely. Mean-variance optimization was the most widely used technique among survey participants: it was being used by 83% (30/36) of the respondents. It was followed by utility optimization (42%, or 15/36) and robust optimization (25%, or 9/36). Only one firm mentioned that it was using stochastic optimization.

The wider use of optimization was a significant development compared to the 2003 study, when many sources had reported that they eschewed optimization: the difficulty of identifying forecasting errors was behind the then widely held opinion that optimization techniques were too brittle and prone to error maximization. The greater use of optimization was attributed to advances in large-scale optimization coupled with the ability to include constraints and with robust methods for both estimation and optimization. This result is significant because portfolio formation strategies rely on optimization. With optimization feasible, the door was open to a fully automated investment process. In this context, it is noteworthy that 55% of the survey respondents in the 2006 study reported that at least a portion of their equity assets was being managed by a fully automated process.

Optimization is the engineering part of portfolio construction. Most portfolio construction problems can be cast in an optimization framework, where optimization is applied to obtain the desired optimal risk-return profile. Optimization is the technology behind the current offering of products with specially engineered returns, such as guaranteed returns. However, the offering of products with particular risk-return profiles requires optimization methodologies that go well beyond classical mean-variance optimization. In particular, one must be able to (1) work with real-world utility functions and (2) apply constraints to the optimization process.
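
As a simple illustration of constrained mean-variance optimization, the sketch below maximizes a mean-variance utility subject to a full-investment constraint, a long-only restriction, and a position cap; the expected returns, covariance matrix, risk aversion, and cap are all assumptions chosen for the example, not a recipe for the engineered products mentioned above.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative inputs: expected returns and covariance for 20 assets (pure assumptions).
rng = np.random.default_rng(4)
n_assets = 20
mu = rng.normal(0.05, 0.02, n_assets)              # expected returns
A = rng.normal(size=(n_assets, n_assets))
cov = A @ A.T / 100 + np.eye(n_assets) * 0.01      # positive-definite covariance matrix
risk_aversion = 5.0

# Mean-variance objective: maximize mu'w - (lambda/2) w'Sigma w by minimizing its negative.
def objective(w):
    return -(mu @ w) + 0.5 * risk_aversion * (w @ cov @ w)

constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]  # fully invested
bounds = [(0.0, 0.10)] * n_assets                               # long only, 10% position cap

result = minimize(objective, x0=np.full(n_assets, 1.0 / n_assets),
                  bounds=bounds, constraints=constraints, method="SLSQP")
weights = result.x   # optimal portfolio weights under the stated constraints
```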

Challenges

The growing diffusion of models is not without challenges. The 2006 survey participants noted three: (1) increasing difficulty in differentiating products; (2) difficulty in marketing quant funds, especially to noninstitutional investors; and (3) performance decay.

Quantitative equity management had become so widespread that a source at a long-established quantitative investment firm remarked:

There is now a lot of competition from new firms entering the space [of quantitative investment management]. The challenge is to continue to distinguish ourselves from competition in the minds of clients.

With quantitative funds based on the same methodologies and using the same data, the risk is that firms construct products with the same risk-return profile. The head of active equities at a large quantitative firm with more than a decade of experience in quantitative management remarked in the survey, "Everyone is using the same data and reading the same articles: It's tough to differentiate."

While sources in the survey reported that client demand was behind the growth of (new) pure quantitative funds, some mentioned that quantitative funds could be something of a hard sell. A source at a medium-sized asset management firm serving both institutional clients and high-net-worth individuals said:

Though clearly the trend towards quantitative funds is up, quant approaches remain difficult to sell to private clients: They remain too complex to explain, there are too few stories to tell, and they often have low alpha. Private clients do not care about high information ratios.

Markets were also affecting the performance of quantitative strategies. A 2006 report by the Bank for International Settlements noted that this was a period of historically low volatility. What was exceptional about this period, the report observed, was the simultaneous drop in volatility in all variables: stock returns, bond spreads, rates, and so on. While the role of models in reducing volatility is unclear, what is clear is that models immediately translate this situation into rather uniform behavior. Quantitative funds try to differentiate themselves either by finding new, unexploited sources of return forecastability, for example, novel ways of looking at financial statements, or by using optimization creatively to engineer special risk-return profiles.

A potentially more serious problem is performance decay. Survey participants remarked that model performance was not stable. Firms were tackling these problems in two ways. First, they were protecting themselves from model breakdown with model-risk mitigation techniques, namely by averaging the results obtained with different models. It is unlikely that all models will break down in the same way at the same moment, so averaging across different models allows asset managers to diversify model risk. Second, there was an ongoing quest for new factors, new predictors, and new aggregations of factors and predictors. In the long run, however, something more substantial might be required.
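
A stylized sketch of the model-averaging idea follows: forecasts from several hypothetical models for the same stocks are combined, either equally weighted or weighted by assumed recent accuracy, so that no single model's breakdown dominates the result; all numbers are illustrative.

```python
import numpy as np

# Illustrative: next-period return forecasts for the same four stocks from three
# hypothetical models (e.g., a factor regression, a momentum ranking, a CART model).
forecasts = {
    "factor_model": np.array([0.012, -0.004, 0.021, 0.003]),
    "momentum":     np.array([0.018,  0.001, 0.015, -0.002]),
    "cart":         np.array([0.008, -0.006, 0.024, 0.005]),
}
stacked = np.vstack(list(forecasts.values()))

# Equal-weighted combination: if models break down at different times,
# the averaged forecast is less exposed to any single model's failure.
combined = stacked.mean(axis=0)

# Alternatively, weight models by (hypothetical) recent out-of-sample accuracy.
model_weights = np.array([0.5, 0.3, 0.2])
weighted = model_weights @ stacked
```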
