2007 INTERTEK STUDY

The 2007 Intertek study, sponsored by the Research Foundation of the CFA Institute (now the CFA Institute Research Foundation), is based on conversations with asset managers, investment consultants, and fund-rating agencies, as well as survey responses from 31 asset managers in the United States and Europe.15 In total, 12 asset managers and eight consultants and fund-rating agencies were interviewed, and 31 managers with a total of $2.2 trillion in equities under management participated in the survey. Half of the participating firms were based in the United States, and half were among the largest asset managers in their countries. Survey participants included chief investment officers of equities and heads of quantitative management and/or quantitative research.

A major question on which this study focused was whether the diffusion of quantitative strategies was making markets more efficient, thereby reducing profit opportunities. The events of the summer of 2007, when many quantitatively managed funds realized large losses, brought an immediacy to the question. The classical view of financial markets holds that market speculators make markets efficient, hence the absence of profit opportunities after compensating for risk. This view formed the basis of academic thinking for several decades, starting in the 1960s. Practitioners, however, had long held the more pragmatic view that a market formed by fallible human agents (as market speculators also are) offers profit opportunities due to the many small residual imperfections that ultimately result in delayed or distorted responses to news.

A summary of the findings of this study is provided next.

Are Model-Driven Investment Strategies Impacting Market Efficiency and Price Processes?

The empirical question of the changing nature of markets is now receiving much academic attention. For example, using empirical data from 1927 to 2005, Hwang and Rubesam16 argued that momentum phenomena disappeared during the period 2000–2005, while Figelman,17 analyzing the S&P 500 over the period 1970–2004, found new evidence of momentum and reversal phenomena previously not described. Khandani and Lo18 showed how a mean-reversion strategy that they used to analyze market behavior lost profitability in the 12-year period from 1995 to 2007.
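
To make the type of strategy concrete, the sketch below (in Python, with purely hypothetical numbers) implements a simple short-term contrarian rule of the kind Khandani and Lo study: each day it buys the previous day's losers and shorts the previous day's winners in proportion to their deviation from the cross-sectional mean return. It is an illustrative reconstruction, not their exact specification.

```python
import numpy as np

def contrarian_weights(prev_returns: np.ndarray) -> np.ndarray:
    """Short-term mean-reversion weights of the kind Khandani and Lo analyze:
    buy yesterday's losers, short yesterday's winners, in proportion to each
    stock's deviation from the cross-sectional mean return. Weights sum to zero."""
    n = prev_returns.size
    return -(prev_returns - prev_returns.mean()) / n

# Hypothetical day t-1 returns for five stocks, and the realized day-t returns.
r_prev = np.array([0.012, -0.004, 0.020, -0.015, 0.001])
r_next = np.array([0.003, 0.001, -0.010, 0.006, 0.000])
w = contrarian_weights(r_prev)
print(float(w @ r_next))  # one-day return of the long-short contrarian portfolio
```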

Intuition suggests that models will have an impact on price processes but whether models will make markets more efficient or less efficient will depend on the type of models widely adopted. Consider that there are two categories of models, those based on fundamentals and those based on the analysis of time series of past prices and returns. Models based on fundamentals make forecasts based on fundamental characteristics of firms and, at least in principle, tend to make markets more efficient. Models based on time series of prices and returns are subject to self-referentiality and might actually lead to mispricings. A source at a large financial firm that has both fundamental and quant processes said:

The impact of models on markets and price processes is asymmetrical. [Technical] model-driven strategies have a less good impact than fundamental-driven strategies as the former are often based on trend following.

Another source commented:

Overall quants have brought greater efficiency to the market, but there are poor models out there that people get sucked into. Take momentum. I believe in earnings momentum, not in price momentum: It is a fool buying under the assumption that a bigger fool will buy in the future. Anyone who uses price momentum assumes that there will always be someone to take the asset off your hands—a fool's theory. Studies have shown how it is possible to get into a momentum-type market in which asset prices get bid up, with everyone on the collective belief wagon.

The question of how models impact the markets—making them more or less efficient—depends on the population of specific models. As long as models based on past time series of prices and returns (i.e., models that are trend followers) are being used, it will not be possible to assume that models make markets more efficient. Consider that it is not only a question of how models compete with each other but also how models react to exogenous events and how models themselves evolve. For example, a prolonged period of growth will produce a breed of models different from models used in low-growth periods.

Performance Issues

When the 2006 Intertek study on equity portfolio modeling was conducted in early 2006, quantitative managers were very heady about performance. By mid-2007, much of that headiness was gone. By July–August 2007, there was much perplexity.

Many participants in the 2007 Intertek study attributed the recent poor performance of many quant equity funds to structural changes in the market. A source at a large financial firm with both fundamental and quantitative processes said:

The problem with the performance of quant funds [since 2006] is that there was rotation in the marketplace. Most quants have a strong value bias so they do better in a value market. The period 1998–1999 was not so good for quants as it was a growth market; in 2001–2005, we had a value market so value-tilted styles such as the quants were doing very well. In 2006, we were back to a growth market. In addition, in 2007, spreads compressed. The edge quants had has eroded.

One might conclude that if markets are cyclical, quant outperformance will also be cyclical. A leading investment consultant who participated in the survey remarked:

What is most successful in terms of producing returns—quant or fundamental—is highly contextual: there is no best process, quant or fundamental. Quants are looking for an earnings-quality component that has dissipated in time. I hate to say it, but any manager has to have the wind behind its strategies, favoring the factors.

Speaking in August 2007, the head of active quantitative research at a large international firm said:

It has been challenging since the beginning of the year. The problem is that fundamental quants are stressing some quality—be it value or growth—but at the beginning of the year there was a lot of activity of hedge funds, much junk value, much froth. In addition, there was a lot of value-growth style rotation, which is typical when there is macro insecurity and interest rates go up and down. The growth factor is better when rates are down, the value factor better when rates are up. Fundamental quants could not get a consistent exposure to factors they wanted to be exposed to.

Another source said, “We tried to be balanced value-growth but the biggest danger is rotation risk. One needs a longer-term view to get through market cycles.” The CIO of equities at a large asset management firm added, “Growth and value markets are cyclical and it is hard to get the timing right.”

The problem of style rotation (e.g., value versus growth) is part of the global problem of adapting models to changing market conditions. Value and growth represent two sets of factors, both of which are captured, for example, in the Fama-French three-factor model.19 But arguably there are many more factors. So factor rotation is more than just a question of value and growth markets. Other factors, such as momentum, are subject to the same problem; that is to say, one factor prevails in one market situation, loses importance in another, and is replaced by yet another factor or set of factors.
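
For reference, the three-factor model expresses a stock's excess return in terms of market, size, and value factors (the standard formulation, restated here for convenience):

\[ R_i - R_f = \alpha_i + \beta_i\,(R_M - R_f) + s_i\,\mathrm{SMB} + h_i\,\mathrm{HML} + \varepsilon_i \]

where SMB (small minus big) and HML (high minus low book-to-market) are the size and value factor returns. Style or factor rotation shows up as time variation in the realized premia on such factors.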

Other reasons were cited to explain why the performance of quantitative products as a group had been down since 2006. Among these is the fact that there were now more quantitative managers using the same data and similar models and implementing similar strategies. A source at a firm that has both quant and fundamental processes said:

Why is performance down? One reason is because many more people are using quant today than three, five years ago. Ten years ago the obstacles to entry were higher: data were more difficult to obtain, models were proprietary. Now we have third-party suppliers of data feeds, analytics, and backtesting capability.

A consultant concurred:

The next 12 to 24 months will be tough for quants for several reasons. One problem is…the ease with which people can now buy and manipulate data. The problem is too many people are running similar models so performance decays and it becomes hard to stay ahead. Performance is a genuine concern.

Still another source said:

Quant performance depends on cycles and the secular trend but success breeds its own problems. By some estimates there are $4 trillion in quantitative equity management if we include passive, active, hedge funds, and proprietary desks. There is a downside to the success of quants. Because quants have been so successful, if a proprietary desk or a hedge fund needs to get out of a risk, they can't. Then you get trampled on as others have more to sell than you have to buy. The business is more erratic because of the sheer size and needs of proprietary desks and hedge funds whose clients hold 6 to 12 months against six years for asset managers.

However, not all sources agreed that the use of the same data or similar models by quantitative managers entails a loss of performance. One source said:

Though all quants use the same data sources, I believe that there is a difference in models and in signals. There are details behind the signals and in how you put them together. Portfolio construction is one very big thing.

Another source added:

All quants use similar data but even minor differences can lead to nontrivial changes in valuation. If you have 15 pieces of information, different sums are not trivial. Plus if you combine small differences in analytics and optimization, the end result can be large differences. There is not one metric but many metrics and all are noisy.

Investment consultants identified risk management as among the biggest pluses for a quantitative process. According to one source:

Quantitative managers have a much greater awareness of risk. They are attuned to risk in relation to the benchmark as well as to systemic risk. Fundamental managers are often not aware of concentration in, for example, factors or exposure.

In view of the performance issues, survey participants were asked if they believed that quantitative managers were finding it increasingly difficult to generate excess returns as market inefficiencies were exploited. Just over half agreed, while 32% disagreed and 16% expressed no opinion. When the question was turned around, 73% of the survey participants agreed that, though profit opportunities would not disappear, quantitative managers would find it increasingly hard to exploit them. One source remarked:

Performance is getting harder to wring out not because everyone is using the same data and similar models, but because markets are more efficient. So we will see Sharpe ratios shrink for active returns. Managers will have to use more leverage to get returns. The problem is more acute for quant managers as all quant positions are highly correlated as they all use book to price; fundamental managers, on the other hand, differ on the evaluation of future returns.

When asked what market conditions were posing the most serious challenge to a quantitative approach to equity portfolio management, survey respondents, rating importance on a scale from one to five, ranked rising correlation levels, style rotation, and insufficient liquidity as the most serious. Other market conditions rated important were a fundamental market shift and high or low cross-sector volatility. Rated less important were the dissipation of earnings and nontrending markets.

In their paper on the likely causes of the summer 2007 events, Khandani and Lo20 note the sharp rise in correlations over the period 1998–2007. They observe that this rise in correlations reflects a much higher level of interdependence in financial markets. This interdependence is one of the factors responsible for the contagion from the subprime mortgage crisis to the equity markets in July–August 2007. When problems began to affect equity markets, the liquidity crisis started. Note that liquidity is a word that assumes different meanings in different contexts. In the study, liquidity refers to the possibility of finding buyers and thus to the possibility of deleveraging without sustaining heavy losses. One CIO commented:

Everyone in the quant industry is using the same factors [thus creating highly correlated portfolios prone to severe contagion effects]. When you need to unwind, there is no one there to take the trade: Quants are all children of Fama and French. Lots of people are using earnings revision models.
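
As a back-of-the-envelope illustration of the correlation measure behind these remarks, the sketch below computes the average pairwise correlation of a cross-section of stock returns. The synthetic data and the equal-weighted average are illustrative assumptions, not the statistic Khandani and Lo actually report.

```python
import numpy as np

def average_pairwise_correlation(returns: np.ndarray) -> float:
    """Mean of the off-diagonal entries of the sample correlation matrix.
    Tracked over rolling windows, a rising value is the kind of growing
    interdependence discussed above. `returns` has shape (n_days, n_stocks)."""
    corr = np.corrcoef(returns, rowvar=False)
    n = corr.shape[0]
    return float(corr[~np.eye(n, dtype=bool)].mean())

# Hypothetical: 250 days of returns for 50 stocks sharing a common market factor.
rng = np.random.default_rng(7)
market = rng.normal(0.0, 0.01, size=(250, 1))
returns = 0.8 * market + rng.normal(0.0, 0.01, size=(250, 50))
print(average_pairwise_correlation(returns))
```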

Another source remarked, “Because quants have been so successful, if you need to get out of a risk for whatever reason, you can't get out. This leads to a liquidity sell-off.”

Specific to the recent market turmoil, participants identified the unwinding of long-short positions by hedge funds as by far the most important factor contributing to the losses incurred by some quant equity funds in the summer of 2007. One source said wryly, "Everyone is blaming the quants; they should be blaming the leverage."

Improving Performance

As it was becoming increasingly difficult to deliver excess returns, many quant managers had turned to using leverage in an attempt to boost performance—a strategy most sources agreed was quite risky. The events of the summer of 2007 were to prove them right. Given the performance issues, survey participants were asked what they were likely to do to try to improve performance.

The search to identify new and unique factors was the most frequently cited strategy; complementary to it was the intention to employ new models. A CIO of equities said:

Through the crisis of July–August 2007, quant managers have learned which of their factors are unique and will be focusing on what is unique. There will be a drive towards using more proprietary models, doing more unique conceptual work. But it will be hard to get away from fundamental concepts: you want to hold companies that are doing well and do not want to pay too much for them.

As for the need to employ new models, the global head of quantitative strategies at a large financial group remarked:

Regression is the art of today's tool kit. To get better performance, we will have to enlarge the tool kit and add information and dynamic and static models. People are always changing things; maybe we will be changing things just a bit quicker.

Other strategies to improve performance given by the 2007 survey participants included attempts to diversify sources of business information and data. As one investment consultant said:

All quant managers rely on the same set of data but one cannot rely on the same data and have an analytical edge; it is a tough sell. Quant managers need an informational edge, information no one else has or uses. It might be coming out of academia or might be information in the footnotes of balance sheet data or other information in the marketplace that no one else is using.

Just over 60% of the survey participants agreed that, given that everyone is using the same data and similar models, quantitative managers need a proprietary informational edge to outperform. Sources mentioned that some hedge fund managers now have people in-house on the phone, doing proprietary market research on firms.

Opinions among survey respondents diverged as to the benefits to be derived from using high-frequency (up to tick-by-tick) data. Thirty-eight percent of the participants believed that high-frequency data can give an informational edge in equity portfolio management while 27% disagreed and 35% expressed no opinion. It is true that there was still only limited experience with using high-frequency data in equity portfolio management at the time of the survey. One source remarked, “Asset managers now have more frequent updates, what was once monthly is now daily with services such as WorldScope, Compustat, Market QA, Bloomberg, or Factset. But the use of intraday data is still limited to the trading desk.”

Fund Flows

Estimates of how much was under management in active quant strategies in 2007 vary from a few hundred billion dollars to over $1 trillion. In a study that compared cumulative net flows in U.S. large-cap quantitative and "other" products as a percentage of total assets during the 36-month period that coincided with the 2001–2005 value market, Casey, Quirk and Associates21 found that assets grew 25% at quantitative funds and remained almost flat for other funds. A coauthor of that study commented:

What we have seen in our studies, which looked at U.S. large-cap funds, is that since 2004 investors have withdrawn money from the U.S. large-cap segment under fundamental managers but active quants have held on to their assets or seen them go up slightly.

Addressing the question of net flows into quantitatively managed equity funds before July–August 2007, a source at a leading investment consultancy said:

There has been secular growth for quant equity funds over the past 20 or so years, first into passive quant and, over the past 12–36 months, into active quant given their success in the past value market. Right now there is about an 80/20 market split between fundamental and active quant management. If active quants can continue their strong performance in a growth market that I think we are now in, I can see the percentage shift over the next three years to 75/25 with active quant gaining a few points every year.

Despite the high-profile problems at some long-short quantitative managed funds during the summer of 2007, 63% of the respondents indicated that they were optimistic that, overall, quantitatively managed equity funds will continue to increase their market share relative to traditionally managed funds, as more firms introduce quantitative products and exchange-traded funds (ETFs) give the retail investor access to active quant products. However, when the question was reformulated, that optimism was somewhat dampened. Thirty-nine percent of the survey participants agreed that overall quantitatively managed funds would not be able to increase their market share relative to traditionally managed funds for the year 2007 while 42% disagreed.

Many consultants who were interviewed for the study just before the July–August 2007 market turmoil were skeptical that quantitative managers could continue their strong performance. These sources cited performance problems dating back to the year 2006.

Lipper tracks flows of quantitative and nonquantitative funds in four equity universes: large cap, enhanced index funds, market neutral, and long-short funds. The Lipper data covering the performance of quantitatively and nonquantitatively driven funds in the three-year period 2005–2007 showed that quant funds underperformed in 2007 in all categories except large cap—a reversal of performance from 2005 and 2006 when quant managers were outperforming nonquantitative managers in all four categories. However, Lipper data are neither risk adjusted nor fee adjusted and the sampling of quant funds in some categories is small. For the period January 2005–June 2008, according to the Lipper data, long-only funds—both quant and nonquant—experienced a net outflow while all other categories experienced net inflows—albeit at different rates—with the exception of nonquant market neutral funds. The differences (as percentages) between quant and nonquant funds were not very large but quant funds exhibited more negative results.

In view of the preceding, the survey participants were asked if, given the poor performance of some quant funds in the year 2007, they thought that traditional asset management firms that have diversified into quantitative management would be reexamining their commitment. Nearly one-third agreed while 52% disagreed (16% expressed no opinion). Those that agreed tended to come from firms at which quantitative equity assets under management represent less than 5% of all equities under management or where there is a substantial fundamental overlay to the quantitative process.

The head of quantitative equity at a large traditional manager said:

When the firm decided back in the year 2000 to build a quant business as a diversifier, quant was not seen as a competitor to fundamental analysis. The initial role of quant managers was one of being a problem solver, for 130-30-like strategies or wherever there is complexity in portfolio construction. If quant performance is down, the firm might reconsider its quant products. Should they do so, I would expect that the firm would keep on board some quants as a support to their fundamental business.

Quantitative Processes, Oversight, and Overlay

Let's define what we mean by a quantitative process. Many traditionally managed asset management firms now use some computer-based, statistical decision-support tool and do some risk modeling. The study referred to an investment process as fundamental (or traditional) if it is performed by a human asset manager using information and judgment, and quantitative if the value-added decisions are made primarily in terms of quantitative outputs generated by computer-driven models following fixed rules. The study referred to a process as being hybrid if it uses a combination of the two. An example of the latter is a fundamental manager using a computer-driven stock-screening system to narrow his or her portfolio choices.

Among participants in the study, two-thirds had model-driven processes allowing only minimum (5%–10%) discretion or oversight, typically to make sure that numbers made sense and that buy orders were not issued for firms that were the subject of news or rumors not accounted for by the models. Model oversight was considered a control function. This oversight was typically exercised when large positions were involved. A head of quantitative equity said, “Decision making is 95% model-driven, but we will look at a trader's list and do a sanity check to pull a trade if necessary.”

Some firms indicated that they had automated the process of checking if there are exogenous events that might affect the investment decisions. One source said:

Our process is model driven with about 5% oversight. We ask ourselves: “Do the numbers make sense?” and do news scanning and flagging using in-house software as well as software from a provider of business information.

This comment underlines one of the key functions of judgmental overlays: the consideration of information that has a bearing on forecasts but does not yet appear in the predictors. This information might include, for example, rumors about important events that are not yet confirmed, or facts hidden in reporting or news releases that escape the attention of most investors.

Fundamental analysts and managers might have sources of information that can add to the information that is publicly available. However, there are drawbacks to a judgmental approach to information gathering. As one source said, “An analyst might fall in love with the Chief Financial Officer of a firm, and lose his objectivity.”

Other sources mentioned using oversight in the case of rare events such as those of July–August 2007. The head of quantitative management at a large firm said:

In situations of extreme market events, portfolio managers talk more to traders. We use Bayesian learning to learn from past events but, in general, dislocations in the market are hard to model.

Bayesian priors are a disciplined way to integrate historical data and a manager's judgment into the model.
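
As an illustration of what such a disciplined blend can look like, the sketch below uses a standard normal-normal conjugate update to shrink a historical mean return toward a manager's prior view. The numbers and the specific conjugate setup are illustrative assumptions, not a description of any surveyed firm's process.

```python
import numpy as np

def posterior_mean_return(sample_returns, prior_mean, prior_var):
    """Normal-normal conjugate update: blend the manager's prior view of the
    expected return with the historical sample mean, weighting each by its
    precision (inverse variance)."""
    n = len(sample_returns)
    sample_mean = np.mean(sample_returns)
    sample_var = np.var(sample_returns, ddof=1) / n   # variance of the sample mean
    w_prior = (1.0 / prior_var) / (1.0 / prior_var + 1.0 / sample_var)
    return w_prior * prior_mean + (1.0 - w_prior) * sample_mean

# Hypothetical: a prior view of a 5% expected annual return (standard deviation 4%)
# combined with ten years of noisy historical annual returns.
hist = np.random.default_rng(0).normal(0.08, 0.15, size=10)
print(posterior_mean_return(hist, prior_mean=0.05, prior_var=0.04**2))
```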

Another instance of exercising oversight is in the area of risk. One source said, “The only overlay we exercise is on risk, where we allow ourselves a small degree of freedom, not on the model.”

The key question is: Is there a best way to commingle judgment and models? Each presents pitfalls. Opinions among participants in the 2007 Intertek study differed as to the advantage of commingling models and judgment and the ways it might be done. More than two-thirds of the survey participants (68%) disagreed with the statement that the most effective equity portfolio management process combines quantitative tools and a fundamental overlay; only 26% considered that a fundamental overlay adds value. Interestingly, most investment consultants and fund-rating firms interviewed for the study shared the appraisal that adding a fundamental overlay to a quantitative investment process did not add value.

A source at a large consultancy said:

Once you believe that a model is stable, effective over a long time, it is preferable not to use human overlay as it introduces emotion, judgment. The better alternative to human intervention is to arrive at an understanding of how to improve model performance and implement changes to the model.

Some sources believed that a fundamental overlay had value in extreme situations, but not everyone agreed. One source said:

Overlay is additive and can be detrimental, oversight is neither. It does not alter the quantitative forecast but implements a reality check. In market situations such as of July–August 2007, overlay would have been disastrous. The market goes too fast and takes on a crisis aspect. It is a question of intervals.

Among the 26% who believed that a fundamental overlay does add value, sources cited the difficulty of putting all information in the models. A source that used models for asset managers said:

In using quant models, there can be data issues. With a fundamental overlay, you get more information. It is difficult to convert all fundamental data, especially macro information such as the yen/dollar exchange rate, into quant models.

A source at a firm that systematically uses a fundamental overlay said:

The question is how you interpret quantitative outputs. We do a fundamental overlay, reading the 10-Qs and the 10-Ks and the footnotes, plus looking at, for example, increases in daily sales invoices. I expect that we will continue to use a fundamental overlay: it provides a common-sense check. You cannot ignore real-world situations.

In summary, overlays and human oversight in model-driven strategies can be implemented in different ways. First, as a control function, oversight allows managers to exercise judgment in specific situations. Second, human judgment might be commingled with a model's forecasts.

Implementing a Quant Process

The 2007 survey participants were asked how they managed the model building and backtesting process. One-fourth of the participants said that their firms used more than one process. For example, at 65% of the sources, quantitative models are personally built and backtested by the asset manager; at 39%, quantitative models are built and backtested by the firm's central research center. More rarely, at 23% of the sources, models might also be built by the corporate research center to the specifications of the asset manager, while at 16%, models might also be built by the asset manager but backtested by the research center. (The percentages do not add up to 100 because the categories overlap.)

Some sources also cited a coming together of quantitative research and portfolio management. Certainly this is already the case at some of the largest quantitative players that began in the passive quantitative arena, where, as one source put it, “the portfolio manager has Unix programming skills as a second nature.”

The need to continuously update models was identified by sources as one of the major challenges to a quantitative investment process. A consultant to the industry remarked:

The specifics of which model each manager uses is not so important as long as management has a process to ensure that the model is always current, that as a prism for looking at the universe the model is relevant, that it is not missing anything. One problem in the U.S. in the 1980s–’90s was that models produced spectacular results for a short period of time and then results decayed. The math behind the models was static, simplistic, able to capture only one trend. Today, quants have learned their lesson; they are paranoid about the need to do a constant evaluation to understand what's working this year and might not work next year. The problem is one of capturing the right signals and correctly weighting them when things are constantly changing.

The need to sustain an ongoing effort in research was cited by investment consultants as a determining factor in manager choices. One consultant said:

When quant performance decays it is often because the manager has grown complacent and then things stop working. When we look at a quant manager, we ask: Can they continue to keep doing research?

One way to ensure that models adapt to the changing environment is to use adaptive modeling techniques. One quantitative manager said:

You cannot use one situation, one data set in perpetuity. For consistently good performance, you need new strategies, new factors. We use various processes in our organization, including regime-shifting adaptive models. The adaptive model draws factors from a pool and selects variables that change over time.

The use of adaptive models and of strategies that can self-adapt to changing market conditions is an important research topic. From a mathematical point of view, there are many tools that can be used to adapt models. Among these is a class of well-known models with hidden variables, including state-space models, hidden Markov models, and regime-shifting models. These models have one or more variables that represent different market conditions. The key challenge is estimation: the ability to identify regime shifts sufficiently early calls for a rich regime structure, but estimating a rich regime-shifting model calls for a very large data sample—something we rarely have in finance.
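
As a concrete, deliberately simplified example of such a hidden-variable model, the sketch below fits a two-state Gaussian hidden Markov model to a synthetic return series. The use of the open-source hmmlearn package, the parameters, and the data are illustrative assumptions, not a description of any particular manager's adaptive process.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # third-party package, assumed available

# Synthetic daily returns: a calm, low-volatility stretch followed by a stressed one.
rng = np.random.default_rng(42)
calm = rng.normal(0.0005, 0.005, size=500)
stressed = rng.normal(-0.001, 0.02, size=250)
returns = np.concatenate([calm, stressed]).reshape(-1, 1)

# Two hidden states, each with its own mean and variance; a transition matrix
# governs how persistent each regime is.
model = GaussianHMM(n_components=2, covariance_type="full", n_iter=200, random_state=0)
model.fit(returns)

states = model.predict(returns)   # most likely regime at each date
print(model.means_.ravel())       # per-regime mean return
print(model.transmat_)            # estimated regime-switching probabilities
```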

The survey participants were asked if they thought that quantitative-driven equity investment processes were moving towards full automation. By a fully automated quant investment process we mean a process in which investment decisions are made by computers with little or no human intervention. An automated process includes the input of data, production of forecasts, optimization and portfolio formation, oversight, and trading. Among those expressing an opinion, as many believed that quantitative managers are moving toward full automation (38%) as not (38%). Industry observers and consultants also had difficulty identifying a trend. One source remarked, "There are all degrees of automation among quants and we see no obvious trend either towards or away from automation." It would appear that we will continue to see a diversity of management models. This diversity is due to the fact that there is no hard science behind quantitative equity investment management; business models reflect the personalities and skill sets inside an organization.

Obstacles to full automation are not due to technical shortcomings. As noted earlier, there are presently no missing links in the automation chain going from forecasting to optimization. Full automation is doable, but successful implementation depends on the ability to link seamlessly a return forecasting tool with a portfolio formation strategy. Portfolio formation strategies can take the form of full optimization or be based on some heuristics with constraints.
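
A minimal sketch of the last link in such a chain is given below: a heuristic portfolio formation step that turns a vector of return forecasts into constrained long-only weights, equal-weighting the top-ranked names subject to a per-name cap. The function and its parameters are hypothetical, intended only to illustrate the "heuristics with constraints" alternative to full optimization.

```python
import numpy as np

def form_portfolio(forecasts: np.ndarray, top_fraction: float = 0.2,
                   max_weight: float = 0.05) -> np.ndarray:
    """Turn return forecasts into long-only weights: equal-weight the top-ranked
    names, holding enough names that the per-name cap is respected."""
    n = forecasts.size
    n_held = min(n, max(int(np.ceil(1.0 / max_weight)), int(round(top_fraction * n))))
    held = np.argsort(forecasts)[-n_held:]   # names with the highest forecasts
    weights = np.zeros(n)
    weights[held] = 1.0 / n_held             # equal weight, <= max_weight by construction
    return weights

# Hypothetical automated chain: model forecasts in, portfolio weights out.
forecasts = np.random.default_rng(1).normal(size=500)
w = form_portfolio(forecasts)
assert abs(w.sum() - 1.0) < 1e-9 and w.max() <= 0.05
```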

The progress of full automation will ultimately depend on performance and investor acceptance. Consultants interviewed for this study were divided in their evaluation of the advisability of full automation. One source said, "All things being equal, I actually prefer a fully automated process once you believe that a model is stable, effective over a long time." However, in a divergent view, another consultant said, "I am not keen on fully automated processes. I like to see human intervention, interaction before and after optimization, and especially before trading."

Risk Management

The events of July–August 2007 highlighted once more that quantitatively managed funds can be exposed to the risk of extreme events (i.e., rare, large—often adverse—events). Fundamentally managed funds are also exposed to the risk of extreme events, typically of a more familiar nature, such as a market crash or a large drop in the value of single firms or sectors. A head of quantitative management remarked, "There are idiosyncratic risks and systemic risks. Fundamental managers take idiosyncratic risk while the quants look at the marginal moves, sometimes adding leverage."

There seems to be a gap between state-of-the-art risk management and the practice of finance. At least, this is what appears from a number of statements made after the summer of 2007 attributing losses to multisigma events, as if returns were normally distributed. It is now well known that financial return distributions are fat-tailed: the likelihood of extreme events is much larger than a normal distribution would imply. The fat-tailed nature of financial phenomena has been at the forefront of research in financial econometrics since the 1990s, and empirical research has consistently shown that returns are best represented as fat-tailed processes.
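
To put a number on the point, the sketch below compares the probability of a five-standard-deviation daily loss under a normal distribution and under a fat-tailed Student-t distribution with three degrees of freedom, rescaled to unit variance. The threshold and degrees of freedom are illustrative choices, not estimates from data.

```python
from math import sqrt
from scipy import stats

threshold = -5.0                      # a "five-sigma" daily loss
p_normal = stats.norm.cdf(threshold)  # probability under a Gaussian

nu = 3.0                              # degrees of freedom of the Student-t
scale = sqrt((nu - 2.0) / nu)         # rescale so the t-distribution has unit variance
p_fat_tailed = stats.t.cdf(threshold / scale, df=nu)

print(f"normal: {p_normal:.2e}, Student-t(3): {p_fat_tailed:.2e}")
# The fat-tailed distribution assigns a probability several orders of magnitude
# larger to the same "multisigma" event.
```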

Facts like these have an important bearing on the distribution of returns of dynamic portfolios. Consequently, the 2007 study asked survey participants if they believed that the current generation of risk models had pitfalls that do not allow one to properly anticipate risks such as those of July–August 2007. Just over two-thirds of the survey respondents agreed that, because today's risk models do not take into consideration global systemic risk factors, they cannot predict events such as those of July–August 2007. One source commented:

Risk management models work only under benign conditions and are useless when needed. We use two risk methods, principal component analysis and rare (six-sigma) events, and risk models from MSCI Barra and Northfield. But the risk models are misspecified: most pairs of stocks have high correlations.

Another source added:

There are estimation errors in everything, including in risk models. You know that they will fail, so we add heuristics to our models. Risk models do not cover downside risk but they do help control it. Studies have shown that risk models do improve the information ratio.

The growing use of derivatives in equity portfolio management is adding a new type of risk. One source commented:

The derivatives markets are susceptible to chaos; they overheat compared to normal markets. Derivatives contracts are complex and no one knows how they will behave in various scenarios. In addition, there is credit risk or counterparty risk dealing with entities such as Sentinel—not a Wall Street firm—that can go with a puff of smoke. Their going under was blamed on the subprime crisis but it was fraud.

Sixty-three percent of the survey participants agreed that the derivative market is a market driven by its own supply and demand schedule and might present risk that is not entirely explained in terms of the underlying.

Why Implement a Quant Process?

According to survey respondents, three main objectives were behind the decision to adopt (at least partially) a quantitative-based equity investment process: tighter risk control, more stable returns, and better overall performance. The profile of a firm's founder(s) and/or the prevailing in-house culture was also a factor, in that it provided the requisite environment.

Other major objectives reported behind the decision to implement a quantitative equity investment process included diversification, either in general or in terms of new products such as 130-30-type strategies, and scalability, including the ability to scale to different universes. Regarding diversification in the general sense, a source at a large asset management firm with a small quant group said:

An important motivating factor is diversification of the overall product lineup performance. Management believes that quant and fundamental products will not move in synch.

As for the ability to offer new products such as the long-short strategies, a source at a sell-side firm modeling for the buy side remarked:

We are seeing a lot of interest by firms known for being fundamental and that now want to introduce quant processes in the form of screens or other. These firms are trying to get into the quant space and it is the 130-30-type product that is pushing into this direction.

It was generally believed that quantitatively managed funds outperform fundamental managers in the 130-30-type arena. The ability to backtest the strategy was cited as giving quantitatively managed funds the edge. A manager at a firm that offers both fundamental and quantitative products said, “Potential clients have told us that new products such as the 130-30 strategies are more believable with extensive quant processes and testing behind them.”

More generally, sources believed that quantitative processes give an edge whenever there is a complex problem to solve. An investment consultant remarked:

Quant has an advantage when there is an element of financial engineering. The investment process is the same but quant adds value when it comes to picking components and coming up with products such as the 130-30.

Another source added:

A quant process brings the ability to create structured products. In the U.S., institutional investors are using structured products in especially fixed income and hedge funds. Given the problem of aging, I would expect more demand in the future from private investors who want a product that will give them an income plus act as an investment vehicle, such as a combination of an insurance-type payout and the ability to decompose and build up.

As for scalability, a consultant to the industry remarked:

One benefit a quantitative process brings to the management firms is the ability to apply a model quickly to a different set of stocks. For example, a firm that had been applying quant models to U.S. large cap also tested these models on 12–15 other major markets in the backroom. Once they saw that the models had a successful in-house track record in different universes, they began to commercialize these funds.

Among survey participants, the desire to stabilize costs, revenues, and performance or to improve the cost–revenue ratio was rated relatively low as a motivating factor for introducing quantitative processes. But one source at a large asset management firm said that stabilizing costs, revenues, and performance was an important factor in the firm's decision to embrace a quantitative process. According to this source, "Over the years, the firm has seen great consistency in a quant process: Fees, revenues, and costs are all more stable, more consistent than with a fundamental process."

Bringing management costs down was rated by participants as the weakest factor behind the drive to implement a quantitative-driven equity investment process. A source at a large asset management firm with a small quantitative group said:

Has management done a cost–benefit analysis of quant versus fundamental equity investment management process? Not to my knowledge. I was hired a few years ago to start up a quant process. But even if management had done a cost–benefit analysis and found quant attractive, it would not have been able to move into a quant process quickly. The average institutional investor has a seven-man team on the fund. If you were to switch to a two-man quant team, 80% of the clients would go away. Management has to be very careful; clients do not like to see change.

Barriers to Entry

The 2007 study concluded with an investigation of the barriers to entry in the business. Seventy-seven percent of the survey respondents believed that the active quantitative arena will continue to be characterized by the dominance of a few large players and a large number of small quant boutiques. Only 10% disagreed.

Participants were asked to rate a number of factors as barriers to new entrants into the quant equity investment space. The most important barrier remained the prevailing in-house culture. One source at a fundamentally oriented firm said that while very few firms are seriously opposed to trying to add discipline and improve performance by applying some quant techniques, the problem is that it is not so easy to change an organization.

A source at a large international investment consultancy commented:

For a firm that is not quant-endowed, it is difficult to make the shift from individual judgment to a quant process. Those that have been most successful in terms of size in the active quant arena are those that began in passive quant. They chose passive because they understood it would be easier for a quantitative process to perform well in passive as opposed to active management. Most of these firms have been successful in their move to active quant management.

A source at a large firm with fundamental and quant management styles said:

Can a firm with a fundamental culture go quant? It is doable but the odds of success are slim. Fundamental managers have a different outlook and these are difficult times for quants.

Difficulty in recruiting qualified persons was rated the second most important barrier, while the cost of qualified persons was considered less of a barrier. Next came the difficulty of gaining investor confidence and the entrenched position of market leaders. An industry observer remarked:

What matters most is the investment culture and market credibility. If an investor does not believe that the manager has quant as a core skill, the manager will not be credible in the arena of quant products. There is the risk that the effort is perceived by the investor as a backroom effort with three persons, understaffed, and undercommitted.

Among the suggested selling points, participants (unsurprisingly) identified alpha generation as the strongest for quant funds, followed by the disciplined approach and better risk management. Lower management and trading costs and a statistics-based stock selection process were rated lowest.

Survey participants were also asked to rate factors holding back investment in active quant equity products. A lack of understanding of quant processes by investors and consultants was perceived to be the most important such factor. As one quantitative manager at an essentially fundamental firm noted, "Quant products are unglamorous. There are no 'story' stocks to tell, so it makes it a hard sell for consultants to their clients."

The need to educate consultants and investors alike, in an effort to gain their confidence, was cited by several sources as a major challenge going forward. Educating investors might require more disclosure about quant processes. At least, that was what just under half of the survey participants believed, while one-fourth disagreed and one-fourth had no opinion.

One CIO of equities who believes that greater disclosure will be required remarked:

Following events of this summer [i.e., July–August 2007], quants will need to be better on explaining what they do and why it ought to work. They will need to come up with a rationale for what they are doing. They will have to provide more proof-of-concept statements.

However, among the sources that disagreed, the CIO of equities at another firm said:

One lesson from the events of July–August 2007 is that we will be more circumspect when describing what we are doing. Disclosing what one is doing can lead to others replicating the process and thus a reduction of profit opportunities.

Lack of stellar performance was rated a moderately important factor holding back investments in quantitative funds; it is, however, balanced by greater consistency in performance. A source at a fund rating service said, "Because quant funds are broadly diversified, returns are watered down. Quants do not hit the ball out of the park, but they deliver stable performance." The ability to deliver stable if not stellar performance can, of course, be turned into a major selling point.

Quantitative managers cite how Oakland Athletics' general manager Billy Beane improved his team's performance using sabermetrics, the analysis of baseball through objective (i.e., statistical) evidence. Beane's analysis led him to shift the emphasis from acquiring players who hit the most home runs to acquiring players with the most consistent records of getting on base.22 Interestingly, Beane is credited with having made the Oakland Athletics the most cost-effective team in baseball, though winning the American League Championship Series has proved more elusive.
