CHAPTER 2
Dynamic Adjustment in an Economy
Frictions Matter

Architecture starts when you carefully put two bricks together. There it begins.

—Ludwig Mies van der Rohe

INTRODUCTION

For decision makers, theory provides useful guidelines. Almost every major central bank around the globe, along with international organizations such as the International Monetary Fund (IMF) and the World Bank, utilizes large-scale econometric models (also known as macro-models) based on theory to guide decision making. These models attempt to characterize the behavior of certain agents, including consumers, investors, and public policy makers. However, one of the central assumptions behind many of today’s macro-models is that all agents in the model are in a state of equilibrium simultaneously or move seamlessly to such a state, which creates a general equilibrium and implies a frictionless model for the economy.

Yet the Great Recession, the financial crisis, and the recent plunge in oil prices (along with several other events—debt and currency crises, for example) have forced economists to look beyond frictionless models and find alternatives. This chapter discusses some of these models with and without frictions. The first section of the chapter presents the theoretical foundations of macro-models and of economic adjustment with frictions; in the second section, we characterize the equilibrium states of different sectors and markets. The third and final section discusses guidelines for modeling equilibrium states with frictions.

In our view, frictions (barriers to smooth economic adjustment) exist, and in the presence of frictions, frictionless models do not provide an accurate assessment of the economy. As a result, policy recommendations based on frictionless models lead to misguided decisions and unanticipated outcomes.

In any economy, the short-run goals of monetary and fiscal policy decisions change with the business cycle, and policy intervention is sometimes necessary to restore equilibrium in some sectors of the economy. By allowing for frictions in macro-models, policy makers would be better able to assess the state of an economy accurately, suggest appropriate policies, and anticipate the timing and magnitude of any policy change. In the case of the U.S. economy, a number of policies fell short in the wake of the Great Recession due to a lack of acknowledgment of frictions. For example, while the theory underlying frictionless models suggested that significant fiscal stimulus in 2008–2009 and a zero-interest-rate policy would jump-start economic growth, uncertainty among consumers and businesses about financial regulation was an unforeseen friction, holding back credit markets and choking off the recovery.

“The Truth Is in the Details”: Micro-Foundations of a Macro-Model

Standard macro-models consist of several blocks (sectors) and each block represents different aspects of the economy. Typical blocks of macro-models are (1) aggregate demand (AD), (2) aggregate supply (AS), and (3) economic policy.1 The AD block consists of the spending versus saving decisions of economic agents (e.g., consumers, investors, and government), while the AS block evolves from the price-setting decisions of firms and the leisure/work decisions of households. Economic policy actions are included in the macro-models through the policy block. Some assumptions are crucial to understanding the structure and output of a macro-model. For example, one assumption of the model is that microeconomics is the foundation of the AD and AS blocks. That is, decisions in the AD and AS blocks are based on a representative consumer and a representative firm. A second assumption states that different markets in the model are in equilibrium, or frictionless, in the long run.2 These two assumptions are important, as they may be violated in the short run or in practice because of frictions. Furthermore, decision makers would need a different set of policy tools to address short-run frictions that would not otherwise exist in a frictionless economy.

How might you actually model these blocks of the economy? Households are typically assumed to be rational. Rationality has a precise technical definition, but we can loosely define it as meaning that households have a utility function and act consistently with maximizing expected utility subject to a budget constraint. Households generally maximize expected lifetime utility, and a simplified household maximization problem is given below, where household utility is assumed to depend only on consumption.

\[
\max_{\{C_t\}_{t=0}^{\infty}} \; E_0 \sum_{t=0}^{\infty} \beta^t \ln(C_t), \quad \text{subject to the household's budget constraint,}
\]

where $\beta \in (0,1)$ is the household's discount factor and $C_t$ is consumption in period $t$.

Notice that the utility in any period is given by the natural logarithm of consumption, $\ln(C_t)$. This is a convenient choice for a utility function because it is monotonically increasing (if $C_1 > C_2$, then $\ln(C_1) > \ln(C_2)$), easy to work with, and has another nice property: diminishing marginal returns. That is, the additional utility from an additional unit of consumption decreases as consumption rises ($U''(C_t) < 0$). Combining this with a household budget constraint allows us to solve the household “problem” and obtain equilibrium conditions. Similarly, on the production side of the economy, a representative firm is assumed to be a profit maximizer. The firm's problem is based on the available labor and capital, and it pays rents to the household to utilize these factors in production (assuming households own the capital stock). This style of model is incredibly flexible and, although it can be complicated, can be extended in many ways. In addition, its micro-foundations provide a response to the Lucas Critique, as the agents in the economy will still attempt to maximize utility, profits, and so on, and yield a new equilibrium given a change in policy.3
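To illustrate how combining log utility with a budget constraint produces an equilibrium condition, consider a minimal two-period sketch; the notation here ($\beta$ for the discount factor, $r$ for the interest rate, $Y_t$ for income) is ours and is introduced only for this illustration:

\[
\max_{C_1, C_2} \; \ln(C_1) + \beta \ln(C_2)
\quad \text{subject to} \quad
C_1 + \frac{C_2}{1+r} = Y_1 + \frac{Y_2}{1+r}.
\]

The first-order conditions imply the familiar consumption Euler equation,

\[
\frac{1}{C_1} = \beta (1+r)\,\frac{1}{C_2},
\]

which ties the household's consumption path to the interest rate and is the kind of equilibrium condition on which the AD block of a macro-model is built.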

The micro-foundations assumption, which states that a representative household/firm is the decision maker, implicitly assumes preferences such that expectations are identical within a group of economic agents.4 The benefit of this assumption is that it makes determining equilibrium less complicated, as only one set of preferences for the household/firm is needed to represent the preferences of all agents in the model and thus determine the AD/AS blocks.

In reality, however, preferences and expectations are not identical within a group of economic agents (e.g., consumers and investors). These discrepancies, which Keynes originally called “animal spirits,” may create frictions (disequilibrium or partial equilibrium) in the model, at least in the short run.5 As a result of animal spirits, one segment of economic agents may be more optimistic (or pessimistic) about the economy than another segment of agents. This optimism (or pessimism) among certain agents may create a bubble (or bust) in that economy. Recessions provide us with undeniable evidence of frictions. Some recessions, such as the Great Recession of 2007–2009, might represent a structural break, in the sense that equilibrium may have shifted upward or downward for some sectors, or perhaps the entirety, of an economy.

For example, in response to the Great Recession, the Federal Open Market Committee (FOMC) brought down the federal funds target rate to an unprecedented 0.00 to 0.25 percent range, and kept the fed funds rate within that range for seven years (Figure 2.1). Similarly, the unemployment rate ran above the natural level, given by the nonaccelerating inflation rate of unemployment (NAIRU), for a longer period than in prior recoveries, another example of persistent disequilibrium (Figure 2.2).


Figure 2.1 Fed Funds Rate

Source: Federal Reserve Board


Figure 2.2 Unemployment Rate

Sources: U.S. Department of Labor and Congressional Budget Office

Moreover, movements from one equilibrium to another are unlikely to be smooth, which further underscores the importance of frictions. In sum, frictions are possible in the short run, and some short-run frictions even have the ability to shift the longer-run equilibrium. In the current cycle, an obvious piece of evidence of frictions and a shift in the longer-run equilibrium is the high number of employees who are working part-time for economic reasons but would prefer a full-time job (Figure 2.3). Another illustration of market frictions is the outward shift in the Beveridge curve. This shift signals that at any given level of unemployed workers, there is a higher level of vacancies that firms cannot fill (Figure 2.4). Therefore, there is a persistent gap between the unemployed and job vacancies, which indicates an ongoing friction in the reestablishment of a new labor market equilibrium.


Figure 2.3 Part-Time Workers for Economic Reasons

Source: U.S. Department of Labor


Figure 2.4 The Beveridge Curve

Source: U.S. Department of Labor

How Markets Function: Textbook Equilibrium

An economy consists of several markets (e.g., labor, housing, and money markets), and interactions between different economic agents determine equilibrium in these markets. Most economies are also open to international trade, thereby making the foreign exchange and trade markets also important. To understand how these markets function, we raise a few critical questions. What determines equilibrium in a market? What drives a market away from the equilibrium? How can equilibrium (or disequilibrium) in one market affect equilibrium in another market?

In the standard frictionless approach, a market (the labor market for example) is in equilibrium when the quantity demanded (demand) is equal to the quantity supplied (supply), ceteris paribus. Furthermore, this scenario of equilibrium supply and demand determines an equilibrium price (or wage rate, in the case of the labor market). Put differently, equilibrium in the labor market indicates that market participants (workers and employers) have obtained what they were seeking. That is, workers who are willing to work at the equilibrium wage rate (e.g., W*) would find work. By the same token, employers who are offering W* wage would find workers (Figure 2.5).


Figure 2.5 Labor Market Example

There are several crucial points we want to stress with this simple illustration, but let us review an intuitive example. First, we can typically divide participants of a market into two groups, which are demand (user) and supply (provider).6 Second, decisions from these two groups determine equilibrium. Third, resources are optimally utilized in the market, as quantity demanded equals quantity supplied. Next, the equilibrium price (W*) does not imply a constant equilibrium value, as it may change over time. In addition, the equilibrium concept represents a stable path, and if one side of the market (e.g., demand) changes, then the equilibrium value (W*) would also change to equalize supply and demand.

Our final point divides changes in equilibrium states into two groups. If quantity supplied (supply) increases, then the equilibrium price would fall to equate supply and demand. In the frictionless sense, this scenario is known as “a movement along the curve.” That is, if the equilibrium value changes due to intramarket factors, then the economy moves along the supply-and-demand curves. However, if equilibrium changes due to external factors (outside the market), then it is a “shift in the curve.” Both changes would require different policy actions from decision makers if policy makers desire a different outcome than that determined by the market. We shed light on this topic in the next section.
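To make the W* and Q* calculation concrete, here is a minimal sketch in Python that solves for the equilibrium wage and quantity from linear labor demand and supply curves; the functional forms and coefficients are hypothetical, chosen only for illustration:

```python
def labor_market_equilibrium(a_d, b_d, a_s, b_s):
    """Solve Qd = a_d - b_d*W and Qs = a_s + b_s*W for the wage W* at which Qd = Qs."""
    w_star = (a_d - a_s) / (b_d + b_s)   # wage that equates demand and supply
    q_star = a_d - b_d * w_star          # employment at the equilibrium wage
    return w_star, q_star

# Hypothetical curves: demand Qd = 100 - 2W, supply Qs = 20 + 3W
w_star, q_star = labor_market_equilibrium(a_d=100, b_d=2, a_s=20, b_s=3)
print(f"W* = {w_star:.2f}, Q* = {q_star:.2f}")   # W* = 16.00, Q* = 68.00
```

A shift in either curve (a change in a_d or a_s, say) moves W*, which is the sense in which the equilibrium value is a stable path rather than a constant.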

Dynamic Adjustments: “Creative Destruction” and Equilibrium States

A standard, frictionless macro-model has good theoretical foundations; however, the model may be too simple for practical decision making. In practice, decision makers can design policies for the short run and long run. In the short run, markets experience shocks (internal and external), and those shocks create a state of disequilibrium. Frictions may be significant in preventing a smooth move to a new equilibrium and may further alter the potential future path of the relationship between variables of interest in search of a new equilibrium, forcing us to rethink the model's existing theoretical foundations. A classic episode of such shocks and the related market adjustments (movements along the curve and shifts in the curve) has been observed recently. The Great Recession was a shock (a structural break) to the U.S. economy, and the effect of that shock was seen in every major sector of the economy. This raises the question: Can we quantify a market's distortions?

For some markets, there is more than one indicator to judge market disequilibrium. For instance, common measures to judge labor market performance are the unemployment rate, wage rate, monthly net change in nonfarm payrolls, and the labor force participation rate. For interest rates, short-term rates (the fed funds target rate), long-term rates (the 10-year Treasury yield), or a combination of both (yield spreads) can be utilized to evaluate the market’s position relative to the equilibrium state.

There have been a number of sharp market adjustments in recent years. The first was the S&P 500 index, which dropped 51 percent between October 2007 and March 2009 (Figure 2.6). Another was the consumer confidence index, which dropped 77 percent between July 2007 and February 2009 (Figure 2.7). A final example is the federal funds target rate, which was in the 0.00 to 0.25 percent range from December 2008 to December 2015. There are many more examples of such market movements, and each can be associated with large market disequilibria that require adjustments by market actors. Yet these actors are limited by frictions that alter the speed and completeness of adjustments relative to the assumption of a frictionless model.


Figure 2.6 S&P 500 Index

Source: S&P


Figure 2.7 Consumer Confidence Index

Source: The Conference Board, Inc. Reprinted with permission of The Conference Board. For more information, please see www.conference-board.org. Consumer Confidence Index is a registered trademark of The Conference Board, Inc.

Not all shocks create negative or sudden effects. Some shocks may shift equilibria upward, and the effect on the markets may be gradual. In 1942, Schumpeter developed the concept of “creative destruction,” which suggests that new and improved technology not only replaces existing technology but also improves output, thus shifting equilibrium upward.7 One example of creative destruction (a positive, gradual shock) is U.S. productivity growth since the mid-1990s (Figure 2.8). Some have used the term productivity resurgence to describe the post-1995 era, as U.S. productivity growth picked up between 1996 and 2006. The average productivity growth rate from 1996 to 2011 was 2.49 percent, higher than the 1.46 percent average during 1974–1995. Some analysts suggest information technology (e.g., the Internet and personal computers) is one of the major sources of the productivity resurgence.8 However, during the current economic recovery, productivity growth has slowed considerably, averaging a weak 0.53 percent since 2011.


Figure 2.8 Productivity—Total Nonfarm

Source: U.S. Department of Labor


Figure 2.9 The Labor Market Index

Source: U.S. Department of Labor

In sum, due to shocks and frictions, market disruptions can persist and the effects of a shock on a market’s equilibrium can persist as well. In addition, a shock can shift a market’s equilibrium upward or downward from the existing equilibrium state.

Why Do We Care about Frictions?

After establishing that frictions exist, the question arises: Why do we care about frictions? For one, if a market is in disequilibrium, then resources in that market may not be fully utilized. The Labor Market Index falling below zero during the Great Recession is an example of labor market disequilibrium (Figure 2.9). Labor market disequilibrium suggests that some workers are unable to find jobs, some employers are unable to find workers, or perhaps both parties are unable to find matches. Another aspect of frictions is that sometimes distortions need to be “fixed.” That is, policy intervention (e.g., monetary and/or fiscal policy) is required to bring the market back to a state of equilibrium.

A key point that is crucial for decision makers is that distortions in one market can affect other markets. For example, the Taylor rule suggests a relationship between interest rates, inflation, and output. Money neutrality implies a relationship between money supply and inflation (Figure 2.10), and the Phillips curve describes a link between the unemployment rate and inflation.
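As an illustration of the first of these relationships, here is a minimal sketch of a textbook Taylor-type rule; the 2 percent inflation target, 2 percent neutral real rate, and 0.5 weights are the standard illustrative values, not estimates from this chapter:

```python
def taylor_rule_rate(inflation, output_gap, neutral_real_rate=2.0,
                     inflation_target=2.0, w_pi=0.5, w_y=0.5):
    """Nominal policy rate implied by a textbook Taylor rule (all inputs in percent)."""
    return (neutral_real_rate + inflation
            + w_pi * (inflation - inflation_target)
            + w_y * output_gap)

# Example: 3 percent inflation and a +1 percent output gap imply a 6.0 percent policy rate.
print(taylor_rule_rate(inflation=3.0, output_gap=1.0))
```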


Figure 2.10 M2 Money Supply Growth vs. PCE Inflation

Sources: U.S. Department of Commerce and Federal Reserve Board

Our takeaway from this review is that there are relationships between different markets, and therefore disequilibrium in one market will likely lead to further adjustments in other markets.9 We have also seen this in practice, as the housing sector was the epicenter of the Great Recession, but the labor market experienced the largest job loss in the post-WWII era (Figure 2.11). The unemployment rate stayed elevated for several years even after the official ending of the Great Recession. Furthermore, the fed funds target rate was held at an unprecedentedly low 0.00 to 0.25 percent range for seven years, far below what many consider normal. Finally, average inflation rates remain far below the Fed’s target of 2 percent. Since 1994, the annual change in the personal consumption expenditures (PCE) deflator has averaged 1.9 percent, while that average has fallen to 1.4 percent since 2009. Therefore, distortions in one market (interest rates) can spread to others (asset prices) and also require policy interventions to restore equilibrium states in markets.


Figure 2.11 Nonfarm Employment Growth

Source: U.S. Department of Labor

A final point we want to stress is the possibility of a partial equilibrium, instead of a general equilibrium, in an economy. That is, within an economy, some markets may be in disequilibrium while others are simultaneously in a temporary, unstable equilibrium. For instance, inflation rates, interest rates, and the labor market have experienced distortions, while output and equity markets currently appear to be functioning without frictions. Yet economic growth is considered subpar, while financial asset prices are considered exceedingly high by some. Industrial production and the Institute for Supply Management (ISM) manufacturing index suggest normal functioning in the manufacturing sector (Figure 2.12), while labor markets exhibit a persistent excess supply of labor.


Figure 2.12 Industrial Production

Source: Federal Reserve Board

Industrial production crossed its prerecession peak in November 2014 and the ISM manufacturing index entered expansionary territory (above 50) in August 2009 and has been above 50 most of the time since then. The Standard & Poor’s (S&P) 500 index, a proxy for the equity market, crossed its prerecession peak in March 2013 and has climbed to new all-time highs. Therefore, the output/financial sectors of the U.S. economy appear to be functioning normally over the past few years, while the labor market remains in significant disequilibrium, as illustrated by the structural shift in the Beveridge curve and the persistence of long-term unemployment (Figure 2.13).


Figure 2.13 Long-Term Unemployment

Source: U.S. Department of Labor

For decision makers, the concepts of frictions and partial equilibrium are crucial since decision makers need to assess (or forecast) the chances of an impending crisis and may need to design appropriate policy tools to address that crisis. Predicting a crisis is often extremely difficult, as suggested by Rudi Dornbusch, who said that a crisis “takes a much longer time coming than you think, and then it happens much faster than you would have thought.”10 However, by allowing for the possibility of frictions and partial equilibria in models, decision makers can improve the decision-making process.

A Frictionless Assumption Is Not a Harmless Assumption

Decision makers utilize many tools to analyze an economy, in particular, when estimating the impact of an impending crisis or the effect of a policy change on different sectors of an economy. One widely employed tool in today’s decision-making world is known as a macro-model. One of the key assumptions of a standard macro-model is the frictionless movement of prices and production. However, frictions exist, and in the presence of frictions, frictionless models do not provide an accurate assessment of the economy. Certainly, this was true in the wake of the Great Recession. Furthermore, policy recommendations based on an incorrect assessment of the functioning of the economy would lead to poor decisions and disappointing results.

In sum, private and public policy decisions change with the business cycle and policy intervention is sometimes necessary to restore equilibrium in some markets of an economy. However, by allowing frictions in the macro-model, we can provide more accurate assessments of the state of the economy and suggest appropriate policy actions. In the case of the United States, during the past eight years, sector-specific policies have been hit and miss in their ability to restore equilibrium in their respective sectors due to the misreading of the actual functioning of the economy.

QUANTIFYING FRICTIONS: IS THE LONG-RUN AVERAGE A USEFUL GUIDE FOR THE FUTURE?

It is the mark of an educated mind to be able to entertain a thought without accepting it.

—Aristotle

Keynes said, “This long run is a misleading guide to current affairs. In the long run, we are all dead.” Still, many analysts utilize the “long-run average” concept as a guideline without regard to possible changing circumstances. Such average projections reflect the presence of the anchoring bias. In addition, some analysts extend past trends to predict the future. Is the past trend useful to predict the future? Is there a long-run average that decision makers can utilize for future guidance? In truth, the answer depends on the individual situation of each sector—there is no a priori answer.

In other words, we must determine the behavior of a data series before we make a prediction or adopt a long-run average as a benchmark for our models. When we utilize past information, or long-run averages, the implicit assumption is that the future will be similar to the past. The assumption that the future will be consistent (similar) with the past has serious consequences, if incorrect, for econometric analysis and forecasting. As mentioned earlier, standard econometric models, which assume frictionless behavior (that the future is similar to the past), may provide misleading analysis in the presence of frictions. This section presents methods to identify frictions in an economy that would impact our forecasting accuracy.

Several econometric methods are utilized to identify frictions in the U.S. economy.11 Some of the major sectors of the economy, including the labor market, interest rates, the financial sector, output, and exchange rates, are characterized by structural breaks. Many economic series (e.g., employment, productivity, the dollar index, the S&P 500, the 10-year Treasury yield, and the money supply) are not mean reverting and experience structural breaks. These findings imply that the assumption that these series move around a stable average value (mean reverting) over time is incorrect. Furthermore, evidence of structural breaks cautions analysts that the future behavior of these series may differ from the past. In addition, econometric analysis using traditional tools (ordinary least squares, for example) would provide misleading results, inaccurate forecasts, and inaccurate forecast bands (confidence intervals). Decision makers should not put heavy weight on the past average behavior of a series and expect the future to be the same.

Picturing Adjustments in Motion

The first method to identify frictions (behavior different from the past) in a variable (or a sector) is the estimation of a long-run trend using the Hodrick-Prescott (H-P) filter.12 Major benefits of estimating a long-run trend are (1) determining whether a series has a cyclical pattern and (2) evaluating, at any point in time, whether a series is experiencing an acceleration/boom (above trend) or a deceleration/bust (below trend) around that cyclical pattern. In addition, the presence of a time trend or cyclical feature would be an indication of dynamic adjustment. How?

Figure 2.14 shows the log of nonfarm payrolls (employment) and its long-run trend (H-P filter–based trend). The employment trend moves upward most of the time, and since 2000 the series has also shown cyclical behavior. The upward long-run trend of employment has some notable features as well as a strong indication of structural breaks. First, the trend experienced several shifts (breaks), and every break reduced the pace of employment growth (the trend becomes flatter). Second, between early 2000 and 2015, the trend flattened and became more like a horizontal line, which indicates a loss of the pre-2000 momentum. Finally, since 2011, the trend has resumed a pattern of cyclical recovery, although at a different pace than its pre-2000 behavior. In sum, the H-P filter–based trend shows different behaviors for employment growth in different time periods. The long-run average of employment growth may not be a useful guide for the future; moreover, the long-run average is actually the result of several distinct short-run cyclical trends.
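For readers who want to reproduce this kind of decomposition, here is a minimal sketch using the H-P filter implementation in statsmodels; the input file name is hypothetical, and the smoothing parameter of 129,600 is the value commonly used for monthly data (1,600 for quarterly data):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

# Hypothetical input: a monthly series of nonfarm payroll levels with a DatetimeIndex.
payrolls = pd.read_csv("nonfarm_payrolls.csv", index_col=0, parse_dates=True).squeeze("columns")

log_emp = np.log(payrolls)

# hpfilter returns the cyclical component and the long-run trend.
cycle, trend = hpfilter(log_emp, lamb=129600)

decomposition = pd.DataFrame({"log_employment": log_emp, "hp_trend": trend, "hp_cycle": cycle})
print(decomposition.tail())
```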


Figure 2.14 H-P Filter–Based Long-Run Trend of Employment

Source: U.S. Department of Labor

The H-P filter–based unemployment rate trend shows strong cyclical patterns (Figure 2.15). The cyclical behavior of the unemployment rate trend is expected, as unemployment tends to move up during recessions and decline during the expansionary phase of a business cycle. The fed funds rate trend, on average, has declined over time (Figure 2.16). The long-run trend of productivity (Figure 2.17) has moved upward over time.13 The rate of growth accelerated during the mid-1990s, although productivity has slowed since the late 2000s.


Figure 2.15 H-P Filter–Based Long-Run Trend of Unemployment Rate

Source: U.S. Department of Labor


Figure 2.16 H-P Filter–Based Long-Run Trend of Fed Funds Rate

Source: Federal Reserve Board


Figure 2.17 H-P Filter–Based Long-Run Trend of Productivity

Source: U.S. Department of Labor

In sum, these four graphs show that underlying economic series have different behavior during various time periods. That is an indication of dynamic economic adjustment in contrast to a frictionless series, which would show a more consistent behavior over time.

Let’s Put Statistics to Work: Does Volatility Differ Over Time?

If a series exhibits different behavior in different subsamples, then that series shows evidence of dynamic adjustment. How can we measure the behavior of a series for subsamples? When we say behavior has changed over time, it implies the average growth rate and/or volatility (stability ratio) has changed over time. We can calculate standard statistics (mean, standard deviation, and stability ratio) to examine the behavior of a series over time (Table 2.1). The average growth rate of employment during 1990–2015 was 1.0 percent, the standard deviation was 1.7 percent, and the stability ratio (the standard deviation as a percentage of the mean) was 162.7. A stability ratio above 100 indicates a volatile series, as the standard deviation is greater than the mean. Put differently, in some periods, deviations from the average growth rate (1.0 percent) were larger than the mean itself. We can then divide employment growth into subsamples, including the pre–Great Recession (2000–2007) and post-recession (2009–2015) eras. The 1990s experienced high and stable employment growth, as the highest mean and lowest stability ratio were seen in that decade. The average growth rates for the 2000–2007 and 2009–2015 periods were similar, but the post-recession era (2009–2015) showed higher volatility, with a stability ratio of 271.1. The 2000–2015 period saw the slowest employment growth, on average, and the highest stability ratio (the most volatile employment growth). Overall, employment growth showed different behavior in terms of the mean and stability ratio in the various subsamples.

Two other key indicators of the U.S. labor market, the unemployment rate and average hourly earnings, also showed signs of dynamic change over time (Table 2.1). The post–Great Recession period observed the highest unemployment rate along with the lowest wage growth, on average. For all three measures of the labor market, the 2000–2015 period was most volatile as stability ratios are highest for that time period. In sum, all three variables show different behavior for different subsamples.

Table 2.1 Change Over Time in Unemployment Rate and Average Hourly Earnings

Variable                    1990-2015             1990-1999             2000-2007             2009-2015             2000-2015
                         Mean  S.D.   S.R.     Mean  S.D.   S.R.     Mean  S.D.   S.R.     Mean  S.D.   S.R.     Mean  S.D.   S.R.
Nonfarm Payrolls (YoY)    1.0   1.7  162.7      1.8   1.3   69.4      0.8   1.1  139.3      0.7   2.0  271.1      0.5   1.7  328.6
Unemployment Rate         6.1   1.6   25.7      5.8   1.0   18.2      5.0   0.7   13.6      8.1   1.4   17.0      6.4   1.8   28.3
Avg. Hourly Earn. (YoY)   3.0   0.8   26.2      3.2   0.6   19.4      3.2   0.7   22.8      2.1   0.4   18.5      2.9   0.8   29.5

Note: S.D. = standard deviation; S.R. = stability ratio (standard deviation as a percent of the mean).
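A minimal sketch of how statistics like those in Tables 2.1 and 2.2 can be computed with pandas follows; the input series and the exact date boundaries are assumptions for illustration:

```python
import pandas as pd

def subsample_stats(series, periods):
    """Mean, standard deviation, and stability ratio (S.D. as a percent of the mean)
    for each labeled subsample of a time series with a DatetimeIndex."""
    rows = {}
    for label, (start, end) in periods.items():
        sub = series.loc[start:end]
        mean, sd = sub.mean(), sub.std()
        rows[label] = {"Mean": mean, "S.D.": sd, "Stability Ratio": 100.0 * sd / mean}
    return pd.DataFrame(rows).T.round(1)

periods = {
    "1990-2015": ("1990-01-01", "2015-12-31"),
    "1990-1999": ("1990-01-01", "1999-12-31"),
    "2000-2007": ("2000-01-01", "2007-12-31"),
    "2009-2015": ("2009-01-01", "2015-12-31"),
    "2000-2015": ("2000-01-01", "2015-12-31"),
}

# Hypothetical usage with a monthly year-over-year payroll growth series:
# print(subsample_stats(payroll_growth_yoy, periods))
```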

We also calculate the mean, standard deviation, and stability ratio for productivity growth to analyze its behavior (Table 2.2). The post–Great Recession era (2009–2014) showed the slowest productivity growth, on average, and the pre-recession period (1996–2007) saw the highest average growth rate. The highest stability ratio was recorded for the 1973–1995 period, which marked the most volatile period for productivity growth. Overall, our findings for productivity growth are consistent with those for the three labor market variables: they show signs of dynamic adjustment that may reflect frictions in the real economy, frictions that would upset the smooth move to a new market equilibrium assumed in many traditional economic models.

Table 2.2 Mean, Standard Deviation, and Stability Ratio

Variable              1973-2014              1973-1995              1996-2007              2009-2014              1996-2014
                   Mean  S.D.   S.R.      Mean  S.D.    S.R.     Mean  S.D.   S.R.      Mean  S.D.    S.R.     Mean  S.D.   S.R.
Productivity (YoY) 1.85  1.66  89.86      1.53  1.75  114.67     2.70  1.22  45.30      1.51  1.57  103.94     2.23  1.46  65.40

Note: S.D. = standard deviation; S.R. = stability ratio (standard deviation as a percent of the mean).

Are Markets’ Behaviors Frictionless in the Long Run? Mean Reversion

Does dynamic adjustment in the economy indicate a return to a steady equilibrium or is there evidence of market frictions that prevent such a return? Keynes said that the long-run concept is a misleading guide to current affairs. We can rephrase the question: Is there a long-run average growth rate? Does the series move around that average and return to that average over the long run? Basically, we can test whether a variable is mean reverting (frictionless) in the sense that the variable moves around its mean growth rate and deviations from that average are temporary in the long run. In the next step, we test whether measures of the labor market exhibit mean-reverting behavior. That is, for example, does employment growth move around an average value over time and are deviations from the average values temporary?

Two tests are utilized to determine whether a series is mean reverting: first, a test for a structural break in the series, and second, a unit root test.14 If we find evidence of a structural break in a series, then that series is not mean reverting because the series’ behavior (mean and/or standard deviation) is different in the pre- and post-break eras. If there is no break in a series and the Augmented Dickey-Fuller (ADF) unit root test indicates the series is stationary, then that would suggest the series is mean reverting. The structural break and ADF test results are presented in Table 2.3.

Table 2.3 Identifying a Structural Break Using the State-Space Approach

Employment (Not Mean Reverting)
Break Date Type of Break Coefficient
Mar-94 Level Shift 0.47
Feb-00 Additive Outlier −0.28  
Mar-10 Level Shift 0.47
Average Hourly Earnings (Not Mean Reverting)
Break Date Type of Break Coefficient
Apr-90 Additive Outlier −0.49
Jan-89 Level Shift  0.52
Jul-85 Additive Outlier −0.40
The Unemployment Rate (Not Mean Reverting)
Break Date Type of Break Coefficient
Nov-10 Additive Outlier 0.45
Jan-86 Additive Outlier −0.4  
Dec-08 Level Shift 0.45
Productivity Growth (Not Mean Reverting)
Break Date Type of Break Coefficient
Jan-02 Additive Outlier  2.14
Jan-82 Additive Outlier −1.80
Jan-93 Level Shift −2.54

Our results indicate that employment growth experienced structural breaks during the 1990–2015 period (identified in Table 2.3 as level shifts in the middle column), which implies that employment growth is not mean reverting. This result also suggests that the dynamic adjustment in the labor market faces a set of frictions such that the pace of employment growth does not return to the same trend over time in our sample period. A similar conclusion holds for wage growth, the unemployment rate, and productivity growth. Overall, the four variables show evidence of structural breaks (they are not mean reverting), and therefore these variables do not move around an average value over time. This finding is consistent with Keynes’s notion that the long-run average concept is a misleading guide to current affairs.
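A minimal sketch of the unit root half of this procedure, using the Augmented Dickey-Fuller test from statsmodels, is shown below; the input series is an assumption, and the state-space structural break detection is a separate step not shown here:

```python
from statsmodels.tsa.stattools import adfuller

def adf_summary(series, name):
    """Run the Augmented Dickey-Fuller test and report whether a unit root is rejected."""
    stat, pvalue, usedlag, nobs, crit_values, icbest = adfuller(series.dropna(), autolag="AIC")
    verdict = ("stationary (consistent with mean reversion)" if pvalue < 0.05
               else "unit root not rejected (not mean reverting)")
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.3f} -> {verdict}")

# Hypothetical usage with a monthly year-over-year employment growth series:
# adf_summary(payroll_growth_yoy, "Nonfarm payroll growth (YoY)")
```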

Moving Beyond the Labor Market: Not All Frictions Are Equal

An economy consists of several sectors (markets), and those markets may perform differently over time. In addition, individual markets may react to a shock (or to a recession) differently. Here, we apply the H-P filter to variables representing several major sectors of the U.S. economy. The long-run trend, along with the log of housing starts, a proxy for the housing sector, is plotted in Figure 2.18. The housing starts trend bottomed out in 2010, well after the official end date of the Great Recession. Furthermore, the current level of the trend is significantly below the pre-recession peak, which confirms a slower recovery in the housing sector compared to its prior history.


Figure 2.18 H-P Filter–Based Long-Run Trend of Housing Starts

Source: U.S. Department of Commerce

The long-run trend of the trade-weighted broad dollar index (Figure 2.19), a proxy for the foreign exchange market, shows a completely different behavior than the housing starts series. That is, the dollar’s long-run trend peaked in 2001 and continued its downward trend through 2012. Basically, the dollar trend did not show any significant change during the Great Recession.


Figure 2.19 H-P Filter–Based Long-Run Trend of Broad Dollar Index

Source: Federal Reserve Board

The Great Recession did affect consumer sentiment, as the long-run trend of the consumer confidence index (Figure 2.20) dropped to its lowest level since 1985. However, the current level of the trend is fairly close to the pre-recession peak, which suggests a solid recovery in consumer sentiment. The recovery on the production side was different from the recoveries in housing and consumer sentiment. The long-run trend of industrial production (Figure 2.21), a proxy for real output, is presently at its highest level in our sample period, which starts in 1985. In sum, these variables, representing major sectors of the economy, showed different behavior and different reactions to the Great Recession. This finding sheds light on the different types of frictions that influence dynamic adjustment over time and shows that not all sectors of the economy reacted to the Great Recession in the same way.


Figure 2.20 H-P Filter–Based Long-Run Trend of Consumer Confidence

Source: The Conference Board, Inc. Reprinted with permission of The Conference Board. For more information, please see www.conference-board.org. Consumer Confidence Index is a registered trademark of The Conference Board, Inc.


Figure 2.21 H-P Filter–Based Long-Run Trend of Industrial Production

Source: Federal Reserve Board

How Volatile Are Some Major Sectors of the U.S. Economy?

To measure volatility, we provide the mean, standard deviation, and stability ratio for some of the major sectors of the U.S. economy in Tables 2.4 and 2.5. To represent the credit/U.S. Treasury market, the U.S. 10-year Treasury yield and the federal funds target rate are examined. The lowest average levels of both the 10-year yield and the fed funds rate occur in the post–Great Recession era (2009–2015). For the 2000–2015 period, the stability ratio for the fed funds rate was above 100, indicating a very volatile period for the fed funds rate despite the low level of the policy rate. The growth rate of the dollar index was very volatile for the complete period as well as for the subsamples, as the smallest (absolute) stability ratio was 442.7. Housing starts and the consumer confidence index were also very volatile, as their lowest stability ratios were 182.9 and 136.2, respectively. S&P 500 returns, in contrast, were very stable during the 1990s, with a stability ratio of 74.2.

Table 2.4 Mean, Standard Deviation, and Stability Ratio for Major Sectors

Variable                   1990-2015              1990-1999             2000-2007              2009-2015             2000-2015
                       Mean  S.D.     S.R.     Mean  S.D.    S.R.    Mean  S.D.     S.R.    Mean  S.D.    S.R.    Mean  S.D.     S.R.
10-Year Treasury        4.9   1.8     36.7      6.7   1.1    15.8     4.7   0.7     14.1     2.6   0.7    25.3     3.8   1.2     31.4
Fed Funds               3.2   2.4     72.7      5.1   1.4    26.7     3.4   1.9     55.1    0.25   0.0     0.0     2.0   2.1    101.7
Dollar (YoY)           −0.1   5.0 −4,069.0      0.8   4.5   578.3    −1.0   4.5   −442.7    −0.7   4.7  −711.5    −0.7   5.2   −731.2
Housing Starts (YoY)    0.6  18.4  2,830.9      2.8  14.3   504.4    −1.6  13.7   −871.5    10.2  18.6   182.9    −0.8  20.6 −2,638.1
Consumer Conf. (YoY)    2.1  23.9  1,130.6      3.9  22.0   560.9    −1.9  16.5   −853.9    16.2  22.0   136.2     0.9  25.1  2,690.0

Note: S.D. = standard deviation; S.R. = stability ratio (standard deviation as a percent of the mean).

Table 2.5 Mean, Standard Deviation, and Stability Ratio for Major Sectors

Variable                        1990-2015           1990-1999           2000-2007           2009-2015           2000-2015
                             Mean  S.D.   S.R.   Mean  S.D.   S.R.   Mean  S.D.   S.R.   Mean  S.D.   S.R.   Mean  S.D.   S.R.
S&P 500 (YoY)                 8.9  16.5  185.5   15.8  11.7   74.2    2.5  14.8  596.6   14.3  11.9   83.4    4.3  17.5  404.0
Industrial Production (YoY)   2.2   4.1  184.0    3.7   2.6   70.9    1.5   2.6  164.7    2.9   3.9  134.4    1.2   4.5  370.9
ISM-M                        51.9   5.0    9.7   51.6   4.5    8.8   52.0   4.8    9.2   54.6   2.7    5.0   52.1   5.3   10.2
M2 Money Supply               3.3   2.9   87.6    1.6   2.8  169.0    3.8   2.0   52.1    4.7   2.4   50.7    4.3   2.4   54.8
PCE Def. (YoY)                2.1   1.0   46.9    2.3   1.0   43.2    2.3   0.6   28.2    1.5   0.8   53.6    2.0   1.0   48.8

Note: S.D. = standard deviation; S.R. = stability ratio (standard deviation as a percent of the mean).

The pre–Great Recession era (2000–2007) was the most volatile period for S&P 500 returns, as the stability ratio was 596.6. The 2000–2015 period reported the highest stability ratios for industrial production and the ISM manufacturing index. The post–Great Recession era observed the highest average growth rate of the money supply and the highest stability ratio for the PCE deflator. Overall, consistent with the labor market analysis, these major sectors also experienced different behavior in different subsamples, thereby indicating that frictions acted upon the behavior of each of these variables in different ways and at different times.

Are These Sectors Mean Reverting?

In the next step, we test whether these major series were mean reverting. We found that the U.S. 10-year Treasury, fed funds rate, and dollar index were not mean reverting (Table 2.6).

Table 2.6 Identifying a Structural Break Using the State-Space Approach

10-Year Treasury (Not Mean Reverting)
Break Date Type of Break Coefficient
Dec-08 Level Shift −0.95
May-00 Additive Outlier   0.44
Nov-87 Level Shift −0.68
Fed Funds Rate (Not Mean Reverting)
Break Date Type of Break Coefficient
Jan-08 Level Shift −1.23
Jan-01 Level Shift −1.02
Dec-08 Level Shift −0.75
Dollar (Not Mean Reverting)
Break Date Type of Break Coefficient
Oct-08 Level Shift   4.22
Oct-09 Level Shift −4.72
Oct-85 Additive Outlier −2.26
Consumer Confidence (Mean Reverting)
Break Date Type of Break Coefficient
Oct-12 Additive Outlier 39.20
Feb-11 Additive Outlier 36.78
Mar-10 Additive Outlier 30.12
Housing Starts (Mean Reverting)
Break Date Type of Break Coefficient
Apr-10 Additive Outlier 34.46
Mar-94 Additive Outlier 30.78
Jan-92 Additive Outlier 30.50

The growth rates of consumer confidence and housing starts were mean reverting in our sample period (Table 2.6). However, these two series were volatile and our econometric analysis found several outliers in each series. The ISM manufacturing index was also mean reverting, with possible volatile behavior (Table 2.7). The growth rates of money supply and the PCE deflator, along with S&P 500 returns, are not mean reverting (Table 2.7).

Table 2.7 Identifying a Structural Break Using the State-Space Approach

The S&P 500 (Not Mean Reverting)
Break Date Type of Break Coefficient
Oct-09 Level Shift 18.61
Jan-92 Additive Outlier 11.73
Mar-10 Additive Outlier 11.29
Industrial Production (Not Mean Reverting)
Break Date Type of Break Coefficient
Sep-09 Level Shift   4.30
Sep-08 Level Shift −4.27
ISM-Manufacturing (Mean Reverting)
Break Date Type of Break Coefficient
Oct-01 Additive Outlier −4.33
May-11 Additive Outlier −4.26
Jun-96 Additive Outlier   4.23
Money Supply-M2 (Not Mean Reverting)
Break Date Type of Break Coefficient
Sep-01 Additive Outlier 1.73
Dec-08 Level Shift 1.89
PCE Deflator (Not Mean Reverting)
Break Date Type of Break Coefficient
Nov-08 Level Shift −1.11
Sep-06 Level Shift −1.03
Sep-11 Additive Outlier −0.55

Summing up, we have characterized 14 different variables, representing major sectors of the U.S. economy, and only 3 of them (consumer confidence, housing starts, and the ISM manufacturing index) turned out to be mean reverting. The remaining 11 variables were not mean reverting. This indicates that the long-run average of many economic series can be a misleading guideline for decision making when projecting future values of these indicators.

Are There Benefits to Identifying the Existence of Possible Frictions in the Dynamic Adjustment of Economic Series?

Several major sectors of the U.S. economy were analyzed to verify the notion that the long-run average is a misleading guide for the future. Our econometric analysis found that many series (e.g., employment, productivity, dollar index, S&P 500, 10-year Treasury, and money supply) were not mean reverting and experienced structural breaks, evidence of frictions in the dynamic adjustment process.

The findings imply that if decision makers assume that these series move around an average value over time (follow a frictionless behavior over time), they assume incorrectly. Furthermore, evidence of structural breaks provides caution that future behavior of these series may be different than in the past. In addition, econometric analysis using traditional tools (e.g., ordinary least squares [OLS]) would provide misleading analysis and forecasts. In sum, decision makers should not put heavy weight on the past average behavior of these variables as predictors of future values without further testing.

MODELING DYNAMIC ADJUSTMENT DUE TO ECONOMIC FRICTIONS: DECISION MAKING IN AN EVOLVING WORLD

The art of economics consists in looking not merely at the immediate but at the longer effects of any act or policy; it consists in tracing the consequences of that policy not merely for one group but for all groups.

—Henry Hazlitt

Christina Romer once said, “There’s a joke in economics about the drunk who loses his keys in the street but only looks for them under the light posts. When asked why, he says, ‘because that’s where the light is.’” Hazlitt’s statement and Romer’s (insightful) joke shed light on issues related to the limits of conventional econometric tools and hence the opportunity to improve decision making.

First, the total effect of a policy change is typically distributed over a prolonged period, and we should not estimate or expect the impact of a policy change to appear within just one period. Second, a policy change may produce heterogeneous effects among the markets (sectors) of an economy. Globally, the effect of a policy change in one country may sometimes spill over into other economies, as is characteristic of U.S. monetary policy. Third, a policy change may produce short- and long-run effects, which may differ from one another. Fourth, we want to stress that the effect of a policy change on markets (e.g., raising the fed funds target rate) may differ across time periods because relationships between economic/financial variables evolve over time. The fifth and final point we want to highlight in this section is that the frictionless assumption (looking for keys only under the light posts) could pose serious issues for effective decision making and evaluation.

In previous sections, we have discussed issues related to the frictionless assumption in the dynamic adjustment process and how to identify the existence of possible frictions. This final section provides a guide to identifying a change in economic variables that allow for the existence of economic frictions that influence effective decision making. In particular, we estimate the effect of a policy change on a sector (market) and then determine whether the effect is heterogeneous for multiple markets. In effect, we describe how to search beyond the “light posts.”

To anticipate our results, in our first case study we estimate the effect of a one percentage point increase in employment growth on key labor market indicators. One result we find is that the largest effect was noted for the unemployment rate (a drop of 0.2 percentage points). Second, the change in the fed funds rate produced a heterogeneous effect for multiple markets, ranging from the largest change of 0.12 percentage points in the PCE deflator to no meaningful change in the S&P 500 and the growth rate of housing starts. Third, the effect of a change in interest rates is different during different time periods, which suggests that past benchmarks of policy effectiveness need to be reevaluated.

Finally, our econometric analysis found that the conventional relationship between gross domestic product (GDP) and the unemployment rate (Okun’s Law) is not stable—therefore, the relationship cannot be utilized as a guide (without further investigation). For decision makers in an ever-evolving world, one must go beyond the light posts to search for “keys” (reliable results) to effective decision making.

Estimating the Distributed Effect of a Policy Change: Impulse Response Functions

How might we estimate the effect of a change in the fed funds rate on different real sectors of the economy—for example, the labor and output markets and housing? To answer this question, we turn to the vector autoregression (VAR) modeling methodology.15 The beauty of VARs is that they are simple statistical representations of economic systems, as they rely only on the variables that comprise the system and a few lagged values of those variables. In addition, VARs can be “shocked” to show how all the variables respond to a change in one of the other variables. The responses of the variables over time to a change in the “shocked” variable are called impulse response functions (IRFs).16

Furthermore, we can approximate the total effect of a change in the funds rate on the other real variables of interest even when the impact is distributed over a prolonged period of time. Therefore, we estimate the effect of a change in the fed funds rate in the current month on the unemployment rate, inflation, output, and the housing market over the next 12 months.
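A minimal sketch of this workflow using the VAR implementation in statsmodels follows; the file name, column names, lag selection, and 12-month horizon are assumptions for illustration rather than the exact specification estimated here:

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical input: a monthly DataFrame with columns such as
# ["fed_funds", "pce_inflation", "unemployment", "sp500_ret", "housing_starts", "ind_prod"].
data = pd.read_csv("macro_monthly.csv", index_col=0, parse_dates=True)

model = VAR(data)
results = model.fit(maxlags=12, ic="aic")   # lag length chosen by information criterion

# Orthogonalized impulse responses over a 12-month horizon.
irf = results.irf(12)
irf.plot(impulse="fed_funds", orth=True)    # responses of all variables to a fed funds shock
```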

What Would Be the Reaction to a Change? A Single-Market Case

Our first application focuses on just one sector, which is the labor market. We estimate the effect of a 1 percentage point increase in employment growth on the unemployment rate, labor force participation rate, and average hourly earnings. This example is a simple one, as it only shows the effect for the labor market and not for other markets. The increase in employment growth is associated with a reduction in the unemployment rate, with the largest drop appearing during the second month, a drop of 0.2 percentage points (Figure 2.22). The rise in employment growth boosts the growth rate in labor force participation by 0.09 percentage points in the first month, and that is the largest change in the participation rate (Figure 2.23). So there is some evidence for the view that employment gains are associated with a rise in participation rates. Average hourly earnings show a negative growth rate (with the largest drop of 0.1 percentage point for the first month) in response to the employment growth increase (Figure 2.24). The drop in the earnings growth rate may suggest that a rise in the employment growth rate boosts participation rates, which put downward pressure on earnings growth as new, less skilled or experienced workers are drawn into the labor force at lower wages. Typically, during the first year of job expansions, workers with less experience and training reenter the job market.


Figure 2.22 Unemployment Rate: 1983–2015


Figure 2.23 Labor Force Participation Rate: 1983–2015


Figure 2.24 Average Hourly Earnings: 1983–2015

In sum, an increase in the employment growth rate affects other key elements of the labor market, and the largest response to the employment growth change is noted for the unemployment rate. In addition, the total effect of an increase in employment on the unemployment rate is 0.97 percentage points (the sum of the declines in the unemployment rate over the 12-month period), compared with 0.38 percentage points for the labor force participation rate and 0.51 percentage points for the average hourly earnings series. The total impact on the unemployment rate is thus larger than the combined effect on the labor force and earnings series, which is another way of saying the effect is heterogeneous. Therefore, decision makers should estimate the possible impact of a change in one variable on each variable of interest, because the impact can be heterogeneous, in contrast to the common assumption that the adjustment process to any exogenous change is uniform within a single market.
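The “total effect” reported here is simply the sum of the period-by-period impulse responses. With an IRF object like the one in the earlier sketch, estimated for a labor-market VAR with hypothetical column names, it could be computed as:

```python
# Hypothetical labor-market VAR columns: ["employment_growth", "unemployment",
# "participation", "ahe_growth"]; labor_irf is the result of results.irf(12) for that system.
cols = ["employment_growth", "unemployment", "participation", "ahe_growth"]
i_resp, i_imp = cols.index("unemployment"), cols.index("employment_growth")

# Sum the response of the unemployment rate in months 1 through 12 to an
# employment-growth shock (orth_irfs has shape [horizon + 1, response, impulse]).
total_effect = labor_irf.orth_irfs[1:13, i_resp, i_imp].sum()
print(f"Total 12-month response of the unemployment rate: {total_effect:.2f} percentage points")
```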

Does a Change in the Fed Funds Rate Matter? Heterogeneous Reaction among Markets

An economy comprises many major markets and, therefore, the reactions to any policy change may differ in timing and size across markets. Here we build a model that includes information from six major sectors of the U.S. economy. The six sectors are interest rates/credit markets (fed funds target rate as the proxy), prices/inflation (PCE deflator), the labor market (unemployment rate), the financial/equity market (S&P 500 index), the housing sector (housing starts), and output (industrial production).

During midsummer 2015, most commentators expected that the FOMC would raise the target for the fed funds rate in the near future. An important question for analysts is: What is the likely effect of a fed funds rate hike on the major sectors of the economy? Using a data set spanning 1983–2015, we estimate the effect of a 1 percentage point increase in the fed funds rate on the remaining five indicators of the chosen major sectors of the economy.17

First, the hike in the fed funds rate is not associated with a drop in inflation, at least for the first couple of months (Figure 2.25). However, the PCE deflator does show nonpositive responses for months 3 through 12 of our horizon. One major reason for the initially positive relationship with the PCE deflator is that the FOMC usually raises the fed funds target rate during expansions (the later part of the recovery/expansion cycle), and that phase of the business cycle is usually associated with rising inflation. Second, a rise in the fed funds rate is also associated with falling unemployment (Figure 2.26). This result is not so surprising given that the FOMC typically raises rates during expansions, and the unemployment rate also tends to fall during those same expansions.


Figure 2.25 PCE Deflator: 1983–2015


Figure 2.26 Unemployment Rate: 1983–2015

Third, the change in the fed funds rate does not produce a meaningful effect on S&P 500 returns (Figure 2.27) or on housing starts (Figure 2.28). The changes in both sectors are approximately zero for all 12 months. Finally, the change in the growth rate of industrial production is positive for all 12 months in response to a fed funds rate hike (Figure 2.29). Overall, a fed funds rate hike is associated with a heterogeneous effect among different markets, ranging from the largest change of +0.12 percentage points for the PCE deflator to no change at all for the S&P 500 and housing starts.

Graph shows the S&P 500 response, approximately constant at zero over the 12-month horizon.

Figure 2.27 S&P 500: 1983–2015

Graph shows the housing starts response, constant at zero over the 12-month horizon.

Figure 2.28 Housing Starts: 1983–2015

Graph shows the industrial production response, starting near +0.1 percentage points, declining steeply in the second month and gradually from the third month onward.

Figure 2.29 Industrial Production: 1983–2015

Is It All about the Base Period? The Lucas Critique

For effective decision making, we must make sure the results/conclusions are consistent between subsamples, which is the essence of the so-called Lucas Critique.18 Put differently, the implied conclusion should not change with a change in the base of the sample period (the starting or ending point of the sample). To test the robustness of our results, we estimate the effect of fed funds rate hikes on the five major series identified earlier using the 1983–2005 period. We choose an ending point in 2005 because it falls approximately in the middle of the previous expansion. Results are reported in Figures 2.30 through 2.34. We do note a change in magnitude for the PCE deflator (Figure 2.30) and the unemployment rate (Figure 2.31). The largest change was for the PCE deflator, which jumped to 0.19 percentage points from the 0.12 percentage points based on the 1983–2015 period. The 1983–2005 period also showed a larger drop in the unemployment rate (0.17 percentage points) compared with a drop of 0.10 percentage points for the 1983–2015 period. The response of industrial production was stronger, at least for the first several months, in the 1983–2005 period than in the 1983–2015 period (Figure 2.34). The overall conclusions, however, are similar for both time periods: the responses of inflation, the labor market, and output to a fed funds hike are meaningful, while the hike has no noticeable effect on the growth rate of housing starts or on S&P 500 returns (Figures 2.32 and 2.33).
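A minimal way to run this kind of base-period check, assuming the same hypothetical data set and column names as in the sketch above, is to re-estimate the VAR over several sample windows and compare the resulting impulse responses:

import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("macro_monthly.csv", index_col=0, parse_dates=True)

for end in ["2005-12-31", "2008-12-31", "2015-12-31"]:
    sub = df.loc["1983-01-01":end]
    irf = VAR(sub).fit(maxlags=12, ic="aic").irf(12)
    i_resp = sub.columns.get_loc("pce_deflator")   # responding series
    i_shock = sub.columns.get_loc("fed_funds")     # shocked series
    # Month-by-month response of the PCE deflator to a fed funds shock
    print(end, irf.irfs[:, i_resp, i_shock].round(3))

If the printed response paths change materially with the end date, the conclusions are sensitive to the base period.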

Graph shows the PCE deflator response, starting near 0.2 percentage points, declining from the second month, and rising from the fourth month onward over the 12-month horizon.

Figure 2.30 PCE Deflator: 1983–2005

Graph shows the unemployment rate response, reaching approximately −0.1 percentage points with a steep decline in the second month, then gradually rising from the fourth month onward.

Figure 2.31 Unemployment Rate: 1983–2005

Graph shows the S&P 500 response, approximately constant at zero over the 12-month horizon.

Figure 2.32 S&P 500: 1983–2005

Graph shows the housing starts response, constant at zero over the 12-month horizon.

Figure 2.33 Housing Starts: 1983–2005

Graph shows the industrial production response, starting near +0.1 percentage points and declining from the second month onward over the 12-month horizon.

Figure 2.34 Industrial Production: 1983–2005

Graph shows the unemployment rate response, reaching approximately −0.1 percentage points with a steep decline in the second month, then gradually rising from the fourth month onward.

Figure 2.35 Unemployment Rate: 1983–2005

Graph shows the labor force participation rate response, declining in the second month, rising in the third, declining again in the fourth, and rising in the fifth.

Figure 2.36 Labor Force Participation Rate: 1983–2005

Graph shows the average hourly earnings response, starting near −0.1 percentage points and rising to positive values in the second month over the 12-month horizon.

Figure 2.37 Average Hourly Earnings: 1983–2005

In addition, we conduct another analysis using the 1983–2008 time span. The logic behind this subsample is that the ending points of our previous two analyses fell in expansions; ending the sample period in 2008 gives us an opportunity to estimate the effect of a change in the fed funds rate on the variables during a recession. Since the impulse response functions (IRFs) are linear, the results can also be interpreted for a drop in the fed funds rate by changing the sign of the estimated response. For example, a 1 percentage point increase in the fed funds rate is associated with a 0.12 percentage point increase in the PCE deflator growth rate, so a 1 percentage point drop in the fed funds rate would reduce PCE inflation by 0.12 percentage points. This reading of the 1983–2008 results in terms of a rate cut is relevant because the FOMC typically reduces the target for the fed funds rate to combat recessions. Results are shown in the appendix of this chapter. The conclusions from the other two samples hold for this time period as well.
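In symbols (our notation), the linearity of the impulse responses means that the reaction to a negative shock is simply the mirror image of the reaction to a positive shock of the same size:

\text{IRF}_y(h;\,-\varepsilon) \;=\; -\,\text{IRF}_y(h;\,+\varepsilon)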

Therefore, the conclusions do not change across the three sample periods, which indicates that our results are robust. Furthermore, we end our samples in both expansionary and contractionary phases of the business cycle, and our conclusions still hold, which may satisfy the Lucas Critique requirement for a policy recommendation.

Does the Base Period Matter for the Labor Market Analysis?

We also test the robustness of the labor market analysis and estimate the effect of a 1 percentage point increase in employment growth on the unemployment rate, labor force participation rate, and average hourly earnings using the 1983–2005 subsample. Results are shown in Figures 2.35 through 2.37. The results lead to conclusions similar to those for the 1983–2015:Q4 period: a positive shock to employment growth leads to a decline in the unemployment rate, a pickup in labor force participation, and a slight decline in average hourly earnings. In addition, we rerun our model using the 1983–2008 period to see if our conclusions hold when the sample ends in a recessionary period, and we find that they do; see the appendix for results.

It is important to note that, although results in both case studies are consistent across these subsamples, this consistency does not necessarily hold for all cases/subsamples. Therefore, before we make policy recommendations, we should test and reconfirm our results using different samples/subsample periods.

Is Okun’s Law Still Valid? Searching beyond the Light Posts

Unfortunately, some decision makers utilize economic/financial heuristic guidelines as benchmarks without reconfirming the validity of these theories with the data. In our view, they are searching for “keys under the light posts.” Economies evolve over time and the relationships between variables also evolve. Therefore, for effective decision making, it is crucial to retest/reconfirm the underlying relationship suggested by a theory before using that theory as a guide. In other words, we must search beyond the light posts to find the “keys.” One important application could be Okun’s Law, which suggests a relationship between GDP and the unemployment rate.19 That is, a boost in GDP growth rates would help to reduce the unemployment rate. In our view, before decision makers utilize Okun’s Law as a guide, they must test the causal relationship. What is the direction of the relationship? Is GDP growth causing (leading) unemployment or vice versa?20

The Granger causality test is a useful tool for determining the causal relationship between variables of interest. Results based on the Granger causality test are reported in Table 2.8. The GDP growth rate Granger-causes the unemployment rate using the 1983–2015:Q1 data set (Table 2.8, Box A); that is, the GDP growth rate is a useful predictor of the unemployment rate. However, the unemployment rate is not a useful predictor of the GDP growth rate in our sample period.
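A minimal sketch of this type of test in Python, using the grangercausalitytests function in statsmodels, is shown below. The file name, column names ("unemployment", "gdp_growth"), and the four-lag choice are our illustrative assumptions, not necessarily the authors' specification.

import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

q = pd.read_csv("macro_quarterly.csv", index_col=0, parse_dates=True)

# statsmodels convention: the test asks whether the SECOND column
# Granger-causes the FIRST column.

# Does real GDP growth Granger-cause the unemployment rate?
grangercausalitytests(q[["unemployment", "gdp_growth"]], maxlag=4)

# Does the unemployment rate Granger-cause real GDP growth?
grangercausalitytests(q[["gdp_growth", "unemployment"]], maxlag=4)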

Table 2.8 Testing the Causal Relationship: The Granger Causality Test

Entries are p-values from the Granger causality test; a column heading "X → Y" reports the test of the null hypothesis that X does not Granger-cause Y.

Box   Time Period        Real GDP → Unemployment Rate   Unemployment Rate → Real GDP
A     1983–2015:Q1       0.00*                          0.34
B     1983–2007:Q3       0.00*                          0.07***
C     1990–2015:Q1       0.00*                          0.78
D     1990–2007:Q3       0.00*                          0.23
E     2000–2015:Q1       0.00*                          0.27
F     2009:Q3–2015:Q1    0.79                           0.68

* Significant at 1 percent, ** Significant at 5 percent, *** Significant at 10 percent

Before we jump to a conclusion, we need to test the robustness of the results. We rerun the Granger causality analysis between GDP growth and the unemployment rate using the 1983–2007:Q3 period (the pre–Great Recession era; results in Box B). We find a two-way Granger causality relationship, which implies that GDP Granger-causes the unemployment rate and the unemployment rate also Granger-causes GDP. This is certainly a different result from the one based on the 1983–2015:Q1 period (Table 2.8, Box A). These different results lead us to two different conclusions for the different samples and raise questions about the reliability of Okun's Law over time.

To retest our results, we run another analysis using the 1990–2015:Q1 period. We utilize this alternate sample period because the last three economic recoveries are considered “jobless” recoveries, which may have introduced a structural break in the unemployment rate/GDP relationship. Therefore, testing the relationship between the two variables over the post-1990 period is crucial for effective decision making. The results (Box C) suggest that GDP growth Granger-causes unemployment but that unemployment does not Granger-cause GDP. We also run the Granger causality test using the 1990–2007:Q3 period (pre–Great Recession era), and the results (Box D) suggest that Granger causality runs from GDP to the unemployment rate only. To find what happens to Okun's Law in the post–Great Recession world, we utilize the 2009:Q3–2015:Q1 period.21 Results (Box F) indicate there is no Granger causality between GDP and the unemployment rate during this latest period.
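A compact way to run the same test over several subsamples, again under our illustrative assumptions about the data file and column names, is to loop over the sample windows and collect the F-test p-values:

import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

q = pd.read_csv("macro_quarterly.csv", index_col=0, parse_dates=True)

windows = [("1983-01-01", "2015-03-31"), ("1983-01-01", "2007-09-30"),
           ("1990-01-01", "2015-03-31"), ("1990-01-01", "2007-09-30"),
           ("2000-01-01", "2015-03-31"), ("2009-07-01", "2015-03-31")]

for start, end in windows:
    sub = q.loc[start:end]
    res = grangercausalitytests(sub[["unemployment", "gdp_growth"]],
                                maxlag=4, verbose=False)
    pval = res[4][0]["ssr_ftest"][1]   # p-value of the F-test at lag 4
    print(f"{start} to {end}: GDP -> unemployment p-value = {pval:.2f}")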

Summing up, using several different samples/subsample periods, our analysis suggests that Okun's Law needs to be reevaluated and should not be utilized (without further investigation) as a guide in decision making. This is a good example of a heuristic guideline in forecasting and policy that does not stand up to standard statistical analysis.

Economies Evolve—So Must Our Evaluation

Often, economic and financial theories are utilized as a guide for decision making. In our view, theories may be utilized as a first step but must be reevaluated over time before implementation in practice. The reason is that economies evolve over time, as do relationships between economic variables. That is, the impact of a change in policy or a variable could be different across sectors as well as between time periods.

Our econometric results suggest, first, that the effect of a 1 percentage point increase in employment growth on key labor market variables indeed differs across variables and over time. Second, a change in the fed funds rate produces a heterogeneous effect across markets, ranging from the largest change of 0.12 percentage points in the PCE deflator to no meaningful change in S&P 500 returns and the growth rate of housing starts. Furthermore, the effect of a change in the fed funds rate differs across time periods, which suggests that past benchmarks of policy impacts need to be reevaluated. Finally, our econometric analysis found that the conventional relationship between GDP and the unemployment rate (Okun's Law) is not stable and cannot be utilized as a guide (without further investigation) for public policy decision making. For decision makers in an ever-evolving world, one must go beyond the light posts to search for “keys” (reliable results) to effective decision making.

DYNAMIC ECONOMIC ADJUSTMENT: AN EVOLUTION UNTO ITSELF

Economic adjustment differs across the various sectors/markets of an economy and over time. The key takeaways of our discussion are, first, that the process and speed of adjustment differ across sectors. Second, the speed of adjustment may change, becoming faster or slower over time for some markets. Third, changes in one market may affect other markets. Fourth, the relationships between different markets and variables change over time. Fifth, some markets may be in equilibrium while others are simultaneously out of equilibrium, suggesting the possibility of a partial equilibrium. Sixth and finally, because of the evolving nature of economies, the relationships between variables change over time, and decision makers should estimate the actual current relationship between variables of interest using recent data instead of relying on historical trends/averages.

Appendix

A CASE FOR THE MULTIPLE MARKETS: 1983–2008

Graph shows the PCE deflator response, starting at approximately 0.12 percentage points and falling to about −0.01 percentage points.

Figure 2.38 PCE Deflator: 1983–2008

Graph shows the unemployment rate response, reaching approximately −0.1 percentage points with a decline in the second month, then gradually rising from the fourth month onward.

Figure 2.39 Unemployment Rate: 1983–2008

Graph shows the S&P 500 response, approximately constant at zero over the 12-month horizon.

Figure 2.40 S&P 500: 1983–2008

Graph shows the housing starts response, constant at zero over the 12-month horizon.

Figure 2.41 Housing Starts: 1983–2008

Graph shows the industrial production response, starting near +0.1 percentage points, declining in the second month and gradually from the third month onward.

Figure 2.42 Industrial Production: 1983–2008

THE LABOR MARKET: 1983–2008

Graph shows the unemployment rate response, reaching about −0.1 percentage points with a fall in the second month, then gradually rising from the fourth month onward.

Figure 2.43 Unemployment Rate: 1983–2008

Graph shows the labor force participation rate response, declining in the second month, rising in the third, declining again in the fourth, and rising in the fifth.

Figure 2.44 Labor Force Participation Rate: 1983–2008

Graph shows the average hourly earnings response, starting near −0.1 percentage points, rising in the second month, and gradually rising from the fifth month onward.

Figure 2.45 Average Hourly Earnings: 1983–2008

NOTES
