

Date: Monday, 21 Jul 2014 20:11

We are still more than a week away from receiving the advance report for U.S. gross domestic product (GDP) from April through June. Based on what we know to date, second-quarter growth will be a large improvement over the dismal performance seen during the first three months of this year. As of today, our GDPNow model is reading an annualized second-quarter growth rate at 2.7 percent. Given that the economy declined by 2.9 percent in the first quarter, the prospects for the anticipated near-3 percent growth for 2014 as a whole look pretty dim.

The first-quarter performance was dominated, of course, by unusual circumstances that we don't expect to repeat: bad weather, a large inventory adjustment, a decline in real exports, and (especially) an unexpected decline in health services expenditures. Though those factors may mean a disappointing growth performance for the year as a whole, we will likely be willing to write the first quarter off as just one of those things if we can maintain the hoped-for 3 percent pace for the balance of the year.

Do the data support a case for optimism? We have been tracking the six-month trends in four key series that we believe to be especially important for assessing the underlying momentum in the economy: consumer spending (real personal consumption expenditures, or real PCE) excluding medical services, payroll employment, manufacturing production, and real nondefense capital goods shipments excluding aircraft.

The following charts give some sense of how things are stacking up. We will save the details for those who are interested, but the idea is to place the recent performance of each series, given its average growth rate and variability since 1990, in the context of GDP growth and its variability over that same period.
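The scaling described above can be sketched simply: standardize each series' recent growth against its own history since 1990, then re-express that z-score in GDP-growth units. (The function and toy numbers below are our illustration, not the authors' exact method.)

```python
import statistics

def in_gdp_units(recent_growth, series_history, gdp_history):
    """Express a series' recent growth rate in GDP-growth units:
    compute a z-score against the series' own history, then rescale
    by the historical mean and standard deviation of GDP growth."""
    z = (recent_growth - statistics.mean(series_history)) / statistics.pstdev(series_history)
    return statistics.mean(gdp_history) + z * statistics.pstdev(gdp_history)
```

On this scale, a reading near 3 means the series is growing at a pace historically associated with roughly 3 percent GDP growth.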

[Four charts: six-month trends in real PCE excluding medical services, payroll employment, manufacturing production, and real nondefense capital goods shipments excluding aircraft, each expressed in GDP-growth units]

What do we learn from the foregoing charts? Three out of four of these series appear to be consistent with an underlying growth rate in the range of 3 percent. Payroll employment growth, in fact, is beginning to send signals of an even stronger pace.

Unfortunately, the series that looks the weakest relates to consumer spending. If we put any stock in some pretty basic economic theory, spending by households is likely the most forward-looking of the four measures charted above. That, to us, means a cautious attitude is still the appropriate one. Or, to quote from a higher Atlanta Fed power:

... it will likely be hard to confirm a shift to a persistent above-trend pace of GDP growth even if the second-quarter numbers look relatively good.

This experience suggests to me that we can misread the vital signs of the economy in real time. Notwithstanding the mostly positive and encouraging character of recent data, we policymakers need to be circumspect when tempted to drop the gavel and declare the case closed. In the current situation, I feel it's advisable to accrue evidence and gain perspective. It will take some time to validate an outlook that assumes above-trend growth and associated solid gains in employment and price stability.

By Dave Altig, executive vice president and research director, and

 

Pat Higgins, a senior economist, both in the Atlanta Fed's research department

 


Author: "macroblog" Tags: "Data Releases, Economic Growth and Devel..."
Date: Friday, 18 Jul 2014 22:04

With employment trends having turned solidly positive in recent months, attention has focused on the quality of the jobs created. See, for example, the different perspectives of Mortimer Zuckerman in the Wall Street Journal and Derek Thompson in the Atlantic. Zuckerman highlights the persistently elevated level of part-time employment—a legacy of the cutbacks firms made during the recession—whereas Thompson points out that most employment growth on net since the end of the recession has come in the form of full-time jobs.

In measuring labor market slack, the part-time issue boils down to how much of the elevated level of part-time employment represents underutilized labor resources. The U-6 measure of unemployment, produced by the U.S. Bureau of Labor Statistics, counts as unemployed people who say they want to and are able to work a full-time schedule but are working part-time because of slack work or business conditions, or because they could find only part-time work. These individuals are usually referred to as working part-time for economic reasons (PTER). Other part-time workers are classified as working part-time for non-economic reasons (PTNER). Policymakers have been talking a lot about U-6 recently. See for example, here and here.

The "lollipop" chart below sheds some light on the diversity of the share of employment that is PTER and PTNER across industries. The "lolly" end of the lollipop denotes the average mix of employment that is PTER and PTNER in 2013 within each industry, and the size of the lolly represents the size of the industry. The bottom of the "stem" of each lollipop is the average PTER/PTNER mix in 2007. The red square lollipop is the percent of all employment that is PTER and PTNER for the United States as a whole. (Note that the industry classification is based on the worker's main job. Part-time is defined as less than 35 hours a week.)
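As a rough sketch of the chart's inputs, the PTER/PTNER shares that locate each lollipop endpoint can be tallied from worker-level records along these lines (the record layout is our hypothetical, not the actual microdata format):

```python
from collections import defaultdict

def parttime_shares(workers):
    """workers: iterable of (industry, status) pairs, with status one of
    'FT', 'PTER', or 'PTNER' (part-time = fewer than 35 hours a week,
    classified by the worker's main job).
    Returns {industry: (pct_PTER, pct_PTNER)} -- the coordinates of one
    lollipop endpoint for a given year."""
    counts = defaultdict(lambda: {'FT': 0, 'PTER': 0, 'PTNER': 0})
    for industry, status in workers:
        counts[industry][status] += 1
    return {ind: (100 * c['PTER'] / sum(c.values()),
                  100 * c['PTNER'] / sum(c.values()))
            for ind, c in counts.items()}
```

Running the same tally on 2007 and 2013 data gives the two ends of each stem.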


The primary takeaways from the chart are:

  1. The percent of the workforce that is part time varies greatly across industries (compare for example, durable goods manufacturing with restaurants).
  2. All industries have a greater share of PTNER workers than PTER workers (for example, the restaurant industry in 2013 had 32 percent of workers who said they were PTNER and about 13 percent who declared themselves as PTER).
  3. All industries had a greater share of PTER workers in 2013 than in 2007 (all the lollipops point upwards).
  4. Most industries have a lower share of PTNER workers than in the past (most of the lollipops lean to the left).
  5. Most industries have a greater share of part-time workers (PTER + PTNER) than in the past (the increase in PTER exceeds the decline in PTNER for most industries).

Another fact that is a bit harder to see from this chart is that in 2007, industries with the largest part-time workforces did not necessarily have the largest PTER workforces. In 2013, it was more common for a large part-time workforce to be associated with a large PTER workforce. In other words, the growth in part-time worker utilization in industries such as restaurants and some segments of retail has brought with it more people who are working part-time involuntarily.

So the increase in PTER since 2007 is widespread. But is that a secular trend? If it is, then the increase in the PTER share would be evident since the recession as well. The next lollipop chart presents evidence by comparing 2013 with 2012:


This chart shows a recent general improvement. In fact, 25 of the 36 industries pictured in the chart above have experienced a decline in the share of PTER, and 21 of the 36 have a smaller portion working part-time in total. Exceptions are concentrated in retail, an industry that represents a large share of employment. In total, 20 percent of people are employed in industries that experienced an increase in PTER from 2012 to 2013. So while overall there has been a fairly widespread (but modest) recent improvement in the situation, the percent of the workforce working part-time for economic reasons remains elevated compared with 2007 for all industries. Further, many people are employed in industries that are still experiencing gains in the share that is PTER.

Why has the PTER share continued to increase for some industries? Are people who normally work full-time jobs still holding on to those part-time retail jobs until something else becomes available, has there been a shift in the use of part-time workers in those industries, or is there a greater demand for full-time jobs than before the recession? We'll keep digging.

By John Robertson, a vice president and senior economist, and

 

Ellyn Terry, a senior economic analyst, both of the Atlanta Fed's research department

 


Author: "macroblog" Tags: "Data Releases, Employment, Labor Markets..."
Date: Thursday, 10 Jul 2014 19:26

The June 18 statement from the Federal Open Market Committee opened with this (emphasis mine):

Information received since the Federal Open Market Committee met in April indicates that growth in economic activity has rebounded in recent months.... Household spending appears to be rising moderately and business fixed investment resumed its advance, while the recovery in the housing sector remained slow. Fiscal policy is restraining economic growth, although the extent of restraint is diminishing.

I highlighted the business fixed investment (BFI) part of that passage because it contracted at an annual rate of 1.2 percent in the first quarter of 2014. Any substantial turnaround in growth in gross domestic product (GDP) from its dismal first-quarter pace would seem to require that BFI did in fact resume its advance through the second quarter.

We won't get an official read on BFI—or on real GDP growth and all of its other components—until July 30, when the U.S. Bureau of Economic Analysis (BEA) releases its advance (or first) GDP estimates for the second quarter of 2014. But that doesn't mean we are completely in the dark on what is happening in real time. We have enough data in hand to make an informed statistical guess on what that July 30 number might tell us.

The BEA's data-construction machinery for estimating GDP is laid out in considerable detail in its NIPA Handbook. Roughly 70 percent of the advance GDP release is based on source data from government agencies and other data providers that are available prior to the BEA official release. This information provides the basis for what have become known as "nowcasts" of GDP and its major subcomponents—essentially, real-time forecasts of the official numbers the BEA is likely to deliver.
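A heavily simplified illustration of the idea: fit a "bridge" regression of past GDP growth on an already-released indicator, then apply the fit to the current quarter's reading. GDPNow combines many source-data series through a much richer model; this one-regressor OLS is only our sketch.

```python
def bridge_nowcast(gdp_hist, indicator_hist, indicator_now):
    """Minimal bridge-equation sketch: OLS of historical quarterly GDP
    growth on a quarterly-aggregated indicator, applied to the current
    quarter's indicator reading. Illustrative, not the GDPNow model."""
    n = len(gdp_hist)
    mx = sum(indicator_hist) / n
    my = sum(gdp_hist) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(indicator_hist, gdp_hist))
            / sum((x - mx) ** 2 for x in indicator_hist))
    alpha = my - beta * mx
    return alpha + beta * indicator_now
```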

Many nowcast variants are available to the public: the Wall Street Journal Economic Forecasting Survey, the Philadelphia Fed Survey of Professional Forecasters, and the CNBC Rapid Update, for example. In addition, a variety of proprietary nowcasts are available to subscribers, including Aspen Publishers' Blue Chip Publications, Macroeconomic Advisers GDP Tracking, and Moody's Analytics high-frequency model.

With this macroblog post, we introduce the Federal Reserve Bank of Atlanta's own nowcasting model, which we call GDPNow.

GDPNow will provide nowcasts of GDP and its subcomponents on a regularly updated basis. These nowcasts will be available on the pages of the Atlanta Fed's Center for Quantitative Economic Research (CQER).

A few important notes about GDPNow:

  • The GDPNow model forecasts are nonjudgmental, meaning that the forecasts are taken directly from the underlying statistical model. (These are not official forecasts of either the Atlanta Fed or its president, Dennis Lockhart.)
  • Because nowcasts are often based on both modeling and judgment, there is no reason to expect that GDPNow will agree with alternative forecasts. And we do not intend to present GDPNow as superior to those alternatives. Different approaches have their pluses and minuses. An advantage of our approach is that, because it is nonjudgmental, our methodology is easily replicable. But it is always wise to avoid reliance on a single model or source of information.
  • GDPNow forecasts are subject to error, sometimes substantial. Internally, we've regularly produced nowcasts from the GDPNow model since introducing an earlier version of it in an October 2011 macroblog post. A real-time track record for the model nowcasts just before the BEA's advance GDP release is available on the CQER GDPNow webpage, and will be updated on a regular basis to help users make informed decisions about the use of this tool.

So, with that in hand, does it appear that BFI in fact "resumed its advance" last quarter? The table below shows the current GDPNow forecasts:


We will update the nowcast five to six times each month following the releases of certain key economic indicators listed in the frequently asked questions. Look for the next GDPNow update on July 15, with the release of the retail trade and business inventory reports.

If you want to dig deeper, the GDPNow page includes downloadable charts and tables as well as numerical details including the model's nowcasts for GDP, its subcomponents, and how the subcomponent nowcasts are built up from both the underlying source data and the model parameters. This working paper supplies the model's technical documentation. We hope economy watchers find GDPNow to be a useful addition to their information sets.

By Pat Higgins, a senior economist in the Atlanta Fed's research department


Author: "macroblog" Tags: "Data Releases, Economic Growth and Devel..."
Date: Monday, 30 Jun 2014 17:56

A recent Policy Note published by the Levy Economics Institute of Bard College shows that what we thought had been a decade of essentially flat real wages (since 2002) has actually been a decade of declining real wages. Replicating the second figure in that Policy Note, Chart 1 shows that holding experience (i.e., age) and education fixed at their levels in 1994, real wages per hour are at levels not seen since 1997. In other words, growth in experience and education within the workforce during the past decade has propped up wages.

Chart 1: Actual and Fixed Real Wages, 1994-2013

The implication for inequality of this growth in education and experience was only touched on in the Policy Note that Levy published. In this post, we investigate more fully what contribution growth in educational attainment has made to the growth in wage inequality since 1994.

The Gini coefficient is a common statistic used to measure the degree of inequality in income or wages within a population. The Gini ranges between 0 and 100, with a value of zero reflecting perfect equality and a value of 100 reflecting perfect inequality. The Gini is preferred to other, simpler indices, like the 90/10 ratio, which is simply the income in the 90th percentile divided by the income in the 10th percentile, because the Gini captures information along the entire distribution rather than merely information in the tails.
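For concreteness, the Gini coefficient (here on the 0-to-100 scale) can be computed directly from its mean-absolute-difference definition; this sketch is ours:

```python
def gini(wages):
    """Gini coefficient, 0-100 scale: half the mean absolute difference
    between all pairs of wages, relative to the mean wage."""
    n = len(wages)
    mean = sum(wages) / n
    mad = sum(abs(a - b) for a in wages for b in wages) / (n * n)
    return 100 * mad / (2 * mean)
```

A Gini of 33 therefore says the expected gap between two randomly drawn wages is 66 percent of the mean wage, which is the interpretation used below.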

Chart 2 plots the Gini coefficient calculated for the actual real hourly wage distribution in the United States in each year between 1994 and 2013 and for the counterfactual wage distribution, holding education and/or age fixed at their 1994 levels in order to assess how much changes in age and education over the same period account for growth in wage inequality. In 2013, the Gini coefficient for the actual real wage distribution is roughly 33, meaning that if two people were drawn at random from the wage distribution, the expected difference in their wages is equal to 66 percent of the average wage in the distribution. (You can read more about interpreting the Gini coefficient.) A higher Gini implies that, first, the expected wage gap between two people has increased, holding the average wage of the distribution constant; or, second, the average wage of the distribution has decreased, holding the expected wage gap constant; or, third, some combination of these two events.

Chart 2: Wage Distribution Gini Coefficients over Time

The first message from Chart 2 is that—as has been documented in numerous other places (here and here, for example)—inequality has been growing in the United States, which can be seen in the rising value of the Gini coefficient over time. The Gini coefficient's 1.27-point rise means that between 1994 and 2013 the expected gap in wages between two randomly drawn workers grew two and a half (2 times 1.27, or 2.54) percentage points larger relative to the average wage in the distribution. Since the average real wage was higher in 2013 than in 1994, the implication is that the expected wage gap between two randomly drawn workers grew faster than the overall average wage. In other words, the tide rose, but not equally for all workers.

The second message from Chart 2 is that the aging of the workforce has contributed hardly anything to the growth in inequality over time: the Gini coefficient since 2009 for the wage distribution that holds age constant is essentially identical to the Gini coefficient for the actual wage distribution. However, the growth in education is another story.

In the absence of the growth in education during the same period, inequality would not have grown as much. The Gini coefficient for the actual real wage distribution in 2013 is 1.27 points higher than it was in 1994, whereas it's only 0.49 points higher for the wage distribution, holding education fixed. The implication is that growth in education has accounted for about 61 percent of the growth in inequality (as measured by the Gini coefficient) during this period.

Chart 3 shows the growth in education producing this result. The chart makes apparent the declines in the shares of the workforce with less than a high school degree and with a high school degree, as well as the increases in the shares with college and graduate degrees.

Chart 3: Distribution of the Workforce across Educational Status

There is little debate about whether income inequality has been rising in the United States for some time, and more dramatically recently. The degree to which education has exacerbated inequality or has the potential to reduce it, however, is the subject of a more robust debate. We intend this post to add to the evidence that growing educational attainment has contributed to rising inequality. This assertion is not meant to imply that education has been the only source of the rise in inequality or that educational attainment is undesirable. The message is that growth in educational attainment is clearly associated with growing inequality, and understanding that association will be central to understanding the overall growth in inequality in the United States.

By Julie L. Hotchkiss, a research economist and senior policy adviser at the Atlanta Fed, and

Fernando Rios-Avila, a research scholar at the Levy Economics Institute of Bard College


Author: "macroblog" Tags: "Education, Employment, Inequality, Labor..."
Date: Thursday, 26 Jun 2014 17:19

On May 30, the Federal Reserve Bank of Cleveland generously allowed me some time to speak at their conference on Inflation, Monetary Policy, and the Public. The purpose of my remarks was to describe the motivations and methods behind some of the alternative measures of the inflation experience that my coauthors and I have produced in support of monetary policy.

This is the last of three posts on that talk. The first post reviewed alternative inflation measures; the second looked at ways to work with the Consumer Price Index to get a clear view of inflation. The full text of the speech is available on the Atlanta Fed's events web page.

The challenge of communicating price stability

Let me close this blog series with a few observations on the criticism that measures of core inflation, and specifically the CPI excluding food and energy, disconnect the Federal Reserve from households and businesses "who know price changes when they see them." After all, don't the members of the Federal Open Market Committee (FOMC) eat food and use gas in their cars? Of course they do, and if it is the cost of living the central bank intends to control, the prices of these goods should necessarily be part of the conversation, notwithstanding their observed volatility.

In fact, in the popularly reported all-items CPI, the Bureau of Labor Statistics has already removed about 40 percent of the monthly volatility in the cost-of-living measure through its seasonal adjustment procedures. I think communicating in terms of a seasonally adjusted price index makes a lot of sense, even if nobody actually buys things at seasonally adjusted prices.

Referencing alternative measures of inflation presents some communications challenges for the central bank to be sure. It certainly would be easier if progress toward either of the Federal Reserve's mandates could be described in terms of a single, easily understood statistic. But I don't think this is feasible for price stability, or for full employment.

And with regard to our price stability mandate, I suspect the problem of public communication runs deeper than the particular statistics we cite. In 1996, Robert Shiller polled people—real people, not economists—about their perceptions of inflation. What he found was a stark difference between how economists think about the word "inflation" and how folks outside a relatively small band of academics and policymakers define inflation. Consider this question:

[Table: Shiller's survey question about the biggest problem inflation causes]

And here is how people responded:

[Table: distribution of responses from households and economists]

Seventy-seven percent of the households in Shiller's poll picked number 2—"Inflation hurts my real buying power"—as their biggest gripe about inflation. This is a cost-of-living description. It isn't the same concept that most economists are thinking about when they consider inflation. Only 12 percent of the economists Shiller polled indicated that inflation hurt real buying power.

I wonder if, in the minds of most people, the Federal Reserve's price-stability mandate is heard as a promise to prevent things from becoming more expensive, and especially the staples of life like, well, food and gasoline. This is not what the central bank is promising to do.

What is the Federal Reserve promising to do? To the best of my knowledge, the first "workable" definition of price stability by the Federal Reserve was Paul Volcker's 1983 description that it was a condition where "decision-making should be able to proceed on the basis that 'real' and 'nominal' values are substantially the same over the planning horizon—and that planning horizons should be suitably long."

Thirty years later, the Fed gave price stability a more explicit definition when it laid down a numerical target. The FOMC describes that target thusly:

The inflation rate over the longer run is primarily determined by monetary policy, and hence the Committee has the ability to specify a longer-run goal for inflation. The Committee reaffirms its judgment that inflation at the rate of 2 percent, as measured by the annual change in the price index for personal consumption expenditures, is most consistent over the longer run with the Federal Reserve's statutory mandate.

Whether one goes back to the qualitative description of Volcker or the quantitative description in the FOMC's recent statement of principles, the thrust of the price-stability objective is broadly the same. The central bank is intent on managing the persistent, nominal trend in the price level that is determined by monetary policy. It is not intent on managing the short-run, real fluctuations that reflect changes in the cost of living.

Effectively achieving price stability in the sense of the FOMC's declaration requires that the central bank hears what it needs to from the public, and that the public in turn hears what they need to know from the central bank. And this isn't likely unless the central bank and the public engage in a dialog in a language that both can understand.

Prices are volatile, and the cost of living the public experiences ought to reflect that. But what the central bank can control over time—inflation—is obscured within these fluctuations. What my colleagues and I have attempted to do is to rearrange the price data at our disposal, and so reveal a richer perspective on the inflation experience.

We are trying to take the torture out of the inflation discussion by accurately measuring the things that the Fed needs to worry about and by seeking greater clarity in our communications about what those things mean and where we are headed. Hard conversations indeed, but necessary ones.

By Mike Bryan, vice president and senior economist in the Atlanta Fed's research department

 


Author: "macroblog" Tags: "Business Cycles, Data Releases, Inflatio..."
Date: Wednesday, 25 Jun 2014 16:46

On May 30, the Federal Reserve Bank of Cleveland generously allowed me some time to speak at their conference on Inflation, Monetary Policy, and the Public. The purpose of my remarks was to describe the motivations and methods behind some of the alternative measures of the inflation experience that my coauthors and I have produced in support of monetary policy.

In this and the following two posts, I'll share a modestly edited version of that talk. A full version of my prepared remarks will be posted along with the third installment.

The ideas expressed in these blogs and the related speech are my own, and do not necessarily reflect the views of the Federal Reserve Banks of Atlanta or Cleveland.

Part 1: The median CPI and other trimmed-mean estimators

A useful place to begin this conversation, I think, is with the following chart, which shows the monthly change in the Consumer Price Index (CPI) (through April).


The monthly CPI often swings between a negative reading and a reading in excess of 5 percent. In fact, in only about one-third of the readings over the past 16 years was the monthly, annualized seasonally adjusted CPI within a percentage point of 2 percent, which is the FOMC's longer-term inflation target. (Officially, the FOMC's target is based on the Personal Consumption Expenditures price index, but these and related observations hold for that price index equally well.)
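That "one-third of the readings" figure is a simple band count, along these lines (illustrative only):

```python
def share_near_target(monthly_rates, target=2.0, band=1.0):
    """Fraction of monthly annualized inflation readings falling within
    +/- `band` percentage points of the target rate."""
    hits = sum(1 for r in monthly_rates if abs(r - target) <= band)
    return hits / len(monthly_rates)
```

Applied to the monthly annualized CPI over the past 16 years, the post reports this fraction is only about one-third.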

How should the central bank think about its price-stability mandate within the context of these large monthly CPI fluctuations? For example, does April's 3.2 percent CPI increase argue that the FOMC ought to do something to beat back the inflationary threat? I don't speak for the FOMC, but I doubt it. More likely, there were some unusual price movements within the CPI's market basket that can explain why the April CPI increase isn't likely to persist. But the presumption that one can distinguish the price movements we should pay attention to from those that we should ignore is a risky business.

The Economist retells a conversation with Stephen Roach, who in the 1970s worked for the Federal Reserve under Chairman Arthur Burns. Roach remembers that when oil prices surged around 1973, Burns asked Federal Reserve Board economists to strip those prices out of the CPI "to get a less distorted measure. When food prices then rose sharply, they stripped those out too—followed by used cars, children's toys, jewellery, housing and so on, until around half of the CPI basket was excluded because it was supposedly 'distorted'" by forces outside the control of the central bank. The story goes on to say that, at least in part because of these actions, the Fed failed to spot the breadth of the inflationary threat of the 1970s.

I have a similar story. I remember a morning in 1991 at a meeting of the Federal Reserve Bank of Cleveland's board of directors. I was welcomed to the lectern with, "Now it's time to see what Mike is going to throw out of the CPI this month." It was an uncomfortable moment for me that had a lasting influence. It was my motivation for constructing the Cleveland Fed's median CPI.

I am a reasonably skilled reader of a monthly CPI release. And since I approached each monthly report with a pretty clear idea of what the actual rate of inflation was, it was always pretty easy for me to look across the items in the CPI market basket and identify any offending—or "distorted"—price change. Stripping these items from the price statistic revealed the truth—and confirmed that I was right all along about the actual rate of inflation.

Let me show you what I mean by way of the April CPI report. The next chart shows the annualized percentage change for each component in the CPI for that month. These are shown on the horizontal axis. The vertical axis shows the weight given to each of these price changes in the computation of the overall CPI. Taken as a whole, the CPI jumped 3.2 percent in April. But out there on the far right tail of this distribution are gasoline prices. They rose about 32 percent for the month. If you subtract out gasoline from the April CPI report, you get an increase of 2.1 percent. That's reasonably close to price stability, so we can stop there—mission accomplished.
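The ex-gasoline arithmetic is just a weighted-average identity. With an assumed gasoline weight of roughly 4 percent of the basket (our round number, not the official relative importance), the numbers in the text line up:

```python
def exclude_item(total_change, item_change, item_weight):
    """Inflation excluding one item, from the weighted-average identity
    total = w * item + (1 - w) * rest, solved for `rest`."""
    return (total_change - item_weight * item_change) / (1 - item_weight)

# April figures: CPI +3.2%, gasoline +32% (both annualized); with an
# assumed gasoline weight near 0.04, the ex-gasoline rate comes out
# close to the roughly 2 percent cited in the text.
```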


But here's the thing: there is no such thing as a "nondistorted" price. All prices are being influenced by market forces and, once influenced, are also influencing the prices of all the other goods in the market basket.

What else is out there on the tails of the CPI price-change distribution? Lots of stuff. About 17 percent of things people buy actually declined in price in April while prices for about 13 percent of the market basket increased at rates above 5 percent.

But it's not just the tails of this distribution that are worth thinking about. Near the center of this price-change distribution is a very high proportion of things people buy. For example, price changes within the fairly narrow range of between 1.5 percent and 2.5 percent accounted for about 26 percent of the overall CPI market basket in the April report.

The April CPI report is hardly unusual. The CPI report is commonly one where we see a very wide range of price changes, commingled with an unusually large share of price increases that are very near the center of the price-change distribution. Statisticians call this a distribution with a high level of "excess kurtosis."

The following chart shows what an average monthly CPI price report looks like. The point of this chart is to convince you that the unusual distribution of price changes we saw in the April CPI report is standard fare. A very high proportion of price changes within the CPI market basket tends to remain close to the center of the distribution, and those that don't tend to be spread over a very wide range, resulting in what appear to be very elongated tails.


And this characterization of price changes is not at all special to the CPI. It characterizes every major price aggregate I have ever examined, including the retail price data for Brazil, Argentina, Mexico, Colombia, South Africa, Israel, the United Kingdom, Sweden, Canada, New Zealand, Germany, Japan, and Australia.

Why do price change distributions have peaked centers and very elongated tails? At one time, Steve Cecchetti and I speculated that the cost of unplanned price changes—called menu costs—discourage all but the most significant price adjustments. These menu costs could create a distribution of observed price changes where a large number of planned price adjustments occupy the center of the distribution, commingled with extreme, unplanned price adjustments that stretch out along its tails.

But absent a clear economic rationale for this unusual distribution, it presents a measurement problem and suggests an immediate remedy. The problem is that these long tails tend to cause the CPI (and other weighted averages of prices) to fluctuate pretty widely from month to month, even though they are, in a statistical sense, tethered to the large proportion of price changes that lies in the center of the distribution.

So my belated response to the Cleveland board of directors was the computation of the weighted median CPI (which I first produced with Chris Pike). This statistic considers only the middle-most monthly price change in the CPI market basket, which becomes the representative aggregate price change. The median CPI is immune to the obvious analyst bias that I had been guilty of, while greatly reducing the volatility in the monthly CPI report in a way that I thought gave the Federal Reserve Bank of Cleveland a clearer reading of the central tendency of price changes.
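A minimal sketch of that computation (ours, not the Cleveland Fed's production code):

```python
def weighted_median(price_changes, weights):
    """Weighted median price change: sort components by price change and
    return the change of the component straddling the 50th percentile
    of CPI expenditure weight."""
    pairs = sorted(zip(price_changes, weights))
    half = sum(weights) / 2
    cum = 0.0
    for change, weight in pairs:
        cum += weight
        if cum >= half:
            return change
```

Note how a single outsized component, such as a 32 percent jump in gasoline, cannot move this statistic unless it carries half the basket's weight.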

Cecchetti and I extended the idea to a range of trimmed-mean estimators, for which the median is simply an extreme case. Trimmed-mean estimators trim some proportion of the tails from this price-change distribution and reaggregate the interior remainder. Others extended this idea to asymmetric trims for skewed price-change distributions, as Scott Roger did for New Zealand, and to other price statistics, like the Federal Reserve Bank of Dallas's trimmed-mean PCE inflation rate.

How much one should trim from the tails isn't entirely obvious. We settled on the 16 percent trimmed mean for the CPI (that is, trimming the highest and lowest 8 percent from the tails of the CPI's price-change distribution) because this is the proportion that produced the smallest monthly volatility in the statistic while preserving the same trend as the all-items CPI.
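In code, the 16 percent trimmed mean amounts to discarding the weight that falls in the outer 8 percent of each tail of the weight-ordered price-change distribution and reaggregating the interior. The sketch below uses made-up component data; a component straddling a trim point contributes only the portion of its weight that falls inside the window:

```python
def trimmed_mean(changes, weights, trim=0.08):
    """Weighted trimmed mean: drop `trim` of the total weight from each
    tail of the price-change distribution and reaggregate the rest."""
    pairs = sorted(zip(changes, weights))
    total = sum(w for _, w in pairs)
    lo, hi = trim * total, (1.0 - trim) * total
    cum = 0.0
    num = den = 0.0
    for change, weight in pairs:
        # portion of this component's weight inside the trim window
        inside = max(0.0, min(cum + weight, hi) - max(cum, lo))
        num += change * inside
        den += inside
        cum += weight
    return num / den

# Hypothetical components: a fat negative tail, a peaked center, and a
# fat positive tail, as in a typical monthly CPI report.
changes = [-3.0, -0.2, 0.1, 0.2, 0.3, 4.5]
weights = [5, 20, 25, 25, 15, 10]
print(trimmed_mean(changes, weights))  # close to the center of the distribution
```

Setting `trim=0.0` recovers the ordinary weighted average, and pushing `trim` toward 0.5 converges on the weighted median, which is why the median is the extreme case of this family.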

The following chart shows the monthly pattern of the median CPI and the 16 percent trimmed-mean CPI relative to the all-items CPI. Both measures reduce the monthly volatility of the aggregate price measure by a lot—and even more so than by simply subtracting from the index the often-offending food and energy items.


But while the median CPI and the trimmed-mean estimators are often referred to as "core" inflation measures (and I am guilty of this myself), these measures are very different from the CPI excluding food and energy.

In fact, I would not characterize these trimmed-mean measures as "exclusionary" statistics at all. Unlike the CPI excluding food and energy, the median CPI and the assortment of trimmed-mean estimators do not fundamentally alter the underlying weighting structure of the CPI from month to month. As long as the CPI price change distribution is symmetrical, these estimators are designed to track along the same path as that laid out by the headline CPI. It's just that these measures are constructed so that they follow that path with much less volatility (the monthly variance in the median CPI is about 95 percent smaller than that of the all-items CPI and about 25 percent smaller than that of the CPI less food and energy).

I think of the trimmed-mean estimators and the median CPI as being more akin to seasonal adjustment than they are to the concept of core inflation. (Indeed, early on, Cecchetti and I showed that the median CPI and associated trimmed-mean estimates also did a good job of purging the data of its seasonal nature.) The median CPI and the trimmed-mean estimators are noise-reduced statistics where the underlying signal being identified is the CPI itself, not some alternative aggregation of the price data.

This is not true of the CPI excluding food and energy, nor necessarily of other so-called measures of "core" inflation. Core inflation measures alter the weights of the price statistic so that they can no longer pretend to be approximations of the cost of living. They are different constructs altogether.

The idea of "core" inflation is one of the topics of tomorrow's post.

By Mike Bryan, vice president and senior economist in the Atlanta Fed's research department

Author: "macroblog" Tags: "Data Releases, Economic conditions, Infl..."
Date: Tuesday, 24 Jun 2014 21:29

On May 30, the Federal Reserve Bank of Cleveland generously allowed me some time to speak at their conference on Inflation, Monetary Policy, and the Public. The purpose of my remarks was to describe the motivations and methods behind some of the alternative measures of the inflation experience that my coauthors and I have produced in support of monetary policy.

This is the second of three posts based on that talk. Yesterday's post considered the median CPI and other trimmed-mean measures.

Is it more expensive, or does it just cost more money? Inflation versus the cost of living

Let me make two claims that I believe are, separately, uncontroversial among economists. Jointly, however, I think they create an incongruity for how we think about and measure inflation.

The first claim is that over time, inflation is a monetary phenomenon. It is caused by too much money chasing a limited number of things to buy with that money. As such, the control of inflation is rightfully the responsibility of the institution that has monopoly control over the supply of money—the central bank.

My second claim is that the cost of living is a real concept, and changes in the cost of living will occur even in a world without money. It is a description of how difficult it is to buy a particular level of well-being. Indeed, to a first approximation, changes in the cost of living are beyond the ability of a central bank to control.

For this reason, I think it is entirely appropriate to think about whether the cost of living in New York City is rising faster or slower than in Cleveland, just as it is appropriate to ask whether the cost of living of retirees is rising faster or slower than it is for working-aged people. The folks at the Bureau of Labor Statistics produce statistics that can help us answer these and many other questions related to how expensive it is to buy the happiness embodied in any particular bundle of goods.

But I think it is inappropriate for us to think about inflation, the object of central bank control, as being different in New York than it is in Cleveland, or to think that inflation is somehow different for older citizens than it is for younger citizens. Inflation is common to all things valued by money. Yet changes in the cost of living and inflation are commonly talked about as if they are the same thing. And this creates both a communication and a measurement problem for the Federal Reserve and other central banks around the world.

Here is the essence of the problem as I see it: money is not only our medium of exchange but also our numeraire—our yardstick for measuring value. Embedded in every price change, then, are two forces. The first is real in the sense that the good is changing its price in relation to all the other prices in the market basket. It is the cost adjustment that motivates you to buy more or less of that good. The second force is purely nominal. It is a change in the numeraire caused by an imbalance in the supply and demand of the money being provided by the central bank. I think the concept of "core inflation" is all about trying to measure changes in this numeraire. But to get there, we need to first let go of any "real" notion of our price statistics. Let me explain.

As a cost-of-living approximation, the weights the Bureau of Labor Statistics (BLS) uses to construct the Consumer Price Index (CPI) are based on some broadly representative consumer expenditures. It is easy to understand that since medical care costs are more important to the typical household budget than, say, haircuts, these costs should get a greater weight in the computation of an individual's cost of living. But does inflation somehow affect medical care prices differently than haircuts? I'm open to the possibility that the answer to this question is yes. It seems to me that if monetary policy has predictable, real effects on the economy, then there will be a policy-induced disturbance in relative prices that temporarily alters the cost of living in some way.

But if inflation is a nominal experience that is independent of the cost of living, then the inflation component of medical care is the same as that in haircuts. No good or service, geographic region, or individual experiences inflation any differently than any other. Inflation is a common signal that ultimately runs through all wages and prices.

And when we open up to the idea that inflation is a nominal, not a real, concept, we begin to think about the BLS's market basket in a fundamentally different way from what the BLS intends to measure.

This, I think, is the common theme that runs through all measures of "core" inflation. Can the prices the BLS collects be reorganized or reweighted in a way that makes the aggregate price statistic more informative about the inflation that the central bank hopes to control? I think the answer is yes. The CPI excluding food and energy is one very crude way. Food and energy prices are extremely volatile and certainly point to nonmonetary forces as their primary drivers.

In the early 1980s, Otto Eckstein defined core inflation as the trend growth rate of the cost of the factors of production—the cost of capital and wages. I would compare Eckstein's measure to the "inflation expectations" component that most economists (and presumably the FOMC) think "anchor" the inflation trend.

The sticky-price CPI

Brent Meyer and I have taken this idea to the CPI data. One way that prices appear to be different is in their observed "stickiness." That is, some prices tend to change frequently, while others do not. Prices that change only infrequently are likely to be more forward-looking than are those that change all the time. So we can take the CPI market basket and separate it into two groups of prices—prices that tend to be flexible and those that are "sticky" (a separation made possible by the work of Mark Bils and Peter J. Klenow).

Indeed, we find that the items in the CPI market basket that change prices frequently (about 30 percent of the CPI) are very responsive to changes in economic conditions, but do not seem to have a very forward-looking character. But the 70 percent of the market basket items that do not change prices very often—these are accounted for in the sticky-price CPI—appear to be largely immune to fluctuations in business conditions and are better predictors of future price behavior. In other words, we think that some "inflation-expectation" component exists to varying degrees within each price. By reweighting the CPI market basket in a way that amplifies the behavior of the most forward-looking prices, the sticky-price CPI gives policymakers a perspective on the inflation experience that the headline CPI can't.
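As a rough sketch of the reweighting idea, one can split the basket by how long prices typically go between changes and aggregate each group separately. The components, weights, and frequency cutoff below are invented for illustration; they are not the Bils-Klenow estimates:

```python
# Hypothetical CPI components:
# (name, expenditure weight, monthly price change %, avg months between changes)
components = [
    ("gasoline",     0.05,  2.1,  0.7),
    ("fresh fruit",  0.03, -0.8,  1.2),
    ("rent",         0.32,  0.25, 10.4),
    ("medical care", 0.08,  0.3,  11.8),
    ("haircuts",     0.02,  0.2,  15.0),
]

STICKY_CUTOFF = 4.3  # months; prices changing less often count as "sticky"

def reweighted_index(items):
    """Weighted-average price change of a subset of the basket,
    with the weights renormalized to sum to one."""
    total = sum(w for _, w, _, _ in items)
    return sum(w * chg for _, w, chg, _ in items) / total

sticky = [c for c in components if c[3] > STICKY_CUTOFF]
flexible = [c for c in components if c[3] <= STICKY_CUTOFF]
print(reweighted_index(sticky))    # sticky-price inflation
print(reweighted_index(flexible))  # flexible-price inflation
```

The point of the sketch is only the mechanics: the sticky aggregate amplifies the infrequently repriced, more forward-looking items and zeroes out the flexible ones.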

Here is what monthly changes in the sticky-price CPI look like compared to the all-items CPI and the traditional "core" CPI.


Let me describe another, more radical example of how we might think about reweighting the CPI market basket to measure inflation—a way of thinking that is very different from the expenditure-basket approach the BLS uses to measure the cost of living.

If we assume that inflation is ultimately a monetary event and, moreover, that the signal of this monetary inflation can be found in all prices, then we might use statistical techniques to help us identify that signal within a large number of price series. The famous early-20th-century economist Irving Fisher described the problem as trying to track a swarm of bees by abstracting from the individual, seemingly chaotic behavior of any particular bee.

Cecchetti and I experimented along these lines to measure a common signal running through the CPI data. The basic idea of our approach was to take the component data that the BLS supplied, make a few simple identifying assumptions, and let the data itself determine the appropriate weighting structure of the inflation estimate. The signal-extraction method we chose was a dynamic-factor index approach, and while we didn't pursue that work much further, others did, using more sophisticated and less restrictive signal-extraction methods. Perhaps most notable is the work of Ricardo Reis and Mark Watson.

To give you a flavor of the approach, consider the "first principal component" of the CPI price-change data. The first principal component of a data series is a statistical combination of the data that accounts for the largest share of their joint movement (or variance). It's a simple, statistically shared component that runs through all the price data.
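As a toy illustration of the idea (not the dynamic-factor method Cecchetti and I actually used), the sketch below extracts the leading principal component of a small, invented panel of component price changes, standardizing each series and using power iteration on the correlation matrix:

```python
def first_principal_component(data, iters=500):
    """Leading eigenvector of the correlation matrix of `data`
    (rows = months, columns = components), via power iteration."""
    n, k = len(data), len(data[0])
    cols = list(zip(*data))
    means = [sum(c) / n for c in cols]
    stds = [(sum((x - m) ** 2 for x in c) / (n - 1)) ** 0.5
            for c, m in zip(cols, means)]
    z = [[(data[t][j] - means[j]) / stds[j] for j in range(k)]
         for t in range(n)]
    corr = [[sum(z[t][i] * z[t][j] for t in range(n)) / (n - 1)
             for j in range(k)] for i in range(k)]
    v = [1.0] * k
    for _ in range(iters):
        w = [sum(corr[i][j] * v[j] for j in range(k)) for i in range(k)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]  # renormalize each step
    return v

# Invented panel: components 1 and 2 move in lockstep (the common
# signal); component 3 is volatile and nearly uncorrelated with them.
data = [[0.2, 0.4, 1.0], [0.1, 0.2, 1.0], [0.3, 0.6, 1.0],
        [0.2, 0.4, -1.0], [0.4, 0.8, -1.0], [0.1, 0.2, -1.0]]
loadings = first_principal_component(data)
```

In this made-up panel the comoving components receive equal, dominant loadings while the idiosyncratic one is downweighted, which is the flavor of how a common-signal weighting can diverge from expenditure weights.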

This next chart shows the first principal component of the CPI price data, in relation to the headline CPI and the core CPI.


Again, this is a very different animal than what the folks at the BLS are trying to measure. In fact, the weights used to produce this particular common signal in the price data bear little similarity to the expenditure weights that make up the market baskets that most people buy. And why should they? The idea here doesn't depend on how important something is to the well-being of any individual, but rather on whether the movement in its price seems to be similar or dissimilar to the movements of all the other prices.

In the table below, I report the weights (or relative importance) of a select group of CPI components and the weights they would get on the basis of their contribution to the first principal component.

140624b

While some criticize the CPI because it overweights housing from a cost-of-living perspective, it may be these housing components that ought to be given the greatest consideration when we think about the inflation that the central bank controls. Likewise, according to this approach, restaurant costs, motor vehicle repairs, and even a few food components should be taken pretty seriously in the measurement of a common inflation signal running through the price data.

And what price movements does this approach say we ought to ignore? Well, gasoline prices for one. But movements in the prices of medical care commodities, communications equipment, and tobacco products also appear to move in ways that are largely disconnected from the common thread in prices that runs through the CPI market basket.

But this and other measures of "core" inflation are very much removed from the cost changes that people experience on a monthly basis. Does that cause a communications problem for the Federal Reserve? This will be the subject of my final post.

By Mike Bryan, vice president and senior economist in the Atlanta Fed's research department

 

Author: "macroblog" Tags: "Business Cycles, Data Releases, Inflatio..."
Date: Friday, 20 Jun 2014 20:34

Just before Wednesday's confirmation from Fed Chairwoman Janet Yellen that the Federal Open Market Committee (FOMC) does indeed still see slack in the labor market, Jon Hilsenrath and Victoria McGrane posted a Wall Street Journal article calling attention to the state of the debate:

Nearly four-fifths of those who became long-term unemployed during the worst period of the downturn have since migrated to the fringes of the job market, a recent study shows, rarely seeking work, taking part-time posts or bouncing between unsteady jobs. Only one in five, according to the study, has returned to lasting full-time work since 2008.

Deliberations over the nature of the long-term unemployed are particularly lively within the Federal Reserve.... Fed officials face a conundrum: Should they keep trying to spur economic growth and hiring by holding short-term interest rates near zero, or will those low rates eventually spark inflation without helping those long out of work?

The article goes on to provide a nice summary of the ongoing back-and-forth among economists on whether the key determinant of slack in the labor market is the long-term unemployed or the short-term unemployed. Included in that summary, checking in on the side of "both," is research by Chris Smith at the Federal Reserve Board of Governors.

We are fans of Smith's work, but think that the Wall Street Journal summary buries its own lede by focusing on the long-term/short-term unemployment distinction rather than on what we think is the more important part of the story: In Hilsenrath and McGrane's words, those "taking part-time posts."

We are specifically talking about the group officially designated as part-time for economic reasons (PTER). This is the group of people in the U.S. Bureau of Labor Statistics' Household Survey who report they worked less than 35 hours in the reference week due to an economic reason such as slack work or business conditions.

We have previously noted that the long-term unemployed have been disproportionately landing in PTER jobs. We have also previously argued that PTER emerges as a key negative influence on earnings over the course of the recovery, and remains so (at least as of the end of 2013). For reference, here is a chart describing the decomposition from our previous post (which corrects a small error in the data definitions):

140620a

Our conclusion, clearly identified in the chart, was that short-term unemployment and PTER have been statistically responsible for the tepid growth in wages over the course of the recovery. What's more, as short-term unemployment has effectively returned to prerecession levels, PTER has increasingly become the dominant negative influence.

Our analysis was methodologically similar to Smith's—his work and the work represented in our previous post were both based on annual state-level microdata from the Current Population Survey, for example. They were not exactly comparable, however, because of different wage variables—Smith used the median wage while we used a composition-adjusted weighted average—and different regression controls.

Here is what we get when we apply the coefficient estimates from Smith's work to our attempt to replicate his wage definition:

140620b

Some results change. The unemployment variables, short-term or long-term, no longer show up as a drag on wage growth. The group of workers designated as "discouraged" does appear to be pulling down wage growth, in ways that are distinct from the larger group of marginally attached. (That is in contrast to arguments some of us have previously made in macroblog that looked at the propensity of the marginally attached to find employment.)

It is not unusual to see results flip around a bit in statistical work as this or that variable is changed, or as the structure of the empirical specifications is tweaked. It is a robustness issue that should always be acknowledged. But what does appear to emerge as a consistent negative influence on wage growth? PTER.

None of this means that the short-term/long-term unemployment debate is unimportant. The statistics are not strong enough for us to be ruling things out categorically. Furthermore, that debate has raised some really interesting questions, such as Glenn Rudebusch and John Williams's recent suggestion that the definition of economic slack relevant for the FOMC's employment mandate may be different from the definition appropriate to the FOMC's price stability mandate.

Our message is pretty simple and modest, but we think important. Whatever your definition of slack, it really ought to include PTER. If not, you are probably asking the wrong question.

By Dave Altig, executive vice president and research director, and

 

Pat Higgins, a senior economist, both of the Atlanta Fed's research department


 

Author: "macroblog" Tags: "Employment, Labor Markets, Unemployment"
Date: Monday, 09 Jun 2014 20:55


Despite Friday's report of a further solid increase in payroll employment, the utilization picture for the official labor force remains mixed. The rates of short-term and long-term unemployment, as well as the share of the labor force working part time while wanting full-time work (a cohort also referred to as working part time for economic reasons, or PTER), all rose during the recession.

The short-term unemployment rate has since returned to levels experienced before the recession. In contrast, longer-term unemployment and involuntary part-time work have declined, but both remain well above prerecession levels (see the chart).

Alternative Labor Utilization Measures

Some of the postrecession decline in the short-term unemployment rate has not resulted from the short-term unemployed finding a job, but rather the opposite—they failed to get a job and became longer-term unemployed. Before the recession, the number of unemployed workers who said they had been looking for a job for more than half a year accounted for about 18 percent of unemployed workers. Currently, that share is close to 36 percent.

Moreover, job finding by unemployed workers might not completely reflect a decline in the amount of slack labor resources if some want full-time work but only find part-time work (that is, are working PTER). In this post, we investigate the ability of the unemployed to become fully employed relative to their experience before the Great Recession.

The job-finding rate of unemployed workers (the share of unemployed who are employed the following month) generally decreases toward zero with the length of the unemployment spell. Job-finding rates fell for all durations of unemployment in the recession.

Since the end of the recession, job-finding rates have improved, especially for shorter-term unemployed, but remain well below prerecession levels. The overall job-finding rate stood at close to 28 percent in 2007 and was about 20 percent for the first four months of 2014. The chart below shows the job-finding rates for select years by unemployment duration:
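The statistic itself is straightforward to compute from matched monthly records: among workers unemployed in month t, take the share employed in month t+1 within each duration bucket. A sketch on invented micro-records (the buckets and outcomes below are illustrative, not CPS data):

```python
# Each record: (unemployment duration bucket in weeks,
#               employed the following month?)
records = [
    ("<5", True), ("<5", True), ("<5", False), ("<5", True),
    ("5-26", True), ("5-26", True), ("5-26", False), ("5-26", False),
    ("27+", False), ("27+", False), ("27+", False), ("27+", True),
]

def job_finding_rate(records, bucket):
    """Share of unemployed workers in `bucket` who are employed
    the following month."""
    outcomes = [found for duration, found in records if duration == bucket]
    return sum(outcomes) / len(outcomes)

for bucket in ("<5", "5-26", "27+"):
    print(bucket, job_finding_rate(records, bucket))
```

The toy data are constructed so the rate falls with duration, mirroring the qualitative pattern in the chart below.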

1 Month Job Finding Rate

What about the jobs that the unemployed find? Most unemployed workers want to work full-time hours (at least 35 hours a week). In 2007, around 75 percent of job finders wanted full-time work and either got full-time work or worked PTER (the remainder worked part time for noneconomic reasons). For the first four months of 2014, the share wanting full-time work was also about 75 percent. But the portion of job finders wanting full-time work and only finding part-time work increased from about 22 percent in 2007 to almost 30 percent in 2014, and this job-finding underutilization share has become especially high for the longer-term unemployed.

The chart below displays the job-finding underutilization share for select years by unemployment duration. (You can also read further analysis of PTER dynamics by our colleagues at the Federal Reserve Board of Governors.)

Share of Job Finders

Finding a job is one thing, but finding a satisfactory job is another. Since the end of the recession, the number of unemployed has declined, thanks in part to a gradually improving rate of job finding. But the job-finding rate is still relatively low, and the ability of an unemployed job seeker who wants to work full-time to actually find full-time work remains a significant challenge.

By John Robertson, a vice president and senior economist, and

Ellyn Terry, a senior economic analyst, both of the Atlanta Fed's research department

Author: "macroblog" Tags: "Employment, Labor Markets, Unemployment"
Date: Monday, 02 Jun 2014 20:49

Of the many statistical barometers of the U.S. economy that we monitor here at the Atlanta Fed, there are few that we await more eagerly than the monthly report on employment conditions. The May 2014 edition arrives this week and, like many others, we will be more interested in the underlying details than in the headline job growth or unemployment numbers.

One of those underlying details—the state of the pool of “discouraged” workers (or, maybe more precisely, potential workers)—garnered special attention lately in the wake of the relatively dramatic decline in the ranks of the official labor force, a decline depicted in the April employment survey from the U.S. Bureau of Labor Statistics. That attention included some notable commentary from Federal Reserve officials.

Federal Reserve Bank of New York President William Dudley, for example, recently suggested that a sizeable part of the decline in labor force participation since 2007 can be tied to discouraged workers exiting the workforce. This suggestion follows related comments from Federal Reserve Chair Janet Yellen in her press conference following the March meeting of the Federal Open Market Committee:

So I have talked in the past about indicators I like to watch or I think that are relevant in assessing the labor market. In addition to the standard unemployment rate, I certainly look at broader measures of unemployment… Of course, I watch discouraged and marginally attached workers… it may be that as the economy begins to strengthen, we could see labor force participation flatten out for a time as discouraged workers start moving back into the labor market. And so that's something I'm watching closely.

What may not be fully appreciated by those not steeped in the details of the employment statistics is that discouraged workers are actually a subset of “marginally attached” workers. Among the marginally attached—individuals who have actively sought employment within the most recent 12-month period but not during the most recent month—are indeed those who report that they are out of the labor force because they are discouraged. But the marginally attached also include those who have not recently sought work because of family responsibilities, school attendance, poor health, or other reasons.

In fact, most of the marginally attached are not classified (via self-reporting) as discouraged (see the chart):

140602

At the St. Louis Fed, B. Ravikumar and Lin Shao recently published a report containing some detailed analysis of discouraged workers and their relationship to the labor force and the unemployment rate. As Ravikumar and Shao note,

Since discouraged workers are not actively searching for a job, they are considered nonparticipants in the labor market—that is, they are neither counted as unemployed nor included in the labor force.

More importantly, the authors point out that they tend to reenter the labor force at relatively high rates:

Since December 2007, on average, roughly 40 percent of discouraged workers reenter the labor force every month.

Therefore, it seems appropriate to count some fraction of the jobless population designated as discouraged (and out of the labor force) as among the officially unemployed.

We believe this logic should be extended to the entire group of marginally attached. As we've pointed out in the past, the marginally attached group as a whole also has a roughly 40 percent transition rate into the labor force. Even though more of the marginally attached are discouraged today than before the recession, the changing distribution has not affected the overall transition rate of the marginally attached into the labor force.

In fact, in terms of the propensity to flow into employment or officially measured unemployment, there is little to distinguish the discouraged from those who are marginally attached but who have other reasons for not recently seeking a job (see the chart):

140602b

What we take from these data is that, as a first pass, when we are talking about discouraged workers' attachment to the labor market, we are talking more generally about the marginally attached. And vice versa. Any differences in the demographic characteristics between discouraged and nondiscouraged marginally attached workers do not seem to materially affect their relative labor market attachment and ability to find work.

Sometimes labels matter. But in the case of discouraged marginally attached workers versus the nondiscouraged marginally attached workers—not so much.

By Dave Altig, executive vice president and research director,

John Robertson, a vice president and senior economist, and

Ellyn Terry, a senior economic analyst, all of the Atlanta Fed's research department

Author: "macroblog" Tags: "Employment, Federal Reserve and Monetary..."
Date: Tuesday, 20 May 2014 21:31

During last week's "National Small Business Week," Janet Yellen delivered a speech titled "Small Business and the Recovery," in which she outlined how the Fed's low-interest-rate policies have helped small businesses.

By putting downward pressure on interest rates, the Fed is trying to make financial conditions more accommodative—supporting asset values and lower borrowing costs for households and businesses and thus encouraging the spending that spurs job creation and a stronger recovery.

In general, I think most small businesses in search of financing would agree with the "rising tide lifts all boats" hypothesis. When times are good, strong demand for goods and services helps provide a solid cash flow, which makes small businesses more attractive to lenders. At the same time, rising equity and housing prices support collateral used to secure financing.

Reduced economic uncertainty and strong income growth can help those in search of equity financing, as investors become more willing and able to open their pocketbooks. But even when the economy is strong, there is a business segment that's had an especially difficult time getting financing. And as we've highlighted in the past, this is also the segment that has had the highest potential to contribute to job growth—namely, young businesses.

Why is it hard for young firms to find credit or financing more generally? At least two reasons come to mind: First, lenders tend to have a rearview-mirror approach for assessing commercial creditworthiness. But a young business has little track record to speak of. Moreover, lenders have good reason to be cautious about a very young firm: half of all young firms don't make it past the fifth year. The second reason is that young businesses typically ask for relatively small amounts of money. (See the survey results in the Credit Demand section under Financing Conditions.) But the fixed cost of the detailed credit analysis (underwriting) of a loan can make lenders decide that it is not worth their while to engage with these young firms.

While difficult, obtaining financing is not impossible. Over the past two years, half of the small firms under six years old that participated in our survey (latest results available) were able to obtain at least some of the financing requested across all of their applications. This 50 percent figure for young firms contrasts sharply with the 78 percent of more mature small firms that found at least some credit.

This leads to two questions:

  1. What types of financing sources are young firms using?
  2. How are the available financing options changing?

To answer the first question, we pooled all of the financing applications submitted by small firms in our semiannual survey over the past two years and examined how likely they were to apply for financing and be approved across a variety of financing products.

Applications and approvals
While most mature firms (more than five years old) seek—and receive—financing from banks, young firms have about as many approved applications for credit cards, vendor or trade credit, or financing from friends or family as they do for bank credit.

The chart below shows that about two-thirds of applications on behalf of mature firms were for commercial loans and lines of credit at banks and about 60 percent of those applications were at least partially approved. In comparison, fewer than half of applications by young firms were for a commercial bank loan or line of credit, fewer than a third of which were approved. Further, about half of the applications by mature firms were met in full compared to less than one-fifth of applications by young firms.



In the survey, we also ask what type of bank the firm applied to (large national bank, regional bank, or community bank). It turns out this distinction matters little for the young firms in our sample—the vast majority are denied regardless of the size of the bank. However, after the five-year mark, approval is highest for firms applying at the smallest banks and lowest for large national banks. For example, firms that are 10 years or older that applied at a community bank, on average, received most of the amount requested, and those applying at large national banks received only some of the amount requested.


Half of young firms and about one-fifth of mature firms in the survey reported receiving none of the credit requested over all their applications. How are firms that don't receive credit affected? According to a 2013 New York Fed small business credit survey, 42 percent of firms that were unsuccessful at obtaining credit said it limited their business expansion, 16 percent said they were unable to complete an existing order, and 16 percent indicated that it prevented hiring.

This leads to the next couple of questions: How are the available options for young firms changing? Is the market evolving in ways that can better facilitate lending to young firms?

When thinking about the places where young firms seem to be the most successful in obtaining credit, equity investments or loans from friends and family ranked the highest according to the Atlanta Fed survey, but this source is not highly used (see the first chart). Is the low usage rate a function of having only so many "friends and family" to ask? If it is, then perhaps alternative approaches such as crowdfunding could be a viable way for young businesses seeking small amounts of funds to broaden their financing options. Interestingly, crowdfunding serves not just as a means to raise funds, but also as a way to reach more customers and potential business partners.

A variety of new lending sources, including crowdfunding, were featured at the New York Fed's Small Business Summit ("Filling the Gaps") last week. One major theme of the summit was that credit providers are increasingly using technology to decrease credit search costs for borrowers and lower underwriting costs for lenders. And when it comes to matching borrowers with lenders, there does appear to be room for improvement. The New York Fed's small business credit survey, for example, showed that small firms looking for credit spent an average of 26 hours searching during the first half of 2013. Some of the financial services presented at the summit used electronic financial records and relevant business data, including business characteristics and credit scores, to better match lenders and borrowers. Another theme to come out of the summit was the importance of transparency and education about the lending process, considered especially important at a time when the small business lending landscape is changing rapidly.

The full results of the Atlanta Fed's Q1 2014 Small Business Survey are available on the website.

By Ellyn Terry, an economic policy analysis specialist in the Atlanta Fed's research department


Author: "macroblog" Tags: "Economic conditions, Small Business"
Date: Friday, 16 May 2014 14:54

Yesterday's report on consumer price inflation from the U.S. Bureau of Labor Statistics moved the needle a bit on inflation trends—but just a bit. Meanwhile, the European Central Bank appears to be locked and loaded to blast away at its own (low) inflation concerns. From the Wall Street Journal:

The European Central Bank is ready to loosen monetary policy further to prevent the euro zone from succumbing to an extended period of low inflation, its vice president said on Thursday.

"We are determined to act swiftly if required and don't rule out further monetary policy easing," ECB Vice President Vitor Constancio said in a speech in Berlin.

One of the favorite further measures is apparently charging financial institutions for funds deposited with the central bank:

On Wednesday, the ECB's top economist, Peter Praet, in an interview with German newspaper Die Zeit, said the central bank is preparing a number of measures to counter low inflation. He mentioned a negative rate on deposits as a possible option in combination with other measures.

I don't presume to know enough about financial institutions in Europe to weigh in on the likely effectiveness of such an approach. I do know that we have found reasons to believe that there are limits to such a tool in the U.S. context, as the New York Fed's Ken Garbade and Jamie McAndrews pointed out a couple of years back.

In part, the desire to think about an option such as negative interest rates on deposits appears to be driven by considerable skepticism about deploying more quantitative easing, or QE.

A drawback, in my view, of general discussions about the wisdom and effectiveness of large-scale asset purchase programs is that these policies come in many flavors. My belief, in fact, is that the Fed versions of QE1, QE2, and QE3 can be thought of as three quite different programs, useful to address three quite distinct challenges. You can flip through the slide deck of a presentation I gave last week at a conference sponsored by the Global Interdependence Center, but here is the essence of my argument:

  • QE1, as emphasized by former Fed Chair Ben Bernanke, was first and foremost credit policy. It was implemented when credit markets were still in a state of relative disarray and, arguably, segmented to some significant degree. Unlike credit policy, the focus of traditional or pure QE "is the quantity of bank reserves" (to use the Bernanke language). Although QE1 per se involved asset purchases in excess of $1.7 trillion, the Fed's balance sheet rose by less than $300 billion during the program's span. The reason, of course, is that the open-market purchases associated with QE1 largely just replaced expiring lending from the emergency-based facilities in place through most of 2008. In effect, with QE1 the Fed replaced one type of credit policy with another.
  • QE2, in contrast, looks to me like pure, traditional quantitative easing. It was a good old-fashioned Treasury-only asset purchase program, and the monetary base effectively increased in lockstep with the size of the program. Importantly, the salient concern of the moment was a clear deterioration of market-based inflation expectations and—particularly worrisome to us at the Atlanta Fed—rising beliefs that outright deflation might be in the cards. In retrospect, old-fashioned QE appears to have worked to address the old-fashioned problem of influencing inflation expectations. In fact, the turnaround in expectations can be clearly traced to the Bernanke comments at the August 2010 Kansas City Fed Economic Symposium, indicating that the Federal Open Market Committee (FOMC) was ready and willing to pull the QE tool out of the kit. That was an early lesson in the power of forward guidance, which brings us to...
  • ...QE3. I think it is a bit early to draw conclusions about the ultimate impact of QE3. I think you can contend that the Fed's latest large-scale asset purchase program has not had a large independent effect on interest rates or economic activity while still believing that QE3 has played an important role in supporting the economic recovery. These two, seemingly contradictory, opinions echo an argument suggested by Mike Woodford at the Kansas City Fed's Jackson Hole conference in 2012: QE3 was important as a signaling device in early stages of the deployment of the FOMC's primary tool, forward guidance regarding the period of exceptionally low interest rates. I would in fact argue that the winding down of QE3 makes all the more sense when seen through the lens of a forward guidance tool that has matured to the point of no longer requiring the credibility "booster shot" of words put to action via QE.

All of this is to argue that QE, as practiced, is not a single policy, effective in all variants in all circumstances, which means that the U.S. experience of the past might not apply to another time, let alone another place. But as I review the record of the past seven years, I see evidence that pure QE worked pretty well precisely when the central concern was managing inflation expectations (and, hence, I would say, inflation itself).

By Dave Altig, executive vice president and research director of the Atlanta Fed


Author: "macroblog" Tags: "Federal Reserve and Monetary Policy, Mon..."
Date: Thursday, 15 May 2014 15:20

Today’s news brings another indication that low inflation rates in the euro area have the attention of the European Central Bank. From the Wall Street Journal (Update: via MarketWatch):

Germany's central bank is willing to back an array of stimulus measures from the European Central Bank next month, including a negative rate on bank deposits and purchases of packaged bank loans if needed to keep inflation from staying too low, a person familiar with the matter said...

This marks the clearest signal yet that the Bundesbank, which has for years been defined by its conservative opposition to the ECB's emergency measures to combat the euro zone's debt crisis, is fully engaged in the fight against super-low inflation in the euro zone using monetary policy tools...

Notably, these tools apparently do not include Fed-style quantitative easing:

But the Bundesbank's backing has limits. It remains resistant to large-scale purchases of public and private debt, known as quantitative easing, the person said. The Bundesbank has discussed this option internally but has concluded that with government and corporate bond yields already quite low in Europe, the purchases wouldn't do much good and could instead create financial stability risks.

Should we conclude that a global consensus has now emerged about the value and wisdom of large-scale asset purchases, a.k.a. QE? We certainly have quite a bit of experience with large-scale purchases now. But I think it is fair to say that that experience has yet to yield firm consensus.

You probably don’t need much convincing that QE consensus remains elusive. But just in case, I invite you to consider the panel discussion we titled “Greasing the Skids: Was Quantitative Easing Needed to Unstick Markets? Or Has it Merely Sped Us toward the Next Crisis?” The discussion was organized for last month’s 2014 edition of the annual Atlanta Fed Financial Markets Conference.

Opinions among the panelists were, shall we say, diverse. You can view the entire session via this link. But if you don’t have an hour and 40 minutes to spare, here is the (less than) ten-minute highlight reel, wherein Carnegie Mellon Professor Allan Meltzer opines that Fed QE has become “a foolish program,” Jefferies LLC Chief Market Strategist David Zervos declares himself an unabashed “lover of QE,” and Federal Reserve Governor Jeremy Stein weighs in on some of the financial stability questions associated with very accommodative policy:


You probably detected some differences of opinion there. If that, however, didn’t satisfy your craving for unfiltered debate, click on through to this link to hear Professor Meltzer and Mr. Zervos consider some of Governor Stein’s comments on monitoring debt markets, regulatory approaches to pursuing financial stability objectives, and the efficacy of capital requirements for banks.

By Dave Altig, executive vice president and research director of the Atlanta Fed.


Author: "macroblog" Tags: "Banking, Capital Markets, Economic condi..."
Date: Friday, 09 May 2014 20:23

You might be unaware that May is Disability Insurance Awareness Month. We weren’t aware of it until recently, but the issue of disability—as a reason for nonparticipation in the labor market—has been very much on our minds of late. As we noted in a previous macroblog post, from the fourth quarter of 2007 through the end of 2013, the number of people claiming to be out of the labor force for reasons of illness or disability increased by almost 3 million (or 23 percent). The previous post also noted that the incidence of reported nonparticipation as a result of disability/illness is concentrated (unsurprisingly) in the age group from about 51 to 60.

In the past, we have examined the effects of the aging U.S. population on the labor force participation rate (LFPR). However, we have not yet specifically considered how much the aging of the population alone is responsible for the aforementioned increase in disability as a reason for dropping out of the labor force.

The following chart depicts over time the percent (by age group) reporting disability or illness as a reason for not participating in the labor force. Each line represents a different year, with the darkest line being 2013. The chart reveals a long-term trend of rising disability or illness as a reason for labor force nonparticipation for almost every age group.

Percent of Age Group Reporting Disability or Illness as the Reason for Not Participating in the Labor Market


The chart also shows that disability or illness is cited most often among people 51 to 65 years old—the current age of a large segment of the baby boomer cohort. In fact, the proportion of people in this age group increased from 20 percent in 2003 to 25 percent in 2013.

How much can the change in demographics during the past decade explain the rise in disability or illness as a reason for not participating in the labor market? The answer seems to be: Not a lot.

Following an approach you may have seen in this post, we break down into three components the change in the portion of people not participating in the labor force due to disability or illness. One component measures the change resulting from shifts within age groups (the within effect). Another component measures changes due to population shifts across age groups (the between effect). A third component allows for correlation across the two effects (a covariance term). Here’s what you get:

Contribution to Change in the Portion of the Population Who Don't Want a Job Because They Are Disabled or Ill


To recap, only about one-fifth of the rise in labor force nonparticipation attributed to reported illness or disability can be explained by the aging of the population per se. A full three-quarters appears to be associated with some sort of behavioral change.
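The within/between/covariance decomposition works like a standard shift-share calculation, sketched below with made-up age groups, population shares, and disability rates rather than the actual CPS figures.

```python
# Hypothetical population shares and disability/illness nonparticipation
# rates by age group for two years; all numbers are illustrative.
share_start = {"25-50": 0.55, "51-65": 0.30, "66+": 0.15}
share_end   = {"25-50": 0.50, "51-65": 0.33, "66+": 0.17}
rate_start  = {"25-50": 0.04, "51-65": 0.20, "66+": 0.10}
rate_end    = {"25-50": 0.05, "51-65": 0.25, "66+": 0.11}

groups = share_start.keys()
# Within effect: rate changes evaluated at fixed (initial) population shares.
within = sum(share_start[g] * (rate_end[g] - rate_start[g]) for g in groups)
# Between effect: population shifts evaluated at fixed (initial) rates.
between = sum((share_end[g] - share_start[g]) * rate_start[g] for g in groups)
# Covariance term: interaction of the share and rate changes.
covariance = sum((share_end[g] - share_start[g]) * (rate_end[g] - rate_start[g])
                 for g in groups)

total_change = sum(share_end[g] * rate_end[g] - share_start[g] * rate_start[g]
                   for g in groups)
# The three components sum exactly to the total change.
assert abs(total_change - (within + between + covariance)) < 1e-12
```

In these invented numbers, most of the change comes from the within effect (rates rising inside each age group), with a smaller between effect from the shifting age distribution, mirroring the qualitative finding in the post.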

What is the source of this behavioral change? Our experiment can’t say. But given that those who drop out of the labor force for reasons of disability/illness tend not to return, it would be worth finding out. Here is one perspective on the issue.

You can find even more on this topic via the Human Capital Compendium.

By Dave Altig, research director and executive vice president at the Atlanta Fed, and

Ellyn Terry, a senior economic analyst in the Atlanta Fed's research department


Author: "macroblog" Tags: "Employment, Labor Markets, Unemployment"
Date: Monday, 28 Apr 2014 18:40

New Data Sources: A Conversation with Google's Hal Varian

In recent years, there has been an explosion of new data coming from places like Google, Facebook, and Twitter. Economists and central bankers have begun to realize that these data may provide valuable insights into the economy that inform and improve the decisions made by policy makers.

As chief economist at Google and emeritus professor at UC Berkeley, Hal Varian is uniquely qualified to discuss the issues surrounding these new data sources. Last week he was kind enough to take some time out of his schedule to answer a few questions about these data, the benefits of using them, and their limitations.

Mark Curtis: You've argued that new data sources from Google can improve our ability to "nowcast." Can you describe what this means and how the enormous amount of data that Google collects can be used to better understand the present?
Hal Varian: The simplest definition of "nowcasting" is "contemporaneous forecasting," though I do agree with David Hendry that this definition is probably too simple. Over the past decade or so, firms have spent billions of dollars to set up real-time data warehouses that track business metrics on a daily level. These metrics could include retail sales (like Wal-Mart and Target), package delivery (UPS and FedEx), credit card expenditure (MasterCard's SpendingPulse), employment (Intuit's small business employment index), and many other economically relevant measures. We have worked primarily with Google data, because it's what we have available, but there are lots of other sources.

Curtis: The ability to "nowcast" is also crucially important to the Fed. In his December press conference, former Fed Chairman Ben Bernanke stated that the Fed may have been slow to acknowledge the crisis in part due to deficient real-time information. Do you believe that new data sources such as Google search data might be able to improve the Fed's understanding of where the economy is and where it is going?
Varian: Yes, I think that this is definitely a possibility. The real-time data sources mentioned above are a good starting point. Google data seems to be helpful in getting real-time estimates of initial claims for unemployment benefits, housing sales, and loan modification, among other things.

Curtis: Janet Yellen stated in her first press conference as Fed Chair that the Fed should use other labor market indicators beyond the unemployment rate when measuring the health of labor markets. (The Atlanta Fed publishes a labor market spider chart incorporating a variety of indicators.) Are there particular indicators that Google produces that could be useful in this regard?
Varian: Absolutely. Queries related to job search seem to be indicative of labor market activity. Interestingly, queries having to do with killing time also seem to be correlated with unemployment measures!

Curtis: What are the downsides or potential pitfalls of using these types of new data sources?
Varian: First, the real measures—like credit card spending—are probably more indicative of actual outcomes than search data. Search is about intention, and spending is about transactions. Second, there can be feedback from news media and the like that may distort the intention measures. A headline story about a jump in unemployment can stimulate a lot of "unemployment rate" searches, so you have to be careful about how you interpret the data. Third, we've only had one recession since Google has been available, and it was pretty clearly a financially driven recession. But there are other kinds of recessions having to do with supply shocks, like energy prices, or monetary policy, as in the early 1980s. So we need to be careful about generalizing too broadly from this one example.

Curtis: Given the predominance of new data coming from Google, Twitter, and Facebook, do you think that this will limit, or even make obsolete, the role of traditional government statistical agencies such as the Census Bureau and the Bureau of Labor Statistics in the future? If not, do you believe there is the potential for collaboration between these agencies and companies such as Google?
Varian: The government statistical agencies are the gold standard for data collection. It is likely that real-time data can be helpful in providing leading indicators for the standard metrics, and supplementing them in various ways, but I think it is highly unlikely that they will replace them. I hope that the private and public sector can work together in fruitful ways to exploit new sources of real-time data in ways that are mutually beneficial.

Curtis: A few years ago, former Fed Chairman Bernanke challenged researchers when he said, "Do we need new measures of expectations or new surveys? Information on the price expectations of businesses—who are, after all, the price setters in the first instance—as well as information on nominal wage expectations is particularly scarce." Do data from Google have the potential to fill this need?
Varian: We have a new product called Google Consumer Surveys that can be used to survey a broad audience of consumers. We don't have ways to go after specific audiences such as business managers or workers looking for jobs. But I wouldn't rule that out in the future.

Curtis: MIT recently introduced a big-data measure of inflation called the Billion Prices Project. Can you see a big future in big data as a measure of inflation?
Varian: Yes, I think so. I know there are also projects looking at supermarket scanner data and the like. One difficulty with online data is that it leaves out gasoline, electricity, housing, large consumer durables, and other categories of consumption. On the other hand, it is quite good for discretionary consumer spending. So I think that online price surveys will enable inexpensive ways to gather certain sorts of price data, but it certainly won't replace existing methods.

By Mark Curtis, a visiting scholar in the Atlanta Fed's research department


Author: "macroblog" Tags: "Economics, Forecasts, Technology, Web/Te..."
Date: Thursday, 17 Apr 2014 20:37

At a recent speech in Miami, Atlanta Fed President Dennis Lockhart had this to say:

Wage growth by most measures has been very low. I take this as a signal of labor market weakness, and in turn a signal of a lack of significant upward unit cost pressure on inflation.

This macroblog post examines whether the data support this assertion (answer: yes) and whether wage inflation is more sensitive to some measures of labor underutilization than other measures (answer: apparently, yes). San Francisco Fed President John Williams touched on the latter topic in a recent speech (emphasis mine):

We generally look at the overall unemployment rate as a good yardstick of labor market slack and inflation pressures. However, its usefulness may be compromised today by the extraordinary number of long-term unemployed—defined as those out of the workforce for six months or longer... Standard models of inflation typically do not distinguish between the short- and long-term unemployed, because they're assumed to affect wage and price inflation in the same way. However, recent research suggests that the level of long-term unemployment may not influence inflation pressures to the same degree as short-term unemployment.

And Fed Chair Janet Yellen said this at her March 19 press conference:

With respect to the issue of short-term unemployment and its being more relevant for inflation and a better measure of the labor market, I've seen research along those lines. I think it would be tremendously premature to adopt any notion that says that that is an accurate read on either how inflation is determined or what constitutes slack in the labor market.

The research to which President Williams refers includes papers by economists Robert Gordon and Mark Watson. (For further evidence, see this draft by Princeton economists Alan Krueger, Judd Cramer, and David Cho.)

The analysis here builds on this research by broadening the measures of labor underutilization beyond the short-term and long-term unemployment rates that add up to the standard unemployment rate called U-3. The U-5 underutilization measure includes both conventional unemployment and "marginally attached workers" who are not in the labor force but who want a job and have actively looked in the past year. The difference between U-5 and U-3 is a very close proxy for the number of marginally attached relative to the size of the labor force.

U-6 encompasses U-5 as well as those who work less than 35 hours for an economic reason. The difference between U-6 and U-5 is a very close proxy for the share of "part-time for economic reason" workers in the labor force. These nonoverlapping measures of labor underutilization rates are all shown in the chart below.


The series are highly correlated, making it difficult to isolate the impact of any particular labor underutilization rate on wage inflation (e.g., "How much will wage inflation change if the short-term unemployment rate rises 1.0 percentage point, holding all of the underutilization measures in the above figure constant?").
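The arithmetic behind the nonoverlapping gaps can be made concrete with a few hypothetical counts (in thousands; not actual BLS data), verifying that U-5 minus U-3 and U-6 minus U-5 closely track each group's share of the labor force.

```python
# Hypothetical counts in thousands; not actual BLS figures.
unemployed = 9_500           # the U-3 numerator
marginally_attached = 2_100  # want a job, searched in past year, not past 4 weeks
part_time_econ = 7_200       # working fewer than 35 hours for an economic reason
labor_force = 155_000

u3 = unemployed / labor_force
# U-5 adds the marginally attached to both numerator and denominator;
# U-6 additionally counts involuntary part-time workers in the numerator.
u5 = (unemployed + marginally_attached) / (labor_force + marginally_attached)
u6 = (unemployed + marginally_attached + part_time_econ) / (labor_force + marginally_attached)

# Each gap closely proxies the corresponding group's share of the labor force.
assert abs((u5 - u3) - marginally_attached / labor_force) < 0.002
assert abs((u6 - u5) - part_time_econ / labor_force) < 0.002
```

The proxies are not exact because the denominator expands when the marginally attached are added, but the discrepancy is small relative to the movements in the series.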

We follow the approach of Staiger, Stock, and Watson (2001) by using state-level data to relate wage inflation to unemployment in a so-called "wage-Phillips curve." Because the 2007–09 recession hit some states harder than others, we can use the cross-sectional variation in outcomes across states to arrive at more precise estimates of the separate impacts of the labor underutilization measures on wage inflation (see the chart).


Five-year state-level wage inflation rates for 2008–13, using monthly Current Population Survey (CPS) microdata, are shown on the vertical axis. The CPS microdata are also used to construct all of the labor underutilization measures. Each circle represents an individual state (red for long-term unemployment and blue for short-term unemployment), and each circle's area is proportional to the state's population share. Two noteworthy states are pointed out for illustration. North Dakota has had lower unemployment and (much) higher wage inflation than the other states (presumably because of its energy boom). And California has had higher unemployment and (somewhat) lower wage inflation than average. Even after excluding North Dakota, we see a clear negative relationship between wage inflation and underutilization measured with either short-term or long-term unemployment.

Because short-term and long-term unemployment are highly correlated (also apparent in the above plot), one can't tell visually if one underutilization measure is more important for wage inflation than the other. To make this assessment, we need to estimate a regression. The regression—which also includes both U-5 minus U-3 and U-6 minus U-5—adjusts wages for changes in the composition of the workforce. This composition adjustment, also made by Staiger, Stock and Watson (2001), controls for the fact that lower-skilled workers tend to be laid off at a disproportionately higher rate during recessions, thereby putting upward pressure on wages. The regression also weights observations by population shares.
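A population-weighted least-squares fit of this kind can be sketched in a few lines. The state observations below are invented for illustration (the first mimics a North Dakota-like outlier, the second a California-like state), and the sketch omits the composition adjustment and the additional underutilization regressors of the actual specification.

```python
# Each tuple: (wage inflation %, short-term unemployment rate %, population share).
# All data are made up for illustration.
states = [
    (4.0, 2.0, 0.01),   # low unemployment, high wage growth (outlier)
    (1.5, 5.0, 0.12),   # high unemployment, low wage growth
    (2.0, 4.0, 0.05),
    (2.5, 3.5, 0.03),
]

# Closed-form weighted least squares with a single regressor.
w_total = sum(w for _, _, w in states)
x_bar = sum(w * x for _, x, w in states) / w_total
y_bar = sum(w * y for y, _, w in states) / w_total
beta = (sum(w * (x - x_bar) * (y - y_bar) for y, x, w in states)
        / sum(w * (x - x_bar) ** 2 for _, x, w in states))
alpha = y_bar - beta * x_bar

# A negative slope is the wage-Phillips-curve relationship: more short-term
# unemployment is associated with lower wage inflation.
assert beta < 0
```

The full regression in the post would instead stack several underutilization measures as regressors, so that each coefficient is estimated holding the others constant.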

The regression estimates imply that short-term unemployment is the most important determinant of wage inflation while U-6 minus U-5—the proxy for "part-time for economic reason" workers—also has a statistically significant impact. The other two labor underutilization measures do not affect wage inflation statistically different from zero. Rather than provide regression coefficients, we decompose observed U.S. wage inflation for 1995–2013 into contributions from the labor underutilization measures, workforce composition changes, and everything else (see the chart).


Both short-term unemployment and workers who are part-time for economic reasons have pushed down wage inflation. But the "part-time for economic reason" impact has become relatively more important recently because of the stubbornly slow decline in undesired part-time employment.

By Pat Higgins, a senior economist in the Atlanta Fed's research department


Author: "macroblog" Tags: "Employment, Labor Markets, Unemployment"
Date: Thursday, 10 Apr 2014 18:50

As a follow-up to this post on recent trends in labor force participation, we look specifically at the prime-age group of 25- to 54-year-olds. The participation decisions of this age cohort are less affected by the aging population and the longer-term trend toward lower participation of youths because of rising school enrollment rates. In that sense, they give us a cleaner window on responses of participation to changing business cycle conditions.

The labor force participation rate of the prime-age group fell from 83 percent just before the Great Recession to 81 percent in 2013. The participation rate of prime-age males has been trending down since the 1960s. The participation rate of women, which had been rising for most of the post-World War II period, appears to have plateaued in the 1990s and has more recently shared the declining pattern of participation for prime-age men. But the decline in participation for both groups appears to have accelerated between 2007 and 2013 (see chart 1).

140410_1

We look at the various reasons people cite for not participating in the labor force from the monthly Current Population Survey. These reasons give us some insight into the impact of changes in employment conditions since 2007 on labor force participation. The data on those not in the official labor force can be broken into two broad categories: those who say they don't currently want a job and those who say they do want a job but don't satisfy the active search criteria for being in the official labor force. Of the prime-age population not in the labor force, most say they don't currently want a job. At the end of 2007, about 15 percent of 25- to 54-year-olds said they didn't want a job, and slightly fewer than 2 percent said they did want a job. By the end of 2013, the don't-want-a-job share had reached nearly 17 percent, and the want-a-job share had risen to slightly above 2 percent (see chart 2).

140410_2

Prime-Age Nonparticipation: Currently Want a Job
Most of the rise in the share of the prime-age population in the want-a-job category is due to so-called marginally attached individuals—they are available and want a job, have looked for a job in the past year, but haven't looked in the past four weeks—especially those who say they are not currently looking because they have become discouraged about job-finding prospects (see the blue and orange lines of chart 3). In 2013, there were about 1.1 million prime-age marginally attached individuals compared to 0.7 million in 2007, and the prime-age marginally attached accounted for about half of all marginally attached in the population.

140410_3

The marginally attached are aptly named in the sense that they have a reasonably high propensity to reenter the labor force—more than 40 percent are in the labor force in the next month and more than 50 percent are in the labor force 12 months later (see chart 4). This macroblog post discusses what the relative stability in the flow rate from marginally attached to the labor force means for thinking about the amount of slack labor resources in the economy.

140410_4

Prime-Age Nonparticipation: Currently Don't Want a Job
As chart 2 makes evident, the vast majority of the rise in prime-age nonparticipation since 2009 is due to the increase in those saying they do not currently want a job. The largest contributors to the increase are individuals who say they are too ill or disabled to work or who are in school or training (see the orange and blue lines in chart 5).

140410_5

Those who say they don't want a job because they are disabled have a relatively low propensity to subsequently (re)enter the labor force. So if the trend of rising disability persists, it will put further downward pressure on prime-age participation. Those who say they don't currently want a job because they are in school or training have a much greater likelihood of (re)entering the labor force, although this tendency has declined slightly since 2007 (see chart 6).

140410_6

Note that the number of people in the Current Population Survey citing disability as the reason for not currently wanting a job is not the same as either the number of people applying for or receiving Social Security Disability Insurance. However, a similar trend has been evident in overall disability insurance applications and enrollments (see here).

Some of the rise in the share of prime-age individuals who say they don't want a job could be linked to erosion of skills resulting from prolonged unemployment or permanent changes in the composition of demand (a different mix of skills and job descriptions). It is likely that the rise in the share of prime-age individuals not currently wanting a job because they are in school or in training is partly a response to the perception of inadequate skills. The increase in recent years is evident across all ages until about age 50 but is especially strong among the youngest prime-age individuals (see chart 7).

[Chart 7]

But lack of required skills is not the only plausible explanation for the rise in the share of prime-age individuals who say they don't currently want a job. For instance, the increased incidence of disability is partly due to changes in the age distribution within the prime-age category. The share of the prime-age population between 50 and 54 years old—the tail of the baby boomer cohort—has increased significantly (see chart 8).

[Chart 8]

This increase is important because the incidence of reported disability within the prime-age population increases with age and has become more common in recent years, especially for those older than 45 (see chart 9).

[Chart 9]

Conclusions
The health of the labor market clearly affects the decision of prime-age individuals to enroll in school or training, apply for disability insurance, or stay home and take care of family. Discouragement over job prospects rose during the Great Recession, causing many unemployed people to drop out of the labor force. The rise in the number of prime-age marginally attached workers reflects this trend and can account for some of the decline in participation between 2007 and 2009.

But most of the postrecession rise in prime-age nonparticipation is from the people who say they don't currently want a job. How much does that increase reflect trends established well before the recession, and how much can be attributed to the recession and slow recovery? It's hard to say with much certainty. For example, participation by prime-age men has been on a secular decline for decades, but the pace accelerated after 2007—see here for more discussion.

Undoubtedly, some people will reenter the labor market as it strengthens further, especially those who left to undertake additional training. But for others, the prospect of not finding a satisfactory job will cause them to continue to stay out of the labor market. The increased incidence of disability reported among prime-age individuals suggests permanent detachment from the labor market and will put continued downward pressure on participation if the trend continues. The Bureau of Labor Statistics projects that the prime-age participation rate will stabilize around its 2013 level. Given all the contradictory factors in play, we think this projection should have a pretty wide confidence interval around it.

Note: All data shown are 12-month moving averages to emphasize persistent shifts in trends.
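For readers who want to replicate the smoothing, a trailing 12-month moving average is straightforward to compute. This minimal pure-Python sketch uses a stand-in series rather than actual Current Population Survey data:

```python
def moving_average_12(values):
    # Trailing 12-month mean; the first 11 observations have no full window.
    return [sum(values[i - 11:i + 1]) / 12 for i in range(11, len(values))]

monthly = list(range(24))         # stand-in for 24 monthly observations
smoothed = moving_average_12(monthly)
print(smoothed[0], smoothed[-1])  # 5.5 17.5
```

Because each smoothed point averages a full year of data, one-month blips wash out and only persistent shifts in trend remain visible.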

By Melinda Pitts, director, Center for Human Capital Studies,

John Robertson, a vice president and senior economist in the Atlanta Fed's research department, and

Ellyn Terry, a senior economic analyst in the Atlanta Fed's research department

Author: "macroblog" Tags: "Business Cycles, Employment, Labor Marke..."
Date: Wednesday, 09 Apr 2014 14:56

Introduction
The rate of labor force participation (the share of the civilian noninstitutionalized population aged 16 and older in the labor force) has declined significantly since 2007. To what extent were the Great Recession and tepid recovery responsible?
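As a concrete illustration of the definition above, the arithmetic is a single ratio. The figures below are hypothetical round numbers chosen to land near the 2013 level of roughly 63.3 percent, not official BLS values:

```python
# Hypothetical figures in millions, chosen to approximate 2013 magnitudes;
# official BLS values differ.
civilian_pop_16plus = 245.7  # civilian noninstitutionalized population, aged 16+
labor_force = 155.5          # employed plus unemployed (actively looking)

participation_rate = 100 * labor_force / civilian_pop_16plus
print(f"participation rate: {participation_rate:.1f}%")  # about 63.3%
```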

In this post and one that will follow, we offer a series of charts using data from the Current Population Survey to explore some of the possible reasons behind the 2007–13 drop in participation. This first post describes the impact of the changing-age composition of the population and changes in labor force participation within specific age cohorts—see Calculated Risk posts here and here for a related treatment, and also this recent BLS study. The next post will look at the issue of potential cyclical impacts on participation by examining the behavior of the prime-age population.

Putting the decline in context
After rising from the mid-1960s through 1990, the overall labor force participation rate was relatively stable between 1990 and 2007. But participation has declined sharply since 2007. By 2013, participation was at the lowest level since 1978 (see chart 1).

[Chart 1]

For men, the longer-term declining trend of participation accelerated after 2007. For women, after having been relatively stable since the late 1990s, participation began to decline after 2009. The decline for both males and females since 2009 was similar (see chart 2).

[Chart 2]

The impact of retirement
One of the most important features of labor force participation is that it varies considerably over the life cycle: the rate of participation is low among young individuals, peaks during the prime-age years of 25 to 54, and then declines (see chart 3). So a change in the age distribution of the population can result in a significant change in overall labor force participation.

[Chart 3]
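The composition effect described above can be sketched numerically. In this toy example the age-group participation rates and population shares are hypothetical, chosen only to show that shifting population weight toward lower-participation ages drags down the aggregate even when no group's own rate changes:

```python
# Hypothetical age-group participation rates (percent); held fixed throughout.
rates = {"16-24": 55.0, "25-54": 81.0, "55+": 40.0}

# Hypothetical population shares for two years; the only change is an older mix.
shares_early = {"16-24": 0.16, "25-54": 0.54, "55+": 0.30}
shares_late  = {"16-24": 0.15, "25-54": 0.50, "55+": 0.35}

def overall_rate(shares):
    # Overall participation is the population-share-weighted average of group rates.
    return sum(shares[g] * rates[g] for g in rates)

print(round(overall_rate(shares_early), 2))  # 64.54
print(round(overall_rate(shares_late), 2))   # 62.75
```

With every group's rate held fixed, the older population mix alone shaves almost two percentage points off the aggregate in this example.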

The age distribution of the population has been shifting outward for some time. This is a result of the so-called baby boomer generation—that is, people born between 1946 and 1964 (see chart 4). The oldest baby boomers turned 62 in 2008 and became eligible for Social Security retirement benefits.

[Chart 4]

At the same time the age distribution of the population has shifted out, the rate of retirement of older Americans has been declining. Retirement rates have generally been drifting down since the early 2000s (see chart 5). The decline in age-specific retirement rates has resulted in rising age-specific labor force participation rates. For example, from 1999 to 2013, the share of 62-year-old retirees declined from 38 percent to 28 percent. The BLS projects that this trend will continue at a similar pace in coming years (see table 3 of the BLS report).

[Chart 5]

Although the decline in the propensity to retire has put some upward pressure on overall labor force participation, that effect is dominated by the sheer increase in the number of people reaching retirement age. The net result has been a steep rise in the share of the population saying they are not in the labor force because they are retired (see chart 6).

[Chart 6]

Participation by age group
Individuals aged 16–24
The labor force participation rate for young individuals (between 16 and 24 years old) has been generally declining since the late 1990s. After slowing in the mid-2000s, the decline accelerated again during the Great Recession. However, participation has been relatively stable since 2009 (see chart 7). Nonetheless, the BLS projects that the participation rate for 16- to 24-year-olds will decline further, albeit at a slower pace than it declined between 2000 and 2009, and will fall a little below 50 percent by 2022.

[Chart 7]

The change in participation among young people can be attributed almost entirely to enrollment rates in education programs (see here) and lower labor force participation among enrollees (see chart 8). The change in the share of 16- to 24-year-olds who say they don't currently want a job because they are in school closely matches the change in labor force participation for the entire cohort.

[Chart 8]

Individuals aged 25–54 (prime age)
Generally, people aged 25 to 54 are the group most likely to be participating in the labor market (see chart 3). These so-called prime-age individuals are less likely to be making retirement decisions than older individuals, and less likely to be enrolled in schooling or training than younger individuals.

However, the prime-age labor force participation rate declined considerably between 2007 and 2013, and at a much faster pace than had been seen in the years prior to the recession (see chart 9). Reflective of the overall gender-specific participation differences seen in chart 2, the decline in prime-age female participation did not take hold until after 2009, and since 2009 the decline in both prime-age male and female participation has been quite similar. Nevertheless, the BLS projects that prime-age participation will stabilize in coming years and prime-age participation in 2022 will be close to its 2013 level.

[Chart 9]

Implications
The BLS projects that participation by age group will look like this in 2022 relative to 2013 (see chart 10).

[Chart 10]

Participation by youths is projected to continue to fall. The participation of older workers is projected to increase, but it will remain significantly lower than that of the prime-age group. Combined with an age distribution that has also continued to shift outward (see chart 11), the overall participation rate is expected to decline over the next several years from its 2013 level of around 63.3 percent. From the BLS study:

A combination of demographic, structural, and cyclical factors has affected the overall labor force participation rate, as well as the participation rates of specific groups, in the past. BLS projects that, as has been the case for the last 10 years or so, these factors will exert downward pressure on the overall labor force participation rate over the 2012–2022 period and the rate will gradually decline further, to 61.6 percent in 2022.

[Chart 11]

However, an important assumption in the BLS projection is that the post-2007 decline in prime-age participation will not persist. Indeed, the data for the first quarter of 2014 do suggest that some stabilization has occurred.

But separating what is trend from what is cyclical is challenging. The rapid pace of the decline in participation among the prime-age population between 2007 and 2013 is somewhat puzzling. Could this decline reflect a temporary cyclical effect or something more permanent? A follow-up blog will explore this question in more detail using the micro data from the Current Population Survey.

Note: All data shown are 12-month moving averages to emphasize persistent shifts in trends.

 

By Melinda Pitts, director, Center for Human Capital Studies,

John Robertson, a vice president and senior economist in the Atlanta Fed's research department, and

Ellyn Terry, a senior economic analyst in the Atlanta Fed's research department

Author: "macroblog" Tags: "Business Cycles, Employment, Unemploymen..."
Date: Tuesday, 18 Mar 2014 17:17


A little more than a week ago, all eyes were on the February Employment Situation report released by the U.S. Bureau of Labor Statistics. The Establishment Survey surprised on the upside: nonfarm payrolls rose 175,000 in February, and payrolls were revised upward for December and January. The Household Survey indicated that the unemployment rate edged up slightly to 6.7 percent in February from 6.6 percent the prior month, and the labor force participation rate held steady at 63.0 percent.

These are some of the facts on the table as the Federal Open Market Committee meets today and tomorrow and, judging from recent comments from the folks who will be at that meeting, those facts (and more like them) will be very much front of mind.

These days, multiple tools are available to assist both casual and expert observers in navigating the rich and sometimes baffling story of labor markets in the post-Great Recession world. Just last week, you could find a new "Guide for the Perplexed" on labor market slack in The New York Times and an interactive feature on the "Eight Different Faces of the Labor Market" at the New York Fed's Liberty Street Economics blog. And that's not to mention the most recent update of the Atlanta Fed’s own 13-headed Labor Market Spider Chart.

All of these contributions reflect a great deal of effort to understand the story of what's happening in labor markets. As part of that effort, our colleagues across the Federal Reserve System have been taking deeper dives into employment statistics and reaching out into their communities to get a better understanding of labor force dynamics and workforce development issues. This research can be found on the various Reserve Bank and Board websites.

To facilitate access to that work, the Atlanta Fed's Center for Human Capital Studies has brought those resources together in the Federal Reserve Human Capital Compendium (HCC). We are pleased to announce that we have recently enhanced the HCC so you can perform simple or advanced searches and explore whatever facet of that research strikes your fancy (see the figure).

We encourage you to take your own deeper dive into the latest research across the Federal Reserve System by browsing the HCC or searching out those labor topics that have piqued your interest lately.

By Whitney Mancuso, a senior economic analyst in the Atlanta Fed's research department

Author: "macroblog" Tags: "Employment, Labor Markets, Unemployment"
Date: Saturday, 08 Mar 2014 15:59

Today's employment report for the month of February maybe took a bit of drama out of one key question going into the next meeting of the Federal Open Market Committee (FOMC): What will happen to the FOMC's policy language when the economy hits or passes the 6.5 percent unemployment rate threshold for considering policy-rate liftoff? With the unemployment rate for February checking in at 6.7 percent, a breach of the threshold clearly won't have happened when the Committee meets in a little less than two weeks.

I say "maybe took a bit of drama out" because I'm not sure there was much drama left. All you had to do was listen to the Fed talkers yesterday to know that. This is from the highlights summary of a speech yesterday by Charles Plosser, president of the Philadelphia Fed...

President Plosser believes the Federal Open Market Committee has to revamp its current forward guidance regarding the future federal funds rate path because the 6.5 percent unemployment threshold has become irrelevant.

... and this from a Wall Street Journal interview with William Dudley, president of the New York Fed:

Mr. Dudley, in a Wall Street Journal interview, also said the Fed's 6.5% unemployment rate threshold for considering increases in short-term interest rates is "obsolete" and he would advocate scrapping it at the Fed's next meeting March 18–19.

From our shop, Atlanta Fed president Dennis Lockhart echoed those sentiments in a speech at Georgetown University:

Given that measured unemployment is so close to 6.5 percent, the time is approaching for a refreshed explanation of how unemployment or broader employment conditions are to be factored into a liftoff decision.

That statement doesn't mean we in Atlanta are disregarding the unemployment rate altogether. We have for some time been describing the broader net we have cast in fishing for labor market clues. One important aspect of that broader perspective is captured in the so-called U-6 measure of unemployment, about which President Lockhart's speech gives a quick tutorial:

The data used to construct the unemployment rate come from a survey of households conducted by the Census Bureau for the Bureau of Labor Statistics. To be counted as a participant in the labor force, a respondent must give rather specific qualifying answers to questions in the survey...

Those who are available, have looked for work in the past year, but have not recently looked for work are labeled "marginally attached." They are not in the official labor force, so they are not officially unemployed. You might say they are a "shadow labor force"...

One measure that counts the marginally attached in the pool of the unemployed is U-6.

U-6 also includes working people who identify themselves as working "part time for economic reasons." These are people who want to work full time (defined as 35 hours or more) but are able only to get fewer than 35 hours of work.
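Under the standard BLS definitions, U-3 divides the unemployed by the labor force, while U-6 adds the marginally attached and those working part time for economic reasons to the numerator and the marginally attached to the denominator. The sketch below uses hypothetical counts in the rough neighborhood of early-2014 magnitudes; official BLS figures differ:

```python
# Hypothetical counts in millions, roughly in the neighborhood of early-2014
# magnitudes; official BLS figures differ.
unemployed          = 10.5   # jobless and actively searched in the past 4 weeks
labor_force         = 155.7  # employed plus unemployed
marginally_attached = 2.3    # want a job, searched in past year, not past 4 weeks
part_time_econ      = 7.2    # part time for economic reasons (want 35+ hours)

u3 = 100 * unemployed / labor_force
u6 = 100 * (unemployed + marginally_attached + part_time_econ) / (
    labor_force + marginally_attached)
print(f"U-3 about {u3:.1f}%, U-6 about {u6:.1f}%")  # about 6.7% and 12.7%
```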

The "shadow labor force" comment is based on these observations. First, in President Lockhart's words:

The makeup of the class of marginally attached workers is quite fluid. About 40 percent of the marginally attached in any given month join the official labor force in the subsequent month.

There is no new story there. The frequency with which people move from marginally attached to in the labor force has been stable for quite a while (see the chart):


President Lockhart's second observation regarding the marginally attached is more important:

But only about 10 percent of those who move into the labor force find a job right away. In effect, they went from unofficially unemployed to officially unemployed.

The chart below depicts this observation:


Relative to before the Great Recession, the frequency with which people transitioned from marginally attached to employment has fallen by about 5 percentage points.
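Taking the quoted shares at face value (about 40 percent of the marginally attached join the labor force the next month, and only about 10 percent of those movers find a job right away), a back-of-the-envelope decomposition looks like this; the inputs are the post's rounded figures, not precise CPS flow rates:

```python
# Normalize the pool of marginally attached workers to 1.
pool = 1.0

movers = 0.40 * pool              # ~40% join the labor force the next month
employed_at_once = 0.10 * movers  # ~10% of movers find a job right away
# The remaining movers go from "unofficially" to officially unemployed.
newly_unemployed = movers - employed_at_once

print(round(employed_at_once, 2), round(newly_unemployed, 2))  # 0.04 0.36
```

On this reading, roughly nine in ten of the marginally attached who reenter the labor force show up first in the unemployment count rather than in employment.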

That decline is related to this conclusion (again from President Lockhart):

Here's my point: what U-6 captures matters. Measures such as marginally attached and part time for economic reasons became elevated in the recession and have not come down materially. Said differently, broader measures of unemployment like U-6 suggest that a significant level of slack remains in our employment markets.

It is not that we have failed to see progress in the U-6 measure of labor market slack. In fact, since the end of the recession, the U-6 unemployment rate has declined about in tandem with the standard official unemployment rate (designated U-3 by the U.S. Bureau of Labor Statistics; see the chart):


What we have failed to undo is the outsized run-up in the number of marginally attached workers and of people working part time for economic reasons that occurred during the recession (see the chart):


One interpretation of these observations is that the relative increase in U-6 represents structural changes that cannot be fixed by policies aimed at stimulating spending. But we are drawn to the fact, described above, that the marginally attached are flowing into the labor market at the same pace as before the recession, but they are finding jobs at a much slower pace, making us hesitant to fully embrace a structural interpretation.

Or, as our boss said yesterday:

As a policymaker, I am concerned about the unemployed in the official labor force, but I am also concerned about the unemployed in the shadow labor force. To get close to full employment, as I think of it, would involve substantial absorption of this shadow labor force. I do not think we're near that point yet. This is one of the reasons I support continuing with a highly accommodative policy and deferring liftoff for a while longer.

But if you are looking for some good news, here it is: Though the official unemployment rate has been essentially flat for the past three months, the broader U-6 measure that we are monitoring closely has fallen by half a percentage point. More of that, and we will really be getting somewhere.

By Dave Altig, research director and executive vice president at the Atlanta Fed


Author: "macroblog" Tags: "Employment, Labor Markets, Unemployment"