Continued stagnation in July.
Wisconsin’s Department of Workforce Development (DWD) today released figures for July employment (release here).
Figure 1: Log private nonfarm payroll employment in United States (black), and in Wisconsin (red), both seasonally adjusted, and normalized to 2011M01=0. Source: BLS for US, and DWD for Wisconsin, and author’s calculations.
Figure 2: Log nonfarm payroll employment in United States (black), and in Wisconsin (red), both seasonally adjusted, and normalized to 2011M01=0. Source: BLS for US, and DWD for Wisconsin, and author’s calculations.
There were downward revisions to both series, pushing down the already pathetic June growth rates.
(Note that the more volatile estimates derived from the household survey indicate a loss of 1.4 thousand jobs in July, on a seasonally adjusted basis.)
Since January 2011, Wisconsin private nonfarm payroll employment growth has been a cumulative 2.8% lower than that for the United States as a whole, while NFP has been 2.1% lower.
According to these figures, private employment is now 105.1 thousand below the trend consistent with Governor Walker’s promise to create 250 thousand net new jobs by the end of his first term (reiterated one year ago).
These figures are likely to be revised downward when the Quarterly Census of Employment and Wages (QCEW) numbers are released (see here). Recall, Governor Walker was for the QCEW before he was against the QCEW…(or at least preferring the establishment survey).
Figure 3: Private nonfarm payroll employment in Wisconsin (red), linear trend consistent with 250,000 net new jobs by 2015M01 (dark gray), and employment forecast from Wisconsin Economic Outlook (March 2014), quadratic interpolation. Source: DWD, Wisconsin Economic Outlook, and author’s calculations.
In order to hit the target by January 2015, the Wisconsin economy will need to generate 22.6 thousand net new jobs in each month until January. Mean job creation over Governor Walker’s term thus far has been 2.7 thousand per month. This suggests that it is unlikely that the goal will be achieved.
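For the record, the required pace can be reproduced with a back-of-the-envelope calculation (a sketch using the figures quoted above; the month counts and the linear-trend assumption are mine):

```python
# Back-of-the-envelope check of the required monthly job-creation pace.
# The target and gap figures are from the text; month counts are assumptions.
TARGET = 250_000           # promised net new private jobs, by 2015M01
TOTAL_MONTHS = 48          # 2011M01 through 2015M01
MONTHS_ELAPSED = 42        # through 2014M07
GAP_BELOW_TREND = 105_100  # shortfall relative to the linear trend (from text)

# The linear trend implies proportional progress toward the target.
trend_level = TARGET * MONTHS_ELAPSED / TOTAL_MONTHS  # 218,750
actual_level = trend_level - GAP_BELOW_TREND          # 113,650
months_left = TOTAL_MONTHS - MONTHS_ELAPSED           # 6

required_pace = (TARGET - actual_level) / months_left
print(round(required_pace / 1000, 1))  # 22.7, close to the 22.6 thousand cited
```

The small discrepancy with the 22.6 thousand figure reflects rounding in the reported gap.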
Data on Kansas, which has experienced stagnant employment in recent months, has not yet been released. I will update as soon as they are available. For a depiction of June data for CA, MN, and KS relative to WI, see here.
Update, 11:30AM 8/15: The Wisconsin GOP has their take on the employment situation: “Wisconsin is Back On, More Jobs Created Under Scott Walker”.
Consider this prognostication from 2011:
Americans face the most predictable economic crisis in this nation’s history. Absent reform, the panic ahead is no longer a question of if, but rather when. A deterioration of confidence by investors in government’s ability to pay its bills will drive interest rates up, increasing borrowing costs for government, small businesses and families alike. A vicious cycle of debt will compound upon itself; the available exit options once the crisis hits will be limited; and all will involve pain. (p.59)
Writing about the President’s FY2012 budget:
…Autopilot spending will soon crowd out all other priorities in the federal budget, with spending on Medicare, Medicaid, Social Security and interest on the national debt eclipsing all anticipated revenue by 2025. Borrowing and spending by the public sector will crowd out investment and growth in the private sector. … [emphasis added] (p.56)
Those quotes are from Representative Paul Ryan’s FY2012 “Path to Prosperity”.
It is all the more surprising, then, to consider actual data that has come out over the past three years since publication of that document.
First, consider the sources of saving available to the US economy, by reference to the National Savings Identity:
(S-I) + (T-G) ≡ CA
where S and I are private savings and investment, (T-G) is the public sector budget balance, and CA is the current account. Figure 1 shows the evolution of these series, all expressed as a share of nominal GDP (note that I have omitted the statistical discrepancy for the sake of clarity).
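As a concrete (and purely hypothetical) illustration of how the identity pins down the third balance given any two:

```python
# National Savings Identity: (S - I) + (T - G) = CA.
# The shares below are hypothetical, not actual BEA data.
s_minus_i = 0.045   # private net saving, share of GDP (assumed)
t_minus_g = -0.070  # public sector budget balance, share of GDP (assumed)

ca = s_minus_i + t_minus_g  # the identity leaves no freedom here
print(round(ca, 3))  # -0.025, i.e., a current account deficit of 2.5% of GDP
```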
Figure 1: Private sector net savings (S-I) (blue), public sector net savings, or budget balance (T-G) (red), and current account (CA) (green), all as a share of GDP. NBER defined recession dates shaded gray. Source: BEA, 2014Q2 advance release, NBER, and author’s calculations.
Notice the drastic reduction in the public sector deficit, even prior to the implementation of the sequester in March 2013. The current account balance has improved substantially since the onset of the Great Recession — in line with a conventional macro model that relies on a marginal propensity to import — although it has stabilized at about 2.5% of GDP.
The identity is merely that — an identity, useful for accounting purposes. However, it does make clear that the US is not borrowing as much from the rest of the world as it was during the G.W. Bush years (peaking at about 6.2% annualized in a couple quarters). That is, the rest-of-the-world has thus far been happy to finance the US aggregate saving needs (as well as that of the Federal government — more on that in a bit).
The improvement in the public sector balance, particularly the Federal one, is quite marked if one abstracts from cyclical effects. Figure 2 presents (by fiscal year) the actual Federal budget balance and the one abstracting from automatic stabilizers, normalized by the potential GDP.
Figure 2: Federal budget balance (blue) and balance without automatic stabilizers (red), as percent of potential GDP (%), by fiscal year. NBER defined recession dates shaded gray. Source: CBO, Budget and Economic Outlook, February 2014, and NBER.
Hence one should not be too surprised that the apocalyptic vision regarding interest rate trajectories held by Representative Ryan in his various “Paths” has not come to pass. In fact real borrowing costs for the US government have remained quite low by historical standards.
Figure 3: Ten year constant maturity Treasury yields (blue), ten year constant maturity yields minus ten year (median) expected inflation (red +), and ten year constant maturity TIPS (black). NBER defined recession dates shaded gray. Source: Federal Reserve via FRED, Survey of Professional Forecasters, and author’s calculations.
Substantial (portfolio) crowding out of private investment as feared by Representative Ryan is unlikely to occur if interest rates do not rise appreciably (relative to the counterfactual); and in fact if investment depends on output, then decreased government spending and hence government deficits might actually yield greater crowding out than the counterfactual (see this post for discussion).
Is it all the Fed’s doing? Fed holdings of Treasury debt (all maturities) were about 18.4% of total publicly held Treasury debt, as of March 2014. Compare this against the previous peak of 17.3% in March 2003, and 24% in June 1974. So Fed purchases are part of the story, but not by any means all of the story. In fact, one might point to another factor as being even more important, namely foreign demand. Thus far the foreign sector (both private and public) seems quite content to continue acquiring US Treasurys, despite the best efforts of some policymakers to drive up the risk premium. Figure 4 depicts foreign holdings of long term US Treasurys.
Figure 4: Foreign holdings of long term US Treasurys (blue) and holdings of the People’s Republic of China (red). +,x indicate benchmark data, line indicates TIC monthly estimates. Source: Treasury.
Looking at levels of holdings in dollars can be misleading, given the increasing size of US Federal debt. However, normalizing doesn’t yield a substantially different perspective. The ratio of foreign holdings of long term US Treasurys to total publicly held Federal debt has held fairly constant over the past several years.
Figure 5: Foreign holdings of long term US Treasurys (blue) and holdings of the People’s Republic of China (red), all divided by Federal government publicly held debt (both short and long term). +,x indicate benchmark data, line indicates TIC monthly estimates. Source: Treasury, FRED, and author’s calculations.
One reason why Treasurys have been so favored during the global financial crisis and aftermath was the safe haven aspect of US Federal debt — not only in terms of default risk, but also in terms of geopolitical risk. To the extent that the VIX proxies for these factors (including geopolitical ones like Ukraine/Russia), one should expect (1) an obvious inverse correlation and (2) an abatement of the demand once risk dissipates. On (1), consider Figure 6.
Figure 6: Ten year constant maturity Treasury yield (blue) and DJIA VIX, divided by 10 (red). Observations for August are for August 8. Source: FRED.
I think there is something to the inverse correlation, in addition to the clear downward trend in the nominal ten year government yield in the background. But it clearly isn’t everything.
Regarding (2), it’s not obvious to me that there is going to be a sustained drop in uncertainty; if that is the case, elevated demand for US Treasurys may be a persistent condition. That in turn suggests that long term funding costs for the Federal government (including in real terms) might not revert to long run norms for some time. By the way, these arguments abstract away from the discussion of secular stagnation, as modeled by Eggertsson and Mehrotra (2014), among others. Secular stagnation merely adds to the reasons why one should expect low Treasury borrowing costs going into the future.
What are the ramifications of an outcome where Federal funding costs are diminished for an extended period of time? One implication is that the growth of the government debt-to-GDP ratio is less pronounced than, for instance, what was implied in the CBO scenarios used in Kitchen-Chinn (2011) (see discussion in this post). The urgency for rapid fiscal consolidation is thus commensurately reduced.
I was interested to take a look at our recent weak economic performance from a longer-term perspective.
The graph below plots private domestic fixed investment as a fraction of GDP for the United States going back to 1929, along with the median value for this fraction over the whole period. We always see investment fall as a fraction of GDP during an economic downturn. And in prolonged economic slumps– the Great Depression and its aftermath in the 1930s, and more recently the Great Recession and its aftermath– investment as a fraction of GDP remains significantly below normal for a prolonged period.
Clearly the feedback runs in both directions. When the economy is doing badly, nobody wants to invest, and when investment is low, there’s that much less spending to contribute to GDP. Interestingly we see the same pattern in Japan. Japan’s anemic economy over the last two decades has been characterized by a prolonged period of below-normal investment spending.
It’s interesting also to look at the individual components of investment. They each exhibit the same broad comovements with output, but with some idiosyncrasies. Notably, residential fixed investment accounted for almost all of the gain in investment as a fraction of GDP in the years just prior to the Great Recession. And in 2013, residential investment was still below normal as a fraction of GDP, and is the primary reason that investment overall remains lower than normal.
Allowing Private Sector Innovation Holds the Most Promise, if Government Doesn’t Impede Progress
Today we are fortunate to have a contribution written by Clifford Winston, Searle Freedom Trust Senior Fellow at the Brookings Institution. This post is based on a more extensive analysis available here.
Since 2005 Congress has not passed a long-term transportation bill and has instead engaged in political theatre about how to revive a wheezing Highway Trust Fund that is running a deficit. This embarrassing vaudeville act needs to be yanked off the stage with a cane because increasing government spending has not significantly improved infrastructure performance. Instead, implementing private sector technologies holds the most promise for improvements for the American traveler. Government can help by not impeding private sector efforts.
There is no “strategy” in the public sector’s decades-long history of increasing spending to build the nation’s way out of congestion and the public sector is unlikely to ever develop a sustainable strategy that could improve infrastructure performance. Accordingly, there are three ways that the private sector could help: purchase infrastructure facilities from the government and operate them more efficiently (privatization); develop technological innovations that the public sector could implement to improve current infrastructure performance; and make technological advances that greatly improve the operations of transportation modes that use the infrastructure.
All the options are promising, but outright privatization may be premature without experiments in the United States given the mixed experience with privatization in other countries. The best course of action is to rely on the private sector to develop technological advances in the major modes and for the government not to impede those advances. Policymakers have shown a decided lack of interest in implementing technological innovations to improve transportation. For example, they could encourage the use of technologies such as:
- GPS devices, Bluetooth signals, and mobile software applications to provide motorists with real-time information about traffic speeds, volumes, and conditions on alternate routes, thereby allowing drivers to make better informed decisions about their routes.
- Weigh-in-motion capabilities to provide real-time information about truck weight and axle configurations to do away with weigh-stations and to set efficient pavement-wear charges, which would encourage truckers to shift to vehicles with more axles that do less damage to roads.
- Governments also could apply adjustable-lane technologies and variable speed limits, adapting to traffic flows, and could set real-time tolls to encourage drivers to explore alternative routes, modes, and times of day to travel.
- Efficiency in air travel could be enhanced through technologies such as heated runways, which would reduce delays caused by time-consuming manual snowplowing; advanced screening technologies, such as full-body scanners and biometrics, to speed security measures; and the adoption of a next-generation satellite-based air traffic control system known as NextGen.
Unfortunately, the government has a status quo bias, which impedes the adoption of new technologies and slows economic progress.
Fortunately, the private sector does not share this bias. In spite of public sector foot-dragging, the private sector continues to innovate in myriad ways. Examples include advances in automobile safety technology, such as electronic stability control, warning and emergency braking systems, speed alerts, and mirrors with blind spot warnings; meanwhile, airlines have installed more powerful and efficient jet engines and are planning to incorporate improved wing designs to reduce fuel consumption.
Moreover, major innovations in the modes are on the horizon. There is no doubt that driverless cars and trucks can be operated effectively, gathering and reacting to real-time information about traffic conditions and eliminating human failings, such as distracted or impaired driving; digital communications and GPS could automate routine air traffic control measures; and drones could be used for commercial purposes—if the FAA would lift its ban on their use.
Those innovations and undoubtedly others could significantly improve the efficiency and safety of current infrastructure and significantly reduce the need for government to pass huge spending bills to expand transportation capacity.
This post written by Clifford Winston.
An ambitious plan to cut income taxes in Kansas will end up costing the state more money than it initially estimated after a key ratings agency downgraded the state’s debt on Wednesday.
Standard & Poor’s cited structural imbalances created by the tax cut in its decision to slice Kansas’s bond rating from AA+ to AA. That means Kansas will have to offer a higher interest rate to lenders when it issues new bonds.
The package of tax cuts, backed by Gov. Sam Brownback (R) and his conservative allies in the state legislature, was never offset with equal spending cuts, S&P said Wednesday. The lost revenue is expected to eat up much of Kansas’s budget reserves during this fiscal year; S&P said it expected the state to face a $333 million budget shortfall this year.
“In our opinion, there is reason to believe the budget is not structurally aligned,” S&P analysts wrote.
Moody’s has already downgraded Kansas. And so the experiment continues.
John Fund, in National Review Online, writes of:
“…an ever-expanding government that chokes off economic opportunities for the middle class and those who aspire to it.”
Time for some data. Figure 1 shows Federal government current expenditures normalized by the size of the economy.
Figure 1: Federal government current expenditures as a share of GDP (red) and as a share of potential GDP (blue). NBER recession dates shaded gray. Source: BEA, 2014Q2 advance release, CBO (February 2014), NBER and author’s calculations.
Federal government expenditures are now at 22.7% of GDP, far below the 25% recorded in 1982Q4 during the Reagan administration.
One point that all can agree on — even if there is no agreement on how to deal with the issue — is the deficient level of public investment. The American Society of Civil Engineers (ASCE) provided an assessment in 2013; their current estimated funding requirement through 2020 is $3.6 trillion. Figure 2 presents real investment as a log ratio to output.
Figure 2: Log ratio Federal real government investment expenditures to real GDP (red) and to real potential GDP (blue). NBER recession dates shaded gray. Source: BEA, 2014Q2 advance release, CBO (February 2014), NBER and author’s calculations.
Note that the quantities plotted are real government investment (which includes intellectual property products, such as software) and real GDP. This graph can be read as follows: the log ratio of investment to GDP rose from -4.99 in 2007Q4 to -4.82 in 2011Q1, which means that real government investment grew a cumulative 17% faster than real GDP. On an annual basis, over this 3.25 year period, investment was growing 5.2% faster.
Since 2011Q1, the log ratio has plunged to -5.03, which is a cumulative decline of 21% over a 3.25 year period. On an annual basis over this period, this is a 6.5% rate of decline.
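The arithmetic behind these log-ratio statements is straightforward (a sketch using the figures quoted in the text; the 2014Q2 endpoint is my reading of “a 3.25 year period” from 2011Q1):

```python
# Log-ratio arithmetic for real Federal government investment relative to GDP.
# A change in a log ratio approximates the cumulative percentage growth gap.
log_ratio_2007q4 = -4.99
log_ratio_2011q1 = -4.82
log_ratio_2014q2 = -5.03  # endpoint of the 3.25-year decline (assumed quarter)
years = 3.25              # both episodes span 13 quarters

rise = log_ratio_2011q1 - log_ratio_2007q4  # +0.17, i.e. ~17% cumulative
fall = log_ratio_2014q2 - log_ratio_2011q1  # -0.21, i.e. ~21% cumulative

print(round(100 * rise))             # 17 (% cumulative)
print(round(100 * rise / years, 1))  # 5.2 (% per year)
print(round(100 * fall / years, 1))  # -6.5 (% per year)
```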
A lot of concern has focused on the decline in government investment in physical capital — bridges, roads, sewer systems, etc. That concern is focused not only on Federal investment but also state and local. I haven’t had time to generate the real investment ex-intellectual property products series, so I present in Figure 3 investment in structures and equipment as a share of nominal GDP.
Figure 3: Federal nondefense investment in structures and equipment (blue) and state and local investment in structures and equipment (red) as a share of nominal GDP. NBER defined recession dates shaded gray. Source: BEA 2014Q2 advance release, NBER and author’s calculations.
To me, the implications are clear. With borrowing costs extremely low, it makes sense to invest in infrastructure. Some disagree. Then, the question is, are you with Barry Eichengreen or Sarah Palin? By the way, despite talk of the taper, and incipient Fed tightening, borrowing costs for the government remain quite low; this is shown in Figure 4.
Figure 4: Ten year constant maturity Treasury yields (blue), ten year constant maturity yields minus ten year (median) expected inflation (red +), and ten year constant maturity TIPS (black). NBER defined recession dates shaded gray. Source: Federal Reserve via FRED, Survey of Professional Forecasters, and author’s calculations.
For a discussion of transportation investment needs and potential impacts, see this NEC/CEA report.
For previous installments in the series on the myths regarding the ever-expanding government, see here, here, here, here, and here.
Quick links to a few items I found interesting.
Bill McBride observes:
Right now 2014 is on pace to be the best year for both total and private job growth since 1999.
Others comment on my new paper, The Changing Face of World Oil Markets:
Steven Kopits: Hamilton has it right on oil.
Sooner or later, we will have some combination of benefits cuts and/or revenue increases…. But why, exactly, is that something that must be done immediately?
And here’s the abstract from a new paper by Jens Hilscher, Alon Raviv, and Ricardo Reis:
We propose and implement a method that provides quantitative estimates of the extent to which higher- than-expected inflation can lower the real value of outstanding government debt. Looking forward, we derive a formula for the debt burden that relies on detailed information about debt maturity and claimholders, and that uses option prices to construct risk-adjusted probability distributions for inflation at different horizons. The estimates suggest that it is unlikely that inflation will lower the US fiscal burden significantly, and that the effect of higher inflation is modest for plausible counterfactuals. If instead inflation is combined with financial repression that ex post extends the maturity of the debt, then the reduction in value can be significant.
The employment release reported a 209,000 net increase in nonfarm payroll (NFP) employment; this was below consensus, but still represented the sixth straight month of +200K net job creation. The net change in private NFP was 198,000. Here I want to note (1) the household survey based alternate measure of nonfarm payroll employment also continues to rise; (2) revisions in NFP and private NFP have typically been positive in recent months; (3) the 2014Q1 drop in GDP seems a little out of line with labor input.
Regarding employment, trends in the official NFP and the household series adjusted to conform to the NFP concept are positive.
Figure 1: Nonfarm payroll employment (blue), and household survey employment adjusted to the NFP concept (red), both seasonally adjusted, in ’000s. Source: BLS via FRED.
The household adjusted series is a check for those who are skeptical of the BLS firm birth/death model. It’s interesting that the series is above the official series during the expansion, in contrast to the pre-recession period when it typically lagged.
Next, observe that recent revisions have typically been to the upside.
Figure 2: Annualized growth rate in nonfarm payroll employment from July release (blue), June release (red), May release (green), April (pink), March (teal), February (purple), and January (chartreuse), all calculated using log-differences. Source: BLS (various releases) and author’s calculations.
Figure 3: Annualized growth rate in private nonfarm payroll employment from July release (blue), June release (red), May release (green), April (pink), March (teal), February (purple), and January (chartreuse), all calculated using log-differences. Source: BLS (various releases) and author’s calculations.
This pattern suggests that July’s figure will be revised upward as well.
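For readers curious about the construction of Figures 2 and 3: the annualized month-on-month growth rates from log differences can be computed as below (the employment levels are hypothetical, chosen only to be consistent with a 209,000 monthly gain):

```python
import math

# Annualized month-on-month growth via log differences, as in Figures 2-3.
# Levels are hypothetical (in thousands), consistent with a 209K monthly gain.
emp_prev, emp_curr = 138_780, 138_989

monthly_log_diff = math.log(emp_curr) - math.log(emp_prev)
annualized_pct = 1200 * monthly_log_diff  # 12 months x 100, for percent

print(round(annualized_pct, 1))  # 1.8
```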
Finally, it’s of interest to see how labor input has covaried with output, particularly over the past couple quarters. In Figure 4, I present the first log difference in GDP, NFP, and aggregate private hours.
Figure 4: Annualized quarter-on-quarter growth in GDP (black), nonfarm payroll (blue), and aggregate private sector hours (purple), all calculated as log-differences. Source: BEA (2014Q2 advance GDP), BLS (July release), and author’s calculations.
Notice that the first quarter growth rate was revised up from -2.9% to -2.1% (SAAR) in the comprehensive annual revision. Nonetheless, the gap between (the growth rates of) hours and output seems pretty large. This point is shown in a slightly different form in Figure 5.
Figure 5: Annualized quarter-on-quarter growth in aggregate private NFP hours against real GDP, all calculated using log-differences. Hours data is average of monthly observations. Red line is OLS regression line. Source: BEA (2014Q2 advance GDP), BLS (July release), and author’s calculations.
The 2014Q1 observation is far off the regression line — but not as far as in 2009Q1-Q2. However, these two quarters are periods when government consumption and investment rose substantially, in part due to the ARRA (recall aggregate hours are only for the nonfarm private sector). Hence, it may be that Q1 growth rates will be eventually revised noticeably upward.
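The regression line in Figure 5 is an ordinary least squares fit of GDP growth on hours growth. A minimal sketch (the quarterly growth rates below are invented for illustration, not the actual series):

```python
# OLS fit of GDP growth on aggregate hours growth, as in Figure 5.
# The data points are made up for illustration only.
hours = [-8.0, -6.5, 1.2, 2.1, 2.4, 1.6, 2.8, 2.2]  # annualized %, log-diffs
gdp = [-5.4, -0.5, 1.3, 3.9, 2.7, 1.8, 4.5, 3.5]

n = len(hours)
mean_x = sum(hours) / n
mean_y = sum(gdp) / n

# Slope = cov(x, y) / var(x); intercept follows from the sample means.
beta = sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, gdp)) / sum(
    (x - mean_x) ** 2 for x in hours
)
alpha = mean_y - beta * mean_x

print(round(beta, 2), round(alpha, 2))
```

An observation far above the fitted line, like 2014Q1 in the actual data, is one whose GDP growth is weak relative to what its hours growth would predict.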
Side note: Joint Economic Committee Chair Kevin Brady writes:
“Every new job is welcome news, but with millions looking for work we can’t celebrate a modest 209,000 jobs a month. Five years after the recession ended, the Obama recovery remains stuck in second gear.”
For purposes of comparison, five years and six months into President Obama’s presidency, nonfarm payroll employment is 3.8% higher than when he came into office. At a comparable point in President George W. Bush’s presidency, employment was only 2.9% higher than when he entered office.
With apologies to Richard Hofstadter.
On reading “New Classical Kansas”, James Sexton comments:
What? Log? Why Log? Why not just “economic activity”, whatever that is?
Why all the babbling bs? ARIMA(1,1,1)? You believe that holds any meaning in relation to Kansas’ “economic activity”? Why?
Was there an expectation of huge growth in Kansas in response to the tax cuts? No. But, our unemployment is decidedly below the national average. You know, that’s like more people working and earning, and stuff. But, better, … well, worse in your point-of-view, people get to actually keep more of what they earned. This is an end, in and of itself. Kansas is fine. And, it will be more fine, because people get to keep what they earned, and are not forced by a government to give what they earned to people they don’t wish to give it to. In America, we call this “freedom”. It’s a strange concept, look it up.
Rather than celebrate this fact, the author wishes to convince people Kansas is economically declining. He does this by using a meaningless method … to whit …
Yeh, the old “log coincident indicators” …. well, I’m a believer!!!!
In a rejoinder to my comment, Mr. Sexton continues:
… To whit, tax cuts are an end in and of itself. Whether or not there’s increased economic activity, or even decreased because government is wasting less, is only secondary to allowing people to keep more of what they, themselves, had earned.
I would say the person who is burning books is the one least familiar with the concepts and precepts of individual liberty and freedom. Which, is appalling assuming it is an American writing the post.
One might very well wonder why I dwell on these comments. It’s because I think it’s an excellent example of anti-intellectualism in blogospheric discourse (a separate issue from trolls, discussed in this post).
Key attributes of blogospheric anti-intellectualism:
1. Anti-log-ism. I thought logs were taught in high school, but apparently the concept inspires vitriolic contempt. See Jim Hamilton’s post for a discussion of the usefulness of the mathematical concept.
2. Selective anti-metrics. Individuals will happily cite an unemployment rate (for Kansas) while disparaging coincident indicators (for Kansas). And yet, in the end, both series are estimates. And in fact, as Justin Wolfers discusses, the state-level unemployment rates (which Mr. Sexton so admiringly cites) are subject to particularly high levels of uncertainty.
3. Anti-statistical methods. The mention of an ARIMA(1,1,1) elicited a strong response by Mr. Sexton. In point of fact, I didn’t apply the model to Kansas but to US population; I was using the Kansas City Fed’s forecasts in the leading indicators to project Kansas economic activity, given they have more resources than I to devote to the project. I suppose that I would be criticized for “appealing to authority” had he actually understood what I did — this is another tendency of anti-intellectualism, although to his credit Mr. Sexton does not engage in it. (Apparently “appeal to authority” is okay if you ask your doctor about a prescription, but absolutely verboten if in any other context.)
4. Absolutist assertion of objective functions without any argument, to whit “…tax cuts are an end in and of itself.” This does not seem to be based on a utilitarian argument; nor do “tax cuts” make an appearance in the Declaration of Independence, or in the Preamble to the Constitution. (I mention these latter omissions because Mr. Sexton asserts that all Americans must agree with his definition of freedom.)
On a separate note, I appreciated the contingency written in the last line: “assuming it is an American writing the post.” This line is redolent of another aspect of the blogospheric discourse.
Update, 8/3 1:20PM Pacific: Paul Krugman has more on anti-intellectualism in public policy discourse, here.
Today, we’re fortunate to have Alex Nikolsko-Rzhevskyy, Assistant Professor of Economics at Lehigh University, David Papell and Ruxandra Prodan, respectively Professor and Clinical Assistant Professor of Economics at the University of Houston, as Guest Contributors.
In a contentious hearing before the House Financial Services Committee, Federal Reserve Chair Janet Yellen reacted extremely negatively to the proposed “Federal Reserve Accountability and Transparency Act of 2014”, introduced into Congress on July 7, which would require the Fed to adopt a policy rule. As reported in The New York Times and the Wall Street Journal on July 16, she called the proposal a “grave mistake” which would “essentially undermine central bank independence.” In the next day’s Wall Street Journal, Alan Blinder wrote that “In a town like Washington, the message to the Fed would be clear: Depart from the original Taylor rule at your peril.”
Did the proposed legislation deserve such a strong response? The act specifies two rules. The “Directive Policy Rule” would be chosen by the Fed, and would describe how the federal funds rate would respond to a change in the intermediate policy inputs. If the Fed deviated from its rule, the Chair of the Fed would be required to testify before the appropriate congressional committees as to why it is not in compliance. In addition, the report must include a statement as to whether the Directive Policy Rule substantially conforms to the “Reference Policy Rule,” with an explanation or justification if it does not. The Reference Policy Rule is the Taylor rule.
In a recent Econbrowser post, we discussed the justification for using the Taylor rule as the Reference Policy Rule. Here, we consider how Janet Yellen might have responded more positively to the proposed legislation. According to the Taylor rule, the Fed should set the short-term interest rate as follows:
r = π + 0.5 (π – π*) + 0.5 y + R,
where r is the federal funds rate, π is the inflation rate over the previous four quarters, y is the output gap, the percentage deviation of GDP from potential GDP, π* is the inflation target of 2.0 percent, and R is the equilibrium real interest rate, also 2.0 percent. Collecting terms,
r = 1.0 + 1.5 π + 0.5 y.
In a June 2012 speech, Yellen expressed a preference for a variant of the Taylor rule that is twice as responsive to economic slack, which she called the “balanced-approach” rule,
r = 1.0 + 1.5 π + 1.0 y.
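For concreteness, the two rules can be coded directly from the formulas above (the inflation and output gap inputs below are assumed for illustration, not the FOMC projections used later in the post):

```python
# Reference Policy Rule (Taylor) and the balanced-approach variant.
# Both reduce to r = 4.0 when inflation is at the 2 percent target
# and the output gap is zero.

def taylor_rule(inflation, output_gap, target=2.0, r_star=2.0):
    """r = pi + 0.5*(pi - pi*) + 0.5*y + R, i.e., 1.0 + 1.5*pi + 0.5*y."""
    return inflation + 0.5 * (inflation - target) + 0.5 * output_gap + r_star

def balanced_approach(inflation, output_gap):
    """Yellen's variant, twice the weight on slack: 1.0 + 1.5*pi + 1.0*y."""
    return 1.0 + 1.5 * inflation + 1.0 * output_gap

pi, y = 1.6, -3.0  # assumed inflation and output gap, in percent
print(round(taylor_rule(pi, y), 2))        # 1.9
print(round(balanced_approach(pi, y), 2))  # 0.4
```

Whenever there is slack (y below zero), the balanced-approach rule prescribes a lower rate than the Taylor rule, which is what drives the later lift-off date.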
With GDP forecasted to remain below potential, the balanced-approach rule prescribed raising the federal funds rate above the zero lower bound later than the Taylor rule prescription. She also discussed an optimal control path for the federal funds rate using the Fed’s FRB/US economic model, which prescribed an even later lift-off date.
Yellen’s speech provides a template for how the Fed might respond to the proposed legislation. There is nothing in the legislation that requires the Fed to adopt the Taylor rule. Suppose the Fed had chosen the balanced approach rule as the Directive Policy Rule. Her speech clearly showed how the Directive Policy (balanced approach) Rule differed from the Reference Policy (Taylor) Rule and, using the FRB/US model, provided a justification for using the Fed’s rule.
Suppose that the proposed legislation were current law. How might the Fed respond? For the sake of argument, assume that the Fed chooses the balanced approach rule as the Directive Policy Rule. Using the economic projections from the June 2014 FOMC meeting for inflation and real GDP growth, the Q1 final release real GDP numbers from the Bureau of Economic Analysis, and the potential GDP projections from the Congressional Budget Office, the prescribed federal funds rate using the Reference Policy (Taylor) Rule is 1.63 percent at the end of 2014 and 2.35 percent at the end of 2015. With the assumed Directive Policy (balanced approach) Rule, it is -0.15 percent at the end of 2014 and 1.14 percent at the end of 2015.
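These prescriptions are easy to reproduce mechanically. A minimal sketch in Python, using the rule coefficients from the text; the inflation and output-gap inputs below are illustrative placeholders chosen to be roughly consistent with the end-2014 figures, not the actual FOMC, BEA, and CBO inputs:

```python
def taylor_rule(inflation, output_gap):
    """Reference Policy (Taylor) Rule: r = 1.0 + 1.5*pi + 0.5*y."""
    return 1.0 + 1.5 * inflation + 0.5 * output_gap

def balanced_approach_rule(inflation, output_gap):
    """Balanced-approach rule: r = 1.0 + 1.5*pi + 1.0*y."""
    return 1.0 + 1.5 * inflation + 1.0 * output_gap

# At 2 percent inflation with a closed output gap, both rules give the
# 4 percent neutral nominal rate (2 percent real plus 2 percent inflation).
print(taylor_rule(2.0, 0.0))             # 4.0
print(balanced_approach_rule(2.0, 0.0))  # 4.0

# With slack (hypothetical inflation of 1.6 percent, output gap of -3.6
# percent), the balanced-approach rule prescribes a lower funds rate;
# a negative prescription means the zero lower bound binds.
print(round(taylor_rule(1.6, -3.6), 2))             # close to the 1.63 in the text
print(round(balanced_approach_rule(1.6, -3.6), 2))  # negative
```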
The prescribed federal funds rates with the balanced approach rule for the end of 2014 and 2015 are almost exactly the same as the target federal funds rates reported by the FOMC participants in the June 2014 economic projections. For the end of 2014, 15 Federal Reserve Board Members and Bank Presidents reported a target federal funds rate of 0.25 percent, with one participant at 1.0 percent. Given that the target federal funds rate is constrained by the zero lower bound, this is very close to the prescription of the balanced approach rule. For the end of 2015, three participants reported a target of 1.0 percent and three more reported a target of 1.25 percent, with five participants below 1.0 percent and five above 1.25 percent. The median of 1.125 percent is amazingly close to the 1.14 percent rate prescribed by the balanced approach rule. The target rates for both the end of 2014 and 2015 are, not surprisingly, considerably below the rates prescribed by the Taylor rule.
Current Fed policy can be very well described by the balanced approach rule. If it were adopted as the Directive Policy Rule, there would be no need for the Chair to testify before Congress because, at least at this time, there are no deviations. If the Fed were to deviate in the future, it would improve transparency for the Chair to explain why. While the Directive Policy (balanced approach) Rule would not be the same as the Reference Policy (Taylor) Rule, there is nothing in the proposed legislation that requires them to be the same, and Janet Yellen has certainly been both willing and able to defend her preference. We do not see how central bank independence would be undermined if the process were formalized.
This post written by Alex Nikolsko-Rzhevskyy, David Papell and Ruxandra Prodan.
The Bureau of Economic Analysis announced today that U.S. real GDP grew at a 4.0% annual rate in the second quarter. Hopefully that’s the start of something good; but so far, it’s only a start.
In part, the growth from Q1 to Q2 looked good because the level for Q1 was so bad. True, the BEA’s new estimate of Q1 growth (a 2.1% drop at an annual rate) was a slight improvement over the -2.9% figure the BEA announced last month. But the revised estimates still imply a pathetic annual growth rate below 1% for the first half of the year. And true also, the BEA revised up the estimates for 2013, so that the last 4 quarters clocked a more respectable average growth rate of 2.5%. But upward revisions to 2013 came at the expense of downward revisions to 2011 and 2012– we seem always to be borrowing from Peter to pay Paul.
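The first-half arithmetic is easy to verify: compound the two annualized quarterly figures into an annualized rate for the half year. A quick sketch:

```python
def first_half_annualized(q1_rate, q2_rate):
    """Compound two annualized quarterly growth rates (as decimals)
    into an annualized growth rate for the half year."""
    # Each quarter's gross growth is (1 + annualized rate) ** (1/4);
    # the two-quarter product is annualized by squaring.
    half = (1 + q1_rate) ** 0.25 * (1 + q2_rate) ** 0.25
    return half ** 2 - 1

# Q1 at -2.1 percent and Q2 at 4.0 percent, both annual rates:
rate = first_half_annualized(-0.021, 0.040)
print(f"{rate:.2%}")  # roughly 0.9 percent -- below 1 percent
```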
Our Econbrowser Recession Indicator Index, which uses today’s data release to form a picture of where the economy stood as of the end of 2014:Q1, jumped up to 24.1%, reflecting the sharp drop in GDP in the first quarter. Our policy of always waiting one quarter for data revisions and trend recognition before calculating the index is looking particularly wise given the roller coaster ride the BEA has put us through in their different versions of the 2014:Q1 estimates. And our index, unlike the BEA figures, is never revised. According to our gauge, the Q1 drop and slow first half growth were disturbing, but not the start of a recession– our indicator would have to rise to 67% before we would declare that a new recession had begun.
In terms of the individual components behind the Q2 GDP growth, the positive contributions of investment and exports are certainly encouraging. But 1.7 percentage points of the 4.0% growth came from inventory build-up, representing goods that need to find buyers in the future– perhaps borrowing more from Peter again.
And the White House was unusually candid in warning us that BEA has essentially no idea based on data available so far about what happened to real health care services (which account for about a sixth of total consumption spending) during the second quarter. Indeed, the BEA’s estimates of real health care services in their first announcement for a given quarter (like the one we just received) turn out to be negatively correlated with the estimate for the same quarter that they will announce two months later.
Nevertheless, this could finally be the start of a string of encouraging GDP numbers that many of us have been expecting for some time. The always-to-be-trusted Bill McBride notes that state-and-local government spending, which had put a persistent drag on the economy over 2010-2012, has finally turned around. And certainly there is potential for a bigger contribution from residential fixed investment.
So maybe we’re in for a string of good news. But like I say, so far, it’s only a start.
Two years ago, Governor Brownback asserted:
Our new pro-growth tax policy will be like a shot of adrenaline into the heart of the Kansas economy.
Newly released coincident indicators from the Philadelphia Fed (calibrated to match trends in real Gross State Product) indicate that Kansas continues to lag the Nation, and the Philadelphia Fed’s leading indicators point to continued underperformance.
Figure 1: Log coincident indicators for US (blue), level implied by leading index (blue square), and for Kansas (red), level implied by leading index (red triangle), all normalized to 2011M01=0. Forecasts from July release. Dashed vertical line at 2011M01. Source: Philadelphia Fed, and author’s calculations.
Notice the complete stagnation of Kansas output since January 2014, and the deceleration that began in January 2013.
This pattern recurs, albeit with greater force, if one normalizes the coincident indices by estimates of population. I divide the Kansas index by quadratic-match interpolated population, and the US index by mid-month population (for 2014M06-2014M12, I use a dynamic forecast from an ARIMA(1,1,1) on log mid-month population).
Figure 2: Log coincident indicators for US (blue), level implied by leading index (blue square), and for Kansas (red), level implied by leading index (red triangle), all in per capita terms, normalized to 2011M01=0. Forecasts from July release. Dashed vertical line at 2011M01. Source: Philadelphia Fed, FRED, and author’s calculations.
The forecasted trajectory for Kansas is fairly dismal. Some would say that this outcome is an artifact of the normalization date — that since Kansas fell less than the Nation, it should have bounced back more, or something like that. I’ve never found that “bounceback” argument particularly convincing, but be that as it may. In point of fact, Kansas fell more than the Nation, and has bounced back less.
Figure 3: Log coincident indicators for US (blue), level implied by leading index (blue square), and for Kansas (red), level implied by leading index (red triangle), all in per capita terms, normalized to 2011M01=0. Forecasts from July release. NBER defined recession dates shaded gray; tax cut regime shaded tan. Dashed vertical line at 2007M12. Source: Philadelphia Fed, FRED, NBER, and author’s calculations.
Why has Kansas output per capita stagnated, and in any case, declined relative to the Nation’s? One way of thinking about this is to take a supply-determined, or real business cycle, interpretation of output determination:
Y = Φ F(K,N)
where Φ is total factor productivity, K is the capital stock, and N is the labor force.
If output is completely supply determined, then one has to think about the fact that the change in output must be attributable to either the change in total factor productivity, TFP (Φ), the capital stock (K), or the labor stock (N). That is (after taking logs, and assuming a Cobb-Douglas production function):
Δy = Δφ + σ Δ k + (1- σ) Δ n
where σ (1-σ) is the capital (labor) share of income. Now, as best I can determine, population continued to increase through 2013 (the 2014 figures are an extrapolation based on lagged population growth). There has been no war in Kansas that might have destroyed the extant capital stock, so it seems unlikely that K has declined. This suggests that (with population increasing and the capital stock likely increasing) there must have been technological regress — that is, Φ must have declined. How? Perhaps some amount of Kansas technology has been destroyed. Another interpretation consistent with a supply-determined view is that the first equation omits human capital, and somehow human capital has disappeared — and increasingly so over the past year and a half.
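The implied technological regress can be backed out mechanically as a Solow residual. A sketch with purely illustrative numbers (a capital share of one third, and modest input growth):

```python
def solow_residual(dy, dk, dn, sigma=1/3):
    """Back out TFP growth from the growth-accounting identity
    dy = dphi + sigma*dk + (1 - sigma)*dn."""
    return dy - sigma * dk - (1 - sigma) * dn

# Flat output while capital and labor each grow 1 percent implies
# 1 percent technological regress -- the puzzle described in the text.
dphi = solow_residual(dy=0.0, dk=0.01, dn=0.01)
print(f"{dphi:.2%}")  # -1.00%
```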
Some New Classical interpretations would, strictly speaking, append an error term to output, to represent the fact that expectational shocks could drive output away from full employment of resources. However, to make the deviations serially correlated, either the expectational shocks must themselves be serially correlated, or there must be real impediments to adjustment, such as capital adjustment costs or sticky prices (but going down that route would lead to a New Keynesian model, so let’s dispense with that).
Another possibility is that tax policy has changed the labor-leisure tradeoff (the tan shaded area denotes the period in which the new tax regime was in place). Perhaps the wedge between pre-tax and after-tax returns to labor or capital has increased, thus reducing the optimal levels of capital and labor employed. However, the entire point of the tax cuts implemented by Governor Brownback was to reduce the burden on households (particularly higher income households) and firms. In essence, there is a puzzle.
A Reconciliation of Facts and Theory
Or maybe output is demand-determined in the short to medium run…
Page 13 of the Kansas State FY2014 Comparison Report indicates that spending for FY 2014 (which just ended) is 3.1% lower than in FY2013, and that FY2015 spending will be 0.6% lower than in FY2013. What about the trend in real spending? Since the CPI-Midwest rose 1.7% (y/y through June), real spending fell about 4.8%. Even more telling, nominal economic activity probably rose about 3.3% through June (the change in the Philadelphia Fed coincident index plus the change in the CPI-Midwest). That is a drastic fall in spending, when considered as a share of nominal GSP.
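The real spending figure follows from the usual approximation that real growth is roughly nominal growth minus inflation; a sketch, with the exact deflated version alongside:

```python
def approx_real_change(nominal_pct, inflation_pct):
    """Log approximation: real change = nominal change - inflation."""
    return nominal_pct - inflation_pct

def exact_real_change(nominal_pct, inflation_pct):
    """Exact version: deflate the gross nominal change."""
    return ((1 + nominal_pct / 100) / (1 + inflation_pct / 100) - 1) * 100

# Nominal spending down 3.1 percent, CPI-Midwest up 1.7 percent:
print(round(approx_real_change(-3.1, 1.7), 1))  # -4.8, the figure in the text
print(round(exact_real_change(-3.1, 1.7), 2))   # close: about -4.72
```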
More on ALEC, Laffer, Wisconsin and Kansas, here.
Update, 6PM Pacific: In a panegyric to Brownback, the WSJ has detached itself further from reality (if that is possible), in Why Liberals Hate Kansas: Sam Brownback’s tax cuts must be discredited before they succeed.:
By liberal accounts Kansas is experiencing a major fiscal and economic meltdown like well, you know, Illinois. The truth is that it’s too soon to draw grand conclusions about the tax cuts, which have been in effect for all of 19 months. But some early economic indicators suggest they may be producing modest positive effects. The danger is that a coalition of Democrats and big-spending Republicans will pull out the rug before the benefits fully materialize.
I found interesting the characterization of Illinois economic performance. The comparison of Kansas to Illinois is not flattering. Since I have read this comparison in the comments section in the past, here, without further ado, is the relevant figure:
Figure 4: Log coincident indicators for Illinois (teal), level implied by leading index (teal circle), and for Kansas (red), level implied by leading index (red triangle), normalized to 2011M01=0. Forecasts from July release. NBER defined recession dates shaded gray; tax cut regime shaded tan. Dashed vertical line at 2007M12. Source: Philadelphia Fed, FRED, NBER, and author’s calculations.
…any short run gains from delay tend to be outweighed by the additional costs arising from the need to adopt a more abrupt and stringent policy later. An analysis of the collective results from that research, described in more detail in Section II, suggests that the cost of hitting a specific climate target increases, on average, by approximately 40 percent for each decade of delay. These costs are higher for more aggressive climate goals: the longer the delay, the more difficult it becomes to hit a climate target. Furthermore, the research also finds that delay substantially decreases the chances that even concerted efforts in the future will hit the most aggressive climate targets.
The paper presents a meta-analysis of costs and delays:
…The data set for this analysis consists of the results on all available numerical estimates of the average or total cost of delayed action from our literature search. Each estimate is a paired comparison of a delay scenario and its companion scenario without delay. To make results comparable across studies, we convert the delay cost estimates (presented in the original studies variously as present values of dollars, percent of consumption, or percent of GDP) to percent change in costs as a result of delay. We capture variation across study and experimental designs using variables that encode the length of the delay in years; the target CO2e concentration; whether only the relatively more-developed countries act immediately (partial delay); the discount rate used to calculate costs; and the model used for the simulation. All comparisons consider policies and outcomes measured approximately through the end of the century. To reduce the effect of outliers, the primary regression analysis only uses results with less than a 400 percent increase in costs (alternative methods of handling the outliers are discussed below as sensitivity checks), and only includes paired comparisons for which both the primary and delayed policies are feasible (i.e. the model was able to solve for both cases). The dataset contains a total of 106 observations (paired comparisons), with 58 included in the primary analysis. All observations in the data set are weighted equally.
Analysis of these data suggests two main conclusions, both consistent with findings from specific papers in the underlying literature. The first is that, looking across studies, costs increase with the length of the delay. Figure 2 shows the delay costs as a function of the delay time. Although there is considerable variability in costs for a given delay length because of variations across models and experiments, there is an overall pattern of costs increasing with delay.
For example, of the 14 paired simulations with 10 years of delay (these are represented by the points in Figure 2 with 10 years of delay), the average delay cost is 39 percent. The regression line shown in Figure 2 estimates an average cost of delay per year using all 58 paired experiments under the assumption of a constant increasing delay cost per year (and, by definition, no cost if there is no delay), and this estimate is 37 percent per decade. This analysis ignores possible confounding factors, such as longer delays being associated with less stringent targets, and the multiple regression analysis presented below controls for such confounding factors.
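The regression described is a least-squares fit through the origin (no delay cost, by definition, at zero delay). A sketch of that estimator on made-up paired comparisons, not the actual CEA dataset:

```python
def slope_through_origin(delay_years, cost_pct):
    """OLS slope with no intercept: b = sum(x*y) / sum(x*x)."""
    sxy = sum(x * y for x, y in zip(delay_years, cost_pct))
    sxx = sum(x * x for x in delay_years)
    return sxy / sxx

# Illustrative paired comparisons: delay length in years and the
# resulting percent increase in mitigation cost.
delays = [5, 10, 10, 20, 30]
costs = [20, 39, 35, 76, 110]
b = slope_through_origin(delays, costs)
print(f"{b * 10:.0f} percent per decade")
```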
Three years ago I called attention to the NYU Stern Volatility Laboratory. Since then it’s grown into an even more amazing resource, giving anyone access to constantly updated information about financial conditions in dozens of countries around the globe. Of particular interest are recent changes in their measure of the systemic risk posed by financial institutions.
The basic idea behind the NYU measure of systemic financial risk is to use real-time summaries of the valuations and correlations across different equities to analyze how much of a drop would be expected in the stock valuation of a given financial institution if the country were to face another financial crisis, defined as a 40% decline in the broad market stock index over a space of 6 months. Technical details of how that calculation is done are described in a paper by Acharya, Engle, and Richardson (2012). This loss in equity value is then compared with the institution’s current assets and liabilities to calculate how much additional capital might be needed in order to keep the institution solvent in the event of a financial crisis. The sum of this cost across all financial institutions then gives a measure of a country’s overall systemic risk at any point in time.
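The capital-shortfall logic can be sketched in stylized form. This is a simplification of the published methodology, with hypothetical balance-sheet numbers; the real V-Lab estimate uses a long-run marginal expected shortfall (LRMES) from the dynamic model in Acharya, Engle, and Richardson:

```python
def capital_shortfall(debt, equity, lrmes, k=0.08):
    """Stylized expected capital shortfall in a crisis.

    k      -- prudential capital ratio (8 percent assumed here)
    lrmes  -- expected fractional loss of equity value in a crisis
              (a 40 percent six-month market decline)
    Shortfall = k * debt - (1 - k) * (1 - lrmes) * equity
    """
    return k * debt - (1 - k) * (1 - lrmes) * equity

# A hypothetical bank with 900 in debt and 100 in market equity,
# expected to lose 60 percent of its equity value in a crisis:
print(round(capital_shortfall(debt=900, equity=100, lrmes=0.6), 1))  # 35.2
```

Summing this quantity across institutions (treating negative shortfalls as zero) gives a country-level aggregate like the ones plotted on the V-Lab site.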
For example, here’s what their measure looks like for the United States. It peaked at over a trillion dollars in the fall of 2008, but has improved significantly since, and is now down to about $300 billion.
By contrast, sovereign debt concerns pushed the index for Europe in 2011-12 back up near values from the financial crisis of 2008. The measure of systemic risk for Europe has since abated, though it remains quite elevated at about $1.3 trillion.
And reason for concern about Japan has actually increased substantially in the years following the 2008 financial crisis. That said, the policies adopted early in 2013 popularly described as “Abenomics” seem to have helped.
But here’s what really caught my eye– the NYU measure of systemic financial risk for China this year reached an all-time high. This reflects the fact that liabilities of Chinese banks have continued to grow rapidly while the institutions’ stock valuations appear quite vulnerable to a downturn.
Russia’s central bank has unexpectedly raised its key bank interest rate over concerns about inflation and “geopolitical tension”.
The bank’s board decided to raise the interest rate by 50 basis points, or half a percentage point, to 8% per year.
The Central Bank of Russia said on Friday that it will raise the interest rate on Monday to ease inflationary pressure.
“Inflation risks have increased due to a combination of factors, including, inter alia, the aggravation of geopolitical tension and its potential impact on the rouble exchange rate dynamics, as well as potential changes in tax and tariff policy,” the bank said.
In June, core inflation rose to 7.5%, well above the bank’s forecast of up to 6.5% for the year.
This is the third rate hike within the last half year.
Figure 1: USD/RUB exchange rate (blue, left scale), overnight repo rate, % (red, right scale). Vertical dashed line at 17 July 2014. A rise in the exchange rate is a rouble appreciation. Source: Pacific Exchange Rate Service, Central Bank of the Russian Federation.
For previous posts on sanctions and Russia, see here, here, and here. The IMF yesterday released updates to the WEO; the July forecast for Russian growth in 2014 (y/y) is fully 1.1 percentage points lower than the April forecast.
Update, 7/26, 12:15PM Pacific: From Reuters, reading the tea leaves:
“Basically … we can expect the key rate to go higher if new risks materialise (for, example, introduction of level III sanctions on Russia and the rouble getting seriously hit), which is not beyond imagination,” analysts at Gazprombank said in a note.
The decision indicates Russia is preparing for what may lie ahead, analysts said. “Maybe the central bank has been given the nod by their political masters in the Kremlin that this crisis is still going to get worse before it gets better,” Ash, of Standard Bank said.
I was at the NBER Summer Institute’s meeting of the International Finance and Macro group where (in addition to finally meeting Jim Hamilton) I had the opportunity to hear two papers on a topic near and dear to me — namely the relationship between the forward premium (the gap between the forward and spot rate, or equivalently in the absence of political risk, the interest differential) and the carry trade. (For discussion of related papers at last year’s IFM, see this post).
Recall that the basic issue is the negative correlation between interest differentials and the ex post depreciation over the corresponding time period. I plot the relationship at the one-year horizon (for more regarding the data, see Chinn and Quayyum (2012), or this post).
Figure 1: Pooled data for Canadian dollar, euro, Japanese yen, Swiss franc, and British pound against US dollar, 1979Q1-2011Q4. Note: “0.08” indicates 8%.
Notice that the zero-arbitrage-profits condition, under risk neutrality and rational expectations, implies that the red line should be upward sloping, with a slope indistinguishable from unity. In contrast, as is well known, the slope at short maturities is typically negative, and statistically significantly so (see the survey in Chinn (2006)). In the graph above, the slope is -0.54, and statistically significantly different from zero (and from unity).
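For concreteness, the slope in question comes from the Fama regression of ex post depreciation on the forward premium; under unbiasedness the slope is one. A sketch on synthetic data (not the Chinn-Quayyum sample), constructed so that depreciation moves opposite to the forward premium, as in the puzzle:

```python
def ols_slope(x, y):
    """Simple OLS slope of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Fama regression: s(t+1) - s(t) = alpha + beta * (f(t) - s(t)) + error.
forward_premium = [0.01, 0.02, -0.01, 0.03, 0.00, -0.02]
depreciation = [-0.54 * fp for fp in forward_premium]  # beta = -0.54 by construction
beta = ols_slope(forward_premium, depreciation)
print(round(beta, 2))  # -0.54, matching the slope reported for Figure 1
```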
The first paper was by Tarek Hassan and Rui Mano, “Forward and spot exchange rates in a multi-currency world”. From the abstract:
We decompose violations of uncovered interest parity into a cross-currency, a between time-and-currency, and a cross-time component. We show that most of the systematic violations are in the cross-currency dimension. By contrast, we find no statistically reliable evidence that currency risk premia respond to deviations of forward premia from their time- and currency-specific mean. These results imply that the forward premium puzzle (FPP) and the carry-trade anomaly are separate phenomena that may require separate explanations. The carry trade is driven by static differences in interest rates across currencies, whereas the FPP appears to be driven primarily by cross-time variation in all currency risk premia against the US dollar. Models that feature two symmetric countries thus cannot explain either of the two phenomena. Once we make the appropriate econometric adjustments we also cannot reject the hypothesis that the elasticity of risk premia with respect to forward premia in all three dimensions is smaller than one. As a result, currency risk premia need not be correlated with expected changes in exchange rates.
Another way of putting this is that the carry trade is driven by differences in the constants in the Fama regression (along with what they term dynamic trade, based on variation in between-time-and-currency forward premia), while the forward premium trade relies upon the across-time covariation of exchange rate changes and the forward premia (along with the dynamic trade term).
Comments varied; one observer noted that these were unconditional correlations, while it is known that some factors — including PPP deviations — are useful in predicting excess returns. My comment was that asymmetries are important — the expected UIP relationship holds when US interest rates are below foreign rates, and not when they are above (Bansal and Dahlquist).
Another interesting observation was that the decomposition, while useful, failed to illuminate a key question in the literature: where do these currency fixed effects come from? They’re apparently persistent, but don’t seem to be correlated with any particular observable macro variables.
The second paper was by Hanno Lustig, Andrea Stathopoulos, and Adrien Verdelhan, “The term structure of currency carry trade risk premium”. From the abstract:
We find that average returns to currency carry trades decrease significantly as the maturity of the foreign bonds increases, because investment currencies tend to have small local bond term premia. The downward term structure of carry trade risk premia is informative about the temporal nature of risks that investors face in currency markets. We show that long-maturity currency risk premia only depend on the domestic and foreign permanent components of the pricing kernels, since transitory currency risk is automatically hedged by interest rate risk for long-maturity bonds. Our findings imply that there is more cross-border sharing of permanent than transitory shocks.
One finding (based on a model which, importantly, assumes complete markets) is that excess returns on dollar bonds appear as in Figure 1 from the paper.
It is interesting that at a maturity of ten years, the premium is essentially zero. While these results pertain to returns, not yields to maturity, excess returns are zero at a horizon at which Chinn and Meredith (2004) find one cannot typically reject the unbiasedness hypothesis (uncovered interest parity combined with rational expectations). This pattern is shown in Figure 2.
Figure 2: Beta coefficients from Fama regressions at different horizons. Source: Author’s calculations.
However, as pointed out by my colleague Charles Engel, there are actually at least two interlinked puzzles regarding interest rates and exchange rates (paper here; post here). The first is the tendency for the exchange rate to appreciate when the interest differential is positive — which is addressed in the Lustig et al. paper. The second is the tendency for the exchange rate to appreciate when the currency is strong (which pertains to the level of the exchange rate and the subsequent change). That puzzle is not addressed in this paper.
I have been thinking about this paper, particularly over the past day:
…Across two studies and two measures of trolling,…[trolls] displayed high levels of the Dark Tetrad [narcissism, Machiavellianism, psychopathy, and sadistic personality] traits and a BFI [Big Five Inventory] profile consistent with those traits. It was sadism, however, that had the most robust associations with trolling of any of the personality measures, including those of the Big Five.
In fact, the associations between sadism and GAIT scores were so strong that it might be said that online trolls are prototypical everyday sadists (Buckels et al., 2013). Note that the Dark Tetrad associations were specific to trolling. Enjoyment of other online activities, such as chatting and debating, was unrelated to sadism. Subsequent analyses confirmed that the Dark Tetrad associations were largely due to overlap with sadism. When their unique contributions were assessed in a multiple regression, only sadism predicted trolling on both measures (trolling enjoyment and GAIT scores). In contrast, when controlling for sadism and the other Dark Tetrad measures, narcissism was actually negatively related to trolling enjoyment. Given that controlling for overall Internet use did not affect these results, personality differences in broader tendencies of Internet use and familiarity cannot explain the findings.
In the final analysis of Study 2, we found clear evidence that sadists tend to troll because they enjoy it. When controlling for enjoyment, sadism’s impact on trolling was cut nearly in half;…
…The Internet is an anonymous environment where it is easy to seek out and explore one’s niche, however idiosyncratic. Consequently, antisocial individuals have greater opportunities to connect with similar others, and to pursue their personal brand of “self expression” than they did before the advent of the Internet.
Here’s the introduction to a new paper I just finished:
This year the oil industry celebrated its 155th birthday, continuing a rich history of booms, busts and dramatic technological changes. Many old hands in the oil patch may view recent developments as a continuation of the same old story, wondering if the high prices of the last decade will prove to be another transient cycle with which technological advances will again eventually catch up. But there have been some dramatic changes over the last decade that could mark a major turning point in the history of the world’s use of this key energy source. In this article I review five of the ways in which the world of energy may have changed forever.
Below I provide a summary of the paper’s five main conclusions along with a few of the figures from the paper.
1. World oil demand is now driven by the emerging economies.
2. Growth in production since 2005 has come from lower-quality hydrocarbons.
3. Stagnating world production of crude oil meant significantly higher prices.
4. Geopolitical disturbances held back growth in oil production.
5. Geological limitations are another reason that world oil production stagnated.
And here is the paper’s conclusion:
Although the oil industry has a long history of temporary booms followed by busts, I do not expect the current episode to end as one more chapter in that familiar story. The run-up of oil prices over the last decade resulted from strong growth of demand from emerging economies confronting limited physical potential to increase production from conventional sources. Certainly a change in those fundamentals could shift the equation dramatically. If China were to face a financial crisis, or if peace and stability were suddenly to break out in the Middle East and North Africa, a sharp drop in oil prices would be expected. But even if such events were to occur, the emerging economies would surely subsequently resume their growth, in which case any gains in production from Libya or Iraq would only buy a few more years. If the oil industry does experience another price cycle arising from such developments, any collapse in oil prices would be short-lived.
My conclusion is that hundred-dollar oil is here to stay.
In a recent article, Amity Shlaes asserts official statistics mismeasure how we experience inflation. I’m going to agree, but not for the reasons you might think. It’s not because John Williams’ Shadowstats, which she appeals to, is right (Jim has comprehensively documented why each and every person who cites that source should be drummed out of the society of economists or aspiring economic commentators). Rather, it’s because I think people do have biases — i.e., the steady-state rational expectations hypothesis might not be applicable.
Ms. Shlaes writes in “Inflation Vacation”:
…The price zap is an inflation zap. The reason you thought you could afford this vacation in the first place was that you know a little about money. All the official numbers, especially the Consumer Price Index, say that inflation is reasonable. Economists you respect tell you the wages are low because of “misallocation of resources.” Janet Yellen, the new Fed chairman, says she’s not worried. Maybe she will have a good vacation.
But other numbers suggest that inflation is higher than what the official data suggest. One set, from which some of the price bites above were taken, is here. For a more thorough review of why official numbers err, have a look at the work of John Williams, a consultant who has tracked data over the years.
It’s at this point that I believe the findings of Coibion and Gorodnichenko (2013), discussed earlier by Jim, are of interest. They note that households hold noticeably different expectations regarding inflation than do professional forecasters, or markets; in this sense, household expectations are “unanchored” to the extent that they are consistently higher than ex post realizations of inflation.
Source: Figure 6, Panel A, Coibion and Gorodnichenko.
Why does this pattern arise? From the paper:
…why did households hold such different beliefs than professional forecasters? Our suggested answer can be seen in … Figure 7. Household inflation forecasts have tracked the price of oil extremely closely since the early 2000s, with almost all of the short-run volatility in inflation forecasts corresponding to short-run changes in the level of oil prices. From January 2000 to March 2013, for example, the correlation between the two series was 0.74. In contrast, the correlation between SPF inflation forecast and oil price over the same period was -0.12. The strong sensitivity of consumers’ inflation expectations to oil price has historically been strong: the correlation between MSC inflation expectations and real oil price over 1960-2000 was 0.67.
Source: Figure 7, from Coibion and Gorodnichenko.
Households with larger gasoline expenditures tend to be more sensitive to oil prices. This further buttresses the view that household expectations differ from those of professional forecasters and from market-based measures, which are unbiased. See more on the characteristics of household surveys by R. Waldmann/Angry Bear.
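The oil-price channel can be illustrated with a small simulation (entirely hypothetical series, not the MSC or SPF data): expectations that load on the level of the oil price show a high correlation with it, while anchored professional-style forecasts do not, reproducing the qualitative pattern in the paper.

```python
import numpy as np

# Illustrative simulation (hypothetical series): household inflation
# expectations that track the oil price level, versus professional
# forecasts anchored near 2%, mimic the correlation pattern that
# Coibion and Gorodnichenko report (0.74 vs. -0.12).
rng = np.random.default_rng(0)
n = 159  # months, roughly the span 2000M01-2013M03

oil = 30 + np.cumsum(rng.normal(0.5, 3.0, n))          # random-walk oil price
household = 2.5 + 0.02 * oil + rng.normal(0, 0.3, n)   # loads on oil level
spf = 2.0 + rng.normal(0, 0.2, n)                      # anchored forecast

corr_hh = np.corrcoef(household, oil)[0, 1]
corr_spf = np.corrcoef(spf, oil)[0, 1]
print(f"household-oil corr: {corr_hh:.2f}, anchored-oil corr: {corr_spf:.2f}")
```

The point of the sketch is only that a forecast rule keyed to the oil price mechanically inherits a high correlation with it, while an anchored forecast does not; the parameter values are invented.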
On another count, Ms. Shlaes discounts both hedonics and price index theory:
The Bureau of Labor Statistics or the Fed also argued that the quality of some items (camera, movie) had improved over the years. The technology it took to make X-Men: Days of Future Past is leagues ahead of the technology used for Gladiator. The movie theater itself has better seats. Therefore, the ticket price should be higher. The economists at the BLS say they discount for that: “The hedonic quality adjustment method removes any price differential attributed to change in quality,” they write. But perhaps they use such indexes to hide true price increases.
Decades ago, authorities pointed out that people substitute a cheaper item when what they originally bought was too expensive. They altered the index to capture substitution. If steak is expensive, you buy chicken. The result of their fiddle is that inflation looks lower than it would otherwise. That’s disappointing. No vacation is a true vacation without a really good tenderloin.
On the first point (hedonics), it's telling that movies were mentioned, but not telecommunications. From my standpoint, I can say there has been a big increase in what my smartphone can do relative to my first mobile phone in 2000.
On the second point, this observation is funny because the CPI is a quasi-Laspeyres index (see this post), so it tends to minimize (although not completely eliminate) the effect she speaks of. Now, it's true that the index weights are updated every two years, but even then, those acquainted with price index theory understand that in the absence of the true underlying utility function of consumers, measuring the "true" rate of inflation is infeasible, and the Laspeyres index tends to overstate the rate of inflation (the chained CPI inflation rate tends to be lower than that obtained using the standard quasi-Laspeyres CPI).
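A toy two-good calculation (hypothetical steak and chicken prices and quantities) makes the substitution point concrete: the fixed-basket Laspeyres index reads above a Fisher-type index that accounts for substitution, which is exactly the sense in which the unchained index overstates inflation.

```python
# Hypothetical two-good example: steak gets expensive, consumers
# substitute toward chicken. A fixed-basket Laspeyres index ignores
# that substitution, so it reads higher than a Fisher index.
p0 = {"steak": 10.0, "chicken": 5.0}
q0 = {"steak": 10, "chicken": 10}      # base-period basket
p1 = {"steak": 15.0, "chicken": 5.5}   # steak price jumps 50%
q1 = {"steak": 4, "chicken": 18}       # basket after substitution

goods = ["steak", "chicken"]
laspeyres = sum(p1[g] * q0[g] for g in goods) / sum(p0[g] * q0[g] for g in goods)
paasche = sum(p1[g] * q1[g] for g in goods) / sum(p0[g] * q1[g] for g in goods)
fisher = (laspeyres * paasche) ** 0.5  # geometric mean of the two

print(f"Laspeyres: {laspeyres:.3f}, Fisher: {fisher:.3f}, Paasche: {paasche:.3f}")
```

The Laspeyres index brackets the Fisher index from above and the Paasche from below here, which is the standard bounding result when consumers substitute away from goods whose relative prices rise.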
Actually, I suspect that the reported inflation rate understates the inflation relevant to Ms. Shlaes, if her household is in the upper quartile of the income distribution. That's because the CPI weights are appropriate to a household at roughly the 75th percentile of the income distribution (see this post on plutocratic vs. democratic price indices). Those at lower income levels are likely facing a higher inflation rate than she does.
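A stylized two-household example (all numbers hypothetical) shows why the plutocratic/democratic distinction matters: weighting by aggregate expenditure lets the rich household's basket dominate, so when necessities inflate faster than luxuries, the plutocratic index reads below the inflation the poorer household actually faces.

```python
# Hypothetical illustration of plutocratic vs. democratic CPI weighting.
# Two goods: necessities inflate 6%, luxuries 1%.
pi = {"necessities": 0.06, "luxuries": 0.01}

# Hypothetical annual expenditure by household on each good (dollars).
spend = {
    "poor": {"necessities": 80, "luxuries": 20},
    "rich": {"necessities": 100, "luxuries": 400},
}

def household_inflation(s):
    """Inflation rate using this household's own expenditure shares."""
    total = sum(s.values())
    return sum(pi[g] * s[g] / total for g in s)

# Democratic index: average each household's own inflation rate.
democratic = sum(household_inflation(s) for s in spend.values()) / len(spend)

# Plutocratic index: weight by shares of aggregate expenditure.
agg = {g: sum(s[g] for s in spend.values()) for g in pi}
agg_total = sum(agg.values())
plutocratic = sum(pi[g] * agg[g] / agg_total for g in pi)

print(f"plutocratic: {plutocratic:.3%}, democratic: {democratic:.3%}")
```

With these invented numbers the poor household's own inflation rate exceeds the plutocratic figure, which is the mechanism behind the claim that lower-income households face higher inflation than the official index suggests.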
One point where I do disagree vociferously with Ms. Shlaes's assessment is here, where she discusses the Liesman-Santelli exchange (discussed in this post):
If you study the last part of the video, where the CNBC host gets bullied into silence by Steve Liesman, you’ll see the problem. The price today for talking about inflation is itself too high.
Lots of people have been discussing inflation and hyperinflation for the past six years. I don't think the costs are that high at all. And in fact I expect to hear more inflation worries, day after day.
Final point: to the extent that CPI inflation has changed over time, it is interesting to consider the counterfactual price level implied by the inflation rate over the 2001M01-2009M01 period. This is shown in Figure 1.
Figure 1: Log CPI (blue) and linear trend estimated over 2001M01-09M01 period (red). Sample period for estimation of trend shaded tan. Source: BLS (May 2014 release) via FRED and author’s calculations.
It certainly looks to me as if inflation is lower (i.e., the slope is flatter) post-2008 than before. Biases that existed in the past regarding "zaps" exist now. Indeed, if anything, one bias that existed before, due to the imputation of owner-occupied rent, is probably resulting in an overstatement of inflation right now.
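The trend comparison in Figure 1 amounts to fitting a linear trend to the log price level over the pre-2008 sample and asking whether the slope (the average monthly inflation rate) is steeper than afterward. A sketch on a synthetic log price level (not the actual CPI series; the break date and rates are invented for illustration):

```python
import numpy as np

# Synthetic log price level: ~2.4% annualized inflation before the
# break month, ~1.5% after. Fitting a linear trend to each subsample
# recovers the slope, i.e. the average monthly inflation rate.
t = np.arange(240)   # 20 years of monthly observations
break_m = 96         # hypothetical break point

log_p = np.where(t < break_m,
                 0.002 * t,
                 0.002 * break_m + 0.00125 * (t - break_m))

slope_pre = np.polyfit(t[:break_m], log_p[:break_m], 1)[0]
slope_post = np.polyfit(t[break_m:], log_p[break_m:], 1)[0]
print(f"annualized: pre {12 * slope_pre:.2%}, post {12 * slope_post:.2%}")
```

A flatter post-break slope in the fitted trend is the precise sense in which the figure shows inflation running below its earlier pace.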
Update, 7/24 3PM Pacific: Lars Jonung points out that in earlier work he found serial correlation in forecast errors from surveys of consumers, and further conjectures that the costs of acquiring data might be the source of the lack of unbiasedness. See Jonung (1981) and Jonung and Laidler (1988). This point is consistent with my statement that the forecast errors from the Michigan survey failed to exhibit behavior consistent with the "steady-state rational expectations hypothesis". This doesn't mean the behavior is necessarily irrational; it could be that the learning process is slow enough in response to new shocks that errors are serially correlated (near rationality a la Akerlof and Yellen, for instance).
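That slow-learning story can be sketched with an assumed partial-adjustment (adaptive-expectations) rule, with a made-up adjustment speed: after a one-time permanent jump in inflation, forecast errors shrink only gradually, so they are serially correlated without any gross irrationality.

```python
# Assumed adaptive-expectations rule with adjustment speed lam = 0.2
# (hypothetical): expectations close only a fifth of the gap each
# period, so errors after a permanent shock decay geometrically and
# are serially correlated.
lam = 0.2
pi_true, expect = 4.0, 2.0   # inflation jumps permanently from 2% to 4%

errors = []
for _ in range(8):
    errors.append(pi_true - expect)
    expect += lam * (pi_true - expect)  # partial-adjustment updating

print([round(e, 3) for e in errors])
```

Every error in the sequence is positive and each is smaller than the last, which is the signature of serially correlated but slowly self-correcting forecasts.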
Production flows from a given oil field naturally decline over time, but we keep trying harder and technology keeps improving. Which force is winning the race?
An oil reservoir is a pool of hydrocarbons embedded and trapped under pressure in porous rock. As oil is taken out, the pressure decreases and the annual rate of flow necessarily declines. A recent study of every well drilled in Texas over 1990-2007 by Anderson, Kellogg, and Salant (2014) documents very clearly that production flows from existing wells fall at a very predictable rate that is quite unresponsive to any incentives based on fluctuations in oil prices.
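The mechanical decline can be sketched with a simple exponential decline curve; the functional form and the numbers below are illustrative assumptions, not Anderson, Kellogg, and Salant's estimates.

```python
import math

# Exponential decline-curve sketch (hypothetical parameters): a well
# producing q0 barrels/day initially flows at q(t) = q0 * exp(-d * t)
# after t years, regardless of the oil price.
def flow(q0, d, t):
    """Production rate after t years, given initial rate q0 and decline rate d."""
    return q0 * math.exp(-d * t)

q0, d = 1000.0, 0.10  # assumed: 1,000 bbl/day initially, 10%/yr decline
for t in (0, 5, 10):
    print(f"year {t:2d}: {flow(q0, d, t):7.1f} bbl/day")
```

The price-insensitivity in the study shows up here as the absence of any price term in q(t): once the well exists, the decline schedule is set by reservoir physics, not by incentives.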
If you want to produce more oil, you have to drill a new well, and in contrast to production from existing wells, drilling effort clearly does respond to price incentives.
When a given region is found to be promising, more wells are drilled, and production initially increases. But eventually the force of declining pressure takes over, and we see a broad decline in oil production from a given producing region that additional effort and price incentives can do little to reverse. For example, production from the North Sea and Mexico, which had been quite important in the world total in 2000, has been declining steadily for the last decade despite a huge increase in the price of oil.
It’s also interesting to look at graphs for each of the oil-producing U.S. states. Production from Pennsylvania, where the oil industry began in 1859, peaked in 1891, and in 2013 was at a level only 1/6 of that achieved in 1891. But despite falling production from Pennsylvania after 1891, U.S. production continued to increase, because of the added boost from Ohio (which peaked in 1896) and West Virginia (which peaked in 1900). And so the story continued until 1970, with total U.S. production continuing to increase despite declines from the areas first exploited.
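That succession-of-regions arithmetic can be sketched numerically: summing a few bell-shaped (Hubbert-style) regional production profiles with staggered peak dates, all hypothetical, shows the total continuing to rise long after the first region peaks, until the last big region rolls over too.

```python
import math

# Toy illustration: total production from several regions, each with a
# hypothetical bell-shaped (Hubbert-style) profile peaking at a
# different date, keeps rising well past the first regional peak.
def region(t, peak_year, height, width=15.0):
    """One region's annual production at year t (Gaussian bell curve)."""
    return height * math.exp(-((t - peak_year) / width) ** 2)

# (peak year, peak height) for each hypothetical region
regions = [(1891, 1.0), (1900, 1.5), (1930, 3.0), (1970, 6.0)]

total = {yr: sum(region(yr, p, h) for p, h in regions)
         for yr in range(1860, 2011, 5)}
peak_total = max(total, key=total.get)
print(f"first regional peak: 1891; total production peaks near {peak_total}")
```

With these invented profiles the aggregate peaks near the last region's peak date, echoing the U.S. pattern in which national production kept climbing until 1970 despite Pennsylvania peaking in 1891.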
The table below lists the date at which production from an indicated state reached its highest point. Use of horizontal drilling methods in the Bakken and Niobrara shales brought production in North Dakota and Colorado to all-time highs in 2013. Production was also higher in 2013 than in 2005 for 22 of the 31 states graphed above, though 2013 levels were still below the historical peak for all but 3 of these states.
| Region | Date of peak |
|---|---|
| Louisiana and GOM | 1971 |
Another perspective on the U.S. trends comes from looking at broader categories of production. The red area in the graph below summarizes field production of conventional crude oil from the lower 48 U.S. states. This peaked in 1970, and today is 5.5 mb/d below the value achieved then. Factors temporarily slowing the trend of declining production were development of offshore oil (in dark blue) and Alaska (in light blue). But the combined contribution of all three of these has nevertheless been falling steadily for the last 20 years.
That downward trend was dramatically reversed over the last few years with the advent of horizontal drilling and fracturing to get oil out of tighter geologic formations, as seen in the green region in the graph above. If success with tight oil formations continues, we may yet see the historical peak production of many of the states above eventually exceeded, and indeed perhaps even for the United States as a whole.
But it's also worth noting that as we have moved through the succession of colors in the graph above, we have been turning to increasingly expensive sources of oil. Today's frackers would all be put out of business if we were to return to the oil prices of a decade ago.
And even if prices remain high or go higher, eventually that green curve is going to turn around and start falling with the others.