At the session, Greg Leiserson of the Washington Center for Equitable Growth presented his very timely paper, “Removing the free lunch from dynamic scores: Reconciling the scoring perspective with the optimal tax perspective.”
Obviously, this paper comes in the immediate aftermath of the enactment of the 2017 tax act, in which “dynamic scoring” played what I would call a quasi-prominent role. I modify “prominent” with “quasi” because:
1) the regular score, not the dynamic score, was the one that ended up being used to measure compliance with the budget rule capping the revenue loss at $1.5 trillion,
2) due to the rushed process, the Joint Committee on Taxation wasn’t able to issue its dynamic score until late in the game, when the end result was all but fixed. So it doesn’t seem to have had any direct influence on what happened. Plus, once the score came out, showing an estimated dynamic revenue loss of about $1 trillion, Senate Republicans promptly rejected it for no good reason, based purely on their supposed intuitions.
Nonetheless, the score did matter atmospherically in a couple of ways that may linger. For one, it actually rebutted claims that the act would pay for itself, or even come close to doing so. For another, it did indeed predict a significant, albeit inadequate, growth response to the tax cuts, which it showed as reducing the revenue cost by about one-third.
As it happens, there are many reasons why I and others think that the $1 trillion dynamic revenue score was way too optimistic. Just as a starting point to motivate viewing it as very contingent, it’s based on a weighted combination of three radically different forecasting models, without disclosure of what each of those models separately showed. JCT really hasn’t disclosed very much regarding the inputs to the dynamic analysis. But that says nothing as such about whether the score was too low versus too high, except insofar as one might draw inferences from the extreme pressure JCT was under from its bosses to lean towards optimism, at least to the degree consistent with its incentive to limit harm to its own institutional reputation.
My reasons for thinking the score was probably too low, not too high, are as follows:
1) I think their regular score was too low, reflecting insufficient adjustment for the massive tax planning opportunities that the act has encouraged and that we will be reading about in the newspapers within a year or two, if not sooner.
2) There are several reasons for thinking the dynamic adjustments were too high. Perhaps the most important is that the model assumes other countries won’t respond to what we’ve done by lowering their own corporate tax rates. That’s an extremely unrealistic assumption, adopted solely because building in how other countries are likely to respond lies outside their models. But just because that’s hard to model (or outside their models) doesn’t mean it isn’t both likely and important.
3) My prior would have been to lean towards the Penn-Wharton budget model, which estimated a somewhat higher revenue loss.
All this is just background for Greg Leiserson’s paper. He is interested in dynamic scoring as an approach, not (for purposes of this paper) in whether the estimators got it right. He raises important challenges to what he calls the emerging consensus on how dynamic scoring is (and ostensibly should be) done. But I will save this set of issues for my next post.