Consistent with findings in Presimetrics, the book I wrote with Michael Kanell which will be released in August, I’ve had some posts whose results contradict standard economic theory. In some cases, readers have insisted that the results must be some sort of anomaly. Perhaps the biggest offender is a graph which appeared in this post. The graph shows growth rates in real GDP per capita by Presidency, where each President is color coded by whether he increased the tax burden (in this case defined as federal government revenues as a share of GDP) or decreased it:
The graph shows that Presidents who cut the tax burden produced slower growth, on average, than Presidents who increased the tax burden. For many people, this doesn’t make sense at all. (An explanation for why these results show up is provided here.) Now, I’ve already addressed some of the objections that have been raised to this, mostly in private or in comments at the Angry Bear blog. In fact, the graph above by itself answers one of the objections – I was told many times that FDR only produced faster growth because the economy accelerated during World War 2. Thus, the graph only shows FDR’s results through 1938, which avoids the war, the build-up to the war, and even Lend-Lease.
Another criticism is that the problem lies in my assumption that Presidents have an effect on the economy, and that a better way to do this is to look at the business cycle. But I’ve had posts looking at the business cycle, and cutting tax burdens doesn’t look all that much better as an economic strategy across the business cycle either. For instance, consider the following graph, one of several fairly damning graphs I posted here and here:
If cutting the tax burden is the right prescription during a recession, it isn’t evident from the above graph.
I’ve also been accused of cherry-picking, though I’ve gone as far back as the data allows, and in most posts on this subject, I’ve actually chucked the WW2 years. In addition, I’ve noted that throwing out outliers doesn’t change results materially. But what if the last eight decades have all been an anomaly? After all, shouldn’t we consider the “Roaring ‘20s,” a period which every textbook tells you was one of rapid growth? Such intellectuals as Glenn Beck, Thomas Woods, and Amity Shlaes are quick to assure us that the prosperity of the 1920s was due to tax cuts.
Unfortunately, the NIPA tables don’t go that far back, so we don’t have actual data on real GDP per capita (or the tax burden). But I’ve done my best to examine that claim; the following graph, which first appeared here, shows the top marginal tax rates and the periods (shown in gray) when the economy was in recession.
While the 1920s was, indeed, a period when marginal tax rates were reduced, it was a period of recession followed by recession followed by recession followed by the Great Depression. The longest consecutive stretch spent outside of recession during the 1920s was 27 months! Two years and a quarter. In fact, the economy was in recession (if not outright Depression) a full 44% of the time during the 1920s. If these were roaring years, those roars were extremely short-lived.
Which moves us to the next issue – I’ve been criticized for focusing on the tax burden rather than the marginal tax rate (except in periods where tax burden data are not available). Frankly, of all the criticisms, that one seems especially weak. For those with enough income, the marginal tax rate is as much a fiction as the MSRP on a car; Warren Buffett’s offer of $1 million to anyone on the Forbes 400 list who could prove they pay a higher share of their income in taxes than their secretary remains untaken.
Moving on to the next criticism… I’ve been told my interpretation of the above graph is wrong – it is not so much that higher tax burdens are correlated with faster economic growth, but rather that administrations that produced rapid economic growth tended to feel they could raise tax burdens, and that administrations suffering from poor growth kept tax burdens low to try to remedy a bad situation.
I dealt with that issue in recent posts by grouping the periods from 1929 to the present into eight year administrations, where possible. Those administrations were made up of the eight year terms of Presidents who served two terms (or in FDR’s case, the first two terms only). Added to those were eight year periods in which a Vice President took over for a President who died or otherwise left office. For each of those eight year terms, I created the following graph. It shows the change in the tax burden in the first two years of each administration along one axis, and economic growth on the other axis:
Clearly, economic growth in years 3 through 8 cannot explain changes in the tax burden in earlier years unless one assumes time travel, clairvoyance, or one heck of a coincidence.
And speaking of coincidence, we arrive at another common complaint: there simply are not enough observations to reach a conclusion of any sort. Left unstated, of course, is why a supposed lack of observations should validate the idea that lower taxes produce faster economic growth, even though the tax cutting Presidents had the lousiest economies and (as per graph 4) the tax changes preceded the growth.
Now, those who complain about the lack of observations generally insist that a) I don’t know the first thing about statistics and b) you need 30 or more observations to reach a conclusion. That critique dates back at least three years, having been made by an anonymous blogger for the Economist who I understand is now known as Megan McArdle, and my answer (to point b) back then is as good an answer as any:
Which is what degrees of freedom are for… Maybe there’s something wrong with the textbooks on my shelf, but the t distribution tables in the back of those textbooks have as few as 1 (one) degree of freedom. When the degrees of freedom are low, the t-statistic has to be really high in order to reject H0. Or something like that – what do I know?
Allow me to explain. If there are a large enough number of observations to work with, it is possible to find a statistically significant difference between two things (events, peoples, outcomes, whatever) that at first glance or from a distance look very similar to the unaided eye. However, if there are a very small number of observations, then differences have to be larger and more obvious for them to be statistically significant. Consider three medications to extend the lives of patients with a specific type of cancer, where two have obtained FDA certification and the third was cooked up by the creepy guy who lives two houses down and uses cat puke as an active ingredient. It might take hundreds of observations to tell which of the first two medications is more effective, but it shouldn’t take very many to tell how well the third one compares.
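To make that point concrete, here is a quick illustrative sketch (using scipy, and not any of the data above) of how the hurdle for statistical significance rises as the sample shrinks: with one degree of freedom the t-statistic must exceed roughly 12.7 to reject H0 at the 5% level, versus only about 2.04 with thirty.

```python
from scipy import stats

# Two-sided critical t-values at the 5% significance level: the hurdle
# a t-statistic must clear to reject H0, for various degrees of freedom.
for df in (1, 2, 5, 10, 30):
    t_crit = stats.t.ppf(0.975, df)  # upper 2.5% tail => two-sided 5% test
    print(f"df = {df:2d}: |t| must exceed {t_crit:.2f}")
```

The smaller the sample, the taller the hurdle: small samples can still reject H0, but only when the difference is large relative to the noise, which is exactly the point of the cancer-medication example.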
And while in the past I sometimes decided to answer this question by running a t-test or some non-parametric test, it always seems to lead to questions about the assumptions from people who clearly never ran a hypothesis test in their lives but have one or another political point to defend to the death. So let me try something else – an argument by analogy. Consider the graph below.
If such a graph came from a study comparing outcomes of medications, where patients were assigned a medication but otherwise told to go about their daily business, there’d be no argument that, at least as a first approximation, there’s no reason whatsoever to assume that Medication 2 was more efficacious than Medication 1. And if the disease in question was a particularly rare one, and the graph above represented the testing performed on every known sufferer since 1929, most of us would be appalled if a doctor decided to treat the next sufferer with Medication 2. At the very least, we would assume that the burden of proof going forward should lie not with the proponents of Medication 1 but rather with the supporters of Medication 2, whether we understood the mechanisms by which either of these pharmaceuticals worked (or purported to work) or not.
And yet, this graph is identical to Figure 1, except that I changed the title and some labeling.
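For anyone who does want to see the hypothesis-test machinery I alluded to earlier, here is a minimal sketch. The growth figures below are made-up placeholders for illustration only, not the data behind any of the graphs above; the point is simply that even with five observations per group, a large and consistent gap can be statistically significant.

```python
from scipy import stats

# Hypothetical, made-up annual growth rates (percent) for two groups of
# administrations -- placeholders only, NOT the data behind the graphs above.
tax_hikers = [3.9, 4.4, 3.1, 5.2, 3.6]
tax_cutters = [0.8, 1.5, 2.0, 0.3, 1.1]

# Welch's t-test: compares means without assuming equal variances.
t_stat, p_t = stats.ttest_ind(tax_hikers, tax_cutters, equal_var=False)

# Mann-Whitney U: a non-parametric alternative with no normality assumption.
u_stat, p_u = stats.mannwhitneyu(tax_hikers, tax_cutters,
                                 alternative="two-sided")

print(f"Welch t = {t_stat:.2f}, p = {p_t:.4f}")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_u:.4f}")
```

With only five observations per group, both tests still reject the null hypothesis here, because the two groups barely overlap; that is the small-sample logic of the medication analogy in code form.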
But there is another problem with the small sample objection. See, there are many ways to test whether there has been a negative correlation between tax burdens and economic growth, and looking at the national economy is only one of those ways. I’ve also had a number of posts at the Angry Bear blog looking at how states have fared over the years, and there are 50 of those. In fact, that was the topic of the first post I ever wrote four years ago next month. In that post, a comparison of the states, using data from 1990 to 2005 yielded the following conclusion:
Thus, the data doesn’t seem to support the idea that lower taxes are associated with faster growth rates. In fact, the opposite is true, especially for the fastest growing states. One way to interpret this is to conclude that taxes are actually below their optimal rates, and therefore, at the margin, the government is actually more efficient than individuals at converting its spending into growth. Society needs a certain amount of public goods (infrastructure, public health, confronting the Canadian menace, etc.) for businesses to thrive, and perhaps we currently have too little provision of public goods rather than too much.
And the other posts I’ve had using similar state level data have all led to the same findings.
Another problem frequently brought up is that growth is too complicated to be explained by a single variable. We agree, and in the book, we actually provide a model that uses several variables. But be that as it may, it isn’t reasonable to claim that cutting tax burdens produces faster economic growth, but that the effect gets swamped by opposing forces every time you look at the data systematically, whether you’re looking at the performance of Presidents, at growth during business cycles, or at the performance of states. Clearly, if reducing the amount that people pay in taxes is so beneficial to the economy, somewhere that effect would show up. It shouldn’t be overwhelmed by other variables pushing in the opposite direction every time one tries to test it systematically and consistently.
Moving on, we have another little gem – that the performance of the two regimes (tax cutters and tax hikers) is not independent. The argument is this: tax hikers do well because they follow tax cutters who laid the foundation for growth. Tax cutters do poorly because they have the misfortune of following tax hikers who set up the economy for a fall.
This one is particularly easy to hit out of the park. First, note that there is only one tax hiker that is followed by another tax hiker: LBJ followed JFK. And LBJ produced the second fastest growth in our sample, which is to say that simply following a tax hiker is no guarantee of poor performance.
Now, look what happens when you consider only Presidents that followed their tax burden cutting peers:
Notice that following a tax cutting President doesn’t mean one will turn in a poor performance… unless one is also a tax cutting President. In fact, tax cutting Presidents that followed other tax cutting Presidents did worse than tax cutting Presidents who followed tax hikers. Imagine that. It’s almost as if the longer tax burdens are cut, the worse the outcome.
And yes, I included Hoover in the above graph even though we don’t have the data to know with certainty that his predecessor, Calvin Coolidge, cut the burden; Coolidge is, however, renowned as a small government guy. But leave out Hoover, and leave out Obama’s first year, and you still aren’t left with anything other than this: tax cutting Presidents who followed other tax cutting Presidents did worse than tax cutting Presidents who followed tax hikers.
Which leads to the sorriest objection I’ve heard, namely that the American public, the constantly gulled American public, has the ability to reason out the outcome of economic policies on the macroeconomy to near-perfection, at least in 4 year increments. And the way it manifests itself here is this: when the economy is about to sour, we elect tax cutters, who, in turn, manage to limit the scale of the impending disaster.
This, ahem, theory (gurgle, choke) is the efficient market hypothesis on LSD. But it has the advantage of being able to explain pretty much anything. The problem is that it does so by breaking everything down to utter nonsense. For instance, it would indicate that the recent housing bubble and economic meltdown, rather than being a surprise, was actually anticipated on some unconscious level by the American public, and selected as being much better than the alternative. Ditto the Great Depression. So what was this worse thing that was avoided? Locusts? Famine and pestilence? Billions of furious yetis descending on us from their Himalayan stronghold?
And yet, despite the fact that this story makes a virtue out of nonsense, it still isn’t internally consistent. For instance, if the American public understood that only a series of tax cuts were going to save us from something worse than the Great Recession, then wouldn’t GW have managed to win the popular vote in 2000 and achieve a landslide in 2004? Conversely, does the fact that GW received fewer votes than Al Gore indicate that perhaps the American public did not perceive that really big threat a few years in the future? It’s easy to knock down a story that is built on nonsense.
All of which brings us back to the point of this post. Michael Kanell and I have noted that lower tax burdens are not correlated with more rapid economic growth. In fact, from 1929 to the present (and in the book, we focus on the period from 1952 to the present) administrations that have cut the tax burden have performed worse than administrations that raised the tax burden.
And I think we, together and separately, have answered every reasonable objection that has come up, and even quite a few unreasonable objections to boot. And we’ve done so in a consistent and open manner. In our book and in my posts, we’ve been open and clear about our methods and data sources, and we’ve made an effort to treat the data as consistently and systematically as we were able. At some point, the burden of proof should no longer lie with us, but rather on those who cling to a story that simply is not consistent with the data we have observed in the U.S. over the past few decades. Frankly, I think we’re well past that point.