I recently saw a misleading presentation of COVID data pertaining to Israel. In this post I’m sharing several graphs that I made to counter this misleading image.
Israel is currently a popular object of those committed to an anti-vax narrative because a high proportion of the population is fully vaccinated and cases there are currently spiking. The situation is obviously concerning. However, it is being misrepresented to feed anti-vax sentiments, which are dangerous. Epidemiological study shows that widespread vaccination is key to taming the virus, although not in isolation.
I went to the same Israeli government data source as the original misleading table. You can find the data for these charts here. It is in Hebrew. Thank the gods for Google Translate.
The first graph presents the data in a way similar to the table. It compares the share of the cases that are vaccinated with the share of the population that is vaccinated for different age groups.
The table being circulated by anti-vaxxers presents this data in ways designed to tell a misleading narrative. First, the table left out the data for the 12-15 and 16-19 age groups, which is already telling. Second, the choice of a table makes the values seem closer together than they are. Third, and most egregious, the table creator cherry-picked data from different datasets to further shrink the apparent gap. All the data needed for the comparison can be found in a single, downloadable Excel workbook. However, a comparison made using that data is not nearly as alarming as the cherry-picked version.
As seen in the first graph, the values are closer together than one would like. However, this is still a misleading way to look at the data. As the vaccinated share of the population increases, the share of cases among the vaccinated will necessarily increase. The next graph is a more common way to present the data. In fact, the source offers the data in this form.
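The arithmetic behind that point is worth making explicit. Even with a highly effective vaccine, as vaccination coverage approaches 100 percent, the vaccinated share of cases must rise. A small sketch with hypothetical numbers (not the Israeli data):

```python
def vaccinated_share_of_cases(vax_share, rate_ratio):
    """Expected share of cases among the vaccinated, given the vaccinated
    share of the population and how many times more likely an unvaccinated
    person is to be infected (rate_ratio)."""
    # Cases in each group are proportional to group size times per-person rate.
    vax_cases = vax_share * 1.0
    unvax_cases = (1.0 - vax_share) * rate_ratio
    return vax_cases / (vax_cases + unvax_cases)

# Even if unvaccinated people are 9x more likely to be infected,
# the vaccinated share of cases climbs as coverage rises:
for coverage in (0.5, 0.8, 0.95):
    print(round(vaccinated_share_of_cases(coverage, 9), 2))
# prints 0.1, then 0.31, then 0.68
```

So a majority-vaccinated share of cases is exactly what we should expect in a highly vaccinated population, even when the vaccine works well.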
The second graph shows the number of cases per 100,000 people within each category: unvaccinated, vaccinated, and partially vaccinated. As you can see, unvaccinated people are much more likely to become infected.
Among 12-15 year olds, unvaccinated people are 9 times more likely to be infected. The gap between unvaccinated and vaccinated people narrows through the 15 to 59 age groups. It is at its narrowest among 20 to 29 year olds, but even there unvaccinated people are contracting the virus 30% more often than fully vaccinated people.
It might seem surprising that the case rate for fully vaccinated people is higher than the rate for partially vaccinated people. However, both the narrowing of the gap and the lower case rate for the partially vaccinated likely demonstrate behaviour change among the vaccinated. Vaccinated people likely began to relax other public health measures. This is why vaccination alone, even in highly vaccinated populations, will be insufficient. Likewise, the sub-80% vaccination rates among more highly social 20- and 30-somethings are insufficient. These charts show that being vaccinated does reduce the likelihood of contracting the virus, though not as effectively as we would like. However, the next chart shows where vaccines are especially important: reducing the severity of a COVID infection.
The third chart is the number of severe infections per 100,000 people within the categories of vaccinated and unvaccinated. Severe infections are much, much higher among the unvaccinated. Because the rate of severe cases among our elders is so high, it makes it hard to see the difference for younger cohorts.
The next chart is limited to 20 to 59 year olds.
Among 40-49 year olds, an unvaccinated person is 30 times more likely to have a severe case.
Another way to look at this is to consider the share of cases deemed severe. I do this in the fifth chart. The numbers show us that unvaccinated people are more likely to get infected. Further, once infected, they are more likely to have a severe case. Among those 70 and over, an unvaccinated person who catches COVID has a one-in-five chance of it being a severe case. For the vaccinated, it is less than one-in-14.
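This is just a conditional proportion: of those infected, what fraction became severe. A sketch with made-up counts chosen to echo the one-in-five versus roughly one-in-14 gap (not the real figures):

```python
def severe_share(severe_cases, total_cases):
    """Of those infected, the fraction whose case is severe."""
    return severe_cases / total_cases

# Illustrative counts for a 70+ cohort (not the actual data):
unvax_severity = severe_share(200, 1000)   # 0.2, i.e. one in five
vax_severity = severe_share(70, 1000)      # 0.07, i.e. about one in 14
print(unvax_severity / vax_severity)  # ~2.9x more likely to be severe, once infected
```

Note this conditional risk multiplies with the higher infection rate: the unvaccinated are more likely to be infected, and then more likely to get a severe case on top of that.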
Not getting COVID obviously benefits the person who avoids the virus. However, it also benefits everyone they come into contact with. Because unvaccinated people are more likely to contract the virus, they are also more likely to spread it. This does not even account for the fact that some early research indicates that even when vaccinated people do contract the virus, they are less likely to transmit it. Healthy people who can become vaccinated, but choose not to, pose a danger to those who cannot be vaccinated, such as everyone under 12. Although COVID-19 has generally been less severe for children, the delta variant has sent more young kids to hospitals.
Because unvaccinated people have a higher likelihood of a severe case, they also impose a greater cost and present a greater risk to the health care system, which endangers us all. If they require hospitalization, that diverts resources. The diversion of resources is not just within health care systems. In Orlando, COVID patients require so much oxygen that it is diverting supply from the city’s water treatment plants. City officials have asked residents to reduce their water usage because supplies are running low. At high enough levels of hospitalization, severe cases can completely overwhelm hospitals. Not only does this threaten the well-being of anyone else needing hospital care, it imposes a massive physical and emotional toll on health care workers.
I understand being skeptical of corporate controlled science. My PhD was essentially about the deleterious effects of corporate power. Corporations have introduced all sorts of manufactured chemicals into our lives with little regard for their long-term harms. However, most of those chemicals have never undergone anything close to the scrutiny of these vaccines.
Lots of the research done to produce these vaccines did not take place within corporations whose sole purpose is improving the bottom line. University research has been key. While there is excessive corporate influence on university research, much research remains outside the drive of commercialization.
Further, epidemiologists working outside the corporate ambit are pleading with the public to get vaccinated as a key to controlling the spread of COVID-19. Epidemiology is a fascinating field that deals with complex systems. It is a well-established field with a proven track record of prediction. Epidemiological models have demonstrated again and again that high levels of vaccination will staunch the spread.
There is uncertainty about the vaccine. And there are risks. We should not deny that. However, the risks and uncertainties associated with the virus are much, much higher. For both ourselves and for our neighbours, everyone that can get vaccinated should do so, as soon as possible.
Just as importantly, we need to demand the patents on the vaccines be waived. We need global manufacture and distribution of these life-saving medicines to ramp up quickly. While our first concern should be the people of poorer countries that cannot afford the vaccines, there are also selfish reasons to want high global vaccination. The longer there are large unvaccinated populations, the more the virus will replicate and mutate, which increases the likelihood of more dangerous variants.
If nothing else, this pandemic should have taught us that the notion of the isolated, self-sufficient individual is a myth. Our lives are all interconnected. We need to be accountable to each other, and take care of each other. A care and accountability approach to the vaccine requires us to get vaccinated, if we are physically able, and to make the vaccine available to everyone as quickly as possible.
In Part 2, I looked at the shifts in U.S. household consumption that occurred during WWII. While aggregate consumption increased alongside massive government intervention, the qualitative mix of that consumption changed in some drastic ways. This analysis was intended to augment the analogy made by J.W. Mason and Mike Konczal between WWII and the prospects for a government-led post-pandemic economic boom. In this post, I will move the analogy from the U.S. to the U.K.
For several reasons, the U.K. is a better analog for our current situation.
Our ecological dire straits are much more like living adjacent to a war zone that occasionally spills over onto us than it is like living across the ocean from one.
U.S. and U.K. Economies During WWII
The economic trajectories of the WWII allies were dramatically different before the war. Although both countries had significant economic downturns in the early 1930s, the decline of the U.K. was much smaller and shorter, as seen in the figure below.
Each set of bars is the cumulative change in nominal GDP beginning in 1929. By 1933, U.S. GDP was almost 50 percent below its 1929 level. In the U.K., the decline was about 10 percent. While the U.S. began to recover in 1934, it had not returned to its pre-Great Depression level of GDP by 1939. Meanwhile, the U.K. had almost fully recovered by 1935. By 1939, U.K. GDP was more than 25 percent higher than in 1929.
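For anyone reproducing these figures, each bar is simply the percent change from the 1929 base. A minimal sketch with hypothetical index values (not the actual GDP series):

```python
def cumulative_change(series, base_year):
    """Percent change of each year's value relative to the base year."""
    base = series[base_year]
    return {year: (value - base) / base * 100 for year, value in series.items()}

# Hypothetical nominal GDP values, indexed so 1929 = 100:
gdp = {1929: 100.0, 1933: 55.0, 1939: 95.0}
print(cumulative_change(gdp, 1929))  # {1929: 0.0, 1933: -45.0, 1939: -5.0}
```

Because the changes are cumulative against a fixed base rather than year-over-year, the bars directly show how far each economy stood from its pre-Depression level.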
With the onset of the war, both countries saw sizeable increases in GDP. As mentioned in Part 2, it took the war for the U.S. to actually complete its recovery from the Great Depression. In 1940, U.S. nominal GDP remained two percent below 1929, although it had grown ten percent from 1939. By 1941, U.S. GDP had more than recovered and was 24 percent above 1929.
The U.K.’s GDP continued to outpace the U.S.’s during the early years of the war (see the next figure). This growth was entirely driven by government. By 1941, U.K. nominal household consumption was 12 percent above 1939, while for the U.S. it was 21 percent higher. Then, in 1942, U.S. GDP growth overtook the U.K.’s, as U.S. consumer expenditure continued to grow and its government expenditure jumped dramatically.
During the war years, government expenditure became a much larger portion of both countries’ GDP, as seen in the next figure.
Although it is universally known among economists that government spending is a part of GDP, they generally neglect that fact when advocating for increased GDP as evidence of increased well-being. Belief in ‘crowding out’, which I discussed in Part 1, could be a reason why. Or, perhaps ‘crowding out’ offers a convenient theoretical justification for an ideological opposition to government.
Currently available data does not have values for U.K. government investment during the war years. Aggregate investment in “fixed capital formation” is available. However, comparisons between the U.S. and the U.K. suggest there may be accounting differences that affect the distribution of values between expenditure and investment. For the figure below, I imputed a value for U.K. government spending on assets using the U.S. government’s relative share of total investment in non-residential assets. Before and after the war, the government share of total investment was similar for the U.K. and the U.S., even if the investment share of GDP was quite different.
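The imputation can be sketched in a few lines. The function name and the numbers below are mine, for illustration only; the actual calculation used the historical U.S. shares year by year:

```python
def impute_gov_investment(uk_total_investment, us_gov_investment, us_total_investment):
    """Impute U.K. government investment by applying the U.S. government's
    share of total (non-residential) investment to the U.K. total."""
    us_gov_share = us_gov_investment / us_total_investment
    return uk_total_investment * us_gov_share

# Hypothetical values in arbitrary units (not the actual series):
print(impute_gov_investment(10.0, 30.0, 40.0))  # 7.5
```

The assumption, as noted above, is that the government share of total investment was similar in the two countries, which the pre- and post-war data supports even though the investment share of GDP differed.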
In the figure below, the full height of the black/gray bars is U.K. government spending on both consumption expenditures and investment, as a share of GDP. The investment segment is in gray. The red/pink bars are U.S. government spending. The pink segment is the spending categorized as investment. If there are accounting differences, then the two categories are not as meaningful as the total amount.
In 1940, total U.K. government spending was already more than 40 percent of GDP. For 1942 until 1944, it was over 50 percent. Because the U.S. did not enter the war until 1941, the level of government spending as a share of GDP actually fell slightly in 1940. Nominal government spending increased, but because GDP increased more, the share fell.
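The arithmetic of that last point is worth making explicit: a share can fall even while its numerator grows, so long as the denominator grows faster. A toy check with made-up numbers:

```python
# Hypothetical figures: government spending rises in nominal terms,
# but GDP rises faster, so spending's share of GDP falls.
gov_1939, gdp_1939 = 10.0, 100.0
gov_1940, gdp_1940 = 11.0, 115.0

share_1939 = gov_1939 / gdp_1939   # 0.10
share_1940 = gov_1940 / gdp_1940   # ~0.096
print(share_1940 < share_1939)  # True: nominal spending up, share down
```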
U.S. government spending never broke the 50 percent mark, although it came close. Obviously, the U.K. and U.S. governments comprised so much economic activity because of the all-out war effort. Both governments were commanding enormous amounts of material and labour to equip and conduct the war.
U.S. national accounts have data on defence spending. In 1939, U.S. defence spending was $1.7 billion. In 1944, it was $97 billion! In 1941, the U.S. spent $15 billion on defence, which was more than the total amount spent from 1929 to 1939.
So, both the U.K. and the U.S. had growing economies, as measured by nominal GDP, with a substantial portion of that growth in the form of increased government spending. Importantly, that spending becomes some people’s income. The economist J.M. Keynes recognized this fiscal reality—a fact ignored, denied, or misunderstood by most of our prominent contemporary economists—and saw in it both opportunity and risk. The spending offered an opportunity to reduce inequality. The risk was that the money spent to fund the war effort could lead to destructive inflation, especially of goods needed to conduct the war.
Keynes offered his solution in a short pamphlet titled How to Pay for the War. He called for a mixture of taxes and forced saving, which would draw money out of circulation. I will discuss Keynes’ solution and its applicability to our present situation in the conclusion to this third, and final, part. However, for now it suffices to say that the U.S. and the U.K. were affected very differently by the war, not least in the domain of household consumption. These differences are important if we are going to look back at WWII as an analogy for dealing with the post-pandemic recovery in the context of the climate crisis.
U.K. Household Consumption
The inspiration for this series of posts was a pair of articles: one written by Peter Coy, the other by J.W. Mason and Mike Konczal. The latter made the point that price-adjusted U.S. household consumption rose throughout WWII, even as the U.S. government spent heavily. This inspired Coy to check the fact and determine that Mason and Konczal were correct. However, the situation was markedly different in the U.K.
Consumption by U.K. households fell every year for the first four years of the war. Although it increased in 1944 and 1945, it would not fully recover until 1946. As we will see below, even in 1946 many categories of household consumption remained well below their pre-war levels.
Unlike the U.S. economy, the U.K. economy was unable to simultaneously provide more goods and services to households and to the war effort.
Changes in U.K. Household Consumption
Unfortunately, the detail available in U.S. national accounting is not available for the U.K., and the disaggregated categories are not the same as in the U.S. data. That means I cannot directly compare shifts in household purchases of durable goods, non-durable goods, and services. Nonetheless, the disaggregated data highlights some important qualitative shifts in household consumption that accompanied the aggregate decline.
First, the data shows that the war required a decline in several categories of goods that we can consider ‘essential’: fuel & lighting, food, and clothing.
Food consumption fell by 21 percent. It is worth noting that in 1943 almost 20 percent of the working-age population was in the armed forces. That means a significant portion of food consumption would have shifted from a household expense to a government expense.
Additionally, we know about the ‘Victory Gardens’, which produced food for household consumption that was excluded from the national accounts. This points at a standard criticism of GDP: it only counts most activities once they are monetized, which excludes a lot of activities that are nonetheless valuable. The classic example is the exclusion of housework, which is heavily performed by women. If a stay-at-home mother cares for her children and cleans her home, it contributes nothing to the national accounts. However, if she goes back to work, and hires a nanny and house cleaner to perform those tasks, that expense is added to GDP.
The reduction of fuel & light likely meant, on the one hand, more discomfort due to cooler homes in winter, and more inconvenience from reduced lighting. On the other hand, it also meant more diligent conservation, which can offer its own rewards. We cannot assume that a reduction in household purchases automatically means deprivation.
Similarly, the reduced spending on clothing likely brought disappointment to many. Those who enjoy wearing the latest fashions undoubtedly had to forego that luxury. However, it also meant people got more use out of their clothing by wearing it for longer. Additionally, as with food, a large portion of the population were provided a portion of their clothing by the government.
The two figures above—aggregate consumption and consumption of ‘essentials’—both overstate how much consumption by the U.K. population actually declined. At the same time, figures of aggregate U.S. household consumption understate how much consumption by the U.S. population actually increased. A similar portion of the U.S. working age population ended up in the armed forces during WWII, although, as noted in Part 2, “food furnished to employees (including military)” is a subcategory of household consumption in U.S. national accounts.
Other Goods & Services
In the U.K. data, five categories of goods are combined into a single annual value during the war years. It includes several types of goods that saw declines in the U.S., such as furniture. The aggregated category saw a 44 percent decline. Vehicle purchases dropped to nothing in 1943 and 1944. Expenditures on vehicle operations declined by 78 percent. A broad category of ‘other services’ fell by nine percent.
In fact, other than housing, which had a minor increase, only two categories of U.K. household consumption expenditure increased between 1939 and 1943: tobacco and public travel & communication.
These two categories correspond to some highlighted in my analysis of shifts in U.S. household consumption during the war. I discussed how increased consumption is not necessarily a sign that people are better off. Tobacco is the perfect example. While smoking can be an enjoyable activity, it is also physically harmful and addictive. Also, tobacco consumption is exacerbated by stress. Measured in actual pounds, tobacco expenditure rose to as much as 8.9 percent of total household expenditure during the war.
However, the increase in public travel & communication was mostly a good thing. People were connecting with friends and family more. It is likely a healthier way to manage stress. This greater connection was part-and-parcel of the social cohesion and solidarity that those who lived through the war described as a highlight among all the stress and terror.
Post-war Household Consumption
By 1946, aggregate U.K. household price-adjusted purchases exceeded the pre-war level. However, several categories of goods remained much lower. For the following, I am comparing 1946 to 1938. That allows me to compare some of the categories that were combined during the war years. It also accounts for the fact that some categories of consumption had already declined by 1939.
First, it is worth noting that a comparison with 1938 shows that the purchase of books, which is not reported for the war years, was 46 percent higher in 1946 (not shown). Perhaps this is due to a dramatic recovery from wartime decline. Paper use was restricted during the war. However, U.K. consumers may have paralleled U.S. consumers, who bought more books during the war, as seen in Part 2.
The categories that were higher in 1946 than 1938, apart from tobacco, public travel & communications, and books, included food and fuel & light. However, clothing remained 21 percent below 1938. All the categories that might be considered ‘durable goods’ remained well below pre-war levels. Household purchases of vehicles would not achieve pre-war levels until the early 1950s. The same was true of furniture and household appliances.
Compared with the increased household consumption in the U.S., the wartime sacrifice of U.K. households is the less surprising outcome. The U.S. experience might hold out a promise that we can recover from the pandemic and confront the climate crisis without sacrificing our well-being. However, we also need to be prepared to give up some things. The mix of things we purchase is going to change.
We need to acknowledge that an aggregate measure of household consumption is an extremely problematic indicator of well-being, for many reasons. First, there is the issue of distribution. All measures examined in this analysis were for consumption by the entire population. If consumption at the top of the income/wealth hierarchy grows by more than it falls at the bottom, then aggregate consumption would increase.
Second, increased consumption of certain goods and services does not necessarily mean we are actually better off. If we are consuming more of certain ‘bads’ in order to manage stress or other disorders, that is hardly evidence we are better off. Additionally, if households are increasing their spending on goods and services that would be more effectively, efficiently, and fairly distributed via public institutions, that is not necessarily evidence of improved well-being.
Third, the concept of ‘consumption’ is itself problematic, as David Graeber, among others, has written. The issue was touched on above, in a different context: government consumption vs. investment. As mentioned, the U.S. and U.K. data seems to differ in terms of how government purchases were classified as either ‘consumption’ or ‘investment’. The dividing line is hardly clear-cut.
We never think of household purchases as an investment unless it is for a business, at which point the buyer ceases to be considered a household, although even this categorical distinction is not as clear-cut at the margins. Household purchases, sometimes labelled “final consumption expenditures” because they are considered the endpoint of our economy, do not become assets that generate income. Yet, as Graeber notes, much of what we buy is not actually ‘consumed’.
When a teenager buys a guitar and begins to learn to play, that is more an investment than consumption. This is obviously true if the teenager goes on to become a paid musician. However, it is also true if it just contributes to building the skills of a lifelong hobby that is never monetized. While the concept of durable goods somewhat compensates, using the term ‘consumption’ to describe the relationship between the household and those goods is misleading.
Finally, a focus on increased ‘consumption’ means that purchasing more ‘goods’ is treated as axiomatically better, even if these increases are due to faster obsolescence or break-down. As our household objects become more complicated, we lose our ability to repair them, either ourselves or through the service of a local repairperson. The sellers of these goods have little interest in making them more durable or more easily repaired. Quite the opposite.
Much of the obsolescence of durable household goods diverts resources from systems of production into waste sinks. Increased ‘consumption’ of durable goods is likely making us worse off in the long run.
Constrained Planning and Plenty
Mason and Konczal’s analogy is important and useful for combating mounting calls for austerity. But there is an extended debate to be had about how we use our resources. The post-pandemic recovery offers an opportunity to begin doing what needs to be done to manage our multiple ecological crises.
Government spent heavily during the pandemic to support economies and households when other sources of income disappeared. Taxes did not rise to match that spending, so the amount of money in the economy increased. Many of our usual spending outlets, such as restaurants and entertainment, were greatly reduced. This led to record household savings rates in many countries.
It also has economists anticipating a major boom in household spending as pandemic restrictions are lifted. While it is a hopeful sign in terms of creating jobs, it is worrisome in terms of the potential ecological impact. What are we going to ‘consume’? How disposable will those things be?
The pandemic is a reminder of the need for government to participate in economic management during times of crisis. It was a lesson learned intensively during WWII. Yet, almost everyone in government, as well as media, is ignoring the lesson as we stumble along, trying to do as little as possible to deal with the climate crisis.
Keynes recognized that spending on WWII would add money to the economy, but that this was a mixed blessing. While it would ensure continued income for workers, increased spending power meant they would compete for resources needed to fight the war. We face a similar problem now, even if our leaders have failed to recognize the necessity to spend huge. Just as Keynes recommended, we will likely need a combination of taxes and forced savings.
We need material budgeting, as was done during WWII. What materials are available? What is needed for a just transition? What is left over? We cannot expect markets and prices to achieve a sustainable outcome.
The qualitative mix of goods and services will change, as it did in the U.S. and the U.K. during WWII. Our absolute consumption of material goods will likely also decline. At the very least, just as fighting the war was prioritized when distributing resources, achieving a just transition must be prioritized. However, this need not mean absolute deprivation. We need to plan our economies within the material constraints of the Earth, but those constraints need not preclude plenty.
At the very least, we can likely have more leisure time. We can also have more public provision. Even in our sacrifices, we can find pleasure, knowing that we are doing so as part of ensuring a just, sustainable future for humanity. Key to appreciation of sacrifice is that it be shared. That is why Keynes thought the economic management needed to fight WWII offered an opportunity to reduce inequality. We should grab the same opportunity that exists now.
After WWII, many economies had significant, sustained booms. Unfortunately, those booms had ecological consequences, many of which we are only acknowledging now. Further, many of those ecological consequences were borne by marginalized communities, especially Indigenous peoples who were displaced by extraction. We cannot have the same sort of boom following a just transition.
We will need continued management of many resources and that management must include the people most affected by extraction and disposal. However, that does not mean we cannot have richer, more satisfying lives, with plenty of consumptive opportunities within those constraints. While our material resources have opportunity costs—using rare earth minerals for an MRI machine means those minerals are not available as inputs to iPhones—they can be combined and recombined with emergent effects that serve ‘consumers’ in vastly different ways. While our resources are constrained, their potential becomings are infinite. We must acknowledge constraints while denying scarcity.
We need to supersede the naive—and destructive—individualism of both mainstream economic theory and American right-wing populist rhetoric. Our lives are lived within many commons. We can acknowledge our intractable interdependence without denying the individual, whose individuality is both output and input of the different publics in which they participate. The Earth is materially fixed. But that materiality can express itself as either a wasteland devoid of life or a thriving cornucopia of ecological and cultural diversity.
Getting from here to there will require much effort, including the sacrifice of things that many of us currently take for granted. Thankfully, we have the resources, expertise, and creative capacity to not only do it, but to do it joyfully.
In Part 1, I explained the motivation for this series.
I want to use the analogy of WWII, as invoked by economists JW Mason and Mike Konczal in an NYT op-ed, to consider how we ought to manage a potential post-pandemic economic boom. In this post, I will look at the qualitative transformations that took place in U.S. household consumption during WWII. These transformations occurred, in part, because of government control of material resources.
While the aggregate level of household consumption only fell for a single year—1942—the mix of goods and services shifted dramatically, and some categories of consumption dropped for several years. The best known example is private automobiles. The productive capacity, and material inputs, of the automaking industry were turned over to wartime production, and the sale of new automobiles dropped to just over one percent of its 1941 high-water mark.
At this point, it is worth noting that when WWII began, U.S. consumption had not yet recovered from the Great Depression. Nominal sales of new automobiles in 1941 were barely half a percent higher than they had been in 1929. In 1940, nominal consumption was still almost 10 percent lower than 1929, while the population was almost nine percent higher.
It is widely recognized that although the New Deal helped to support people suffering through the Great Depression, it was insufficient for kickstarting the U.S. economy. It took WWII to get production back on track. That is an important lesson for those of us that advocate a Green New Deal.
What we actually need is a Green WWII, although perhaps a wartime metaphor is not ideal for the collective, cooperative global effort required. For more on the WWII analogy for achieving a just transition, see Seth Klein’s book A Good War, which describes how Canada’s economy was reshaped for the war effort. Klein argues that a just transition requires a similar scope and scale of transformation.
The chart below shows the change in U.S. household consumption of goods and services from 1941 to 1944.
Goods and services are the two broadest categories of consumption in the U.S. national accounts. While the purchase of services increased by 20 percent, goods purchased fell by 8 percent.
Goods are further disaggregated into 1) durable, and 2) non-durable. Services are disaggregated into 1) household expenditures on services, and 2) expenditures on services of non-profits serving households. Each of the categories, except the last, have at least two more levels of disaggregation.
The fall in goods consumption was entirely a fall in the purchase of durable goods, as seen in the next figure.
In aggregate, the purchase of non-durable goods increased modestly, while certain sub-categories fell. The purchase of durable goods fell every single year from 1941 to 1944. It had fully recovered by 1946.
As already noted, a large portion of the decline in durable goods was automobiles, which were about one-quarter of all durable goods purchased in 1941. Household appliance purchases fell by 89 percent. Two other categories that saw declines of more than 50 percent were audio and photographic equipment, and musical equipment. Notably, the purchase of books increased by almost 60 percent!
The chart above also shows that in 1944 the purchase of ‘Jewelry and Watches’ was 28 percent higher than in 1941. This understates just how much these purchases increased during the war, since they fell slightly in 1944.
I used data on the purchase of ‘Jewelry and Watches’ as part of the research and analysis for my dissertation, which was about the De Beers diamond cartel during WWII. I argued that the increase in purchases was evidence of the company’s success in conjoining diamonds with engagement. However, the war was a vital context.
At a time when Americans were foregoing luxuries, such as automobiles, appliances, cameras, and radios, they dramatically increased their purchase of jewelry, perhaps the most luxurious of luxuries. Why? During WWII the number of U.S. marriages increased dramatically. Men and women were getting engaged and married before the men went overseas. This led to a dramatic increase in the sale of diamond rings. For more on this, you can read the dissertation, or read the presentation I gave at my doctoral defence, or watch this video of a presentation from early in my research and analysis.
The reasons for consumption changes are always many. I suspect that multiple dissertations could be written about every category of household consumption expenditure.
The fall in durable goods was not because the population could not afford them. It was because the materials needed to produce many of them were diverted to the war effort. Obviously the steel needed for private automobiles was required for military transportation and weaponry. The war effort also required large amounts of optical materials, rubber, fine copper wire, and much more. This left much less of these essential inputs available to provide durable goods to households.
For non-durable goods, almost every sub-category saw an increase, except those associated with automobiles and those associated with recreational activities.
However, an examination of the categories raises an important question about the meaning of consumption.
Economists and policy-makers take for granted that greater consumption means greater well-being. Of course, they will acknowledge that the reality is messier than this. But that acknowledgement does little to alter the reflexive use of national accounting measures as measures of well-being. Indeed, many of the goods and services we consume are a response to a harm.
Three of the categories of goods—alcohol, tobacco, pharmaceuticals—are at least partially coping mechanisms. The war was a stressful time. Almost everyone would have known someone serving overseas; most would have known someone who was killed. In the early years, when Germany, Italy, and Japan were swiftly claiming territories, there was incredible uncertainty about the war’s outcome. Understandably, people would have self-medicated that stress.
Examination of the make-up of household consumption in post-war years hammers home that increased expenditure need not mean greater well-being.
In nominal terms, total personal consumption expenditure has grown at an annualized rate of 6.5 percent per year. Certain categories have grown much more and now comprise a much larger share of people’s spending. The relationship between greater spending on these items and well-being is ambiguous, to say the least.
The above shows five categories of spending that have variable relationships with well-being. For example, greater spending on doctors is partially about greater access to more and better kinds of treatment. However, it is also about more maladies. The same is true of ambulances and hospital fees. A greater share being spent on nursing homes is partially about longer life spans. But it is also about less intra-family care for our elders, which has mixed consequences. A greater share spent on financial services is partially about more people having wealth that needs to be managed, but also about the wealthy spending more to manage wealth in ways that avoid taxes.
Beyond the above contradictions, the majority of medical services should not be paid for by individuals. They should be free at the point of service, and funded by the government as a public good. The fact that hospitals now comprise eight percent of American consumer spending is outrageous when combined with the scale of medical debt. Medical care should be a right, not a growing expense. And increased household consumption that includes increased spending on medical care cannot unambiguously be claimed to express greater well-being.
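As an aside on the arithmetic: the 6.5 percent annualized growth figure cited earlier is a compound rate, not a simple average of yearly changes. A minimal sketch of how such a rate is computed, with purely illustrative start and end values (the only number taken from the text is the 6.5 percent rate):

```python
def annualized_rate(start: float, end: float, years: int) -> float:
    """Constant yearly growth rate that compounds `start` into `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

# Illustrative only: a series growing at 6.5% per year for 30 years
# multiplies roughly 6.6-fold, since 1.065 ** 30 is about 6.6.
rate = annualized_rate(100.0, 100.0 * 1.065 ** 30, 30)
```

At 6.5 percent per year, nominal spending roughly doubles every eleven years, which is part of why long-run nominal comparisons say little by themselves.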
As with non-durable goods, almost every sub-category of services saw an increase in consumption, other than those associated with automobiles and recreational activities.
The largest increase was in the category of “food furnished to employees (including military)”. This is the only category where the military build-up obviously bled over into the data for household consumption. In 1939, it was less than one percent of all service spending. By 1945, it was 5.3 percent. However, that misleading boost to personal consumption expenditure is not sufficient to change the overall image that Mason and Konczal deploy.
Because fuel costs had risen, and fewer cars were being purchased, there was much more use of public transit. In fact, this period marks the high point for public ground transportation (as a share of total consumption) for as long as the U.S. has kept national accounts. In 1943, public ground transportation was over 80 percent of all transportation services, which includes automobile service and repair. By 2019, it was barely 12 percent, having been displaced by automobile service and air travel.
When we look at the increases in the purchase of services, we see that people were eating out more and they were communicating more. Both largely express positives for people’s well-being. However, the large increase in gambling is more ambiguous. While gambling is benign entertainment for many, it is also a problematic, even destructive, habit for others.
Perhaps the most interesting increase is in the purchase of social and/or religious services. On the one hand, this could be another expression of the undercurrent of anxiety that comes with living in wartime. On the other hand, much of the spending on these kinds of services is not for oneself; it is charitable spending.
In my research on De Beers, one of the things that jumped out from first person descriptions of life in the United States during WWII was the feeling of unity. The government very deliberately sought to create this sentiment. It wanted the entire population to feel like they were part of the war effort, even if they were not on the frontline.
In A Good War, Seth Klein writes about the Canadian government’s effort at creating public unity. The government reflected on the experience of the First World War and a perceived lack of unity that hampered the war effort. Part of the failure came from a widespread sense that the burden of the war was disproportionately falling on the working class. That was part of the motive for extremely high top marginal income tax rates and excess profits taxes.
We can debate how fairly the costs of the war were actually distributed. We can debate how inclusive U.S. and Canadian societies actually became. Racial and ethnic divisions certainly did not disappear. However, many reported feeling a greater sense of solidarity. In such a setting, people likely felt more giving.
One of the few services that did decline during the war was household maintenance, which fell by 13 percent. Conversely, purchases of tools were among the few durable goods with a stable level of expenditure. This suggests that people substituted their own labour for the purchase of repair services. All other things being equal, this would have reduced GDP, since expenditure of our own labour is not included in national accounts, while spending for third-party labour is.
In Mason and Konczal’s op-ed, they mobilized the fact that price-adjusted personal consumption expenditures in the U.S. continued to rise, except for 1942, even though the government massively intervened in the economy. They used the fact to argue that government can and should spend to support a post-pandemic boom. I concur with this assessment. However, I want to call attention to the qualitative shifts that occurred in consumption patterns. Aggregate consumption—a very problematic measure when adjusted for price-changes—might have continued to increase, but there were substantial differences across the categories of household consumption.
The end of the pandemic is likely to reinvigorate household consumption. Since the climate crisis continues, we must acknowledge both the necessity and the possibility of qualitative transformation. During WWII, public transportation, as a share of personal consumption expenditure, jumped by more than one percentage point. At the same time, purchases of private automobiles collapsed. The purchase of goods to service private automobiles also dropped significantly.
While there were undoubtedly certain hardships associated with this collapse, it was necessary for the war effort. For similar—and additional—reasons we need to confront the likelihood that current private automobile ownership must be drastically reduced. However, this does not have to mean that we are necessarily worse off. This will be discussed more in the conclusion to Part 3.
“On the economic front we lack not material resources but lucidity and courage.”
Those words are found in a 1940 pamphlet titled How to Pay for the War, written by pioneering economist John Maynard Keynes. Sadly, they apply to current discussions about the economics of the climate emergency. In fact, we are further behind in confronting an even greater threat than the U.K. was in 1940.
Keynes took for granted widespread acceptance that government spending would dramatically increase to wage war. Unfortunately, we have not yet accepted that governments will need to spend huge amounts of money to achieve a just transition to a green economy. That makes it much more difficult to answer the question of how to pay for it.
I was made aware that too many have their heads in the sand during a recent, frustrating appearance before the House of Commons Standing Committee on Industry, Science and Technology, where I was invited as a representative of Canadians for Tax Fairness to speak about economic recovery from the COVID-19 pandemic.
Importantly, every witness spoke about recovery in the context of the climate emergency. However, the discussion failed to address the massive elephant in the room: the inadequacy of the federal government’s response given the scale and scope of the crisis.
Consider the scale of the problem. Canada recently announced an updated target of reducing emissions by 40-45 per cent by 2030 relative to 2005 levels. But it still has no plan to get all the way there, and the latest figures show that emissions actually rose between 2017 and 2019. Further, the current target, which we are failing to meet, is considered inadequate by the Intergovernmental Panel on Climate Change, if the globe is to limit warming to 1.5 C.
Consider the scope of the problem. We know we need to retrofit our buildings, transform our transportation infrastructure, and rapidly reduce, eliminate, or innovate high-emission industries. But these sectors span a wide range of companies, and are too slow-moving to meet the urgent need for change. There is ample expert knowledge about how to do what needs to be done; what we lack is coordinated mobilization and support of that expertise.
At the same time, we need to protect people from the myriad, mounting harms of climate change, harms that risk being compounded by the necessary transformations of our energy systems. Jobs will disappear. Some goods and services will become more expensive or unavailable. People should not struggle because decades of overly cheap fossil fuels have malformed our economy.
The problem is massive and complicated.
In response to the pandemic, the federal government spent unprecedented amounts of money. Inexplicably, they are not doing the same to fund a proper response to the climate emergency.
Although the government of Prime Minister Justin Trudeau is the first in two generations to increase federal expenditures as a share of GDP, planned program spending is lower than under former prime minister Brian Mulroney. This is exemplary of what Seth Klein, author of A Good War: Mobilizing for the Climate Emergency, calls “the new climate denialism.”
The long decline in federal expenditures has been justified by belief in the free market. Profit-driven competition was supposed to deliver efficiency and innovation. Government was to be as limited as possible.
But we cannot compete our way out of the climate emergency. First, there is too much uncertainty about potential profits to entice companies to do what is necessary. Second, research and development need to be cooperative, and successful inventions shared widely. Finally, much of the work to be done, such as ecological rehabilitation, has no source of revenue, let alone profit.
The federal government must do as it did during World War II and spend whatever it takes to make the necessary economic change.
Keynes’ plan to pay for the war was designed to ensure the greatest availability of resources for the war effort. However, it was also intended to reduce inequality and to maintain economic stability through low inflation. Similar motives ought to be part of our climate plan.
Money spent by government to transition our economy becomes someone’s income. That may drive demand for certain goods and services beyond supply capacity, leading to inflation. Households may compete with government for goods needed in the climate fight. To lessen those pressures, governments can use progressive taxation.
More importantly, taxes are necessary to reduce the unearned increase in wealth at the top of our economic hierarchy. We have a trickle-up economy. As the money spent by the government circulates, portions are diverted as profit and interest, which accrue with asset owners.
Asset ownership is highly unequal, so the rich get richer, just because they are already rich. That wealth is a source of power, giving the rich excessive influence over government, which is part of the reason the current climate response is so anemic.
Taxes reduce inflationary concerns and inequality, while providing revenue for the ongoing climate crisis mitigation and economic transformation.
The crisis is huge. More spending and more taxes are unavoidable if Canada is to achieve the scale of coordinated effort required for a just transition.
In Janet Yellen’s confirmation speech to United States Senators in January 2021, she said the government must “act big” to deal with the pandemic’s economic fallout.
Three months into her role as the first woman treasury secretary, Yellen is going beyond big spending with an ambitious plan to roll back decades of corporate tax cutting.
Canada’s Liberal government must grab the same opportunity to go as big, if not bigger.
Just like Yellen, Finance Minister Chrystia Freeland is the first woman to hold the federal government’s most important fiscal position.
With her first budget since taking on the job mid-pandemic, Freeland has an opportunity to be remembered as a pathbreaking finance minister. She can take plenty of inspiration from Yellen’s “Made in America Tax Plan.”
Yellen’s call for a 21 percent global minimum corporate tax and an increase of the US federal corporate tax rate back to 28 percent attracted the most attention. Canada should support these measures as part of ending the decades-long race to the bottom on corporate taxes. A recent study showed Canada would gain at least $11 billion by supporting the global minimum.
However, Yellen’s plan calls for much more. Let us consider three components that the Canadian government could emulate.
First, the federal government needs to close the gap between profits reported to shareholders and profits reported to tax authorities. The US government plans to impose a 15 percent tax on the excess of ‘book income’ over ‘taxable income.’
In 2019, at least 25 Canadian corporations with book income greater than $100 million paid no net income tax. With a 15 percent minimum book income tax, those companies would have paid a combined $5.1 billion. Instead, they claimed almost $4 billion in net tax deductions.
A minimum tax on book income reduces the incentive for fancy accounting to reduce taxable income, including through international profit shifting.
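The mechanics can be sketched in a few lines. This follows the 15 percent levy on the book-over-taxable gap described above; the firm’s figures are hypothetical, chosen only for illustration:

```python
def book_income_top_up(book_income: float, taxable_income: float,
                       rate: float = 0.15) -> float:
    """Levy of `rate` on the gap between profits reported to shareholders
    (book income) and profits reported to tax authorities (taxable income).
    No top-up is owed when taxable income meets or exceeds book income."""
    return rate * max(0.0, book_income - taxable_income)

# Hypothetical firm: $500M reported to shareholders, but only $100M
# reported as taxable income after deductions.
top_up = book_income_top_up(500e6, 100e6)  # 15% of the $400M gap
```

The design choice matters: because the levy targets the gap itself, aggressive accounting that shrinks taxable income directly increases the top-up owed.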
Second, Canada should shift tax subsidies from fossil fuels to clean energy production. The Biden administration’s plan includes a range of supports to guide the US transition toward a carbon-neutral economy. At the same time, it will remove existing subsidies for fossil fuel companies, and increase ‘polluter pays’ tax penalties.
Despite decades of federal and provincial support for fossil fuel companies, the industry is waning. Since the industry’s peak in 2014, employment had fallen almost 20 percent before the pandemic. Over that same time frame, fossil fuel companies claimed $12.4 billion in various tax reductions, including more than $800 million worth of investment tax credits.
The pandemic has wreaked havoc on employment. The federal government should use this opportunity to aggressively reshape the country’s energy industry.
Yellen’s Treasury Department recognizes that support for clean energy production will attract investment. More investment can create higher-paying jobs to replace those being lost in oil and gas.
Third, the Trudeau government should invest in more tax enforcement. Since 2015, the federal government has increased funding for the Canada Revenue Agency. Yet, the CRA’s spending on international and large business compliance was actually reduced from 2015/16 to 2018/19. Planned spending for all reporting compliance in 2021 remains more than 10 percent below its 2008 level.
The Parliamentary Budget Office estimated that each additional dollar spent on business tax compliance returns about five dollars in fiscal benefit. More robust enforcement would both discourage law-bending tax schemes and hold corporations accountable when they do underpay.
Decades of corporate tax cutting failed to produce the benefits promised by proponents. Instead, it has contributed to worsening wealth inequality, which is implicated in myriad social harms.
The IMF recently said governments need to use every measure at their disposal to recover from the pandemic in a way that reduces inequality. Combine that with the findings of a PBO report on tax avoidance that concluded “it may be time for a ‘fundamental rethink’ on international corporate taxation.”
The finance minister, as author of a book called Plutocrats, knows that inequality is the product of political choices as much as economic forces.
The Biden administration is offering an unprecedented opportunity for Freeland to join an international effort and make bold political choices. The ball is in Canada’s court.
D.T. Cochrane is an economist and policy researcher with Canadians for Tax Fairness. D.T. has lived in Ontario for more than 20 years, but still considers himself a Saskatchewanian at heart.
The recent RCMP incursion into Wet’suwet’en territory was aimed at enforcing an injunction. Coastal GasLink was awarded the injunction against Wet’suwet’en land defenders who were blocking construction of its pipeline.
Injunctions have long been an important part of “business as usual” for corporations that operate on Indigenous lands. But are they still a useful tool for protecting corporate assets?
The refusal to comply with court injunctions, and the use of legal expertise to fight them, pose major challenges for Canadian businesses.
Several other blockades have not been served with injunctions and the government of Canada has said it will not use force to remove occupiers. This suggests that governments and corporations understand that the landscape of Indigenous resistance has shifted.
The asymmetry of injunctions
The Yellowhead Institute documented the use of injunctions by corporations as a tool to “deny recognition of Indigenous peoples’ inherent rights.” According to Yellowhead’s research, corporations are awarded injunctions against First Nations in 76 per cent of cases. Conversely, First Nations are awarded injunctions against corporations in only 19 per cent of cases.
One of the tests for an injunction is the relative impact on the two parties. In other words, who will suffer the greater harm?
Corporations point to the financial costs associated with blockades. In the original injunction awarded to Coastal GasLink, the judge cited a claim by the company that the blockade could cause hundreds of millions of dollars in losses.
But these financial considerations cannot be applied equally to both parties.
Resource extraction and transportation projects often threaten or harm Indigenous economies. But these communities are not measuring value just in terms of dollars and cents.
But what is the price tag on a people’s sovereign right to continue their way of life?
Holding Canada accountable
The solidarity blockades in Tyendinaga and across Canada are intended to speak the financial language meaningful to corporations and governments. They are succeeding.
Sean Finn, chief legal officer of Canadian National Railway, acknowledged the tactic’s effectiveness when speaking about a solidarity blockade near Smithers, B.C. He said the blockade will have “a major impact on the economy going forward” because consumer goods and commodities are unable to make it to market.
But the cost to corporations in Canada involves more than just lost revenue. It also means greater uncertainty.
Uncertainty in the code of capital
The bedrock of a corporation’s value is not its economic attributes. It is the relevant legal code. Katharina Pistor, a law professor at Columbia University, calls this the “code of capital.” Legal uncertainty reduces the value of a corporation’s assets.
B.C. and federal leaders did not accomplish the desired surrender. However, neither did they recognize the rights and jurisdiction of Indigenous nations.
It is within this context of unresolved uncertainty that injunctions operate.
The value of injunctions
If the Canadian government does not have sovereignty over Indigenous lands, then it does not have the right to grant access. Corporate incursions, even with permits from Canadian governments, become illegitimate and illegal.
The Wet’suwet’en refused to allow Coastal GasLink access to their lands. Therefore, it is the land defenders that are upholding the appropriate law: Wet’suwet’en law.
Rather than deal with the actual titleholders, corporations have sought protection from Canadian courts via injunctions. Injunctions are used to mitigate the uncertainty associated with Indigenous jurisdiction. However, for the Indigenous communities and their supporters, the issues are too important and too urgent for automatic compliance.
The refusal of Indigenous land defenders and solidarity protesters to accede to injunctions sends a message to all of Canada: if Canada will not comply with Indigenous laws, then they will not comply with Canadian laws.
The distribution of income is a function of power. It is also an outcome of power. The powerful use their power to maintain that power. One of the results is worsening income distribution.
Data Source: Statistics Canada Table 11–10–0192–01; calculations by author
The process is fitful. Nonetheless, in Canada, the distribution of income has worsened over the long run.
In 1976, the top 10% were paid 5.0 times the income of the bottom 70%. In 2017, the ratio was 7.2 times.
In fact, in 2017 the price-adjusted income of the bottom 70% of Canadians was lower than in the late-1970s. Meanwhile, the wealthiest 10% of Canadians were receiving over 50% more income.
One of the things powerful people do with their power is ensure the system continues to favour them. Of course, within the most powerful cohort, there is no single, simple, consensus on how best to maintain the status quo and entrench their power. This is one reason federal electoral victory in Canada has oscillated between the Conservatives and the Liberals. Both parties protect the position of the most powerful but do so with a different mix of policies.
The distribution of income has worsened under both parties, although it has taken a different form.
On average, income growth for all income groups is lower during Conservative governments than during Liberal governments. The bottom 70% actually suffered an average income decline of 0.2% per year under Conservative governments. The top 10% averaged income gains of 0.5% per year.
With the Liberals in government, both the bottom 70% and the top 10% generally see income gains. However, the gap between the two groups grows more than under the Conservatives.
Under the Conservatives, the income of the top 10% grows 0.7 percentage points more per year than the bottom 70%. Under the Liberals, the richest decile gains 0.8 percentage points per year more than the bottom 70%.
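These small annual gaps compound. As a rough check, assuming the bottom 70%’s income is roughly flat (as noted above), the ratio grows at about the gap rate, so a steady 0.8-percentage-point gap takes the 1976 ratio of 5.0 into the neighbourhood of the observed 2017 value:

```python
def ratio_after(start_ratio: float, annual_gap_pp: float, years: int) -> float:
    """Compound a top-10%/bottom-70% income ratio when the top group's
    income grows `annual_gap_pp` percentage points faster each year,
    assuming the bottom group's income stays roughly flat."""
    return start_ratio * (1 + annual_gap_pp / 100) ** years

# 1976 to 2017 is 41 years; a steady 0.8 pp gap takes 5.0 to roughly 6.9,
# in the neighbourhood of the observed 7.2.
projected = ratio_after(5.0, 0.8, 41)
```

The simplification is deliberate: the record mixes 0.7 and 0.8 point gaps across governments, but even the smaller gap, sustained for four decades, produces a dramatic widening.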
Indeed, in 2017, under the Liberal government, the ratio of the income of top 10% to the bottom 70% hit its highest level ever as the top 10% received its largest one-year increase—$9,200—in almost 20 years.
Our tax system somewhat reduces inequality. The after-tax income of the bottom 70% has increased 15% since the late-1970s, while the top 10% has gained 45%. Through the 1980s and early 1990s, after-tax inequality slowly increased. Then, under the Chrétien/Martin Liberals, after-tax inequality jumped substantially. With both the Liberals and Conservatives prioritizing tax cuts that favour the top-end of the income hierarchy, this trend of worsening after-tax inequality will continue if either party wins the election.
Income inequality is a problem for many reasons. First, and most obviously, it is unfair. The highest paid do not work harder or contribute more to society. This is not to say they do not work hard and do not contribute. Some do and some do not. Where any income-earner lands on the income ladder is largely a product of luck.
Both luck of birth and luck of circumstance are incredibly consequential in how much we will be paid and how wealthy we will become. Warren Buffett, one of the three richest men in the world, has acknowledged that with his set of skills, if he had been born almost anywhere other than the United States in the first half of the 20th century, he would not be so obscenely wealthy.
More worrisome than the unfairness of inequality is the distribution and use of power. The rich wield excess influence within our political institutions. Our voting system may be one-person, one-vote, but our economy is one-dollar, one-vote. One thing that the rich buy with their riches is influence. And, returning to the point made at the top, one of the most important goals of that influence is to maintain the status quo and their relative power.
Improving both pre- and after-tax income distribution is not just about achieving fairer outcomes; it is also about minimizing the excessive power and influence of the richest members of our society.