Dear Reader,
Today’s topic is a book written by an economist named Russ Roberts. It doesn’t exist yet and he doesn’t even know he’ll write it, but he should.
The book will be about the most important lesson he has taught me. And, after 14 years of listening to him for an hour each week, I can tell you he’s taught me a lot.
Russ’ podcast EconTalk has probably had a bigger intellectual influence on me than anything else in my life. It’s mostly about asking insightful questions in the right way, to reveal that what we assume is often wrong. In other words, it’s about surprising truths.
For example, did derivatives cause the financial crisis? Or perhaps you blame the repackaging of sub-prime loans into SPVs and CDOs?
Well, consider an alternative explanation. What if the numbers and assumptions behind those derivatives were based on false information — fraudulent information? What if the CDOs really would’ve been AAA safe if the information that rating was based on had been correct? What if the meltdown would never have happened if it weren’t for the lies which the initial borrowers and lenders promulgated?
Then the blame would lie with the fraudsters, not the mathematical whizz kids who supposedly blew up the financial system. And, sure enough, if you compare what sub-prime borrowers claimed on their loan applications to their tax filings, you get quite a large gap. In other words, it was mortgage fraud in the origination process which misled everything built on those loans thereafter.
This fundamentally alters the narrative of the 2008 financial crisis for most people.
I bring up that example because the same thing took place here in Australia. Mortgage fraud was rife. But that’s another story.
There’s a particular type of misunderstanding which Russ’ podcasts often uncover. Today, what I call the tyranny of the denominator is on display more than ever before.
The basic claim is that our assumptions, beliefs, and opinions often rest on a misunderstanding of the underlying data. And, often, that misunderstanding lies in what I call the denominator. I’m referring to the bottom number in a fraction, but in a loose sense.
Here’s a simple example. According to one measure, air traffic hasn’t crashed that much. Wolf Street explains the numbers:
‘The industrywide passenger load factor — the percentage of available seats filled — remained at an all-time low in June, at 57.6%, down by 26.8 percentage points from June last year.’
A 27-percentage-point drop in how full planes are, to 58%? That doesn’t sound so bad. United Airlines reports ‘70 percent of seats filled’, for example. That’s not exactly a disastrous collapse in travel, is it?
But the denominator is misleading.
How full each plane is doesn’t take into account reductions in the number of planes flying. If air traffic halves, but half of all flights are also cancelled, the passenger load factor remains stable. Wolf Street has the relevant numbers:
‘June, industry-wide capacity, as measured in available seat-kilometers (ASKs) was still down -80.1% from June last year, but a slight improvement from May (-86%).’
Now that’s disastrous. But do you see how misunderstanding or misusing the denominator in the initial data gave the wrong impression?
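To make the arithmetic concrete, here’s a minimal sketch in Python, using invented numbers rather than IATA’s actual figures, of how the load factor can hold perfectly steady while capacity collapses:

```python
# Toy numbers, purely illustrative: the load factor can stay flat
# while both passengers and capacity collapse together.

def load_factor(passengers, seats):
    """Percentage of available seats filled."""
    return 100 * passengers / seats

# Last year: 1,000,000 seats flown, 800,000 passengers.
print(load_factor(800_000, 1_000_000))  # 80.0

# This year: passengers down 80%, but airlines also cut 80% of seats.
print(load_factor(160_000, 200_000))    # 80.0 -- identical load factor

# The ratio looks unchanged because the denominator (available seats)
# shrank along with the numerator (passengers).
```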
The best example of the tyranny of the denominator is the one I learned from Russ Roberts. It’s all about the inequality statistics.
The popular narrative goes that inequality is soaring out of control. And economists like Thomas Piketty get famous for explaining why.
The denominator has changed
But what if most or even all the rise in measured inequality has little to do with inequality itself? It’s just changes in denominators.
For example, most inequality measures use household data. But what makes up a typical household has changed dramatically over the long time periods across which inequality is measured. The denominator has changed.
The proportion of nuclear families with a single earner has fallen dramatically. On either side of it, household types that register as low or high income have become common: single-parent households, dual-income households with no children, retirees, and many other combinations have soared in number. This is one of the changes which the rising inequality statistics reflect, instead of inequality itself.
Another example is the marriage habits of people today. They tend to marry people with the same income level as their own. This shows up as inequality in the data because it doubles up similar incomes within a household.
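A minimal sketch, with invented salaries, of how like-marrying-like raises measured household inequality even though no individual’s income changes:

```python
# Invented salaries for four people: two high earners, two low earners.
import statistics

high, low = 150_000, 30_000

# Mixed pairing: each household combines one high and one low earner.
mixed_households = [high + low, high + low]          # [180000, 180000]

# Assortative pairing: like marries like.
assortative_households = [high + high, low + low]    # [300000, 60000]

# Spread of household incomes (population standard deviation).
print(statistics.pstdev(mixed_households))        # 0.0 -- no inequality
print(statistics.pstdev(assortative_households))  # 120000.0 -- inequality appears

# The individuals' incomes are identical in both scenarios; only the
# household groupings changed, yet measured inequality jumps.
```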
Then there’s income mobility. Income inequality is often measured in quintiles — 20% groupings of households. The bottom quintile’s income is compared to the top quintile’s over time, for example. The bottom quintile’s earnings haven’t increased much, while the top’s have surged.
Can you spot the error? It’s in the denominator — the presumption that the same households are inside the same quintiles over time. Which isn’t true.
Over the course of a lifetime, people move between quintiles. Especially in certain countries like the US, where income mobility is high.
Med and law students spend time in the bottom quintile at first because of their lengthy studies, before rising to the top quintile as they enter the job market. And then they’re likely to drop back down the quintiles in retirement. This impressive contribution to measured inequality does not, however, reflect them being poor or disadvantaged.
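Here’s a toy illustration, using hypothetical quintile cut-offs and an invented lifecycle of incomes, of how the same person drifts through the quintiles over a lifetime:

```python
# Invented lifecycle incomes for one hypothetical doctor, not real data.
incomes_by_age = {
    "student (25)":    15_000,   # bottom quintile while studying
    "mid-career (45)": 250_000,  # top quintile once established
    "retired (75)":    30_000,   # back down the quintiles in retirement
}

# Hypothetical quintile boundaries for the whole population.
quintile_cutoffs = [25_000, 50_000, 80_000, 130_000]

def quintile(income):
    """Which income quintile (1 = bottom, 5 = top) an income falls into."""
    return 1 + sum(income > cutoff for cutoff in quintile_cutoffs)

for stage, income in incomes_by_age.items():
    print(f"{stage}: quintile {quintile(income)}")
# student (25): quintile 1
# mid-career (45): quintile 5
# retired (75): quintile 2
```

Comparing ‘the bottom quintile’ in year one to ‘the bottom quintile’ decades later therefore compares different people, not a fixed group.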
One way to adjust for this is to consider wealth inequality too, not just income inequality. But that reveals a lot of things which equality campaigners don’t like to acknowledge. Like Scandinavian nations being top of the charts on wealth inequality. There is less income mobility in these nations, so the rich stay rich and the poor stay poor, but income is more equally distributed.
The point is not a debate about equality. It’s about how quirks in the nature of the data have more impact on the result than the thing we are trying to measure. Changes in households, changes in income mobility and plenty more are behind what’s being called ‘rising inequality’.
This sort of problem is true of all long-term data series. For example, climate change modelling is heavily dependent on population projections, not just emissions per person.
What if the number of people in 2100 will be a third less than the UN’s climate projections assume, as a new Lancet study claims? What if that number of people, not our emission intensity, is the real key to the amount of emissions which climate scientists worry about? Where would that leave climate change campaigners? Subsidising condoms?
Over in Monaco we have an amusing example of the denominator’s tyranny. According to World Prison Planet, 100% of Monaco’s prison population is foreign criminals. Of course, the reason is obvious. Only a fifth of the population in Monaco is from Monaco…
The denominator explains the statistic, not some sort of bizarre crime wave or xenophobic discrimination by Monaco’s police force. Although campaigners would no doubt be protesting if they were aware of the statistic and could afford to be in Monaco.
As I mentioned above, COVID-19 has brought all this to the fore more than ever. Models, projections, and assumptions about the virus have all been laughably wrong. And people are getting an intuitive understanding of how and why. They are waking up to the tyranny of the denominator.
For example, the second wave we’re seeing in the media is more a consequence of higher testing than higher numbers of infections. If you have a population with a given number of infected people and you double testing, you will find twice as many people with COVID-19. But this reflects the denominator — the number of tests, not the number of people with COVID-19.
The giveaway in the calculation is the positive response rate — the share of people tested who test positive. That percentage has fallen in many of the places where the media is whipping up second-wave fears.
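A quick sketch of that giveaway, with invented numbers: hold the share of infected people constant, double the testing, and the case count doubles while positivity stays flat.

```python
# Invented figure: a fixed 5% of the tested population is infected.
prevalence = 0.05

for tests in (10_000, 20_000):           # testing doubles
    cases = int(tests * prevalence)      # detected cases double too
    positivity = 100 * cases / tests     # but positivity is unchanged
    print(f"tests={tests:>6}  cases={cases:>5}  positivity={positivity:.1f}%")
# tests= 10000  cases=  500  positivity=5.0%
# tests= 20000  cases= 1000  positivity=5.0%
```

If cases double but positivity falls, the extra cases are coming from the extra tests, not from a genuinely spreading virus.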
Estimates of the initial dangers of COVID-19 made the same mistake. They took the number of fatalities in China and divided it by the number of reported cases to come up with a fatality rate. But the denominator is misleading because it reflects the number of tests, not the number of people who had COVID-19. Far more had COVID-19 than tested positive, so the fatality rate was actually much lower.
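The same denominator arithmetic in sketch form, with made-up figures: the deaths are fixed, but the fatality rate collapses once the denominator grows from confirmed cases to estimated true infections.

```python
# Made-up illustrative figures, not real epidemiological data.
deaths = 1_000
confirmed_cases = 20_000     # only the people who got tested
true_infections = 200_000    # suppose 10x more were actually infected

cfr = 100 * deaths / confirmed_cases   # case fatality rate
ifr = 100 * deaths / true_infections   # infection fatality rate

print(f"CFR: {cfr:.1f}%")  # 5.0% -- looks terrifying
print(f"IFR: {ifr:.1f}%")  # 0.5% -- same deaths, bigger denominator
```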
The cruise ship experiments were ignored, despite being a perfect petri dish. All the variables were known — the tyranny of the denominator wasn’t operating.
Barry Norris explained in his Argonaut Capital newsletter:
‘Alternative evidence-based (i.e. theories based on facts) population samples already existed: the most prominent being the Diamond Princess Cruise Ship; which at the end of February accounted for over half of all confirmed infections outside of China. “Cruise ships are like an ideal experiment of a closed population”, according to Stanford Professor of Medicine John Ioannidis. “You know exactly who is there and at risk and you can measure everyone”.
‘Quarantined for over a month after a virus outbreak, the entire cruise ship “closed population” of 3,711 passengers and crew, with an average age of 58, were repeatedly tested. There were 705 cases (19% total infection rate) and six deaths (a Case Fatality Rate of just 1%) by the end of March (eventually 14 in total). This compared to 116 deaths that would have been predicted by the Imperial model.
‘Over half of the cruise ship cases were asymptomatic, at a time when the official “science” behind the lockdown, Prof. Neil Ferguson (UK), dismissed the lack of any evidence for a high proportion of cases so mild that they had no symptoms and Dr Anthony Fauci (US) had written in the New England Journal of Medicine that in the event of a high proportion of asymptomatic cases, the COVID mortality rate would ultimately be “akin to a severe seasonal influenza” (a statement which he now at least seems to have clearly forgotten in his enthusiasm for a vaccine solution).
‘The cruise ship deaths were exclusively amongst an over 70’s age cohort. Although the Diamond Princess sample size was small it remains the earliest and most accurate predictor of mortality, infection and asymptomatic cases. Extrapolating this data to the wider, younger population would logically lead to downward revision on the mortality risk and upwards revisions to the level of asymptomatic cases. COVID outbreaks aboard naval ships with younger populations confirmed this: only 1 death and 3 hospitalised cases out of 1,156 infections on the USS Theodore Roosevelt; zero deaths out of 1,046 confirmed cases on the Charles de Gaulle. Even in ships which could not carry out effective social distancing the virus mortality rate, whilst a serious public health risk, was certainly not the “Spanish flu”.’
The many comparisons between nations’ COVID-19 data are perhaps the best example of the tyranny of the denominator going public.
First, the number of infected people was compared between nations, making Italy look dreadful.
But what about adjusting for the size of the population, the age, and plenty of other things? When you do that, the league tables of COVID-19 disaster nations change dramatically. Belgium looks bad.
On actual deaths, the UK took pole position. Until people realised that anyone who had ever had COVID-19 and died of any subsequent cause was listed as having died from COVID-19. The Guardian has the quote:
‘A Department of Health and Social Care source summed this up as: “You could have been tested positive in February, have no symptoms, then be hit by a bus in July and you’d be recorded as a Covid death.”’
Of course, each analyst has a different agenda and adjusts the denominator slightly differently to come up with the result they want. That’s the tyranny side of things. The complex nature of data makes twisting it easy.
Proponents of lockdowns compare Sweden to Norway and Denmark. Anti-lockdown campaigners compare Sweden to New York. Each time, the denominator is dominating the analysis.
When the denominator is money
Financial markets are another great example of the tyranny of the denominator in action. The way returns are measured can be deeply misleading. Here, the denominator exerting its tyranny is money itself.
Stock markets go up in the long run, but the headline figures aren’t adjusted for inflation and taxes. The value of money is falling, and capital gains don’t take inflation into account, making long-term comparisons misleading. Not to mention the changing make-up of what’s in the stock market index. Another misleading denominator.
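As a rough sketch with assumed figures (a 7% nominal return, 2.5% inflation, and a 25% tax on the nominal gain), here’s how much of the headline return survives the adjustment:

```python
# Assumed illustrative figures, not a market forecast.
nominal_return = 0.07   # headline stock market return
inflation = 0.025       # annual fall in the value of money
tax_rate = 0.25         # tax levied on the nominal gain

after_tax = nominal_return * (1 - tax_rate)
# Deflate by inflation to get the real after-tax return.
real_after_tax = (1 + after_tax) / (1 + inflation) - 1

print(f"Headline return:       {nominal_return:.1%}")    # 7.0%
print(f"Real after-tax return: {real_after_tax:.2%}")    # 2.68%
```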
The upcoming drama over government debt levels will be another example of the denominator tyrannising rational debate.
The government doesn’t have a claim on GDP. So, the commonly quoted debt-to-GDP ratio is very misleading. The correct denominator to use for the ratio is not GDP but tax revenue — the government’s actual income.
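A sketch with invented round numbers shows how much the choice of denominator changes the headline ratio:

```python
# Invented round numbers for illustration, not any country's actual accounts.
debt = 2_000         # government debt, $bn
gdp = 2_000          # national output, $bn (not the government's income)
tax_revenue = 500    # what the government actually collects, $bn

print(f"Debt-to-GDP:     {100 * debt / gdp:.0f}%")          # 100% -- sounds manageable
print(f"Debt-to-revenue: {100 * debt / tax_revenue:.0f}%")  # 400% -- four years of income
```

Same debt, same government; only the denominator changed.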
I hope I’ve given you a long enough list of examples to help you tease out the meaning for yourself. Russ Roberts would’ve explained the phenomenon better himself.
Whenever you see an analysis of any debated topic, ask yourself whether the change being measured is actually happening in the denominator, not in the numerator being held up as important.
Until next time,
Nickolai Hubble,
For The Daily Reckoning Australia