The nature and causes of climate change are a worthy challenge for the best scientists using the most sophisticated tools available. Unfortunately, the study of climate change has been co-opted by pseudoscientists using flawed models, rigged data, and hyperbolic claims echoed by ill-informed media and politicians with hidden agendas.
Among the best-known boosters of climate alarm are Gillian Tett at the Financial Times and BlackRock’s Laurence ‘Larry’ D Fink.
Fortunately, there are rigorous scientists using hard data and robust models to address the phenomenon. This more scientific group includes Michael Shellenberger, Steven E Koonin, Bjørn Lomborg, Bruce C Bunker, MJ Sangster, and many more.
These sober voices mostly agree that slight global warming is detectable, but it’s not a crisis and will not become a crisis in the foreseeable future.
They concur that it’s unclear whether CO2 emissions are the main cause of warming, even if they are a contributing cause. They point to other causes including solar cycles, ocean salinity, ocean circulation patterns such as El Niño and La Niña, cloud cover, aerosols, volcanoes, agricultural practices, and natural methane release.
There are also numerous official reports that reach the same conclusion, although you may have to scan the footnotes to discover it: official reports tend to produce scary headlines that are heavily diluted by the detailed content beneath them.
The single most important contribution of real scientists is to demonstrate how badly flawed the models used by the climate alarmists are.
A climate model divides the surface of the planet into a grid with squares of about 360 square miles (932 square kilometres) each over land surfaces, and 36 square miles (93 square kilometres) each over the oceans.
That’s about 101 million squares. Each square is extrapolated into a stack about 30 miles (roughly 50 kilometres) high, to the outer edge of the stratosphere. All weather occurs in this zone, with most weather occurring within 10 miles (16 kilometres) of the earth’s surface, in the troposphere.
The vertical stacks are sliced horizontally into thin layers like pancakes, and each layer is analysed separately for the climate conditions in that slice, the impact of those conditions on adjacent pancakes and adjacent stacks, and so on. All of this must be modelled to a first approximation before one even gets to the recursive calculations that step the model forward.
If each pancake is one mile thick, that comes to 3.03 billion pancakes. Analysing one pancake is tricky. Analysing 3.03 billion pancakes is mind-boggling. Analysing the interaction of each of the 3.03 billion pancakes with each of the other 3.03 billion pancakes, even allowing for attenuated interaction at a distance, scales roughly with the square of the pancake count (on the order of nine billion billion interactions) and borders on the impossible in terms of computational complexity!
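To make the scale concrete, here is a minimal back-of-envelope sketch in Python. Every input comes from the figures quoted above; the only additions are the multiplication and the squaring, so this is illustrative arithmetic, not a real model configuration.

```python
# Back-of-envelope tally of the grid described above.
# All inputs are the figures quoted in the text; this is not a real
# climate-model configuration, just the arithmetic behind the claim.

surface_cells = 101_000_000        # ~101 million grid squares
stack_height_miles = 30            # surface to the outer edge of the stratosphere
layer_thickness_miles = 1          # one-mile-thick 'pancakes'

layers_per_cell = stack_height_miles // layer_thickness_miles
total_pancakes = surface_cells * layers_per_cell
print(f"Total pancakes: {total_pancakes:,}")            # 3,030,000,000

# Each pancake interacting with every other pancake scales with the
# square of the count: roughly 9.2 x 10^18 interactions.
interactions = total_pancakes * (total_pancakes - 1)
print(f"Pancake-to-pancake interactions: {interactions:.2e}")
```

Even shaving that count down by ignoring distant interactions leaves a problem far beyond routine computation, which is the point of the run-time estimate below.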
One scientist estimates that even with supercomputers 1,000 times faster than today’s machines, the run time for the problem described above would be several months.
Climatology is complexity theory par excellence.
So how do scientists work with models that cannot be run with today’s computers? They make assumptions. Lots of assumptions.
This process begins with a recognition that there are no direct observations of most of the atmospheric slices.
We have satellites and weather stations recording temperature and precipitation, but those inputs cover only a small fraction of the surface areas and altitudes described.
The point is that climate models are so complex and sensitive to assumptions that scientists can get almost any result they want by tweaking inputs and running multiple scenarios.
It also means the outputs are mostly worthless because of unfounded assumptions, computational complexity, and flawed model design.
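As an illustration of how much leverage a single assumed input has, consider a deliberately oversimplified zero-dimensional energy-balance sketch. It is not any actual climate model: the logarithmic CO2-forcing formula is a standard textbook approximation, and the three feedback-parameter values are assumptions chosen here purely to show how widely the answer swings when one knob is tweaked.

```python
import math

# Toy zero-dimensional energy balance: equilibrium warming equals an assumed
# climate feedback parameter (lambda) times the radiative forcing.
# This illustrates parameter sensitivity; it is not a real climate model.

def forcing_from_co2(c_final_ppm: float, c_initial_ppm: float = 280.0) -> float:
    """Approximate radiative forcing (W/m^2) from a change in CO2 concentration."""
    return 5.35 * math.log(c_final_ppm / c_initial_ppm)

delta_f = forcing_from_co2(560.0)   # a doubling of pre-industrial CO2, ~3.7 W/m^2

# Three plausible-looking choices for lambda, in degrees C per (W/m^2)
for label, lam in [("low feedback", 0.4), ("mid feedback", 0.8), ("high feedback", 1.2)]:
    print(f"{label}: lambda = {lam} -> equilibrium warming ~ {lam * delta_f:.1f} C")

# The same CO2 scenario yields roughly 1.5 C to 4.5 C of warming, depending
# solely on the assumed feedback parameter.
```

A full model has far more such adjustable inputs, which is why the same emissions scenario can be made to look benign or catastrophic.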
Most climate models are so deficient they can’t even simulate the past based on known data, let alone forecast the future. If a model of your own design can’t back-test correctly, why should it be relied on to forecast?
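One simple way to state that back-test (or hindcast) criterion: run the model over a period for which observations already exist and score the miss. The sketch below uses root-mean-square error against a hypothetical observed record; the numbers and the tolerance are placeholders for illustration, not real data or real model output.

```python
import math

def rmse(predicted: list[float], observed: list[float]) -> float:
    """Root-mean-square error between a hindcast and the observed record."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed))

# Hypothetical decadal temperature anomalies in degrees C: placeholder
# numbers only, not real observations or real model output.
observed_anomaly = [0.10, 0.15, 0.22, 0.30, 0.45, 0.60]
model_hindcast   = [0.05, 0.25, 0.40, 0.65, 0.90, 1.20]

error = rmse(model_hindcast, observed_anomaly)
tolerance = 0.15   # an arbitrary skill threshold for this illustration

print(f"Hindcast RMSE: {error:.2f} C")
if error > tolerance:
    print("Fails the back-test: if the model misses the known record, "
          "its forecasts deserve little weight.")
```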
Yet these models are routinely touted as showing an ‘existential threat to mankind’.
Regards,
Jim Rickards,
Strategist, The Daily Reckoning Australia