If you have paid attention to the recent congressional hearing on Climate-Gate, you will no doubt have heard that forecasting guru Dr. J. Scott Armstrong has proven the IPCC models are outperformed by a simple model. Armstrong argues:
Those involved in the global warming alarm have violated the “simple methods” principle.
He recommends that:
"To help ensure objectivity, government funding should not be provided for climate-change forecasting. As we have noted, simple methods are appropriate for forecasting for climate change. Large budgets are therefore not necessary."
If you doubt Dr Armstrong is a forecasting guru, check the testimony:
Dr Armstrong ... is the author of Long-range Forecasting, the creator of forecastingprinciples.com, and editor of Principles of Forecasting (Kluwer 2001), an evidence-based summary of knowledge on forecasting methods. He is a founder of the Journal of Forecasting, the International Journal of Forecasting, and the International Symposium on Forecasting. He has spent 50 years doing research and consulting on forecasting
So yes, he's very much involved in forecasting.
We conducted a validation test of the IPCC forecasts based on the assumption that there would be no interventions. This test found that the errors for IPCC model long-term forecasts (91 to 100 years in the future) were 12.6 times larger than those from an evidence-based “no change” model. Based on our analyses, we concluded that the global warming alarm is an anti-scientific political movement.
This is music to my ears, and to the ears of other deniers Internet-wide. At last we have a scientific-sounding justification for our claims that the experts know less than simple folk. We can figure it out ourselves. Oh, they might have fancy equations and computers, but what really counts is wild-ass guesses from those willing to think out of their armchairs.
The conclusion I like to draw is that simple models always work better than more complex models. Sounds right to me. And of course Armstrong is right; he was, after all, the first man on the moon.
Glowing recommendations abound. No one quite understands what Armstrong did, but we share an absolute conviction that he has justified our basic dogma:
"I have not heard any testimony but am under the impression Scott Armstrong knows a great deal about complex modeling and has rejected it as failed (at least long term modeling)" - blog comment
An Analysis of Armstrong's validation test of the IPCC forecasting model

But unlike other denier blogs, let's go a bit further and actually try to understand what Armstrong did to demonstrate that a simple model beats the IPCC models at making long-term forecasts. This is a technical blog, after all.
The validation test Armstrong performed is detailed in his 2009 paper, Validity of climate change forecasting for public policy decision making, co-authored by Willie Soon and published in the International Journal of Forecasting (wait, where have I heard of that before?).
What Armstrong did was to use discredited global temperature data published by the university at the center of Climate-Gate. But in this case we can trust the data, because it leads to a conclusion we want to believe.
HadCRUT3, the temperature data used to test the IPCC model and simple benchmark model forecasts.

Armstrong made a simple benchmark model that forecasts temperature. It is very simple: it just predicts that future temperature will be identical to today's. So his simple benchmark model's 100-year forecast starting from 1851 predicts that the 1951 temperature anomaly will be the same as the 1851 temperature anomaly.
Because forecasting single annual anomalies is exactly the kind of thing the IPCC does.
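Just so there's no mystery about what the benchmark involves, here is a minimal sketch of the "no change" rule in Python. This is my own illustration, not Armstrong's code, and the anomaly values are invented placeholders rather than the real HadCRUT3 numbers:

```python
# Armstrong-style "no change" benchmark: the forecast for any future year
# is simply the anomaly observed in the year the forecast is issued.
# The values below are placeholders for illustration, not real HadCRUT3 data.

def no_change_forecast(anomalies, origin_year, horizon_years):
    """Return the forecast anomaly `horizon_years` after `origin_year`."""
    return anomalies[origin_year]  # that's the entire model

# Invented example anomalies in degrees C, keyed by year.
anomalies = {1851: -0.3, 1951: -0.17}

# The benchmark's 100-year forecast issued in 1851 for 1951:
print(no_change_forecast(anomalies, 1851, 100))  # -0.3, identical to 1851
```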
Armstrong first tested his benchmark model against IPCC forecasts made in 1992. Unfortunately, this way he could only test a 17-year forecast made by the IPCC, and he noted that policymakers were more interested in long-term forecasts (e.g. more like 100 years ahead, not 17):
"Policymakers are concerned with long-term climate forecasting, and the ex ante analysis we have described was limited to a small sample of short-horizon projections. To address this limitation, we calculated rolling projections from 1851 to illustrate a proper validation procedure."
What he really wanted to be able to do was to test something like a 100-year IPCC forecast made in 1851 against the forecast made by his benchmark model. But just how could he obtain 100-year IPCC forecasts made in 1851 when the IPCC didn't even exist in 1851? Armstrong found a simple solution:
Dangerous manmade global warming became an issue of public concern after NASA scientist James Hansen testified on the subject to the US Congress on June 23, 1988 (McKibben, 2007), after a 13-year period from 1975 over which global temperature estimates were up more than they were down. The IPCC (2007) authors explained, however, that “Global atmospheric concentrations of carbon dioxide, methane and nitrous oxide have increased markedly as a result of human activities since 1750” (p. 2). There have even been claims that human activity has been causing global warming for at least 5000 years (Bergquist, 2008).
It is not unreasonable, then, to suppose, for the purposes of our validation illustration, that scientists in 1850 had noticed that the increasing industrialization of the world was resulting in an exponential growth in “greenhouse gases”, and projected that this would lead to global warming of 0.03 C per year.
Yes, that's right: the IPCC didn't exist in 1851, but we can always imagine what they would have said if they had existed in 1851. After all, it isn't like the 0.03C-per-year warming rate is based on a complicated model. The IPCC models are simple, right? 0.03C/year, wherever that comes from, is clearly based on nothing more than the notion that temperature will go up. 0.3C per decade is just a kind of universal warming rate that any IPCC scientist will eventually fixate on, even if that IPCC scientist exists in 1851.
The alternative to making it up would have been to take GCM hindcasts and compare them to HadCRUT. But that's quite involved. The idea here is to take the simpler route. It's simpler just to make shit up. That's one of the principles of forecasting, in fact: make shit up.
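For what it's worth, here is the entirety of the imagined "1851 IPCC" model, sketched in Python. Again, this is my own illustration; the -0.3C starting anomaly is the rough 1851 HadCRUT3 value quoted just below:

```python
# The hypothetical "1851 IPCC" projection: the origin-year anomaly plus a
# fixed 0.03 C of warming for every year of the forecast horizon.

def ipcc_1851_forecast(origin_anomaly, horizon_years, rate_per_year=0.03):
    return origin_anomaly + rate_per_year * horizon_years

# 100-year projection issued in 1851, starting from roughly -0.3 C:
print(round(ipcc_1851_forecast(-0.3, 100), 2))  # 2.7
```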
So now let's compare the simple benchmark forecast with the IPCC forecast. At 0.03C warming per year, the 1851 IPCC would have predicted the HadCRUT3 1951 temperature anomaly to be +2.7C, compared to the actual anomaly of -0.17C. Armstrong's simple benchmark model performs much better, predicting a 1951 temperature anomaly of -0.3C (the 1851 value carried forward).
The absolute error in this case for Armstrong's model is 0.13C. For the imagined 1851 IPCC model it's a massive 2.87C. The 1851 IPCC loses.
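Here is that comparison worked through, using the HadCRUT3 anomalies quoted above (roughly -0.3C in 1851 and -0.17C in 1951):

```python
# One-shot error comparison for the 1851 -> 1951 "forecast".
anomaly_1851 = -0.30
anomaly_1951 = -0.17                              # the actual outcome

no_change_prediction = anomaly_1851               # benchmark: -0.3 C
ipcc_1851_prediction = anomaly_1851 + 0.03 * 100  # linear rule: +2.7 C

print(abs(no_change_prediction - anomaly_1951))   # ~0.13 C error
print(abs(ipcc_1851_prediction - anomaly_1951))   # ~2.87 C error
```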
So when you next hear that simple models perform better at forecasting than complex IPCC climate models, now you know the technical details behind that fact. Thank god someone with the competence of Armstrong was brought in to testify before Congress on such an important issue.