CBS: East Anglia CRU covered up bad data, computer modeling

Declan McCullagh dives into the East Anglia CRU exposure and delivers a well-researched and fair look at the controversy for CBS News. McCullagh looks at the various e-mails, including portions that have not yet gotten much attention from the media, and concludes that the CRU has acted without transparency. He also shows why the data itself has become suspect, as well as the modeling on which anthropogenic global-warming activists rely for stoking public demand for action:

As the leaked messages, and especially the HARRY_READ_ME.txt file, found their way around technical circles, two things happened: first, programmers unaffiliated with East Anglia started taking a close look at the quality of the CRU’s code, and second, they began to feel sympathetic for anyone who had to spend three years (including working weekends) trying to make sense of code that appeared to be undocumented and buggy, while representing the core of CRU’s climate model.

One programmer highlighted the error of relying on computer code that, if it generates an error message, continues as if nothing untoward ever occurred. Another debugged the code by pointing out why the output of a calculation that should always generate a positive number was incorrectly generating a negative one. A third concluded: “I feel for this guy. He’s obviously spent years trying to get data from undocumented and completely messy sources.”

Programmer-written comments inserted into CRU’s Fortran code have drawn fire as well. The file briffa_sep98_d.pro says: “Apply a VERY ARTIFICAL correction for decline!!” and “APPLY ARTIFICIAL CORRECTION.” Another, quantify_tsdcal.pro, says: “Low pass filtering at century and longer time scales never gets rid of the trend – so eventually I start to scale down the 120-yr low pass time series to mimic the effect of removing/adding longer time scales!”
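For non-programmers, the quantify_tsdcal.pro comment is describing a hand-tuned adjustment: a low-pass filter keeps only the slow, century-scale part of a temperature series, and "scaling down the 120-yr low pass time series" means shrinking exactly that long-term component before adding the faster wiggles back in. A minimal sketch of that kind of operation, written in illustrative Python rather than the CRU's actual IDL, and with made-up function names and scale factor, might look like this:

```python
import numpy as np

def lowpass(series, window=120):
    # Crude low-pass filter: a 120-point (e.g., 120-year) moving average
    # that keeps only the slow, long-timescale variation.
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="same")

def scale_long_timescales(series, window=120, factor=0.5):
    # Split the series into a slow (low-pass) component and a fast residual,
    # shrink only the slow component, then recombine: the sort of manual
    # "scale down" step the quoted comment describes. The 0.5 factor is
    # invented here purely for illustration.
    slow = lowpass(series, window)
    fast = series - slow
    return factor * slow + fast
```

Nothing about the filtering itself is exotic; the objection raised in the article is that corrections like this were applied by hand and flagged only in code comments rather than documented openly.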

While much of the attention has gone to CRU director Phil Jones and his messages about “hiding the decline,” McCullagh focuses more attention on this:

I am seriously worried that our flagship gridded data product is produced by Delaunay triangulation – apparently linear as well. As far as I can see, this renders the station counts totally meaningless. It also means that we cannot say exactly how the gridded data is arrived at from a statistical perspective – since we’re using an off-the-shelf product that isn’t documented sufficiently to say that. Why this wasn’t coded up in Fortran I don’t know – time pressures perhaps? Was too much effort expended on homogenisation, that there wasn’t enough time to write a gridding procedure? Of course, it’s too late for me to fix it too. Meh.

I am very sorry to report that the rest of the databases seem to be in nearly as poor a state as Australia was. There are hundreds if not thousands of pairs of dummy stations, one with no WMO and one with, usually overlapping and with the same station name and very similar coordinates. I know it could be old and new stations, but why such large overlaps if that’s the case? Aarrggghhh! There truly is no end in sight… So, we can have a proper result, but only by including a load of garbage!

One thing that’s unsettling is that many of the assigned WMO codes for Canadian stations do not return any hits with a web search. Usually the country’s met office, or at least the Weather Underground, show up – but for these stations, nothing at all. Makes me wonder if these are long-discontinued, or were even invented somewhere other than Canada!

Knowing how long it takes to debug this suite – the experiment endeth here. The option (like all the anomdtb options) is totally undocumented so we’ll never know what we lost. 22. Right, time to stop pussyfooting around the niceties of Tim’s labyrinthine software suites – let’s have a go at producing CRU TS 3.0! since failing to do that will be the definitive failure of the entire project.

Ulp! I am seriously close to giving up, again. The history of this is so complex that I can’t get far enough into it before my head hurts and I have to stop. Each parameter has a tortuous history of manual and semi-automated interventions that I simply cannot just go back to early versions and run the update prog. I could be throwing away all kinds of corrections – to lat/lons, to WMOs (yes!), and more. So what the hell can I do about all these duplicate stations?…
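If the worry about "Delaunay triangulation" in the first paragraph of that excerpt is opaque: gridding by linear Delaunay triangulation means drawing triangles between station locations and filling each grid point by linear interpolation from the three stations at the corners of its triangle, so every gridded value depends on exactly three stations no matter how many sit nearby, which is why the writer says it renders the station counts meaningless. Here is a minimal, hypothetical sketch of that step, in Python with SciPy and made-up numbers rather than the CRU's undocumented off-the-shelf routine:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Hypothetical station data: (lon, lat) positions and a temperature anomaly at each.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
anomalies = np.array([0.2, 0.5, -0.1, 0.3, 0.4])

# LinearNDInterpolator builds a Delaunay triangulation of the station locations
# and interpolates linearly within each triangle.
interp = LinearNDInterpolator(stations, anomalies)

# Evaluate on a regular half-degree grid; points outside the triangulated
# region come back as NaN.
lon, lat = np.meshgrid(np.arange(0.0, 10.5, 0.5), np.arange(0.0, 10.5, 0.5))
gridded = interp(lon, lat)
```

Each value in `gridded` is a weighted average of just the three surrounding stations, and the triangulation itself records nothing about how many stations, or which ones, actually fed a given grid cell.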

If that looks to you like the CRU relied on unreliable data sets, bad computer models, and a desire to reach a predetermined conclusion rather than do actual science, you're not alone. McCullagh concludes:

The irony of this situation is that most of us expect science to be conducted in the open, without unpublished secret data, hidden agendas, and computer programs of dubious reliability. East Anglia’s Climatic Research Unit might have avoided this snafu by publicly disclosing as much as possible at every step of the way.

Most critics of AGW advocacy have never considered it science in the first place. This just shows that we were right; it's a religious belief, and its high priests apparently have few scruples about rigging the models and the data to show what they want, rather than pursuing science and the scientific method. Be sure to read all of Declan's lengthy and excellent article.

Now — when will CBS put this on their Evening News?  Or will they have to bump this for Katie Couric’s Poetry Corner?
